Today’s topic is one I’ve wrestled with a lot in the last couple of years. I think I’ve finally resolved it in a coherent way, but who knows? Think of this as a progress report, at least.

For some people well versed in evolutionary biology and psychology, simply asking the question in the title of this post may seem anomalous. Obviously human morality is universal: our moral sense is one of the things that distinguishes us as human. A lot of (metaphorical) ink has been spilt explaining how our moral faculty – a singular object – evolved. But the situation isn’t nearly as obvious as that. Indeed, while I may draw back from a strictly relativist position, I’m going to argue that evolutionary theory equally requires dismissing a simple universalist position on morality.

The first step is to clear away the underbrush of potential confusion. I’m not suggesting humans don’t all have a moral faculty. Even the sociopath and sadist have a morality; it’s just that the well-being of others isn’t especially highly valued in it. (Obviously, if you define morality as whatever a person happens to believe, then there’s no concern over relativism.) But humans, in general, have moralities that assign different value to different others, usually based on in- and out-group affiliation and kinship. The sociopath and sadist are just at the far end of the spectrum regarding the value placed on others in one’s morality. So, there’s no disputing that a moral faculty is part of the evolved mental hardware of our species. It’s the software that makes things complicated.

Consider a comparison: we all also possess an evolved language acquisition faculty. As long as we’re clinically normal humans growing up around other humans, in the first years of life we’ll learn to speak the language being spoken by those around us. Having a universal language acquisition faculty on its own predicts nothing about the language any individual will wind up speaking: that’s a historically and ecologically contingent fact. The morality faculty operates the same way: we all have the ability to formulate a morality. That ability is universal, or at least species-typical; the content of the morality, like that of language, is historically and ecologically contingent.

Part of where the content of morality comes from, of course, will be culture: like language, to some degree, we take it from the world around us. Obviously, though, this is not a complete answer: there are plenty of people, now and in the past, morally out of step with their cultural contemporaries. Others argue that such non-conformist morality (at least when the arguer approves of it) is the product of reason: free thinkers explore the world and life with rational inquiry and discover moral principles that transcend the accepted moral precepts of their society. There are a number of historical and philosophical reasons to doubt the plausibility of such an explanation. Furthermore, if morality were subject to reason, Jonathan Haidt and his colleagues wouldn’t have experimentally discovered the pervasive resort to what they call moral dumbfounding: the stubborn defence of one’s moral judgments even after every reason offered for them has been knocked down by the evidence. However, the point here is to take a biological realist approach to the issue, so my response will be restricted to evolutionary theory.

The rationalist approach assumes that human reason is a transcendent force that provides its users some unmediated access to the deep meaning of the universe. Evolutionary theory doesn’t support such a premise. All nervous systems, including the human brain, evolved to enhance the fitness of the organism in question. (And organisms can be treated as clustering around species categories.) Think of evolved vision. As used as we all are to thinking that, as we look around the world, we see things as they objectively are, this is manifestly untrue. There’s plenty of color in the world that we can’t see but that birds and reptiles can: infrared and ultraviolet. Bees can see polarized light, which allows them to identify the location of the sun on a cloudy day, so they can inform their hive-mates of the location of food. We cannot. A mantis shrimp sees more color not only than we do but even than birds (many of which already see twice as much color as we do), and each of the mantis shrimp’s two eyes, independently, has not bi- but trinocular vision. Marine mammals lost one of their color cones as they adapted to the visual conditions of underwater life. They see in what’s called black and white, though in fact it’s more like shades of blue. In all these cases, the fitness benefits of a particular way of seeing have channelled selective retention of visual features to match the ecological circumstances of the species (or, more accurately, the organisms of its various lineages).

What’s true of evolved seeing is true of evolved thinking. Evolved capacities for thinking will be informed by the fitness-benefiting traits that best adaptively manage the conditions of a lineage’s environment. It is true that a trait evolved for one purpose might be beneficially applied to another. There’s no reason to believe we have adaptations that evolved specifically for learning second languages, analyzing probabilities, or reading; yet cross-applying other adapted traits allows us, with some effort, to do these things. Nature has many such examples of what Stephen Jay Gould called exaptations: applying some trait in a fitness-enhancing way for which it was not originally selectively retained. This is a far cry, though, from regarding evolved human reason as some capacity, unconstrained by fitness, to uncover the deep structural meaning or lessons of the universe. Evolutionary theory predicts that our cognition “sees” as limited a part of the spectrum of the world as do our eyes. And, by the by, it doesn’t seem terribly likely the universe has any such deep meaning or lessons – aside from what we narratively ascribe to it.

So, we all have evolved the hardware of a moral faculty, but popular – rationalist and theist – explanations for the software, the content, of morality make little sense from an evolutionary perspective. A more evolutionarily consistent explanation comes from recognizing the importance of our sociality in human evolution. We are a strangely cooperative animal: no other cooperates with non-kin in such widespread and complex ways as we do. Yet our selfish genes require that we serve our own genetic interests. These interests are easily served in cooperation with kin, but the degree and scope of our non-kin cooperation require both the redirecting and the veiling of our competitive traits. Those traits are both turned outward, upon out-groups, and veiled in deception and self-deception when they are turned inward, within our own group.
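(For readers who like the gene-level bookkeeping made explicit: Hamilton’s rule, r × b > c, is the standard way biologists formalize why helping kin comes so cheaply to selfish genes, and why cooperation with strangers needs the extra machinery just described. The sketch below is purely illustrative; the function name and the numbers are my own assumptions, not anything from the sources discussed in this post.)

```python
# Illustrative sketch of Hamilton's rule (r * b > c): a costly act of help can
# still spread if the benefit to the recipient, discounted by genetic
# relatedness, exceeds the cost to the helper. Function name and numbers are
# assumptions chosen for illustration only.

def helping_is_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Return True when Hamilton's rule predicts selection favors the helpful act."""
    return relatedness * benefit > cost

# Helping a full sibling (r = 0.5): a cost of 1 pays off if the sibling gains more than 2.
print(helping_is_favored(relatedness=0.5, benefit=3.0, cost=1.0))  # True

# Helping a non-relative (r = 0): never favored by kin selection alone, which is
# why widespread non-kin cooperation needs the redirecting and veiling described above.
print(helping_is_favored(relatedness=0.0, benefit=3.0, cost=1.0))  # False
```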

Under these conditions, our fitness is served by legitimizing the view of out-groups as less virtuous, and by getting others – whether out-groups or members of our in-group – to behave in ways that benefit our own fitness. Though not always direct in its rhetoric, the attempt to induce such behavior is usually a little more explicit in the practice of politics. More surreptitious is the use of morality to gain the same benefits. Indeed, political positions are often rationalized by morality. For instance, Jason Weeden and Robert Kurzban, in their book The Hidden Agenda of the Political Mind, demonstrate that an individual’s evolved, phenotypic sexual strategy – as revealed in mating history – reliably predicts that individual’s position on one of the most morally charged political issues of our time: abortion.

Just as the specific conditions of our historical and ecological context determine the software of our language acquisition faculty, so too do such conditions determine the software of our morality faculty. The morality we hold is essentially a description of how our fitness would be most benefited by the conduct of others. Obviously this doesn’t rule out large areas of overlap between us. Since most people’s fitness would not be served by being murdered, a moral prohibition against murder is common – at least among the moralizer’s in-group. (Such prohibitions are not so widely applied, though, by those mentioned above who place lower value on others: e.g., sociopaths and sadists.) But even among those who agree on a certain core moral consensus, areas of disagreement very quickly give rise to strongly held opposing moral stances. Is it moral to discriminate on phenotypic features? Is it moral to use coercive mechanisms to equalize wealth? Is it moral to suppress certain kinds of speech or consensual relationships? And, if so, which ones, how, and why? When is it, or is it not, moral to use violence? These examples, which just scratch the surface, are obviously moral disagreements that divide people all over the world and sometimes over backyard fences. And evolutionary theory predicts that, given the fitness-enhancing character of our cognitive adaptations, such moral positions – however unconsciously held and implemented – will be in the interest of the individual staking them out. After all, our huge and expensive brains, which allow us to do remarkable cognitive stuff like moralize, would not be cost-effective in fitness terms if the processes they run produced random results.

So, then, while our evolved moral capacity, the hardware, is universal (or at least species-typical), the moral software – the actual moral content running on any individual’s moral hardware – is generated by the fitness interests of the specific moralizer. The content of morality is derived not from universal principles but from our own self-interest. Moralizing is the instrumental, self-serving action of fitness-optimizing organisms. Does that mean the answer to our title question is yes, evolutionary theory does entail moral relativism? Since everyone’s morality is composed of the content that uniquely benefits them, all morality, however much it differs from one individual to the next, is equally valid insofar as it serves the fitness interests of the moralizer. Moral validity is relative to the fitness of the moralizer. That sure sounds like moral relativism.

And yet, it’s not quite that simple. If the capacity to moralize is universal, might there be something universal yet to discover in human morality? There is, but that something lies not in the content of morality but in its function. Consider this: no less than morality, love too is an evolved cognitive adaptation that improves our fitness. As with language and morality, we have an evolved capacity to love; but we don’t love just anybody. From the gene’s-eye view of the Modern Synthesis in evolution, we love those who, if treated lovingly by us, help advance our fitness. We love our mates as signals of trustworthy reproductive partnership, so as to achieve the best commingling of genes we can get; we love our children so as to ensure we provide them the care needed to carry our genes into subsequent generations. From an evolutionary perspective, love is just as instrumental and self-serving as is morality. Having realized this, even were it possible, should we then stop loving our children?

Whether addressed from a genotype or phenotype perspective, the answer to this question seems to me obviously to be no, we shouldn’t. If one could simply turn off one’s emotions and decide dispassionately, ceasing to love would greatly hinder the survival of one’s genetic lineage. Unless you have some strong reason to believe your line should go extinct, ceasing to love would be an evolutionary disaster. Most of us, though, could never even get to such an analysis: so profound are the emotional bonds and gratifications of love. Nor does understanding love’s instrumental evolutionary function diminish the intensity of the emotional pleasure we take in it.

If we could stop loving, doing so would be genetic suicide and would impoverish our subjective emotional lives. However, we can’t stop: we’ve evolved to love our children and our prospective mates. I’ve come to regard morality in pretty much the same way. Morality is self-serving, and the evolutionarily informed understand that the depth of their moral convictions in no way renders those convictions transcendent or objectively superior to those of anyone else. In that way, our morality may be relative. However, we’ve all evolved to engage in such morality – doing so is in the nature of our species. To stop moralizing, however self-interested moralizing may be, would be to abandon our humanity no less than if we were to stop loving our children.

Does recognizing the relativistic and self-serving nature of our moral software ensure a descent into a world of moral “might makes right”, where the most belligerent moral bully wins? Maybe, but the invention of the scientific method shows that we need not be slaves to our moral interests; there is some hope that, at least some of the time, rational dialogue might help find workable compromise. However, the fruits of that same scientific method reveal that rationality mostly has very little to do with forming people’s moral and political commitments and works best at rationalizing decisions and actions based on those commitments: e.g., the moral dumbfounding discussed above. I do maintain a small, flickering hope that we might yet be able to agree to forsake coercion and violence in the name of morality and allow the multitude of small experiments that is evolution to take its course: let the fittest experiments-in-living thrive. Yet self-awareness requires me to acknowledge that that too is a moral position, one which serves my own fitness interests and can be rejected by the equally self-serving morality of others.

In the final analysis, when confronted with aggressive, “irrational” moral certitude, an “enlightened,” educated shunting aside of your own morality as merely relative, dismissed with embarrassment as some regressive primordial impulse, is a betrayal of your lineage, an abdication of kin responsibility, and a forsaking of your evolutionary heritage. And, if you believe in “the good of the species,” you’ve denied your species the vital input of the adapted mechanisms that allow evolution to produce the species’ best available evolutionary future.

My conclusion is that it is best to cultivate a dual outlook on questions of morality. (A conclusion unsurprising to students of Richard Alexander, who appreciate the interwoven nature of cooperation and competition in human sociality.) On the one hand, I want to be open to those who support free speech and scientific inquiry, bringing the lessons of science into civil life; on the other, I must be prepared to go to war – figuratively and even literally – with those whose morality disavows such commitments, however phenotypically legitimate that disavowal may be. The combat of moralities may not be pretty, but neither is evolution; it’s largely about death and extinction. Whether evolutionary theory leads you to see yourself as an entirely (genetically) unique vector in the flux of biota or as a cog in the great machine of evolution (in fact, both are true), forsaking your phenotypic morality, no less than forsaking love, deprives the system of valuable input and leaves your present and potential loved ones exposed to destruction. It is to prevent that outcome that we’ve evolved to be moral animals.

No one has any idea what evolution will ultimately favour, but I do know that evolution has endowed me with this phenotype, these kin, and these moral commitments: it would be a denial of my evolutionary life purpose to treat my morality as disposable.