Transcending Human Values

A central thread of human history has been the self-transformation of our species. For better and for worse, the growth of our knowledge and of our ability to apply it has led to almost continuous alterations in the conditions of human life. And these changes in our circumstances have changed us too, in ways it is platitudinous to enumerate. We are different from medieval people in our understanding of how the world works. Consequently, we are different from them in our health and expectation of life, in our ability to travel, in the comforts and conveniences of life, and in our ability to use our minds and technology in turn to increase the knowledge and understanding on which all this depends. Less happily, our dark side (the side that leads to torture and cruelty, genocides and war) has also had its destructive powers hugely enhanced, to the point where our survival as a species is in question. Also less happily, our technology threatens our survival through its potential to make the earth uninhabitable. All these changes, and our knowledge of them, mean that our minds too (our beliefs, values and concerns) are in many ways utterly different from those of people in the Middle Ages.

These platitudes of transformation are countered by the platitudes of continuity. We recognize the people depicted in the Bayeux tapestry (with the strength of their ambition to overcome their rivals and to rule, with their use of duplicity and war in the contest) as potential players in the politics of our own time. The bits of the tapestry depicting ecclesiastical sex scandals are familiar too. To a twenty-first-century eye, they come somewhere between graffiti on a lavatory wall and items in a tabloid newspaper. This second set of platitudes suggests that human nature never changes.

Of course there is truth in both pictures. People change in some ways and stay the same in others. This raises a problem for the contemporary debate on whether we should use biotechnology to transcend the limits of human nature. The debate often assumes that underneath the historical changes, there is a central core of human nature, so far unaltered, but which genetics and neuroscience may give us the power to change. The assumption is plausible yet hard to pin down. If this central core exists, to establish its boundaries will take a lot of empirical and conceptual work. Until that work has been done, a key part of the debate about transcending our nature remains out of focus. But here I will put up with that blur and just assume that we have some intuitive idea of what transcending human nature might involve.

Proposals for such “transhumanist” projects are usually about changes that may help people live lives more in accord with desires and hopes that are already part of our stock of “human values”. Perhaps, if we were changed in various ways, we would more closely approximate to ideals of the good life we already have. We might be kinder, more generous, more intelligent, more imaginative or more creative. Perhaps a modified version of human nature would make it easier to eliminate some of the wars and other killing, the torture and other cruelties, that disfigure our human world.

What makes these proposals appealing and at the same time frightening is that their promises and their dangers are both so great. The Nazis used primitive technology in the service of improving the gene pool, a project guided by equally primitive values. Brave New World suggests that more benevolent values combined with more sophisticated technology can also create a nightmare. These dangers show there is something to be said for sticking with the free-range version of people. But, against that, the history of humanly created mass death and mass misery shows there is something to be said against it too. It is not easy to balance these considerations against each other.

But the balancing is a matter of judging the alternatives in the light of our human values. Perhaps even deeper problems are raised by a more radical kind of proposal for transcending human nature. Could some of our values themselves be in need of change? Is there some external perspective from which “human values” can be seen to be limited or distorted? If so, as well as there being a case for changing our nature to fit our values, there could be a case for changing our values themselves.

The question I want to ask here is whether this more radical proposal is an intelligible one. Could there be reasons that we could accept for “transcending human values”?



In some ways, people’s values obviously differ. On abortion, on war, on how to respond to terrorism, on how far to tolerate intolerant political or religious views, intelligent and thoughtful people disagree deeply. Two people may agree on the logic of the arguments, but have utterly conflicting intuitions about premises and conclusions. They may agree that avoiding a massive terrorist outrage could, in principle, and in some circumstances, require torturing a terrorist to gain information. One says that using torture is unthinkable and so we might have to accept a preventable terrorist attack. The other says that allowing so much loss of life is totally unacceptable and so we might have to accept the use of torture.

The same conflict may take place within a single person. Any of us may at one time be torn between the different views, or at different times oscillate between them. Many of us have intuitions that do not add up to an orderly and coherent system.

Some of the major ethical theories attempt to sort out this chaos by giving criteria for the rightness of actions and so, indirectly, for judging the reliability of intuitions. The principle of utility and Kant’s Categorical Imperative are obvious examples. The utilitarian can judge moral intuitions about a particular case to be unreliable prejudices where they conflict with what the principle of utility says about the case. The Kantian can judge that they are mere subjective emotions, to be overridden by reason in the form of the Categorical Imperative where it conflicts with them.

As is well known, both utilitarianism and the Categorical Imperative come into conflict with intuitions deeply embedded in us. Utilitarianism seems to allow ruthless trampling over the interests and even lives of some people if this is necessary to secure a greater good for a larger number of others. Kantianism seems absolutely to exclude telling a lie, even where this is necessary to save someone’s life. And both make it problematic how far we should act on our natural affections. Each of these theories suggests that, in a life-threatening emergency, the decision to put saving our own children before saving a major public philanthropist needs first to be cleared by a test based on utility or on universality.

In denying or making marginal so much of what people value, both theories can seem like moralities for Martians rather than for human beings. In the purest forms of the two doctrines, moral intuitions either have no weight or are extremely subordinate to the theory. Sensitive to the charge of inhumanity, both kinds of theorists stretch and squeeze the doctrines to bring them closer to our intuitions: the heavens won’t really fall if justice is done, or we need to remember the long-term utility of encouraging natural affections. These efforts of theorists to make their principles more intuitively acceptable are a tacit acknowledgement of the need to align theory with things people actually care about.

Kant has a well-known argument against a morality based purely on religious authority. Even if the Divine message is brought to us, we still have to use our own judgement in recognising that it does come from God, and so we cannot escape moral autonomy. “Even the Holy One of the gospel must first be compared with our ideal of moral perfection before we can recognize him to be such.” (Kant, Groundwork, p. 29.) If someone we at first took to be God told us to torture children, we would assess this in the light of our own values and decide that this person was not after all God or even a Divine messenger.

A parallel argument can be used against moral principles that we find too much at odds with deeply rooted moral intuitions. “If the Categorical Imperative tells me to let someone be killed rather than lie to their attacker, then there must be something wrong with it.”


All this suggests that a morality with only shallow roots in our intuitions is in trouble. This conclusion fits with the well-known criticism made by Bernard Williams of utilitarian dismissal of anti-utilitarian intuitions as mere squeamishness to be overcome: we “cannot regard our moral feelings merely as objects of utilitarian value. Because our moral relation to the world is partly given by such feelings, and by a sense of what we can or cannot “live with”, to come to regard those feelings from a purely utilitarian point of view, that is to say, as happenings outside one’s moral self, is to lose a sense of one’s moral identity; to lose, in the most literal way, one’s integrity”. (Bernard Williams, “A Critique of Utilitarianism”, in J. J. C. Smart and Bernard Williams, Utilitarianism: For and Against, pp. 103–104.)

But an uncritical acceptance of moral intuition is also in trouble. If two people have different intuitions, this may just reflect the different kinds of upbringing they had. Why should we give so much moral authority to the people who had such power to shape our emotional responses in infancy and childhood?

And deeply ingrained intuitions often seem to be arbitrary. As Peter Unger and others have shown, our intuitive responses to an ethical dilemma often vary when the described situation is changed in ways that critical examination suggests may be of no moral importance. To most people, failing to respond to a charitable appeal for money to save lives may seem open to some criticism, but does not feel very bad. But refusing to drive someone to hospital because the blood from his wound will spoil the upholstery of your car, knowing that he will as a result probably lose a leg, feels much worse. (Peter Unger, Living High and Letting Die, pp. 24–25.)

And some of these arbitrary variations in moral intuitions may be the result of genetic programming reflecting our evolutionary past. Confronted with the “trolley problem”, many people intuitively feel that diverting a trolley from a track where it will kill five people onto a track where it will kill one person is a justifiable choice of the lesser evil. But very few people find it intuitively acceptable to save the five by pushing one person off a bridge so that the trolley kills him but is stopped. The different intuitions about the two cases could result from a genetically programmed inhibition against physical assaults against people close enough to hit back. Perhaps in the Stone Age, this inhibition helped gene survival. But a parallel inhibition against assaults at a distance might not have developed, either because they were rarely possible or because there was less danger of retaliation.

Of course such an account is speculative, but if our responses were shaped by gene survival in the Stone Age, are they a good basis for morality? Richard Dawkins, who is not usually accused of downplaying the importance of genetic programming, thinks that in morality we may be able to improve on it. He hopes that we have the capacity for disinterested altruism, through having an understanding that allows us to overrule our selfish genes: “we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators”. (Richard Dawkins, The Selfish Gene, p. 215.) Even if a response is useful to gene survival now (as caring more for our compatriots than for people physically or in other ways different from us may be), nothing follows about its moral acceptability. But usefulness to gene survival in the different circumstances of the remote past seems a particularly poor candidate as a reason for conferring moral authority on an intuition.

Our intuitions when examined seem often arbitrary, and their origins (whether in our family or in our evolutionary past) do not provide overwhelming credentials. Critical thought about them is clearly needed. In morality too, “intuitions without concepts are blind”.


It seems that morality cannot live without intuitions but cannot live entirely by them either. If there are good reasons not to exclude them and good reasons not to accept them uncritically, the Rawlsian approach of seeking “reflective equilibrium” seems to win by default. Intuitions should be probed and criticized, partly on the basis of moral principles that seem plausible. We should also be willing to test proposed moral principles, partly on the basis of how far they accommodate intuitions that are deep-rooted and which seem robust in the face of criticism. The Rawlsian hope for each of us is that, with a bit of accommodation on both sides, most of the moral principles that appear plausible will combine with most robust intuitions in a single reflective equilibrium. And, with long discussion and a huge amount of luck, this “narrow reflective equilibrium” at the individual level might even mutate into a “wide reflective equilibrium” shared across a society, or even perhaps across the human race.

There is a degree of optimism even in the idea of narrow reflective equilibrium. Even people who have been thinking about ethics for most of their lives change their minds from time to time and have tensions between values that remain unresolved. And there is obviously even greater optimism in the idea of a reflective equilibrium everyone comes to share. Perhaps all moral differences will one day fade away. But, equally (at least!), perhaps they will not.

However, as a thought experiment, let us suppose that the most optimistic view turns out to be true, and that a human consensus emerges. This consensus will then be the “human values” we are wondering about transcending. If, instead, there are irreducibly alternative versions, the discussion can still take place in terms of transcending some or all of a plurality. But, to avoid cumbersome formulations, we will imagine a human consensus, and simply bear in mind the necessary adjustments to the discussion needed to accommodate any irremovable plurality.

If we were finally to reach a species-wide agreement on basic questions of value, what possible reasons could there be to consider “transcending” the set of values that emerged?


If the human species were to reach a stable wide equilibrium about values, it might seem that there could be no reason to “transcend” it. We would have found the harmonious set of values whose possibility Isaiah Berlin consistently denied. Liberty and equality might still conflict with each other in particular cases, but there would be agreement on their relative weight. So the deepest disagreements we now have, either within a single person or between different people or groups, would have been eliminated. It seems like a version of “the end of history”, at least for ethics. Through accommodating all the robust intuitions and all the robust principles, the consensus seems to have eliminated any possible basis for a challenge from within. So it may seem that any challenge to this value utopia would have to come from outside. What can “outside” mean here?


One version of a challenge from outside could be from intelligent aliens. I will discuss this challenge as a thought experiment, but, as is the way with thought experiments, one day it may become real. (Our galaxy may contain a hundred billion stars. In 1999, the Hubble telescope gave a figure of a hundred and twenty-five billion galaxies in the universe, but improved detection now suggests perhaps twice as many. It is not absurd to think there may be twenty-five thousand billion billion stars. So far we have discovered over a hundred stars with planetary systems, and it is thought that such systems may be fairly common. (REFERENCES.) Even if only one in a million stars has planets, and only one in a million of those planetary systems includes a planet capable of supporting life, this still suggests a figure of twenty-five thousand million planets where life could emerge. Obviously there is huge arbitrariness in all these estimates. But if the number of potentially life-supporting planets is anywhere near the thousands of millions, the claim that alien life is unlikely may not be the obvious default position.)

If aliens do exist and we develop contact with them, how might the encounter go? A lot will depend on how powerful relative to each other they and we are. Or it may depend on (what may come to the same thing) how intelligent relative to each other they and we are. If there is a large disparity between how advanced they and we are, the relationship could be one of exploitation and domination, on the “colonial” model. We know the human capacity for colonial domination. And, if they are more advanced, we may discover their capacity for the same behaviour. Or, given that we are talking about different species, there is the even more discouraging model coming from the history of our treatment of animals, mainly as food or as domesticated slaves. If they are in this position of power over us, we can certainly imagine value changes being imposed on us, by such techniques as conditioning, selective breeding or some other form of genetic intervention. But that kind of non-rational, coercive value change is not of interest here.

The relevant thought experiment is one where we and aliens encounter each other as intellectual equals, and with no other imbalance of power. We and they have roughly the same scientific picture of how the universe works. It may be that we can fill in some gaps in their picture and they can fill in some gaps in ours. This just underlines how much our pictures agree. But some at least of their values are different from ours. The issue is about how far we and they could discuss this, and how far we could be persuaded by reasons to change our values.


Could their values be utterly different from ours? Or must there be some shared values to enable them to communicate with us? Wittgenstein and Quine in different ways have argued that mutual intelligibility depends on some shared beliefs. We get at what other people believe partly on the basis of what we take them to mean by what they say. And we get at what their words mean partly by hypotheses about what they believe, which are likely to be influenced by what we ourselves believe. What we think they believe gives us an idea of what they may be likely to say and so influences our interpretation of their words.

Wittgenstein and Quine, in elaborating on this circle of belief and meaning, both concentrate on factual beliefs. But something similar may hold for values. Suppose the aliens’ behaviour suggests that they are enthusiastic about falling snow. (They behave in ways that we have in other contexts interpreted as showing pleasure; they show every sign of wanting to see it, going out of their way to do so, etc.) Among various other hypotheses, we may wonder whether they attach some religious significance to snow, or whether it gives them aesthetic pleasure. It is going to be much easier to find the “aesthetic” account plausible if their response is also triggered by sunsets, rich and interesting views, starry night skies: by things that evoke aesthetic responses in us. Even then, the response could be religious, or could be some emotion we do not have at all. But, if there is no overlap between the triggers of their aesthetic responses and ours, recognizing theirs is going to be either impossible or at least extremely difficult.

This suggests that the interpretation of the meaning of the alien evaluations is interwoven with their content in a way parallel to that which holds for the meaning and content of factual beliefs. So let us make things easier in two ways. Assume they and we understand at least most of each other’s languages. And assume a degree of overlap between their evaluations and ours. But there are also some important differences in values, and these lead them to suggest that we should transcend our purely human values (the ones not shared with them) in order to adopt theirs. How far will we be able to understand the values we do not share?

Half a century ago, Elizabeth Anscombe raised the issue of a person whose wanting something we might find unintelligible. A man says he wants a saucer of mud. “He is likely to be asked what for; to which let him reply that he does not want it for anything, he just wants it.” Unless the other person thinks the man is either making some philosophical point or is “a dull babbling loon”, he will want to ask the man more: “would he not try to find out in what aspect the object desired is desirable? Does it serve as a symbol? Is there something delightful about it? Does the man want to have something to call his own, and no more? Now if the reply is: “Philosophers have taught that anything can be an object of desire; so there can be no need for me to characterise these objects as somehow desirable; it merely so happens that I want them”, then this is fair nonsense.” (G. E. M. Anscombe, Intention, 1958, p. 70.)

It seems right that, for such a strange desire to be intelligible to us, some account needs to be given of what it is about the object that makes it desirable. The passage brings out the way in which some accounts and not others give an intelligible explanation. If the offered reply was, “I want a saucer of mud because it is brown, squelchy stuff in a shallow, round container”, this would not explain. It would just generate the further question, “Why do you want that?” But the explanations proposed in the passage do make it intelligible. The person who wants a saucer of mud to use at the climate change demonstration as a symbol of the destruction of agricultural land, or someone who finds the colour and texture of mud delightful, or the person desperate to own something, can all be understood. Wanting something, having a high opinion of something, or caring about something, are intelligible attitudes when they hook on to the framework of concerns we already understand: such concerns as political campaigning, aesthetic appreciation or ownership.

When we come to understand values, tastes and desires that first seem strange or weird, it is by hooking them up in this way to human concerns we do recognize. When first heard of, some of the sexual tastes of other people, or some of their tastes in food, may strike us as bizarre. So may willingness to accept martyrdom. But we come to understand all these things because we accept “it gives them sexual/gastronomic pleasure” or “they believe God may require them to die for their religion” as intelligible accounts.

The same goes for other people’s phobias that may first seem bizarre. Alan Hollinghurst describes preparations for the philosopher Richard Wollheim coming to dinner. “Every scrap of newspaper had to be either thrown away or thoroughly concealed (not just tucked away findably under a cushion): the mere sight of newsprint would make it impossible for him to eat his dinner.” (Alan Hollinghurst, review of Richard Wollheim, Germs: A Memoir of Childhood, The Guardian, September 18, 2004.) Richard Wollheim bravely discussed this in his memoir and traced it back to a childhood experience. But even without a Freudian-type account, we can to some extent imagine even highly idiosyncratic phobias from the inside because we know what it is to be horrified, frightened or disgusted.

Perhaps there are hundreds of these intelligible attitudes, independent of each other. Or, perhaps, as some Aristotelians believe, there is some fairly short and manageable list of ingredients of the good human life, to which all these intelligible concerns can be reduced: safety, shelter, health, love, friendship, sex, religion, sport, art, recognition, and so on. Either way, suppose the map of the framework of humanly intelligible interests, values and concerns has been drawn up. (This is no more radical than reaching a stable wide reflective equilibrium in ethics, and may well be part of it.) The key point that Elizabeth Anscombe brought out in her discussion is that intelligible attitudes, values and desires have to hook up to this basic human framework. Her point has great plausibility within our species. But what happens in our encounter with aliens when a “saucer of mud” moment occurs?


Suppose the aliens have a pattern of behaviour that resembles that of lemmings. On a certain day of the year, they gather together in huge crowds and rush over a cliff. As a result about half of them die. We ask them why they do this and they say it is the deepest experience of a lifetime: an ecstasy which gives life its central meaning and which makes the high risk of death seem insignificant by comparison.

We might conjecture an explanation of why this behaviour might have come about: perhaps a Darwinian one about the benefits for gene survival of producing more offspring than the environment can support and then weeding out the weaker half by the rush over the cliff. But what they say does not succeed in making us understand from the inside. We do not have any grasp of what is so good about the experience or how it gives meaning to their lives. We try to understand by seeing if we can link it to our human framework of values. Is it in some way like sport, or sex, or religion, or military parades, or a music festival? But the aliens happen to know about these and they assure us that it is not at all like any of them.

It seems that the most we can say is that we know what it is for an experience to be ecstatic. But the values that make it ecstatic are obscure. In this respect, they and we do not have enough of a shared framework. It is hard to see that they could give us a reason to participate ourselves in the rush over the cliff, other than the very minimal one that it may be interesting to experience anything at all that is new. Without re-designing our brains so that we became like the aliens, there is no reason to think we would experience anything wonderful. They could give us a reason for not destroying the cliff: that it matters to them. But, without enough shared framework, the values that make it matter remain impenetrable.

Karl Popper claimed that a fruitful dialogue in which both sides can learn from each other is possible in the absence of a shared framework. (Karl Popper, The Myth of the Framework.) He argued that, while agreement is easier if there are shared assumptions and values, we are likely to learn more from the more radical challenge posed by people of extremely different outlook.

He might have seen the encounter with aliens who have very different values as part of the historical process by which contact with other cultures opens our minds: “The Greeks started for us that great revolution which, it seems, is still in its beginning – the transition from the closed to the open society… Perhaps the most powerful cause of the breakdown of the closed society was the development of sea-communications and commerce. Close contact with other tribes is liable to undermine the feeling of necessity with which tribal institutions are viewed.” (Karl Popper, The Open Society and Its Enemies, Vol. 1, Ch. 10.) In Popper’s account, Western civilization had its origin in a clash between frameworks. When the Greeks met the Egyptian and Persian civilizations, the contrasting outlooks made them aware of the fallibility of local beliefs. As a result, Popper believed, they stopped teaching beliefs as dogma and developed the method of exposing them to critical discussion.

I do not know enough about ancient Greece to assess Popper’s specific claim. But the response he describes is recognisable as one (sadly, only one) response to the encounter between different systems of belief in the current stage of globalization. There is no serious doubt that huge numbers of people have beliefs and values that have been changed and shaped partly by challenges posed by contact with other cultures with other ways of seeing things. It is cheering to think that, even if aliens have very different beliefs and values, there is still the possibility of discussion, perhaps even with important benefits.

But it is worth noting that Popper defines a framework in a way that does not raise deep issues of intelligibility: “I mean by “framework” here a set of basic assumptions, or fundamental principles – that is to say, an intellectual framework”. (The Myth of the Framework, p. 35.) It is clear that he has in mind such debates as those between a Darwinian and a Creationist, between a Marxist and a liberal, or between supporters of rival views on abortion. In other words, they need not have difficulties in understanding each other’s meaning: they share the range of background beliefs that make this possible. The basic assumptions and fundamental principles of their different frameworks are about such issues as religion, evolution, politics or the onset of the right to life. Behind these disagreements, if they are to understand each other, will be a whole set of broadly agreed “commonsense” beliefs about what the world is like, what words mean, and even to some extent about evidence and argument. The fruitful dialogue takes place in the context of framework disagreements that are only local rather than global.


So, fruitful (or even intelligible) dialogue with aliens about the values we do not share will depend on how much framework we do share. Fruitfulness is likely to be a matter of degree. How far might we get? Might they persuade us to come closer to their value system? (Of course, we might persuade them. But it is simpler to stick to them persuading us. The question of this paper is about transcending human values.)

They might persuade us that it was our loss not to have the positive side of the rush over the cliff. We might regard fifty per cent mortality as a very high price, too high a price. But the analogy could be with music and deafness. Many congenitally deaf people, while not knowing exactly what they are missing, are willing to accept that not having music is a loss. Whether the aliens could persuade us would depend greatly on the degree of our overlap with them in values. If we shared very little with them, we might not trust their judgement. But if we shared a lot we might have sufficient rapport to be convinced. (Think of how, when someone tells us we must go and see a particular film, or read a particular book, we react differently according to the degree of affinity we feel with the person.)

More generally, the aliens might be able to show us that some of our values were limited. They might be able to show us that, when the Parliament of the Planets meets, all the other alien species, from many different galaxies, share a common deep structure of values. Ours might be a limited case of this structure, perhaps one distorted by rather special features of life on earth and of our evolutionary history. It would be open to us to accept this account and still to claim that our values are an improvement on the shared deep structure of the other species. Again, a lot would depend on whether we had enough shared framework to generate a real dialogue rather than just assertion and counter-assertion. But, if the shared framework was extensive enough, it seems conceivable that we might start to see our own human values as limited and parochial. In doing so, we would start to transcend them.

The encounter with aliens is a useful thought experiment to bring out both the possibilities and limitations of transcending human values. We might be made to see our values from another perspective and so become open to new critical thoughts about them and to their possible modification. There is a very large limitation on this process. At each stage, the alternative point of view has to be made intelligible to us. This is not just a matter of our understanding that aliens do value the rush over the cliff or whatever it is. It means that we have to have some grasp of why they value it: what they see in it. And this means that, to some degree, it has to be able to be mapped on to our value system. This links to a point that often strikes readers of Nietzsche. He talks about the revaluation of values, but this revaluation has to be itself undertaken from some evaluative standpoint. And, for us to have any reason to adopt it, this standpoint has to be part of our existing framework of values. (Neurath’s image of rebuilding the boat at sea, applied to values: perhaps our value system needs rebuilding, but at any one time we have to keep enough of the boat afloat to reconstruct other parts.)

So the encounter with aliens does not suggest the possibility of total revolution, but rather a gradual evolution of our values, deciding to adopt or reject proposed new attitudes in the light of values we already have. The encounter with aliens turns out to be like the encounter with other cultures on our own planet. It is a continuation of the process we already know. For that reason, the “end of history” version of the final, stable wide reflective equilibrium is a myth. One aspect of our human values is their openness to change. And we can expect this to continue whether or not we encounter aliens. In this way, debating whether or not to transcend human values is like debating “transhumanism”. For our species, the urge to transcend the way human nature has been up to now is itself part of our nature.