[Notes for a talk to transhumanists in Second Life, March 2008]
Ten Objections To Radical Mood-Enrichment
- ETHICAL Objection
- TECHNICAL Objection
- EXPERIENCE MACHINE Objection
- 'INAPPROPRIATE RESPONSES' Objection
- CHARACTER-SAPPING Objection
- STUCK-IN-A-RUT Objection
- SOCIALLY DISRUPTIVE Objection
- SELECTION PRESSURE Objection
- RISKS-OF-HASTE Objection
- CARBON CHAUVINISM Objection
Transhumanists are ambitious. We want unlimited lifespan, unlimited intelligence, unlimited computer power. But this doesn't mean that we're ambitious about everything, for example height. Perhaps we want to be a bit taller, and we want to ensure that e.g. midgets have the opportunity to reach "normal" stature. Yet even in Second Life, or in tomorrow's immersive virtual realities, we don't for the most part want to be a thousand metres tall - despite freedom from the constraints of gravity. Of course, there are some very exotic creatures in Second Life: they might say the rest of us have stunted imaginations. But intuitively, there is quite a narrow optimum for body height. Moreover, height may be regarded as what economists call a "positional good". It's socially advantageous to be slightly taller than average; but if everyone were to become taller, then no one would be better off.
What about happiness - which I'm here going to use as a lame piece of shorthand for emotional well-being in the very richest sense? Is happiness best regarded as an absolute good, or as a positional good, like height? Is there an optimal range of hedonic tone that we should all aspire to - both for ourselves and for other sentient beings - just as there is for human body-stature under Earth's gravitational regime? Perhaps the heritable "set-point" of our hedonic treadmill might be genetically raised a little, just as some of us may wish to be slightly taller. By the same token, perhaps victims of chronic low mood or anxiety disorders may benefit from gene-therapies or designer drugs so they can reach an idealised version of today's "normal" mental health - just as growth hormone can help the "abnormally" short.
There is a much more radical conception of well-being. Is happiness more akin to intelligence or lifespan, something that transhumanists should strive to enhance without limit - with the almost unimaginable implications that such an indefinite increase entails? The Transhumanist Declaration calls for the "well-being of all sentience". But well-being extends all the way from the barest contentment to peak experiences orders of magnitude more marvellous than unenriched humans can comprehend. Just how ambitious should we, as rational agents, aim to be in the scope of our reward-pathway enhancements - both for ourselves and for other life-forms? What is technically feasible? What are the potential pitfalls? Could anything go catastrophically wrong if we succeed? Should some state-spaces of sentience be placed perpetually off-limits as too wonderful even to explore?
This question won't be answered here. As it happens, I tentatively predict that superintelligent posthumans will be animated by gradients of bliss that are literally billions of times richer than anything biologically accessible today; but whether or not such blissful civilisations exist beyond extremely low density branches of the universal wave-function is pure conjecture. Instead, I want to raise ten objections to the indefinite amplification of well-being - and sketch out ten possible replies.
1) The ETHICAL objection
Even talking about posthuman psychological superhealth is morally frivolous. Debating levels of posthuman bliss is akin to mediaeval theologians discussing the different levels of the celestial hierarchy - all those angels, archangels, cherubim, seraphim, and the like. Back in the real world, there are billions of sentient beings, human and non-human, who suffer varying degrees of ill-being - sometimes extreme ill-being. There's no sense in dwelling obsessively on the unpleasant side of life; but even the healthiest and happiest among us are in mortal danger of ending our lives "sans teeth, sans eyes, sans taste, sans everything". Ensuring a minimum of well-being for all sentient creatures is an immense enough technical and ideological challenge as it is. On a more positive note, much can be accomplished via incremental progress. Thus the impending reproductive revolution of designer babies should lead to "unnatural" selection pressure against some of our nastier genes - allowing us to become smarter, happier, longer lived and, more controversially, perhaps nicer too. Critically for the well-being of all sentience, it's imperative that we stop killing and eating each other. If this utopian-sounding vision is to be realized, then cheap, palatable vatfood will need to replace flesh from factory-farmed non-human animals in our diets; perhaps biotechnology plus market economics will succeed where moral argument fails. But ultimately, ending the Darwinian holocaust and securing the well-being of all sentient life entails an engineering mega-project: genomic rewrites, nanorobotics, and ecosystem redesign penetrating the furthest recesses of the oceans. So why ask for more? If and when the abolitionist project is complete, runs this objection, then we will have discharged all our ethical obligations. Or at least, only after suffering has been abolished throughout the living world should we consider truly revolutionary interventions to enrich our emotional lives. 
Maybe the critic here is a neo-Buddhist, or a negative utilitarian, or perhaps an enlightened bioconservative who shares the desire to get rid of cruelty and [involuntary] suffering, but doesn't see any need to strive beyond its abolition.
I have a lot of sympathy with this objection. The moral urgency of using biotech to eradicate suffering should be carefully distinguished from speculative flights of fantasy about "paradise engineering" and so forth. Unless one is a strict classical utilitarian, the relief of suffering carries greater moral weight than the enhancement of well-being. So in that sense, the topic of this talk is comparatively unimportant - and arguably even morally trivial. However, it's hard to believe that there is anything inherently morally wrong with long-term planning. It's worth stressing that none of the things that transhumanists so ardently desire - unlimited lifespan, superintelligence, morphological freedom, novel sensory modalities and modes of consciousness, molecular nanotechnology, etc - will leave us significantly happier in the long run unless we also redesign and recalibrate our hedonic treadmill. If we opt to do so, then it seems arbitrary to "freeze" its genetic calibration on the absolute minimum settings necessary to abolish the substrates of suffering - or to "lock in" merely a modest increase in the upper range of hedonic tone beyond that bare minimum. Why such poverty of ambition?
Clearly, this isn't the place for a philosophical treatise on the nature of value. Yet one needn't be any kind of hedonist or classical utilitarian to recognize that there are intimate links between the creation of life-long emotional well-being and the creation of value. Provisionally, let's just make a weak but still fertile working assumption. Other things being equal, the most rewarding music, comedy, art, computer game, virtual reality software, personal relationship, etc, is more valuable than its less enjoyable counterpart. A world with ever more richly rewarding experiences is, other things being equal, preferable to comparatively emotionally impoverished worlds. Of course, as the critic will rightly insist, very often things aren't equal. We can all cite multiple counterexamples. But intuitively, it's departures from the default assumption that need justifying, not the default assumption itself.
Perhaps this response is a bit abstract. So for illustrative purposes, try to recall for a moment the most wonderful "peak experience" of your life. Imagine that its neuronal substrates could be identified, genetically enhanced, and conditionally activated at will. Assume, more controversially, that utopian neuroscience will be able to identify the complex molecular signatures of any valuable human experience and amplify their biological substrates. Will post-human experiences that seem millions of times more valuable than today's peak experiences really be millions of times more valuable? Or instead, as the moral nihilist claims, are value-judgements by their very nature truth-valueless? In other words, is this debate all just idle opinion, since the fact-value distinction is logically unbridgeable? Here I'll leave the question open; but if, provisionally, we may assume that some of our experiences are more valuable than the best experiences of, say, an earthworm, then one may wonder whether mature posthuman modes of sentience might not proportionately be more valuable than ours. So if value can be naturalised and biologically enhanced, then why not plan how to create a sustainable abundance of its molecular substrates by the most computationally effective means? Or at least, before passing judgement on posthuman well-being, let's first discover what we're missing.
2) The TECHNICAL objection(s)
It's intelligible to speak of becoming a thousand times taller - though the biomechanics might pose a problem. But does it even make sense to speak of becoming a thousand times happier - except as a rhetorical device? Can happiness sensibly be treated as a biological category at all? Is emotional well-being really a natural phenomenon that can be objectively measured and quantified? Do happiness and other desirable states of mind really have well-defined neurological substrates that can be selectively amplified indefinitely? Is there even a unidimensional pleasure-pain scale?
"Happiness" is indeed a crude label, evoking everything from the noblest triumphs of the human spirit to a nice day at the seaside. Identifying the molecular correlates of our emotional states in terms of receptor-density and neurotransmitter occupancy ratios, alternate splice variants, phosphorylated proteins, gene expression profiles, etc, is a daunting challenge for computational neuroscience. In future, our conceptual scheme for the emotions will need to be enriched along with our emotional repertoire itself. Eventually, some of our nastier emotions may be abolished: their fitness-enhancing computational role on the African savannah is now redundant. Others may be recalibrated: the posthuman analogue of boredom, for instance, needn't feel unpleasant to retain an analogous functional role; subjectively, its posthuman analogues need feel only comparatively less interesting than spellbound fascination. More speculatively, genes for novel core emotions may be spliced into the limbic pathways: our emotional palette may be genetically expanded. Whether a unidimensional pleasure-pain scale exists is controversial. In rats, at least, the ultimate "hedonic hotspots" are a cubic millimetre of tissue in the ventral pallidum and another comprising medium spiny neurons in the rostrodorsal region of the medial shell of the nucleus accumbens. But even if it transpires there is nothing akin to the final common pathway of reward in the human brain, such complexity wouldn't fundamentally change the technical feasibility of indefinite emotional growth. As brain-scanning technology becomes ever more sophisticated and finer-grained, we'll be able to identify the multiple neural correlates of well-being and selectively "overexpress" them in ways that transcend old-fashioned environmental tinkering.
More concretely, brainy "Doogie mice" with an extra copy of the NR2B subtype of NMDA receptor suffer from a chronically increased sensitivity to pain. That's a nasty example. Conversely, neuroscientists can in principle genetically splice in multiple extra copies of other subtypes of receptor e.g. the mu opioid receptor, implicated in hedonic tone. Gene therapy can already be used experimentally to multiply a thousandfold the number of opioid receptors expressed on the surfaces of nerve cells carrying pain signals back and forth between an arthritic joint and the spinal cord; the pain is banished. In future, nerve cell responsiveness to naturally occurring endogenous opioids can be increased via receptor enrichment in the brain too. In principle, we can modulate their lifelong "overexpression", intermittently heightened (or gently diminished) by whatever kinds of personal and environmental contingencies we judge fit. Both functionally and anatomically, our reward pathways can be made "bigger and better". But intelligent emotional self-mastery will involve re-engineering the mind-brain so we derive the most intense rewards from activities we deem most lastingly worthwhile: i.e. prioritising our higher-order desires over legacy first-order appetites. Natural selection has "encephalised" our emotions to benefit our genes. Rational agents can "re-encephalise" our emotions to benefit us.
Long-term, perhaps the big challenge technically will not be amplifying "reward" circuitry or genetically re-editing "punishment" circuitry per se. The real challenge ahead could be doing so in ways that are socially responsible, intellectually insightful, sustainably empathetic, preserve nurturing behaviour, avoid triggering psychosis or mania, and don't provoke adverse side-effects - either for the enriched individual or for society as a whole. These are severe constraints. For example, a problem with existing so-called antidepressants is not just that they are often ineffective and "dirty"; they can also trigger mania in the genetically susceptible instead of high-functioning well-being. [see also "Touched with Fire: Manic-Depressive Illness and the Artistic Temperament" (1993) by Kay Redfield Jamison] Becoming truly "better than well" entails not just an extended lifetime of feeling on-top-of-the-world, but retaining insight, intellectual acuity and social intelligence. In mania, critical judgement is lost.
I'm making a controversial assumption here. The traditional way to produce, say, aesthetic beauty is to create a painting or a sculpture that stirs a rewarding aesthetic response in one's audience. Hence the decorative arts. The advanced way to create awe-inspiring beauty is to use brain-scanning technology, identify the neural signature of aesthetic experience, purify its biomolecular essence and then amplify its substrates. Transcendentally beautiful experiences on-demand can then be selectively triggered far more potently than today - perhaps managed from a user-friendly interface as intuitive as your iPad, perhaps thought-activated, or perhaps stimulus-driven as now. Hence the claim that posthumans may have the innate capacity for aesthetic experiences that are billions of times more beautiful than anything accessible at present - possibly more so after the imbecilic constraints of the human birth-canal are overcome: artificial wombs are no more "unnatural" than artificial clothes. It's said that mystics and poets can "see the world in a grain of sand". In the future, why can't the rest of us raise our aesthetic default-settings so that our set-point of beauty-recognition fluctuates around a vastly higher baseline? Posthuman aesthetic appreciation (almost) certainly won't be uniform - an undiscriminating cosmic "wow". But on at least one family of scenarios, everyday posthuman life may consist entirely of gradients of the sublime.
Or to use another speculative example: the traditional route to spiritual experience is via meditational discipline and prayer. The futuristic route - if one thinks spirituality is a valuable dimension of experience - is to identify the neural substrates of spiritual experience, perhaps even the neural substrates of divine revelation and the experience of God, and then amplify them, stripping out the incidental junk and amplifying both their molecular essence and the metabolic pathways that regulate their expression. It should be technically feasible for our descendants to enjoy daily experiences of the divine billions of times more profound than anything physiologically possible today. This argument can also be used to rebut the charge that transhumanists are all soulless materialists oblivious of the richer dimensions of experience. Some of us do indeed inhabit a spiritual wasteland. But ironically, it's religious bioconservatives who prevent the godless from communing with the divine; and it's traditional mystics who prevent the rest of us from accessing the technologies of mystical experience.
Admittedly, this kind of neurological reductionism can easily smack of phrenology. A critic might scoff that one might as well speak of the brain having a "humour centre" - and "enhancing its biological substrates" too. Well, funnily enough, the brain does have a humour centre, not just functionally but anatomically. Crudely stimulating a region of the left basal temporal cortex induces an undiscriminating sense of everything being hilariously funny. But instead of the crude neurostimulation of undiscerning mirth, our descendants [or future selves?] may decide to recalibrate the default-setting of their native humour response. Today we describe some people as temperamentally humourless; other people are prone to see the funny side of life. Well, assuming that a keen sense of humour is valuable, what if we could reset our own propensity to find things funny? Is there an optimum humour-range for a given environment - low and infrequently expressed for brutish Darwinian life, modestly higher for posthumans? Or should the range of our sense of humour be ratcheted up indefinitely when conditions permit? For if we can identify the neural substrates of humour, then we can biologically enrich these substrates indefinitely too. In theory, given a post-human world without suffering, our descendants could relish humour billions of times more hilarious than anything possible now. The traditional route to comic genius has been to crack funnier jokes or write a comic masterpiece. The sophisticated posthuman route to cultivating a fantastic sense of humour is not (just) to be wittier; it's to amplify and enrich the neural substrates of amusement. This increase might seem a recipe for inanity. On the other hand, recall Wittgenstein's remark that good philosophical work could be written consisting entirely of jokes. In a Darwinian world full of suffering this prospect might seem obscene; tomorrow such a mind-set may be perfectly appropriate.
Okay, that's a whimsical example. Yet exactly the same reasoning holds for information-signalling gradients of bliss; and given even a weak version of the pleasure principle, the adoption of a motivational system based on gradients of bliss is more sociologically plausible than an enhanced propensity to find everything funny. Thus the archaic route to improving well-being has been through manipulating the external environment - tempered on occasion by incompetent alcohol abuse. Environmentalist utopias invariably run aground on human nature and the inhibitory feedback mechanisms of the hedonic treadmill. Their polar opposite is wireheading: direct stimulation of the reward centres. Wireheading is effective but indiscriminate. It's not an evolutionarily stable solution. The mature posthuman route to happiness will presumably continue to embrace environmental improvement; but an environment perceived or simulated through what kind of affective filters? Perhaps posthuman sensory input will be processed via an innately blissful medium of thought. Of course it's far harder technically to amplify gradients of complex "thick" social emotions than it is to amplify raw orgasmic bliss, or even spiritual raptures. Yet such amplification can be accomplished if so desired as our neuroscanning technology and gene-therapies improve. Technologies of sustained cerebral bliss are feasible in principle. The challenge is to use them wisely on a planetary scale and beyond. And unfortunately "wisdom" here isn't well-defined.
3) The 'EXPERIENCE MACHINE' objection
According to this objection, the prospect of "artificially" ratcheting up our hedonic set-point via biotech interventions just amounts to a version of Harvard University philosopher Robert Nozick's hypothetical Experience Machine. Recall the short section of Anarchy, State, and Utopia (1974) where Nozick purportedly refutes ethical hedonism by asking us to imagine a utopian machine that can induce experiences of anything at all in its users at will. A full-blown Experience Machine will presumably provide superauthenticity too: its users might even congratulate themselves on having opted to remain plugged into the real world - having wisely rejected the blandishments of Experience Machine evangelists and their escapist fantasies. At any rate, offered this hypothetical opportunity to see all our dreams come true, most of us wouldn't take it. Our rejection shows that we value far more than mere experiences. Sure, runs this objection, millennial neuroscience may be able to create experiences millions of times more wonderful than anything open to Darwinian minds. But so what? It's mind-independent facts in the real world that matter - and matter in some sense to us - not false happiness.
This Objection isn't fanciful. In future, technologies akin to Experience Machines will probably be technically feasible, perhaps combining immersive VR, neural nanobots and a rewiring of the pleasure centres. Such technologies may conceivably become widely available or even ubiquitous - though whether their global use could ever be sociologically and evolutionarily stable for a whole population is problematic. [If you do think Experience Machines may become ubiquitous, then you might wonder (shades of the Simulation Argument) whether statistically you're most probably plugged into one already. This hypothesis is more compelling if you're a life-loving optimist who thinks you're living in the best of all possible worlds than if you're a depressive Darwinian convinced you're living in the unspeakably squalid basement.]
However, feasible or otherwise, Experience Machines aren't the kind of hedonic engineering technology we're discussing here. Genetically recalibrating our hedonic treadmill at progressively more exalted settings needn't promote the growth of escapist fantasy worlds. Measured, incremental increase in normal hedonic tone can allow (post-)humans to engage with the world - and each other - no less intimately than before; and possibly more so. By contrast, it's contemporary social anxiety disorders and clinical depression that are associated with behavioural suppression and withdrawal. Other things being equal, a progressively happier population will also be more socially involved - with each other and with consensus reality. At present, it's notable that the happiest people tend to lead the fullest social lives; conversely, depressives tend to be lonely and socially isolated. Posthuman mental superhealth may indeed be inconceivably different from the world of the happiest beings alive today: meaning-saturated and vibrantly authentic to a degree we physiologically can't imagine. Yet this wonderful outcome won't be - or at least it needn't be - explicable because our descendants are escapists plugged into Experience Machines, but instead because posthuman life is intrinsically wonderful.
Perhaps. Yet the above response to the Experience Machine objection is itself simplistic, because for a whole range of phenomena, there is simply no mind-independent fact of the matter that could potentially justify Experience Machine-style objections - and deter the future use of Experience Machine-like technologies for fear of our losing touch with Reality. Compare, say, mathematical beauty with artistic beauty. If you are a mathematician, then you want not merely to experience the epiphany of solving an important equation or devising an elegant proof of a mathematical theorem. You also want that solution or proof to be really true in some deep platonic sense. But if you create, say, a sculpture or a painting, then its beauty (or conversely, its ugliness) is inescapably in the eye of the beholder; there is no mind-independent truth beyond the subjective response of one's audience. For an aesthete who longs to experience phenomenal beauty, there simply isn't any fact of the matter beyond the quality of experience itself. The beauty is no less real, and it certainly seems to be a fact of the world; but it is subjective. If so, then why not create the substrates of posthuman superbeauty rather than mere artistic prettiness?
There's also a sense in which our brains already are (dysfunctional) Experience Machines. Consider dreaming. Should one take drugs to suppress REM sleep because our dreams aren't true? Or when awake, should one's enjoyment of a beautiful sunset be dimmed by the knowledge that secondary properties like colour are mind-dependent? [Quantum theory suggests that classical macroscopic "primary" properties as normally conceived are mind-dependent too; but that's another story] If you had been born a monochromat who sees the world only in different shades of grey, then as a hard-nosed scientific rationalist, should you reject colour vision gene therapy on the grounds that phenomenal colours are fake - and grass isn't intrinsically green? No, by common consent visual experience enriches us, even if, strictly speaking, we are creating reality rather than simulating and/or perceiving it. Or to give another example: what if neural enhancement technologies could controllably modify our aesthetic filters so we could see 80-year-old women as sexier than 20-year-old women? Is this perception false or inauthentic? Intuitively, perhaps so. But actually, the perception is no more or less authentic than seeing 20-year-old women of prime reproductive potential as sexier. Evolution has biased our existing perceptual filters in ways that maximised the inclusive fitness of our genes in the ancestral environment; but in future, we can optimise the well-being of their bodily vehicles (i.e. us). Gradients of well-being billions of times richer than anything humans experience are neither more nor less genuine than the greenness of grass (or the allure of Marilyn Monroe). Could such states become as common as grass? Again, I suspect so; but speculation is cheap.
4) The 'INAPPROPRIATE RESPONSES' objection
Some critics are concerned that promoting superhappiness may lead to what one may call, informally, "inappropriate" behavioural responses. The scare quotes are necessary here because our sense of appropriateness is systematically biased by our evolutionary past. All our intuitions are tainted. But to give a concrete example of inappropriateness as commonly understood: suppose that you were to fall under the proverbial bus. Even if the accident didn't cause you to suffer, would you really want your friends to stay happy on hearing the news, despite your misfortune? Less dramatically, even as life gets better, we will presumably still make mistakes. There will be setbacks and disagreements, perhaps strenuous disagreements. Negative feedback is vital to preserving critical insight. Even if suffering as we understand it today is abolished, then something analogous to anxiety and discontent will surely be needed as the engine of progress?
A counterargument here is that even radically enriching hedonic tone can preserve a full range of negative feedback mechanisms. Optionally, our range of hedonic contrast can actually be increased - even if posthumanity's genetically-predetermined affective floor is set higher than today's affective ceiling. For most purposes, however, fine gradations and nuances of hedonic tone can presumably serve well enough. Enriched posthumans can still be informationally sensitive to good and bad stimuli even if our baseline hedonic set-point is elevated orders of magnitude beyond the contemporary norm. We can still experience the functional analogues of some of today's negative feelings even as the textures of consciousness get ever better.
Optionally as well, the greater part of our existing preference architecture can be preserved. If you prefer Beethoven to Brahms, or philosophy to pushpin, then enriching hedonic tone can still leave your preference architecture more-or-less intact. Hedonic contrast-ratios can in principle be conserved even if the scale itself is recalibrated. Now of course there are serious grounds for asking whether we really want to leave our existing preference architecture unchanged. After all, a lot of our core desires and preferences are quite unpleasant: they have been shaped by humanity's red-in-tooth-and-claw evolutionary history to allow selfish DNA to make more copies of itself. Perhaps a lot of our nastier preferences should be abolished, not just recalibrated. But "preference conservatism" is consistent with the hedonic enrichment technologies canvassed here - at least as a theoretical option. In practice, a mood-congruent conceptual revolution would (presumably) accompany global hedonic enrichment. Its nature and scope we can now scarcely imagine.
And what of mourning? Should grief be abolished before we have conquered death - a far more formidable biotechnological challenge than enriching subjective well-being? Well, if I were to fall under the proverbial bus, then I would indeed want, selfishly for sure, such an accident to diminish my friends' well-being. Otherwise I'd find it hard to conceive of them as friends. But if one truly values one's friends, then surely one wouldn't - surely one shouldn't - want them ever to suffer on one's account. A conditionally-activated reduction of their well-being, I'd argue, is the most one can appropriately ask for. If we're talking of "inappropriate" responses, then a prime candidate instead might be the Darwinian desire for others to suffer, including on occasion those we nominally "love".
More prosaically, one may hope that transhumanists will be careful crossing roads.
5) The CHARACTER-SAPPING objection
A fifth worry is that gradients of extreme well-being may be bad for our character. One thinks of partygoers who pursue a "hedonistic" lifestyle in the popular sense of the term, or drug addicts, or feckless parents who neglect their children. A futuristic example of character-deterioration might be variants of wireheading - perhaps in the guise of a neurochip that delivers undifferentiated bliss. In general, episodes of "unnaturally" extreme well-being tend to promote selfishness, egotism, impaired judgement, risk-taking, manic behaviour - and a lack of consideration for others. Surely, runs this objection, the future of life in the universe isn't foreshadowed by analogues of wireheading, heroin and crack cocaine?
Indeed not. A counterargument here is that true hedonic engineering, as distinct from mindless hedonism or reckless personal experimentation, can be profoundly good for our character. Character-building technologies can benefit utilitarians and non-utilitarians alike. Potentially, we can use a convergence of biotech, nanorobotics and information technology to gain control over our emotions and become better (post-)human beings, to cultivate the virtues, strength of character, decency, to become kinder, friendlier, more compassionate: to become the type of (post)human beings that we might aspire to be, but aren't, and biologically couldn't be, with the neural machinery of unenriched minds. Given our Darwinian biology, too many forms of admirable behaviour simply aren't rewarding enough for us to practise them consistently: our second-order desires to live better lives as better people are often feeble echoes of our baser passions. Too many forms of cerebral activity are less immediately rewarding, and require a greater capacity for delayed gratification, than their lowbrow counterparts. Likewise, many forms of altruistic behaviour - giving even a paltry 10% of one's income to Oxfam, for instance - are less rewarding than personal consumption. But in future it should be feasible to derive gradients of richly flavoured bliss from studying sixteen hours a day, or being angelically kind and "insanely" generous. Posthuman control of our emotions should allow us to amplify the character traits that we regard as admirable, overcoming the limitations of Darwinian minds in ways that environmental manipulation alone cannot match. In a superficial manner, Second Life allows us to assume the personae of the type of beings we'd ideally like to be; but future enrichment technologies can empower us to become ideal beings in our First Life incarnations too.
One worry about such a rosy scenario is worth noting. Will genetically-underwritten superhappiness rob us of the opportunity for personal growth, character-building struggles against adversity, and the chance to practise heroic self-sacrifice?
Well, it was said of the late Madame de Staël that she would throw all her friends into the water for the pleasure of fishing them out again. Certainly, a civilisation run on gradients of superbliss would have no need of heroism in the traditional sense. But lifelong mental superhealth needn't turn us into milksops. Quite the reverse: superenriched reward circuitry promises to make us stronger-minded and thereby more able to fulfil our life projects - and promote the well-being of others. It's the clinically depressed and other victims of "learned helplessness" who give up too easily: sadly, there's more than a grain of truth in the popular stereotype of depressives as "weak". By contrast, genetically predestined superhappiness promises tomorrow's children "larger-than-life" personalities, uncompromising integrity, and a willpower stronger than anything neurologically feasible today. Potentially, superhappiness will also enable non-utilitarians to realise their projects more effectively.
Obviously, it remains an entirely open question whether we will in fact use such technologies prudently - if we use them at all. But given the terrible emotional shipwrecks of Darwinian life, why shouldn't we (re)design our personalities to specifications at least as exacting as those we demand of, say, our cars? Why shouldn't post-Darwinian life be robust, exhilarating and crash-proof?
6) The 'STUCK-IN-A-RUT' objection
This is the worry that directly enhancing well-being by neurobiological interventions will lead to a civilisation becoming trapped in a suboptimal rut. This isn't the historically-based objection that pursuing utopian visions inevitably leads to nightmarish dystopias. Indeed, perhaps there's an important sense in which nothing can go wrong, in the ordinary unpleasant sense of "going wrong", if you replace the biological substrates of suffering and malaise with adaptive gradients of bliss. But that's the underlying point of this objection: reaching too avidly or prematurely for what is on offer may lock us permanently into a local optimum that prevents us from maximising our full potential - whatever that full potential might ultimately be. One might think here of long-acting analogues of soma, Aldous Huxley's supposedly ideal pleasure drug, or more refined and globally sustainable analogues of wireheading. No, this isn't the gulag; but surely transhumanists are entitled to expect more?
Again, this scenario can't be excluded. But its very conceivability is one reason why humanity would do well to think ahead strategically rather than collectively "stumbling on happiness", to borrow Daniel Gilbert's hopeful phrase. The credence we assign to such global-rut scenarios depends on the kinds of biologically enhanced well-being, if any, our descendants decide to embrace. For example, genetically encoding the substrates of contemplative, mystical well-being may sound attractive to people of a troubled cast of mind today, especially the temperamentally anxious and angst-ridden. Buddhists, of course, identify the extinction of desire with Nirvana. However, globally engineering this kind of lifelong bliss might indeed lead to behavioural stagnation - and a whole civilisation in perpetual stasis - even if it delivers unprecedented spiritual growth. Now in response, one might say: so what? But rather than opting to become constitutionally serene, perhaps policy-makers persuaded by the stuck-in-a-rut objection should instead promote elements of what (very) crudely one may label dopaminergically-enhanced well-being - with its tendency to enhanced novelty-seeking, exploratory behaviour and intellectual curiosity. Unfortunately, this kind of well-being has multiple pitfalls of its own. So modes of biological well-being radically different from any contemporary human stereotype deserve to be comprehensively researched too. But at least in the medium-term, "outward-looking" futures are presumably more likely to unfold than introverted civilisations based on varieties of meditative bliss. For an ecological niche remains to be populated in the shape of our local galaxy. Vacant ecological niches tend to get filled. Unless we were all to become contemplatives, or all opt to dwell in immersive virtual reality, etc., our descendants will probably radiate out and colonise the accessible universe within our forward light-cone.
What they'll do next is unclear.
7) The SOCIALLY DISRUPTIVE objection
Biologically enhanced well-being might exert catastrophically disruptive effects on the wider structure of society. This objection is the very opposite of the commonly expressed concern that "artificial" happiness will make us contented dupes more vulnerable to control by the ruling elites (cf. Huxley's soma). Instead, the argument here is that super-enhanced well-being would be disruptive of the social pecking-order - the dominance hierarchies on which all existing social primate societies are based. Low mood and submissive behaviour evolved in social mammals as an adaptation to group living - itself an adaptation against predators. To abolish the substrates of social anxiety/low mood/subordinate behaviour might turn us all into potential "alphas". Rampant alpha-plus behaviour would make society ungovernable, even in the minimal libertarian sense.
The counterargument here is that such scenarios just illustrate the importance of far-sighted planning. Uncontrolled mass mood-elevation - as distinct from emotional enrichment - might indeed provoke socially disruptive hypercompetitive behaviour, thereby worsening global catastrophic risk. Competitive alpha-male dominance behaviour in an age of nuclear, biological and chemical weaponry is perhaps the gravest threat to life on Earth. So this objection is actually much more serious than it sounds. On the other hand, mood-elevation can also be empathetic and pro-social. "Mirror neurons", for instance, can be multiplied and functionally amplified no less than hedonic tone, thereby enhancing our propensity for cooperative behaviour. Likewise, long-acting designer "hug-drugs", safe and sustainable analogues of MDMA and its congeners, are feasible too - as are their genetic equivalents. Social cohesion may thereby be biologically enhanced. The possible ramifications of radical mood-enrichment for existing social hierarchies are poorly understood because such scenarios have never been systematically modelled. Yet this neglect is no reason permanently to "freeze" the greater part of humanity in the biology of subordinate timidity - the condition of many "low ranking" social primates in the world today.
8) The SELECTION PRESSURE objection
It may be technically feasible, in the short run, directly to amplify the substrates of well-being across the lifespan. It may even be technically feasible to elevate our normal hedonic set-point through somatic or germline gene-therapy. But in the long run, there will be selection pressure against escalating gradients of superhappiness. So the scenarios discussed here aren't realistic.
In a post-ageing world centuries hence, reproduction will need to be exceptionally rare and centrally-controlled - regardless of whether or not our quasi-immortal descendants practise hedonic engineering. Otherwise the Earth (or in theory our galaxy or local galactic supercluster, etc) will exceed its physical carrying capacity. However, this kind of speculation involves very complex arguments on the nature of selection pressure in an era when traditional childbearing has more-or-less ceased.
In the meantime, there will be intense selection pressure, but there are powerful grounds for believing such selection pressure will work against any genotypes/allelic combinations predisposing to Darwinian unpleasantness in all its forms. This is because we are on the brink of a reproductive revolution of designer babies. Prospective parents will shortly be choosing the personalities/genetic make-up of their future children rather than playing genetic roulette. As responsible child-planning becomes common, and preimplantation genetic screening becomes routine, severe selection pressure will come into play against genes/genotypes predisposing to the darker modes of human experience. This isn't the place to attempt formal game-theoretic modelling or a treatise on posthuman population genetics. So for illustrative purposes just imagine: if you were a prospective parent choosing the genetic make-up of your future children, what genetic dial-settings would you opt for? You wouldn't want genotypes predisposing to anxiety disorders, depressive illness, schizoid tendencies, and other undisputed pathologies of mind; but how high (or, in theory, how low) would you set your children's normal hedonic tone? Cross-culturally, parents typically say they want their children to be happy, albeit "naturally" so; but how happy? Redheads may prefer to have red-headed children; but few depressives will want depressive children. All that's needed for selection pressure to get to work here is a partially heritable, slight preference for children who are modestly more temperamentally happy [or less gloomy] than oneself. Selection pressure is fundamentally different when evolution is no longer "blind" and random with respect to what is favoured by natural selection - i.e. when genes/allelic combinations are chosen/designed in anticipation of their likely effects.
Such selection pressure is already manifest in non-human domestic animals; it will shortly come into play in humans. Hence we are entitled to speak of an impending post-Darwinian era - not because selection pressure will be absent (on the contrary!) but because we are poised to switch from the era of "natural" to "unnatural" selection.
This momentous reproductive shift certainly doesn't exclude continuing selection pressure against some modes of subjective well-being, e.g. undifferentiated bliss. Wireheads and their natural analogues, for instance, will presumably always be at a reproductive disadvantage. But a motivational system of high-functioning gradients of superhappiness may be extremely adaptive if that's the behavioural phenotype we want for our children. Children genetically predisposed to be abundantly happy and affectionate are more rewarding to raise than surly, depressive children. It should be stressed that this optimistic scenario doesn't mean that posthuman social life will resemble a communal hug-in or an MDMA-driven rave. There can be functional analogues of depressive realism even in paradise.
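The argument above - that a partially heritable, slight parental preference for happier children is all it takes for hedonic set-points to drift upward generation by generation - can be illustrated with a toy simulation. The numbers here (population size, number of candidate embryos, strength of preference) are arbitrary assumptions for the sake of the sketch, not claims from population genetics:

```python
import random

def next_generation(population, preference=0.5, noise=1.0):
    """One generation of 'unnatural' selection on hedonic set-point.

    Each parent considers a few candidate embryos, whose heritable
    set-points are the parent's own plus genetic noise. With probability
    `preference` the parent picks the happiest candidate; otherwise
    the choice is random (i.e. only a *partial*, slight preference).
    """
    children = []
    for parent in population:
        candidates = [parent + random.gauss(0, noise) for _ in range(3)]
        if random.random() < preference:
            children.append(max(candidates))   # deliberate choice
        else:
            children.append(random.choice(candidates))  # genetic roulette
    return children

# Start with a population of 1000 at an arbitrary baseline of 0.
population = [0.0] * 1000
for generation in range(20):
    population = next_generation(population)

mean = sum(population) / len(population)
print(f"Mean hedonic set-point after 20 generations: {mean:.2f}")
```

Even with the preference exercised only half the time, the population mean climbs steadily (here by roughly 0.4 arbitrary units per generation), which is the point of the passage: once selection anticipates its effects rather than operating blindly, a weak directional bias compounds rapidly.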
9) The RISKS OF HASTE objection
The priority should be superintelligence, not superhappiness. Only after we are intelligent enough to understand the implications of what we're doing should we explore radical mood-enrichment. The risks of acting prematurely and building a fool's paradise are too great.
As it stands, this objection may well be correct. Only superintelligence can maximise the utility function of the universe. But emotional enrichment - as distinct from crude pleasure-amplification - is itself presumably a critical ingredient of superintelligence. So we should take care to avoid constructing a false dichotomy: mature superintelligence will presumably entail an unimaginably enriched capacity for empathetic understanding - a "God's eye view". This point is relevant because - given some fairly modest assumptions and even the slightest sense of moral urgency - we should be prepared, if necessary, to take risks to eliminate a terrible scourge, to prevent suffering and cruelty to our fellow creatures, or to act when the risks of inaction are greater than those of action. What's important is assessing risk-reward ratios. One obvious parallel is ageing. Bluntly, we are all dying. If you regard ageing as a horrible disease, then you may be prepared to run risks to retard its progression. Thus one might take a daily cocktail of supplements (e.g. resveratrol, selegiline, etc) that increases lifespan and life expectancy in "animal models", but whose efficacy and long-term safety is unproven in controlled longitudinal studies in humans. Perhaps the minority of "healthy" [i.e. dying] humans who adopt such a regimen misjudge the risk-reward ratio involved; but if so, the error doesn't reside in a willingness to take calculated risks - merely in their miscalculation. There are perils in inertia no less than in initiative. Likewise, current victims of intractable pain or chronic depression, whose quality of life is meagre (or worse), may justifiably take more therapeutic risks, and explore more experimental treatments, to alleviate their distress than the psychologically robust who already enjoy life to the full - by mediocre Darwinian standards, at any rate.
A complication of this analysis is that all enhancement technologies may be viewed as remedial therapies by the enlightened standards of our successors. Yet there is a fundamental difference between taking risks to alleviate serious disease, chronic pain syndromes or prolonged psychological distress and taking risks to enhance pre-existing well-being.
Sadly, there aren't any short-cuts. So in that sense the objection is unanswerable. Current recreational euphoriants, for instance, may give their users a faint, fleeting, shallow foretaste of posthuman bliss; but for the most part, they activate the hedonic treadmill - and produce nasty side-effects, insidious or otherwise. It's worth recalling that some very smart people have been seduced. Twenty-eight-year-old Viennese neurologist Dr Sigmund Freud wrote a paean of scholarly praise for the therapeutic benefits of cocaine, newly isolated from the coca plant. Bayer introduced Heroin as a non-addictive remedy for coughs. And in the words of one intravenous heroin user: "It's so good. Don't even try it once." Any potential wonderdrug or gene-therapy that promises a miraculous breakthrough to posthuman nirvana needs to be investigated with both extraordinary urgency and extraordinary scepticism.
10) The CARBON CHAUVINISM Objection
This talk has focused on enriching the "biological substrates" of emotion. Yet given some quite widely accepted functionalist arguments in contemporary philosophy of mind, why not scan, digitise, and "upload" ourselves into silicon or another medium - and then reprogram ourselves? The exponential growth of computing power promises to endow uploads with the self-reprogramming ability to cure ageing, infirmity and disease; attain true superintelligence; enjoy total morphological freedom; and amplify our reward pathways too. If the exponential growth of [inorganic] computer power continues unchecked, then this transformation may be only decades away - not the millennia that a meatware transition to posthumanity would presumably entail.
The range of opinions among transhumanists on uploading runs all the way from those who think it's inevitable to those who view it as some kind of millennialist death cult. If your overriding ethical goal is "merely" to eradicate suffering, then uploading could almost certainly achieve its abolition - one way or the other. However, most people aren't negative utilitarians. If you want "your" upload to achieve supersentience as well as superintelligence, or to enjoy posthuman levels of well-being, to achieve quasi-immortality, or simply to conserve your identity as understood today, then the existential risk posed by uploading is immense - perhaps the biggest existential risk the human species has ever contemplated. So before embarking on anything so revolutionary, it's vital that we have a compelling theory of consciousness - and a mathematically exact description of its myriad textures - on pain of creating zombies. Maybe you feel 99% certain that the sceptics are wrong, e.g. the neurophilosophers who believe that unitary consciousness depends on quantum coherence, and hence any aspiration to non-trivial digital sentience falls foul of the "von Neumann bottleneck". But either way, the postulation of sentience in silico is not a testable scientific hypothesis. So advocates of uploading are placing a lot of faith in a metaphysical theory. Of course, the conviction that anyone else is conscious is a metaphysical theory too, albeit less controversial.
By way of [false] analogy, consider the game of chess. Imagine a misguided philosopher who claims that what matters when playing chess is not just the sequence of moves, but also the particular textures of the individual chess pieces; and that chess games played with wooden or metal pieces, say, or games played online via computer, can be different in character even if the sequence of moves played is the same. Surely, we would say, this fellow is simply confused: he is missing the point of chess. The particular textures of the pieces, and even the complete absence of any such textures in computer chess matches, are unimportant, since the textures, coloration, and physical composition (etc.) of the pieces are functionally irrelevant to the gameplay - a mere implementation detail. The same game of chess can be multiply realised in different physical substrates. Now consider uploading. Imagine again a naïve-sounding bioconservative who insists that what matters for successful uploading is not just the behaviour [and behavioural dispositions] of hypothetical uploads, but also the particular textures [aka qualia: "what it feels like"] of their mental-cum-perceptual states. Now in one sense, yes, the phenomenal textures [if any] and substrate composition of a hypothetical upload are mere implementation details - functionally irrelevant insofar as the upload has the right functional architecture to support input-output relations identical to its meatworld counterpart. ["If it walks like a duck, quacks like a duck...", etc.] Yes, if we were exhaustively defined by our behavioural patterns, then the spectre of inverted qualia, "Martian pain", absent qualia, and so forth, would be of no consequence. But in another, critically important sense, the analogy with chess fails. "What it feels like" to be me is of the very essence of my personal identity: it's not a trivial implementation detail, but definitive of who one is - one's intrinsic nature.
If we had the slightest idea how to scan, record and digitise qualia, then uploading might be feasible; but alas we don't. It is scarcely possible to overstate our scientific ignorance of consciousness. For now, at least, uploading belongs to the realm of science-fantasy rather than science-fiction.
However, let's assume for the sake of argument that sentient uploading will in future be technically and societally feasible - perhaps using quantum computers with a non-classical architecture. Given a mass-upload scenario, the fate of meatware "left behind" is unclear. Unless traditional organic life is to be liquidated - i.e. "destructive" uploading, the final solution to the organic life problem - then primordial Darwinian organisms will still need to be "rescued" by their postorganic descendants. So here we come back to the biological substrates of consciousness with which we began.
Centuries of technological and socio-economic "progress" haven't left us discernibly happier in the course of a lifetime than our hunter-gatherer ancestors. There's no compelling scientific evidence that thousands of years of reshaping our environment has cheated the hedonic treadmill one iota. Will the future resemble the past? Almost certainly not. Tomorrow's neuroscience promises to revolutionise subjective well-being, both individually and for our species as a whole. More speculatively, we may overcome our anthropocentric biases and enrich the rest of sentient life too.
Superintelligence, Superlongevity and Superhappiness?
But by how much? Unlike computing power, an exponential growth of happiness is (presumably) impossible, short of technologies beyond human imagination. Yet securing even an approximate linear growth of its biomarkers would represent a stunning discontinuity in the history of life to date. Posthuman versions of the Goldilocks zone - "not too hot, not too cold" - could potentially exceed the hedonic range adaptive for our hominid ancestors by several orders of magnitude, if not more. Will our posthuman descendants eventually decide, to echo Bill McKibben, "Enough!"? Possibly; but if so, it's unclear how, when and why.
It's worth emphasising that the sorts of scenarios for posthuman mood-enrichment explored here aren't, for the most part, an alternative to other transhuman scenarios of our future, notably superintelligence and superlongevity. On the contrary, a fine-grained control of our emotions together with motivational enhancement should enable us, other things being equal, to realise these scenarios more effectively - and to savour their outcome all the more appreciatively. Nor is hedonic enrichment some kind of prescription for how to live posthuman life - any more than being cured of a chronic pain condition dictates how one should lead a pain-free existence. "The world of the happy is quite different from that of the unhappy" observes Wittgenstein in the Tractatus. Yes, and the world of the superhappy is quite different from the human world. Whether we'll ever investigate its properties, however, is an open question.