Erik’s Newsletter

Consciousness and Ethically Optimal Futures

Erik Lockwood
Apr 1, 2021
The topic of consciousness has a notorious reputation in philosophy and psychology for being one of the last great mysteries, and one which, if taken seriously, has a direct line to other exotic problems such as the fundamental nature of the universe or multiverse, what the most fundamental entities are, etc. What I have set out here attempts to trace a conceptual line from what I regard as the best contemporary theory of consciousness to various ethical questions as well as the ultimate fate of the human species and the universe.

The topic invites confusion because, apart from anything else, it is often so arcane that people cannot even mutually agree on how to use language, so setting out some definitions is important.

Consciousness may be defined, accurately albeit inelegantly, as the common denominator of all things which can only be described in qualitative language, such as love, pain, and colours. Academics refer to these ineffable raw feelings of subjectivity as “qualia.” This is not to say that qualia have no relationship to scientific investigation: we understand the neurophysiological correlates of emotions such as love fairly well. But the feeling itself is purely qualitative, and it is not understood why any matter, including brain matter, should have qualia, regardless of its physical or chemical composition. Why is a molecule such as oxytocin implicated in emotions but not a simpler molecule like water? It seems question-begging to simply say, “One of them is utilised by the brain.” Alternatively, you could say, à la Thomas Nagel, that consciousness is the what-it’s-likeness of being, i.e. “What is it like to be a bat?” (Nagel). If there is something it is like to be a bat (or a person, or a robot, or anything else), it is conscious. If not, it is not. Consciousness is the source, and qualia the individual varieties, of subjectivity; at times I use the terms consciousness and subjectivity interchangeably.

The term “qualia variety” refers to a category of qualia, such as colours or tastes, whereas “qualia values” are minimally differentiable locations within those categories, such as a specific colour hashcode in the colour space. In the future, it is possible that we will discover qualia varieties that do not fit within our everyday classification schemes: maybe there is something “halfway” between taste- and sound-qualia, whatever that would even mean. Such varieties are mysterious to us for now in exactly the same way that colour is mysterious to a colour-blind person.
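To make the jargon concrete, here is a minimal sketch in Python. It is purely illustrative: the class name and the choice of coordinate axes are my own scaffolding, not a claim about how qualia are actually individuated.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualiaValue:
    """A minimally differentiable location within a qualia variety."""
    variety: str        # the category, e.g. "colour" or "taste"
    coordinates: tuple  # a point within that category's state-space

# A specific colour "hashcode" in RGB colour space: one qualia value
# within the "colour" variety.
crimson = QualiaValue(variety="colour", coordinates=(0xDC, 0x14, 0x3C))

# A (hypothetical) taste value on sweet/sour/salty/bitter/umami axes.
sour = QualiaValue(variety="taste", coordinates=(0.0, 0.9, 0.1, 0.0, 0.0))
```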

At this point, it is useful to distinguish consciousness from conscious minds. Conscious minds are subjects of experience: the metaphorical “audience” in the theatre of consciousness (like ourselves). But it need not be the case that qualia themselves, experiential qualities, require conscious minds in order to exist; an important aspect of the theory set out here is to distinguish the one from the other. Sentience is a somewhat elusive property of conscious minds, and (apparently) only conscious minds, which is often regarded as supremely important in various debates of practical ethics, such as those over abortion and veganism, so in the future it may be useful to come up with a rigorous definition of it. Possible components of such a definition could, and I think will, include synchronic identity (a synonym for the “binding” I will discuss at length later in this article), intelligence, and how many qualia values a thing has access to. Consequently, a hypersentient being, the most sentient thing physically possible, will be an entity with effectively infinite computational power (see: “Matrioshka brains”) and, crucially, access to all possible qualia varieties and values, with effectively infinite degrees of freedom to move among them, i.e. able to traverse all qualitative states at whim; more on this later.

Explaining consciousness, scientifically and philosophically, requires solutions to four problems: the hard problem of consciousness, the binding problem, the causal indetermination problem, and the palette problem. The first two of these have been discussed at length, whereas the latter two are rarely addressed.

The hard problem of consciousness means addressing both the how and the why of qualia: how is consciousness even physically possible, and, secondarily, what adaptive purpose does it serve? It is a conundrum for the obvious reason that our current models of physics are supposed to be causally closed and complete, and yet physics is blind to subjectivity: no physical, mathematical, or neurological description of the colour red will ever be sufficient to cross the explanatory gap and make red reveal itself to the colour-blind person in the way it reveals itself to the rest of us. Moreover, it is not obvious why consciousness is necessary to life or reproductive fitness at all. It is possible to conceive of an android which behaves and responds to its environment exactly as a human would but is not conscious: the lights are off; there is nothing “it is like” to be such an android. This concept is sometimes referred to as the philosophical zombie, or p-zombie. So, why are we not p-zombies?

Emergentism and eliminativism are fairly common responses. The former proposes some sort of “strong emergence” of consciousness from inert matter. This is untenable because, even once physics is complete, however that is achieved (M-theory, superstring theory, or whatever else), the emergence story still feels an awful lot like saying “and then a miracle happens.” This is a problem for physicalism, the doctrine that all reality is exhaustively described by the equations of physics and their solutions, unless some robust theory makes room for consciousness within physics. Eliminativism is probably the easiest solution to dismiss: it simply denies that there is anything to explain, that the class of entities under the umbrella “consciousness” does not exist. Undergoing tooth extraction without anaesthetic will quickly disabuse you of this.

The causal indetermination problem arises from another proposed solution to the hard problem: that of epiphenomenalism. This is the claim that consciousness is simply a side-effect of physical and biological processes but itself has no causal power. For example, an epiphenomenalist might claim that the emotion of fear is an epiphenomenon and that it is the biochemical secretions (adrenaline) associated with fear, rather than the emotion itself, which causes the heart to beat faster. But the cognitive complexity of consciousness alone seems unusual for something that is merely a “byproduct.” Moreover, if consciousness has no causal power, how can it cause us to have conversations about its existence? This is the causal indetermination problem or causal efficacy problem.

Physics, if one is to be a reductive physicalist, is supposed to exhaustively describe the properties of matter, but to say that consciousness “emerges” from non-conscious matter defies reductive physicalism, because it means there is some whole new category of properties to contend with, governed by as yet unknown laws. In that sense, we should be p-zombies if physicalism is true (absent some account which preserves physicalism while making room for consciousness). Physics can give an account of how the brain instantiates the colour red anatomically, physically, chemically, etc., but this does not get around the explanatory gap: neurons can be described in causal-relational, mechanistic terms, whereas the experiential redness of red is still purely qualitative. It is unclear why epiphenomenalism is apparently so attractive, except that it gets around the hard problem by rendering consciousness a mere “shadow” (by analogy) of the functionalistic activities of the brain. Even this analogy is flawed, though, since one’s shadow is perfectly visible and has lots of effects in the world.

Particle-ontology panpsychism, what I sometimes humorously call lego-brick panpsychism, comes close. The word’s Greek roots literally mean “all-mind theory,” but this has to be distinguished both from animism, the notion that tables, chairs, mountains, etc., are conscious minds, and from the notion that the universe is one vast psychotic megamind. It is neither of these. Rather, it attempts to tackle the emergence problem by conjecturing that the most fundamental parts of matter, the fundamental particles of the standard model (as per particle ontology), have both subjective and objective aspects. This gets around emergence because, if consciousness is already baked into reality at the level of the fundamental entities, then there is no “emergence.”

However, this picture still leaves both (1) the binding problem and (2) the palette problem. The first, sometimes also called the combination problem, is intuitive enough: it is all very well to suppose that the 31 fundamental particles of the standard model encode all subjective qualities (taste, temperature, pain, love, fear, colours, etc). But how does one go from this to the unitary subjects of conscious experience that we all know? When one looks at a blue car, for example, how does the brain combine the vast array of distinct sense-data involved in the experience (edge detectors, motion detectors, colour, proprioception, and anything else happening in any moment of the experience) into a cohesive, unitary perceptual field populated by dynamic objects and apprehended by a unitary sense of self? An easy way to get a handle on how important binding is, is to look at syndromes in which it even partially breaks down. For example, motion blindness: for people with this disorder, the natural “flow” of movement stalls and motion is instead perceived as a series of frames. Or, more vividly, florid schizophrenia, in which even the sense of self is fractured and experience is mangled, disjointed, and adaptively useless.

An even better example of binding in the brain is to imagine looking at a sheet of paper with a blue triangle and a red circle inscribed on it. You do not have to pause to work out which colour is associated with which shape, unless you have a disorder such as simultanagnosia. People with this syndrome struggle to perceive more than one object in consciousness at once and would thus confuse the association between the colours and shapes they are not directly attending to. Classical digital computers, likewise, are simultanagnosics in the sense that they can scan images for particular objects of interest and try to isolate certain features of them if you ask them to (such as colours), but they never get a complete picture.

A counter-argument might be that our perception really is not that unified. Most people have seen the selective-attention video where you have to focus on a group of people passing a ball around and, focusing just on the ball, you never notice the man in the gorilla suit walking between the players; he never enters your perceptual field. People like Dan Dennett might use this to argue that binding really is not that special. But of course, the reason (let us focus on vision alone for simplicity) that you do not pick up on every detail in your line of sight is that it would not be adaptive; it would not be informationally useful to do so. Hence, there is also the other extreme. At one extreme there are simultanagnosics, who struggle to perceive more than one object at a time, and at the other there are some people with severe autism, etc., who lack perceptual filters and pick up on things which for most of us would just blur into the background. Between the two is a place of adaptive unity, and that is what the perceptually bound visual field looks like for most of us, most of the time.

If it is possible for neurons (or mind dust, or whatever) to accomplish this simply by virtue of being interconnected, packed extremely close to one another, and sending chemical/electrical messages to one another, then you might have to take seriously the possibility that the United States is conscious. I cannot remember the name for the people who actually believe this. The United States has all the types of properties normally associated with conscious beings: its population is inter-connected and inter-communicating, it has centres of control in the state and bureaucracy, it shares genetic material, and it responds to external stimuli, opportunities, and threats (say, to national security). Is there less information, less intelligence, less internal co-ordination, etc., in all this than in, say, a hamster? (Schwitzgebel.) Bear in mind that most people would indeed consider a hamster to be conscious.

And yet most people, including me, do not take this possibility seriously, and maintain that the United States, no matter how big its population, no matter how inter-connected that population, no matter how fast its rate of information exchange, or anything else, would never constitute a single conscious mind. But it is not possible to explain why not, on a view such as particle-based panpsychism, without ad hoc noodling. It is possible to appeal to unique laws of nature, psychophysical laws, to explain these oddities, and I do not discount this possibility, but it somewhat takes the “reductive” out of “reductive physicalism” if we are to postulate new laws of nature with such alacrity.

Moreover, this panpsychist view also does not address the palette problem. Why is conscious experience so varied and complex if its varieties map onto a particle ontology? Conventionally, there are only a few dozen types of fundamental particles in the standard model. Compare this with the staggering variety of qualia: from ordinary waking consciousness to dreaming, sleep paralysis states, and exotic drug-induced states. Why is it even possible to access the states of consciousness of, say, dimethyltryptamine, when it does not seem to serve any adaptive biological purpose? Any good theory of consciousness will have to explain the binding problem and account for the diversity of qualia.

To cut to the chase: the only theory to date which can resolve all these problems without resort to ad hoc justifications or needlessly multiplying the laws of nature is what David Pearce calls physicalistic idealism. It can be regarded as an update of panpsychism which incorporates quantum physics. It is physicalistic in that it does not propose new categories of properties which are not accounted for by the standard model of physics and still maintains that reality is exhaustively explained by the equations of physics and their solutions. However, it is idealistic because it maintains that the most fundamental entities in reality are experiential in nature. This will require further elaboration:

Unlike anything in classical Newtonian mechanics, the concept of quantum superposition can serve as a solution to the binding problem. In quantum mechanics, the sum of all possible states of a quantum system is itself a valid state, hence the famous Schroedinger’s cat thought experiment. The binding problem disappears if the components of each moment of experience, rather than being an amalgam of various feature processors and their mere “interconnectedness,” are instead explained as neuronal superpositions, where the individual feature processors of the brain are in superposition with each other, allowing the creation of informationally sensitive “Schroedinger’s cat” states in the brain. This process is referred to as quantum coherence, and the idea that quantum mechanics is essential to some function of the brain is called quantum mind.

The idea is normally dismissed because the “window” of realistic superpositions is very narrow both spatially and temporally: the brain is hot enough to trigger decoherence unless the individual “frames” of experience are extremely brief (attoseconds to picoseconds), and quantum systems are liable to environmental disturbance unless they are extremely small (such as the photons or electrons in the double-slit experiment), so any neuronal superpositions would have to recruit matter at the level of the smallest particles to create subjects of experience. A priori, however, these are not problems for the theory. One need only assume that billions of years of evolution by natural selection were sufficient to preserve information-bearing superpositions because they were necessary for the adaptive fitness of the organism. If that sounds far-fetched, note that there has already been talk of plants utilising quantum coherence in photosynthesis, and if organisms as simple as plants can make use of such things, there is no reason to conclude that the brains of mammals, or any other category of conscious minds, could not do likewise.

Furthermore, you do not need to understand the mathematics of the quantum wavefunction to comprehend what is going on ontologically. The wavefunction is just a mathematical description of the behaviour of (relatively speaking) isolated quantum systems. Let us say the system in question is a pair of entangled electrons. To say that they are “entangled” means that they are coherent with each other (hence: quantum coherence), and to say that they are coherent with each other means that their behaviour is governed by the same wavefunction. That is why the behaviour of one is perfectly correlated with the behaviour of the other, even if they are separated by light years: although there is no time for any signal to propagate between them, they are mathematically identical to each other. Whether this constitutes type or token identity is controversial, but it is their unity which is of interest here, and whether such unity could be a structural “best fit” for ontologically unitary macroscopic phenomena such as our minds.
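As a standard illustration (textbook notation, nothing specific to Pearce), a maximally entangled electron pair is described by a single joint state that cannot be factored into two separate one-particle wavefunctions:

```latex
% Singlet (Bell) state of two spin-1/2 particles: one shared wavefunction.
\lvert \Psi^- \rangle
  = \tfrac{1}{\sqrt{2}}
    \bigl( \lvert\uparrow\rangle_A \lvert\downarrow\rangle_B
         - \lvert\downarrow\rangle_A \lvert\uparrow\rangle_B \bigr)
% No product state |a>_A (x) |b>_B equals this, which is the formal
% sense in which the pair has one wavefunction rather than two.
```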

Incidentally, just in case people are not aware of the distinction between type and token identity: if I construct a perfect clone of you, atom by atom and quark by quark, who is standing outside your house, that person is not token-identical to you but rather type-identical. Whereas, the entangled electrons are (arguably) local manifestations of the exact same instance of an object; they are token-identical, which is “allowed” because it is mathematically cogent.

When you say that this system has decohered, that just means that the wavefunction of the system has become entangled with that of the surrounding environment, say, the room you are in. Then the room becomes entangled with the state of the building, the building with the state of the Earth, the Earth with the state of the solar system, and so on. It is obviously not as “cleanly” cut as that, since the entanglement network does not care about categories such as “building,” but you get the idea. In the end, there is just the universal wavefunction, and all the smaller systems, like the electron pair, are conceptualised as local manifestations of it.

But that still leaves a question: since it seems to take extreme conditions to avoid decoherence, how do all these decohered systems in the universe sum to a quantum-coherent structure at the level of the whole universe? Remember that an isolated wavefunction is coherent by definition; to say that a system has decohered just means it is entangled with the state of its environment. If one takes the “universe” to mean simply everything which obeys our laws of quantum mechanics, then the universe, definitionally, is coherent, as there is no greater environment for it to be nested in. This was puzzling to me for a while, because I was still thinking in terms of a Newtonian atomistic ontology, where parts like atoms are ontologically prior to wholes like the universe. The quantum-mechanical picture suggests that the opposite is true, i.e. that a holistic ontology is the correct one. In other words, there is only one basic object, the entire universe, and the wavefunctions of any structures within it, such as the electron pair, or Schroedinger’s cat, or our minds, are just local branches, or topological segmentations, however you want to say it, of the universal wavefunction, which is ontologically prior to its parts. The idea that there is one basic object from which all other objects are derived is known as priority monism; this is a version of it, motivated by the evidence for quantum holism and wavefunction monism.
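Schematically, in standard notation and with nothing unique to this theory, decoherence is just this entangling of system and environment:

```latex
% Before: system S in superposition, environment E in a neutral state.
\Bigl( \textstyle\sum_i c_i \lvert s_i \rangle \Bigr) \otimes \lvert E_0 \rangle
\;\longrightarrow\;
\textstyle\sum_i c_i \, \lvert s_i \rangle \otimes \lvert E_i \rangle
% After: S and E share one joint wavefunction. "S has decohered" just
% means no coherent state of S alone survives the entanglement.
```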

In quantum mechanics, when you calculate the position of an electron in space, that position is actually a superposition of multiple states (locations), described mathematically by the wavefunction, from which you can read off the probability of each position. And we know that superposition is on solid ground: when you do the double-slit experiment and fire an electron at a screen through two slits, you get an interference pattern which suggests that the electron goes through both slits at once. This is superposition. At least, this is how it plays out when we are not observing the process (I will get to that).
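In symbols, writing the position states discretely for simplicity (a standard textbook idealisation):

```latex
\lvert \psi \rangle = \sum_i c_i \, \lvert x_i \rangle ,
\qquad
P(x_i) = \lvert c_i \rvert^{2}
% The electron's state is a weighted sum over candidate locations x_i;
% the Born rule reads off the probability of detecting it at each one.
```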

We also know that if you fire two electrons at each other they will scatter off each other, but we do not know exactly how, because their trajectories are described by separate wavefunctions that only give probabilities. Yet once you switch your detector on and observe one electron, it instantly seems to collapse the wavefunction of the other, no matter how far apart they are. Rather than spooky action at a distance, the standard explanation of what is actually going on here is that once the electrons interact they cease to have separate wavefunctions and fall under a single wavefunction. To be strictly rigorous, you have to be a wavefunction monist and say that there is actually only one wavefunction: the wavefunction of the entire universe. What I have just described is called quantum entanglement, and decoherence is just the name for what happens when a quantum system in superposition becomes entangled with its environment.

In the Schroedinger’s cat scenario, the radioactive isotope is in a superposition of decayed and not-decayed, and it is entangled with the state of the detector which is going to release some poison or whatever depending on whether the atom decays or not, thereby killing the cat (or not). Now, because we humans are made of matter which also obeys the laws of quantum mechanics, when we open the box and look inside, we become entangled with the state of everything inside it, so we see the cat alive and we see the cat dead, but in separate worlds: decoherence causes the wavefunction to branch the universe in two. This is the many-worlds interpretation, but that is a separate issue.

In order to maintain these exotic non-classical states without the wavefunction branching the universe, the individual “frames” of bound, i.e. locally quantum-coherent, consciousness must be incredibly short, because the brain is such a warm system that thermal decoherence kicks in fast. Max Tegmark estimates picoseconds or less, and takes it as read that this rules quantum mind out. But suppose that Pearce’s account of quantum mind is true: how would your experience of the world be any different from the experience you are having right now? After all, we do not see the individual frames of films playing at even 30 frames per second.
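To give a feel for those numbers, a back-of-envelope sketch in Python, assuming a ballpark decoherence window of roughly 10⁻¹³ seconds (Tegmark’s published estimates span several orders of magnitude, so treat this as order-of-magnitude only):

```python
# If each bound "frame" of experience lasts ~1e-13 s, how does the
# resulting frame rate compare with film at 30 frames per second?
frame_duration_s = 1e-13                    # assumed decoherence window
frames_per_second = 1.0 / frame_duration_s  # 1e13 frames per second
film_fps = 30.0

print(f"{frames_per_second:.0e} frames/s")          # 1e+13 frames/s
print(f"{frames_per_second / film_fps:.1e}x film")  # ~3.3e+11 times finer
# If the gaps between 30 fps film frames are already imperceptible,
# gaps between ~1e13 fps frames would be far more so.
```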

Another nuance of this that is probably worth mentioning: the binding problem is normally described in terms of particle-based ontologies because it is just the way people are used to thinking. But if you have the quantum-mechanical ontology, where the entire universe works under the universal wavefunction, then it is more like a boundary problem where you have to explain how the universal wavefunction is manifesting separate parts. What I would say, as we shall get into in a minute, is that rather than having a particle ontology, the universe is made of fields of qualia – you can maybe imagine the universe as a tank filled with many different fluids sloshing around, which represent the fields. The majority of the tank’s volume is completely smooth and uniform, except for the “bubbles” of locally quantum-coherent superpositions, presumably only a fraction of which have anything to do with brains representing worlds.

The palette problem also requires an update from quantum field theory. We are normally taught that the smallest subatomic particles are the fundamental constituents of reality, but quantum field theory suggests otherwise: particles are conceptualised as perturbations, or excited states, of underlying fields, and it is those fields which are the most fundamental reality. Without going into too much technical detail, it was not obvious in Newton’s time why action at a distance was possible. How should the Sun, 93 million miles away, have any effect on the Earth’s motion when they are spatially non-contiguous? You can get around that problem if you see space, or spacetime as we now understand it, as a single contiguous structure, rather than a non-entity or a mere medium through which objects like the Sun and Earth travel, with gravity as a perturbation within that structure: the Sun is a gravitational well in spacetime, and that is what affects the motion of the planets. Likewise, when you call someone, this is not action at a distance; rather, your phone is creating perturbations in the electromagnetic field propagating between you and the cell tower.

Thus, rather than a particle ontology where the varieties of qualia are constrained by the 31 standard particles, they instead map to these deeper, underlying structures called fields, whose variety and complexity are not so bound. Reality, at the bottom, is entirely composed of fields of mostly decohered qualia, and bound conscious minds evolve through natural selection by exploiting (local) quantum coherence, in which the brain “mines” the most fitness-enhancing qualia and combines them into a dynamically unified world-simulation. This leaves no binding problem, no palette problem, no hard problem, and no problem of causal indeterminacy since only consciousness exists and only consciousness has any causal power.

If you want to integrate qualia into a reductive physicalist worldview, you have to come up with some natural kind that qualia can be mapped onto. The 31 standard particles seem inadequate because the diversity of qualia is so immense. What I would argue is that there is some categorisation schema for qualia, as yet obscure to us, which structurally corresponds to certain natural kinds at the level of quantum fields, and that our neurons instantiate them in the brain with the use of certain proteins inside those neurons. That is what genes are: just instructions to build certain proteins. And of course, we will have to revise our understanding of what a protein actually is in terms of quantum field theory.

That still leaves the question of how the universe, if it is an ontologically unitary object, manifests separate parts, such as our minds. I would argue that part of the answer is local quantum coherence, as opposed to the ultimate universal coherence; the two are meaningfully distinguishable.

Local quantum coherence may be both necessary and sufficient to solve the binding problem. However, minds in any sense we would recognise must require more than this. The universe is a coherent quantum object composed of multiple states; that is an example of what you might call universal qualia-binding. Nonetheless, it is not a mind. Equally, at the local level, if you super-cool liquid helium-4 down to about 2 degrees above absolute zero, it becomes something called a superfluid: the total volume of liquid becomes a single quantum-coherent volume, with the wavefunctions of the individual atoms syncing up so that they act in unison instead of rubbing past one another. That is why superfluids have no viscosity and can flow without any loss of kinetic energy: if you stirred a pot of superfluid helium into a whirlpool, it would just keep spinning for ever. But that is neither here nor there. Because one wavefunction describes the whole volume of liquid, that is technically a form of binding, and there technically IS something it is like to be a volume of superfluid helium. But it would not be a mind. It would just be some random aggregation of obscure qualia; minds need natural selection to mine bedrock reality for the most adaptive information-signalling varieties of qualia.

All of this makes it somewhat hard to define exactly what a philosophical zombie is. Of course, p-zombies are only supposed to be a thought experiment to criticise materialism – materialism being the view that the world is made from non-conscious matter and that consciousness “emerges” from it instead of being fundamental as it is in other views. The technical term which makes sense under panpsychist views is “micro-experiential zombie,” that is, qualia that do not cohere with each other. This technically does mean that you can refer to any arbitrary aggregation of matter in the universe as a zombie. But then, you also need a new term to refer to this weird class of objects which are bound but do not constitute minds in any sense we would recognise: superfluid helium, the centres of neutron stars, etc. Perhaps: unselected binding, or “noise-binding,” i.e. random binding of qualia that have not been recruited by the process of natural selection for any information-signalling purpose.

Once again, to avoid confusion, it must be stressed that this is NOT “Matrixism,” the idea that reality itself is subjective. Reality is mind-independent. The mind-independent objective world described by science exists as a function of the structural-relational properties of qualia and their interactions with one another, whereas consciousness is “the thing in itself” (Kant), the intrinsic nature of being. This argument from intrinsic natures has been explicated by many philosophers: science can describe concepts such as mass and charge in terms of their structure and mathematical relations, but it cannot tell you what mass or charge actually are. I remember once asking my science teachers about this without realising it: they would give me a description of an electron in terms of its behaviour and mathematical properties, and I kept saying, “Yeah, I get all that, but what actually is it?” Of course, they just shrugged their shoulders. As Stephen Hawking observed, the quantum wavefunction must track physical reality, but it is still only a mathematical description; we do not know what it “is.” The intrinsic nature, that which “breathes fire into the equations” (Hawking), is still very much up for grabs, and if you follow the argument set out here, the intrinsic nature is consciousness.

“Causal-relational” probably gets the point across even better than “structural-relational.” So, what is mass and what is charge? Mass is defined in terms of resistance to acceleration, i.e. the way something affects other objects. Charge is defined in terms of how an object behaves inside an electromagnetic field. But again, what are mass and charge in themselves? We do not know. One way to visualise the problem is to ask: what properties would an electron still have were it completely alone in the universe? How would you describe it if it were not travelling through any other structure, interacting with it, creating causal relationships with it? And this is where Pearce’s conjecture comes in: the missing link is consciousness. Consciousness, or strictly, qualia, disclose the intrinsic nature of the physical.

There is also a (maybe) slightly more modest alternative, mathematical structuralism, which says that the universe is a mathematical structure, that all its properties are mathematical, and that as long as this network of mathematical relations is internally consistent there is nothing else to explain. This might sound attractive, but mathematical structuralism falls prey to something called Newman’s problem. Newman observed that any set of objects can be arranged to satisfy a given structure X, provided there are the right number of them.

Think about a game of chess. Whether or not an activity counts as chess is dictated purely by observance of the rules of chess. So if you and your friend survived a plane crash on a mountainside and wanted to play chess to pass the time but had no chessboard or chess pieces to hand, you could just use a slab of rock as the board, assemble a set of pebbles, and label them: this one is the knight, this one the queen, and so on. As long as you had enough makeshift chess pieces and observed the rules, no one would say that this is not chess. The universe is clearly not like this. The structure of the atom is unique to the atom; you cannot just build atoms out of matchsticks and then claim that the result is equivalent to an atom because their structure is identical. Theoretically, you could arrange penguins, or anything, into the shape of a hexagonal steroid molecule and then claim that the penguin-molecule is type-identical to the real thing. Likewise, if you knew everything there was to know about the planet Jupiter and its system of moons, you could construct a computer simulation of it which would be structurally, relationally, and mathematically identical to the real thing, but unless you are a hardcore mathematical structuralist I imagine you would balk at the idea that this is actually a type-identical copy of the Jovian system. If intrinsic natures are not supernatural, then what might they consist in? The only kind of intrinsic nature that we seem to have any access to is that of consciousness.
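Here is Newman’s point as a toy sketch in Python, entirely illustrative: any 32 objects whatsoever can be relabelled to satisfy the “structure” of a chess set, so structure alone cannot be what individuates the real thing.

```python
# Satisfy the structure "chess set" with arbitrary objects, purely by
# assigning roles. The pebbles themselves do no work; the mapping does.
back_rank = ["rook", "knight", "bishop", "queen",
             "king", "bishop", "knight", "rook"]
roles = (["pawn"] * 8 + back_rank) * 2        # 32 roles, both colours

pebbles = [f"pebble_{i}" for i in range(32)]  # or penguins, or anything
chess_set = dict(zip(pebbles, roles))         # structure satisfied by fiat

assert len(chess_set) == 32                   # "the right number of them"
```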

You might wonder how it is that this view is different from the traditional particle-based panpsychism that I mentioned earlier. The difference is: lego-brick panpsychism says that particles are the most fundamental reality, as opposed to the quantum-field-theoretic ontology which says particles are just epiphenomena to more fundamental structures called fields. And, traditional panpsychism is property dualist. It says that matter has two sets of properties: physical properties, which are described by physics, and mental properties, which are (as yet) opaque to physics. What I am saying is that it is all mental properties (hence, idealism), matter and qualia are one and the same, and when physicists study matter, what they are really studying are the causal-relational dynamics of qualia. Another way you could think about it is: traditional panpsychism is ontologically dualist, because it says that matter has two sets of properties. By contrast, my view is epistemologically dualistic: it says there is just one set of properties, just one kind of stuff. You can observe it “from the inside” as a subject of experience, hence consciousness discloses the intrinsic nature of the physical, or you can observe “from the outside,” as a physicist does, in terms of its causal-relational behaviours and dynamics. This integrates consciousness into our scientific picture of reality without the need for any form of strong emergence – new laws of nature that explain consciousness.

So, now that we finally know what consciousness is and what the universe is made of, after 5700 words of dense text, we may turn to some ethical implications. Philosophers fret about theories of ethics, but under physicalistic idealism, with consciousness as the fundamental stuff of nature, there can be little doubt that the pleasure-pain axis is the universe’s in-built metric of value. You do not have to subscribe to any particular ethical theory to understand this. Just recognise that although it is possible to imagine an alien species a billion light-years away with radically different politics, family arrangements, attitudes to education and spending, authoritarianism and libertarianism, or any other series of abstract categories, it is not possible to imagine an alien civilisation with an inverted pleasure-pain axis, for whom agony is beautiful and good and states of supreme bliss and insight are evil. The pleasure-pain axis motivates all other ethical concerns.

Normal waking consciousness encourages us to think about pleasure and pain in terms of transient social phenomena such as having sex, getting married, graduation ceremonies, listening to music, and so on. However, these are merely the extrinsic correlates of intrinsic feelings. Subject a person on the happiest day of his life to opioid antagonists, for example, and the experience will lose all its meaning. Likewise, subject a person undergoing a typical boring day at the office to methamphetamine and he will suddenly be hyper-motivated into a frenzy of impassioned activity, the most mundane tasks suddenly saturated with meaning and significance. This illustrates what is sometimes referred to as the tyranny of the intentional object: the prejudice of attributing moral significance to mind-independent events, when the reality is the opposite. Qualia are at the bedrock of reality, and it is the qualia of the pleasure-pain axis which saturate conscious beings with the sense of meaning and significance. Without it, there is nothing to motivate them, as can be seen from patients with injury-induced anhedonia. This is why a single 70-IQ drug-user has more insight into morality and the ultimate nature of reality than all the Church Fathers in the entire multiverse.

The continuum of qualia-values from pain to pleasure is sometimes referred to as valence (Emilsson), or hedonic tone, and it is the aspect of experience which accounts for its pleasant and unpleasant qualities: from cluster headaches and kidney stones, to the warmth of walking by the seaside on a summer’s day, to sex, to the most sublime heights of the human soul. This axiological hedonism does not claim that only pleasure is valuable, but rather that it is the only thing which has intrinsic value, upon which all other values, attitudes, and preferences supervene. Obviously, self-immolating monks do not self-immolate to increase their pleasure. However, the convictions that lead them to do so cannot but rely, at some point in the causal chain, on the continuum of valence. Try to imagine a person with no pleasure-pain axis, forever stuck at hedonic zero. Such a person would not even be a robot but a vegetable, with nothing to motivate any preferences, or even to resist interference from outside except through mindless muscle reflexes. Viewing happiness (yes, I am treating happiness and pleasure interchangeably) as dispositional is not only trivially true but has many advantages in the realm of practical ethics, since it does not require one to give up any of one’s existing preferences. Indeed, it is as compatible with liberal worldviews as it is with traditionalist ones, since the life experiences that most acutely correlate with high valence are often the ones that traditionalist conservatives place the greatest value on, marriage and reproduction being clear-cut examples. Snorting cocaine is pathetic and trifling by comparison, and in a world of genetically programmed wellbeing, it is questionable what incentive there would be to turn to drugs anyway. Regardless of one’s political and social preferences, we can aim for a hyperthymic society of massively enhanced average wellbeing. In the forthcoming world of designer babies, this is not an idle philosopher’s fantasy, since no parents want depressed, angst-ridden, pain-sensitive children.

As mentioned earlier, the brain is a world-simulation machine, an experience machine if you will, which has developed over aeons of evolutionary pressure, mining the adaptively optimal qualia-varieties of bedrock reality and combining them into its simulations for various information-signalling purposes. Our noses evolved to compute chemical signatures from the environment in terms of what we know as scent qualia, but it is perfectly possible to imagine an alien species for whom the output of the same chemical-sensing apparatus is instead converted into visual qualia. This is not so crazy when you consider that there are people who can already do something like it: synaesthetes who can see sounds and smell colours, etc. Equally, when the intrepid psychonaut trips on dimethyltryptamine or any other psychedelic compound, he is accessing what might be termed extra-evolutionary qualia: the “elements” in the periodic table of consciousness which were never mined by the brain for any adaptive signalling function. The full state-space of consciousness is, in effect, a vast and uncharted aspect of reality, from the hyperbolic geometries of DMT (qualiaspace) to the exotic time-fracturing and looping of Salvia divinorum (qualiatime), so far known only to those stupid enough to actually take it.

But consider the implications of the evolutionary model. We understand the underlying neurophysiology of exotic states, and of the pleasure-pain axis, to some extent: the structural, genetic, and informational properties of the brain which allow it to “channel” our qualia. However, our brains did not evolve to be psychonautic explorers: to chart the complete state-space of consciousness and complete the periodic table of qualia, or to eradicate all experiences below hedonic zero. Consider all the possible combinations of surgically induced, genetically engineered, or cybernetically enhanced experiences, and then consider that we have barely begun to catalogue the total space of simple molecules (drugs), let alone the more complex ones. Then consider that our brains may not even be physically capable of traversing the full state-space: there is no reason to assume they are, since there is nothing in our evolutionary history to suggest that this goal was adaptive. Mapping the state-space of consciousness, let’s face it, is going to take untold billions of years, and creating the perfect psychonaut will require minds far bigger than our own, with goals bigger than mere reproduction.

As technology improves and the human sphere of influence expands into space, if we are ethically serious, we are going to have to get real about how best to make use of the universe. Just as policing our cities becomes easier with the growth of surveillance technology, creating an ethical imperative to use that technology to safeguard our citizens, and just as genetic screening allows us to phase out the horrors of congenital disorders, so we must consider our ethical obligations to safeguard the Cosmos, in light of our understanding of the pleasure-pain axis of value and advances in AI.

We already understand, with reasonable confidence, how human beings could colonise the universe. Simply use 3D-printing technology to create a fleet of self-reproducing solar collectors to envelop the Sun and harvest all of its energy (a large asteroid or small planet is usually recommended as the building material for the collectors). Then, using giant sunlight-powered lasers, accelerate a fleet of micro-probes to 99% of the speed of light and thus expand to all the reachable galaxies, using nanotechnology to construct habitats along the way. The probes may carry human zygotes, around which artificial uteri can be constructed upon reaching each destination, and thus, given enough time, you have a universe-spanning civilisation. However, I think this is ethically inadequate for many reasons. The notion of an infinitely expanding universe filled with semi-sentient scatter-brained apes, ruled over by the oligarchs of neoliberalism devising ever more ingenious ways to exploit and ruin their peoples, strikes me as a pretty horrifying future for reality.
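Some rough numbers, as a hedged sketch in Python: the solar luminosity is a standard figure, but the one-gram probe mass and the assumption of perfectly efficient laser launch are mine, for illustration only.

```python
import math

L_SUN = 3.828e26   # solar luminosity, watts (standard value)
C = 299_792_458.0  # speed of light, m/s

def relativistic_kinetic_energy(mass_kg: float, v_frac_c: float) -> float:
    """Kinetic energy required to bring mass_kg to v_frac_c * c."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

probe_energy = relativistic_kinetic_energy(mass_kg=1e-3, v_frac_c=0.99)
print(f"Energy per 1 g probe at 0.99c: {probe_energy:.2e} J")  # ~5.5e14 J
print(f"Probes per second at full solar output: "
      f"{L_SUN / probe_energy:.1e}")                           # ~7e11
# Idealised, but it shows why a Dyson swarm makes laser-launched
# micro-probes energetically cheap.
```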

Moreover, our current understanding of physics suggests that eternalism is true: the doctrine which holds that all moments of time are equally real and constitute a fourth dimension alongside the spatial dimensions. It just happens that only “the present” is normally accessible to us.

You can jump into a spaceship, travel at a large percentage of the speed of light for a relatively short amount of time within your non-inertial frame of reference, and return to Earth to find that it is the year 3000. Equally, someone in the year 1000 AD could get into a spaceship, do likewise, and return to the Earth of 2021 with only a few years having passed on the spaceship. In other words, it is physically possible, NOT merely logically possible (logically possible merely meaning that it entails no contradiction), for you to interact with people from both the past and the future, with all parties involved convinced that their present moment is “the only real one.” This gets across why the assumption that “only the present exists” (presentism) does not hold. I do not understand how you can possibly make sense of this unless you take the view that all moments in time are of equal ontological status (eternalism). This is one reason I doubt whether antinatalism/extinctionism is truly a fruitful avenue for suffering-focused ethics. Even if all life in the universe disappeared right now, this would cancel all suffering only under the false assumptions of presentism, even setting aside the obvious impracticalities.

There are, as you may have noted, still interesting differences between the past and the future on this view: you can only interact with people from the past if they come to you, whereas to interact with future people, you have to go to them. However, I do not think this poses any problem for the argument. If transtemporal interaction of any sort is possible, eternalism is true.

You might wonder whether this is really just some sci-fi anti-ageing technique. But an anti-ageing technique pertains only to you, not to the rest of the world. Since you only age at one year per year, if regenerative medicine were to grant you indefinite youth it would take you ~979 years to “arrive” in the Earth-year 3000, all else held equal. On the other hand, if you were on a spaceship continuously accelerating at ~13.72 m/s², it would take less than 10 years of shipboard time to “arrive” in Earth-year 3000. It is correct to say that from the point of view of an observer on Earth the spaceship’s time is “slowing down” as its relative velocity increases, but equally correct to say that from the point of view of the person on the spaceship the Earth’s time is “speeding up.” By travelling at relativistic speeds, one can effectively skip chapters in “the Book of Earth.”

You can ask what is going on from a God’s-eye view that has nothing to do with the reference frames of Earth and the spaceship, but that is a meaningless question, resting on a pre-special-relativity notion of the nature of time. Special relativity, as the name implies, means that time is relative: there is no universal time. I used “people,” “spaceship,” and “the Earth” in my example, but you could use anything and the same outcomes would follow as long as the relative velocities are the same. Special relativity rests on the idea that there is not “space and time”; there is spacetime, which means that all travel through space is also travel through time purely in virtue of travelling, albeit time dilation is imperceptible until you reach relativistic speeds. And special relativity provides no reason to privilege certain observational frames of reference over others, so all frames of reference (i.e. all “present moments”) are equally real. It feels intuitively weird, sure, but so does quantum mechanics.
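As a sanity check on those figures, a minimal sketch in Python using the standard relativistic-rocket relation t = (c/a)·sinh(aτ/c). The one-way, constant-acceleration profile is my simplifying assumption; a real itinerary with turnaround and deceleration phases would give somewhat different numbers.

```python
import math

C = 299_792_458.0            # speed of light, m/s
YEAR = 365.25 * 24 * 3600.0  # seconds per Julian year

def shipboard_years(a_ms2: float, earth_years: float) -> float:
    """Proper time aboard a rocket under constant proper acceleration
    a_ms2 while earth_years elapse in the Earth frame.
    Inverts the relation t = (c/a) * sinh(a * tau / c)."""
    t = earth_years * YEAR
    tau = (C / a_ms2) * math.asinh(a_ms2 * t / C)
    return tau / YEAR

# ~979 Earth-years (2021 -> 3000) at the ~13.72 m/s^2 quoted above:
print(f"{shipboard_years(13.72, 979):.1f} years aboard")  # ~5.5 years
```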

Consider the ethical implications of this: all the suffering that has ever existed in the universe is eternal and, unless time travel to the past is possible, no respite is coming. Overcoming this, or finding a store of such immense positive ethical value that it can outweigh all this, is going to require God-building in the only true sense of the phrase and moral projects of religious proportions.

Artificial intelligences using classical digital computing will never be conscious for they do not utilise the aforedescribed quantum dynamics which are necessary for subjects of experience. However, this is not the major concern of AI safety. The problem is not that a super-intelligent AI would become conscious and self-aware and so “turn against us,” but that its intelligence might make it ruthlessly efficient at some project which is to the detriment of sentient life. This is what Elon Musk referred to as summoning the Demon. However, just as much as this is possible, it is also possible to perform benevolent God-building through the project of AI, as long as the AI is programmed with three goals in mind: (1) to fully retro-engineer the human brain, (2) to use this knowledge as a foundation for building beings with access to the total and complete state-space of consciousness, and (3) to eradicate all experience below hedonic zero while preserving life. In short: the solar system, home to perhaps quadrillions of humans going about their lives, surrounded by an ever-outwardly-expanding hedonium shockwave.

Physics suggests that the Catholic concept of transubstantiation is literally possible and, for a hyper-intelligent AI, trivial. The causal chain through which stars or space-dust become conscious minds is of course complex, but we know it is possible, because here we are. The only reason it took as long as it did is that there was no intelligence involved, and no morality except the vagaries of the evolutionary fitness landscape. We do not yet know the true nature of hedonium: what its chemical composition will be, or what the optimal literal size of its minds would be, although we have to assume there would be physical constraints on it, such as the speed of light. However, we know that the future’s quantum computers could, in theory, utilise quantum coherence to produce conscious minds like our own, and better ones.

As we busy ourselves in our remote little corner of the universe with our daily lives, the AI will reach its benevolent hands across the stars and reconstruct the cosmos: transforming the lifeless value-neutral hydrogenous wastes of galactic space to reveal their hearts, converting all matter and energy in the visible universe into computationally optimised sentient experience machines and their batteries – hypersentient megaminds with powers of insight, spirituality, bliss, and calculation that will make humans, and their parochial false gods, look less sentient than rocks. A race of perfect psychonauts, ready to journey across the freshly charted topography of qualiaspace and qualiatime with as much ease and fluidity as we traverse extrinsic space with our arms and legs. Minds that know no suffering. Minds with essentially infinite degrees of freedom and total mastery of the will. Minds directly plugged into Heaven. Minds with effectively infinite lifespans, capable of living off the energy of rotating black holes after the end of the Stelliferous Era all the way to the ultimate heat death of the universe. There is nothing at all supernatural about this vision. All it requires is a source of energy, intelligence, and the ingeniousness of Planck tech (technology which can rearrange matter at the smallest scales).

Normally, technology is what we make with our hands: assembling a PC, for instance. Micro-tech, technology at the scale of a millionth of a metre, would be, for example, the transistors on a standard computer processing chip. Nanotechnology would be technology that can manipulate matter at the scale of a billionth of a metre. Femtotechnology would operate at a quadrillionth of a metre: 10⁻¹⁵ m. Planck tech would be technology that can manipulate matter at the level of literally the smallest entities observable by physics, and with this you could perform true acts of transubstantiation.
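As a quick reference, the ladder of scales in Python; the Planck length is the standard value, and placing “hand tech” at roughly a tenth of a metre is my own rough anchoring.

```python
# Characteristic length scale of each technology tier, in metres.
SCALES_M = {
    "hand tech (assembling a PC)":   1e-1,       # rough, illustrative
    "micro-tech (chip transistors)": 1e-6,
    "nanotech":                      1e-9,
    "femtotech":                     1e-15,
    "Planck tech":                   1.616e-35,  # the Planck length
}

for name, scale in SCALES_M.items():
    print(f"{name:30s} {scale:.3e} m")
```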

The entities at the endpoint of this whole programme that I have described would be gods in the only sense that is meaningful. They would be omnipotent wielders of Planck tech, able to remake the universe in their image. They would be omnipresent, since they would eventually fill the universe, if not the multiverse. They would be omnibenevolent, omnisentient (because they would have a roadmap of the entire state-space of consciousness), and all-knowing, because they would be able to convert, if they wanted to, whole stars, galaxies, superclusters, whatever, into computers to perform whatever computations they were interested in. And they might even be eternal, or at least as long-lived as the universe itself.

I have sometimes said that if there is a God already, the only reason he could have put us here is to see if we can best his creation or overcome some greater evil. And I am now convinced that if there is a God, this is his plan. He wants us to use our biological intelligence to create self-improving machine intelligence. He then wants that machine intelligence to utilise nanotech, femtotech, and ultimately perhaps Planck tech, to achieve feats of transubstantiation, and to transubstantiate the entire metaverse (the complete set of universes) into an infinite pantheon of all-powerful hypersentient gods.

It may even be possible for minds so far beyond ours to find loopholes in the formalisms of physics and travel to parallel realities, if you take the multiverse conjecture seriously, leaving any native civilisations alone but otherwise continuing to perform their ultimate moral calling: the naturalisation of Heaven across all realities.
