The Real Problem of Consciousness
A philosophical mystery that actually matters. (Still evolving #5)
Humans living today have moral obligations towards future people insofar as we can influence the experiences they have. But much more than this, we’re obligated to care about what kinds of minds will exist in the future, as this determines what kinds of experiences can be had. This line of thought leads us to the problems of consciousness.
The problems of consciousness
Before proceeding let’s clarify a few terms.
Firstly, when I talk about consciousness, I’m using the widely accepted definition from Thomas Nagel’s foundational philosophy paper, What Is It Like to Be a Bat? — to say that something is conscious is to say that there is something that it’s like to be that thing. Equivalently, to say something is conscious is to say that thing has subjective experience. This is a somewhat imprecise definition, but it’s a good operational one that will serve us well for now.
Secondly, the word phenomenology refers to the study of subjective experience from the perspective of the thing doing the experiencing (i.e., from the “first-person” point of view, although it doesn’t require that the experiencer is a “person”). By phenomenological experience, I mean the experience as it is perceived by the subject doing the experiencing.
Jargon done. Let’s continue.
The easy problems of consciousness
The so-called “easy” problems of consciousness are those that can be addressed using conventional tools of cognitive science.
For example, consider the problem of predicting phenomenological experiences based on neural activity (i.e., predicting how someone would describe the experiences they have based on what electrical activity we observe in their nervous system). We might for instance put someone in a brain scanner and monitor electrical activity in the brain, while exposing them to various stimuli, and asking them to describe what they’re feeling. Over time we could correlate neural activity with the experiential states reported, and build up statistically robust, predictive models that take as input neural activity, and output predictions about what the subjects would claim to be experiencing. In principle, these models might become arbitrarily accurate.
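The pipeline described above — record neural activity alongside reported experiences, then fit a model mapping activity to likely reports — can be sketched in miniature. Everything here is illustrative: the "scans" are synthetic vectors, the states and channel counts are invented, and the nearest-centroid model stands in for whatever statistical machinery a real lab would use.

```python
# Toy sketch of the "easy problem" pipeline: correlate (synthetic) neural
# activity with reported experiences, then predict the report for unseen
# activity. The data and model are illustrative stand-ins, not neuroscience.
import random

random.seed(0)

def simulate_scan(state, n_channels=8):
    """Fake 'neural activity': channel readings around a state-specific baseline."""
    baseline = {"calm": 0.2, "fear": 0.8}[state]
    return [baseline + random.gauss(0, 0.1) for _ in range(n_channels)]

# Training phase: record activity while subjects report what they're feeling.
recordings = [(simulate_scan(s), s) for s in ["calm", "fear"] * 50]

# "Model": the mean activity vector (centroid) per reported state.
centroids = {}
for state in {"calm", "fear"}:
    scans = [scan for scan, s in recordings if s == state]
    centroids[state] = [sum(channel) / len(scans) for channel in zip(*scans)]

def predict_report(scan):
    """Predict what the subject would claim to be experiencing."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda s: dist(scan, centroids[s]))

print(predict_report(simulate_scan("fear")))  # prints "fear"
```

In principle, with enough data and richer models, predictions like this could become arbitrarily accurate — and yet, as the next section argues, that accuracy alone would tell us nothing about why any of the activity is accompanied by experience at all.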
Problems of this type may be difficult, but they are nevertheless termed “easy” to differentiate them from the so-called “hard” problem of consciousness.
The hard problem of consciousness
The hard problem of consciousness is usually defined as the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why the lights are on; why there is “something it is like” to be a subject of conscious experience; why there should be any conscious experience at all.
To be able to predict what phenomenological experiences will be generated from a certain pattern of neural activity would be to solve an easy problem, but to satisfactorily explain why that (or any) pattern gives rise to any conscious experience in the first place would be to solve the hard problem.
Gripes with the hard problem
I have philosophical gripes with how much attention the hard problem gets in the consciousness literature. It’s certainly interesting to ponder, but at the end of the day I think it’s an ill-posed and mostly unhelpful framing of an important problem.
As currently stated, it’s unclear what a solution to the hard problem could even look like. How would we know that the hard problem had been solved? Would it just be a collective agreement that we have good intuition for how consciousness arises? That our explanation feels like it makes sense? If that’s the case, then it’s not “hard” in a qualitatively different sense from other questions in science and philosophy.
Consider by analogy the “hard problem of gravity” (my made up terminology):
You: “Why do apples fall from trees?”
Heraclitus (~500 BCE): “Because of the logos — the natural law which keeps all things in harmony.”
You: “But why…”
Aristotle (4th century BCE): “Because apples are made of the element ‘earth’, which has a tendency to move downwards.”
… ~2000 years later …
You: “But why…”
Newton (17th century): “Because of the gravitational attraction of Earth, and here’s my Universal Law of Gravitation that explains it.”
You: “But why…”
Einstein (early 20th century): “Because things with mass curve spacetime, and here’s my Theory of General Relativity that explains it.”
You: “But why…”
Quantum physics pioneers (mid 20th century): “Because of the exchange of virtual particles called gravitons… but stand by, we don’t have the full theory for this yet…”
You: “But why…”
String theory pioneers (late 20th century): “Hmm… maybe because of the interaction of stringy things in very low-energy vibrational states… the maths certainly looks interesting… but the physics, well, we’re really not sure at this stage…”
And so it goes.
How is the “hard problem of consciousness” qualitatively “harder” than the “hard problem of gravity”? (It isn’t.)
You: “But in the above example we did come to a much better understanding of gravity, and now it does feel like it makes intuitive sense why apples fall from trees! So talking about the ‘hard problem’ is helpful, isn’t it?”
Me: “Don’t be misled by my made-up analogy. I made it up! There never was a ‘hard problem’ of gravity, and the progress we’ve made in understanding gravity was made without ever needing that framing.”
My recommendation is that we put the hard problem aside for now, and focus efforts on developing a theory that can explain, predict, and provide means of controlling experiences. Which leads us to our next problem — the real problem of consciousness.
The real problem of consciousness
Neuroscientist Anil Seth defines a new problem, which he terms the real problem of consciousness, as follows:
The real problem is to explain, predict, and control the phenomenological properties of conscious experience.
While the real problem of consciousness is technically an “easy problem”, it’s philosophically different in that it focuses on phenomenology (the nature of experience from the first-person perspective) rather than function or behaviour, and it demands that the theory be explanatory (rather than merely predictive).
The word “explain” here is loosely defined, but that’s alright — it’s no different from how we use the word in other areas of science and philosophy. It basically means that the theory should provide a concise mechanistic description of why certain brain states lead to certain conscious experiences, not just that they do. This is similar to how the theories of gravity mentioned above all include descriptions of underlying mechanisms of how gravity works (in the form of equations).
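To make the gravity comparison concrete: Newton’s Universal Law of Gravitation is exactly this kind of mechanistic description — a compact statement of how the force between two bodies depends on their masses and separation, not merely an assertion that apples fall:

```latex
F = G \frac{m_1 m_2}{r^2}
```

Here $F$ is the attractive force, $m_1$ and $m_2$ the two masses, $r$ the distance between them, and $G$ the gravitational constant. A theory solving the real problem would aim for analogous compactness: a mechanism linking brain states to experiences, not just a lookup table of correlations.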
As Anil Seth correctly puts it,
“When a complex phenomenon is incompletely understood, a prematurely precise definition can be constraining and misleading.”
As an aside — I had the pleasure of interviewing Anil Seth on the Paradigm podcast several months after writing this series. Our interview is available here: Anil Seth: Perception, illusion, hallucination, dream machines.
A well-developed theory solving the real problem of consciousness would enable answers to a wide range of questions, such as:
Explanation: Explain what range of experiences are and are not possible for a given neural architecture:
eg: “Based on the structure of Hal’s brain, we know that brain activity pattern X cannot occur, and so we’re confident that he cannot experience conscious states Y or Z.”
Prediction: Predict what kind of experiences a person would claim to have experienced, based on activity in their nervous system, including in cases where the specific pattern of neural activity had not yet been directly observed:
eg: “While we’ve not seen this specific brain pattern before, based on our theory of consciousness we’re confident that Hal is experiencing something like awe and elation.”
Control: Provide a framework for controlling experiences precisely by manipulating underlying mechanisms:
eg: “If we tweak electrical activity in Hal’s brain a specific way during his surgery, we’re confident that he’ll remain alert and responsive but will not feel any fear or pain.”
We need to focus on the real problem of consciousness
The real problem is where consciousness research should be allocating most effort.
As I’ve stated several times in this series — we are at the beginning of our evolutionary journey. There will never again be as many future people as there are today. We have ethical obligations concerning what kinds of conscious experiences are had in the future, and therefore concerning what kinds of minds exist to have those experiences — whether these minds are made of meat, silicon, or anything else. And so, it’s critical that we understand the relationship between mind and phenomenology.
We want the future to be full of minds that flourish, not suffer.
The real problem is by no means the end of the story. Indeed, it will likely never provide a final answer to questions about consciousness because, for any question it does answer, one could always ask “But why is that?” However, this is no different from any good theory in science or philosophy — an inquiring mind always has another “why” up its sleeve. My point is that the real problem is sufficiently well-defined and actionable so as to be the right angle of attack.
We must solve the real problem. The future depends on it.
Matt Geleta, February 2023