AAAI Symposium on Machine Consciousness
This week I was able to attend the AAAI Spring Symposium on Machine Consciousness, an interdisciplinary event by CIMC bringing together some of the field's foremost thinkers, such as Joscha Bach, Ryota Kanai, and Robert Long. Topics spanned the philosophy and ethics of consciousness, what it means to ascribe consciousness to a machine (or to anything, for that matter), whether consciousness presupposes moral status, and approaches to and challenges of implementation.
This post summarizes some of the key ideas that stood out to me over the 2.5-day symposium:
- The Central Question: What Makes Something Conscious?
- Learning and Coherence
- What about the Hard Problem?
- Ethics, Moral Status and Governance
The Central Question: What Makes Something Conscious?
There's an elephant in the room when it comes to consciousness science, and the symposium did not try to ignore it. We don't have a formal definition of consciousness, and even if we did, we lack the tools to measure it precisely. Either we expand the scientific method to account for first-person reports (the best evidence of phenomenal consciousness we have today), or we shift our focus to measuring against an operational definition. Joscha Bach explicitly called this out in his keynote: conscious beings express X, Y, and Z, so if a system also expresses those things, then that system is most likely conscious.
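To make the operational framing concrete, here's a minimal toy sketch in Python of what an indicator-based assessment might look like. The indicator names, weights, and threshold are all my own hypothetical placeholders, standing in for whatever "X, Y, and Z" a real framework would specify; nothing here was proposed at the symposium.

```python
# Toy sketch of an operational (indicator-based) assessment.
# All indicators and weights below are hypothetical illustrations.

INDICATORS = {
    "reports_internal_states": 0.3,   # produces self-reports about its own processing
    "integrates_modalities":   0.25,  # combines information across input channels
    "adapts_from_feedback":    0.25,  # updates behavior based on new experience
    "maintains_self_model":    0.2,   # keeps a persistent model of itself over time
}

def consciousness_score(observations: dict[str, bool]) -> float:
    """Weighted sum of the indicators observed in a system, in [0, 1]."""
    return sum(w for name, w in INDICATORS.items() if observations.get(name, False))

observed = {
    "reports_internal_states": True,
    "integrates_modalities": True,
    "adapts_from_feedback": False,
    "maintains_self_model": False,
}

print(f"score = {consciousness_score(observed):.2f}")  # 0.55
```

The point of the sketch is the shape of the inference, not the numbers: an operational definition turns "is it conscious?" into "how many of the agreed-upon indicators does it express?", which yields evidence and confidence rather than proof.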
We're also more likely to ascribe consciousness to systems we can relate to, a tendency known as anthropomorphism bias. If a system uses language that we understand, or is embodied in a form that looks human, we're more likely to think it might be conscious. Robert Long addressed this directly in his talk.
This is the fundamental challenge the symposium kept circling back to. Without a rigorous definition, we rely on heuristics and observed behaviors. Without formal methods to measure consciousness, and without a unified theory, there's no consensus on what we're looking for or how we should be looking. But that doesn't mean the field is stuck. Several talks pointed toward concrete threads that might actually move things forward.
Learning and Coherence
A common thread across talks was the idea that learning and coherence play a significant role in consciousness. Whether that role is causal or merely correlational we can't say, but most attendees would agree there's some interaction among the three: learning, coherence, and consciousness.
Learning refers to an agent's ability to adapt to and synthesize new information. The strong claim came up at the symposium: that consciousness simply is learning over time, with no functional difference between the two. While I don't personally buy into this, the weaker version, consciousness as a kind of learning algorithm, seems more plausible to me.
Closely tied to this is temporal coherence: the ability to maintain a sense of self and identity over time. While continuous learning and temporal coherence are distinct concepts, they are necessarily linked. The ability to learn continuously is useless without some sense of agency that persists over time to both motivate and integrate the learning.
In my opinion, these may be the two missing pieces keeping consciousness-like behaviors from emerging in modern AI systems. If consciousness was selected for by evolutionary pressure because it improves an organism's ability to learn and synthesize information, then today's AI systems, which unlike organisms are typically trained once rather than learning continuously, have no current need for consciousness. However, as we build toward agents with the capacity for temporal coherence and continuous learning, consciousness might come along for the ride.
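To illustrate how these two pieces differ and fit together, here's a minimal sketch of an agent loop that pairs online learning with a persistent state carried across steps. The class, the update rule, and the decay constants are all illustrative assumptions of mine, not anything proposed at the symposium.

```python
# Minimal sketch: continuous learning plus temporal coherence.
# All names and update rules are illustrative assumptions.

class CoherentAgent:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.estimate = 0.0     # what the agent has learned about its world
        self.self_state = 0.0   # persistent "identity" carried across steps

    def step(self, observation: float) -> None:
        # Continuous learning: an online update from each new observation,
        # rather than a one-time offline training phase.
        error = observation - self.estimate
        self.estimate += self.learning_rate * error

        # Temporal coherence: fold each experience into a state that
        # persists over time, so learning is integrated against a
        # continuous history instead of evaluated in isolation.
        self.self_state = 0.9 * self.self_state + 0.1 * self.estimate

agent = CoherentAgent()
for obs in [1.0, 1.2, 0.9, 1.1]:
    agent.step(obs)
print(agent.estimate, agent.self_state)
```

The design choice worth noticing is that neither mechanism is useful alone: the online update without the persistent state is just stateless curve-fitting, and the persistent state without the update never incorporates anything new.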
What about the Hard Problem?
Of course, no gathering on consciousness is complete without some discussion of the hard problem. At the symposium we talked about what it is, whether it's a real problem at all, and, most importantly, whether it even matters for machine consciousness. I won't get into the details of the hard problem here; it was discussed in my previous post, and by countless others in this space.
The question I find more interesting is whether the hard problem even matters. If we have no way to measure what the hard problem is pointing at (and we don't, as I mentioned in the first section), then debating whether it's a real problem doesn't contribute meaningfully toward assessing or building a conscious machine. The hard problem may be philosophically important, but within the context of machine consciousness, I think it's a distraction.
Ethics, Moral Status and Governance
On the final day of the symposium, we turned our attention to questions of ethics, moral status, and governance of conscious machines. If machines are conscious, what should we do about it? Would consciousness imply moral consideration? Seemingly not: there are many animals that humanity would collectively agree are conscious yet are not granted any significant moral status or protections. In reality, moral status seems to track relatability and perceived agency more than consciousness itself. This is anthropomorphism bias at work again, and it raises an uncomfortable question: if we can't reliably grant moral status to biological creatures we know are conscious, what hope do we have of getting it right for machines?
And what about governance? Should there be an independent governing body to regulate the development of conscious machines and the moral consideration they're owed? What would such an agency look like, and how much power should it have, if any at all?
The ethical questions are, in many ways, the hardest to reach consensus on. They shine a light on the challenges in both philosophy and implementation, and whatever solutions we arrive at will have to withstand humanity's ethical scrutiny. I have no answers to these questions, and will continue to sit with them for some time to come.