In June 2022, Google engineer Blake Lemoine made headlines by claiming the company’s LaMDA chatbot had achieved sentience. The software had the conversational abilities of a seven-year-old child, Lemoine said, and we should assume it has a similar awareness of the world.
LaMDA, later released to the public as Bard, is powered by a “large language model” (LLM), the same kind of system behind OpenAI’s ChatGPT bot. Other big tech companies are racing to deploy similar technology.
Millions of people now have the opportunity to play with LLMs, but few seem to believe they are conscious. Instead, in the poetic phrase of linguist and data scientist Emily Bender, they are “stochastic parrots”, chattering convincingly without understanding. But what about the next generation of artificial intelligence (AI) systems, and the one after that?
Our team of philosophers, neuroscientists and computer scientists looked to current scientific theories of how human consciousness works to draw up a list of basic computational properties that any hypothetically conscious system would likely need to possess. In our view, no current system comes anywhere near the bar for consciousness – but at the same time, there is no obvious reason future systems could not become genuinely conscious.
Looking for indicators
Since computing pioneer Alan Turing proposed his “imitation game” in the 1950s, the ability to successfully impersonate a human in conversation has often been taken as a reliable marker of consciousness – usually on the assumption that the task is so difficult it must require consciousness.
However, as with the 1997 defeat of chess grandmaster Garry Kasparov by the computer Deep Blue, the conversational fluency of LLMs may simply move the goalposts. Is there a principled way to approach the question of AI consciousness that does not rely on our intuitions about what is difficult or special about human cognition?
Read more: A Google software engineer believes AI has become sentient. How do we know if he is right?
Our recent white paper aims to do exactly that. We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are necessarily conscious, but the more indicators a system has, the more seriously we should take claims of AI consciousness.
Computational processes behind consciousness
What kind of indicators were we looking for? We avoided overt behavioural criteria – such as being able to hold a conversation with people – because these are both human-centric and easy to fake.
Instead, we looked at theories of the computational processes that support consciousness in the human brain. These can tell us about the kind of information processing required to support subjective experience.
“Global workspace theories”, for example, posit that consciousness arises from the presence of a capacity-limited bottleneck that integrates information from across the brain and selects which information to make globally available. “Recurrent processing theories” emphasise the role of feedback from later stages of processing to earlier ones.
Each theory in turn suggests more specific indicators. Our final list contains 14 indicators, each focusing on an aspect of how systems work rather than how they behave.
No reason to think current systems are conscious
How does current technology stack up? Our analysis suggests that there is no reason to assume that current AI systems are conscious.
Some do fulfil certain indicators. Systems using the transformer architecture, the kind of machine-learning model behind ChatGPT and similar tools, meet three of the “global workspace” indicators, but lack the crucial capability for global rebroadcast. They also fail to satisfy most of the other indicators.
So despite ChatGPT’s impressive conversational abilities, there is probably nobody home inside. Other architectures similarly meet at best a handful of the criteria.
Read more: Not everything we call AI is actually ‘artificial intelligence’. Here’s what you need to know
Most current architectures only meet a few of the indicators. However, for most of the indicators, there is at least one current architecture that fulfils it.
This suggests there are no obvious, in-principle technical barriers to building AI systems that satisfy most or all of the indicators.
It is probably only a matter of time before such a system is built. Of course, many questions will still remain when that happens.
Beyond human consciousness
The scientific theories we canvassed (and the authors of the paper!) don’t always agree with one another. We used a list of indicators rather than strict criteria to acknowledge that fact. This can be a powerful approach in the face of scientific uncertainty.
We were inspired by similar debates about animal consciousness. Most of us think at least some nonhuman animals are conscious, yet they cannot tell us what they are feeling.
For example, a 2021 report from the London School of Economics arguing that cephalopods such as octopuses likely feel pain was instrumental in changing UK animal ethics policy. A focus on structural features has the surprising consequence that even some simple animals, such as insects, may possess a minimal form of consciousness.
Our report does not make recommendations about what to do with conscious AI. But as AI systems inevitably become more powerful and more widely deployed, this question will grow more pressing.
Our indicators will not be the last word – but we hope they will be a first step towards tackling this difficult question in a scientifically grounded way.