How to tell if your AI has consciousness


Have you ever talked to someone who is “in the know”? How did that conversation go? Did they make vague gestures in the air with both hands? Did they refer to the Tao Te Ching or Jean-Paul Sartre? Did they suggest that, really, there is nothing scientists can be certain of, and that reality is only as real as we make it out to be?

The vagueness of consciousness, its obscurity, has made its study in the natural sciences unwieldy. Until recently, the project was largely left to philosophers, who were usually only marginally better than others at articulating the aims of their study. Hod Lipson, a roboticist at Columbia University, said some in his field refer to consciousness as the “C-word”. “The idea was that you couldn’t study consciousness until you had tenure,” said Grace Lindsay, a neuroscientist at New York University.

Yet a few weeks ago, a group of philosophers, neuroscientists and computer scientists, Dr. Lindsay among them, proposed a rubric for determining whether an AI system like ChatGPT could be considered conscious. The report surveys what Dr. Lindsay calls the “new” science of consciousness, bringing together elements from half a dozen nascent empirical theories and proposing a list of measurable qualities that might indicate the presence of some form of consciousness in machines.

For example, recurrent processing theory focuses on the differences between conscious perception (for example, actively studying an apple in front of you) and unconscious perception (such as your sense of an apple flying toward your face). Neuroscientists have argued that when electrical signals travel from the nerves in our eyes to the primary visual cortex and then on to deeper parts of the brain, like a baton passing from one cluster of nerves to another, we perceive things unconsciously. These perceptions seem to become conscious when the baton is passed back, from the deeper parts of the brain to the primary visual cortex, creating a loop of activity.
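The baton-passing picture can be sketched as a toy recurrent network. This is a hypothetical illustration, not anything from the report: the layer names, sizes and weights are invented, a single forward sweep stands in for unconscious perception, and the feedback loop from the deeper layer back to “V1” stands in for the conscious one.

```python
import math
import random

random.seed(0)

def rand_matrix(n):
    # small random connection weights for the toy network
    return [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]

def matvec(W, x):
    # matrix-vector product: one baton pass between layers
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def tanh_vec(x):
    return [math.tanh(v) for v in x]

n = 4
W_v1, W_deep, W_back = rand_matrix(n), rand_matrix(n), rand_matrix(n)
stimulus = [1.0] * n  # the "apple" arriving at the retina

# Feedforward-only sweep (the theory's stand-in for unconscious perception):
# eyes -> V1 -> deeper area, with the baton passed forward once.
v1 = tanh_vec(matvec(W_v1, stimulus))
deep = tanh_vec(matvec(W_deep, v1))
v1_feedforward = list(v1)

# Recurrent sweep (stand-in for conscious perception): the deeper area
# passes the baton back to V1, creating a loop that settles over a few steps.
for _ in range(5):
    v1 = tanh_vec([f + b for f, b in
                   zip(matvec(W_v1, stimulus), matvec(W_back, deep))])
    deep = tanh_vec(matvec(W_deep, v1))
```

The feedback term changes the V1 activity, so the looped state ends up different from the purely feedforward one; in the theory, it is that looped activity, not the forward sweep, that accompanies conscious perception.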

Another theory describes specialized areas of the brain that are used for particular tasks: the part of your brain that can balance your top-heavy body on a pogo stick is different from the part that can take in an expansive landscape. We are able to put all this information together (you can bounce on a pogo stick while admiring a great view), but only to a certain extent (it’s hard to do). So neuroscientists have postulated the existence of a “global workspace” that allows us to control and coordinate what we pay attention to, what we remember, even what we perceive. Our consciousness may emerge from this integrated, shifting field of activity.
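The workspace idea can be caricatured in a few lines: specialist modules compete for access, the most salient one wins, and its content is broadcast back to all the others. The module names and salience scores below are invented for illustration and are not from the report.

```python
# Hypothetical salience scores for a few specialist brain "modules".
modules = {
    "pogo_balance": 0.3,   # keeping the top-heavy body upright
    "scenery": 0.8,        # taking in the wide landscape
    "memory": 0.5,         # recalling the route home
}

# Competition for access: only the most salient content enters the
# workspace, which is why attending to everything at once is hard.
winner = max(modules, key=modules.get)

# Broadcast: the winning content is shared back to every module, giving
# the whole system one coordinated focus of attention.
broadcast = {name: winner for name in modules}
```

In this sketch the view wins out over balance and memory, so all three modules end up coordinated around the same content, which is the workspace theory's rough picture of what a moment of attention looks like.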

But consciousness could also arise from the ability to be aware of our own awareness, to create virtual models of the world, to anticipate future experiences, or to locate our bodies in space. The report argues that any one of these characteristics could, potentially, be an essential part of what it means to be conscious. And if we can recognize these characteristics in machines, then we might be able to consider the machines conscious.

One problem with this approach is that the most advanced AI systems are deep neural networks that “learn” how to do things on their own, in ways that humans can’t always interpret. We can glean some information from their internal structure, but only in limited ways, at least for the moment. This is the black box problem of AI: even if we had a complete and accurate rubric of consciousness, it would be difficult to apply it to the machines we use every day.

And the authors of the recent report are quick to note that theirs is not a definitive list of what makes one conscious. They rely on an account of “computational functionalism,” according to which consciousness is reduced to pieces of information passing back and forth within a system, like in a pinball machine. In principle, on this view, a pinball machine could be conscious if it were made much more complex. (That might mean it’s no longer a pinball machine; let’s cross that bridge if we come to it.) But others have proposed theories that take our biological or physical features, or our social and cultural contexts, as essential pieces of consciousness. It’s hard to see how these things could be coded into a machine.

And even for researchers who are largely on board with computational functionalism, no existing theory seems sufficient to account for consciousness.

“For any of the report’s conclusions to be meaningful, the theories have to be correct,” Dr. Lindsay said. “Which they’re not.” That may simply be the best we can do for now, she added.

Finally, do any of these features, or all of them combined, comprise what William James described as the “warmth” of conscious experience? Or, in Thomas Nagel’s words, “what it is like” to be you? There is a gap between the ways we can measure subjective experience with science and subjective experience itself. This is what David Chalmers has labeled the “hard problem” of consciousness. Even if an AI system has recurrent processing, a global workspace and a sense of its physical location, what if it still lacks the thing that makes it feel like something?

When I brought this emptiness up with Robert Long, a philosopher at the Center for AI Safety who worked on the report, he said, “That kind of feeling tends to arise whenever you try to scientifically explain, or reduce to physical processes, some high-level concept.”

The stakes are high, he added; advances in AI and machine learning are coming faster than our ability to explain what’s going on. In 2022, Blake Lemoine, an engineer at Google, argued that the company’s LaMDA chatbot was conscious (though most experts disagreed), and the further integration of generative AI into our lives means the topic may only become more contentious. Dr. Long argues that we have to start making some claims about what consciousness might be, and he laments the “vague and sensationalist” way we have gone about it so far, often conflating subjective experience with general intelligence or rationality. “This is an issue we face now, and over the next few years,” he said.

Megan Peters, a neuroscientist at the University of California, Irvine and author of the report, said, “Whether or not someone is there makes a big difference in how we treat them.”

We already do this kind of research with animals: it takes careful study to make even the most basic claim that other species have experiences similar to our own, or even comprehensible to us. This can resemble a fun-house activity, like shooting empirical arrows from a moving platform at shape-shifting targets, with bows that occasionally turn out to be spaghetti. But sometimes we hit the mark. As Peter Godfrey-Smith writes in his book “Metazoa”, cephalopods probably have a robust but categorically different kind of subjective experience from that of humans. Each arm of an octopus contains some 40 million neurons. What is that like?

To solve this problem of other minds, we rely on a series of observations, inferences and experiments, both systematic and not. We talk, touch, play, hypothesize, prod, compel, X-ray and dissect. But, in the end, we still don’t know what anyone else perceives. We only know that we do.
