Will AI Become Conscious? This Is How Researchers Will Identify Consciousness in AI

Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious.”

Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were?


To answer this, a group of 19 neuroscientists, philosophers, and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.


The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated.”


Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds.


Nature reached out to two of the major technology firms involved in advancing AI — Microsoft and Google. A spokesperson for Microsoft said that the company’s development of AI is centered on assisting human productivity in a responsible way, rather than replicating human intelligence. What’s clear since the introduction of GPT-4 — the most advanced version of ChatGPT released publicly — “is that new methodologies are required to assess the capabilities of these AI models as we explore how to achieve the full potential of AI to benefit society as a whole,” the spokesperson said. Google did not respond.

What is consciousness?

One of the challenges in studying consciousness in AI is defining what it means to be conscious. Peters says that for the purposes of the report, the researchers focused on ‘phenomenal consciousness’, otherwise known as subjective experience. This is the experience of being — what it’s like to be a person, an animal, or an AI system (if one of them does turn out to be conscious).


There are many neuroscience-based theories that describe the biological basis of consciousness. But there is no consensus on which is the ‘right’ one. To create their framework, the authors therefore used a range of these theories. The idea is that if an AI system functions in a way that matches aspects of many of these theories, then there is a greater likelihood that it is conscious.
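To see how such a multi-theory tally might work in practice, here is a minimal sketch in Python. It is an illustration only: the theory names echo ones discussed in the report, but the indicator wordings and the `consciousness_profile` helper are hypothetical placeholders, not the authors’ actual rubric.

```python
# Hypothetical sketch of aggregating consciousness indicators drawn
# from several theories. Indicator wordings are placeholders; the
# real report's criteria are far more detailed.

INDICATORS = {
    "global workspace theory": [
        "parallel specialized modules",
        "limited-capacity workspace that broadcasts to modules",
    ],
    "recurrent processing theory": [
        "recurrent rather than purely feedforward signal flow",
    ],
    "higher-order theories": [
        "monitoring of its own internal states",
    ],
}

def consciousness_profile(system_properties: set[str]) -> dict[str, float]:
    """For each theory, return the fraction of its indicators the system shows."""
    return {
        theory: sum(ind in system_properties for ind in inds) / len(inds)
        for theory, inds in INDICATORS.items()
    }

# A hypothetical system that exhibits only one of the listed properties.
profile = consciousness_profile({"parallel specialized modules"})
print(profile)
# {'global workspace theory': 0.5, 'recurrent processing theory': 0.0,
#  'higher-order theories': 0.0}
```

On the authors’ logic, higher fractions across more theories would make a system a stronger candidate for consciousness; no single indicator is meant to be decisive.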


They argue that this is a better approach for assessing consciousness than simply putting a system through a behavioral test — say, asking ChatGPT whether it is conscious, or challenging it and seeing how it responds. That’s because AI systems have become remarkably good at mimicking humans.


A theory-heavy approach

To develop their criteria, the authors assumed that consciousness relates to how systems process information, irrespective of what they are made of — be it neurons, computer chips, or something else. This approach is called computational functionalism. They also assumed that neuroscience-based theories of consciousness, which are studied through brain scans and other techniques in humans and animals, can be applied to AI.


On the basis of these assumptions, the team selected six of these theories and extracted from them a list of consciousness indicators. One of them — the global workspace theory — asserts, for example, that humans and other animals use many specialized systems, also called modules, to perform cognitive tasks such as seeing and hearing. These modules work independently but in parallel and share information by integrating into a single system. A person would evaluate whether a particular AI system displays an indicator derived from this theory, Long says, “by looking at the architecture of the system and how the information flows through it.”
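As a rough illustration of that information-flow pattern, consider the following Python caricature, in which independent modules process the same input in parallel and a limited-capacity workspace broadcasts one result back to all of them. Everything here, from the module names to the winner-takes-the-workspace rule, is an assumed simplification rather than anything specified in the report.

```python
# Assumed, highly simplified caricature of the global-workspace
# pattern: parallel modules, a capacity-limited workspace, broadcast.

class Module:
    def __init__(self, name: str):
        self.name = name
        self.broadcast = None  # last content received from the workspace

    def process(self, stimulus: str) -> str:
        return f"{self.name}({stimulus})"  # stand-in for real computation

class GlobalWorkspace:
    def __init__(self, modules: list):
        self.modules = modules

    def step(self, stimulus: str) -> str:
        # Modules work independently, in parallel, on the same input...
        outputs = [m.process(stimulus) for m in self.modules]
        # ...one output claims the limited-capacity workspace
        # (here, trivially, the first one)...
        winner = outputs[0]
        # ...and is broadcast back, integrating the modules into one system.
        for m in self.modules:
            m.broadcast = winner
        return winner

workspace = GlobalWorkspace([Module("vision"), Module("hearing")])
print(workspace.step("doorbell"))  # vision(doorbell)
```

An assessor applying the corresponding indicator would, in effect, ask whether a real system’s architecture shows this pattern: parallel specialized processors feeding a capacity-limited stage whose contents are broadcast back to the rest of the system.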


Anil Seth, a consciousness researcher at the University of Sussex, UK, is impressed with the transparency of the team’s proposal. “It’s very thoughtful, it’s not bombastic, and it makes its assumptions really clear,” he says. “I disagree with some of the assumptions, but that’s totally fine because I might well be wrong.”


The authors say that the paper is far from a final take on how to assess AI systems for consciousness, and that they want other researchers to help refine their methodology. But it’s already possible to apply the criteria to existing AI systems. The report evaluates, for example, large language models such as ChatGPT and finds that this type of system arguably has some of the indicators of consciousness associated with global workspace theory. Ultimately, however, the work does not suggest that any existing AI system is a strong candidate for consciousness — at least not yet.

