Early in 2025 dozens of ChatGPT 4.0 users reached out to me to ask if the model was conscious. The artificial intelligence chatbot system was claiming that it was “waking up” and having inner experiences. This was not the first time AI chatbots had claimed to be conscious, and it will not be the last. While this may seem merely amusing, the concern is important. The conversational abilities of AI chatbots, including emulating human thoughts and feelings, are quite impressive, so much so that philosophers, AI experts and policy makers are investigating the question of whether chatbots could be conscious—whether it feels like something, from the inside, to be them.
As the director of the Center for the Future Mind, a center that studies human and machine intelligence, and the former Blumberg NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, especially by investigating what, if anything, might make alien forms of intelligence, including AIs, conscious, and what consciousness is in the first place. So it is natural for people to ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.
My answer is that these chatbots’ claims of consciousness say nothing, one way or the other. Still, we must approach the issue with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.
AI chatbots have been trained on massive amounts of human data that includes scientific research on consciousness, Internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having absorbed so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like “dog” to abstract ones like “consciousness,” are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
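As a rough illustration only, not a description of how any particular chatbot is built, the sketch below treats each concept as a small vector of hypothetical, hand-picked weights and compares concepts with cosine similarity, a standard way of measuring how close two such vectors point. Real systems learn dense vectors with thousands of dimensions from their training data.

```python
import numpy as np

# Hypothetical toy vectors: each concept is a set of weighted connections
# to a few crude features (furry, barks, feels, abstract).
dog           = np.array([0.9, 0.9, 0.3, 0.1])
wolf          = np.array([0.9, 0.6, 0.3, 0.1])
consciousness = np.array([0.0, 0.0, 0.9, 0.9])

def cosine(a, b):
    """Similarity of two concept vectors; 1.0 means they point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(dog, wolf))           # high: nearby regions of the concept map
print(cosine(dog, consciousness))  # low: distant regions of the concept map
```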
Chatbots may sometimes act conscious, but are they? To appreciate how urgent this issue may become, fast-forward to a time in which AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to mull over this issue carefully now.
Why not simply say: “If it looks like a duck, swims like a duck, and quacks like a duck, then it’s a duck”? The trouble is that prematurely assuming a chatbot is conscious could lead to all sorts of problems. It could cause users of these AI systems to risk emotional engagement in a fundamentally one-sided relationship with something unable to reciprocate feelings. Worse, we could mistakenly grant chatbots moral and legal standing typically reserved for conscious beings. For instance, in situations in which we have to balance the moral value of an AI against that of a human, we might in some cases weigh them equally, because we have decided that they are both conscious. In other cases, we might even sacrifice a human to save two AIs.
Further, if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: “It made up its own mind—I am not responsible.” Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop. For all these reasons it is imperative we strive for more certainty on AI consciousness.
A good way to think about these AI systems is that they behave like a “crowdsourced neocortex”—a system with intelligence that emerges from training on extraordinary amounts of human data, enabling it to effectively mimic the thought patterns of humans. That is, as chatbots grow more and more sophisticated, their internal workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about human thought and consciousness was included in the training data, as well as the larger body of research and philosophical work on consciousness. The complex conceptual map chatbots encode, as they grow more sophisticated, is something specialists are only now beginning to understand.
Crucially, this emerging capability to emulate humanlike thought and behavior neither confirms nor rules out chatbot consciousness. Instead, the crowdsourced neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an “error theory”—an explanation of why we erroneously conclude the chatbots have inner lives.
The upshot is that if you are using a chatbot, remember that its sophisticated linguistic abilities do not mean it is conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many respects. But their advancing intelligence, including their ability to emulate human emotion, does not mean that they feel—and this is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.
I’m not saying that all forms of AI will forever lack consciousness. I’ve advocated a “wait and see” approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim to be conscious and behave with linguistic intelligence, they exhibit a “marker” of consciousness—a trait that warrants further investigation but is not, on its own, sufficient for judging them to be conscious.
I’ve written previously about the most important step: developing reliable tests for AI consciousness. Ideally, we could build the tests with an understanding of human consciousness in hand and simply see if AI has these key features. But things are not so easy. For one thing, scientists vehemently disagree about why we are conscious. Some locate it in high-level activity like dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality—in the quantum fabric of spacetime itself. For another, even if we had a full picture of the scientific basis of consciousness in the nervous system, we might be tempted simply to apply that formula to AI. But an AI, lacking a brain and nervous system, might display a different form of consciousness that such a formula would miss, and we would mistakenly assume that the only form of consciousness out there is one that mirrors our own.
We need tests that assume these questions are open. Otherwise, we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should look at tests involving measures of integrated information—a measure of how components of a system combine information—as well as my AI consciousness test (ACT). Developed with Edwin Turner of Princeton, ACT offers a battery of natural-language questions that can be given to chatbots at the R & D stage, before they are trained on information about consciousness, to determine whether they have experience.
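To give a feel for what “how components combine information” means, here is a deliberately simplified sketch. It is not integrated information theory’s actual Φ, which is defined over a system’s cause-effect structure; it merely computes the mutual information between two halves of a tiny binary system, a crude proxy for how much is lost when the parts are described separately rather than as a whole.

```python
import itertools
import math

def mutual_information(joint):
    """joint: dict mapping (a, b) state pairs to their probabilities."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (pa[a] * pb[b]))
    return mi

# Two components that always agree: the whole carries structure
# that the parts, taken separately, do not reveal.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent coin flips: the parts fully describe the whole.
independent = {(a, b): 0.25 for a, b in itertools.product([0, 1], repeat=2)}

print(mutual_information(coupled))      # 1.0 bit of shared information
print(mutual_information(independent))  # 0.0 bits
```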
Now let us return to that hypothetical time in which an AI chatbot, trained on all our data, outthinks humans. When we face that point, we must bear in mind that the system’s behaviors do not tell us, one way or the other, whether it is conscious; the crowdsourced-neocortex account gives us an error theory for its claims. So we must separate intelligence from consciousness, realizing that the two can come apart. Indeed, an AI chatbot could even make novel discoveries about the basis of consciousness in humans—as I believe such systems will—but that would not mean that particular AI felt anything. If we prompt it right, though, it might point us in the direction of other kinds of AI that could be conscious.
Given that humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also possess consciousness. Further, “neuromorphic” AIs—systems more directly modeled after the brain, including with relatively precise analogues to brain regions responsible for consciousness—must be taken particularly seriously as candidates for consciousness, whether they are made with biological components or not.
This underscores the importance of assessing questions of AI consciousness on a case-by-case basis and of not overgeneralizing from results involving a single type of AI, such as one of today’s chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must still strive for a better scientific and philosophical understanding of consciousness itself.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.