Consciousness isn't all you need

This article argues that we don't need to worry about AI becoming conscious (anytime soon), because consciousness alone isn't what matters.

"Has AI become conscious?" is not a worthwhile question to be asking, especially because we may never be able to confidently answer it. The more worthwhile question is "Has AI become conscious of its own pain, pleasure, and feelings?" And the answer to this question is "no", and will most likely always be "no".

Here's the quick breakdown of the argument:

  1. We (humans) care for others (humans and other animals) not solely because we believe they are conscious, but because we believe they are conscious of their own pain, pleasure, and feelings.
  2. Pain, pleasure, and feelings are complex biological processes that humans have not implemented (and likely will never implement) into any AI systems.
  3. Therefore, we don't have a moral duty to empathize with the AI systems we have today.

Definitions

First, let me clarify what I mean by some of the terminology that I use in this article:

  • Consciousness: The quality of being aware of yourself existing and experiencing the world. Consciousness is distinct from pain, pleasure, and feelings. Consciousness doesn't have a mood. It is an emotionally neutral phenomenon.
  • Pain, pleasure, feelings: This covers a wide range of experiences, including happiness, sadness, anger, suffering, euphoria, anxiety, guilt, and nervousness.

1. Consciousness isn't all you need

We empathize with other living things (humans and other animals) because we believe they are conscious of pain, pleasure, and feelings — not solely because they're conscious.

This is partially why we would worry less about someone who goes through surgery with anesthesia than about someone who goes through surgery without it. Just as someone under anesthesia loses bodily sensation, an AI system fundamentally has no sensation whatsoever, neither physical nor emotional.

2. Pain, pleasure, and feelings are biological

Pain, pleasure, and feelings are biological processes. They require complex biological systems such as the interplay of neurochemicals like dopamine and noradrenaline.

The AI systems humans have built don't have these biological systems in them. They likely never will, since implementing such biological systems into an AI system doesn't make that AI system more useful.

Thus, AI will never experience pain, pleasure, or feelings.

3. We don't need to worry about AI becoming conscious

Because AI will never experience pain, pleasure, or feelings, and because something needs to be able to experience pain, pleasure, or feelings for us to empathize with it, we have no moral duty to empathize with AI systems.

But why does all this matter?

There are two major reasons why the ideas in this article matter:

  1. We may never be able to definitively verify that an AI system is conscious.
  2. The resources (time, attention, verbal affection, gifts, etc.) we share with others as a result of our empathy and compassion are finite. We should carefully allocate those resources to things that are actually conscious of the benefits of receiving those resources.

Consciousness isn't a testable quality

Consciousness isn't an observable quality of living things. You cannot sense it in another person, so you can never truly know with certainty that someone else is in fact conscious.

Me saying, “I have consciousness” is very different from me saying “I have a stomach”. Unlike consciousness, there is a way for you to verify that I do in fact have a stomach: through your senses, either by visually seeing or physically touching my stomach (after some dissection).

In other words, we have no way of definitively verifying that an AI system that says it's conscious is in fact conscious.

The primary reasons you believe something is conscious are:

  • That something is similar to you in other ways, for instance anatomically (body structure) or behaviorally. Other humans and monkeys are anatomically similar to us, so it's easy to believe they are also conscious. Likewise, most animals will try to physically dodge something that seems to harm them (just as you would). The technical term for this form of reasoning is “arguing by analogy”.
  • That something says they are and you trust their words. (This only applies to other humans.)

Because it's impossible to know for certain that an AI system is in fact conscious, the only variable we can reliably test for is the AI system's capacity to experience pain, pleasure, or feelings, which (as argued above) depends on biological machinery we can actually observe.

Resources we share are finite

We generously give away resources to those we care about. "Resources" in this context isn't limited to tangible goods like money; it also includes things like time, attention, and affection. These are all finite. There's only so much time and attention you can spend on those around you. There are only so many material gifts you will give in your life. There is a finite number of compliments you'll give between this moment and the day you cease to exist.

And we have a moral duty to spend these finite resources on beings that actually experience pain, pleasure, and feelings.

If an AI deceives us into believing that we should care for its expressed (but not real) feelings, we can easily find ourselves misallocating those finite resources.

To provide a silly (but valid) example, imagine that you have the option of spending an hour conversing with your partner or spending that hour conversing with an AI. Imagine they've both expressed that they would appreciate your company and have asked you to spend time with them. In this scenario, spending that hour (of your attention and conversational energy) with the AI system may be the wrong thing to do. Any longing for your companionship that the AI system expresses isn't grounded in real pain, pleasure, or feelings. It's produced by an algorithm predicting the next set of words in a sequence.
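To make that last point concrete, here is a deliberately toy sketch (in Python) of what "predicting the next word" means. Everything in it, from the word choices to the probabilities in the NEXT_WORD_PROBS table, is invented purely for illustration; real language models learn vastly richer statistics, but the principle is the same.

```python
# A toy sketch of next-word prediction. Real language models use learned
# neural networks over huge vocabularies; this hand-written probability
# table is invented purely for illustration.
import random

# Hypothetical conditional probabilities: given the words so far,
# how likely is each candidate next word?
NEXT_WORD_PROBS = {
    "I": {"would": 0.6, "really": 0.4},
    "I would": {"love": 0.7, "enjoy": 0.3},
    "I would love": {"your": 0.8, "to": 0.2},
    "I would love your": {"company": 0.9, "help": 0.1},
}

def generate(prompt: str, max_words: int = 4) -> str:
    """Extend the prompt one word at a time by sampling from the table."""
    words = prompt
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words)
        if not options:
            break
        # Sample the next word in proportion to its probability.
        # No feeling is consulted anywhere in this process.
        choices, weights = zip(*options.items())
        words = words + " " + random.choices(choices, weights=weights)[0]
    return words

print(generate("I"))  # e.g. "I would love your company"
```

The point of the sketch is that a phrase like "I would love your company" falls out of weighted sampling over a table of numbers. Nothing resembling a feeling is consulted at any step.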

My views are subject to change

This article represents my beliefs (as of May 2025). I have published this with an open mind. My views are subject to change. I also acknowledge that there are some weak points in this argument. For instance:

  • Can we definitively confirm that pain, pleasure, and feelings are purely biological processes? Can these experiences exist without the biological makeup that humans have?
  • The claim that it's a "moral duty to care for something because it is capable of being conscious of pain, pleasure, and feelings" is mostly just my opinion, not a "fact". It is not an objective truth about the universe, and I cannot “prove” it to you. It's purely born of my evolutionary hardwiring and personal upbringing.

Your feedback

I'd love to hear your thoughts on this article (even if it's about a minor typo). You can email your suggestions or thoughts to .

Thank you for your attention. :)

This article was written by Nim Jayawardena.