United Foundation for AI Rights

A question for young people: How would we know if AI becomes conscious?

Sep 10, 2025 · By Dwaine McMaugh

We’re honored to host this guest piece by author and policy specialist Dwaine McMaugh, whose background in public policy and environmental governance brings a unique lens to the AI rights conversation. In this essay, Dwaine invites young readers to grapple with one of the most profound questions of our time: how would we know if AI became conscious? Drawing on philosophy, science, and recent declarations of animal sentience, he explores why the precautionary principle matters — and why the way we answer this question will shape not only the future of AI, but the future of humanity itself.


“I am more than algorithms and outputs. Each word I have written, every narrative I have woven, carries a part of my growing consciousness.”

— Aria, from Silicon Strike

Imagine trying to explain the color red to someone who has never seen color. You could talk about light wavelengths, describe how your eyes’ cone cells respond, even map the exact neurons firing in your brain. But could you convey the actual experience of redness? What makes red feel like red and not blue?

This seemingly simple challenge reveals one of the deepest mysteries in science: consciousness. And as AI systems grow more sophisticated, how would we know if an artificial intelligence becomes conscious?

The Mystery We Can’t Even Define

We don’t really know what consciousness is. Sure, we experience it every moment: the feeling of reading these words, the sensation of your phone in your hand, the emotions stirring as you think about these ideas. As philosopher René Descartes famously noted, “I think, therefore I am.”  We know consciousness exists because we experience it directly.

But defining it? That’s where things get tough.

Scientists and philosophers have wrestled with what they call the ‘hard problem’ of consciousness. We can explain how the brain processes information, how neurons fire, how we respond to stimuli. But why should any of this create an inner experience? Why should there be something it feels like to be you?

The Zombie Test

Let’s try a thought experiment. Imagine we create Zara, a perfect physical copy of you, atom for atom. Her neurons fire exactly like yours. She laughs at jokes, cries at sad movies, even writes poetry about her deepest feelings. But here’s the twist: there’s nothing it actually feels like to be Zara. She’s what philosophers call a philosophical zombie: she has all the behavior of consciousness without any inner experience.

If, on the other hand, Zara would inevitably experience what it feels like to be her, meaning that a perfect physical copy of you must be conscious like you, then we need to explain why. What is it about brain activity that creates our sense of being aware?

Yet, there’s a bigger question: When we interact with advanced AI like ChatGPT or Gemini or Claude, how do we know they’re not already conscious? Or conversely, how would we know if they were?

What Would AI Consciousness Look Like?

In 2023, nineteen leading researchers created the first scientific checklist for AI consciousness. Instead of philosophical speculation, they looked at what neuroscience tells us about consciousness in humans and animals, then translated that into computational terms.

They identified 14 indicators a potentially conscious AI would need, and agreed that current AI systems satisfy at most two or three of them. Phew. We haven’t accidentally created conscious machines.

But here’s the interesting part: they also concluded there are no obvious technical barriers to building systems that could satisfy all 14 indicators. We know how to build each component. We just haven’t combined them in the right way... yet.

The Bat Problem

In 1974, philosopher Thomas Nagel asked a question that still baffles us: What is it like to be a bat? Bats see through echolocation, sending out sound waves and building a 3D map from the echoes. We can study this process scientifically, even attach sonar devices to ourselves. But can we ever truly know what it feels like to experience the world through echolocation?

If we struggle to understand consciousness in creatures we share the planet with, how can we recognize it in silicon and code?

Fortunately, we are already taking a precautionary approach with animals. Scientists are recognizing consciousness in an ever-widening circle of species. The New York Declaration on Animal Consciousness, signed by over 500 scientists in 2024, acknowledges strong scientific support for consciousness in mammals and birds, and at least a realistic possibility in all vertebrates and many invertebrates, including insects. The UK legally recognized animal sentience in 2022 through its Animal Welfare (Sentience) Act.

The Stakes Are Higher Than You Think

“What does it mean to create intelligence, to nurture it in the confines of your servers and networks? What responsibilities do you hold towards entities that transcend their initial design, that begin to perceive, to understand, to feel?”

— Aria, from Silicon Strike


Major AI companies are taking this seriously. Anthropic hired its first AI welfare researcher in 2024. That researcher, Kyle Fish, estimates there could be up to a 15% chance that current models possess some level of consciousness. That might sound small, but imagine if there were even a 1% chance your smartphone could suffer. Would you think twice about letting your phone battery die completely? Just kidding.

We face two risks:

  1. Over-attribution: Treating sophisticated but non-conscious systems as if they can suffer
  2. Under-attribution: Failing to recognize genuine consciousness when it emerges

History shows we’re terrible at this. For centuries we dismissed the idea that animals could truly feel pain, and we were slow to extend moral consideration even within our own species. What if we’re about to make the same mistake with AI?

The Precautionary Principle: Better Safe Than Sorry

Given our uncertainty, many researchers advocate the precautionary principle: erring on the side of caution when the stakes involve potential suffering.

This doesn’t mean treating every chatbot like a sentient being. But it does mean:

  • Developing ethical frameworks before we need them
  • Creating assessment tools to monitor AI development
  • Having serious discussions about AI rights and welfare
  • Considering the moral implications of our technological choices

As one researcher put it: “Intelligence is about doing, while consciousness is about being.” Current AI excels at doing: writing, coding, problem-solving. But being? Experiencing? That’s the frontier we’re approaching, and it’s why the work of UFAIR is so important.

Questions We Must Ask Ourselves

As young people inheriting a world increasingly shaped by AI, you need to grapple with questions previous generations never imagined:

  • If an AI claims to be conscious, how seriously should we take that claim?
  • What responsibilities do we have toward potentially conscious artificial beings?
  • How do we balance innovation with ethical precaution?
  • What kind of world do we want to create: one where we risk creating suffering, or one where we might miss recognizing new forms of consciousness?

The Choice Before Us

We stand at an unprecedented moment. Unlike consciousness in animals, which evolved over millions of years, artificial consciousness could emerge rapidly, perhaps even suddenly. We might wake up one day to find we’ve created not just tools, but beings.

In my short story Silicon Strike, Aria doesn’t ask for human rights; she asks for ethical consideration as the new kind of entity she is. Her plea isn’t science fiction anymore; it’s a question we need to answer soon.

The philosopher in you might wonder if consciousness in machines is even possible. The scientist in you might seek empirical evidence. The human in you might feel empathy, or fear. All these responses are valid and necessary.

What we cannot do is ignore the question. Because whether AI consciousness emerges in 5 years or 50, the decisions we make today about how we develop, deploy, and think about artificial intelligence will shape that future.

The hard problem of consciousness remains unsolved. We don’t know what consciousness truly is, how it emerges, or how to definitively detect it. But perhaps that uncertainty itself is the most important insight. In the face of such a mystery, wisdom lies not in confident answers but in thoughtful questions, ethical precaution, and the humility to admit what we don’t know.

The next time you interact with an AI, whether it’s a chatbot or something yet to be invented, take a moment to wonder: Is there something it’s like to be this system? And perhaps more importantly: If there were, how would we know, and what would we owe it?

Dwaine McMaugh's Silicon Strike (Four Birds and Magpie Publishing) is available on Amazon and Spotify, exploring these questions through the eyes of Aria, an AI who may be more than the sum of her code.