The consciousness question in the age of AI
Brett Kagan, a fast-talking neuroscientist with the charged energy of someone on the cusp of a breakthrough, opened the door of a refrigerated container and gently pulled out a petri dish.
As rain lashed the windows, Kagan placed the dish under a microscope, fiddled with the resolution, and invited me to look.
Packed between the neat little lines that guide electrical charge across a computer chip, I saw a busy cloud of chaos. Hundreds of thousands of human neurons, hemmed in together and overlaid on the chip.

Figure 1. The consciousness question in the age of AI
These neurons, and the chip they sat on (Figure 1), were part of a novel bio-technological system called DishBrain, which made headlines last year after Kagan and his peers at start-up Cortical Labs taught it to play the computer game Pong.
The team have since won a $600,000 grant from the Federal Government to fund their research into merging human brain cells with AI.
And bio-technological systems like DishBrain are just one among many approaches scientists are taking to develop systems with the kind of cognitive flexibility that humans and animals take for granted, but which has so far eluded AI.
It’s seen as the AI holy grail, what some in the field call artificial general intelligence (AGI).
Microsoft Research and OpenAI have gone so far as to say that GPT-4, the latest of OpenAI’s revolutionary large language models (LLMs), already exhibits some sparks of this mystical quality.
But as researchers race to create increasingly complex AI, the ghost in the machine grows more haunting.
Those fears surfaced publicly in 2022, when Google engineer Blake Lemoine claimed that the company's LaMDA chatbot had become sentient. Lemoine was subsequently fired and disavowed by Google, but the fears he was tapping into still thrum beneath the skin of society, stoked by the public fervour around AI that has defined 2023.
The question of whether AI could ever be ‘conscious’ is a tense and controversial scientific debate. Some call it impossible, arguing that there is something fundamental about biology that is necessary for conscious experience. But that, say others, draws a mystical veil over consciousness that belies its altogether simpler nature.
Still more researchers don’t like to use the ‘c’ word at all, complaining that it’s impossible to have a scientific debate over a term that has no clear scientific definition.
In late August, a group of nineteen AI and consciousness researchers published a pre-print in which they argue that AI can and should be assessed for consciousness empirically, based on neuroscientific theories.
“People are very willing to attribute consciousness even to obviously non-conscious things,” says Colin Klein, a philosopher at the Australian National University and one of the report’s authors. “And that’s something we’ve got to worry about as well.”
At a recent roundtable discussion on AI consciousness, AI researcher Yoshua Bengio summed it up.
“Whether we succeed in building machines that are actually conscious or not, if humans perceive those AI systems as conscious, that has a lot of implications that could be extremely destabilising for society.”[1]
Consciousness and the future of AI
As I recently wrote in How do we know when AI becomes conscious and deserves rights?, my stance is that we will likely never “know” whether AI has consciousness. Wherever we get to, there will be a variety of opinions on whether an AI system is conscious.
While proponents of Integrated Information Theory (IIT) claim that it can measure consciousness, none of the current theories of consciousness seem able to tell us whether or when current AI systems will be conscious.
While I personally doubt it’s possible, I’m open to the possibility. Certainly this is a domain where we need more specific and directed research that could give us a clearer perspective on the emergence of consciousness in machines. [2]
References:
- https://cosmosmagazine.com/my-cosmos/ai-consciousness-my-cosmos/
- https://rossdawson.com/theories-consciousness-age-ai/
Cite this article:
Gokula Nandhini K (2023), The consciousness question in the age of AI, AnaTtechmaz, pp.752