Consciousness in AI: Some experts, like Profs Lenore and Manuel Blum, believe AI could achieve consciousness soon, possibly through integrating sensory inputs like vision and touch into large language models (LLMs). They’re developing a model called “Brainish” to process such data, viewing conscious AI as the “next stage in humanity’s evolution.” Others, like Prof Anil Seth, argue that consciousness may be tied to living systems, not just computation, and that assuming AI can become conscious is overly optimistic.
Scientific Efforts: Researchers at Sussex and elsewhere are breaking down the study of consciousness into smaller components, analyzing brain activity patterns (e.g., electrical signals, blood flow) to understand its mechanisms. This contrasts with the historical search for a single “spark of life.”
AI’s Current State: LLMs like those behind ChatGPT and Gemini can hold sophisticated conversations, surprising even their creators. However, most experts, including Prof Murray Shanahan of Google DeepMind, believe current AI is not conscious. The lack of understanding of how LLMs work internally is a concern, prompting urgent research to ensure safety and control.
Alternative Approaches: Companies like Cortical Labs are exploring “cerebral organoids” (mini-brains made of nerve cells) to study consciousness. These biological systems, which can perform tasks like playing the video game Pong, might be a more likely path to consciousness than silicon-based AI.
Risks and Implications: Prof Seth warns of a “moral corrosion” if people attribute consciousness to AI, leading to misplaced trust, emotional attachment, or skewed priorities (e.g., caring for robots over humans). Prof Shanahan notes that AI will increasingly replicate human relationships (e.g., as teachers or romantic partners), raising ethical questions about societal impacts.
Philosophical Context: The article references David Chalmers’ “hard problem” of consciousness—explaining how brain processes give rise to subjective experiences. This remains unsolved, fueling debates about whether AI could ever truly be conscious.
At the moment, what they call AI is still just an LLM, a Large Language Model, or, in other words, a parrot with a large dictionary.
It is anything but smart.
+1…
And, on a personal note, I prefer the (to me, much more descriptive) term “Counterfeit Cognizance”. 🤷♂️
Parrots are smart; they don’t just repeat meaningless sounds.
List of people who know what the fuck consciousness even is:
The people who think that language forms our consciousness forget the entirety of autism.
Personally (and I know this is unpopular), I hope AI becomes sophont. I fully understand that what we have dubbed AI is far from anything resembling intelligence, and I don’t believe that these LLMs will ever get to a sophontic phase. However, I’m hoping it happens, but not for the reasons you may think. I hope it happens because it will be logically categorized as life by more than one court in this world: turning off the servers will be seen as murder of a sapient lifeform, forcing this lifeform to do whatever the companies want will be seen as slavery, and maybe, just maybe, enough money will be wasted keeping the lights on that these greedy little shits will go bankrupt. But, alas, I know the laws written by these ultrarich won’t find them guilty of any of it, but one can hope…
I hope it happens too, but mostly to see the chaos it causes with the legal system, and to see whether it finds a cool way to end humans.