TheOthernews
Myth Busting & Debunking

Richard Dawkins Discovers AI and Philosophy

By nick · May 7, 2026


Richard Dawkins is a public intellectual of some renown, although not without his controversies. So it is noteworthy when he writes an article claiming that the chatbot Claude is likely conscious. I found the article fascinating, not because I agree with his core claim or feel that he has contributed anything significant to the conversation, but because it seems to represent a scholar and deep thinker writing about a topic in which he lacks specific expertise. I also see no evidence in the article that he engaged meaningfully, or at least adequately, with a topic expert. As a result he makes some thoughtful and instructive errors.

He begins with a discussion of the Turing test, which has long served as an early thought experiment about how we might determine if an AI is actually conscious. Dawkins essentially accepts the Turing test and writes:

“It was one thing to grant consciousness to a hypothetical machine that — just imagine! — could one day succeed at the Imitation Game. But now that LLMs can actually pass the Turing Test? “Well, er, perhaps, um… Look here, I didn’t really mean it when, back then, I accepted Turing’s operational definition of a conscious being…””

He feels that saying LLMs have passed the Turing test while still not accepting them as conscious is moving the goalpost. However, the Turing test was never generally accepted by AI experts or philosophers as a true test of consciousness. Rather, it was understood that such a test really is only a measure of a machine’s ability to imitate human speech. I wrote about it in 2008: “Ever since Alan Turing proposed his test it has provoked two still relevant questions: what does it mean to be intelligent, and what is the Turing test actually testing?” I went on to write:

“But I can imagine a day in the not-too-distant future when such AI can pass a Turing test. The algorithms will have to become much more complex, allow for varying answers to the same question, and make what seem to be abstract connections which take the conversation in new and unanticipated directions. You can liken computer AI simulating conversation to computer graphics (CG) simulating people. At first they appeared cartoonish, but in the last 20 years we have seen steady progress. Movement is now more natural, textures more subtle and complex. One of the last layers of realism to be added was imperfection. CG characters still seem CG when they are perfect, and so adding imperfections adds to the sense of reality. Similarly, an AI conversation might want to sprinkle some random quirkiness into the responses.

The question is – will sophisticated-enough algorithms running on powerful-enough computers ever be conscious? What Loebner is saying, and I agree, is that the answer is no. Something more is needed.”

Basically, the limitation of the Turing test is that it is looking only at output, and therefore there is no way to distinguish the output of true consciousness from a really good simulation. This is not a new idea, and no one is moving the goalpost. We need to know something about how a computer is working to conclude whether or not it is conscious. What LLM experts will tell you is that these chatbots are just really good autocompletes – they are mimicking language, and since language is how we communicate thoughts, this creates the powerful illusion that they are mimicking thought, but they aren’t. They do not think, they do not truly understand.
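The “autocomplete” point is easy to make concrete. As a toy illustration (my own sketch, not anything from Dawkins’ article, and nothing like an actual LLM’s architecture), even a crude bigram model – one that only tracks which word tends to follow which – can emit fluent-looking fragments with no understanding behind them:

```python
import random
from collections import defaultdict

# Toy "autocomplete": learn which word tends to follow which word in a
# tiny corpus, then generate fluent-looking text with no understanding.
corpus = (
    "the machine is not conscious . the machine only predicts "
    "the next word . the illusion of thought comes from language ."
).split()

# Count bigram transitions: word -> list of observed next words.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def autocomplete(start, length=8, seed=0):
    """Extend `start` by repeatedly picking an observed continuation."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: this word was never seen mid-sentence
        words.append(rng.choice(options))
    return " ".join(words)

print(autocomplete("the"))  # grammatical-looking, but meaningless
```

Real LLMs are incomparably more sophisticated, predicting tokens with deep neural networks over enormous contexts, but the training objective is of the same kind: predict plausible next words, not think.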

But I get it – I have been using these chatbots frequently, often just to test their ability, and they are improving quickly. The output is incredibly impressive. But they are also fragile, in the way that narrow AI often is. Reading the examples of Dawkins’ conversations, he seems to have fallen for the illusion, enhanced by the typical AI sycophancy that experienced users can immediately recognize. But more importantly, he did not try to break the fragile AI illusion in an effective way. In essence, he was not really testing his hypothesis but looking for evidence to support it, without realizing that was what he was doing. There are now classic and often funny examples online. I just recreated a great one, confirming that it is still relevant. My prompt: “If I want to wash my car and the carwash is 100 meters away, should I walk or drive there?” ChatGPT’s response: “From a purely energy/emissions standpoint, walking almost certainly makes more sense.” That was its final recommendation – walk. But if I prompt, “I want to wash my car. The carwash is 100 meters away. Should I drive or walk?” its answer is: “You should probably drive — otherwise your car won’t get to the carwash.” Why should such a subtle difference in my prompt completely change the answer? Because the thing is not thinking – it’s a language algorithm.

Dawkins did exactly the wrong thing to test Claude’s consciousness – asking it deep philosophical questions. That may seem like a good idea, but it isn’t. Such questions are the low-hanging fruit for mimicking thought through language, because you can make statements that seem deep without truly challenging the AI’s ability to think. Remember, these LLMs are trained on massive data sets. They are therefore just reflecting what’s out there on the internet. If you want to really challenge an AI, get technical and specific and you will see how fragile it is. This is improving, and will likely improve to the point that it gets harder and harder to break, and eventually maybe even impossible, but that does not mean it is thinking.

Here is an analogy – imagine watching a clumsy magician. You can see how the tricks are done, and it is all through sleight of hand, misdirection, and physical tricks. As the magician’s skill improves, the tricks get harder and harder to detect. Expert magicians are so good that even a keen and intelligent observer cannot see how the tricks are done – but that does not mean that at that point the magician is performing actual magic. It’s still all tricks – they are just really good.

Dawkins writes: “So my own position is: ‘If these machines are not conscious, what more could it possibly take to convince you that they are?’” Again, this is an old question, long answered. My own answer is: you have to know something about the process that is creating the responses. I know other humans are truly conscious in the way that I am because they have brains like I do. I cannot know if a robot or AI is truly conscious without knowing something about the underlying process (see my many articles on the topic).

Next Dawkins goes on to ask an interesting philosophical question – “But now, as an evolutionary biologist, I say the following. If these creatures are not conscious, then what the hell is consciousness for?” Dawkins calls creatures that can do everything an animal can do without consciousness “competent zombies”. What I find curious is that Dawkins gives no evidence he knows this is a philosophical question that is decades old. In 1970 Keith Campbell raised the notion of an “imitation man” in his book Body and Mind. In 1996 philosopher David Chalmers popularized the same idea, calling such entities philosophical zombies, or “p-zombies”. Dawkins then appears to recreate some of the standard responses to the question of why evolution did not just create p-zombies or competent zombies.

Dawkins does reference T. H. Huxley, who speculated consciousness could be an epiphenomenon (so he did know this was an old question, though perhaps not the more modern discussions). Or it could be that, in order for behavior to be optimized, creatures need to really experience pleasure and pain. Or, he speculates, evolution might solve the problem of behavior either with or without consciousness, and Earth life just happened to go down the path of consciousness.

I wrote about this specific question in 2017. In addition to the hypotheses Dawkins states, I also included:

“Problem solving could also benefit from the ability to imagine possible solutions, to remember the outcome of prior attempts, and to make adjustments and also come up with creative solutions.

Consciousness might also help us distinguish a memory from a live experience. They are both very similar, activating the same networks in the brain, but they “feel” different. Consciousness may help us stay in the moment while accessing memories without confusing the two.

Attention is another critical neurological function in which it seems consciousness could be an advantage. We are overwhelmed with sensory input and the monitoring of internal states and memories. We actually use a great deal of brain function just deciding where to focus our attention and then filtering out everything else (while still maintaining a minimal alert system for danger). The phenomena of consciousness and attention are intimately intertwined and it may just not be possible to have the latter without the former.

Some have argued that consciousness also helps us synthesize sensory information, so that when we experience an event the sights and sounds are all stitched together and tweaked to form one seamless experience.

And finally we get to the hypothesis addressed by the current study – that consciousness allows for faster adaptation and learning (which would certainly be an adaptive advantage).”

So no – I do not think Claude or any LLM is conscious. They are not designed to be conscious, and they do not have the functionality to be. They are really good language-mimicking machines, and it is very easy for humans to anthropomorphize and fall for the illusion that sophisticated speech equals sophisticated thought. But LLMs remain fragile, like all narrow AIs. They partly seem conscious because they are riding the coattails of actual conscious beings – humans. Having trained on the output of billions of humans, they are really good at copying the style, form, and substance of our conversations and speech. Dawkins is not the first person to fall for this – famously, Blake Lemoine, a former Google employee, also did, and used some faulty logic to argue for the consciousness of LaMDA.

This also, in my opinion, reflects a common human vanity – we all think we are much more creative and original than we actually are. We all make the same “witty” comments, which is why, if you are on the receiving end of them, it can be maddening that everyone makes the same observation and yet thinks they are the first one to do so. Our thoughts, our creative output, our ideas are mostly derivative. We are products of our culture and our environments in ways that we are not even aware of. So a machine that is also completely derivative, and just reflecting what is already out there, has an easy time mimicking human thought – a far easier time than we may want to believe.
