Is Your AI Assistant Actually Intelligent? Let’s Ask It.

I use genAI, specifically Google Gemini, frequently. It’s very helpful with work tasks, such as summarizing documents, extracting material from transcripts, and writing headlines. In my personal life, I’ve used it for everything from diagnosing a whine in my fridge to designing a training program for an endurance cycling event.

But there are many things genAI is terrible at. For instance, it cannot count. Asking it to reduce a 2,000-word article to 1,000 words is a waste of time; it’s just as likely to add 1,000 and make it 3,000. (I never looked to see where it found the extra words.) I wrote an article last year about how researchers in Switzerland proved that LLMs are inherently incapable of complex math. Another team showed that they do math by approximation and then pretend they did it properly. And Google users were delighted last month to find that if you search for a made-up idiom like “you can’t lick a badger twice” plus the word “meaning,” Google will generate an AI snippet at the top that dutifully explains what it means (you can’t trick a person twice) and how it originated in the sport of badger baiting. If you try it now, Google will tell you it’s a phrase made up to trick LLMs, which ironically proves the point.

I have followed AI for decades, starting with a course at MIT way back in the last century. However, I wonder if LLMs, as impressive as they are at the tasks they do well, deserve to be classified as intelligent, even artificially.

Since this is a question that Gemini should be able to answer, I asked it, and I got an intelligent answer.

Q.   “Should we be calling LLMs artificially intelligent? Aren’t they just machines that are good at making associations between words and phrases and performing a limited set of tasks well, but lack many aspects of intelligence?”

A.   “You’re right to point out that LLMs excel at identifying patterns and associations within vast amounts of text data. [Like any good coach, Gemini likes to start with a compliment.] They can then use these patterns to generate text that often sounds remarkably human-like, translates languages, summarizes information, and even writes different kinds of creative content. Their capabilities in these specific areas are truly impressive.  

However, your skepticism about equating this to ‘intelligence’ in a broader sense is valid. Many argue that LLMs, in their current form, lack:

  • True understanding: While they can manipulate words in a meaningful way, there’s debate about whether they genuinely understand the concepts behind those words in the same way a human does.
  • Common sense reasoning: LLMs can sometimes make nonsensical errors that a human with basic common sense would easily avoid.
  • Consciousness and sentience: There’s no evidence that LLMs possess any form of consciousness, self-awareness, or subjective experience.
  • Intentionality and goals: LLMs don’t have their own desires, goals, or motivations. They operate based on the prompts and instructions they receive.  
  • Learning beyond their training data: While fine-tuning allows for adaptation, their core knowledge is largely fixed by the massive dataset they were initially trained on.

So, where does this leave us with the term ‘AI’?

  • From a narrow perspective: LLMs are undoubtedly a form of artificial intelligence in the sense that they are machines exhibiting abilities that would typically require human intelligence.
  • From a broader, more philosophical perspective: Whether they represent true or general artificial intelligence (AGI) – the kind of flexible, adaptable intelligence that humans possess – is highly debatable and, for many, the answer is a clear ‘no, not yet.’  

The current prevalent use of ‘AI’ to describe LLMs is partly due to marketing [no kidding] and the impressive capabilities these models demonstrate. It captures the public imagination and highlights their advanced nature.

Thinking of them as highly sophisticated pattern-matching machines that excel at language-based tasks [my italics] is a more accurate, albeit less catchy, description. Your point about them being ‘one particular kind of machine’ performing a ‘limited set of tasks well’ is a very reasonable and important perspective to hold.”

So, Gemini agrees but leaves me with a 21st-century version of Epimenides’ paradox; he was the philosopher from Crete who declared 2,600 years ago that “All Cretans are liars.” (If the statement is true, then Epimenides is lying, so it can’t be true. And if the statement is false, he is telling the truth, so it can’t be false either.) If I accept Gemini’s endorsement, I can’t say it’s unintelligent. And if I say it’s unintelligent, I can’t cite its endorsement.
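The bind has a crisp propositional shape. As a sketch (the symbol $S$ and the strong reading of “liar” as “always speaks falsely” are my simplifications; the second direction is the popular informal version rather than strict logic):

```latex
S \;\equiv\; \text{``All Cretans are liars,'' asserted by a Cretan} \\
S \;\Rightarrow\; \neg S \quad \text{(if $S$ is true, the speaker lies, so $S$ is false)} \\
\neg S \;\Rightarrow\; S \quad \text{(if $S$ is false, the speaker speaks truly, so $S$ is true)} \\
\therefore\; S \leftrightarrow \neg S, \quad \text{so no consistent truth value exists.}
```

The Gemini version has the same structure: citing the verdict presumes the judge is intelligent, which is exactly what the verdict denies.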

In any case, despite genAI’s well-documented limitations, genAI companies attracted $45 billion in capital in 2024, with the money still flowing in 2025. That’s a paradox that neither Epimenides nor I can explain. Gemini, of course, would be happy to, but I am leery of being licked twice.

Rhetoriq