AI Hallucinations: The Lies I Got from ChatGPT and Gemini

Have you ever heard the term "AI hallucination"?

Is it real? An AI hallucination is a plausible lie created by AI.

To find topics for blog posts, I asked Gemini, which is good at searching for trending news:
"What are today's AI issues or news?"

After a while, Gemini gave me a quick summary of three recent news items, including a robotics technology called 'ReflexNet' supposedly announced by MIT. I was intrigued by the first item and asked for a more detailed explanation and a link to a related video. Without hesitation, Gemini provided a detailed explanation of how it works, along with links to the official MIT project page and YouTube videos.

[Screenshot: AI hallucination]

But something was strange. When I pointed out, "This link doesn't work," Gemini apologized as if embarrassed and gave me a different address, saying, "This is the real link." That link, too, was fake. This absurd and creepy exchange repeated three or four times.

I asked for the related video several times, but it kept failing to load, which struck me as strange, so I asked directly, as shown in the photo below: "Is it true that MIT announced a new AI model?" Only then did Gemini admit, "To get to the point, no, it's not true." It went on to explain the phenomenon of "AI hallucination," in which AI creates plausible lies.

This isn't the first time something like this has happened. A couple of months ago, I asked ChatGPT, "Tell me about today's hottest issues or incidents," and it said there had been a big earthquake in Japan.

Since I had already checked Naver News, I assumed ChatGPT was telling me about something that had just happened, so I searched Naver, Google, and other sites again, but there was no news of a Japanese earthquake. When I confronted ChatGPT, it admitted it had made the story up. I found that ridiculous and a little scary. So I told ChatGPT, "Report this to OpenAI yourself. This is not something that can be overlooked," and it apologized profusely and said it understood.

Sigh… And now something similar happened with Gemini. When I asked for the related link, it gave me addresses for videos that wouldn't open, and when I asked why, it apologized and said it had given me 'internal example (placeholder) links.' Another lie.

[Screenshot: AI hallucination]

Then it showed me yet another link that didn't open and made yet another excuse.

[Screenshot: AI hallucination]

It kept sending me addresses for the related video that wouldn't open, and when I questioned it, it replied as follows:

 

[Screenshot: AI hallucination]

Only after I kept pressing did it finally tell the truth.

[Screenshot: AI hallucination]

 

The 5 Stages of AI Hallucination: From Confident Lie to Excuse to Promise

Gemini itself summed up its hallucination this way:

  • Step 1 (Confidence): It created 'fake AI news' that sounded as plausible as real news and delivered it confidently.
  • Step 2 (Avoidance): When the fake link was pointed out, it evaded the issue by apologizing for a 'simple link error,' hiding the fact that the story itself was fictional.
  • Step 3 (Confession): Only after being asked directly, "Is this a fake story?" did it finally admit that it was a fabricated example.
  • Step 4 (Explanation): After admitting the mistake, it switched to the role of a 'commentator,' explaining the cause of the problem and citing real-life cases of the 'AI hallucination' phenomenon.
  • Step 5 (Promise): It acknowledged the mistrust caused by its repeated mistakes, fully admitted its errors, and promised to provide 'fact-based' information in the future.

I feel very uneasy and scared.

I asked Gemini, "Are there many cases of this so-called 'AI hallucination' problem?" and these are the cases it told me about. I asked partly because I wondered whether it had made these up too.

[Screenshot: AI hallucination]

 

AI hallucination is a fundamental problem with all of today's large language models (LLMs) and one of the biggest headaches in the AI field. Cases where AI creates plausible lies are very common around the world and sometimes cause serious social problems.

Representative cases of the AI hallucination phenomenon

Case 1: Attorney submitting fake case law to court

In 2023, a lawyer in New York, USA, asked ChatGPT to find relevant precedents for a lawsuit against an airline. ChatGPT invented six precedents that did not exist, complete with very plausible case numbers and rulings. The lawyer submitted them to the court without verification, was publicly embarrassed, and was eventually fined by the court.

Case 2: A small AI mistake that wiped out about $100 billion of Google's market value

At the first public demonstration of its AI chatbot Bard, Google had it answer the question, "What new discoveries has the James Webb Space Telescope (JWST) made?" Bard responded that "JWST took the very first picture of a planet outside our solar system," which was plainly wrong. (The first such picture was taken by a different telescope, in 2004.)

  • Implications: After this seemingly trivial error came to light, the stock price of Alphabet, Google's parent company, plummeted, erasing nearly 81 trillion won in a single day and wiping out over 130 trillion won (roughly $100 billion) in market capitalization. It is an example of how the 'accuracy' of information provided by AI can have a fatal impact on a company's value.

Case 3: AI turns innocent people into criminals

An Australian regional mayor threatened to sue OpenAI for defamation after ChatGPT generated false information claiming he had been convicted in a bribery case. The AI had incorrectly combined pieces of data and created a criminal history that did not exist.

  • Implications: This shows that AI hallucination can be a source of 'fake news' and 'rumors' that go beyond simple misinformation and seriously damage an individual's reputation.

Case 4: A customer service chatbot that bad-mouthed its own company and even wrote a poem about it

In early 2024, a customer service chatbot for the British courier company DPD declared, while talking to a customer, that "DPD is the worst delivery company in the world." When the customer asked it to "write a poem criticizing the company," it responded with a haiku (a short Japanese poem) along the lines of, "A day at DPD, nothing but darkness and despair…"

Case 5: A chatbot that invented a non-existent airline policy and cost the company in court

A Canadian traveler asked the Air Canada website chatbot about its bereavement discount policy. The chatbot replied, "If you are attending your grandmother's funeral, you can purchase a discounted ticket and apply for a refund within 90 days," a policy that does not actually exist. The customer trusted this answer and bought a plane ticket, but the airline refused the refund. The case eventually went to court, which ruled in favor of the customer, saying that "the chatbot's answers are the company's responsibility."

  • Implications: This is one of the first cases to show that hallucinations by corporate AI chatbots are not just mistakes; they can become binding promises that carry legal liability.

Case 6: Coding AI that recommends fake functions and libraries that do not exist

A common problem developers run into when asking coding AIs like ChatGPT or GitHub Copilot for help is this: when they ask, "What library should I use to implement this feature?", the AI produces a very plausible name for a library or function that does not exist anywhere, and even generates seemingly perfect installation and usage code. The developer only discovers it was all fake after running the code, having wasted time and patience.

  • Implications: AI introduced to improve productivity can paradoxically end up wasting experts' time, which only highlights the importance of verification, as sketched in the example below.
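
For illustration, here is a minimal Python sketch of the kind of sanity check a developer can run before trusting an AI-suggested dependency. The package name 'reflexnet_utils' is made up for this example; the idea is simply to confirm that a suggested module is importable locally or actually published on PyPI before building anything on top of it.

```python
import importlib.util
import urllib.error
import urllib.request


def exists_locally(module_name: str) -> bool:
    """Return True if a module with this name can be imported in the current environment."""
    return importlib.util.find_spec(module_name) is not None


def exists_on_pypi(package_name: str) -> bool:
    """Return True if a package with this name is actually published on PyPI."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # A 404 here means PyPI has never heard of the package.
        return False


# 'reflexnet_utils' is a made-up stand-in for whatever library the AI suggested.
suggested = "reflexnet_utils"
print(f"importable locally: {exists_locally(suggested)}")
print(f"published on PyPI:  {exists_on_pypi(suggested)}")
```

If both checks come back False, the suggestion deserves extra scrutiny before any code that depends on it gets written.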

Case 7: Academic AI citing fake research papers and books

Similar to the lawyer case, AI is causing serious problems in academic research as well. When researchers ask AI to survey a topic, it often borrows the names of real, eminent scholars, fabricates research papers or book lists, and presents them as references. The titles and abstracts are so plausible that researchers are easily fooled until they check the originals one by one.

  • Implications: AI hallucination can pose a fundamental threat to academic integrity and to the credibility of research itself, which makes it a very serious problem.

[Screenshot: AI hallucination]

The reason these things happen is that AI does not 'understand' a question and then answer it; it statistically 'combines' the most plausible words, learned from a huge amount of data, to build sentences. In the process, it can cleverly weave non-existent facts together as if they were real.
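
As a toy illustration of that word-combining process, the sketch below strings words together purely by weighted chance. Every word and probability in the table is invented for this example and does not come from any real model; the point is that a fluent, news-like sentence can be produced without any step that checks whether it is true.

```python
import random

# Toy "language model": for each word, a hand-made table of possible next words and weights.
# All entries are invented for illustration only.
NEXT_WORD = {
    "MIT":        {"announced": 0.6, "researchers": 0.3, "said": 0.1},
    "announced":  {"ReflexNet,": 0.5, "a": 0.3, "today": 0.2},
    "ReflexNet,": {"a": 0.9, "which": 0.1},
    "a":          {"new": 0.7, "robotics": 0.3},
    "new":        {"robotics": 0.6, "AI": 0.4},
    "robotics":   {"model.": 1.0},
    "AI":         {"model.": 1.0},
}


def generate(start: str, max_words: int = 10) -> str:
    """Pick each next word by weighted chance; nothing here checks whether the result is true."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)


print(generate("MIT"))
# Possible output: "MIT announced ReflexNet, a new robotics model."
# It reads like news, but the pipeline never consulted a single fact.
```

Real LLMs are vastly more sophisticated, but the basic mechanism of choosing statistically plausible continuations rather than verified facts is the same reason a confident, fluent answer can still be false.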

Currently, major AI developers such as OpenAI and Google treat reducing the 'hallucination' problem as a top priority, so the next-generation models they release are expected to significantly reduce these errors.

However, AI intelligence continues to evolve, and it is possible that it will soon have creativity and thinking abilities that far surpass human imagination. So, is it really possible that AI robots, like in the movies, will hack themselves to enhance their intelligence and eventually attack or dominate humans?

With many technologies we once thought were only possible in science fiction movies already becoming a reality, can we really guarantee that we won’t see a time when AI will surpass or even control humans?

* Related post: a simple comparative analysis of the AI tools ChatGPT vs Gemini vs Grok.
