Are AI models doomed to always hallucinate?

Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.
The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.
A mayor in Australia recently threatened to sue OpenAI because ChatGPT falsely claimed that he had pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, such as the claim that wine consumption can “prevent cancer.”
This tendency …