Why RAG won’t solve generative AI’s hallucination problem

Hallucinations — the lies generative AI models tell, basically — are a big problem for businesses looking to integrate the technology into their operations.
Because models have no real intelligence and are simply predicting words, images, speech, music and other data according to a private schema, they sometimes get it wrong. Very wrong. In a recent piece in The Wall Street Journal, a source recounts an instance in which Microsoft’s generative AI invented meeting attendees and implied that conference calls covered subjects that were never actually discussed.
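To make the "just predicting" point concrete, here's a minimal sketch of next-token prediction using the Hugging Face transformers library (the gpt2 checkpoint and the prompt are illustrative choices, not anything Microsoft ships): the model assigns a probability to every candidate next token, and the statistically plausible ones win, whether or not they're true.

```python
# Minimal sketch: a language model scoring next tokens by probability.
# Illustrative only -- "gpt2" and the prompt are arbitrary stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The attendees at Monday's budget meeting were"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores over the entire vocabulary for the next position.
    logits = model(**inputs).logits[0, -1]

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)

# The model ranks plausible continuations; nothing in this loop checks
# any of them against a calendar, a transcript, or reality.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]).strip():>12}  p={p.item():.3f}")
```

A name that often follows "attendees were" in the training data can outrank the names of the people who were actually in the room; that gap is the hallucination problem in miniature.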
As I wrote a while ago, hallucinations may be an unsolvable problem.