Making AI trustworthy: Can we overcome black-box hallucinations?

Mike Capps
Contributor

Like most engineers, as a kid I could answer elementary school math problems by just filling in the answers.
But when I didn’t “show my work,” my teachers would dock points; the right answer wasn’t worth much without an explanation. Yet those lofty standards for explainability in long division somehow don’t seem to apply to AI systems, even those making crucial, life-impacting decisions.
The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an …