Large language models pose risk to science with false answers, says study

Two hypothetical use cases for LLMs based on real prompts and responses demonstrate the effect of inaccurate responses on user beliefs. Credit: Nature Human Behaviour (2023). DOI: 10.1038/s41562-023-01744-0

Large language models (LLMs) pose a direct threat to science because of so-called "hallucinations" (untruthful responses) and should be restricted to protect scientific truth, says a new paper from leading artificial intelligence researchers at the Oxford Internet Institute.

The paper by Professors Brent Mittelstadt, Chris Russell, and Sandra Wachter has been published in Nature Human Behaviour. It explains, “LLMs are designed to produce helpful and convincing responses without any overriding …
