Large language models pose risk to science with false answers, says study
Large Language Models (LLMs) pose a direct threat to science because of so-called "hallucinations" (untruthful responses), and should be restricted to protect scientific truth, argues a new paper from leading artificial intelligence researchers at the Oxford Internet Institute.