Fighting fake 'facts' with two little words

August 02, 2023

Asking ChatGPT for answers comes with a risk: it may offer you entirely made-up "facts" that sound legitimate, as a New York lawyer recently discovered. Despite being trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information, known as hallucinations. Inspired by a phrase commonly used in journalism, researchers at Johns Hopkins University studied the effect of incorporating the words "according to" in LLM queries. "Language models are really good at following syntactic and semantic cues," researcher Orion Weller explains. By prompting a model to attribute its answer to a source, the researchers could determine whether it was making things up or generating answers based on data it had already learned.
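The intervention is simple enough to sketch as a prompt transformation. The exact wording, the default source, and the sample question below are illustrative assumptions, not the researchers' verbatim setup:

```python
def add_grounding_cue(question: str, source: str = "Wikipedia") -> str:
    # Append an "according to"-style cue so the model is nudged toward
    # text it can attribute to the named source rather than improvised "facts".
    # The phrasing here is a sketch, not the study's exact prompt.
    return f"{question} Respond using only information that can be attributed to {source}."

plain = "Who wrote Moby-Dick?"
grounded = add_grounding_cue(plain)
print(grounded)
# The grounded prompt would then be sent to the LLM in place of the plain one.
```

The transformed prompt keeps the original question intact and only adds the attribution cue, so it can be dropped in front of any existing query pipeline.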

Source: Johns Hopkins University