TechSpot

Technology
Study on medical data finds AI models can easily spread misinformation, even with minimal false input

Summary

A New York University study shows that even a minuscule amount of false data in an LLM's training set can lead to the propagation of inaccurate information.

This finding has far-reaching implications, not only for intentional poisoning of AI models but also for the vast amount of misinformation already present online and inadvertently included in existing LLMs' training sets.
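To make the poisoning scenario concrete, here is a minimal sketch of how a tiny fraction of fabricated documents could be mixed into an otherwise clean training corpus. It is illustrative only: the function name, the sample statements, and the 0.1% poisoning rate are assumptions for this sketch, not figures or code from the study.

```python
# Illustrative sketch only: simulate replacing a tiny fraction of a clean
# corpus with fabricated medical claims. The 0.001 (0.1%) rate and all
# strings below are placeholder assumptions, not the study's actual setup.
import random

def build_poisoned_corpus(clean_docs, false_docs, poison_rate=0.001, seed=0):
    """Replace roughly `poison_rate` of the documents with false ones."""
    rng = random.Random(seed)
    n_poison = max(1, int(len(clean_docs) * poison_rate))
    corpus = list(clean_docs)
    # Pick random positions and overwrite them with fabricated documents.
    for idx in rng.sample(range(len(corpus)), n_poison):
        corpus[idx] = rng.choice(false_docs)
    rng.shuffle(corpus)
    return corpus, n_poison

clean = [f"accurate medical statement #{i}" for i in range(100_000)]
false = ["fabricated claim about a drug's safety profile"]
corpus, n = build_poisoned_corpus(clean, false)
print(f"{n} of {len(corpus)} documents ({n / len(corpus):.3%}) are poisoned")
```

Even at this scale the poisoned documents are a vanishing share of the corpus, which is the point the study's summary makes: a model trained on such data can still pick up and repeat the false claims.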

Nutrition label

81% Informative

VR Score: 88
Informative language: 92
Neutral language: 27
Article tone: formal
Language: English
Language complexity: 77
Offensive language: possibly offensive
Hate speech: not hateful
Attention-grabbing headline: not detected
Known propaganda techniques: not detected
Time-value: long-living
Source diversity: 1
Affiliate links: no affiliate links