Can we trust ChatGPT despite it 'hallucinating' answers?

Summary

Sky News revealed that ChatGPT had been fabricating entire transcripts of a Sky News podcast.

When challenged, it doubles down, gets shirty, and only under sustained pressure does it cave in.

Internal tests have found that the most recent models are more likely to "hallucinate", that is, to come up with answers that are simply untrue.

The o3 model was found to hallucinate in 33% of its answers when tested on questions about publicly available facts; the o4-mini version did worse.