I wanted to extract some crime statistics broken down by type of crime and by different populations, all of course normalized by population size. I got a nice set of tables summarizing the data for each year I requested.

When I shared these summaries I was told they are entirely unreliable due to hallucinations. So my question to you is: how common a problem is this?

I compared results from ChatGPT-4, Copilot, and Grok, and the results are the same. (Gemini says the data is unavailable, by the way.)

So are LLMs reliable for research like that?

5 points

Asking an LLM for raw R code that accomplishes some task and then fixing the bugs it hallucinates can be a real time saver, though.

