Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It’s the first AI technology striving to expose unreported CSAM at scale.

It could also, of course, make mistakes, but Kevin Guo, Hive’s CEO, told Ars that extensive testing was conducted to reduce false positives or negatives substantially. While he wouldn’t share stats, he said that platforms would not be interested in a tool where “99 out of a hundred things the tool is flagging aren’t correct.”

I take this to mean it is at least 1% accurate lol.
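For anyone curious why "99 out of a hundred flags are wrong" is even on the table: at very low base rates, even a strong classifier produces mostly false positives. A minimal sketch of that arithmetic, with made-up numbers (the prevalence, sensitivity, and false positive rate below are assumptions, not anything Hive has published):

```python
# Hypothetical base-rate arithmetic behind the "99 out of 100 flags wrong" remark.
# All three inputs are assumed for illustration -- Hive hasn't shared its stats.

prevalence = 0.001          # assumed: 0.1% of uploads actually contain CSAM
sensitivity = 0.95          # assumed: 95% of true CSAM gets flagged
false_positive_rate = 0.01  # assumed: 1% of benign uploads get flagged

true_flags = sensitivity * prevalence
false_flags = false_positive_rate * (1 - prevalence)
precision = true_flags / (true_flags + false_flags)

print(f"Share of flags that are correct: {precision:.1%}")  # ~8.7% under these assumptions
```

Under these made-up numbers, fewer than 1 in 10 flags would be a true positive, even though the model itself looks quite accurate. That's why the precision-of-flags framing in the quote matters more than a headline accuracy figure.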
