Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at detecting unreported CSAM at scale.

14 points

I was going to say… Sure would be nice to have this feature in all the open-source AI image generator tools, but you’re absolutely right 😩

11 points

Yeah, unless someone publishes even a set of hashes of known bad content for the general public… I kind of doubt the true intention is preventing CSAM for the benefit of everyone.

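For context on the hash-list idea raised above: a minimal sketch in Python of how an upload check against a published set of known-bad hashes could work. The file names (`known_bad_hashes.txt`, `incoming_upload.jpg`) are hypothetical, and real deployments such as PhotoDNA-style systems use perceptual hashes that survive re-encoding, whereas this exact SHA-256 match only catches byte-identical files.

```python
# Minimal sketch: check an uploaded file against a published list of known-bad hashes.
# The hash-list file and upload path below are hypothetical placeholders.
import hashlib
from pathlib import Path


def load_hash_set(path: str) -> set[str]:
    """Load one lowercase hex SHA-256 digest per line into a set."""
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    }


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_bad(upload_path: str, hash_set: set[str]) -> bool:
    """Return True if the upload's digest appears in the published hash set."""
    return sha256_of_file(upload_path) in hash_set


if __name__ == "__main__":
    hashes = load_hash_set("known_bad_hashes.txt")  # hypothetical published list
    print(is_known_bad("incoming_upload.jpg", hashes))
```

Note the limitation this illustrates: an exact cryptographic hash only matches known, unmodified files, which is exactly why the announced model (aimed at unknown, unreported material) is a different kind of tool than a public hash list.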
