Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It’s the earliest AI technology striving to expose unreported CSAM at scale.

143 points

Not a single peep about false positives.

I’m sure it won’t be abused though. And if anyone does complain, just get their electronics seized and checked, because they must be hiding something!

89 points

Reminds me of the A-cup porn ban in Australia a few years ago, because supposedly only pedos would watch that.

50 points

Aw man, I love all titties. Variety is the spice of life.

49 points

Not to mention the self-image impact such a ban would have on women with smaller breasts, who (as I understand it) generally already struggle with poor self-image due to breast size.


Believe it or not, straight to jail.

49 points

There was a porn studio that was prosecuted for creating CSAM. Brazil, I believe. Prosecutors claimed that the petite, A-cup woman was clearly underage. Their star witness was a doctor who testified that such underdeveloped breasts and hips clearly meant she was still going through puberty and couldn’t possibly be 18 or older. The porn star showed up to testify that she was in fact over 18 when they shot the film, bringing all her identification, including her birth certificate and passport. She also said something to the effect that women come in all shapes and sizes and a doctor should know better.

I can’t find an article about it, though. All I’m getting is GOP/Trump pedo nominees and Brazil’s laws on porn.

15 points

Pretty sure the adult star was Lil Lupe. She was everywhere at the time precisely because she did, indeed, look underage.

5 points

I’m just glad they protected her

17 points

This sort of rhetoric really bothers me, especially when you consider that there are real adult women with disorders that make them appear prepubescent. Whether that’s appropriate for pornography is a different conversation, but the idea that anyone interested in them is a pedophile is really disgusting. That is a real, human, adult woman, and some people say anyone who wants to love her is a monster. Just imagine someone telling you that anyone who wants to love you is a monster and that they’re actually protecting you.

4 points

Australia has a more general ban on selling or exhibiting hard porn, but it is legal to possess it. So it’s not just small boobs.

16 points

> It could also, of course, make mistakes, but Kevin Guo, Hive’s CEO, told Ars that extensive testing was conducted to reduce false positives or negatives substantially. While he wouldn’t share stats, he said that platforms would not be interested in a tool where “99 out of a hundred things the tool is flagging aren’t correct.”

I take this to mean it is at least 1% accurate lol.

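To put rough numbers on why that quote is doing so much work: whether flags are mostly wrong depends on the base rate of the content, not just the model’s accuracy. A quick Bayes sketch in Python; all the rates below are made-up assumptions for illustration, not Hive’s real stats:

```python
# Base-rate sketch: how precise is a "99% accurate" flagger?
# All numbers here are illustrative assumptions, not Hive's real stats.

def flag_precision(prevalence: float, tpr: float, fpr: float) -> float:
    """P(actually bad | flagged), via Bayes' theorem."""
    true_flags = prevalence * tpr          # bad content, correctly flagged
    false_flags = (1 - prevalence) * fpr   # innocent content, wrongly flagged
    return true_flags / (true_flags + false_flags)

# Assume 1 in 100,000 uploads is CSAM, 99% detection rate, 1% false-positive rate.
p = flag_precision(prevalence=1e-5, tpr=0.99, fpr=0.01)
print(f"Share of flags that are correct: {p:.2%}")  # ~0.10%
```

With those assumptions only about 0.1% of flags would be correct, so “99 out of a hundred things the tool is flagging aren’t correct” would actually be an optimistic outcome when the content is rare.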
140 points

Thorn, the company backed by Ashton Kutcher, which lobbied to get all messages in the EU monitored via Chat Control. No thanks.

https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/

72 points

Just remember, folks: Kutcher is a slimeball too.

The guy went from a D-list star hanging out with the likes of Danny Masterson and going to Diddy’s infamous parties to suddenly, overnight, courting the US government and becoming the face of ‘helping’ children everywhere.

Yeah right……

25 points

People can grow and change. Not saying he did or didn’t, just that people aren’t a monolith. It’s plausible he simply grew and his views changed/evolved.

That being said, it’s highly convenient where he’s positioned himself these days…

25 points

I’d be wary of calling him guilty by association. Maybe when he realized who he was really hanging out with he was so horrified and disgusted that he just had to get involved and do something to fight back?

9 points

It’s awfully coincidental that he seems to hang out with the ‘rapist’ crowd, even going as far as writing a letter for Masterson about what a nice guy he is, to try to get him a lenient sentence.

Even Hollywood has ostracized him and his wife; news sites recently reported they were looking to leave the country and let things cool off for a while.

I’m sure everyone who keeps posting here is right, though: he’s a swell guy who was just in the wrong place at the wrong time, multiple times. Several years’ worth of multiple times, with the wrong people. Just a coincidence.

1 point

Nah, it’s much easier to chastise people for not knowing what nobody knew.

2 points

Wasn’t he also featured in a video about how he couldn’t wait for Hilary Duff and the Olsen twins to turn 18, because he wanted to date them when they were like 15?

49 points

> It’s the earliest AI technology striving to expose unreported CSAM at scale.

horde-safety has been out for a year now. Just saying… It’s not a purpose-trained model like this one, but it still uses neural networks (i.e. “AI technology”).

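For the curious, the general technique behind tools like this is zero-shot classification with a vision-language model. The sketch below uses OpenCLIP to show the idea; it is not horde-safety’s actual API (check its repo for that), and the label prompts and threshold are made-up placeholders:

```python
# Minimal zero-shot image-safety gate with OpenCLIP.
# A sketch of the general technique, NOT horde-safety's real API;
# the label prompts and threshold are illustrative placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")

LABELS = ["a normal, harmless photo", "unsafe content"]  # placeholder prompts

def is_flagged(path: str, threshold: float = 0.8) -> bool:
    image = preprocess(Image.open(path)).unsqueeze(0)
    text = tokenizer(LABELS)
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(text)
        img = img / img.norm(dim=-1, keepdim=True)   # normalize embeddings
        txt = txt / txt.norm(dim=-1, keepdim=True)
        probs = (100.0 * img @ txt.T).softmax(dim=-1)
    return probs[0, 1].item() > threshold  # probability of the "unsafe" prompt

print(is_flagged("upload.jpg"))
```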
8 points

How did you figure out it had issues with broccoli? Were you checking your vegetable gallery for CSAM?

8 points

Haha, nah. People reported some unexpected censors, and we investigated what part of their prompt might be causing it.

25 points

Man… That AI is going to be so fucked up when it gains sentience

7 points

Skynet’s real origin story. We might just deserve Judgment Day.

0 points

Oh we definitely do! Definitely for this, and definitely for many other things.

24 points

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All its predecessors have been kept for the sole use of the big players, even though the populist line is always that we need total surveillance to keep the children safe…

14 points

I was going to say… Sure would be nice to have this feature in all the open source AI image generator tools but you’re absolutely right 😩

11 points

Yeah, unless someone at least publishes a set of hashes of known bad content for the general public, I kind of doubt the true intention is preventing CSAM to the benefit of everyone.

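For illustration, here’s roughly what such a published list would buy an admin. This sketch assumes a hypothetical blocklist of perceptual hashes and uses the imagehash library; the example hash and distance cutoff are placeholders:

```python
# Sketch: matching uploads against a published perceptual-hash blocklist.
# The blocklist entry and distance cutoff are hypothetical examples.
import imagehash
from PIL import Image

# In reality this would be a large list published by a safety organization.
BLOCKLIST = {imagehash.hex_to_hash("ffd8e0c0a0908880")}

def matches_blocklist(path: str, max_distance: int = 4) -> bool:
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    # Hamming distance tolerates re-encoding, resizing, and small edits.
    return any(h - bad <= max_distance for bad in BLOCKLIST)

print(matches_blocklist("upload.jpg"))
```

Unlike a classifier, this only ever matches near-duplicates of known material, which is exactly the trade-off discussed further down the thread.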
4 points

IFTAS is already working with Thorn towards this goal. But you already have access to such technology through my toolset.

2 points

This one? I loosely followed your work… Maybe I should try it someday and see how it does on a regular VPS. Thanks for the link to IFTAS; seems they have curated some useful links, so I’ll have a look at their articles. Hope they get somewhere with that. At this point, I don’t think there is any blocklist accessible to the average Fediverse admin?!

Edit: Thx, saw your other comment with the link to horde-safety.

2 points

Yeah, a normal VPS would be too slow for production use, as a GPU is recommended. But you can plug in any home PC to do it without risk.

1 point

If everyone has access to the model, it becomes much easier to find and validate obfuscation methods, and it turns into an uphill battle. It’s unfortunate, but it’s an inherent limitation of most safeguards.

2 points

You’re probably right. I’m not sure it’s a good idea to walk close to the edge with things like this, though. Every update to the detection model could change things and land someone in jail… so I certainly wouldn’t play a cat-and-mouse game with something that carries several years of jail time. But then I don’t really know the thought process of the average pedo.

And AI image detection comes with problems anyway. In the article they say it has already detected 6 million pictures, while keeping quiet about the rate of false positives. We know people have gotten in serious trouble over (false) claims. And I also wouldn’t want to be the Fediverse admin who has to go through thousands of flagged pictures, look at them, and decide which is which, with consequences attached…

Maybe a database of hashes would be the only option. That doesn’t detect new pictures, but at the same time it comes without false positives, and you can’t draw conclusions from hash values.

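To illustrate that last point: cryptographic hashes like SHA-256 change completely when a single byte of the input changes, which is why exact hash matching has essentially no false positives and reveals nothing about the image, but also misses any re-encoded copy. A quick demo:

```python
# One changed byte gives an unrelated SHA-256 digest: no false positives,
# nothing recoverable from the digest, but no tolerance for re-encodes either.
import hashlib

original = b"example image bytes"
tweaked = b"example image bytez"  # a single byte differs

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())  # shares no structure with the first
```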
