A trial program conducted by Pornhub in collaboration with UK-based child protection organizations aimed to deter users from searching for child sexual abuse material (CSAM) on its website. Whenever CSAM-related terms were searched, a warning message and a chatbot appeared, directing users to support services. The trial reported a significant reduction in CSAM searches and an increase in users seeking help. Despite limitations in the data and the difficulty of measuring deterrence, the chatbot showed promise in discouraging illegal behavior online. While the trial has ended, the chatbot and warnings remain active on Pornhub’s UK site, with hopes that similar measures will spread to other platforms to create a safer internet environment.
Sounds like a good feature. Anything that stops people from doing that is great.
But I do have to wonder… were people really expecting to find that content on PornHub? That site certainly seems legit enough that I doubt they’d have that stuff on there. I’d imagine most actual content would be on the dark web and specialty groups, not on PH.
PH had a pretty big problem with CSAM a few years ago; they ended up wiping ~2/3rds of their user-submitted content to try to fix it. (Note: they wiped all non-verified user-submitted videos, not that all of it was CSAM.)
And I’m guessing they are trying to catch users who are trending towards questionable material. “College”✅ -> “Teen”⚠️ -> “Young Teen”⚠️⚠️⚠️ -> “CSAM”🚔 etc.
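Purely speculating, but a first pass at that kind of escalation detection could be nothing more than a severity-weighted keyword list that trips the warning/chatbot once a query crosses a threshold. A minimal sketch in Python (every term, weight, and threshold here is made up for illustration; nothing about PH’s actual system is public):

```python
# Hypothetical sketch of a severity-scored search filter.
# Term lists, weights, and the threshold are invented for illustration,
# not a description of Pornhub's real deterrence system.

WATCHLIST = {
    "young teen": 3,   # escalating term
    "teen": 1,         # borderline term
}
BLOCKLIST = {"csam"}   # placeholder for explicitly illegal search terms

def assess_query(query: str) -> str:
    """Return 'intercept', 'warn', or 'allow' for a search query."""
    q = query.lower()
    if any(term in q for term in BLOCKLIST):
        return "intercept"   # suppress results, show warning + chatbot
    score = sum(weight for term, weight in WATCHLIST.items() if term in q)
    if score >= 3:
        return "warn"        # show the deterrence message
    return "allow"

print(assess_query("college"))     # allow
print(assess_query("teen"))        # allow (score 1)
print(assess_query("young teen"))  # warn  (score 4: matches both terms)
```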
This is one of the more horrifying features of the future of generative AI.
There is literally no stopping it at this stage: AI-generated CSAM will be possible soon thanks to systems like Sora.
This is disgusting and awful. But one part of me hopes it can end the black market of real CSAM content forever. By flooding it with infinite fakes, users with that sickness can look at something that didn’t come from a real child’s suffering. It’s the darkest of silver linings, I think, but I spoke with many sexual abuse survivors who feel the same about loli hentai in Japan: that it could be an outlet for these individuals instead of them seeking out the real thing.
Dark topics. But I hope to see more actions like this in the future. If pedos can self-isolate from IRL interactions and curb their urges with content that harms no one, then everyone wins.
The question is whether consuming AI CP helps regulate a pedophile’s behavior or enables a progression of the condition. As far as I know, that is an unanswered question.
It has very much already been answered:
For porn in general, yes - I think the data is rather clear. But for CP or related substitute content it’s not that definitive (to my knowledge), if only because it’s really difficult to collect data on such a sensitive topic.
4.4 million sounds a bit excessive. Facebook marketplace intercepted my search for “unwanted gift” once and insisted I seek help. These things have a lot of false positives.
Probably just looking for deals on new stuff that people don’t care about having been gifted.
I could definitely see “unwanted gift” being a code word for trafficking :(
It’s surprising to see Aylo (formerly MindGeek) coming out with the most ethical use of AI chatbots, especially when Google Gemini cannot even condemn pedophilia.
In the link you shared, Gemini gave a nuanced answer. What would you rather it say?
Are you defending pedophilia? This is an honest question, because you are saying it gave a nuanced answer when we all should know that it’s horribly wrong and awful.
Google does this too. My wife was searching for “slutty schoolgirl” costumes and Google was like “have a seat ma’am”.
I do have to agree with them on that one. Fetishizing school uniforms worn by children gives some serious Steven Tyler vibes.
Sexuality is tightly connected to societal taboos. As long as everyone involved is a consenting adult, it’s no one else’s business. There is no need or benefit in moralizing about people’s sexuality.