Secret? As opposed to all the blatant AI bot accounts?
Indeed, and it’s exactly what we did in r/BotDefense before greedy piggy spez shut down the API and the communities some had built.
I used to watch in amazement as some of the guys set up bots to report the bots to us.
You were on the mod side of r/BotDefense? I was a very avid reporter to it (so much so that people thought that I was a bot) and I was eventually added to the secret Bot Defense subreddit that automatically flagged our reports as bots. I jumped ship when the API change came since I saw how deeply vital access to that info was.
Do you know of any active analogous systems for Lemmy? Or do you have any ideas as to what we could implement here to abate bad actors?
I had mentioned this idea some time ago but it’s way beyond me to know how to set something like it up. Would you be willing and/or able to help out? What are your suggestions?
Read the article instead of responding to the title. It was a university conducting formal research that created AI bots impersonating different identities, making “As a black man…” style posts in r/ChangeMyView.
The subreddit mods issued a formal complaint to the university when they learned of it, but the university is choosing not to block its publication on the grounds of lack of harm.
Ummm. They knew, guys.
Also, this is extremely violating. And this bullshit: “We believe the potential benefits of this research substantially outweigh its risks.”
What kind of psychotic assholes run an experiment on an unsuspecting public because they believe it’s OK to fraudulently engage with others without their potential subjects being aware of it?
This is something Trump would do.
Smh. Sigh
Why did you need to do a whataboutism?
I said the act was wrong.
Wasn’t right in Tuskegee, wasn’t right in Auschwitz, wasn’t right with MKUltra, and it wasn’t right here.
No matter the severity of the act, it’s goddamn wrong.
Ok?
You immediately assume I disagree with you. All I meant was that they’ve done it before, and nobody should be surprised if it happens again, whether it’s the government or a corporation.
Ok?
Saying this like Reddit hasn’t been bot-infested for a decade.
I mean, there were some genuine-looking bots well before LLMs and AI, and even then you could just be lazy and make a copy-post bot that reposted old content for karma farming while the terminally online userbase upvoted your slop for you.
So they figured it was a good idea to use a racially charged fake profile to provoke online users for an ‘experiment’? And one would assume these responses were subsequently studied without the posters’ consent?
That’s going to run afoul of a few European privacy rules, I imagine. Someone definitely needs to get fired, blackballed, and sued for this. At the very least.
The bots would pretend to be black in order to say “as a black man, I don’t like BLM” or pretend to be a male rape victim so they could say “the experience wasn’t as traumatic as some would say”. How the fuck do they reconcile this as ethical? They’re actively arguing with real people and acting like it’s all just random data. Social scientists once again dehumanizing their subjects, I guess.