Secret? As opposed to all the blatant AI bot accounts?
Indeed, and it’s exactly what we did in r/BotDefense before greedy piggy spez shut down the API and the communities some had built.
I used to watch in amazement as some of the guys set up bots just to report other bots to us.
You were on the mod side of r/BotDefense? I was an avid reporter to it (so much so that people thought I was a bot) and was eventually added to the secret Bot Defense subreddit that automatically flagged our reports as bots. I jumped ship when the API change came, since I saw how vital access to that info was.
Do you know of any active analogous systems for Lemmy? Or do you have any ideas for what we could implement here to deter bad actors?
I mentioned this idea some time ago, but setting something like that up is way beyond me. Would you be willing and/or able to help out? What are your suggestions?
Read the article instead of just responding to the title. It was a university conducting formal research: they created AI bots that impersonated different identities, making “As a black man…”-style posts in r/ChangeMyView.
The subreddit mods filed a formal complaint with the university when they learned of it, but the university is choosing not to block publication on the grounds that no harm was done.