57 points

Worthless research.

That subreddit bans you for accusing others of speaking in bad faith or of using ChatGPT.

Even if a user called it out, they’d be censored.

Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad faith disinfo.

28 points

accusing others of speaking in bad faith

You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?

And please don’t tell me it’s about “civility”. Bad faith is the more civil accusation when the alternative is that your debate partner is a fool.

21 points

I won’t tell you it’s about civility, because:

How could that do anything besides shield the sealions, JAQoffs, and grifters?

Not shield, but amplify.

That’s the point of the subreddit. I’m not defending them, if that’s how I came across at all.

ChatGPT debate threads are plaguing /r/debateanatheist too. The mods stay silent while users ask them to ban this disgusting behavior.

I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.

9 points

To be fair, LLMs are probably a good match for the gish-gallop style of bad-faith argument religious people like to use. If all you want is a high volume of arguments, it is probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good.

1 point

Just ignore him; he got banned for posting his balls in a thread about cats wearing clothes.

39 points

I mean, Reddit was botted to death after the Jon Stewart event in DC. People and corporations realized how powerful Reddit was. It sucks that the site didn’t try to stop it. Now AI just makes it easier.

7 points

Next they’ll be coming for Lemmy too.

14 points

I don’t think Lemmy is big enough to be “next”, but this is still a valid concern.

3 points

Why not? All the work is already done, it’s trivial to push a campaign to a different platform.

7 points

At least here we have Fediseer to vet instances, and the ability to vet each sign-up.

I think eventually, once we’re targeted more heavily, we’ll have to circle the wagons, so to speak, and limit communication to the more carefully moderated instances that root out the bots.
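
As a rough sketch of what that kind of instance-level vetting could look like, something like the snippet below could ask Fediseer whether a domain has a guarantor and any endorsements before federating with it. To be clear, the endpoint paths and response fields here are my assumptions, not verified API details; check Fediseer’s actual API docs before building on this.

```python
import requests

# Sketch only: vet an instance via Fediseer before federating with it.
# The /guarantees and /endorsements endpoint paths and the "instances"
# response field are assumptions; verify against the real Fediseer API.
FEDISEER = "https://fediseer.com/api/v1"

def looks_vetted(domain: str, min_endorsements: int = 1) -> bool:
    """Rough heuristic: an instance with a guarantor and at least a few
    endorsements is less likely to be a disposable bot farm."""
    guarantees = requests.get(f"{FEDISEER}/guarantees/{domain}", timeout=10).json()
    endorsements = requests.get(f"{FEDISEER}/endorsements/{domain}", timeout=10).json()
    has_guarantor = bool(guarantees.get("instances"))
    endorsed_by = len(endorsements.get("instances", []))
    return has_guarantor and endorsed_by >= min_endorsements

if __name__ == "__main__":
    for instance in ("lemmy.world", "example.social"):
        print(instance, "->", "federate" if looks_vetted(instance) else "hold off")
```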

2 points

Fair point about AI-generated comments. What’s your take on how this affects online discussions? Are we losing genuine interactions or gaining new insights?

5 points

Adding more noise does nothing to add insight; it just makes it more exhausting to pick a position yourself.

If everything seems nuanced, it’s easier to give up on caring in any meaningful way, because you come to believe there is no good answer.

4 points

On political topics it is very likely that we just gain a few hundred more repetitions of the same arguments that were already going in circles before.

29 points

This is deeply unethical. When doing research you need to respect the people who participate, and you have to respect their story. Using a regurgitative artificial idiot (RAI) to change their minds respects neither them nor their story.

The people being experimented on were not compensated for their time or the work they contributed. While compensation isn’t required, it is good practice in research not to actively burn bridges with people, so that they will want to participate in future studies.

These people were also never told they were participating in a study, nor were they given the choice to withdraw their contributions at will. That alone makes the study unpublishable, since the data was not gathered with fucking consent.

This isn’t even taking into account the other ethical lines that were crossed. None of the “researchers” involved should ever be allowed to conduct or participate in a study of any kind again. Their university should be fined and heavily scrutinized for its role in enabling this shit. These assholes have done damage to researchers globally, who will now have a harder time pitching real studies to potential participants who remember this story and how “researchers” took advantage of unknowing individuals. Shame on these people; I hope they face real consequences.

11 points

These researchers conducted their study in a totally unethical manner, and they deserve to be stripped of tenure and lose any research funding they have.

It already sounds like the university is preparing to just protect them and act like it’s no big deal, which is discouraging but I suppose not surprising.

6 points

I absolutely agree these “researchers” deserve to lose their tenure and their funding. In my mind they don’t even deserve to be called researchers anymore, since they treat their job as an extractive one. They hold no regard for the people they impacted, or for how that impacts the entire field of research.

If the university does protect these people, then I can only hope that no one signs up to participate in any future studies they try to conduct.

-8 points

This is deeply unethical,

I feel like maybe we’ve gone too far on research ethics restrictions.

We couldn’t do the Milgram experiment today under modern ethical guidelines. I think that it was important that it was performed, even at the cost of the stress that participants experienced. And I doubt that it is the last experiment for which that is true.

If we want to mandate careful scrutiny of such experiments, and some after-the-fact compensation for participants subjected to trauma-producing deception, maybe that’d be reasonable.

That doesn’t mean every study that violates present ethics standards should be greenlighted, but I do think that the present bar is too high.

7 points

There are very good reasons why our modern code of ethics exists in the first place. As researchers we are not there to do harm, but to uplift the people we work with in the process. We are not there to extract information, but to work with people to better understand how to improve their lives.

The Milgram Experiment, while fascinating, is deeply unethical in its own right and should not be used as an example of anything other than the damage caused by conducting an unethical study. That study alone has caused many would-be participants to walk away, because how can researchers be trusted with a new study? The experiment was not stopped even when it was clear the participants were under high pressure and showing visible signs of stress. This is not an extractive field like you imply; that mindset is a morally bankrupt philosophy.

Compensating participants is a sign of goodwill and shows you value their time and the work they put in. It does not matter whether trauma is brought up or created, as in the Milgram Experiment. You do it because it creates goodwill and helps people feel safe in the knowledge that both you and the institution you represent actually care. The circumstances under which you offer compensation are not up for debate; you just offer it.

The greater good does not come from predatory, extractive experiments, but from studies that value and care for their participants. It is impossible to know just how many people have been turned away from participating because of studies like the one in the article, the Milgram Experiment, and the Stanford Prison Experiment. What we do know is that they have had an extremely negative effect on the perception of academic research.

7 points

Fuck off with your greater good spiel.

5 points

It sickens me how people use the phrase to justify harming others.

26 points

To me it was kind of obvious. There were a bunch of accounts that would comment these weird sentences, and all of them had variants of JohnSmith1234 as their username. Part of the reason I left, tbh.
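
For what it’s worth, that pattern is simple enough to sketch: the toy snippet below flags usernames that look like those auto-generated “WordWord1234” defaults. Purely illustrative, and the regex is an assumption about the naming scheme; real bot detection needs far more signal than a username.

```python
import re

# Toy heuristic: flag usernames shaped like auto-generated defaults,
# e.g. "JohnSmith1234" or "John_Smith_4821". The exact pattern is an
# assumption; real detection would need posting behavior, timing, etc.
DEFAULT_STYLE = re.compile(r"^[A-Z][a-z]+[_-]?[A-Z][a-z]+[_-]?\d{2,4}$")

for name in ("JohnSmith1234", "John_Smith_4821", "actual_human", "JaneDoe99"):
    verdict = "suspicious" if DEFAULT_STYLE.match(name) else "looks fine"
    print(f"{name}: {verdict}")
```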

12 points

I was gonna say, anyone with half a brain who has poked their head into Reddit over the past year or two will have seen a shitload of obvious bots in the comments.

24 points

I haven’t seen this question asked:

How can the results be trusted when the “humans” the bots were persuading may not have been real humans at all?

What’s the percentage of bot-to-bot contamination?

This study looks more like a hacky farce meant to draw attention to how we’re manipulated, and less like actual science.

Any professional who puts their name on this steaming pile should be ashamed of themselves.

6 points

“Polls show that 99.9% of people like to take polls”

