29 points

AI companies work around this by paying human classifiers in poor but English-speaking countries to generate new training data. Since the classifiers are poor but not stupid, they augment their low piecework income by using other AIs to do the annotation work for them. See, AIs can save workers labor after all.

On the one hand, you’d think the AI companies would check to make sure the classifiers aren’t using AI themselves and damaging the models.

On the other hand, AI companies are being run by some of the laziest and stupidest people alive, and are probably just rubber-stamping everything no matter how blatantly garbage the results are.

14 points

Is there even a way to check whether something was written by an LLM? The only way I can think of is to monitor their computers and make them turn on their webcams to see whether they’re using any other devices.

13 points

Oh, that’s beautiful. This is at the crux of generative AI’s disruptive potential: you can never tell for sure whether something is AI, at least in theory. For most meaningful tasks its output is often dubious enough to give it away, but for the mind-rotting stuff done to train the models there’s no way they can tell, short of monitoring their microtaskers. And proctoring is no trivial task: considering the pittance they’re paying, I doubt any effective proctoring would pay for itself.
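For what it’s worth, the standard automated tell is statistical: score the text’s perplexity under a reference language model and flag anything suspiciously predictable. Here’s a rough sketch of that heuristic, assuming the Hugging Face transformers library; the gpt2 reference model and the 30.0 cutoff are illustrative guesses, not anything the data vendors are known to run:

```python
# Naive perplexity-based "was this written by an LLM?" heuristic.
# Low perplexity under a reference model is weak evidence of machine-generated
# text; it is easy to fool and misfires on plain, formulaic human writing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text against itself: the model predicts each token from the
    # ones before it, and the loss is the mean negative log-likelihood.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_machine_written(text: str, threshold: float = 30.0) -> bool:
    # Lower perplexity = more predictable to the reference model.
    # The threshold here is made up; real deployments tune it and still misfire.
    return perplexity(text) < threshold
```

Which is exactly why it doesn’t help here: bland, formulaic human writing scores low too, and a microtasker only has to lightly paraphrase the model’s output to slip past it.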

In the end, Saltman will be the main victim of the disruption he hoped to unleash on the world at large.
