Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this - gonna try posting last week’s thread a different way this time)
Hi, I’m new here. I mean, I’ve been reading but I haven’t commented before.
I’m sure you all know about how cheap labour is used for labelling data for training “AI” systems, but I just came across this video and wanted to share. Apologies if it has already been posted: Training AI takes heavy toll on Kenyans working for $2 an hour.
Welcome! The situation these people find themselves in is dire, since they’re both literally and emotionally as far as possible from the people making decisions about their labor. The modern economy doesn’t function without exploitation, and generative AI is the latest innovation in expanding that exploitation and pushing it farther away from the people who benefit and who make the decisions that require it. It does to modern knowledge workers what automation and outsourcing did to manufacturing, and the distance is sufficient that I don’t expect to see even the kind of lukewarm pushback that sweatshops got in the 90s actually manifest for them.
In the aftermath of an LGBT hate incident, the then-CEO of cloud computing giant Digital Ocean told upset staff his mentor was a member of the KKK, in an attempt to explain why they must bend their values because “we love the company”
presented without comment
Asking employees to “bend” their perfectly sensible values like “I don’t like homophobes” or “members of the KKK suck” is insane to me, but exactly the sort of thing a tech CEO would think would resonate with his workers.
I stay at my job not because I have molded my soul into a perfect vessel for my company’s values (which, TBH, kind of suck), but because I have a mortgage payment.
(Also as the header graphic points out, “love is at our core” and “inclusive environment” are apparently some of their values so maybe it’s Digital Ocean which needs to bend to Digital Ocean’s values).
At least there’s a happy ending:
A month after the all-hands meeting, in August 2023, DigitalOcean announced that it was conducting a search for a new CEO, but did not say why.
My hands are requesting the CEO’s home address (in ~~Minecraft~~ real life):
Quick bonus I found in the replies:
And a quick sidenote from me:
This is sorta repeating a previous prediction of mine, but I expect this AI bubble’s gonna utterly tank the public image of tech as a whole. When you develop a tech whose primary use case boils down to “make the world worse so the line can go up”, it’s gonna be virtually impossible for the public to forgive you.
Being more specific, I expect artists/musicians/creatives in general to be utterly hostile to AI, if not tech as a whole - AI has made their lives significantly harder in a variety of ways, and all signs point to the tech industry having done so willingly.
in real life
Careful not to post things that could get you into trouble with a judge without a sense of humor, or just with cops/secret service. Wouldn’t be the first time I’ve heard of a home visit because the cops got confused by a joke.
Esp if the conspiracy theory about the murder being crypto-related turns out to be true and they confuse this place for a pro-cryptocurrency place.
E: on topic, just not LLMs and artists: the idea that Musk got Trump elected will also cause a backlash. (Which I find dubious, it was more despite the man, they made the election be about a gay OnlyFans support squirrel ffs)
5-10 years ago I’d say OP’s comment is definitely protected under the First Amendment (assuming US based) but now who the fuck knows what those turdwagons on the bench will come up with to dismantle it.
Sure, but that step comes after the cops lift you out of your bed. And you get a good lawyer etc etc. Nobody was charged after the home visit but it still was a home visit, granted that was in .nl so less risk of being shot or getting your stuff stolen. And that is assuming the law still works.
edit: context https://www.independent.co.uk/tech/chatgpt-david-mayer-name-glitch-ai-b2657197.html
Time for another round of Rothschild nutsos to come around now that ChatGPT can’t say one of their names.
At first I was thinking, you know, if this was because of the GDPR’s right-to-be-forgotten provisions or something, that might set a nice precedent. I would love to see a bunch of people hit AI companies with GDPR complaints and have them actually do something instead of denying their consent-violator-at-scale machine has any PII in it.
But honestly it’s probably just because he has money
I think Sam Altman’s sister accused him of doing this to her name a while ago too (semi-recent example). I don’t think she was on a “don’t generate these words ever” blacklist, but it seemed like she was erased from the training data and would only come up after a web search.
Being erased from the training data is frankly even more galling than the kind of brute force GDPR compliance they seem to have been using. It puts the lie to any claim that they’re just “moving fast and breaking things” without mind to the consequences, because clearly there was some reason to prune the training data and make sure that the model didn’t have certain information when it was to the company’s (or the founder’s) liking.
I’m not super familiar with Lobsters but I love how they represent bans: https://lobste.rs/~SuddenBraveblock
- Joined: 5 years ago
- ✧∘* 🌈"““Left””"🦄✧・゚: 3 hours ago
@hrrrngh @gerikson Pretty sure that’s that person deleting their account and not a ban; bans look a bit different (e.g. https://lobste.rs/~AAAAAAAAAAAAAAAA)