Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)
is biometric “gait analysis” total bullshit? it feels like total bullshit, but I only DuckDuckWent for 20 seconds or so, idk
I don’t know. Some walks are very distinctive.
Starting things off with a fresh post from Brian Merchant: Tech under Trump, part 1
Sidenote: Love how the tech VCs all grew up in a media landscape of tech workers going ‘the management of this company is a group of idiots’ and then didn’t think that would apply to themselves.
someone pointed out that (paraphrasing) “yeah, you and I are never gonna care for autoplag output but kids are gonna grow up on it and expect it for everything” and that makes me want to do bad things.
ehh i don’t know, as a child i’d occasionally get a vhs with weird cheap counterfeit cartoons on it and they just creeped me out. children can actually tell imo.
I can see the challenge of sorting AI slop from actual art or writing being normalized, the same way occasionally checking your spam filter in case an important work email got filed alongside “GrOwYoUrEgGpLaNtEmOjIfOrChEaP” is normalized. But there’s a difference between a world where AI slop exists and AI slop actually being worth a damn.
The promptfans testing OpenAI Sora have gotten mad that it’s happening to them and (temporarily) leaked access to the API.
https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/
“Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the [Sora early access] program for a $150B valued [sic] company,” the group, which calls itself “Sora PR Puppets,” wrote in a post …
“Well, they didn’t compensate actual artists, but surely they will compensate us.”
“This early access program appears to be less about creative expression and critique, and more about PR and advertisement.”
OK, I could give them the benefit of the doubt: maybe they’re new to the GenAI space, or the general ML space … or IT.
But I’m not going to. Of course it’s about PR hype.
I’d say lol but I’m like 72% sure this is straight out of the video game industry’s playbook and very much intentional to create hype because everyone has forgotten this shit even exists.
Also, I’m still waiting for just one use case for video-generating autoplag that is, even in theory, not either morally reprehensible or outright criminal.
I woke up and immediately read about something called “Defense Llama”. The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/
Scale AI advertised their chatbot as being able to:
apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities
However, their marketing material, as is tradition, includes an example of terrible advice. Which is not great given it’s about blowing up a building “while minimizing collateral damage”.
Scale AI’s response to the news pointing this out: complaining that everyone took their murderbot marketing material seriously.
The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.
On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.
I’ve been wondering about this
On the one hand, military procurement (at least afaik) tends toward complete, functional products
On the other hand, military R&D programs have been among the most spectacularly profligate financial black holes in recent decades
None of the options involved feel great, even if “it gets shunted from mil procurement and all industry claims get publicly brandished as the bullshit it is” comes to pass (which tbh still feels like an optimistic outcome, with unclear time horizons)
I mean, it fits into the pattern of procurement projects that aren’t allowed to fail despite having had serious coherence issues starting at the design stage. Though the military is usually less prone to the “problem in search of a solution” dynamic that VCs fall into, once a project gets started it can shamble forwards as a zombie for years before anyone finds the political will to kill it.