Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

11 points

Nvidia doing their part to help consumers associate AI with unwanted useless bloatware that’s foisted upon them.

https://arstechnica.com/gaming/2024/12/the-new-nvidia-app-is-probably-hurting-your-pc-gaming-performance/

6 points

Not A Sneer But: “Princ-wiki-a Mathematica: Wikipedia Editing and Mathematics” and a related blog post. Maybe of interest to those amongst us whomst like to complain.

3 points

very interesting, thank you for sharing

9 points

ai fan asks chempros about their use of lying boxes: majority opinion is that this shit is useless, leaks confidential information, and is a massive legal liability.

https://www.reddit.com/r/Chempros/comments/1hgxvsj/ai_in_the_workplace_how_have_chemistsscientists/

top response:

It’s a good trick to be instantly dismissed. No, really, that’s the latest I had in terms of company policy. If you’re caught using AI for anything, you’re out the door. It’s a lawsuit waiting to happen (and a lawsuit we cannot defend against). Gross misconduct, not eligible for rehire, and all that. Same as intentionally misrepresenting data (because it is). (Pharma)

6 points

AI could be a viable test for bullshit jobs as described by Graeber. If the disinformatron can effectively do your job, then doing it well clearly doesn’t matter to anyone.

2 points

idk, genai can fuck up a couple of these too

3 points

It’s not an exhaustive search technique, but it may be an effective heuristic if anyone is planning The Revolution™.

1 point

Days since last comparison of Chat-GPT to shitty university student: zero

More broadly I think it makes more sense to view LLMs as an advanced rubber ducking tool - like a broadly knowledgeable undergrad you can bounce ideas off to help refine your thinking, but whom you should always fact check because they can often be confidently wrong.

Seriously, why does everyone like this analogy?

1 point

good question, i have no clue, especially since i wasn’t like this as an undergrad. it’s really not hard to say “i don’t know, boss” or “more experimental data is needed”, and chatgpt will never say this

a shitty undergrad probably won’t leak confidential info either (maybe on the sender side, but never on the receiver side, as in receiving unexplained stolen confidential info from cosmic noise)

7 points

From the replies:

In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up, the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus, how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.

Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that they were not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave Chatgpt a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs are they going to include listening to Chatgpt in it? If you do, then you need to make sure that OpenAI has their program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

And a good sneer:

With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.

1 point

for anyone wondering, cGMP/cGLP means current good manufacturing/laboratory practice, and it’s mostly a set of paperwork concerning audits etc. and the repeatability of everything

11 points

In further bluesky news, the team have a bit of an elon moment and forget how public they made everything.

https://bsky.app/profile/miriambo.bsky.social/post/3ldq2c7lu6c25 (only readable if you are logged in to bluesky)

11 points

the team have a bit of an elon moment

“Oh shit, which one of them endorsed the German neo-Nazis?”

Aaron likes a porn post

“Whew.”

7 points

The True Anon podcast (which began by dissecting the Jeffrey Epstein case) goes deep on Luigi, his shooting, and his grey-tribe ideological background.

https://www.patreon.com/posts/episode-425-blue-118079355

