Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
OK to start us off how about some Simulation Hypothesis crankery I found posted on ActivityPub: Do we live in a computer simulation? (Article), The second law of infodynamics and its implications for the simulated universe hypothesis (PDF)
Someone who’s actually good at physics could do a better job of sneering at this than me, but I mean but look at this:
My law can confirm how genetic information behaves. But it also indicates that genetic mutations are at the most fundamental level not just random events, as Darwin’s theory suggests.
A super complex universe like ours, if it were a simulation, would require a built-in data optimisation and compression in order to reduce the computational power and the data storage requirements to run the simulation.
How sneerable is the entire “infodynamics” field? Because it seems like it should be pretty sneerable. The first referenced paper on the “second law of infodynamics” seems to indicate that information has some kind of concrete energy which brings to mind that experiment where they tried to weigh someone as they died to identify the mass of the human soul. Also it feels like a gross misunderstanding to describe a physical system as gaining or losing information in the Shannon framework since unless the total size of the possibility space is changing there’s not a change in total information. Like, all strings of 100 characters have the same level of information even though only a very few actually mean anything in a given language. I’m not sure it makes sense to talk about the amount of information in a system increasing or decreasing naturally outside of data loss in transmission? IDK I’m way out of my depth here but it smells like BS and the limited pool of citations doesn’t build confidence.
I read one of the papers. About the specific question you have: given a string of bits s, they’re making the choice to associate the empirical distribution to s, as if s was generated by an iid Bernoulli process. So if s has 10 zero bits and 30 one bits, its associated empirical distribution is Ber(3/4). This is the distribution which they’re calculating the entropy of. I have no idea on what basis they are making this choice.
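For anyone who wants to poke at that choice themselves, here’s a minimal sketch of the calculation the paper appears to be doing (the function name is mine, not theirs): fit a Bernoulli distribution to the bit string’s empirical frequencies and take its Shannon entropy.

```python
import math

def empirical_entropy(bits):
    """Shannon entropy (in bits per symbol) of the empirical Bernoulli
    distribution fitted to a bit string -- i.e. treating the string as if
    it were generated by an iid coin flip with p = fraction of ones."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0  # degenerate distribution carries no entropy
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# 10 zero bits and 30 one bits -> empirical distribution Ber(3/4)
s = [0] * 10 + [1] * 30
print(round(empirical_entropy(s), 4))  # ≈ 0.8113
```

Note this only measures the symbol frequencies: any string with 10 zeros and 30 ones gets the same number regardless of what order the bits are in, which is part of why assigning the empirical distribution to a single fixed string is such an odd modelling choice.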
The rest of the paper didn’t make sense to me - they are somehow assigning a number N of “information states” which can change over time as the memory cells fail. I honestly have no idea what it’s supposed to mean and kinda suspect the whole thing is rubbish.
Edit: after reading the author’s quotes from the associated hype article I’m 100% sure it’s rubbish. It’s also really funny that they didn’t manage to catch the COVID-19 research hype train so they’ve pivoted to the simulation hypothesis.
Oh the author here is absolutely a piece of work.
Here’s an interview where he’s talking about the biblical support for all of this and the ancient Greek origins of blah blah blah.
I can’t definitely predict this guy’s career trajectory, but one of those cults where they have to wear togas is not out of the question.
This feels like quackery but I can’t find a goal…
But if they both hold up to scrutiny, this is perhaps the first time scientific evidence supporting this theory has been produced – as explored in my recent book.
There it is.
Edit: oh God it’s worse than I thought
The web design almost makes me nostalgic for geocities fan pages. The citations that include himself ~10 times and the greatest hits of the last 50 years of physics, biology, and computer science, and Baudrillard of course. The journal of which this author is the lead editor and which includes the phrase “information as the fifth state of matter” in the scope description.
Oh God the deeper I dig the weirder it gets. Trying to confirm whether the Information Physics Institute is legit at all and found their list of members, one of whom listed their relevant expertise as “Writer, Roleplayer, Singer, Actor, Gamer”. Another lists “Hyperspace and machine elves”. One very honestly simply says “N/A”
The Gmail address also lends the whole thing an air of authority. Like, you’ve already paid for the domain, guys.
OK this member list experience is just 👨‍🍳😗👌
- Psychonaut
- Practitioner of Yoga
- Quantum, Consciousness, Christian Theology, Creativity
Perfect. No notes.
General sneer against the SH: I choose to dismiss it entirely for the same reason that I dismiss solipsism or brain-in-a-vat-ism: it’s a non-starter. Either it’s false and we’ve gotta come up with better ideas for all this shit we’re in, or it’s true and nothing is real, so why bother with philosophical or metaphysical inquiry?
You’re missing the most obvious implication, though. If it’s all simulated or there’s a Cartesian demon afflicting me then none of you have any moral weight. Even more importantly if we assume that the SH is true then it means I’m smarter than you because I thought of it first (neener neener).
But this quickly runs into the ‘don’t create your own unbreakable crypto system’ problem. There are people out there who are a lot smarter and can quickly point out the holes in these simulation arguments. (The smartest of whom just go ‘nah, that is dumb’. Sadly I’m not that enlightened, as I have argued a few times here before that this is all amateur theology and has nothing to do with STEM/computer science. (E: my gripes are mostly with the ‘ancestor simulation’ theory, however.))
The SH is catnip to “scientific types” who don’t recognize it as a rebrand of classical metaphysics. After all, they know how computers work, and it can’t be that hard to simulate the entire workings of a universe down to the quark level, can it? So surely someone just a bit smarter than themselves has already done it and is running a simulation with them in it. It’s basically elementary!
The “simulation hypothesis” is an ego flex for men who want God to look like them.
I sneered that in a blog post last year, as it happens.
“feel free to ignore any science “news” that’s just a press release from the guy who made it up.”
In particular, the 2022 discovery of the second law of information dynamics (by me) facilitates new and interesting research tools (by me) at the intersection between physics and information (according to me).
Gotta love “science” that is cited by no-one and cites the author’s previous work which was also cited by no one. Really the media should do better about not giving cranks an authoritative sounding platform, but that would lead to slightly fewer eyes on ads and we can’t have that now can we.
If you’re in the mood for a novel that dunks on these nerds, I highly recommend Jason Pargin’s If This Book Exists, You’re in the Wrong Universe.
https://en.wikipedia.org/wiki/If_This_Book_Exists,_You're_in_the_Wrong_Universe
It is the fourth book in the John Dies at the End series
oh damn, I just gave the (fun but absolute mess of a) movie another watch and was wondering if they ever wrote more stories in the series — I knew they wrote a sequel to John Dies at the End, but I lost track of it after that. it looks like I’ve got a few books to pick up!
Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They’re Underage
Three billion dollars and it’s going into Character AI AutoGroomer 4000s. Fuck this timeline.
automated grooming is just what progress is and you have to accept it. like the printing press
AI finally allowing grooming at scale is the kind of thing I’d expect to be the setup for a joke about Silicon Valley libertarians, not something that’s actually happening.
Just needs a guy who goes “if we don’t build the automated grooming machine, somebody else will”.
HN runs smack into end-stage Effective Altruism and exhibits confusion
Title “The shrimp welfare project” is editorialized; the original is “The Best Charity Isn’t What You Think”.
Apologies for focusing on just one sentence of this article, but I feel like it’s crucial to the overall argument:
… if [shrimp] suffer only 3% as intensely as we do …
Does this proposition make sense? It’s not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.
It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I’m too unimaginative to comprehend so many billions of bits of dust.
lol hahah.
Not that I’m a super fan of the fact that shrimp have to die for my pasta, but it feels weird that they just pulled a 3% number out of a hat, as if morals could be wrapped up in a box with a bow tied around it so you don’t have to do any thinking beyond “1500 × 0.03 × 1 dollars means I should donate to this guy’s shrimp startup instead of the food bank”!
Shrimp cocktail counts as vegetarian if there are fewer than 17 prawns in it, since it rounds down to zero souls.
Hold it right there criminal scum!
spoiler
Image of two casually dressed guys pointing fingerguns at the camera, green beams are coming out of the fingerguns. The Vegan Police from the movie Scott Pilgrim vs. The World. The cops are played by Thomas Jane and Clifton Collins Jr, the latter is wearing sunglasses, while it is dark.
Ah you see, the moment you entered the realm of numbers and estimates, you’ve lost! I activate my trap card: 「Bayesian Reasoning」 to Explain Away those numbers. This lets me draw the「Domain Expert」 card from my deck, which I place in the epistemic status position, which boosts my confidence by 2000 IQ points!
Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.
This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn’t crumble.
Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.
Dog, you’ve lost the plot.
FWIW a charity providing the means to stun shrimp before death by freezing as is the case here isn’t indefensible, but the way it’s framed as some sort of an ethical slam dunk even compared to say donating to refugee care just makes it too obvious you’d be giving money to people who are weird in a bad way.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
wat
This entire fucking shrimp paragraph is what failing philosophy does to a mf
I think the author is honestly just trying to equivocate freezing shrimp with torturing weirdly specifically disabled babies and senile adults medieval-style. If you said you’d pledge like $17 to shrimp welfare for every terminated pregnancy, I’m sure they’d be perfectly fine with it.
I happened upon a thread in the EA forums started by someone who was trying to argue EAs into taking a more forced-birth position and what it came down to was that it wouldn’t be as efficient as using the same resources to advocate for animal welfare, due to some perceived human/chicken embryo exchange rate.
My professor is typing questions into chat gpt in class rn be so fucking for real
He’s using it to give examples of exam question answers. The embarrassment
I’d pipe up and go “uhhh hey prof, aren’t you being paid to, like, impart knowledge?”
(I should note that I have an extremely deficient fucks pool, and do not mind pissing off fuckwits. but I understand it’s not always viable to do)
It was there and gone fairly quickly and I wouldn’t say I’m a model student so I didn’t say anything. I’ve talked to him about Chat GPT before though…
“So, professor sir, are you OK with psychologically torturing Black people, or do you just not care?”
I mean, that kind of suggests that you could use chatGPT to confabulate work for his class and he wouldn’t have room to complain? Not that I’d recommend testing that, because using ChatGPT in this way is not indicative of an internally consistent worldview informing those judgements.
We’re going to be answering two essay questions in an in-class test instead of writing a paper this year specifically to prevent chat gpt abuse. Which he laughed and joked about because he really believes chat gpt can produce good results !
At work, I’ve been looking through Microsoft licenses. Not the funniest thing to do, but that’s why it’s called work.
The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.
The licenses with Office, Teams and the other things my users actually use are not only confusingly bundled, they have also been increasing in price. So I have been looking through and testing which licenses we can switch to cheaper ones, without any difference for the users.
Having put quite some time into it, today we crunched the numbers and realised that compared to last year we will save… (drumroll)… Approximately nothing!
But if we hadn’t done all this, the costs would have increased by about 50%.
We are just a small corporation; maybe the big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price gouging corporate customers on the traditional products.