2 points

We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on “Functional Decision Theory” upon?

3 points

page tagged and question added to talk page

2 points

There’s a “critique of functional decision theory”… which turns out to be a blog post on LessWrong… by “wdmacaskill”? That MacAskill?!

2 points

Forgive me but how is lightcone different from conebros???

9 points

You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:

Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.

But actually, the word intelligent is being used here in a specialized sense to mean “insufferable”.

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.

3 points

If you want to read Yudkowsky’s explanation for why he doesn’t spend more effort on academia, it’s here.

spoiler alert: the grapes were totally sour

1 point

Gah. I’ve been nerd sniped into wanting to explain what LessWrong gets wrong.

4 points

You could argue that another moral of Parfit’s hitchhiker is that being a purely selfish agent is bad, and humans aren’t purely selfish so it’s not applicable to the real world anyway, but in Yudkowsky’s philosophy—and decision theory academia—you want a general solution to the problem of rational choice where you can take any utility function and win by its lights regardless of which convoluted setup philosophers drop you into.

I’m impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It’s a manner of thinking that couldn’t have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.

14 points

Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

I’m trying, but I can’t not donate any harder!

The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.

Unironically the better choice: https://archiveofourown.org/donate

11 points

Yes but if I donate to Lightcone I can get a T-shirt for $1000! A special edition T-shirt! Whereas if I donated $1000 to Archive Of Our Own all I’d get is… a full sized cotton blanket, a mug, a tote bag and a mystery gift.

11 points

Holy smokes, that’s a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies, so now their convention center is doomed (yearly $1 million interest payments!); but they can’t admit that, so they’re desperately trying to delay the inevitable.

Also don’t miss this promise from the middle:

Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. […] Building an LLM-based editor. […] AI prompts and tutors as a content type on LW

It’s like an anti-donation message. “Hey if you donate to me I’ll fill your forum with digital noise!”


SneerClub

!sneerclub@awful.systems


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it’s amusing debate.

[Especially don’t debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

Community stats

  • 344 monthly active users
  • 162 posts
  • 2.5K comments