We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on “Functional Decision Theory” upon?
Forgive me, but how is Lightcone different from conebros???
You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:
Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.
But actually, the word intelligent is being used here in a specialized sense to mean “insufferable”.
Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.
Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.
Gah. I’ve been nerd-sniped into wanting to explain what LessWrong gets wrong.
You could argue that another moral of Parfit’s hitchhiker is that being a purely selfish agent is bad, and that since humans aren’t purely selfish it doesn’t apply to the real world anyway. But in Yudkowsky’s philosophy, as in academic decision theory, the goal is a general solution to the problem of rational choice: take any utility function and win by its lights, regardless of which convoluted setup philosophers drop you into.
I’m impressed that someone writing on LW managed to encapsulate my biggest objection to their entire process this coherently. This is an entire model of thinking that tries to elevate decontextualization and debate-team nonsense into the peak of intellectual discourse. It’s a manner of thinking that couldn’t have been better designed to hide the assumptions underlying repugnant conclusions if indeed it had been specifically designed for that purpose.
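For anyone who hasn’t met the setup being quoted: in Parfit’s hitchhiker you’re stranded in the desert, and a driver who can predict your behavior will only rescue you if he predicts you’ll pay him once you reach town. A purely selfish agent that reasons “from town” refuses to pay and so never gets rescued. Here’s a toy sketch of that payoff structure — my own illustration, not anything from the quoted post, with made-up utility numbers and a hypothetical `selfish_utility` helper:

```python
# Toy illustration of Parfit's hitchhiker for a purely selfish agent.
# Assumptions (mine, not the quoted post's): dying in the desert is worth
# -1_000_000 utility, paying the driver costs -100, and the driver predicts
# the agent's disposition perfectly.

def selfish_utility(would_pay_in_town: bool) -> int:
    """Utility the agent ends up with, given the disposition the driver predicts."""
    if would_pay_in_town:
        return -100          # rescued, then pays up as predicted
    return -1_000_000        # driver foresees the stiffing and drives off

# Deciding "causally" once already in town, not paying looks strictly better
# (0 beats -100), but an agent with that disposition never reaches town.
# Scoring the dispositions themselves is what the "win by its lights"
# framing is after:
print(selfish_utility(True))   # -100
print(selfish_utility(False))  # -1000000
```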
Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, DeepMind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.
I’m trying, but I can’t not donate any harder!
The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.
Unironically the better choice: https://archiveofourown.org/donate
Holy smokes, that’s a lot of words. From their own post it sounds like they massively over-leveraged and have no more sugar daddies, so now their convention center is doomed ($1 million a year in interest payments!); but they can’t admit that, so they’re desperately trying to delay the inevitable.
Also don’t miss this promise from the middle:
Concretely, one of the top projects I want to work on is building AI-driven tools for research and reasoning and communication, integrated into LessWrong and the AI Alignment Forum. […] Building an LLM-based editor. […] AI prompts and tutors as a content type on LW
It’s like an anti-donation message. “Hey, if you donate to me, I’ll fill your forum with digital noise!”