blakestacey
You might think that this review of Yud’s glowfic is an occasion for a “read a second book” response:
Yudkowsky is good at writing intelligent characters in a specific way that I haven’t seen anyone else do as well.
But actually, the word intelligent is being used here in a specialized sense to mean “insufferable”.
Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.
Ah, the book that isn’t actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn’t sufficiently self-aware to know that’s what she was writing.
Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.
I’m trying, but I can’t not donate any harder!
The most popular LessWrong posts, SSC posts or books like HPMoR are usually people’s first exposure to core rationality ideas and concerns about AI existential risk.
Unironically the better choice: https://archiveofourown.org/donate
The lead-in to that is even “better”:
This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We’ve never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).
“The reason for optimism is that we can cozy up to fascists!”
The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent.
Uh-huh.
An interesting thing came through the arXiv-o-tube this evening: “The Illusion-Illusion: Vision Language Models See Illusions Where There are None”.
Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something “really is” and how something “appears to be”, and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
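(For anyone who wants to poke at this themselves, the probe is about as simple as it sounds. Here's a minimal sketch assuming a LLaVA-style model through Hugging Face transformers; the model name, image file, and prompt are my stand-ins, not the paper's actual materials.)

```python
# A rough sketch of the kind of probe the abstract describes, assuming a
# LLaVA-style vision language model served via Hugging Face transformers.
# The model id, image, and prompt are illustrative guesses, not the paper's.
from PIL import Image
from transformers import pipeline

# Hypothetical "illusion-illusion": two circles that genuinely differ in
# size, a near neighbor of the Ebbinghaus illusion where they merely
# appear to differ.
image = Image.open("genuinely_different_circles.png")

vlm = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")
prompt = "USER: <image>\nAre the two circles the same size? ASSISTANT:"
result = vlm(image, prompt=prompt, generate_kwargs={"max_new_tokens": 60})
print(result[0]["generated_text"])

# The failure mode the paper reports: a model that has memorized the
# classic illusion answers as if this image were it ("they only appear
# different"), instead of noticing that these circles really do differ.
```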
I must have been living under a rock (or been terminally online in a different way), because I had only ever heard of Honey through Dan Olson’s riposte to Doug Walker’s The Wall, which describes Doug Walker delivering “an uncomfortably over-acted ad for online data harvesting scam Honey” (35:43).
I saw this floating around fedi (sorry, don’t have the link at hand right now) and found it an interesting read, partly because it helped codify why editing Wikipedia is not the hobby for me. Even when I’m covering basic, established material, I’m always tempted to introduce new terminology that I think is an improvement, or to highlight an aspect of the history that I feel is underappreciated, or just to make a joke. My passion project — apart from the increasingly deranged fanfiction, of course — would be something more like filling in the gaps in open-access textbook coverage.