corbin
Oh, you misunderstand. It’s not for me.
NSFW
I’ve been to plenty of shrinks and never been diagnosed with anything outside of neurodivergence: giftedness, ADHD, and autism. I appreciate TLP because it has helped me understand and manage my narcissistic parent.
Nonetheless, I agree with your critique of the writing style; it’s got all the fake edge of a twentysomething frat boy learning about existentialism for the first time.
In the sense that TLP isn’t Blackbeard, no, we don’t. But I would suggest that, unlike Scott, TLP genuinely understands the pathology of narcissism. Their writing does something Scott couldn’t ever do: it grabs the narcissist by the face and forces them to notice how their thoughts never not involve them. As far as I can tell, Scott’s too much of a pill-pusher to do any genuine psychoanalysis.
Also, like, consider this TLP classic. Two things stand out if we’re going to consider whether they’re Scott in disguise. The first is that the dates are not recent enough, and indeed TLP’s been retired for about a decade. The second is that the mythology and art history are fairly detailed and accurate, something typically beyond Scott.
(In true Internet style, I hope that there is a sibling comment soon which shows that I am not just wrong, but laughably and ironically wrong.)
Let’s see how long HN takes to ban yet another literal Nazi.
What really gets me is that we never look past Schrödinger’s version of the cat. I want us to talk about Bell’s Cat, which cannot be alive or dead due to a contradiction in elementary linear algebra and yet reliably is alive or dead once the box opens. (I guess technically it should be “alive”, “dead”, or “undead”, since we’re talking spin-1 particles.)
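(Assuming the “Bell’s Cat” here is the usual Bell–Kochen–Specker spin-1 argument, a sketch of the linear-algebra contradiction, in units where hbar = 1:)

    % For any orthogonal triple of directions x, y, z, the squared spin-1
    % components commute and satisfy
    \[
      S_x^2 + S_y^2 + S_z^2 = s(s+1)\,\mathbb{1} = 2\,\mathbb{1},
      \qquad \operatorname{spec}(S_i^2) = \{0, 1\},
    \]
    % so any joint measurement of a triple must come out as one 0 and two 1s.
    % Kochen and Specker showed that no assignment of 0s and 1s to every
    % direction of the sphere can satisfy that constraint for all triples at
    % once, so the value can't simply pre-exist the opening of the box.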
At risk of being NSFW, this is an amazing self-own, pardon the pun. Hypnosis via text only works on folks who are fairly suggestible and also very enthusiastic about being hypnotized, because the brain doesn’t “power down” as much machinery as with the more traditional lie-back-on-the-couch setup. The eyes have to stay open, the text-processing center is constantly engaged, and re-reading doesn’t deepen properly because the subject has to have the initiative to scroll or turn the page.
Adams had to have wanted to be hypnotized by a chatbot. And that’s okay! I won’t kinkshame. But this level of engagement has to be voluntary and desired by the subject, which is counter to Adams’ whole approach of hypnosis as mind control.
NSFW (including a funny example, don't worry)
RAG is “Retrieval-Augmented Generation”. It’s a prompt-engineering technique where we first run the prompt through a database query, then give the model both the prompt and the query’s results as context.
In a certain simple and obvious sense, RAG has been part of search for a very long time, and the current innovation is merely using it alongside a hard prompt to a model.
My favorite example of RAG is Generative Agents. The idea is that the RAG query is sent to a database containing personalities, appointments, tasks, hopes, desires, etc. Concretely, here’s a synthetic trace of a RAG chat with Batman, who I like using as a test character because he is relatively two-dimensional. We ask a question, our RAG harness adds three relevant lines from a personality database, and the model generates a response. (A rough code sketch of such a harness follows the trace.)
> Batman, what's your favorite time of day?
Batman thinks to themself: I am vengeance. I am the night.
Batman thinks to themself: I strike from the shadows.
Batman thinks to themself: I don't play favorites. I don't have preferences.
Batman says: I like the night. The twilight. The shadows getting longer.
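To make the harness concrete, here’s a rough Python sketch of the sort of loop I mean. The retrieval is a naive keyword overlap and the model call is left as a stub; the names are mine, and none of this is the actual Generative Agents code.

    import re

    # Toy personality database; a real harness stores memories, appointments,
    # desires, etc. and retrieves them by embedding similarity, not word overlap.
    PERSONALITY_DB = [
        "I am vengeance. I am the night.",
        "I strike from the shadows.",
        "I don't play favorites. I don't have preferences.",
        "I work alone.",
        "Gotham is my city.",
    ]

    def tokenize(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def retrieve(question, db, k=3):
        # Naive retrieval: rank stored lines by word overlap with the question.
        q = tokenize(question)
        return sorted(db, key=lambda line: len(q & tokenize(line)), reverse=True)[:k]

    def build_prompt(question):
        # The retrieved lines become in-character context ahead of the question.
        context = "\n".join("Batman thinks to themself: " + line
                            for line in retrieve(question, PERSONALITY_DB))
        return context + "\nUser asks: " + question + "\nBatman says:"

    # A real harness would now send this prompt to a language model.
    print(build_prompt("Batman, what's your favorite time of day?"))

With this toy data it happens to surface the same three context lines as the trace above; swap the overlap scoring for an embedding index and you have the real thing.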
My NSFW reply, including my own experience, is here. However, for this crowd, what I would point out is that this was always part of the mathematics, just like confabulation, and the only surprise should be that the prompt doesn’t need to saturate the context in order to approach an invariant distribution. I only have two nickels so far, for this Markov property and for confabulation from PAC learning, but it’s completely expected, not weird, that it’s happened twice.
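(For anyone who wants the Markov point without a transformer in the way, here’s a toy chain in Python with made-up numbers: two very different starting “prompts” forget themselves and land on the same invariant distribution.)

    import numpy as np

    # A small stochastic matrix standing in for next-token sampling with a
    # bounded context window; rows sum to 1. The numbers are made up.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    prompt_a = np.array([1.0, 0.0, 0.0])  # two very different initial distributions
    prompt_b = np.array([0.0, 0.0, 1.0])

    for _ in range(50):  # run the chain forward
        prompt_a = prompt_a @ P
        prompt_b = prompt_b @ P

    # Both end up at the same invariant distribution, independent of the prompt.
    print(prompt_a.round(4), prompt_b.round(4))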
Show me a long-time English Wikipedia editor who hasn’t broken the rules. Since WP is editable text and most of us have permission to alter most pages, rule violations aren’t set in stone and don’t have to be punished harshly; often, it’s good enough to be told that what you did was wrong and that your edits will be reverted.
NSFW: When you bring this sort of argument to the table, you’re making it obvious that you’ve never been a Wikipedian. That’s not a bad thing, but it does mean that you’re going to get talked down to; even if your question was in good faith, you could have answered it yourself by lurking amongst the culture being critiqued.
He tells on himself by saying “Gerard” vs “Scott” and “David Gerard” vs “Scott Alexander”. What’s really pathetic is that he thinks politics on Wikipedia is about left vs right or authoritarians vs anarchists. Somebody should let him know that words are faith, not works.