Architeuthis

Architeuthis@awful.systems
6 posts • 125 comments

It’s not always easy to distinguish between existentialism and a bad mood.

Not sure if it’s an NSFW assertion, but to me the p-zombie experiment seems like the result of a discourse that went off the rails very early and very hard into angels-on-the-head-of-a-pin territory, this lw post notwithstanding.

Like, as far as I can tell, imagining a perfectly cloned reality except with the phenomenon in question assumed away is supposedly (metaphysical) evidence that the phenomenon exists, except in a separate ontology? Isn’t this basically like using reverse Occam’s razor to prove that the extra entities are actually necessary, at least as long as they somehow stay mostly in their own universe?

Plus, the implicit assumption that consciousness can be defined as some sort of singular and uniform property you either have or don’t seems inherently dodgy, and also to be at the core of the contradiction; like, is taking p-zombies too seriously a reaction specifically to a general sense of disappointment that a singular consciousness organelle is nowhere to be found?

Seriously, the mandatory forced equanimity of the text went from merely off-putting to pretty gross as it became increasingly apparent that the Nonlinear people are basically sociopaths who make it a point of pride to flagrantly abuse anyone who finds themselves at the other end of a business arrangement with them. Not to mention that their employment model and accounting practices as described seem wildly illegal anywhere that isn’t a libertarian dystopia, even without going into the allegations about workplace romance.

Except they are EAs doing unspecified x-risk work, aka literally God’s work, so they are afforded every lenience and every benefit of the doubt, I guess.

Hi, my name is Scott Alexander and here’s why it’s bad rationalism to think that widespread EA wrongdoing should reflect poorly on EA.

The assertion that having semi-frequent sexual harassment incidents go public is actually an indication of health for a movement, since it’s evidence that there’s no systemic coverup going on and besides everyone’s doing it, is, uh, quite something.

But surely of 1,000 sexual harassment incidents, the movement will fumble at least one of them (and often the fact that you hear about it at all means the movement is fumbling it less than other movements that would keep it quiet). You’re not going to convince me I should update much on one (or two, or maybe even three) harassment incidents, especially when it’s so easy to choose which communities’ dirty laundry to signal boost when every community has a thousand harassers in it.

Past 1M words

That’s gonna be 4,000 pages of extremely dubious porn and rationalist navel-gazing, if anyone’s keeping count.

tvtropes

The reason Keltham wants to have two dozen wives and 144 children, is that he knows Civilization doesn’t think someone with his psychological profile is worth much to them, and he wants to prove otherwise. What makes having that many children a particularly forceful argument is that he knows Civilization won’t subsidize him to have children, as they would if they thought his neurotype was worth replicating. By succeeding far beyond anyone’s wildest expectations in spite of that, he’d be proving they were not just mistaken about how valuable selfishness is, but so mistaken that they need to drastically reevaluate what they thought they knew about the world, because obviously several things were wrong if it led them to such a terrible prediction.

huh

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

Sounds like Oxford increasingly did not want anything to do with them.

edit: Here’s a 94-page “final report” that seems more geared towards a rationalist audience.

Wonder what this was about:

Why we failed […] There also needs to be an understanding of how to communicate across organizational communities. When epistemic and communicative practices diverge too much, misunderstandings proliferate. Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received. Finding friendly local translators and bridgebuilders is important.

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His ‘effective safety measures’ are something like A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

It’s a sad fate that sometimes befalls engineers who are good at talking to audiences, and who work for a big enough company that can afford to have that be their primary role.

edit: I love that he’s chief evangelist though, like he has a bunch of little Google Cloud clerics running around doing chores for him.

Google pivoting to selling shovels for the AI gold rush in the form of data tools should be pretty viable if they commit to it; I hadn’t thought of it that way.
