In a recent Hard Fork (Hard Hork?) episode, Casey Newton and Kevin Roose described attending “The Curve”, a conference in Berkeley organized and attended mostly by our very best friends. When asked about the most memorable session he attended there, Casey said:

That would have been a session called If Anyone Builds It, Everyone Dies, which was hosted by Eliezer Yudkowsky. Eliezer is sort of the original doomer. For a couple of decades now, he has been warning about the prospects of superintelligent AI.

His view is that there is almost no scenario in which we could build a super intelligence that wouldn’t either enslave us or hurt us, kill all of us, right? So he’s been telling people from the beginning, we should probably just not build this. And so you and I had a chance to sit in with him.

People fired a bunch of questions at him. And we should say, he’s a really polarizing figure, and I think is sort of on one extreme of this debate. But I think he was also really early to understanding a lot of harms that have bit by bit started to materialize.

And so it was fascinating to spend an hour or so sitting in a room and hearing him make his case.

[…]

Yeah, my case for taking these folks seriously, Kevin, is that this is a community that, over a decade ago, started to make a lot of predictions that just basically came true, right? They started to look at advancements in machine learning and neural networks and started to connect the dots. And they said, hey, before too long, we’re going to get into a world where these models are incredibly powerful.

And all that stuff just turned out to be true. So, that’s why they have credibility with me, right? Now, not everything they believe is guaranteed; we could hit some sort of limit that they didn’t see coming.

Their model of the world could sort of fall apart. But as they have updated it bit by bit, and as these companies have made further advancements and they’ve built new products, I would say that this model of the world has basically held so far. And so, if nothing else, I think we have to keep this group of folks in mind as we think about, well, what is the next phase of AI going to look like for all of us?

And all that stuff just turned out to be true

Literally what stuff, that AI would get somewhat better as technology progresses?

I seem to remember Yud specifically wasn’t that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.


SneerClub

!sneerclub@awful.systems


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it’s amusing debate.

[Especially don’t debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
