
I’d say having some kind of goals is definitional to AGI, so in a broad sense of “alignment”, one that includes “paperclip optimisers”, sure, it’s bound to be possible. Natural GI exists, after all.

Speculatively, if you allow it to do controversial things some of the time, my guess is that there is a way to align it that the average person would agree with most of the time. The trouble is just getting everyone to accept the existence of the edge cases.

The various versions of utilitarianism usually give acceptable answers, for example, but there’s the infamous fact that they imply we might consider killing people for their organs. Similarly, deontological rules like “don’t kill people” run into problems in a world where retaliation is often the only way to stop someone else who is violent. We’re just asking a lot when we want a set of rules that gives perfect options in an imperfect world.

