-4 points
You would just have to let a superintelligent (aligned) AI robot loose and prompt it to produce enough food for everyone. Once the robot had been created, it wouldn't even require any maintenance effort. If producing positive consequences for everyone else has no negative consequences for the creators, and there are any empathetic people on the board of creators, I don't see why it wouldn't be programmed to benefit everyone.
6 points
As long as it doesn’t generate any negative externalities, sure. That’s a huge alignment problem though.
0 points