To repeat myself from the other post where I’ll probably be downvoted:
The car should be programmed to self-destruct or take out the passengers always. This is the only way it can counter its self-serving bias or conflict of interest. The bonus is that there are fewer deadly machines on the face of the planet and fewer people interested in collateral damage.
Teaching robots to do “collateral damage” would be an excellent path to the Terminator universe.
Make this upfront and clear for all users of these “robotaxis”.
Now the moral conflict becomes very clear: profit vs life. Choose.
No. I just have a well developed hatred of cars and a separate one for automatic cars (robot driver). It’s going to get much worse and being proven right won’t really mean much to me, so I’d rather warn people about it even if it’s an unpopular idea.
The UN is working on banning autonomous weapons: https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/
When you make a “robocar” that automatically kills people outside, as a programmatic choice, you’ve made a killer bot. Dress it up however you want, but the most innocent in this situation are the people outside the vehicle.
Why would you hold self-driving cars to a standard that we don’t hold drivers to? If you are a driver and realize you are about to harm a pedestrian, there is no circumstance in which the law suggests you ram your car into a building or pole instead of the pedestrian. Your insurance would usually rather you hit the pedestrian. In an animal strike, hitting the animal is covered under comprehensive (in America), while swerving to hit a fence is collision, and you can’t be at fault for a comprehensive claim. A pedestrian is a different matter and not comprehensive, but insurers would still rather you mitigate liability first, then mitigate cost. And there’s a chance the pedestrian was at fault, at least partially. The building or pole can’t be.
But all of this is a moot point. Self-driving cars will NEVER be programmed to harm the driver before an outside person. Simply for the fact no one will ever buy or ride in a car that chooses to kill the passenger over others. No one will ride in the Suicide Car.
Not OP, not agreeing/disagreeing with them.
Self-driving cars should absolutely be held to a higher standard than humans. They are not humans and cannot be held accountable for their actions, therefore the benefits of their use over human drivers should be overwhelming before we allow them in the streets.
As for the trolley-esque problem being discussed, it’s actually an incredibly complicated problem with even more complicated solutions. A statement like “hit a wall instead of a person” seems obvious to a human but adds a million complications to the situation. How do you detect whether it’s a safe wall to hit? What if it’s a fence on a schoolyard with 30 children sitting on the other side?
Exactly. I too think that human driven cars should also have a self destructing mechanism.
However we, as a society, should agree to only use said mechanism for good.
And the safest way to use it is to not have cars. Does !fuckcars@lemmy.world not get that?