Yes, you probably are. Please don't forget that currently available technology constantly improves, and that we don't actually see many good examples of self-driving cars yet: the most prominent displays are from Tesla, and they arguably build the worst cars we've seen since Ford came up with the assembly line.
The technology used in autonomous vehicles, e.g. sensors, has been used in safety-critical applications for decades in other contexts, and a machine is capable of completely different reaction times. Also, if autonomous vehicles cooperate in traffic, stick to their programmed behavior, observe traffic rules, etc., you will get less reckless driving, and traffic flow becomes more deterministic. These benefits will particularly increase once self-driving cars don't have to share the road with human drivers.
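To make the "more deterministic" point concrete, here is a toy sketch, not how any real AV stack works; the model, parameters, and names are invented for illustration. Cars on a single-lane ring road follow a fixed time-gap policy, and the only difference between the two runs is human-style speed jitter: the automated case settles into a near-zero spread of speeds, while the noisy case keeps a persistent spread.

```python
import random
import statistics

def simulate(noise_std, steps=2000, n_cars=20, road_len=400.0, dt=0.1):
    """Toy single-lane ring road: each car steers its speed toward a
    fixed time-gap to the car ahead. noise_std models human-style
    imprecision; 0.0 stands in for an idealized automated controller."""
    pos = [i * road_len / n_cars for i in range(n_cars)]
    vel = [10.0] * n_cars                      # everyone starts at 10 m/s
    samples = []
    for _ in range(steps):
        new_vel = []
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i]) % road_len
            target = gap / 2.0                 # aim for a 2-second gap
            v = vel[i] + 0.5 * (target - vel[i]) * dt
            v += random.gauss(0.0, noise_std)  # per-step speed jitter
            new_vel.append(max(v, 0.0))
        vel = new_vel
        pos = [(p + v * dt) % road_len for p, v in zip(pos, vel)]
        samples.extend(vel)
    return statistics.stdev(samples)

print("automated (no jitter):", simulate(noise_std=0.0))
print("human-ish (jittery):  ", simulate(noise_std=0.3))
```

The spread of speeds is only a proxy, of course; the real point is that identical controllers following identical rules produce flow you can actually reason about.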
I would always trust a well-engineered self-driving car more than one driven by a human.
Disclaimer: I used to work on these things in a research lab. Also, we’re not quite there yet, so please have a little patience.
What about things on the road that are not automated? There will be situations where a machine’s ethics might override a human driver’s ethics. It would be good for us to be able to override the system and know how to safely drive in emergencies.
It’s not about everything being automated. We also have to differentiate between early incarnations of autonomous vehicles and the desired final state.
A manual override will of course be part of early models for the foreseeable future, but the overall goal is for the machine to make better decisions than a human could.
I don’t have any quarrel with life-or-death decisions being made by a machine, provided they are made according to the morals and ethics of the people who designed it, and with the objective data available to the system at the time, which is often better than what would be available to a human in the same situation.
It’s the interpretation of that data that is not fully there yet, and we humans will have to come to terms with the fact that a machine might indeed kill a human being in a situation where acting any differently would cause more harm.
I don’t subscribe to the notion that a machine’s decision is always worth less than the decision of another entity participating in the same situation, just because that second entity happens to be a human being.
Not having control of a vehicle in a life-or-death situation is terrifying to me. I probably trust my driving more than most, and I trust my own decisions over those made by a corporation beholden to rich investors.
I’m worried about the growing pains before we get to the ideal state, which would have to be full autonomy of everything on the road, so that nothing entering the space can collide with anything else, or, if it does, the collision isn’t dangerous.
But then guess what? People will be able to pay for the fast lane. Or for a higher speed. You make a whole economy out of trying to get to work, trying to go to a wedding, trying to go anywhere. I don’t trust it, but I get it.