164 points

China has a long history of siding with vehicles running people over.

10 points

Not sure if you’re referring to Tiananmen or the fun Chinese practice of the double tap.

3 points

This never happened

– PRC
37 points

Person crosses street when they shouldn’t.

Car lightly taps them and stops.

Person is not injured.

Person is stupid.

I think regulation is important, but this isn’t news.

1 point

TBF, the car is stupid too. Not just because of this; AI in general is stupid. If it had been a human in the car, we would just say he was angry, but with AI we know it wasn’t angry and simply made a mistake. It happened to catch the mistake before it killed somebody, but that mistake is in the programming of every single car of that type in the world. Letting a small problem like this go is the same as saying it’s legal, since it will be a nationwide bug that is allowed.

-2 points

Yeah. The person was on ‘FA’ and now (barely) on ‘FO’.

What happened to looking both ways and being wary of the things that could crush you?

3 points
Removed by mod
3 points

Fuck around - find out

31 points

Whether or not to run over the pedestrian is a pretty complex situation.

35 points

What was the social credit score of the pedestrian?

6 points

To be fair, you have to have a pretty high IQ to run over a pedestrian

4 points

Right?

I saw “in a complex situation” and thought “what’s complex? Person in road = stop”

4 points

“Person in road = stop”

I recommend trying https://www.moralmachine.net/ and answering its 13 questions to get a bigger picture. It will take you no more than 10 minutes.

You may find out that the problem is not as simple as a four-word soundbite.


In this week’s Science magazine, a group of computer scientists and psychologists explain how they conducted six online surveys of United States residents last year between June and November that asked people how they believed autonomous vehicles should behave. The researchers found that respondents generally thought self-driving cars should be programmed to make decisions for the greatest good.

Sort of. Through a series of quizzes that present unpalatable options that amount to saving or sacrificing yourself — and the lives of fellow passengers who may be family members — to spare others, the researchers, not surprisingly, found that people would rather stay alive.

https://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html

same link: https://archive.is/osWB7

7 points

Is every scenario on that site a case of brake failure? As a presumably electric vehicle, it should be able to use regenerative braking to stop or slow down, or even rub against the guardrails on the side in every instance I saw.

There’s also no accounting for probabilities or magnitude of harm, any attempt to warn anyone, or the plethora of bad decisions required to put a car going what must be highway speeds down a city stroad with a sudden, undetectable, complete brake-system failure.

This “experiment” is pure, unadulterated propaganda.

Oh, and that’s not even accounting for the intersection of this concept and negative externalities. If you’re picking an “AI” driving system for your car, do you pick the socially responsible one, or the one that prioritizes your well-being as the owner? What choice do you think most people pick in this instance?

7 points

Can you swerve without hitting a person? Then swerve; else stay. This means the car will act predictably, and in the long run that is safer for everyone.
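As a toy sketch, that rule is tiny. This is purely illustrative Python with made-up names, not any real autonomous-vehicle stack:

```python
def plan_maneuver(swerve_path_clear: bool) -> str:
    """Swerve only when the escape path is clear of people;
    otherwise stay in lane and brake. Braking in-lane keeps the
    car's behavior predictable for everyone else on the road."""
    return "swerve" if swerve_path_clear else "brake_in_lane"

# Example: no clear escape path -> brake in lane, don't gamble.
assert plan_maneuver(False) == "brake_in_lane"
```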

5 points

Interesting link, thanks. I find this example pretty dumb, though. There is a pedestrian crossing the street on a zebra crossing. The car should, oh I don’t know, stop?

Never mind, I read the description: the car has a brake problem. In that case, try to cause the least damage, like any normal driver would.

4 points

90% of the Sophie’s-choice hand-wringing about this is nonsense anyway. The scenarios are contrived and exceedingly unlikely, and the premise that you can even predict outcomes in these panic scenarios doesn’t resemble any real moral framework that actually exists. A self-driving car that attempts to predict chaotic probabilities of occupant safety is just as likely to get it wrong and do more damage.

Yes, the meta-ethics are interesting, but the idea that this is any more actionable than trolley problems is silly.

2 points

The car should be programmed to always self-destruct or take out the passengers. This is the only way it can counter its self-serving bias or conflict of interest. The bonus is that there are fewer deadly machines on the face of the planet and fewer people interested in collateral damage.

Teaching robots to do “collateral damage” would be an excellent path to the Terminator universe.

Make this upfront and clear for all users of these “robotaxis”.

Now the moral conflict becomes very clear: profit vs life. Choose.

4 points

Well, yes and no.

First off, ignoring the pitfalls of AI:
There is the issue at the core of the Trolley problem. Do you preserve the life of a loved one or several strangers?

This translates to: if you know the options when you’re driving are:

  1. Drive over a cliff / into a semi / other guaranteed lethal thing for you and everyone in the car.
  2. Hit a stranger but you won’t die.

What do you choose as a person?

Then we have the issue of how to program a self-driving car on that same problem. Does it value all life equally, or is it weighted to save the life of the immediate customer over all others?

Lastly, and really the likely core problem: modern AIs aren’t capable of full self-driving, and the current core architecture will always have a knowledge gap, regardless of the size of the model. They can, 99% of the time, only do things that are in their data models. So if they don’t recognize a human or obstacle, in all of the myriad forms we can take and ways we can move, they will ignore it. The remaining 1% is hallucinations that happen to be randomly beneficial. But, particularly for driving, if it’s not in the model, they can’t do it.

8 points

We are not talking about a “what if” situation where it has to make a moral choice. We aren’t talking about a car that decided to hit a person instead of a crowd. Unless this vehicle had no brakes, it doesn’t matter.

It’s a simple “if person, then stop”, not “if person, stop unless the light is green”.

A normal, rational human doesn’t need a complex algorithm to decide to stop if little Stacy runs into the road after a ball at a zebra/crosswalk/intersection.

The ONLY consideration is “did they have enough time/space to avoid hitting the person”.
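In code, that rule fits in one line. A hypothetical sketch, names invented for illustration:

```python
def should_brake(person_in_path: bool) -> bool:
    # Unconditional: a detected person in the path always means stop.
    # There is deliberately no "unless the light is green" branch.
    return person_in_path
```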

0 points

Just a lil ml posting

30 points

Isn’t China the place where they make sure you’re dead when they hit you? Backing up and running over you multiple times.

6 points

It’s not that bad anymore.

26 points

Why are social media opinions any factor in this discussion?

3 points

It’s beneficial to know what the general public thinks about issues?

6 points

I don’t think “posts on social media” is a good indicator of what the public thinks anymore, if it ever was. The number and reach of bot or bought accounts are disturbingly high.

3 points

Social media aren’t “the general public”.

0 points

Do you have a better way of interviewing Chinese Nationals for Western media?

1 point

Between the terrible demographic distribution, the absolute sewage that social media is, and the bots that make up more than half the content: if you want to know what the general public thinks, you could not choose a worse source.

1 point

…they asked on social media.

