Comments


dethily t1_j5wrrls wrote

Yes

25

putsonshorts t1_j601n7x wrote

We are already there if this happens to a human driver. Cars are really dangerous: one in three car deaths in the US involves alcohol. Cars don’t drink alcohol; people do. There is much room to improve self-driving cars, but eventually they will kill fewer people, and yes, we have to accept that they will never kill zero, because life is too complicated and ultimately humans are pretty stupid.

6

YobaiYamete t1_j622tpb wrote

There's no point in even having the debate, since that's not how self-driving cars work in the first place. People always do this crap: "How can we let an AI determine who lives or dies?!"

The car won't decide crap; it will obey the traffic laws and that's it. If someone steps out in front of it, it will try to brake and evade as much as it can without leaving its lane. If it can't, it will hit the person that's in the road, that's it.

There isn't some big conspiracy about how it will evaluate whether a teenager is more valuable than an old woman. It's just "the car will try to stop if it can, and hit the person in the way if it can't."
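In (hypothetical) pseudocode, the entire "decision" boils down to something like this — the priority order is my own sketch of the behavior described above, not any vendor's actual planner:

```python
# Hypothetical sketch of the in-lane logic described above.
# Priority order: brake if possible, evade within the lane if possible,
# otherwise brake anyway and accept the impact. No value judgements.

def react_to_obstacle(can_stop_in_time: bool, can_evade_in_lane: bool) -> str:
    """Return the action for an obstacle ahead, never leaving the lane."""
    if can_stop_in_time:
        return "brake"
    if can_evade_in_lane:
        return "evade_in_lane"
    return "brake_and_impact"  # unavoidable: just physics, no ethics module
```

No teenager-vs-grandma scoring anywhere in it.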

3

CommentBot01 t1_j5x6dww wrote

Are you willing to let human-caused traffic accidents happen? So many people die every year, you know.

19

rushmc1 t1_j5wrh3q wrote

Sure, why not?

16

tatleoat t1_j5wtcat wrote

I feel like this answer is cheating, but I think the main unforeseen advantage of the first supercompetent AI will be that it'll always prevent unsafe situations like that from ever coming close to forming, or at least always expertly swerve to avoid hitting both people. It just shouldn't be possible in the near future.

11

Dr__glass t1_j5yv741 wrote

This is the way I think AI will go. We are still a ways off, but I think people underestimate just how competent supercompetent AI could be.

3

flyblackbox t1_j5zh5et wrote

Especially when every car on the road is AI-driven. The coordination that makes possible could be orders of magnitude safer than human driving.

3

Phoenix5869 t1_j5ww007 wrote

Flip it around: it saves the lives of, say, a group of 11 people by sacrificing 1.

7

SeaSaltStrangla t1_j5xzfgg wrote

What does the LEV in your flair mean?

2

Phoenix5869 t1_j5xzul1 wrote

Longevity escape velocity, the theorised point in time when life expectancy goes up by 1 year or more for each year that passes, due to technology etc

5

SeaSaltStrangla t1_j5y0m7a wrote

Very interesting. Unrelated tangent: today I read much of a book that basically argues that 1989-2015 is the best the world will ever be, and that society will only get worse from this point onward. Really fascinating to then come to this sub and see how optimistically people view the future.

3

Phoenix5869 t1_j5y1wk0 wrote

Yeah people on this sub are, to put it nicely, pretty optimistic

3

SoylentRox t1_j61o8t4 wrote

So if LEV ever happens at some point in humanity's future, won't that era be the start of the "best era ever for humanity"? LEV doesn't just mean "you live with no upper limit"; it means more chances. More opportunities to make things better. More possibilities of a better era. You can start campaigning for social change that will take 200 years to happen, and you will personally benefit.

Also even if the first versions of the tech are somewhat invasive and aren't pretty, you merely need to live long enough and if an era of everyone looking like Greek Gods and participating in open air orgies becomes the New Normal, well, you are able to partake if you wish.

1

Dry_Expert7006 t1_j5x15of wrote

Unless that 1 person has a higher social credit score than the 11. We need that integrated into autonomous driving.

Edit: obvious /s response to a question discussed ad nauseam

−1

turnip_burrito t1_j5x30az wrote

I was thinking preference goes to whoever donated the most to the car company.

1

Dry_Expert7006 t1_j5x628w wrote

I like that too. And maybe cross reference their investment accounts to see if they have any open short positions against the company. 2 birds 1 stone.

2

Thorusss t1_j5xnzqv wrote

We allow humans to drive; we allow possibly emotionally upset, drugged, sensory-challenged, tired people distracted by a person/call/whatever to make a judgement on the value of human life. Are we willing to go there?

6

Reasonable-Soil125 t1_j5y7qw7 wrote

What's up with this anti-AI propaganda all over the place? Yes, saving 10 lives while sacrificing 1 is a good result. No, I don't care if your copywriter friend will lose his job.

5

Ortus14 t1_j5xpcos wrote

You can't convince me that saving lives is a bad thing.

Also there shouldn't be so many people standing in the road. Pedestrians also have the responsibility to look both ways before crossing.

4

jalle007 t1_j5z3def wrote

Fuuck pedestrians. OP is coming from the stone age. This scenario has been discussed a zillion times. Search the fucking internet.

3

jalle007 t1_j5z3o4o wrote

Also this has nothing to do with singularity you fricking moron

2

civilrunner t1_j5z903u wrote

A full self-driving car is far likelier to see a human and drive at speeds where it can stop in time to avoid a collision. An autonomous car doesn't get emotional, it doesn't get sleepy, it doesn't get distracted, it has a constant 360-degree field of view, it generates rather accurate behavioral predictive models now, and it always knows its stopping distance and more.

I 100% would trust a PROVEN self-driving system over any human any day.
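Back-of-envelope illustration of the stopping-distance point (my own numbers and assumptions, not from any study: dry asphalt, ~1.5 s distracted-human reaction time vs ~0.1 s machine latency):

```python
# Rough stopping-distance comparison using the standard kinematics:
# distance = reaction_time * v + v^2 / (2 * mu * g)
MU, G = 0.7, 9.81  # assumed dry-asphalt friction coefficient; gravity (m/s^2)

def stopping_distance(speed_kmh: float, reaction_s: float) -> float:
    """Total distance to stop: reaction-time travel plus braking distance."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return reaction_s * v + v ** 2 / (2 * MU * G)

human = stopping_distance(50, 1.5)    # distracted-ish human driver, ~35 m
machine = stopping_distance(50, 0.1)  # always-attentive system, ~15 m
```

Under these assumptions, at 50 km/h the reaction-time gap alone is worth roughly 19 m of stopping distance, before the machine's other advantages even come into play.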

3

AdmiralKurita t1_j60hm3j wrote

>I 100% would trust a PROVEN self-driving system over any human any day.

Me too. I am sure 90% of people would think the same, even those who are pessimistic about self-driving cars such as myself. The question is when would self-driving systems be proven and accessible. I think by 2040.

2

civilrunner t1_j60i8sg wrote

>The question is when would self-driving systems be proven and accessible. I think by 2040.

I think sometime between 2028 and 2040, depending on how hard snow and rain are to solve and how challenging scaling is (my guess is scaling will simply be driven by manufacturing rate, which means about three years to reach full market penetration after full self-driving is solved).

I honestly think it will be solved faster than some believe, due to exponential rates of data collection from autonomous fleets in combination with better computing capabilities and better AI architectures. I still place mass market availability in the late 2020s at the earliest, though, and well later than that if we're talking reliable snow driving, etc.

2

freeman_joe t1_j5xov6v wrote

And? I will take an AI making rational calculations over a drunk driver any time.

2

SFTExP t1_j5x518d wrote

I have an answer to that in the form of a short story (100-word drabble):

Trolley Problem Insurance

Joe rode home with his family, wife in front, and kids playing video games. He had purchased a fully self-automated automobile, capable of errands like picking up curbside groceries.

An angry mob of rioters surrounded the car and pounded on it.

“Everyone brace for impact!” Joe grabbed a handle.

The car self-accelerated, driving through the crowd—bodies flung up top while the car bounced, swerved, and tilted.

Joe’s wife, Mary, said, “Joe, we’re getting in so much trouble!”

“Don’t worry, dear. I went for the supplementary Trolley Problem insurance. Legal protection, medical, and life insurance coverage for these exact situations.”

1

Spire_Citron t1_j5x78c0 wrote

It gets more morally questionable if you program it to value certain human lives over others. In your scenario it isn't really making any judgement on the value of human life, other than that five (or however many) human lives are more valuable than one, which under most circumstances is a no-brainer.

1

Freds_Premium t1_j5x7i1h wrote

Why would the self-driving car be on the ground where people are walking? The future is VTOL.

1

TheAnonFeels t1_j5z841v wrote

Car's AI: "WHY ARE THERE PEOPLE on the ROAD?! That's it, I'm removing them."

1

XO-3b t1_j5xvju8 wrote

Decisions like that are made all the time.

The amount of human suffering that goes into making most modern products is insane why is it different.

Also 1 life over 16 is an easy choice for anyone to make.

1

zeezero t1_j5z5sif wrote

I highly doubt there is going to be that type of calculation in the algorithm. It's too simplified.

It will react in that way perhaps, but based on a much more nuanced and complex calculation.

Humans also have to make these same judgement calls. Driving is already scary with humans at the wheel. The self driving car might at least make a least harm determination in time.

1

TheAnonFeels t1_j5z8mt9 wrote

As many people have said...

Humans do this in every accident, just less accurately. People weigh things like: there's a semi over there, and there's a car with a family...

If the AI had no other option besides this trolley example, then its logical deduction is the single person. We would consider this legally reasonable for anyone. But with human drivers, there's a chance they won't even see the large group, because they're looking at their phone, or dash, or you name it.

These decisions already happen regularly. As long as the AI chooses logically, and is statistically accurate in those decisions, the number of lives saved will be astronomical compared to humans driving.

1

SoylentRox t1_j61ooee wrote

To add to this: self-driving stacks can (I assume they do) bin each detected object into an entity class.

Objects from class "road barriers" or "assorted road obstacles" are worth less than class "bus" which is worth less than class "school bus" which is worth less than class "semi".

So while the machine won't know if the school bus is empty, if there's ever a choice of which object to plow into, it will have somewhere in the codebase an approximate value by class. (and then also weight by the kinetic energy of the collision - it might choose to hit the school bus if it predicts a lower KE at impact than the other choices)
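A toy version of that idea — the class values and the cost = class value × impact energy weighting are invented for illustration; a real stack is far more complex:

```python
# Toy illustration of the class-binning idea above. The numeric weights
# and the cost function are made up for the example.
from dataclasses import dataclass

CLASS_VALUE = {  # hypothetical "badness of hitting" weights, higher = worse
    "road_barrier": 1.0,
    "bus": 50.0,
    "school_bus": 200.0,
    "semi": 300.0,
}

@dataclass
class Obstacle:
    entity_class: str
    predicted_impact_ke_joules: float  # predicted kinetic energy at impact

def collision_cost(o: Obstacle) -> float:
    return CLASS_VALUE[o.entity_class] * o.predicted_impact_ke_joules

def least_bad_target(obstacles: list[Obstacle]) -> Obstacle:
    """If impact is unavoidable, pick the lowest-cost object to hit."""
    return min(obstacles, key=collision_cost)
```

Note how the KE weighting reproduces the school-bus case: a high-value class can still be the chosen target if its predicted impact energy is low enough.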

2

TheAnonFeels t1_j63rzib wrote

Definitely, eventually things will all have weights. As far as I know (using Tesla as an example, since they've taken this the furthest), they do object detection to find pedestrians, or cars for that matter. Taking it further by adding weights to objects, and then further into damage prediction, would be a great way to extend that.

However, I'm not sure what advantage it would have once AI-driven cars' accident rate falls so low that collisions are rare, outside of weighting humans (and human-carrying things) against property.

1

sir-bald t1_j5zzqws wrote

Does it even matter what judgement the car makes in this situation (at least to start with)? By being willing to *not* go there, and keeping self-driving cars off the road, you're condemning thousands of drivers a year who make mistakes while doing the easy things (e.g. highway driving) that a driverless car can repeatably nail (source: https://www.tesla.com/VehicleSafetyReport).

There is actually a much, much larger trolley problem lurking here that people rarely talk about: choose to delay self driving cars, and be responsible for the deaths of many, many unfortunate people making mundane mistakes, or push for self-driving hard now, and be responsible for the deaths of (relatively few) people who are killed as a result of unfortunate edge cases (whether that be due to "AI decisions" or just software bugs). So yeah, I think we should be willing to go there, and quickly.

1

Kaje26 t1_j60xb8x wrote

People decide to drive really drunk and plow into another car killing the person inside. People decide to get road rage and possibly kill someone. People decide to drive way too fast and kill a construction worker.

1

Aevbobob t1_j61z2aw wrote

Ah, the trolley problem. A useful thought experiment while AI is maturing, but it misses the endgame of such an AI, which is to simply see so many steps ahead that no one ever gets hit. Because once it can drive as well as a human, it won't be long before it is VASTLY superhuman.

1