
bitemenow999 t1_j39qzwa wrote

>but that doesn't mean you can abdicate moral responsibility altogether.

If you design a car, will you take responsibility for each and every accident that car is involved in, regardless of human or machine error?


The way I see it, I am an engineer/researcher; my work is to provide the next generation of researchers with the best possible tools. What they do with those tools is up to them...

Many will disagree with my opinion here, but looking at past research in any field, if researchers had stopped to think about every potential bad-apple case, we would not have many of the tools/devices we take for granted every day. Just because Redmon quit ML doesn't mean everyone should follow in his footsteps. Restricting research in ML (if something like that is even possible) would be similar to proverbial book burning...

2

THENOICESTGUY t1_j3bse8i wrote

I agree with you. The goal of scientists/engineers and the like is to produce tools/discoveries, whether they can be used for someone's benefit or harm. What someone does with what they found or created isn't their concern; it's the person using it who should be of concern.

2

Baturinsky OP t1_j3bx0gj wrote

I understand the sentiment, but I think it's irresponsible. The possible bad consequences of AI misuse are far worse than those of any other research before it. That's not a reason to stop, but it is a reason to treat it with extreme care.

−3

Blasket_Basket t1_j3h9l5p wrote

Got anything solid to back up that claim that isn't just vague, hand-wavy concern about a "superintelligence" or AGI? You're acting as if what you're saying is fact when it's clearly just an opinion.

2