
Philpax t1_j37i5s5 wrote

> we are just predicting if the image is of a cat or a dog...

And there's no way automated detection of specific traits could be weaponised, right?

I generally agree that it may be too early for regulation, but that doesn't mean you can abdicate moral responsibility altogether. One should consider the societal impacts of their work. There's a reason why Joseph Redmon quit ML.

3

DirkHowitzer t1_j38k6b8 wrote

A tool is just that: a tool. Any tool can be used for good or for evil purposes. It's hard to imagine that a well-regulated AI is all that's needed to get the Chinese government to stop brutally oppressing the Uyghur people. Regulate AI all you want; it won't stop nasty people from doing nasty things. It will stop bitemenow999 from making his cat/dog model, and it will stop a lot of very productive people from doing important and positive work with AI.

If a graduate student no longer wants to pursue ML because of his own moral code, that is his choice. There is no reason that I, or anyone else, should be barred from doing research in this area because of someone else's hang-ups.

7

bitemenow999 t1_j39qzwa wrote

>but that doesn't mean you can abdicate moral responsibility altogether.

If you design a car model, will you take responsibility for each and every accident that car is involved in, irrespective of human or machine error?


The way I see it, I am an engineer/researcher; my work is to provide the next generation of researchers with the best possible tools. What they do with those tools is up to them...

Many will disagree with my opinion here, but looking at past research in any field, if researchers had stopped to think about every potential bad-apple case, we would not have many of the tools/devices we take for granted every day. Just because Redmon quit ML doesn't mean everyone should follow in his footsteps. Restricting research in ML (if something like that is even possible) would be akin to proverbial book burning...

2

THENOICESTGUY t1_j3bse8i wrote

I agree with you. The goal of scientists/engineers and the like is to produce tools and discoveries, whether or not they can be used for someone's benefit or harm. What someone does with what they found or created isn't their concern; the person who's using it is the one responsible.

2

Baturinsky OP t1_j3bx0gj wrote

I understand the sentiment, but I think it's irresponsible. The possible bad consequences of AI misuse are far worse than those of any research before it. That's not a reason to stop the research, but it is a reason to treat it with extreme care.

−3

Blasket_Basket t1_j3h9l5p wrote

Got anything solid to back that claim up that isn't just vague, handwavy concern about a "superintelligence" or AGI? You're stating it as if it were fact when it's clearly just an opinion.

2