Submitted by Baturinsky t3_104u1ll in MachineLearning
Omycron83 t1_j378v8s wrote
We don't need to regulate AI research in any way (as it, by itself, can't really do any harm), only the applications (many of which already are regulated). You can basically ask the question: "Would you let any person, even one grossly unqualified or severely mentally unstable, do this?" Any normal application (browsing the web, analyzing images of plants, trying to find new patterns in data, talking to people, etc.) where the answer is "yes" doesn't need any restriction whatsoever (at least not in the way you are asking). When it comes to driving a car, diagnosing patients, or handling military equipment, you wouldn't want just ANY person doing it, which is why there are restrictions regulating who can (you need a driver's license, a medical degree and license, to be deemed mentally fit, etc.). In these areas it is reasonable to limit the pool of decision makers and, for example, exclude AI. And since algorithms don't hold any such qualifications, they are by default not allowed to do that stuff anyway, until someone on the government side deems them reliable enough. Of course there are edge cases where AI may do stupid stuff in normal applications, but those are rare and usually small-scale (for example, a delivery drone breaking someone's window or something).
TL;DR: most cases where you would want restrictions already have them in place, since people aren't perfect either.
Baturinsky OP t1_j37bbwe wrote
Imagine the following scenario. Alice has an advanced AI model at home and asks it: "Find me the best way to do a certain bad thing and get away with it" — such as harming or even murdering someone. If it's a model like ChatGPT, it will probably be trained to avoid answering such questions.
But if models are not regulated, she can find a warez model with the morals stripped out, or retrain the morality out of one herself, or pretend that she is a police officer who needs that data to solve a case. Then the model gives her a usable method.
Now imagine if she asks for a method to do something way more drastic.
anon_y_mousse_1067 t1_j37dth2 wrote
If you think government regulation is going to solve an issue like this, I have bad news for you about how government regulation works.
Baturinsky OP t1_j37ej92 wrote
Ok, how would you suggest solving that issue then?
EmbarrassedHelp t1_j37qjz1 wrote
Dude, have you ever been to a public library before? You can literally find books on how best to kill people and get away with it, how to cook drugs, how to make explosives, and all sorts of things. Why do you want to do the digital equivalent of burning libraries?
Baturinsky OP t1_j37rkj0 wrote
Yes, but that would require a lot of time and effort. An AI has already read it all and can apply the equivalent of millennia's worth of human time to analyzing it.
Omycron83 t1_j37dxva wrote
And why could ChatGPT do that? Because the data was already there on the internet, so there's nothing she couldn't have figured out on her own. In general, there is basically no way an AI can (as of right now) think of an evil plan so ingenious that no one could come up with it otherwise.
Baturinsky OP t1_j37fvgz wrote
Key word here is "right now".
[deleted] t1_j37f28k wrote
[deleted]