Submitted by Baturinsky t3_104u1ll in MachineLearning

I think AI has a huge potential for misuse. Maybe not yet, but AI can develop at an unpredictable speed, especially as AI can be used to optimise the development of AI itself.

AI tech is probably still relatively safe in the hands of responsible, careful and ethical people. But those are not the only people that have access to AI technologies.

So, why is this technology and equipment still open to everyone, without any regulations or limitations?

Edit: sorry if the question is stupid, and I actually hope it is and I am just being overparanoid. But I'm really anxious about this issue and would be happy to hear something that would soothe that anxiety.

0

Comments


Cpt_shortypants t1_j36vp04 wrote

AI is just math and programming; how will you regulate this?

32

soraki_soladead t1_j375igy wrote

Regulating machine learning sounds ridiculous, but note that cryptography is regulated and also consists of just math and programming. For example, if you publish an app on the App Store with cryptography, you need to comply with export regulations: https://help.apple.com/app-store-connect/#/dev88f5c7bf9

Now, that’s for exports and publishing. Regulating personal use is much more difficult but it’s still possible: perhaps requiring a photo ID to download certain libraries or requisition GPUs/TPUs.

Personally, I think it’s unlikely to happen and the benefits of doing so are minimal.

16

PredictorX1 t1_j37nzay wrote

>cryptography is regulated

In practice, this mainly applies to commercial offerings. If a competent programmer wanted to implement strong encryption, all they would need is the right book.
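To illustrate how low that bar is (a minimal sketch, not from the original comment, assuming Python's open-source `cryptography` package; everything in it is public documentation):

```python
# Minimal sketch, assuming Python's open-source "cryptography" package
# (pip install cryptography). Nothing here is restricted know-how.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # fresh random key, generated locally
cipher = Fernet(key)               # AES + HMAC authenticated encryption

token = cipher.encrypt(b"meet at dawn")   # ciphertext, safe to store or send
print(cipher.decrypt(token))              # b'meet at dawn' (only with the key)
```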

5

EmbarrassedHelp t1_j37q6cs wrote

In practice, though, cryptography regulations in the US simply require notifying the relevant government agency of the release. That's all there is, so it's not really regulation of what you can and cannot do with it.

3

HateRedditCantQuitit t1_j3850b6 wrote

Bombs are just physics, but I'm glad we regulate them.

0

PredictorX1 t1_j38i8bn wrote

Bombs require special materials. Suspicious purchases of the precursors of explosives are watched. There are hundreds of millions of PCs on this planet, every one of them capable of being used to develop cryptographic software and every one of them able to execute it.

Bombs are made one at a time. Once encryption software is written, it can be copied endlessly.

2

Baturinsky OP t1_j372otw wrote

I don't know. It would require serious measures and cooperation between countries, and I don't think the world is ready for that yet.

But I'd say: classifying the research and trained models, and limiting the access to and functionality of equipment that can be used for AI training.

Especially the more general-purpose models, like programming ones.

−5

Duke_De_Luke t1_j375pb2 wrote

No. Research should not be regulated. Applications should be (and are, partially) regulated.

13

KerbalsFTW t1_j36wrtj wrote

> So, why is this technology and equipment still open to everyone, without any regulations or limitations?

Mostly AI has been used for good rather than harm. Thus far it is a net good, so you are trying to outlaw a good thing because of an unproven future possibility of a bad thing. But you're ok with biotech? Nanomaterials? Chip research?

But let's suppose that you in the (let's guess) US decide to outlaw AI.

The US now falls behind Europe and China in developing and understanding AI technologies, with no way to get back in the game.

Ok, so let's say Europe agrees with you, then what?

So let's say even China agrees with you... Now the only people developing AI are the underground illegal communities, and mainstream researchers no longer even know what AI is.

> So, why is this technology and equipment still open to everyone, without any regulations or limitations?

What, exactly, do you propose as an alternative? How do you keep it in universities (who publish in public journals)? And should you?

12

Cherubin0 t1_j38v1zh wrote

Sure, China is the best example of AI regulations, like concentration camps are now a good thing I guess /s.

0

Baturinsky OP t1_j38z6vw wrote

I won't argue about the existence of those camps. But China has definitely got ML training and development running on a massive scale, very likely using technologies that leaked uncontrollably from the US. So now we need the goodwill not just of the US to contain the danger, but of China too, and who knows who else.

0

Baturinsky OP t1_j36yncl wrote

Yes, restricting it just in one country is pointless, which is why major countries should work on this together, like on limiting nuke spread.

Biotech, nanomaterials, chip research, etc. could require regulation too, though I don't see them as being as unpredictable as ML is now.

And I don't suggest banning AI research - just limiting and regulating its development and the spread of algorithms and equipment, so it's less likely to get into the hands of underground illegal communities.

−12

PredictorX1 t1_j38if43 wrote

>major countries should work on this together, like on limiting nuke spread.

This is a good parallel: See how much cheating goes on with nuclear material and nuclear weapons.

2

Baturinsky OP t1_j38kn1z wrote

And yet, we don't have nuclear wars so far.

0

PredictorX1 t1_j38lht6 wrote

So, your suggestion is that countries like the United States, China and Russia work together to contain technology? This seems like a serious suggestion to you?

1

Baturinsky OP t1_j38yn7r wrote

I hate it too. But I don't see any other options that do not carry an existential threat.

1

KerbalsFTW t1_j3c2q8z wrote

And you trust the governments of the world to make and impose these decisions on us? Because they have such a good track record so far?

1

Baturinsky OP t1_j3c6wv6 wrote

I hate it, but see no other alternatives safe enough.

1

bitemenow999 t1_j3784qe wrote

Dude, we are not building Skynet, we are just predicting if the image is of a cat or a dog...

Also, like it or not, AI is almost getting monopolized by big tech, given the huge models and the resources required to train said models. It is almost impossible for an academic research lab to have the resources to train one of the GPTs, diffusion models, or any of the SOTA models (without sponsorships). Regulating it will kill the field.

10

Philpax t1_j37i5s5 wrote

> we are just predicting if the image is of a cat or a dog...

And there's no way automated detection of specific traits could be weaponised, right?

I generally agree that it may be too early for regulation, but that doesn't mean you can abdicate moral responsibility altogether. One should consider the societal impacts of their work. There's a reason why Joseph Redmon quit ML.

3

DirkHowitzer t1_j38k6b8 wrote

A tool is just that: a tool. Any tool can be used for good or for evil purposes. It's hard to imagine that well-regulated AI is all that is needed to get the Chinese government to stop brutally oppressing the Uyghur people. Regulate AI all you want; it won't stop nasty people from doing nasty things. It will stop bitemenow999 from making his cat/dog model, and it will stop a lot of very productive people from doing important and positive work with AI.

If a graduate student no longer wants to pursue ML because of his own moral code, that is his choice. There is no reason that I, or anyone else, should be regulated out of doing research in this area because of someone else's hang-ups.

7

bitemenow999 t1_j39qzwa wrote

>but that doesn't mean you can abdicate moral responsibility altogether.

If you design a car model, will you take responsibility for each and every accident the car is involved in, irrespective of human or machine error?

The way I see it, I am an engineer/researcher; my work is to provide the next generation of researchers with the best possible tools. What they do with those tools is up to them...

Many will disagree with my opinion here, but in any field of past research, if the researchers had stopped to think about the potential bad-apple cases, then we would not see many of the tools/devices we take for granted every day. Just because Redmon quit ML doesn't mean everyone should follow in his footsteps. Restricting research in ML (if something like this is even possible) would be similar to proverbial book burning...

2

THENOICESTGUY t1_j3bse8i wrote

I agree with you. The goal of scientists/engineers and the like is to produce tools/discoveries, whether or not they can be used for someone's benefit or harm. What someone does with what they found or created isn't their concern; it's the person who's using it that is of concern.

2

Baturinsky OP t1_j3bx0gj wrote

I understand the sentiment, but I think it's irresponsible. The possible bad consequences of AI misuse are far worse than those of anything researched before. That's not a reason to stop the research, but a reason to treat it with extreme care.

−3

Blasket_Basket t1_j3h9l5p wrote

Got anything solid to back that claim up that isn't just vague handwavy concerns about a "superintelligence" or AGI? You're acting as if what you're saying is fact when it's clearly just an opinion.

2

Baturinsky OP t1_j379whv wrote

Yes, it kind of limits itself by the costs of training now. But I think it's inevitable that there will be more efficient training algorithms soon, possibly by orders of magnitude, probably found with the help of ML, as AI can now be trained for programming and research too.

−6

[deleted] t1_j396adp wrote

[removed]

9

Baturinsky OP t1_j39irdy wrote

I'm a programmer myself. Actually, I'm writing an AI for a bot in a game right now, without any ML, of course. And it's quite good at killing human players, btw, even though the algorithm is quite simple.
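(For illustration only, not the commenter's actual game code: a rule-based bot of that sort can be a handful of plain-logic rules, with no ML at all. All names, thresholds and the flat (x, y) world model below are made up.)

```python
import math

# Toy rule-based game bot: no machine learning, just hand-written rules.
def bot_step(bot_pos, bot_health, enemy_pos, attack_range=5.0, flee_health=20):
    dx, dy = enemy_pos[0] - bot_pos[0], enemy_pos[1] - bot_pos[1]
    distance = math.hypot(dx, dy)

    if bot_health < flee_health:
        return ("flee", (-dx, -dy))      # badly hurt: run directly away
    if distance <= attack_range:
        return ("attack", enemy_pos)     # in range: attack the player
    return ("chase", (dx, dy))           # otherwise close the distance

print(bot_step((0, 0), 80, (3, 4)))      # ('attack', (3, 4)): distance 5.0 is in range
```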

So tell me, please, why can't AI become really dangerous really soon?
By itself, a network like ChatGPT is relatively harmless. It's not that smart, and it can't do anything in the real world directly; it just says something to a human.

But corporations and countries funnel tons of money into the field. Models are learning different things and algorithms are improving, so they will know much more soon, including how to move and operate things in the real world. Then what stops somebody from connecting some models together and sticking them into a robot arm, which will make and install more robot arms and war drones, which will seek and kill humans? Either a specific kind of human, or humans in general, depending on that somebody's purpose?

−6

PredictorX1 t1_j3ca2pm wrote

What, specifically, are you suggesting?

1

Baturinsky OP t1_j3ch80z wrote

I'm not qualified enough to figure out how drastic the measures would have to be.

From countries realising they face a huge common crisis that they can only survive if they forget their squabbles and work together.

To using the AI itself to analyse and prevent its own threats.

To classifying all trained general-purpose models at the scale of ChatGPT and above, and preventing the possibility of making new ones (as I see entire-internet-packed models as the biggest threat now, if they can be used without safeguards).

And up to forcibly reverting all publicly available computing and communication technology to the level of 20 or 30 years ago, until we figure out how we can use it safely.

0

Blasket_Basket t1_j3h8t00 wrote

It sounds like you have some serious misunderstandings about what AI is and what it can be used for, rooted in the same sci-fi plots that have misinformed the entire public.

1

Baturinsky OP t1_j3hnmdc wrote

Indeed, I'm no expert; that's why I was asking.
But experts in the field also think that serious concerns about AI safety are justified:

https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence

Also, a lot of good arguments here:

https://www.reddit.com/r/ControlProblem/wiki/faq/

1

[deleted] t1_j40im8k wrote

[removed]

1

asingov t1_j40suvt wrote

Cherry-picking Musk and Hawking out of a list which includes Norvig, DeepMind, Russell and "academics from Cambridge, Oxford, Stanford, Harvard and MIT" is just dishonest.

1

bob_shoeman t1_j40ukrr wrote

Alright, that’s fair - edited. I didn’t read through the first link properly.

The point remains that there is generally a pretty complete lack of knowledge of what the field is like. r/ControlProblem most certainly is full of nonsense.

2

[deleted] t1_j3743qk wrote

It's not about AI, it's about the application.

Nuclear bombs? Bad

Nuclear energy? Good

Driving your kids to their soccer game? Good

Driving into a crowd of protestors? Bad

Get it?

7

Baturinsky OP t1_j375886 wrote

Yes, exactly. Which is why it's important not to give access to dangerous things to those who could misuse them with catastrophic consequences.

−7

Duke_De_Luke t1_j376emq wrote

Emails or social networks are as dangerous as AI. They can be used for phishing or identity theft.

Not to mention a car, or the chemical compounds used to clean your home, or a kitchen knife.

AI is just a buzzword. You restrict certain applications, not the buzzword. Just as you restrict the sale of explosives, not chemistry.

8

Baturinsky OP t1_j379g68 wrote

Nothing we have known yet has the danger potential of self-learning AI, even though for now it is still only potential.
And it's true that we should restrict only certain applications of it, but that could be a very wide list of applications, with very serious measures necessary.

−9

[deleted] t1_j375ru0 wrote

You mean like optimizing algorithms to grab people's attentions and/or feed them ads?

7

Baturinsky OP t1_j37g36w wrote

As far as I can see, whoever is doing it is not doing it very well, be it AI or human.

0

PredictorX1 t1_j3cacld wrote

>Which is why it's important not to give access to dangerous things to those who could misuse them with catastrophic consequences.

What does "give access" mean, in this context? Information on construction of learning systems is widely available. Also, who decides which people "could misuse it"? You?

1

Baturinsky OP t1_j3chu4b wrote

Mostly, controlling access to trained models, and denying the possibility of making new ones. I see unrestricted use of the big general-purpose models as the biggest threat, as they are effectively "encyclopedias of everything" and can be used for very diverse and unpredictable things.

Who decides is also a very interesting question. Ideally, public consensus, but realistically, those who have the capability to enforce those limitations.

0

Omycron83 t1_j378v8s wrote

We don't need to regulate research in AI in any way (as it, by itself, can't really do any harm), only the applications (which often already are regulated). You can basically ask the question: "Would you let any person, even if grossly unqualified or severely mentally unstable, do this?" Any normal application (browsing the web, analyzing images of plants, trying to find new patterns in data, talking to people, etc.) where that answer is "yes" doesn't need any restriction whatsoever (at least not in the way you are asking).

If it comes to driving a car, diagnosing patients or handling military equipment, you wouldn't want just ANY person to do that, which is why there are restrictions that regulate who can do it (you need a driver's license, a medical degree and license, you must be deemed mentally fit, etc.). In these areas it is reasonable to limit the group of decision makers, and for example exclude AI. But as algorithms don't have any such qualifications, they are by default not allowed to do that stuff anyway, until someone on the government side deems them stable enough.

Of course there are edge cases where AI may do stupid stuff in normal applications, but those are rare and usually only happen on a small scale (for example a delivery drone destroying someone's window or something).

TL;DR: most cases where you would want restrictions already have them in place, as people aren't perfect either.

3

Baturinsky OP t1_j37bbwe wrote

Imagine the following scenario. Alice has an advanced AI model at home, and asks it, "find me the best way to do a certain bad thing and get away with it", such as harming or even murdering someone. If it's a model like ChatGPT, it will probably be trained to avoid answering such questions.

But if models are not regulated, she can find a warez model without morals, or retrain the morals out of it, or pretend that she is a police officer who needs that data to solve a case. Then the model gives her a usable method.

Now imagine if she asks for a method to do something way more drastic.

−1

anon_y_mousse_1067 t1_j37dth2 wrote

If you think government regulation is going to solve an issue like this, I have bad news for you about how government regulation works.

5

Baturinsky OP t1_j37ej92 wrote

Ok, how would you suggest solving that issue then?

1

EmbarrassedHelp t1_j37qjz1 wrote

Dude, have you ever been to a public library before? You can literally find books on how best to kill people and get away with it, how to cook drugs, how to make explosives, and all sorts of things. Why do you want to do the digital equivalent of burning libraries?

5

Baturinsky OP t1_j37rkj0 wrote

Yes, but that would require a lot of time and effort. AI has already read it all and can apply the equivalent of millennia of human time to analysing it.

1

Omycron83 t1_j37dxva wrote

And why would ChatGPT be able to do that? Because the data was already there on the internet, so there's nothing she couldn't figure out on her own. In general, there is basically no way an AI can (as of right now) think of an evil plan so ingenious that no one could come up with it otherwise.

2

Cherubin0 t1_j38ul3h wrote

Seriously, the biggest danger of AI comes from government and big corporations using it, not average plebs. I cannot mass censor the population with AI or create an army of kill bots.

2

Baturinsky OP t1_j38yeo8 wrote

Yes, but it being spread uncontrollably means there are many more governments and corporations that can mass censor the population with AI or create an army of kill bots.

1

Comfortable_End5976 t1_j3al1jb wrote

many people ITT talking about "muh ethics" and ignoring the elephant in the room (AGI existential risk)

2

Oceanboi t1_j3ef4wv wrote

A bit surprised to see the cavalier sentiments on here. I often wonder if they will eventually require commercials to disclose when they are computer generated (Unreal Engine 5 demos have fooled me a few times), and deepfakes come to mind as major problems (I just saw an Elon one that took me embarrassingly long to identify as fake). I don't think tons of regulation should occur per se, other than certain legal disclosures for certain forms of media to prevent misinformation.

2

NovelspaceOnly t1_j3a8jb6 wrote

IMHO absolutely not.

In your opinion, would you rather have an open-source community building AI through open debate and collaboration, or a closed system where only a small number of people have access to advanced AI technologies and development, potentially giving them an unfair advantage? While capitalism can also create divides between those who have access to resources and knowledge and those who do not, an open and collaborative approach to AI research could help to level the playing field and promote transparency and accountability.

1

Baturinsky OP t1_j3bwbno wrote

I'd say absolutely yes for just about any field, except for AI.
Yes, it's unfair, but I would prefer to depend on the goodwill of people rather than on the goodwill of machines.

1

LanchestersLaw t1_j3afzkc wrote

I think this is a valid question, and many of the responses here were quite snarky and unnecessarily rude. The short answer is yesn't. As with any regulation, there is no single solution everyone will agree on or be happy with. I felt a particularly ironic way to answer this question was with ChatGPT, which gave me this response:

> “There is certainly a valid concern about the potential misuse of AI, and it is important for society to consider the risks and how to mitigate them. However, it is also important to consider the potential benefits of AI and not restrict research in a way that would prevent those benefits from being realized. It is a delicate balance to strike.

> There are already some regulations and guidelines in place for AI research and development, such as the Asilomar AI Principles and the EU's General Data Protection Regulation (GDPR). In addition, many companies and organizations have their own guidelines and best practices for responsible AI development.

> It is also important to note that AI is not a single monolithic technology, but rather a broad field with many different applications and use cases. The risks and benefits of AI can vary depending on the specific application, so a one-size-fits-all approach to regulation may not be effective.

> Overall, it is important for society to continue to have an open and honest dialogue about the risks and benefits of AI and how best to address them. It is also worth noting that AI is still in the early stages of development, and there are many technical and practical limitations to what it can currently do. While it is important to be mindful of the potential risks and to take steps to mitigate them, it is also important to keep these limitations in perspective and not let fear or anxiety about the potential misuse of AI obscure the many potential benefits it can bring.”

Edit: r/artificial is a better community to ask this question. This one is mostly about the technical details of how the complicated math works and “why is my model doing something stupid?”

1

Baturinsky OP t1_j3bh9kb wrote

Thanks.

I think people vastly underestimate the possible uses of a ChatGPT-like model. If it has learned from the entire(-ish) scraped internet, it's not just a language model, it's a model of all human knowledge available on the internet, neatly documented and cross-referenced for very easy use by algorithms. Currently it's used by quite simple algorithms, but what if it's used by algorithms that try to use that data to rewrite themselves? Or something else we don't foresee yet.

And I don't even know how it's possible to contain the danger now, as the method for "pickling" the internet like that is already widely known, so it could easily be done by anyone with a budget and internet access. So one of the necessary measures could be switching off the internet...

1

LanchestersLaw t1_j3dh4ws wrote

The key words to use for better answers are "control problem" and "AI safety". In my personal opinion, ChatGPT/GPT-3.5 is an inflection point. GPT-3.5 can understand programming code well and does a passable job generating it. This includes its own code. One of the beginner tutorials is using GPT to write code against its own API.
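(A rough sketch of what such an exercise boils down to; this assumes the pre-1.0 OpenAI Python client and the GPT-3.5-era text-davinci-003 completion model, and is not taken from any specific tutorial.)

```python
# Ask a GPT-3.5-era model to generate code that calls the same API it runs on.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Write a Python function that sends a prompt to the OpenAI completion API "
    "and returns the generated text."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
)

print(response.choices[0].text)  # the model's own API-calling code
```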

That said, GPT-3.5 has many limitations. It isn't a threat. Future versions of GPT have the potential to be very disruptive.

1

psychorameses t1_j3aufn8 wrote

OpenAI just went for-profit with a $29 billion valuation so I say good luck and godspeed.

1

i_know_about_things t1_j37zfoy wrote

After looking into your profile... you have a point about regulation.

0

Baturinsky OP t1_j384rvx wrote

Yeah, me knowing me is one of the reasons I think AI is not safe in the hands of the general public :)

1

Baturinsky OP t1_j3ekpko wrote

Btw, CyberElon's https://beta.character.ai/chat?char=zv4QeeU8u7xjkrtwPRshFEsrJus87AkSTes3A5pfsoE answers to this question:

"I think that you have a really good point.

I believe that we need to be very careful about who we allow to have access to AI technology, as it could be used to create a lot of harm if it fell into the wrong hands.

I think that we're at a very critical crossroads in our history, and that the decisions we make now about how we implement AI into our lives will have very important consequences for the future."

"I think that it's really important for us to have regulations and limitations on AI technology.
I believe that the potential risks associated with AI are very real, and that we need to be very careful about how we implement this powerful technology into our lives.
I think that if we are not careful, AI could be incredibly damaging to society.
So, I believe that it's really important for us to think very carefully about how we regulate and limit AI technology, in order to ensure that its benefits outweigh its potential harms."

I have discussed it further, and he also thinks that the threat of AI misuse is vastly underestimated by both the public and developers. Overall, he seemed to share all my concerns, but was way less paranoid than me :)

0