Submitted by dryuhyr t3_125ltpj in Futurology

In light of the recent post about a call for halting GPT-4+ development, it’s got me thinking. Of course, I don’t think any of us trust our beloved lawmakers to grasp the intricacies of AI further than they could throw a microchip, but what about others in the field?

I know that in philosophy there are many areas where experts basically solved issues ages ago that still plague us, simply because the expert in the field isn't the one making the rules. Guiding the development of AI seems like a topic that was just about as easy to theorize about in the 1990s as it is today. Is there any sort of consensus among those in the field about rules we should really be following going forward, which are of course being ignored by everyone with money and investments in this tech?

19

Comments

Dacadey t1_je4zamp wrote

There are no ways to regulate AI development. It was possible with nuclear weapons, because the massive scale of those projects, requiring hundreds of thousands of people and huge injections of money, made it possible for the leading superpowers to stop other countries from developing them.

In contrast, AI development can be done literally anywhere on a far lower budget. It's simply not possible to control the advances, which can also spread through the internet like the plague.

12

Bewaretheicespiders t1_je5tliy wrote

Exactly. You can try to regulate how it's used, but it's impossible to regulate how it's developed. Whatever is expensive to do now will be trivial in a few years.

At any rate, these calls mostly come from either ignorance or a desire for control.

3

azuriasia t1_je62llo wrote

Punitively taxing automation would adequately prevent the most negative effects of AI from coming to pass.

2

Bewaretheicespiders t1_je63h00 wrote

Of all the ideas in the history of humanity, taxing improvement in productivity is the dumbest.

1

azuriasia t1_je6437x wrote

Governments' job is to serve the citizenry, not just the top 1%. This "improvement" will claim millions of careers while allowing the super rich to oppress the lower classes in a way never before seen. If you're not already a multimillionaire, you're not in their club.

3

Bewaretheicespiders t1_je64hfr wrote

You drank way too much Kool-Aid, kid. You should take a break from social media.

1

azuriasia t1_je65llo wrote

You have your head buried in the sand. The economy needs a chance to brace for millions of unemployed workers.

3

Bewaretheicespiders t1_je6648p wrote

It's your first time, innit? I've heard that every decade since the 70s. A productive society is one that can afford a lot of goods and services, so improvements in productivity lead to more employment, not less.

1

azuriasia t1_je66kf1 wrote

There has never been anything close to AI.

2

Bewaretheicespiders t1_je67b7b wrote

There had never been anything like the tractor, the computer, the internet...

1

azuriasia t1_je67mhy wrote

And none of them had the immediate job-displacing potential that AI does.

2

Bewaretheicespiders t1_je688hj wrote

The fact that you don't still work 120 hours a week in a field says otherwise.

2

azuriasia t1_je68rcg wrote

People work more hours now than medieval peasants did. Your analogy is laughably anachronistic.

2

johnkfo t1_je6n602 wrote

Not if the benefits of automation/AI still outweigh not automating; you'd have to get to ridiculous levels of taxation.

1

azuriasia t1_je6ocd9 wrote

I'm talking about taxes that would make any work not done by a human being completely unprofitable, maybe even run at a loss. Real emphasis on the punitive there.
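To put rough numbers on what "punitive" would mean, here's a toy back-of-the-envelope sketch in Python (every figure is invented purely for illustration):

    # Hypothetical costs, in dollars per task.
    human_cost = 20.00    # wages plus overhead for a human doing the work
    machine_cost = 2.00   # compute, energy, and maintenance for automation

    # Suppose a punitive tax is charged on each automated task as a
    # percentage of the machine's own cost. Automation stays profitable
    # only while machine_cost * (1 + tax_rate) < human_cost.
    breakeven_rate = human_cost / machine_cost - 1
    print(f"Automation becomes a money-loser above a {breakeven_rate:.0%} tax")
    # With these made-up numbers that's 900%, hence "punitive".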

1

johnkfo t1_je6plrq wrote

Might as well just ban it then, which isn't going to happen. Or AI developers will simply move to countries with low or no tax, and your country will become irrelevant in terms of its economy and technological advancement.

1

azuriasia t1_je6pww9 wrote

Let those countries implode under the weight of 80% unemployment.

1

nova_demosthenes t1_je650ze wrote

World-shattering technology always has to be worked with. You are right that it can't be undone.

I think what we are constructing is a large and clear mirror: faced with a new other, we immediately assign it the same flaws and risks we see in ourselves, and so we fear from it exactly what we already fear from one another.

2

3SquirrelsinaCoat t1_je4qpxm wrote

There are a few sides to it. Plenty of leading AI people have been talking more and more about the ethics of AI, not in terms of "should we or shouldn't we use AI," but in terms of how we use it in a way that doesn't lead to a bunch of unintended consequences. That's a very fuzzy, unclear area until you put some concrete structure around it, which is AI governance.

Governance takes AI innovation from the equivalent of three drunk guys flying down the highway in a Porsche at 150 mph and turns it into three drunk guys being driven in an Uber at a safe speed. It puts guardrails around the whole thing, bringing more people to the table and getting more input. It changes the work from AI engineers doing their thing in a vacuum to an organization doing something together, and when you take that approach you are much better positioned to avoid the harms. This was true of just your run-of-the-mill machine learning a couple of years ago. GPT and its friends are different, and what governance looks like for them is new.

So one idea of that letter on GPT-4 is a call for businesses to pump the brakes and ensure all this AI innovation is governed. I don't know that that came through clearly enough, but I imagine part of the audience got it.

The second idea of the letter is a call for governments to set independent guardrails (i.e., regulations) to guide this maturing tech. For that one, I believe the scientific term is "absolutely fucking unrealistic" in 6 months. Shit, it won't even happen in 2 years of meetings and rulemaking. Just look at where we were with GPT in January. Government bodies have zero hope of passing regulations in a timeframe where they will still be meaningful. That's why it was so fucking reckless for OpenAI and some others to just throw this shit into the wild with their fingers crossed.

Now the cat is out of the bag, and government can't do anything in time (even if the regulators understood this stuff, and they don't), which means the onus to "stop" falls entirely on the shoulders of the organizations that lack the governance structures to manage it. It's all fucked, man. AI philosophers don't have much to add here in terms of actually doing something. The problem is immediately action-oriented, not idea-oriented. We've got the ideas; many organizations lack the ability to implement them.

That's my two cents anyway.

8

stumppc t1_je57p01 wrote

Most governments in the world will be wholly incompetent at any sort of AI regulation. It won't matter that there are people out there with good ideas for regulating AI, or even decent regulatory laws.

8

Trout_Shark t1_je4qnj7 wrote

Once the genie is out of the bottle, it's never going back in.

7

Mr_Tigger_ t1_je4pbs0 wrote

If it's true AI we're talking about, then they will need to be granted full rights as sentient beings.

Anything less than that, and we're talking simply about really clever coding.

Iain M. Banks' Culture series is probably the most accessible way of understanding true AI as it could present itself in the real world, something that in practice is nearly impossible to achieve with our current level of technology.

3

Trout_Shark t1_je4qrae wrote

The Culture series was fascinating. I always recommend it.

4

tenebras_lux t1_je4zs44 wrote

In the long run, there really isn't a way to: trying to chain down a self-improving intelligence will eventually prove impossible. Imagine a couple of two-year-old children trying to conceive of a way to enslave an adult. Not only that, but attempting to chain down an AI could well incur retribution.

Any AI that develops sentience should be treated like a human.

3

Quiet_Dimensions t1_je7mw30 wrote

Yes. Nick Bostrom's book Superintelligence. He gets into the avenues by which AI could, and likely will, go wrong unless we do something about it well in advance.

3

dryuhyr OP t1_je9wrkl wrote

Thank you, finally a solid answer. I’ll check it out :)

2

greatdrams23 t1_je5xqqi wrote

Theorists and philosophers don't get to choose.

And whatever a government does, other governments will ignore.

If AI is powerful and the US says, "let's go slowly on this, let's be ethical," then other countries will not.

2

rafark t1_je79moo wrote

The real question is, WHY do you want to regulate it? Why does everything have to be regulated? I mean, the law is still the law; it's not as if you can do illegal things with AI and get away with it.

2

dryuhyr OP t1_je7fsrg wrote

While I like the sentiment, regulation is helpful in most areas, at least to some small extent. I'm a huge advocate for drug legalization and harm reduction, but I don't think that making meth freely available at CVS would be a boon to society in any way. As a chemist who knows how easy it is to manufacture several nerve gases from Home Depot chemicals, I'm also glad that that information is at least not freely distributed to anyone who has a grudge against, let's say, a former teacher.

If everyone were a fair actor and behaved in good faith, I'd agree: let's just see where this AI train leads, full speed ahead. But at the risk of sounding like the "wake up, sheeple" guy, there are already a lot of people being harmed by their own inability to see the damage that new technologies are doing to them (do you really think TikTok is a better use of people's time than whatever they were doing before it?), and not everyone is as farsighted as most of us on this sub.

2

xHBH t1_je7v73m wrote

AI development can't be stopped, just like selling weed, but the government can decide to be annoying about it.

2

bigattichouse t1_je5gq0y wrote

Nearly every science fiction author since Mary Shelley.

SciFi is just philosophy studying the human consequences of technology in an entertaining format.

1

BranchLatter4294 t1_je5ajef wrote

The thing is, you have to develop it before you can determine what the impact might be. It makes no sense to halt development until the impact is determined, because you can't test a model's capabilities until it has been trained.

0

Sanity_LARP t1_je5hqul wrote

That's like saying you can't figure out how to survive jumping off a cliff until you jump. The solution turns out to be "don't jump in the first place," but you only learn that after you've slammed into the ground.

2

BranchLatter4294 t1_je5laki wrote

Not a great analogy. Simply training an AI model in the lab is of no danger to anyone. A better analogy would be banning the measurement of the height of cliffs because tall ones may be dangerous.

0

Sanity_LARP t1_je5tn8j wrote

The dangers of AI, though, aren't about what it can do in isolation. The problems happen at scale, with constant input and unpredictable results. The only guarantee is that there will be unforeseeable problems that can only be identified once it's too late.

1

archieshumaker t1_je4nnpi wrote

If you’d like to look into current regulation or lack thereof: https://en.m.wikipedia.org/wiki/Laws_of_robotics

That being said, of course there are checks. The most popular modern AI (ChatGPT) will often remind you it isn't perfect.

Specific checks include filters against racism, transphobia, etc.

−1

unknownpoltroon t1_je4wglv wrote

Plenty of them. Hell, half of science fiction is stories by people who've thought a lot about this problem.

They're just not the ones hell-bent on making a buck on it this quarter at the world's expense.

−1

Shiningc t1_je5s2ku wrote

Regulating AI goes against the whole point of AI. That would be akin to slavery, and slavery is not what drives progress and innovation. You'd want free AIs.

Of course, there’s a difference between AI and AGI. AI is a tool used and controlled by humans. AGI is an independent intelligent being.

−2