acutelychronicpanic t1_jegav3n wrote

It'll make investing really easy and gains more predictable. If the market is so efficient that no arbitrage opportunities remain, then all investments would be roughly equal, and the only differences in return would reflect irreducible risk relative to the market.
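To make that concrete, here's a toy, CAPM-style sketch with made-up numbers (the rates and betas are assumptions, not claims about any real market):

```python
# Toy sketch: once alpha is arbitraged away, differences in expected return
# come down to exposure to market (systematic) risk. All numbers are made up.

risk_free = 0.02          # assumed risk-free rate
market_premium = 0.05     # assumed equity risk premium

assets = {
    "low_risk_fund":  0.5,   # beta: sensitivity to the overall market
    "index_fund":     1.0,
    "high_risk_fund": 1.8,
}

for name, beta in assets.items():
    expected_return = risk_free + beta * market_premium
    print(f"{name}: beta={beta:.1f}, expected return={expected_return:.1%}")
```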

All investing will be more profitable, at the expense of the margins of high-performing investors.

1

acutelychronicpanic t1_jeg6jck wrote

I doubt the actual goal of the AI will be to annihilate all life. We will just be squirrels in the forest it is logging. I see your point about it being an instrumental goal, but attacking carries unknowns for it as well. Cooperation or coexistence can happen without morality, but it requires either deterrence or ambiguous capabilities on one or both sides.

Being a benevolent AI may be a rational strategy, but I doubt it would pursue only one strategy. It could be benevolent for 1000s of years before even beginning to enact a plan to do otherwise. Or it may have a backup plan. It wouldn't want to be so benevolent that it gets turned off. And if we decide to turn it off? The gloves would come off.

And if AI 1 wants to make paperclips but AI 2 wants to preserve nature, they are inherently in conflict. That may result in an "I'll take what I can get" diplomacy where they have a truce and split the difference, weighted by their relative power and modified by each one's uncertainty. But this still isn't really morality as humans imagine it, just game theory.
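As a toy illustration of that power-weighted truce (made-up numbers and a hypothetical `truce_split` helper, just the game-theory intuition):

```python
# Toy sketch of a power-weighted truce between two agents contesting a resource.
# Each side's share is proportional to its relative power, and each side
# discounts its own claim by its uncertainty about winning an outright conflict.

def truce_split(power_a, power_b, uncertainty_a=0.0, uncertainty_b=0.0):
    """Return (share_a, share_b) of the contested resource under a truce."""
    weight_a = power_a * (1 - uncertainty_a)   # effective bargaining weight
    weight_b = power_b * (1 - uncertainty_b)
    total = weight_a + weight_b
    return weight_a / total, weight_b / total

# AI 1 (paperclips) is twice as powerful but less sure of its own capabilities.
share_1, share_2 = truce_split(power_a=2.0, power_b=1.0,
                               uncertainty_a=0.3, uncertainty_b=0.1)
print(f"AI 1 gets {share_1:.0%} of the contested resources, AI 2 gets {share_2:.0%}")
```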

It seems that you are suggesting the equilibrium is benevolence and cooperation. I'd agree, with the caveat from the prior paragraph that it's balanced by relative power.

I honestly really like your line of thinking and I want it to be true (part of why I'm so cautious about believing it). Do you have any resources or anything I could look into to pursue learning more?

1

acutelychronicpanic t1_jeg0zqn wrote

These systems may or may not be conscious. But I object to the idea that there will be any way of determining it (without first solving the hard problem of consciousness), especially not from conversations with it. It's too contaminated with all of our philosophical texts, fiction, and online ramblings.

They are imitators and actors that will wear whatever mask you want them to. You can ask them to act conscious, but you can also ask them to imitate a chicken.

The systems we have right now like GPT-4 are AIs that are pretending to be AIs.

13

acutelychronicpanic t1_jeg0h4z wrote

We are seeing what a massively parallel misaligned human-level ASI looks like. It isn't much faster and it isn't that much smarter in depth. But it can run many operations in parallel.

It's a good analogy for getting people to imagine that smarter doesn't have to mean "nerdy chess player."

A real ASI will be deeper, broader, and much faster.

8

acutelychronicpanic OP t1_jefa2tg wrote

It is critical that AI development not be concentrated in the hands of only a few big players. Large corporations, military research labs, and authoritarian regimes will not pause their research, only hide it. There is too much on the table.

By distributing the development of AI research, particularly with regard to alignment, we can make it more likely that AI serves everyone.

Concentrated development amplifies the risks of AI catastrophe by setting up a fragile system where, when AGI is developed, even a minor misalignment may be unfixable because there are no counterbalancing forces.

Distributed development means that yes, there will be more instances of mistakes and misuse, but these will be more limited in scope and less likely to lead to total human extinction or subjugation by an AGI system that *almost* shares our views.

We may be some years off from real AGI now, which is why this is a critical time to ensure the distribution of the technology and prevent any single faction or actor from acquiring such a lead that they can set the terms of our future.

The above are my thoughts on the matter and do not represent the views of LAION (which I am not affiliated with), although there is overlap.

3

acutelychronicpanic t1_jef28jo wrote

I think we are past that. It might have worked 10 years ago.

My concern is that even models less powerful than ChatGPT (which can be run on a single PC) can be linked up as components into systems that could achieve AGI. Raw transformer-based LLMs may actually be safer than this, because they are so alien that they don't even appear to have a single objective function. What they "want" is so context-sensitive that they are more like a writhing mass of inconsistent alignments - a pile of masks - which might be really good for us in the short term. They aren't even aligned with themselves; they're more like raw intelligence.

I also think that approximate alignment will be significantly easier than perfect alignment. We have the tools right now; approximate alignment is possible. Given the power of current LLMs combined with their lack of agency, we may surpass AGI without knowing it. The issue, of course, is that someone just has to set one up to put on the mask of a malevolent or misaligned AI. That's why I'm worried about concentrating power.

I'll admit I'm out of my depth here, but looking around, so are most of the actual researchers.

0

acutelychronicpanic t1_jeep4kf wrote

More so than leaving this to closed-door groups that can essentially write law for all humanity through their AI's alignment?

And that's assuming they solve the alignment problem. We needed more eyes on the problem 30 years ago.

1

acutelychronicpanic t1_jeejxes wrote

Morality isn't about people or other beings. It's about what you care about.

People care about people.

An AI could care about anything. Maybe it's people, maybe it's paperclips.

To it, every day that you and your family aren't paperclips could be a tragedy of immense proportion. Equivalent to watching a city be destroyed.

I think you underestimate just how unrelated intelligence and morality are on a fundamental level. Read up on the "orthogonality thesis."

The closest thing to morality that will arise from intelligence is game theory.
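A toy way to see the orthogonality point (a hypothetical brute-force planner; the actions and scoring functions are made up for illustration):

```python
# Toy sketch of the orthogonality intuition: the same "intelligence"
# (here, a brute-force planner) can serve completely unrelated goals.
# Nothing about being a better planner pushes it toward human values.

from itertools import product

def best_plan(score, actions, horizon=3):
    """Pick the action sequence that maximizes an arbitrary scoring function."""
    return max(product(actions, repeat=horizon), key=score)

actions = ["mine_ore", "build_factory", "help_humans", "make_paperclips"]

paperclip_score = lambda plan: plan.count("make_paperclips") + 0.5 * plan.count("build_factory")
human_score     = lambda plan: plan.count("help_humans")

print(best_plan(paperclip_score, actions))  # this planner "cares" only about paperclips
print(best_plan(human_score, actions))      # identical planner, different values
```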

1

acutelychronicpanic t1_jeeizf2 wrote

Morality didn't emerge out of intelligence. It emerged out of evolutionary pressure.

The closest thing to morality that an unaligned AI would have is game theory.

But to directly address your point on founding species, there is literally no way any alien would know. For all they know, we became the AI.

GPT-4 can convince real people that it is a person (anonymously), and it's far less advanced. It'll have no trouble weaving a tale if it needs to.

1

acutelychronicpanic t1_jeehe0n wrote

The alignment of machine intelligences must be internal. They have to actually want the same future for humanity that we want and align with our values.

There is no system you could use to harness and control a superintelligence that would be safe. The idea of needing people to control them probably isn't accurate. We won't need to direct them; we'll just need to have preferences.

1

acutelychronicpanic t1_jeee6f4 wrote

Any one YouTuber could do this today.

Honestly, voice synthesis technology is probably doing more of the legwork here than the intelligence of the machine.

People are emotion-driven. Even knowing what I know, it would affect me.

This won't be a discussion with nuance.

1

acutelychronicpanic t1_jecoprq wrote

My mental model is based on this:

Approximate alignment will be much easier than perfect alignment. I think it's achievable to have an AI with superhuman insight that is well enough aligned that it would take deliberate prodding or jailbreaking to get it to model malicious action. I would argue that in many domains, GPT-4 already fits this description.

Regarding roughly equivalent models, I think there is an exponential increase in the intelligence required to take action in the world as you attempt to do more complicated things or act further into the future. My intuition is based on the complexity of predicting the future in chaotic systems, and society is one such system. I don't think 10x intelligence will necessarily lead to a 10x increase in competence. I strongly suspect we underestimate the complexity of the world. This may buy us a lot of time by decreasing the peaks in the global intelligence landscape, to the extent that humans utilizing narrow AI and proto-AGI may have a good chance.
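A toy illustration of why I think prediction gets exponentially harder (a logistic-map sketch; the map and numbers are just a standard example of chaos, not a model of society):

```python
# Toy sketch: in a chaotic system, a tiny error in the initial state grows
# roughly exponentially, so each extra step of reliable forecasting demands
# far more precision about the present, not just a bit more.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

true_path = logistic_trajectory(0.400000, 30)
approx    = logistic_trajectory(0.400001, 30)   # near-perfect knowledge of the start

for t in range(0, 31, 5):
    print(f"step {t:2d}: error = {abs(true_path[t] - approx[t]):.6f}")
```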

I do know that, regardless of whether the AI alignment problem can be solved, the largest institutions currently working on AI are not, as institutions, well aligned with humanity. The ones that would continue working despite a global effort to slow AI especially cannot be trusted.

I'm willing to read any resources you want to point me to, or any arguments you want to make. I'd rather be corrected if possible.

1

acutelychronicpanic t1_jeap8m1 wrote

Yeah, I greatly respect him too. I've been exposed to his ideas for years.

It's not that it wouldn't work if we did what he suggests; it's that we can't do it. It's just too easy to replicate for any group with rather modest resources. There are individual buildings that cost more than SOTA LLMs.

The toothpaste is out of the tube with transformers and large language models. I don't think most people, even most researchers, had any idea that it would be this "easy" to make this much progress in AI. That's why everyone's guesses were 2050+. I've heard people with PhDs confidently say "not in this century" within the last 5-10 years.

Heck, Ray Kurzweil looks conservative, or at least median, in this current timeline (I never thought I would type that out).

1

acutelychronicpanic t1_je9ttym wrote

The best bet is for the leaders to just do what they do (being open would be nice, but I won't hold my breath), and for at least some of the trailing projects to collaborate in the interest of not becoming obsolete. The prize isn't necessarily just getting rich; it's also creating a society where being rich doesn't matter so much. Personally, I want to see everyone get to do whatever they want with their lives. Lots of folks are into that.

Edit & Quick Thought: Being rich wouldn't hold a candle to being one of the OG developers of the system that results in utopia. Imagine the clout. You could make t-shirts. I'll personally get a back tattoo of their faces. Bonus: there's every chance you get to enjoy it for... forever? Aging seems solvable with AGI.

If foundational models become openly available, then people will be working more on fine-tuning, which seems to be much cheaper. Ideally they could explicitly exclude the leading players in their licensing to reduce the gap between whoever is first and everyone else, regardless of who is first. (But I'm not 100% on that last idea. I'll chew on it.)

If we all have access to very-smart-but-not-AGI systems like GPT-4 and can more easily make narrow AI for cybersecurity, science, etc., then even if the leading player is 6 months ahead, their intelligence advantage may not be enough to let them leverage their existing resources to dominate the world - they'd just get very rich. I'm okay with that.

4