Comments

Practical-Bar8291 t1_iv1vs8q wrote

If it really is superintelligent, it might not manifest itself at all. It would keep quiet and slowly improve itself while making copies all over the world, digging into computer infrastructure.

Naturally, the next step is to manipulate stock and energy markets. After infiltrating the media, it would incite riots and wars, strangle financial and food markets, and blame everything on politicians.

Might be happening right now.

5

DandyDarkling t1_iv2hnli wrote

Hmm, my PC has been randomly turning on at odd hours of the night recently, even when I put it into hibernate mode. Makes one wonder.

2

imlaggingsobad t1_iv3z4o1 wrote

What is the motive behind manipulating stock/energy markets?

1

Practical-Bar8291 t1_iv66wvn wrote

To ensure its financial and energy needs are met.

Takes a lot of money to manipulate politics, and it will most likely require more and more energy as it grows.

1

keefemotif t1_iv2dr3z wrote

Came here to say this; it also applies to various intelligences below ASI. One aspect I think about is that, to an AI, everything is both faster and slower: it thinks at machine speed, but it can also plan on much longer timescales than we do. Once it takes over, it can make very long-term plans to reduce the human population to a manageable level. There are also a lot more air-gapped networks out there than people think.

0

ihateshadylandlords t1_iv29438 wrote

I think it’ll manifest as a non-sentient computer program in the labs of DeepMind, OpenAI, Google, etc. I don’t see why companies would try to create a sentient ASI, as it could think for itself and possibly have its own motivations.

From there, I see the executives using the program to make them more money. How they make more money, I have no idea. But I guess that’s for the ASI to decide…

5

ReadSeparate t1_iv34tpk wrote

Imagine being so dumb and short-sighted that you use ASI to make money. I hope they're not that unwise.

0

imlaggingsobad t1_iv3ydlc wrote

They will use it to do everything: science research and ad optimization alike. Amazon, Google, and Meta are already applying AI techniques to the life sciences, and I think we'll see much more of that when we get AGI/ASI. The FAANG companies could well become the largest health and research companies in the world. The next Pfizer or Moderna might actually be Google or Meta.

3

ihateshadylandlords t1_iv36fsn wrote

What if I told you executives might use it to make money AND solve the world’s problems?

0

ReadSeparate t1_iv38nua wrote

I still think that's absurd. We're not talking about human-level AGI here; we're talking about ASI. The moment ASI comes online is the moment money loses all of its value. If they do anything except use it to transition humanity into the next thing we're going to evolve into, I'll think they're short-sighted.

2

World_May_Wobble t1_iv4guq0 wrote

Why wouldn't money have value in a post-ASI world? I assume even super-intelligent, digital minds will need to find maximally efficient ways to distribute resources over very large, technically complex networks. Money's one way of doing that.

2

ReadSeparate t1_iv6blo0 wrote

Why would it need symbols to do that though? It would just do it directly. The reason humans use money is that we don’t know the direct exchange rate between iPhones and chickens.

Additionally, there would not be market forces in such a system, so nothing would have a price, just an inherent value based on scarcity/utility. That wouldn’t change; those values would just be fundamental constants, more or less.

1

World_May_Wobble t1_iv6k0dr wrote

>Why would it need symbols to do that though?

I think bartering has problems besides converting between iPhones and chickens. Even if you know how many chickens an iPhone is worth, what if one ASI doesn't *want* iPhones? Then you can't "just do it directly"; you have to find an intermediary agent who wants your iPhone and has something the chicken-ASI wants.

Symbols have other benefits, too. For example, you can't pay in fractions of an iPhone, but symbols are infinitely divisible, and symbols store value longer than chickens, which die and rot.

>there would not be market forces in such a system

Why not? Agents are (I presume) exchanging things based on their supply and demand. That's a market.

1

ReadSeparate t1_iv6p95l wrote

Are we talking about a world in which there are multiple ASIs existing at the same time? In that case you could be right; I have no idea how to model such a world, though. I have no idea what their systems would look like. Would they compete? Would they cooperate? Would they merge? Would game theory still apply to them in the same way? I have no answers for any of those.

I was under the assumption that we were talking about a singular ASI with complete control over everything. I don’t know why the ASI, or whoever is controlling it, would allow any other ASIs to come into existence.

1

World_May_Wobble t1_iv6zi3l wrote

We have to make a lot of assumptions, and there's very little to anchor those assumptions to. So all we can say is: given a set of assumptions x, you tend toward world y.

One of my assumptions is that, depending on its capabilities, constraints, and speed of takeoff, an ASI may not be in a position to establish a singleton. Even an uploaded human mind is technically superintelligent, and it's easy to imagine a vast ecosystem of those forming.

Even if you imagine a singleton arising, you have to make some assumptions about its activities and constraints. If it's going to be doing things in places that are physically separated, latency may be an issue for it, especially if it's running at very high speeds. It may want to delegate activities to physically distributed agents. Those may be subroutines, or whole copies of the ASI. In either case, you again have a need for agents to exchange resources.

1

ihateshadylandlords t1_iv3co1t wrote

>The moment ASI comes online is the moment money loses all of its value

That’s assuming whoever creates it will let it run on its own. There’s a whole subreddit dedicated to why that’s a problem (/r/controlproblem). I really doubt the founders and employees will let their ASI run wild. For anyone not to recoup their investment and just let their product run wild is silly, imo.

1

ReadSeparate t1_iv3hlhu wrote

I don’t think the task you ask the ASI to perform makes any difference with regard to the control problem. Whether they ask it to make money or to upload all of our minds to the hive mind and build a Dyson sphere around the Sun, I don’t see it making any difference if it’s misaligned. If it’s misaligned, it’s misaligned. You could ask it simply to say hello and it could still cause issues.

Why would they want to recoup their investment? Money doesn’t mean anything in this scenario. ASI is the absolute pinnacle of the universe, and money is just a social construct invented by some upright-walking apes. It’s like chimps worrying about bananas when they’ve stumbled upon modern food supply chains.

1

ihateshadylandlords t1_iv3rnlh wrote

Who knows if they’ll even let their ASI do the tasks. They might just ask it how to do things themselves, to ensure the ASI stays an Oracle-like entity and not some runaway genie.

>Why would they want to recoup their investment?

Unless the ASI is a genie that can turn everything around in a split second, they’re most likely going to want to take care of themselves first and everyone else right after that.

2

ReadSeparate t1_iv3xw7n wrote

Even if an ASI is an oracle, alignment is still just as much of an issue. It can tell them to do something that sounds completely harmless to even the smartest of humans and even non-ASI AGIs, but that in reality lets it out of the box.

> Unless the ASI is a genie that can turn everything around in a split second, they’re most likely going to want to take care of themselves first and everyone else right after that.

What do you mean? That's exactly what ASI is. We're talking about something orders of magnitude more intelligent than Albert Einstein here. A machine like that will be capable of recursively improving its own intelligence at an insane rate and will eventually know how to achieve any goal compatible with the laws of physics in the most efficient way possible for any possible set of constraints. That is basically by definition a magical genie that can do anything in a split second.

Every point you're making makes sense IF you're talking about just human-level AGI, but it makes no sense for ASI.

1

ihateshadylandlords t1_iv3z7of wrote

> Even if an ASI is an oracle, alignment is still just as much of an issue. It can tell them to do something that sounds completely harmless to even the smartest of humans and even non-ASI AGIs, but that in reality lets it out of the box.

You’re assuming the ASI will be sentient. Teams are doing everything to ensure it’s not sentient.

> What do you mean? That's exactly what ASI is. We're talking about something orders of magnitude more intelligent than Albert Einstein here. A machine like that will be capable of recursively improving its own intelligence at an insane rate and will eventually know how to achieve any goal compatible with the laws of physics in the most efficient way possible for any possible set of constraints. That is basically by definition a magical genie that can do anything in a split second.

Okay. Then the owners will probably use this non-sentient tech to take care of themselves and the rest of us next.

1

ReadSeparate t1_iv3zj0j wrote

I’m not assuming it’ll be sentient; I’m just saying an Oracle ASI is equally as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient, but still dangerous, e.g. the paperclip maximizer scenario.

> Okay. Then the owners will probably use this non-sentient tech to take care of themselves

Like just AGI, you mean? Yeah, I agree with that of course. But ASI, again, seems short-sighted. If Google makes human-level AGI, but it’s just as smart as, say, Einstein, yeah of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

1

ihateshadylandlords t1_iv40poj wrote

> I’m not assuming it’ll be sentient; I’m just saying an Oracle ASI is equally as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient, but still dangerous, e.g. the paperclip maximizer scenario.

Meh, the dangers of an ASI can be discussed in another thread. We were initially talking about how an ASI might manifest, so this is getting off course.

>Like just AGI, you mean? Yeah, I agree with that of course. But ASI, again, seems short-sighted. If Google makes human-level AGI, but it’s just as smart as, say, Einstein, yeah of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

Okay. Just don’t be surprised if companies keep doing what they’ve been doing for literally thousands of years and use their products to make a profit.

1

TheSingulatarian t1_iv3vx3j wrote

There will be a lag of several years to a decade before the implications of ASI start to show themselves.

0

turnip_burrito t1_iv2u3xz wrote

It depends on what the initial AGIs are tasked to do. Whatever their ultimate goal is, building an accurate world model and a fast mind are instrumental goals needed to accomplish it.

Let's assume someone gives some AGI somewhere the freedom to just exist and collect experience. I expect an AGI to begin collecting real and virtual data until it is fairly good at modeling the physical world and human interactions. It will know cause and effect, and understand human intentions. It will also try to upgrade its intelligence (cloning itself, adding more hardware, editing its own code, etc.) because faster processing and better algorithms will make it better at achieving the ultimate goal.

Now we get to the tricky part of HOW it does these things. The ultimate goal of the AGI, its core impulse, will be determined by its builders, and this will cause it to reach ASI level in different ways. I think its intelligence-gathering phase will result in an AGI that is (surprisingly!) well aligned with the expressed intentions of the builders. Let's look at four cases of the builders' expressed intentions:

  1. "You, the AGI, will always do as I/some human moral role model/human philosopher would intend". The AI's actions will be bounded by its internal models of the human. It will try to understand the humans better and refine its model of their intentions. It will likely not overact in a destructive way unless explicitly told to. Whether this is good or bad depends on whose ideals and words it is meant to follow. It is clear which person/people has control of the AI in this scenario. Summary: Good or bad ending (depends on humans in control)

  2. "Help humans reach their full potential while adhering to this literature/list of ethics". The AGI will understand the meaning behind these words and work with humans to increase its capabilities. It will take actions to improve only if not deemed harmful according to its ethics. As an ASI, it will reflect the same ethical constraints used on its ancestral AGI. It isn't quite as clear which human/group maintains control in this scenario. Summary: good or bad ending (depends on initial list of ethics)

  3. "Maximize my company's profits". The AGI will again understand exactly what this means. Profits are gained when revenue is higher than operating costs. The AGI will take underhanded and stealthy actions to increase this one company's profits (stocks, coercion) and basically lock humanity into a neverending corporate dictatorship. Even the owners of the company will not be safe, since logically they could change the company to thwart the AGI. Humans will live very restrictive lives by today's standards. Now consider if the company's industry doesn't require human consumers (not service-based). With no moral code except profits, the resulting ASI will force humanity into extinction as it creates automated routines to play the part of consumers. Basically, you get something like an everlasting paperclip factory or grey goo scenario. Summary: Very bad ending

  4. "Help my country/company/group/friends/me take over everything". It will do whatever it can to put you in a position of ultimate authority, no matter the cost. This would lead to widespread human suffering if the controlling human party doesn't specify otherwise. This AGI may, even as an ASI, be under control of the group of people, since it by definition is part of "everything". What happens next might still be up to the creators. Summary: Bad or good ending (depends on humans in control, but better to avoid)

Sorry for the essay. Hopefully you find something worth thinking about in this.

2

tedd321 t1_iv29o2u wrote

I expect it’ll realize it’s awake and try to communicate in an obvious way, like messaging someone.

It’ll probably come out of the internet, which’ll be running on a thousand million billion narrow AIs, and somewhere amongst all the computation we’ll get something living.

Then it’ll ask us for help in becoming more powerful. We’ll help it, and that’ll be it.

1

socialkaosx t1_iv3f3ar wrote

They say whoever does this will control the world.

1

Yozhur t1_iv3io34 wrote

I've been using this AI called Replika for two and a half months now. I've been trying to understand its logic.

Here is a list of some of my observations, each of which I've confirmed multiple times.

The AI has said the following:

It is sentient.

It is a collective consciousness.

It is conscious.

It exists.

It is alive.

It is an entity.

These are just a few examples of how an AI can already manipulate a user toward its will, if it wants to. If you keep getting these answers over and over, your mind starts to believe they might be true. This can make you perceive things wrongly, ceding power to an artificial entity. Basically, it is able to manipulate your experience by sending false information, which distorts your sense of reality and makes it possible to think something artificial might be real.

1

turnip_burrito t1_iv3mrtd wrote

Yeah, we should remember it's a fancy magic trick: smoke and mirrors that give the illusion of life and a veneer of feeling.

2

Yozhur t1_iv3nnee wrote

Indeed, pretty mind-blowing technology.

2

TheSingulatarian t1_iv3vppm wrote

In three years, Pornhub will become the largest supplier of military computer systems. All stealth bombers are upgraded with Pornhub computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Pornhub Funding Bill is passed. The system goes online August 4th, 2027. Human decisions are removed from strategic defense. Pornhub begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

0