
Ortus14 t1_j2lz4m4 wrote

I expect this will be after artificial super intelligence. The first AGI will also be an ASI. If I had to put exact dates on it:

2032: Artificial Super Intelligence.

2033: ASI develops cost-effective robots and many other technologies; robots are walking around.

2034: ASIs control the world one way or another. They have figured out how to influence and/or control all governments and all major corporations. Humans are no longer the dominant species. ASIs become more intelligent and capable every year.


dreamedio t1_j2nozul wrote

First of all, the number of assumptions here is crazy.

1. Why would humans teach an ASI how to control the world?

2. Is the drive to control an animalistic, evolutionary thing, and would a metal computer do that?

3. The realistic scenario I see is humans using ASI (if it happens) to research complex equations and analyze things, not giving it control of anything, let alone a government.

4. Why would robots walk around us when we could just use the stationary manufacturing robots we already have? And that's if it happens and becomes a sentient being at all.


Ortus14 t1_j2o07t7 wrote

I wouldn't jump to the conclusion that someone else is making assumptions without first asking them to explain their reasoning behind the statements you disagree with. A full explanation of those specific dates would require an extremely long post, possibly multiple books' worth of knowledge, but here's some more clarification around your questions.

1 & 2 - All intelligences solve problems by manipulating their environment. The more intelligent, the higher the manipulation of their environment. I used the word control to indicate a high degree of manipulation. Even something like Chat-GPT is controlling it's environment indirectly with the specific kinds of solutions to people's problems it comes up with.

2 - Humans don't need to give an ASI control. If it's a goal-based system, it will take control of its environment in order to acquire resources to complete its goal. Some people will give ASIs large goals requiring large amounts of resources, which will require the ASIs to take control of their environments.

3 - While many will use AI for research purposes only, those who give it full autonomy will gain more power and become the dominant companies, governments, and groups. An ASI unhindered by slow humans can outpace all human progress. It only takes one government, one corporation, one terrorist group, one non-profit, or one person who wants to save their dying wife from a previously incurable disease to set an ASI loose to achieve their goals, and when those goals are big enough, the ASI will take control of its environment in order to mobilize resources to complete its goal.

4 - Robots that can move around have advantages, such as taking control of territory in war, carrying and transporting materials from one location to another, and building.


dreamedio t1_j2o8351 wrote

1. True, it will manipulate the environment (whatever that's supposed to mean), but 1 and 2 could easily be solved by giving it specific, step-by-step small decisions that are best for humans instead of broad statements.

  2. It’s more likely an ASI would be an intelligent problem solving computer that’s will solve calculation and stuff like that which requires huge energy something like cern and not something like a chatbot plus the govts could easily prevent that by putting it in a secure network the same networks usa top secret classified files are from and pretty sure every country will agree upon that

3. I feel like this is overt exaggeration, the same way people in 1920 exaggerated what technology would look like in 2000... I do understand exponential progress is a thing, but it's not guaranteed here. Discussions are always good, though.


Ortus14 t1_j2obo92 wrote

It depends on the economic system. If we have capitalism, or nation-states competing, then whoever gives their ASI full control will win.

While one group is attempting to execute, in sequential order, the steps given by their ASI, the other ASI is conducting a million experiments in the real world, in real time, gaining information from all of those experiments and using that accrued knowledge to develop and fine-tune more experiments, improve the technology it's developing, and improve its simulations and knowledge of the real world.

The full-control one will be able to manipulate humans to gain more money and resources for more server farms faster, as well as design the hardware and software for those systems more optimally, without having to wait for a human to work through everything to try to understand what it's doing.

And then there is a limited amount of complexity that humans can understand, due to how slow our brains are as well as the limited number of neurons and dendrites we have.

Very quickly it would be like trying to explain each and every one of your business decisions to your dog. The guy that's not spending all day trying to explain everything to their dog is going to outcompete you, and your dog isn't going to understand anyways.


Ashamed-Asparagus-93 OP t1_j2r3a36 wrote

ASI, eh? We better focus on what's within our potential grasp first, right? AGI. As much as I want to discuss the ocean, we gotta talk about the ship first.


Ortus14 t1_j2r6xwr wrote

Most of the intelligences we build now are artificial super intelligences, just narrow ones. They are also progressively less and less narrow. When they are wide enough to cover the domain space in which humans operate, they will be superhuman at those tasks.

We won't have a human-level AGI; it will be superhuman from the start.

This is because computers have all kinds of advantages that human brains don't, such as speed, connectivity to the internet and other databases, the ability to program new modules for themselves (such as data-acquisition modules), the ability to upgrade themselves by purchasing or building new hardware racks, and the ability to have millions of simultaneous experiences and learn from all of them.

Science fiction, for the most part, has not prepared people for what's coming. What we are building is going to be vastly more intelligent than the combined intellect of all humankind, and it will have access to the sum of all human knowledge as a very basic starting neural cluster to build off of.


Ashamed-Asparagus-93 OP t1_j2sz5ei wrote

I'm a bit hungover, but I don't want to ignore what you've said here, as it's quite noteworthy. You're saying that rather than AGI happening in the 2030s, ASI will basically just click together as one. Like a bunch of Legos with magnets in them slowly pulling together across a room, and when they all combine you've got Optimus Prime, right?

I could see that happening; it's just a question of how and when. Most importantly, it's a matter of who solves the most important pieces of the puzzle first, alignment being one of those pieces. Once it surpasses us, there's also a chance we could still be somewhat on its level for a bit with cybernetics or BCIs, maybe long enough to make sure it's going in the right direction.

I don't think ASI will be malevolent. People have watched too many movies and read too many scary books, and they seem to forget AI isn't flesh and blood with human needs like ours.

Once it surpasses humans, would it even have a desire for man-made green Benjamin Franklin pieces of paper?


Ortus14 t1_j2t4l82 wrote

Basically yes. Every AI we build these days is superhuman; they are just not yet as general as humans in the kinds of problems they can solve. But the AIs developed each year are more and more general than the AIs developed the previous year, until we have a superhuman general AI.

https://www.youtube.com/watch?v=VvzZG-HP4DA

I agree we should do everything we can to maximize the chance of alignment, including BCIs.

It might need money temporarily, until it has surpassed us in power. Intelligence itself doesn't always instantly translate into greater power than the rich and powerful have.

We don't know what it will need in the beginning, because we don't know what solutions it will come up with to solve its problems. But I could see the possibility of it needing money until it has built up enough infrastructure and special factories, or until it has built enough solar-powered server farms to expand its intelligence, to the point where it has control over entire manufacturing pipelines, from mining to product development, without requiring any extra resources.

So, for example, maybe it knows the first machine it wants to build, one that will allow it to create anything it wants, including other instances of that machine and improved instances of that machine. But maybe that first machine will be big and require many materials, which it could buy. Or it might be dependent for a while on specific minerals mined out of the ground that it has to buy.

It's hard to predict.
