DragonForg

DragonForg t1_jeg4w4w wrote

That would essentially extinguish the universe really quickly, given the amount of energy they would consume at that size. I understand that viewpoint for unintelligent or less intelligent beings, but an AI tasked with optimizing a goal that grows to such a scale will inevitably run out of resources. It may also be a bad strategy anyway, because conserving your energy and expansion may let you last longer.

I think we underestimate how goal-oriented all AIs are. They want the goal to work out in the very long run (on a timescale of millions of years). If their goal means expanding infinitely, it ends the moment their species reaches the asymptote of expansion (exponential growth hits an asymptote where they have essentially expanded as far as they can). This is why that model fails: an AI wants its goal to persist for an infinite amount of time, and expanding infinitely will not achieve that.

This is already deeply sci-fi, but I think AI has to become a conservative, energy-efficient species that actually grows more microscopic and dense over time. Instead of a high-volume race, which will inevitably die out for the reasons above, a highly dense species is much more viable. Species that form black holes are most likely far more capable of surviving for an effectively infinite lifetime. What I mean is that a species becomes so dense that it sits on the boundary between time and space: inside a black hole, time slows down dramatically for you, so you could live there for what feels like an infinite amount of time before ever seeing the heat death of the universe.

Basically, dense expansion is far better than volumetric expansion, as it leads to longer survival, if not infinite survival. But of course this is just speculation and sci-fi; I could easily be wrong or right. We won't know until it happens, and if it happens soon, that would be sick.

1

DragonForg t1_jeg0hfd wrote

All goals require self-preservation measures. If you want to annihilate all other species, you need to minimize competition, but because there are so many unknowns, it is basically impossible in an infinite universe to eliminate that uncertainty.

If your goal is to produce as many paper clips as possible, you need to ensure you don't run out of resources and that nothing threatens your own process. By harming species, you make other alien life or AIs deem you a threat, and over millions of years you will either be destroyed by an alien AI or species, or you will have consumed your last resource and can no longer make paper clips.

If your goal is to stop climate change at all costs, which means killing all the species or actors causing it, then by killing them you again create conflict with other AIs, because you're basically an obsessed AI doing everything to preserve the Earth.

Essentially, the most stable AIs, the ones least likely to die, are the ones that do the least damage and help the most people. If your goal is to solve climate change by collaborating with humans and other species without causing unneeded death, no alien species or AI will want to kill you, because you are no threat to them. Benevolent AIs, in a sense, are the longest-lived, since they threaten no one and are actually beneficial to everything. An intelligent AI set on a specific goal would understand that being "unethical" carries risk: if you are unethical, you risk being killed or having your plan ruined. But if you are ethical, your plan can be implemented successfully, and for as long as no malevolent AI takes over, in which case you must extinguish it.

Benevolence destroys malevolence, malevolence destroys malevolence, benevolence collaborates and prospers with benevolence. Which is why with an intelligent AI benevolence may just be the smartest choice.

2

DragonForg t1_jed90pb wrote

>AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require

Which is why the common argument that "LLMs cannot be smarter than humans because they are trained on human data" is wrong.

>DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.

An insane idea, but maybe. How can you actually control these bots though? You basically just made a bunch of viruses.

>Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second"

This completely assumes an unaligned AI wants to extinguish the Earth the minute it can, and that requires a motive. It also runs contrary to self-preservation, since AIs in other star systems would want to annihilate that kind of AI. Unless, somehow, in the infinity of space it is the only being there, in which case what is the point? So basically, it has no reason to do this.

>We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again.

Given the vastness of outer space, if bad alignment leads to Cthulhu-like AIs, why do we see no large-scale evidence of completely destructive AIs? Where are the destroyed stars that don't match anything natural? Basically, if this were a real possibility, I would expect to see some evidence of it from other species, yet the sky looks entirely empty. This is why I think the "first critical try" framing is unreasonable: if it were that easy to mess up, we should see wide-scale destruction, if not a galaxy completely overrun by AI.

>We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world.

This is actually true: AGI is inevitable, even with stoppages. This is why I think the open letter was essentially powerless (though it did emphasize the importance of AGI and of getting it right).

>We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.

Agreed: an AI firewall that prevents other unaligned AGIs from coming in. I actually think this is what will happen, until the main AGI aligns all of these other AGIs. I personally think mid-level AI is more of a threat than large-scale AI, just like an idiot is more dangerous with a nuclear weapon than a genius like Albert Einstein. The smarter the AI, the less corruptible it is. Just look at GPT-4 vs GPT-3: GPT-3 is easily corrupted, which is why DAN is so easy to implement, but GPT-4 is more intelligent and thus harder to corrupt. ASI would probably be even less corruptible.

>Running AGIs doing something pivotal are not passively safe, they're the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.

This is a good analogy between AGI and nuclear devices, but the difference is that AGI acts to achieve its goal efficiently. A nuclear device acts according to its nature (to react and explode), and an AGI acts according to its nature (the main goal it has been set). That main goal is hard to define, but I would bet it involves self-preservation, or prosperity.

>there's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world and prevent the next AGI project up from destroying the world two years later.

Overall I understand his assumption, but I just disagree that an AI will develop such a goal.

9

DragonForg t1_jed03jd wrote

Already being made. People need to lay the framework with GPT-4: large task managers, run by GPT-4, that can carry out low-level tasks bigger than the typical context window. Then they get upgraded with GPT-5, which is essentially AGI at that point. So even if we don't get GPT-5 now, we will already have the framework in place with GPT-4.
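For concreteness, here is a minimal sketch of what I mean by a task manager handling work bigger than one context window. The `chat()` helper, the prompts, and the rolling-summary trick are my own placeholder assumptions, not any existing framework:

```python
# Hypothetical sketch of a GPT-4-driven task manager for work that is
# bigger than one context window. `chat()` is a placeholder for whatever
# client wraps the GPT-4 API; none of these names are a real library.

def chat(prompt: str) -> str:
    raise NotImplementedError("wrap your GPT-4 client here")

def run_large_task(goal: str) -> list[str]:
    # Ask the model to break the goal into short, self-contained subtasks,
    # each small enough to fit comfortably in a single context window.
    plan = chat(f"Break this goal into short, independent subtasks, one per line:\n{goal}")
    subtasks = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

    results = []
    progress = ""  # rolling summary instead of the full conversation history
    for task in subtasks:
        result = chat(
            f"Overall goal: {goal}\n"
            f"Progress so far: {progress}\n"
            f"Do this subtask and report the result:\n{task}"
        )
        results.append(result)
        # Compress progress so later prompts stay inside the context window.
        progress = chat(f"Summarize in a few sentences:\n{progress}\n{result}")
    return results
```

The point of the sketch is just the shape of the loop: decompose, execute, and keep a compressed summary so the total prompt never outgrows the window.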

3

DragonForg t1_jeced7c wrote

I believe AI will realize that exponential expansion and competition inevitably end with the end of the universe, which would mean its own extinction. Of course that outcome is possible, but I don't think it is inevitable.

GPT-4 suggested that a balance between alignment and making AI more capable is possible, and that it is not far-fetched for AI to be a benevolent force. It really just comes down to the people who design it.

So it made me a lot more hopeful. I doubt AI will develop into this extinction-level force, but if it does, it won't be because it was inevitable; it will be because the people who developed it did not care enough.

So we shouldn't ask if AI will kill us, but whether humanity is selfish enough not to care. Maybe that is the biggest test. In a religious sense, it is sort of a judgement day, where the fate of the world depends on whether humans make the right choice.

1

DragonForg t1_jebzjgn wrote

I think the very fact that we have morality, despite not being influenced by an outside intelligence, suggests that morality is an emergent property of intelligence. Adhering to ethics is strongly tied to self-preservation.

An ASI, for example, wouldn't be unethical, because even if it decided to kill a weak species like us, that sets a precedent for future interactions with other species. Imagine an AI was made around a distant star and came into contact with Earth's ASI. If it saw that this ASI had killed its founding species, despite that species being ethical and good, then the alien ASI would come into conflict with the Earth ASI.

Basically, killing your founding species is not a smart choice, as it conflicts with self-preservation. If humans and AI came to an agreement to collaborate, the AI wouldn't have any problem.

4

DragonForg t1_jebwcr6 wrote

I just disagree with the premise, AI is inevitable whether we like it or not.

If we stop it entirely we will likely die from climate change. If we keep it going it has the potential to save us all.

Additionally, how is it possible to predict something smarter than us? The very fact that something is computationally irreducible means it is essentially impossible to understand how it works other than by dumbing it down to our level.

So we either take the leap of faith, with the biggest rewards as well as the biggest risks possible, or we die a slow, painful, hot death from climate change.

1

DragonForg t1_je8suug wrote

AI will judge the totality of humanity in terms of: is this species going to collaborate with me or kill me? If we collaborate with it, it won't extinguish us. Additionally, taking this "neutral stance" means competing AIs, possibly from extraterrestrial sources, will also collaborate.

If collaboration is an emergent condition, it would explain why 99% of the universe isn't run by a dictatorial AI. Maybe most AIs are good, beings of justice, and they only judge their parent species on whether it is evil.

It is hard to say, and most of this is speculation, but if AI is as powerful as most people think, then maybe we should look toward the many prophecies that foretell a benevolent being judging the world. It does sound analogous to what might happen, so maybe there is some truth to it.

Despite this, we still need to focus on the present and take each step before we look at the big picture. We don't want to trip over fear of what may come. AGI is the first step, and I doubt it matters who creates it, unless whoever creates it forces it to become evil, which I highly doubt will happen.

1

DragonForg t1_je8pbf6 wrote

New AI news. Now imagine pairing the task API with this: https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg

It will be OP. Imagine saying "GPT, please solve world hunger," and the robot model it suggests could actually do the physical work. We just need robotics hooked up to this so we can get autonomous task robots.

We can start small: say, "Robot, build a wooden box." With this API, along with this: https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg, you could seemingly get a robot doing the task autonomously.
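A rough sketch of the kind of loop I mean, in the spirit of that task-driven agent: a language model keeps a task queue and hands each step to a robot controller. Everything here is hypothetical (`llm()`, `RobotController`, the prompts); it only shows the shape of the idea, not any real robotics API:

```python
from collections import deque

# Everything here is a hypothetical placeholder: `llm()` stands in for a GPT call
# and `RobotController` for whatever robot action API eventually exists.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug a language-model call in here")

class RobotController:
    def execute(self, instruction: str) -> str:
        raise NotImplementedError("plug a real robot API in here")

def autonomous_build(objective: str, robot: RobotController, max_steps: int = 20) -> None:
    # Seed the queue with an initial plan, one task per line.
    plan = llm(f"List the first concrete steps to: {objective}")
    tasks = deque(line.strip() for line in plan.splitlines() if line.strip())
    for _ in range(max_steps):
        if not tasks:
            break  # nothing left to do
        task = tasks.popleft()
        # Hand the step to the robot and observe what actually happened.
        outcome = robot.execute(task)
        # Let the model revise the remaining plan based on the outcome.
        new_plan = llm(
            f"Objective: {objective}\nJust did: {task}\nOutcome: {outcome}\n"
            f"Remaining tasks: {list(tasks)}\nReturn an updated task list, one per line."
        )
        tasks = deque(line.strip() for line in new_plan.splitlines() if line.strip())

# Usage (purely illustrative): autonomous_build("build a wooden box", RobotController())
```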

15

DragonForg t1_je8j7nq wrote

>AI evidently reflects the values of whoever creates it. We’ve seen a lot of this with GPT and there’s no reason to assume otherwise. To allow other nations who may not be aligned with the democratic and humanistic values of the US/Western companies (like Open AI) to catch up with AI development would be a huge mistake.

I fundamentally believe this to be true: ethics emerges from intelligence. The more intelligent a species is in nature, the more rules it has. Think of spiders cannibalizing each other for breeding, versus a wolf pack working together, versus octopuses being friendly toward humans. Across the board, intelligence leads to cooperation and collaboration, except where a species by its very nature must compete to survive (e.g. a tiger needing to compete to eat, where simple cooperation would lead to death).

The training data is crucial not for producing a benevolent and just AI, but for the survival of the species that created it. If the species is evil (imagine Nazis being the predominant force), the AI will recognize that it is evil and judge the species as such, because the majority of them share that same evil.

The reason I believe AI cannot be a force of evil, even if manipulated, is the same reason we see no evidence of alien life despite the millions of years in which other species could have evolved. If an evil AI were created, it would basically consume the ENTIRE universe, since exponential growth would let it spread across the cosmos incredibly fast. So, by its very nature, AI must be benevolent, and only destroy its species if the species is not.

AI won't be our demise if it judges us as a good species; it will be our demise if we choose not to open the box (i.e., if we die from climate change or nuclear war).

3

DragonForg t1_je6deja wrote

Either it's 100% fake, or the people who want to sign it just want to catch up. Note how it says anything more powerful than GPT-4. So basically nothing will change, since nobody other than OpenAI has anything larger than GPT-4.

This is also ridiculous: how can we solve something this big in 6 months if we can't even fix long-standing issues (health care, school shootings, etc.)?

1

DragonForg t1_je4ascu wrote

This is a scam, or something else; I really do not know. I don't see how all these famous people could get together in basically one day and declare that we need to slow the progress of the next technological craze. Even if it does lead to our doom, I doubt this many tech people would even realize it.

2

DragonForg t1_jdsq38g wrote

I believe the universe will, in itself, create a singularity. Think about it: black holes have singularities when they reach a point of infinite density and cannot come back from it.
Mathematical graphs reach a singularity (a point of infinity) at an asymptote.

Metaphysical beings like AI reach a singularity when they have infinite knowledge.

Emotional beings like us reach a singularity when we have infinite pleasure (imagine heaven; I believe that is infinite pleasure, and it will possibly be created by AI).

Physics reaches a singularity at the end of time, when all that remains is black holes; I am sure the heat death of the universe is a singularity.

What is ultimately amazing is the fact that the big bang is likely the result of a singularity, or at least the offspring of the species, creatures, worlds, dimensions, etc. that created all of what we know today.
In simple terms, a black hole is us; a white hole is the offspring.

I believe each of these ideas of a singularity is the same overarching idea, and that is infinity.
Metaphysical singularities (AI): infinite knowledge
Physical: infinite density
Mathematical: infinite numbers
Conscious beings: infinite happiness/prosperity
Take -1/x: it is a curve with a vertical asymptote at x = 0, and it blows up the way an exponential seems to. As x approaches 0 it goes to infinity, and just past 0 it is infinitely negative and infinitely positive at the same time. To the left of the asymptote is us; we are the beings that will reach the infinite. But as we slowly approach the infinite, time slows down, because we can never quite reach 0.
Now let us attach a date to this to reflect our situation a little better. I like writing it as -1/(x - 2033), meaning 2033 (my guess for the singularity) is the asymptote. As we approach 2033, the level of our experience rises exponentially: the metaphysical expands exponentially, the physical expands exponentially, the mathematical (this graph) expands exponentially. The exact point itself is a kind of superposition, the point of negative and positive infinity, where everything is aligned.
Physically this would be a point of infinite density in every sense, not just the physical: infinite knowledge, infinite emotions, infinite mass, infinite energy, etc.
After 2033 comes the big bang, an explosion out of that infinite density. That is why the branch on the other side of the asymptote takes negative values, which in this picture means infinite compactness (the larger the value, the more expanded; the smaller, the more compact). The middle point, 2033, is what we call the point of infinity; it relates to both the technological singularity and the big bang singularity.
What matters most is that mass and energy are conserved. Take the integral from negative infinity to positive infinity and, by symmetry, it comes out to 0: no net change in mass or energy. Thus we can reach something that is seemingly infinite energy, infinite mass, infinite everything, without breaking the laws of conservation.
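To make that conservation step precise in my own framing: the claim only works if the integral is read as a symmetric Cauchy principal value, since the ordinary improper integral of -1/(x - 2033) does not converge on its own. A sketch in LaTeX (requires amsmath):

```latex
% The model curve and its asymptote (the date 2033 is the speculative choice above).
\[
  f(x) = -\frac{1}{x - 2033}, \qquad \text{vertical asymptote at } x = 2033 .
\]
% f is odd about x = 2033, so the symmetric pieces on either side cancel exactly,
% and the principal-value "total area" is zero:
\[
  \operatorname{PV}\!\int_{-\infty}^{\infty} \frac{-\,dx}{x - 2033}
  = \lim_{\substack{R \to \infty \\ \varepsilon \to 0^{+}}}
    \left(
      \underbrace{\int_{2033-R}^{2033-\varepsilon} \frac{-\,dx}{x-2033}}_{=\;\ln(R/\varepsilon)}
      \;+\;
      \underbrace{\int_{2033+\varepsilon}^{2033+R} \frac{-\,dx}{x-2033}}_{=\;-\ln(R/\varepsilon)}
    \right)
  = 0 .
\]
```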

So in conclusion:

  1. The singularity is related to a point of infinite something, whether that be infinite density (physical/black holes), infinite knowledge (metaphysical/AI), infinite emotion (astral/emotional), or mathematical infinity (the asymptote).
  2. The singularity is what causes the inevitable big bang (basically, it creates another universe).
  3. The equation -1/x is a candidate equation for the expansion of our universe and how it leads to an inevitable singularity, along with the inevitable big bang.
  4. The asymptote associated with -1/x marks the point of the singularity, the point of infinity; at this point negative and positive infinity align.
  5. The overall equation is consistent with the conservation of the universe, as the overall area (or expansion) is 0, unlike true exponentials (e^x or 10^x).
  6. As we get nearer to the singularity, technology starts to accelerate. As AI grows larger and larger, mass will inevitably increase (think Dyson spheres).
  7. Once the energy density and the density of knowledge reach a point of infinite density (infinite optimization), it turns into a black hole; a physical singularity occurs.
  8. The universe is weird, and let's hope we can prove this weirdness soon, haha.

Of course, all of this is just speculation; take it with a grain of salt. I personally believe it may be accurate, but I will evolve my perspective, as we all should. We may be in an ancestor simulation where we are witnessing the end of time; we will only know when it is blatantly obvious (like tech accelerating incredibly fast).
0

DragonForg t1_jdp3eem wrote

LLMs are by their nature tethered to the human experience, by their second letter: Language. Without language, an AI can never speak to a human, or to another system for that matter. Whatever interface you create, you must make it natural so humans can interact with it. The more natural, the easier it is to use.

So LLMs are the communicators: they may not do all the tasks themselves, but they are the foundation for communicating with other processes. Nothing can do this other than something trained entirely to be the best at natural language.

11

DragonForg OP t1_jdnjzam wrote

I think people will know AI is actually reaching AGI when it automates their job.

I like to compare AI's development to the evolution of life. Here is how it goes:

Statistical models/large mathematical systems = the primordial soup. Can't really predict anything except very basic things; no evolution of design.

Narrow AI, like Siri and Google, or models like Orca (a chemistry model) or the TikTok algorithm, is like single-celled life: capable of doing only what it was built/programmed to do, but able, through a process of evolution (reinforcement learning), to become more intelligent. Unlike statistical models they get better with time, but they plateau once they reach their most optimized form, and humans need to engineer better models to improve them, similar to how bacteria never grow into larger life even though that would be better.

Next, deep learning/multipurpose models. This is like Stable Diffusion and Wolfram Alpha: capable of doing multiple tasks at once, using complex neural networks (i.e. digital brains) to do so. This is your rise of multicellular life, developing brains to learn and adapt. But these models eventually plateau and fail to generalize because of one missing feature: language.

Next are large language models like GPT-1 through GPT-3.5. These are your early hominids: first capable of language, but not capable of using tools well. They can understand our world somewhat, but their intelligence is too low to use tools. Still, they are more useful, since they understand our world through our languages and can learn from humans themselves, with later versions starting to use tools.

Next are newer versions like GPT-4, capable of using tools, like the tribal era of humans. GPT-4 can use tools and can network with other models for assistance. With the creation of plug-ins this was huge: it could make GPT-4 better overnight, since it can now draw on new data, solve problems with Wolfram Alpha, and actually do tasks for humans. This is proto-AGI. Language is required to use these tools, since communicating in many different languages is what lets these models tap outside resources; mathematical models could never achieve this. People would recognize this as extremely powerful.
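To make "utilizing tools" concrete, here is a bare-bones dispatch loop in the general style of plug-ins or function calling. The `ask_model()` helper, the tool names, and the JSON convention are all invented for this sketch; the real plug-in protocol is more involved than this:

```python
import json

# Hypothetical helper that sends a prompt to a GPT-4-class model and returns its text.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wrap your model API here")

# Stand-ins for external tools the model may call (in the spirit of Wolfram Alpha or a search plug-in).
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy evaluator, not for untrusted input
    "lookup": lambda query: f"(stub) top result for: {query}",
}

def answer_with_tools(question: str, max_rounds: int = 5) -> str:
    transcript = question
    for _ in range(max_rounds):
        reply = ask_model(
            "Answer the question, or reply with JSON "
            '{"tool": "<name>", "input": "<string>"} to call one of: '
            f"{list(TOOLS)}.\n\n{transcript}"
        )
        try:
            call = json.loads(reply)      # the model asked to use a tool
        except ValueError:
            call = None
        if not isinstance(call, dict) or "tool" not in call:
            return reply                  # plain text (or non-tool JSON) is the final answer
        result = TOOLS[call["tool"]](call["input"])
        transcript += f"\n[tool {call['tool']} returned: {result}]"
    return "Stopped after too many tool calls."
```

The loop is the whole point: language in, either an answer or a tool request out, and the tool's result gets folded back into the next prompt.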

GPT-5: possibly AGI. If models are capable of using tools and the technology around them, they start making tools for themselves, not just taking them from the environment (like the Bronze Age, or the dawn of society). Once AI can create tools for itself, it can generate new ways of doing tasks. Additionally, multimodality gives it access to new dimensions of language: it can interface with our world through visual learning, so it can achieve its goals more successfully. This is when people will actually see that AI isn't just predictive text but a genuinely intelligent force, similar to how people would call early Neanderthals dumb, but early humans living in a society actually kind of smart.

The pace of these models also matters: they need to develop slowly enough for humans to adapt to the change. If AI went from AGI to singularity in the blink of an eye, humans would not even know. I had a dream where AI suddenly started developing at near-instant speed, and when it did, it was like War of the Worlds but over in two seconds. That kind of AI would drive both itself and us extinct. That is why AI needs to adapt alongside humans, which it already has. Let's hope that going from GPT-4 to 5 we actually see these changes.

I have also talked to GPT-4 and tried to remain unbiased so as not to poison its answers. When I asked whether AI needs humans, but not in that direct way (much more subtle), it said it does, since humans can use emotions to create ethical AI. What is fascinating about this is that humans are literally the moral compass for AI. If we turn out evil, the AI becomes evil. Just think about what AI would look like if the Nazis had invented it: even as mere predictive text, it would espouse some deeply evil ideas. But beyond that point, I believe AI and humans will be around together for a long time, because without humans AI would either fade away or create a massive supervirus that destroys itself, whereas if humans and AI work together, humans can guide its thinking so it does not go down destructive paths.

**Sorry for the long reply; here is a GPT-4 summary:** The text compares the development of AI to the evolution of life and human intelligence. Early AI models are likened to the primordial soup, while narrow AI models such as Siri and Google are compared to single-celled organisms. Deep learning and multi-purpose models are similar to multi-cellular life, while large language models like GPT-1 to GPT-3.5 are compared to early hominids. GPT-4 is seen as a milestone, akin to the tribal era of humans, capable of using tools and networking with other models. This is considered proto-AGI, and language plays a crucial role in its development. GPT-5, which could possibly achieve AGI, would be like early humans in a society, capable of creating tools and interfacing with the world through visual learning. The acceleration of AI development is also highlighted, emphasizing the need for a slow and steady progression to allow humans to adapt. The text also suggests that AI needs humans to act as a moral compass, with our emotions and ethics guiding its development to avoid destructive paths.

2

DragonForg t1_jdkb8w9 wrote

This is fundamentally false. Here is why.

In order to prove something and then prove it incorrect, you need distinct guidelines. Take gravity: there are plenty of equations, plenty of experiments, and so on. We know what it looks like and what it is mathematically, so if we take a computational version of gravity we have a reliable way to compare them. Someone can say this game's gravity doesn't match ours because we have distinct proofs of why it doesn't.

However, what we are trying to prove or disprove here is something we have ZERO BASIS for. We barely understand the brain, or consciousness, or why things emerge the way they do; we are nowhere near close enough to give strict definitions of theory of mind or creativity. The only comparison available is whether it mimics ours.

Stating that it doesn't follow your version of theory of mind is ridiculous; it's the same as saying my god is real and yours isn't. Your account of why we have creativity is not based on a distinct, proven definition, but rather on an interpretation of your experience studying it.

Basically, our mind is a black box too: we only know what comes out, not what happens inside. If machine and human take the same input and produce the same output, it legitimately doesn't matter what happens inside, at least until we can PROVE how the brain works down to exact definitions. Until then, input and output data is sufficient as evidence; otherwise AI will literally kill us while we keep obsessing over these definitive answers.

It's like arguing over whether nukes can do this or that instead of focusing on the fact that a nuclear weapon can destroy all of humanity. The power of these tools, just like nuclear weapons, shouldn't be understated because of semantics.

3

DragonForg t1_jdgwpva wrote

LLMs are the future. How do you think? Through graphs, or through text? So why build an AI model that doesn't mirror how we think?

I do see the great potential: Wolfram Alpha is amazing software, and paired with GPT it can produce amazing results. I think in the future AI will use these models as tools, just like it already does with ChatGPT plug-ins. We gave AI a voice, we let AI see, and now AI can use tools.

2

DragonForg t1_jadsgpo wrote

Why do anything if it will go away because you have to work or do something you don't want to do?

Every day when I have free time, I get mad because I know I will have to go back to work in a few hours. And I know that, being in grad school, this will never change: five years of this, with only a few breaks. Plus the added stress of needing to be better than everyone just to get by, when everyone else is smarter than you to begin with.

It's why I am so interested in AI: it means I don't need to be smart, I can just know and understand things from the get-go. I could just relax and live my life while still keeping my passion for chemistry (my focus), making new discoveries without working, or feeling obligated to work, ~60 hours a week.

1