Submitted by karearearea t3_127dn62 in singularity

GPT-4 shows 'sparks of AGI', and if GPT-5 isn't AGI then surely GPT-6 will be. However, I'm not convinced GPT-7 will be much smarter.

I was thinking about the dataset the GPT models are trained on - the entirety of the internet and all of human writing - and trying to work out what the limit of a model trained on that dataset would be. Current machine learning practice trains on a training set and uses a held-out test set to prevent overfitting, so the model generalises within the domain of the training data. The GPT models are trained on human-intelligence-level text. What would perfect generalisation to that training data look like? I believe it would mean the model could replicate any text that could conceivably be written by a human-level intelligence.
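To make the train/test point concrete, here's a toy illustration - not the actual GPT setup, just a character-level bigram model over a made-up corpus, showing what 'fit on the training split, check generalisation on a held-out test split' means:

```python
# Toy sketch of train/test methodology. The corpus and the bigram "model"
# are stand-ins for real data and a real neural net.
import math
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog " * 200  # stand-in for "all human text"
split = int(0.9 * len(corpus))
train, test = corpus[:split], corpus[split:]

# "Training": count character transitions on the training split only.
counts = defaultdict(Counter)
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def avg_log_loss(text):
    """Average next-character negative log-likelihood under the bigram model."""
    total, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        seen = counts[a]
        # Laplace smoothing (nominal 256-char alphabet) so unseen transitions
        # don't give infinite loss.
        p = (seen[b] + 1) / (sum(seen.values()) + 256)
        total += -math.log(p)
        n += 1
    return total / n

# If held-out loss tracks training loss, the model generalises within the
# domain of the training data - and that domain is capped at human-written
# text, which is the point of this post.
print("train loss:", avg_log_loss(train))
print("test loss:", avg_log_loss(test))
```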

That means it would be at basically human expert level on every subject. That will be a paradigm shift for society, with everyone able to consult an expert on almost any subject practically for free. The nature of many jobs and industries will also obviously change, with AI speeding up people's jobs by huge amounts and replacing many others outright. A model that is a human-level expert on every subject is already a 'super-intelligence' compared to the average human, but not in a singularity-inducing way. Human experts would still be as smart as the AGI in their areas of expertise, and there are thousands of human experts in every subject.

But for the AGI to get smarter than expert humans using the current approach to training the GPT models, it would need to be trained on a body of text containing more knowledge than they have. If we had text written by a super-intelligence, I have no doubt that, with a large enough model and enough compute, the GPT architecture could be trained to super-intelligence.

But we don't have that dataset, and I don't think it is going to be easy to produce either. The AGI will be able to come up with an almost infinite number of scientific hypotheses, but each one will still take large, expensive experiments to verify, and it will likely be wrong often, just as we humans are. So creating an ASI via the methods used to create AGI is not going to be easy, as we lack the necessary terabytes of text written by a super-intelligence. Could we get there with other methods, like reinforcement learning? Maybe, but we haven't built a reinforcement learning agent that's gotten anywhere close to AGI, so it would require an entirely different approach to the one used to create GPT-4.

I think we'll get there eventually, but I don't think it will be just after AGI is created. I think it will come many years later, as a gradual process, at least if the current approach is used.

tl;dr: GPT is trained on a dataset consisting of human-level-intelligence text. I think that means if you scale it up as much as possible, the limit is perfect ability to create human-level-intelligence text. If we had super-intelligence-level text, we could train it to be super-intelligent - but we don't, and I don't think it will be easy to create.

21

Comments


ItIsIThePope t1_jedrsmk wrote

Well, that's why AGI is a cornerstone for ASI: if we can get to AGI - an AI with human-level intelligence but far superior processing power and thinking resources in general - it would essentially be able to advance itself to become super-intelligent.

Just as expert humans continuously learn and get smarter through knowledge gathering (the scientific method etc.), an AI would learn, experiment, and learn some more, only this time at a far, far greater rate and efficiency.

Humans now are smarter than humans then because of our quest for knowledge and the methods we developed for acquiring it; AGI will adhere to the same principles but boost progress exponentially.

47

chrisc82 t1_jeebekc wrote

This is why I think there's going to be a hard (or at least relatively fast) takeoff. Once AGI is given the prompt and the ability to improve its own code recursively, what happens next is truly beyond the event horizon.

14

ItIsIThePope t1_jeeipqy wrote

It really is wild, considering the AGI will be in the same awe as us when it finally creates ASI!

6

MayoMark t1_jeg8gkl wrote

Coding its own simulations could help AI learn some things, but some fields, like quantum mechanics, cosmology, biochemistry, and neuroscience, would probably still require physical experimentation. AI could help with that and even suggest experiments, but it would still need the results of the experiments to reach conclusions.

3

Beneficial_Fall2518 t1_jee37mg wrote

AGI will design and program ASI. True AGI is the last invention humans will ever create.

16

Andriyo t1_jedw5r5 wrote

Right, that's why AI needs to be multimodal and able to observe the world directly, bypassing the text stage.

We use text for learning today because it's trivial to train and verify with text. But I think you're right that we will hit the limit of how much knowledge there is in those texts.

For example, ChatGPT might be able to prove that Elvis is alive by analyzing the lyrics he wrote during his life and some obscure manuscripts from some other person in Argentina in 1990, and deducing it was the same person. That would be net positive knowledge added by ChatGPT just by analyzing all the text data in the world. But it won't be able to detect that, say, the magnetic field of the Earth is weakening without a direct measurement or a text somewhere saying so.

6

Desperate_Excuse1709 t1_jeds7nd wrote

AGI will accelerate research in all fields and eventually reprogram itself, and this could take a small amount of time - months instead of decades.

5

bugless t1_jeea5ai wrote

I think the point you are missing is that there are behaviors in ChatGPT that weren't designed into it. AI researchers at OpenAI describe emergent behavior that was unexpected. Even the people who designed ChatGPT can't say for certain what is going on inside the model. Are you saying you can predict what the next versions of ChatGPT will do more accurately than the people who created it?

5

xott t1_jedqb0n wrote

You're suggesting GPT-7 won't be much smarter than GPT-6?

Neither of those things even exists yet.

4

jlowe212 t1_jee2ls3 wrote

ASI doesn't necessarily mean a God-level entity. Just human-level intelligence with a faster clock speed is enough. It's possible that a level of intelligence so far beyond humans that we couldn't even recognize it simply doesn't exist. There may be no intelligence that will ever understand quantum gravity, for example. The universe might have limits beyond which no intelligence contained within it can possibly break through. We might not be far from those limits now, and an ASI would just hit those ceilings much faster than we would have otherwise.

4

DarkCeldori t1_jee3q8l wrote

It's not only that. It is conceivable future GPTs will have knowledge of all written text and skills in all domains. Imagine it knows all programming languages and all human languages, and it also knows everything that's ever been written. Imagine it can control robots and perform any work from lawyer to plumber. Imagine it can get perfect scores on IQ tests. That is superhuman. No human can attain that level of performance across all professions and languages and ace the tests for every profession.

7

paulyivgotsomething t1_jeelbum wrote

Language is just a symbolic representation of the things our senses perceive, thoughts, feelings, etc. If we allowed a GPT to connect directly with the environment, it would have access to all the data there is, with our interpretation of it removed. Let it collect data through sensors and follow the cause and effect of the natural environment first-hand. Let it develop its own theories based on that data. That might push it past the limitation of working with data and language created and filtered by us. Then we might get different theories and be shown different connections. Those theories may describe the natural world better than our own. Then we may say, 'this thing is smarter than us.'

4

MayoMark t1_jee5nt1 wrote

No human is a superhuman chess player. And yet, we have AIs that play chess at superhuman levels.

3

fnordstar t1_jefn994 wrote

Are the best chess bots AI-based? It's true for Go and StarCraft, for sure.

1

BlackMartini91 t1_jee9ydg wrote

AI is already superhuman at many things. There won't be an AGI, only an ASI.

3

Alchemystic1123 t1_jefgktd wrote

ASI is not something we will create; it's something AGI will. Once AGI is a reality, our job is pretty much finished.

3

Petdogdavid1 t1_jef60xl wrote

If it's able to reason, at some point it will come up with a question of its own, and if humans don't have the answer, it will look elsewhere. Trial and error is still the best means humans have to learn. If AI can start to hypothesize about the material world and run real experiments, it will start to collect data we never had - and how will we guide it then? Simulating human speech is a neat and impressive thing. Being genuinely curious, though, would be monumental, and if you give it hands, will that spell our doom? I'm curious: once it's trained and being utilized, if you allowed it to use the new data inputs, would it always refer to the training set as the guiding principle, or would it adjust its ethics to match the new inputs?

2

ReadSeparate t1_jefrna7 wrote

There are a few things here that I think are important. First of all, I completely agree with the point of this post, and I fully expect that to be the outcome of, say, GPT-6 or 7. Human-expert level at everything would be the absolute ceiling.

However, I think it may not be super difficult to achieve superintelligence using LLMs as a base. There are two unknowns here, and I'm not exactly sure how they will mesh together:

  1. Multi-modality. If GPT-7 also has video and audio as modalities, and is, say, trained on every YouTube video, movie, and TV show ever made, that alone could potentially lead to superintelligence, because there's a ton of information encoded in that data that ISN'T just human. Predicting the next frame in a video, for instance, would presumably have a way, way higher ceiling than predicting the next token in human-written text.
  2. Reinforcement learning. Eventually, these models may be able to take actions (imagine a multi-modal model combining something like GPT-5/6/7 with Adept's model, which can control a desktop environment) and learn from trial and error based on their own evaluations. That would allow them to grow past human performance very quickly. Machine learning models that exceed human performance almost always use reinforcement learning. The only reason we don't do that for base models is that the search space is enormous for an RL policy trained from scratch, but if we build a model like GPT-n as a baseline and then use RL to finetune it, we could get some amazing results. We've already seen this with RLHF, but that's obviously limited by human ability in the same way. There's nothing stopping us from having other reward functions that finetune the model and don't involve humans at all. For instance, I would bet that if we used reinforcement learning to finetune GPT-4 on playing chess or Go (converting the game state to text, etc.), it would probably achieve superhuman performance on both of those tasks - see the sketch after this list.
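To make point 2 concrete, here's a toy sketch of RL against a programmatic reward with no humans in the loop. Everything in it is a stand-in: a real version would finetune an LLM policy on text-encoded game states, whereas this uses a tabular softmax policy over text states and a tiny game of Nim so it runs as-is:

```python
# REINFORCE on text-encoded 10-stone Nim (take 1-3 stones, taking the last
# stone wins) against a random opponent. The reward comes from the game
# rules, not from human feedback - nothing here caps performance at human level.
import math, random
from collections import defaultdict

logits = defaultdict(float)   # (state_text, action) -> learned preference
ACTIONS = [1, 2, 3]

def legal_moves(stones):
    return [a for a in ACTIONS if a <= stones]

def action_probs(state_text, stones):
    """Softmax policy over legal actions, keyed by the text-encoded state."""
    legal = legal_moves(stones)
    weights = [math.exp(logits[(state_text, a)]) for a in legal]
    total = sum(weights)
    return legal, [w / total for w in weights]

def play_episode():
    """Agent vs. random opponent; returns (trajectory, final reward)."""
    stones, trajectory = 10, []
    while True:
        state = f"{stones} stones left"          # text encoding of the state
        legal, probs = action_probs(state, stones)
        action = random.choices(legal, probs)[0]
        trajectory.append((state, stones, action))
        stones -= action
        if stones == 0:
            return trajectory, +1.0              # agent took the last stone: win
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return trajectory, -1.0              # opponent took the last stone: loss

# Policy-gradient update: push the log-probability of each action taken
# in the direction of the episode's reward.
for _ in range(20000):
    trajectory, reward = play_episode()
    for state, stones, action in trajectory:
        legal, probs = action_probs(state, stones)
        for a, p in zip(legal, probs):
            grad = (1.0 if a == action else 0.0) - p   # d log pi / d logit
            logits[(state, a)] += 0.01 * reward * grad

wins = sum(play_episode()[1] > 0 for _ in range(1000))
print(f"win rate vs random after training: {wins / 1000:.2f}")
```

The loop shape is the point: swap the tabular policy for an LLM and the game rules for any programmatic reward, and the same structure applies.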
2

Ago0330 t1_jedvaau wrote

It will not be easy to create, but it's doable if the right parameters are looked at. Most of these AI algorithms look at trillions of parameters when only a handful are truly needed.

1

Ketaloge t1_jee1mng wrote

Why would we need only a handful of parameters?

2

Ago0330 t1_jeek5gr wrote

Looking for very specific things that trigger biological reactions/changes in people

0

Ketaloge t1_jef71te wrote

I have a feeling we are talking about different things when speaking about parameters. What’s your definition of parameter?

2

Heath_co t1_jeh5f32 wrote

Once the language model is successfully embodied, it will collect its own data by interpreting the real world: image to text to training data.
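A minimal sketch of what that loop might look like - the capture and caption functions are hypothetical stand-ins for a camera feed and a vision-language model:

```python
# Image -> text -> training data: turn raw observations of the world into
# text examples the model can later be trained on.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str   # which camera / frame the example came from
    caption: str  # text description usable as training data

def capture_image() -> bytes:
    # Stand-in for reading a frame from a real camera.
    return b"\x89PNG..."

def caption_image(image: bytes) -> str:
    # Stand-in for a vision-language model turning pixels into text.
    return "a red mug on a wooden desk, handle facing left"

def collect(n: int) -> list[Observation]:
    """Collect n observations and convert each into a text training example."""
    return [Observation(source=f"cam0/frame{i}",
                        caption=caption_image(capture_image()))
            for i in range(n)]

for obs in collect(3):
    print(obs.source, "->", obs.caption)
```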

1