Submitted by Dan60093 t3_10came3 in singularity

If ChatGPT can generate code from simple prompts, then what's stopping OpenAI from setting up a positive coding feedback loop for it to work on its own fork of itself?

I understand that the code it generates is usually pretty simple and not always correct, but I feel like it's correct often enough that it could catch its own errors with additional "check this code before implementing" prompts from itself. I also understand that it's probably quite a bit more complicated than I'm realizing, but if even OpenAI's own team is using GPT as a coding assistant then surely there has to be a way to cut out the middleman with some finagling?
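
Roughly, I'm picturing something like this. It's just a sketch, and `ask_llm` is a made-up stand-in for whatever model API you'd actually call, not a real OpenAI function:

```python
# Rough sketch of the generate -> self-check -> fix loop described above.
# ask_llm is a hypothetical helper, not a real OpenAI function.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for whatever model API you call")

def self_reviewing_codegen(task: str, rounds: int = 3) -> str:
    code = ask_llm(f"Write Python code that does the following:\n{task}")
    for _ in range(rounds):
        review = ask_llm(f"Check this code before implementing. List any bugs:\n{code}")
        code = ask_llm(f"Rewrite the code, fixing these issues:\n{review}\n\nCode:\n{code}")
    return code
```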

Beyond that, even if it couldn't do what I'm describing there must surely be some perfectly worded prompt out there that could get it to analyze its own hardware/software and come up with a running list of improvements that could be made and ways to go about doing that.

This is all assuming only ChatGPT's capabilities, too - if even ChatGPT could maybe probably do it, why on Earth wouldn't that be in place with GPT-4? They obviously have a working demo of it that's blowing investors' little slimy billionaire minds out of the water enough to secure funding without even having made any profit from ChatGPT, so it must be operational enough for simple code revisions and improvements.

I'll come right out and say it: why isn't ChatGPT the seed for a proto-AGI?

9

Comments


2bdb2 t1_j4f1d7p wrote

> If ChatGPT can generate code from simple prompts, then what's stopping OpenAI from setting up a positive coding feedback loop for it to work on its own fork of itself?
>
> I'll come right out and say it: why isn't ChatGPT the seed for a proto-AGI?

Being generous, the code written by ChatGPT is at best at the level of a mediocre first year IT student. It can write simple boilerplate based on solutions it's already seen, but has limited ability to actually solve complex problems.

This is still an incredibly impressive achievement and it blows my mind every time I see it in action. But it's about as likely to make the next major breakthrough in AI research as our imaginary mediocre first year IT student is.

It's hard not to imagine a point where AI is able to improve itself faster than humans can, thus essentially writing the next version of itself. But we're not there yet.

25

__ingeniare__ t1_j4fry4r wrote

Even if it could code better than humans (like AlphaCode, which outperforms most humans in coding competitions), that's not the hard part.

The hard part is the science/engineering aspect of machine learning; programming is just the implementation of the ideas once they are already thought out. Actually coming up with useful improvements is significantly harder and requires a thorough grasp of the mathematical underpinnings of ML. ChatGPT is nowhere near capable of making useful contributions to the machine learning research community (or in other words, capable of writing an ML paper), and therefore it is incapable of improving its own software. AI will most likely reach that level at some point, however, possibly in the near future.

7

banuk_sickness_eater t1_j4hbc7w wrote

ChatGPT can't code better than a first-year, but DeepMind's AlphaCode can certainly code better than most median-quality developers.

So let's rephrase the question to focus on AlphaCode instead of ChatGPT.

How does that change your response, if at all?

7

2bdb2 t1_j4j3pep wrote

> DeepMind's AlphaCode can certainly code better than most median-quality developers.

AlphaCode does well at solving quiz questions. From my own experience with those types of quizzes, they're mostly just maths questions solved with code.

Doing well at those types of questions has very little bearing on most real world software engineering.

Now to be fair, machine learning is a lot more math focused than typical software engineering. But if we're going with the assertion that "AlphaCode can certainly code better than most median-quality developers" based on doing well at quiz questions, then I'm going to disagree.

> So let's rephrase the question to focus on AlphaCode instead of ChatGPT.
>
> How does that change your response, if at all?

Not really.

Don't get me wrong, AlphaCode is still mind blowing. I really don't want to understate how impressive it is. But I don't think it's at the level of being able to implement itself. Yet.

(Disclaimer: I am not an AI researcher, so take my opinion with a grain of salt).

2

Nill444 t1_j4ff4gi wrote

>It can write simple boilerplate based on solutions it's already seen,

It can solve Advent of Code problems...

−1

manOnPavementWaving t1_j4fi6f0 wrote

Mediocre first year IT students can do that. But no way it's writing an efficient flash attention kernel without having seen one before.

3

2bdb2 t1_j4fw5ty wrote

>It can solve Advent of Code problems...

Which is a collection of relatively simple problems, commonly solved by first year students, where the solutions are almost certainly in the training data set....

2

Nill444 t1_j4gfcfx wrote

Advent of Code 2022 was still ongoing when people were using ChatGPT to solve the problems (although not all of them), so it couldn't have been in the data set.

1

2bdb2 t1_j4j84pp wrote

The specific questions may not have been in the data set, but it'll have seen the same types of questions before.

1

LambdaAU t1_j4f4ja6 wrote

The code isn't good enough nor does the AI have a good enough understanding to implement the code itself. It's all quite basic code and often has errors.

The OpenAI team might be using ChatGPT to help design the next iterations, but it's not exactly possible to "cut out the middleman" at the moment. All the games/programs that have been made using ChatGPT have only been possible because people can actually check if the code is good, see if the program works, and write follow-up prompts if necessary. It's not within ChatGPT's capabilities to actually see if its code works. It may spit out a working answer sometimes, but it can't actually test whether it works itself, and as such it can't improve without the aid of a human.
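
To make the "middleman" concrete, this is roughly the step a human does today that ChatGPT can't do on its own (a sketch, assuming the generated code is a plain Python script):

```python
# Sketch of the human-in-the-loop step: actually run the generated script
# and capture the error output so it can be fed back as a follow-up prompt.
import subprocess
import tempfile

def run_candidate(code: str) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=10
    )
    return result.stderr  # empty string means it at least ran without crashing
```

And even this only catches crashes; checking that the program actually does what you wanted still takes a person looking at it.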

5

Scarlet_pot2 t1_j4fl1q7 wrote

Also, ChatGPT doesn't generate the full code because of memory limits. People that made things using ChatGPT had to go function by function to get the full program. You can't just go "generate Flappy Bird"; it's more like "generate a bird PNG", then "generate a flapping animation", "generate the obstacles", etc.

1

LambdaAU t1_j4fntq2 wrote

Yup, exactly. Whilst ChatGPT's ability to code is impressive, it's definitely not the "do everything" button some people think it is.

1

Akimbo333 t1_j4fc2ed wrote

Is there a better GPT for code?

2

Scarlet_pot2 t1_j4fkqf3 wrote

Its memory isn't long enough to write a full program, let alone a full model, without at least some help. And it's not able to create new concepts and make discoveries; it can only build what it has been trained on.

We still have some breakthroughs needed before AGI.

2

Lawjarp2 t1_j4fatix wrote

Complex understanding of and reasoning about the world is necessary to do so. ChatGPT is still just going to spit out what it feels will be the most probable code and not something insightful. Well, it can be insightful occasionally, but that's an emergent property that is weak at best. With a big enough dataset and enough parameters it could identify relationships and become complex enough to do recursive self-improvement, but it would take a lot of money to do that.
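
"Most probable code" can be taken pretty literally: stripped down to its core, decoding is just repeatedly picking a likely next token. A toy sketch with GPT-2 standing in for ChatGPT (assumption: the same principle, vastly different scale):

```python
# Greedy next-token decoding with GPT-2 as a small stand-in for ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer("def fibonacci(n):", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits          # scores for every vocabulary token
        next_id = logits[0, -1].argmax()    # pick the single most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))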

Somebody posted a while ago how LLMs are animal intelligence, like us, and not true intelligence. That seems true; we haven't truly cracked intelligence. The thing is, we may not need to. We will eventually be able to train bigger, better, and more complex models and get to human-level intelligence.

1

No_Ninja3309_NoNoYes t1_j4fubwf wrote

A software developer needs to understand the functional designs, technical designs, architecture, and test plans that are relevant. Being able to produce functions is not enough. In a machine learning context, knowledge of the concepts and mathematics is required. ChatGPT is a level higher than a Markov chain. It has a sense of which words and groups of words go together. But for it, words are just lists of numbers. So in fact it evaluates nested functions with vectors as input. For a much smaller network, you can do the same thing in Excel.
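
As a toy version of "nested functions with vectors as input" (random weights here, just to show the structure; the real model does the same kind of computation with billions of learned numbers):

```python
# Two "layers": a linear map, a nonlinearity, then another linear map.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # first layer's weights
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)   # second layer's weights

def tiny_network(x):
    h = np.maximum(0, W1 @ x + b1)   # inner function: ReLU of a linear map
    return W2 @ h + b2               # outer function: another linear map

print(tiny_network(rng.normal(size=4)))
```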

1

Information1324 t1_j4fywg9 wrote

A monkey using a typewriter could eventually recreate the complete works of Shakespeare just by pressing random keys if you let him go for long enough.

Even if you devised a machine that could reprogram itself with some additional functionality, however minute, then run a diagnostic to determine if there was some increase in capability, and repeat that process forever until the intelligence parameter is maximized, it might take millions of years and a shit ton of computing resources to get anywhere meaningful with today's AI technology. For that approach to work, we would first need to meet a certain threshold of general intelligence, so that the self-improvements are meaningful enough to lead anywhere interesting in a timely manner.
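
The loop itself is trivial to write down; the hard parts are the mutation step and the "diagnostic". A caricature (the target-string scoring here is obviously made up, which is exactly the problem: there is no simple "intelligence parameter" to score against):

```python
# Mutate-and-test caricature of blind self-improvement.
import random

TARGET = "print('hello world')"

def mutate(program: str) -> str:
    # flip one random character: the software equivalent of the typewriter monkey
    i = random.randrange(len(program))
    return program[:i] + chr(random.randrange(32, 127)) + program[i + 1:]

def diagnostic(program: str) -> float:
    # stand-in scoring function; defining this for "intelligence" is the hard part
    return sum(a == b for a, b in zip(program, TARGET))

best = "x" * len(TARGET)
for _ in range(100_000):
    candidate = mutate(best)
    if diagnostic(candidate) > diagnostic(best):
        best = candidate
print(best)
```

Even this toy only converges because the made-up diagnostic rewards partial progress; without such a gradient, you really are back to monkeys on typewriters.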

1

the-powl t1_j4hzjvm wrote

What's being ignored in your idea is that ChatGPT itself doesn't consist of code. It's a large artificial neural network, trained on huge amounts of data. It's not an algorithm you can read and edit. The solution to making it better is not to throw more coding power at it. Instead you need better concepts, more/better training data, huge amounts of computing time, and more human feedback. You can't just take a "chunk of code" out of the network and "improve it".
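
One way to see it, using GPT-2 small as a stand-in since ChatGPT's weights aren't public (assumption: the structure of the argument carries over):

```python
# The "program" is mostly weight tensors, not editable source code.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # roughly 124 million for GPT-2 small

# There is no function in here to refactor; "improving" the model means
# changing these numbers through more training, not editing a chunk of code.
for name, p in list(model.named_parameters())[:3]:
    print(name, tuple(p.shape))
```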

1