ReadSeparate

ReadSeparate t1_jefrna7 wrote

There are a few things here that I think are important. First of all, I completely agree with the point of this post, and I fully expect that to be the outcome of GPT-6 or 7, let's say. Human expert level at everything would be the absolute best case.

However, I think it may not be super difficult to achieve superintelligence using LLMs as a base. There are two unknowns here, and I'm not exactly sure how they will mesh together:

  1. Multi-modality. If GPT-7 also has video and audio as modalities, and is, say, trained on every YouTube video, movie, and TV show ever made, that alone could potentially lead to superintelligence, because there's a ton of information encoded in that data that ISN'T just human. Predicting the next frame in a video, for instance, would presumably have a way, way higher ceiling than predicting the next token in human-written text.
  2. Reinforcement learning. Eventually, these models may be able to take actions (imagine a multi-modal model combining something like GPT-5/6/7 with Adept's model, which can control a desktop environment) and learn from trial and error based on their own evaluations. That would allow them to grow past human performance very quickly. Machine learning models that exceed human performance almost always use reinforcement learning. The only reason we don't do that for base models is that the search space is enormous for an RL policy trained from scratch, but if we build a model like GPT-n as a baseline and then use RL to finetune it, we could get some amazing results. We've already seen this with RLHF, but obviously that's limited by human ability in the same way. There's nothing stopping us from having other reward functions used to finetune the model that don't involve humans at all. For instance, I would bet that if we used reinforcement learning to finetune GPT-4 on playing chess or Go (converting the game state to text, etc.), it would probably achieve superhuman performance on both of those tasks.
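To make the second point concrete, here's a purely illustrative sketch of that kind of loop: sample moves from the model's logits, play a game with a programmatic win/loss reward (no humans involved), and do a policy-gradient update. A tiny network and tic-tac-toe stand in for the pretrained LLM and chess/Go; nothing here reflects how an actual GPT-n finetune would be implemented.

```python
# Toy sketch of reward-driven finetuning with no human in the loop.
# A tiny policy network is a hypothetical stand-in for "GPT-n as a game policy";
# the reward is just the game outcome, so no human labels are needed.
import random
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Stand-in for a pretrained model restricted to a 9-move vocabulary."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 9))

    def forward(self, board):                                   # board: tensor of shape (9,)
        logits = self.net(board)
        return logits.masked_fill(board != 0, float("-inf"))    # mask illegal moves

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(2000):
    board, log_probs, outcome = [0] * 9, [], 0
    for turn in range(9):
        if turn % 2 == 0:                                       # policy plays as +1
            logits = policy(torch.tensor(board, dtype=torch.float))
            dist = torch.distributions.Categorical(logits=logits)
            move = dist.sample()
            log_probs.append(dist.log_prob(move))
            board[move.item()] = 1
        else:                                                   # random opponent plays as -1
            board[random.choice([i for i, v in enumerate(board) if v == 0])] = -1
        outcome = winner(board)
        if outcome != 0:
            break
    reward = float(outcome)                                     # +1 win, -1 loss, 0 draw
    # REINFORCE: push up log-probs of moves from won games, down from lost ones.
    loss = -reward * torch.stack(log_probs).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping the toy reward for a chess or Go engine's game result, with board states serialized as text, is the same shape of loop, just at a vastly larger scale.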
2

ReadSeparate t1_jdi9wic wrote

What if some of the latent patterns in the training data that it's recreating are the ones that underlie creativity, critique, and theory of mind? Why are people so afraid of the idea that both of these things can be true? It's just re-creating patterns from its training data, and an emergent property of doing that at scale is a form of real intelligence, because intelligence is where those patterns originated in the first place, and modeling it is the best way to reproduce them.

3

ReadSeparate t1_jcsi6oz wrote

Agreed. The proper way to conceive of this, in my opinion, is to view it purely through the lens of value maximization. If we have a hypothetical set of values, we can come up with some rough ideas of what an ASI might do if it possessed such values. The only other factor is capabilities - which we can assume is something along the lines of the ability to maximize/minimize any set of constraints, whether that be values, resources, time, number of steps, computation, etc. in the most efficient way allowable within the laws of physics. That pretty much takes anything except values out of the equation, since the ASI's capabilities, we assume, are "anything, as efficiently as possible."

It's impossible to speculate what such a mind would do, because we don't know what its values would be. If its values included the well-being of humans, it could do a bunch of different things with that. It could merge us all into its mind or it could leave Earth and leave us be - it completely depends on what its other values are. Does it value human autonomy? Does it value humanity, but less than some other thing? If so, it might completely wipe us out despite caring about us. For instance, if it values maximizing compute power over humans, but still values humans, it would turn all matter in the galaxy or universe (whatever it has the physical capabilities to access) into computronium, and that would include the matter that makes up our bodies, even if that matter is a completely insignificant fraction of all matter it has the ability to turn into computronium.

I don't think any of these questions are answerable. We just don't know what it's going to value. I actually think it's somewhat feasible to predict ROUGHLY what it would do IF we had a full list of its values, but beyond that it's impossible.

1

ReadSeparate t1_j8fb4cr wrote

One can easily imagine a generalist LLM outputting an action token which represents prompting the specialized LLM, which then gets routed to the specialized LLM, then the response is formatted and put into context by the generalist.
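As a rough sketch of how that routing could look, assuming a made-up `<call:expert>...</call>` action-token format and stand-in generate functions rather than any real model's API:

```python
# Minimal sketch of the routing idea: the generalist emits an action token,
# a thin router detects it, forwards the inner prompt to a specialist model,
# and splices the formatted answer back into the generalist's context.
import re
from typing import Callable

ACTION = re.compile(r"<call:(?P<expert>\w+)>(?P<prompt>.*?)</call>", re.S)

def run_with_routing(
    generalist: Callable[[str], str],          # context -> next chunk of text
    experts: dict[str, Callable[[str], str]],  # name -> specialized model
    context: str,
    max_hops: int = 4,
) -> str:
    for _ in range(max_hops):
        output = generalist(context)
        match = ACTION.search(output)
        if match is None:                      # no action token: we're done
            return context + output
        answer = experts[match.group("expert")](match.group("prompt").strip())
        # Replace the action token with a formatted result block in context.
        context += output[:match.start()] + f"[{match.group('expert')} says: {answer}]"
    return context

# Toy usage with stand-in "models":
generalist = lambda ctx: "<call:math>2 + 2</call>" if "math" not in ctx else " So the answer is 4."
experts = {"math": lambda prompt: str(eval(prompt))}   # placeholder specialist
print(run_with_routing(generalist, experts, "User: what is 2 + 2?"))
```

The generalist never needs to know how the specialist works internally; it only needs to have learned when to emit the action token.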

1

ReadSeparate t1_iyz1wbt wrote

Awesome comment, thank you, I'm gunna check all of these out. For the external database thing, to clarify, I was wondering if part of the model's training could be learning which information to store so that it can be remembered later. For example, in a conversation with someone, their name could be stored in a database and retrieved later when the model needs to reference it, even if it's no longer in the context window.
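Something like the sketch below is the storage side of what I mean; the key naming scheme and the store/retrieve calls are just placeholders, and the hard part, deciding *when* to call them, is exactly what would have to be learned during training.

```python
# Rough sketch of the "learned external memory" idea: a plain key-value store
# the model could be trained to call when something seems worth remembering.
# The schema and key format here are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory (key TEXT PRIMARY KEY, value TEXT)")

def store(key: str, value: str) -> None:
    """What a learned 'write to memory' action would ultimately execute."""
    conn.execute("INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value))
    conn.commit()

def retrieve(key: str) -> str | None:
    """What a learned 'read from memory' action would ultimately execute."""
    row = conn.execute("SELECT value FROM memory WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

# During a conversation months ago:
store("user:42:name", "Alice")

# Much later, long after the name has left the context window, the model
# emits a read action and the result is spliced back into its prompt:
print(retrieve("user:42:name"))   # -> Alice
```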

1

ReadSeparate t1_iywxpuh wrote

I wonder how feasible it is to use an external database to store/retrieve important information to achieve coherency.

If it’s not, then I guess we’ll have to wait for something to replace Transformers. Perhaps there’s a self-attention mechanism out there which runs in constant time.

3

ReadSeparate t1_iyo883j wrote

I do agree with this comment. It’s feasible that long term memory isn’t required for AGI (though I think it probably is) or that hacks like reading/writing to a database will be able to simulate long term memory.

I think it may take longer than 2025 to replace transformers, though. They've been around since 2017 and we haven't seen any really promising candidates yet.

I can definitely see a scenario where GPT-5 or 6 has prompts built into its training data which are designed to teach it to utilize database reads/writes.

Imagine it greets you by name after seeing your name only once, six months ago. It could have a database-read token with sub-input tokens that fetch your name from a database based on some sort of identifier.

It could probably get really good at doing this too if it’s actually in the training data.
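Purely to illustrate, a read token like the one below could be resolved by a thin wrapper before the reply reaches the user; the `<db_read ...>` syntax and the memory contents are invented for this example and aren't from any real model or dataset.

```python
# Illustrative only: resolve hypothetical database-read tokens in model output.
import re

memory = {("user:42", "name"): "Alice"}   # populated by earlier write tokens

READ_TOKEN = re.compile(r"<db_read id=(?P<id>\S+) field=(?P<field>\S+)>")

def resolve_reads(model_output: str) -> str:
    """Replace every read token with the value fetched from external memory."""
    def lookup(m: re.Match) -> str:
        return memory.get((m.group("id"), m.group("field")), "(unknown)")
    return READ_TOKEN.sub(lookup, model_output)

# What the model might emit, six months after last seeing the name:
draft = "Hello <db_read id=user:42 field=name>, welcome back!"
print(resolve_reads(draft))   # -> Hello Alice, welcome back!
```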

Eventually, I could see the model using its coding knowledge to design the database/prompting system on its own.

2

ReadSeparate t1_iynt062 wrote

They can’t just increase it. Self-attention’s time complexity is O(n^2) in the context length, which means the total compute grows quadratically as tokens are added: double the context and the attention cost roughly quadruples.

This is an architectural constraint of transformers. We’ll either need a better algorithm than transformers, or a way to encode/decode important information to, say, a database and insert it back into the prompt when it’s required.
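A quick back-of-the-envelope illustration of what that quadratic growth means in practice:

```python
# Every token attends to every other token, so the attention score matrix
# has ~n^2 entries per head per layer.
for n in (1_024, 2_048, 4_096, 8_192):
    print(f"context {n:>5} tokens -> {n * n:>12,} attention scores per head/layer")
# Doubling the context quadruples the work; it doesn't grow exponentially,
# but it still gets expensive fast.
```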

9

ReadSeparate t1_iw4nifp wrote

I’m not saying this group of people is going to be permanently unemployed; I’m saying they’re not going to be making art for money. Many of them may facilitate the process somehow, like prompt engineering, etc., but that’s very different and FAR less time-consuming than actually creating art.

7

ReadSeparate t1_iw3ivdm wrote

Let me clarify: I’m not saying it will go away as a hobby or a passion, just that the number of people doing it for money will be a TINY fraction of the number doing it for money today.

Think of the number of horses being used for transportation today vs the number being used for transportation before the invention of cars. Horses for transportation are irrelevant today compared to back then.

12

ReadSeparate t1_iw0xb6p wrote

You’re getting downvoted but you’re speaking straight facts tbh. Human art is gunna be irrelevant in less than 5 years, aside from people who want art specifically made by humans. The kinds of things people are bitching about don’t matter; in just a few short years these models will advance so much they won’t need any new training data anyway. They’ll be able to get what they need, and these artists will still be out of a job.

That said, my heart goes out to the artists losing their livelihoods on the altar of profit and technological progress; we ought to have a UBI/unemployment program for automation job loss.

30

ReadSeparate t1_iv6p95l wrote

Are we talking about a world in which there are multiple ASIs existing at the same time? In that case you could be right; I have no idea how to model such a world, though. I have no idea what their systems would look like. Would they compete? Would they cooperate? Would they merge? Would game theory still apply to them in the same way? I have no answers for any of those.

I was under the assumption that we were talking about a singular ASI with complete control over everything. I don’t know why the ASI, or whoever is controlling it, would allow any other ASIs to come into existence.

1

ReadSeparate t1_iv6blo0 wrote

Why would it need symbols to do that, though? It would just do it directly. The reason humans use money is that we don’t know the direct exchange rate from iPhones to chickens.

Additionally, there would not be market forces in such a system, so nothing would have a price, just an inherent value based on scarcity/utility. Those values wouldn’t change; they’d just be fundamental constants, more or less.

1

ReadSeparate t1_iv3zj0j wrote

I’m not assuming it’ll be sentient, I’m just saying an Oracle ASI is equally as dangerous as one with agency. It MIGHT be sentient. Or it might NOT be sentient but still dangerous, e.g. the paperclip maximizer scenario.

> Okay then the owners will probably use this Non-sentient tech to take care of themselves

Like just AGI, you mean? Yeah, I agree with that of course. But for ASI, again, that seems short-sighted. If Google makes human-level AGI that’s just as smart as, say, Einstein, yeah, of course they’ll use it to get richer. But if they create something that makes Einstein look like an ant, they’d be foolish to use it in such a way.

1

ReadSeparate t1_iv3xw7n wrote

Even if an ASI is an oracle, alignment is still just as much of an issue. It could tell them to do something that sounds completely harmless even to the smartest humans and even to non-ASI AGIs, but that in reality lets it out of the box.

> Unless the ASI is a genie that can turn everything around in a split second, they’re most likely going to want to take care of themselves first and everyone else right after that.

What do you mean? That's exactly what ASI is. We're talking about something orders of magnitude more intelligent than Albert Einstein here. A machine like that will be capable of recursively improving its own intelligence at an insane rate and will eventually know how to achieve any goal compatible with the laws of physics in the most efficient way possible, for any possible set of constraints. That is basically, by definition, a magical genie that can do anything in a split second.

Every point you're making makes sense IF you're talking about just human-level AGI, but it makes no sense for ASI.

1

ReadSeparate t1_iv3hlhu wrote

I don’t think the specific task you ask the ASI to do makes any difference with regard to the control problem. Whether they ask it to make money or to upload all of our minds to the hive mind and build a Dyson sphere around the Sun, I don’t see it mattering if it’s misaligned. If it’s misaligned, it’s misaligned. You could ask it simply to say hello and it could still cause issues.

Why would they want to recoup their investment? Money doesn’t mean anything in this scenario. ASI is the absolute pinnacle of the universe and money is just a social construct invented by some upright walking apes. It’s like chimps worrying about bananas when they’ve stumbled upon modern food supply chains.

1

ReadSeparate t1_iv38nua wrote

I still think that's absurd. We're not talking about human-level AGI here; we're talking about ASI. The moment ASI comes online is the moment money loses all of its value. If they do anything except use it to transition humanity into the next thing we're going to evolve into, I'll think they're short-sighted.

2