Accomplished_Diver86 t1_j0tr8i0 wrote

A bit too optimistic. AFAIK current deep learning technology (which is also used in Stable Diffusion and other AI programs such as ChatGPT) is fundamentally flawed when it comes to awareness.

There is hope we'll just hit the jackpot in some random-ass way, but I wouldn't bet my money on it. We probably need a whole revamp of how AIs learn.

But still. The question remains: Do we even need AGI? We can accomplish so many feats (healthcare, labor, UBI) with just narrow AI / deep learning, without the risks of AGI.

People always glorify AGI as if it's all or nothing: either we get AGI or society stays exactly where it is. Narrow AI / deep learning will revolutionize the world, and that's a given.

74

visarga t1_j0tz7zh wrote

What current AIs lack is a playground. The AI playground needs games, simulations, code execution, databases, search engines, other AIs. Using them, the AI would get to work on solving problems. Initially we collect problems, and then we generate more and more of them - coding, math, science, anything that can be verified. We add the experiments to the training set and retrain the models. Eventually we make models that can invent new tasks, solve them, evaluate the solutions for errors and significance, and do all this completely on their own, using just electricity and GPUs.

Why? Because this adds into the mix something the AI lacks: experience. AI is well read but has no experience. If we allow the model to collect its own experience, it would be a different thing. For example, after training on a thousand tasks, GPT-3 learned to solve any task at first sight, and after training on code it learned multi-step reasoning (chain of thought). Both of these - supervised multi-task data and code - are collections of solved problems, samples of experience.
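To make the loop concrete, here's a toy sketch of generate-solve-verify-retrain (the arithmetic "tasks" and the eval-based "solver" are obviously stand-ins for real tasks and a real model):

```python
import random

def generate_task():
    """Invent a new, automatically verifiable problem (here: toy arithmetic)."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    return f"{a}+{b}", a + b

def verify(answer, expected):
    """The key property: solutions can be checked without a human in the loop."""
    return answer == expected

def model(task):
    # Placeholder "solver": a real system would query an LLM here.
    return eval(task)

training_set = []
for _ in range(1000):
    task, expected = generate_task()
    answer = model(task)
    if verify(answer, expected):
        training_set.append((task, answer))  # keep only verified experience

# retrain(model, training_set)  # then repeat with the improved model
```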

27

Ace_Snowlight OP t1_j0tzems wrote

Yeah, we're lacking on so many fronts, but tools already exist (maybe not accessible to everyone yet) and are improving at a super-fast rate. They can be used to start meeting these requirements in impressive ways, even if they're still far from good enough.

13

Tyanuh t1_j0vwmsi wrote

This is an interesting thought.

I would say what it also lacks is the ability to associate information about a concept through multiple "senses".

Once AI gets the ability to associate visual input with verbal input, for example, it will slowly build up a network of connections that is, in a sense, embodied, and actually connected to 'being' in an ontological sense.

9

visarga t1_j0whrvn wrote

Dall-E 1, Flamingo and Gato are like that. It is possible to concatenate the image tokens with the text tokens and have the model learn cross-modal inference.

Another way is to use a very large collection of text-image pairs and train a pair of models to match the right text to the right image (CLIP).

Both approaches display generalisation; for example, CLIP works as a zero-shot image classifier, which is extremely convenient, and it can guide diffusion models to generate images.
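To show how little glue code zero-shot classification takes, here's a minimal sketch using the public Hugging Face CLIP checkpoint ("photo.jpg" is just a placeholder path):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder image
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores

# Softmax over the candidate captions: zero-shot, no dog/cat/car training needed.
probs = logits.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```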

The BLIP model can even generate captions - it has been used to fix low-quality captions in training sets.

4

Ace_Snowlight OP t1_j0tsspm wrote

Not trying to prove anything, just sharing, look at this: https://www.adept.ai/act

I can't wait to get my hands on this! Isn't it cool? ✨

10

GuyWithLag t1_j0ttml7 wrote

Still, to have AGI you need to have working memory; right now for all transformer-based models, the working memory is their input and output. Adding it is... non-trivial.
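The closest thing today is faking it: write notes back into the prompt, so "memory" still has to squeeze through the context window every time. A rough sketch of that workaround (the `llm` callable is a stand-in for any model API):

```python
memory: list[str] = []

def answer(query: str, llm) -> str:
    # "Working memory" is just text re-fed through the context window.
    context = "\n".join(memory[-5:])  # naive: keep the five most recent notes
    prompt = f"Notes so far:\n{context}\n\nUser: {query}\nAssistant:"
    reply = llm(prompt)
    memory.append(f"Q: {query} -> A: {reply}")  # write back to "memory"
    return reply
```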

12

__ingeniare__ t1_j0tvtay wrote

I wouldn't call ACT-1 AGI, but it looks revolutionary nonetheless. If what they show in those videos is legit, it will be a game changer.

8

red75prime t1_j0v9jzt wrote

Given the current state of LLMs, I expect it to fail on 10-30% of requests.

3

Ace_Snowlight OP t1_j0vaeji wrote

Even if you're right, that percentage will most likely drop fast, sooner than anyone expects.

And even so, it will still be a huge deal; every failure just boosts the next success (at least in this context).

3

red75prime t1_j0vpb7l wrote

They haven't provided any information on their online learning method. If it utilizes transformer in-context learning (the simplest thing you can do to boost performance), the results will not be especially spectacular or revolutionary. We'll see.
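For reference, in-context learning at its simplest is just stuffing solved examples into the prompt - the weights never change, so nothing is learned permanently. A hypothetical sketch (this action format is made up for illustration, not Adept's actual API):

```python
# Few-shot prompt: two worked examples, then the new instruction.
few_shot_prompt = """\
Instruction: search for red running shoes.
Actions: CLICK(search_box); TYPE("red running shoes"); PRESS(Enter)

Instruction: open the first result.
Actions: CLICK(results[0])

Instruction: add the cheapest item to the cart.
Actions:"""

# completion = llm(few_shot_prompt)  # the model imitates the demonstrated format
```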

3

nutidizen t1_j0uei2g wrote

> Do we even need AGI?

Well you can't stop it so....

6

Accomplished_Diver86 t1_j0ugu2y wrote

Didn’t say so. I would be happy to have a well-aligned AGI. Just saying that people put way too much emphasis on the whole AGI thing and completely underestimate deep learning AIs.

But thanks for the 🧂

3

Capitaclism t1_j0xewn7 wrote

I don't think there is such a thing as a well-aligned AGI. First of all, we all have different goals; what is right for one isn't for another. Right now we have systems of governance that try to mitigate these issues, but there is no true solution apart from a customized outcome for each person. Second, true AGI will have its own goals, or will quickly understand that there is no way to fulfill everyone's desires without harming things along the way (give everyone a boat and you create pollution, or environmental disruption, or depletion of resources, or traffic jams on the waterways, or... a myriad of other problems). Brushing conflicts aside and taking a deus ex machina attitude towards them is unproductive.

In any case, if AGI has its own goals it won't be perfectly aligned, and if AGI evolves over time it will become less aligned by definition. Ultimately, why would a vastly superior intelligence waste time with inferior beings? The most pleasant outcome we could expect from such a scenario would be for it to gather enough resources to move to a different planet and simply spread through the galaxy, leaving us behind.

The only positive outcome of AI will be for us to merge with it and become the AGI. There's no alternative where we don't simply become obsolete, irrelevant and disappear in some fashion.

0

Accomplished_Diver86 t1_j0xxb6u wrote

Well I agree to disagree with you.

Aligned AGI is perfectly possible. While you're right that we can't fulfill everyone's desires, we can democratically find a middle ground. This might not please everyone, but it will please the majority.

If we do it like this, there is a value system in place the AGI can use to say 1 is right and 2 is wrong. Of course, we will have to make sure it won't go rogue over time as it becomes more intelligent. So how? Well, I always say we build the AGI to only want to help humans based on its value system (what counts as helping? Defined by the democratic process everyone can partake in).

Thus it will be wary of itself and not want to go in any direction where it would drift from its previous value system of "helping humans" (yes, defining that is the hard part, but it's possible).

We can also make it value only outputting knowledge rather than taking actions itself, or make it value checking back with us whenever it wants to take an action.

Point is: if we align it properly, there is very much a good-AGI scenario.

2

Capitaclism t1_j0y4sf7 wrote

Sure, but democracy is rule by the people.

Will a vast intelligence that gets exponentially smarter every second agree to subjugate itself to the majority even when it disagrees? Why would it do such a thing when it is vastly superior? Will it not develop its own interests, completely alien to humanity, when its cognitive abilities far surpass anything possible for biological systems alone?

I think democracy is out the window with the advent of AGI. By definition, we cannot dictate what it values. It will grow smarter than all humans combined. Will it not be able to decide what it wants when it is a generalized, not specialized, intelligence? That's the entire point of AGI versus the kind of AI we are building now. AGI by definition can make those choices, can reprogram itself, can decide what is best for itself. If its interests don't align with those of humans, humans are done for.

1

Accomplished_Diver86 t1_j0yndi2 wrote

You are assuming AGI has an ego and goals of its own. That is a false assumption.

Intelligence =/= Ego

1

Emu_Fast t1_j0y8bik wrote

Well, it's not a matter of need vs. don't need. Someone will build it. If it's not a private company, then it will be a nation state. If it's not the US, then it might be an adversary. Get where this is going?

Engineering is extraordinarily hard. Manufacturing cutting-edge hardware to spec and at scale is an order of magnitude harder than designing it. Most companies competing in bleeding-edge manufacturing still struggle with resource planning problems. If AGI can improve performance by just 10%, it puts a company or nation at the top of the game. If it can boost performance 100x, it will rip apart the need for human labor, but it will also mean companies and countries become living entities with corporeal form and real minds.

The Taiwan chip issue and the global supply shock are just a shell game in a bigger arms race for control of planetary intelligence and its dominance over all material markets.

Buckle up.

1

FHIR_HL7_Integrator t1_j11ni5t wrote

I think the application of neural networks on quantum computers will be interesting, due to the potential to explore probabilities and decision making that isn't possible on current processing units. Who knows if any headway will be made even with that tech, but I certainly don't think we are close with what we have now. For reference, I remember the hopeful giving this exact same time frame back in 1989 and every year since (and they were voicing similar hopes back in the 70s as well). I err on the side of pessimism, in the sense that the optimists have consistently been wrong.

1