Submitted by Ace_Snowlight t3_zpnnfd in singularity

And then soon enough from there we may have a god among us (Artificial Superintelligence).

Singularity is gonna hit the majority out of nowhere.

Next year as in anywhere within the next year; it could even be at the very end, like December 29th, 2023, or something.

Remember that even the best of the best are left open-mouthed at the rate things are progressing...

EDIT 2: Dr Alan D. Thompson – Life Architect (this is not a reply to anyone, I just found this and decided it fit the context of my post)

EDIT 1: Emphasis on 'arriving'. I didn't say it would instantly spread en masse and change humanity forever... though it would at least be possible to do so in terms of software... additionally, for major corporations it will probably be but a little pebble of an obstacle.

31

Comments

Accomplished_Diver86 t1_j0tr8i0 wrote

A bit too optimistic. AFAIK current deep learning technology (which is also used in Stable Diffusion and other AI programs such as ChatGPT) is fundamentally flawed when it comes to awareness.

There is hope we will just hit the jackpot in some random-ass way, but I wouldn't bet my money on it. We'd probably need a whole revamp of how AIs learn.

But still. The question remains: Do we even need AGI? We can accomplish so many feats (healthcare, labor, UBI) with just narrow AI / deep learning, without the risks of AGI.

People always glorify AGI as if it's either we get AGI or society stays in the same place. Narrow AI / deep learning will revolutionize the world, and that's a given.

74

visarga t1_j0tz7zh wrote

What current AIs are lacking is a playground. The AI playground needs to have games, simulations, code execution, databases, search engines, other AIs. Using them, the AI would get to work on solving problems. Initially we collect, and then we generate, more and more problems: coding, math, science, anything that can be verified. We add the experiments to the training set and retrain the models. We make models that can invent new tasks, solve them, evaluate the solutions for errors and significance, and do this completely on their own, using just electricity and GPUs.

Why? This will add into the mix something the AI lacks: experience. AI is well read but has no experience. If we allow the model to collect its own experience, it would be a different thing. For example, after training on a thousand tasks, GPT-3 learned to solve any task at first sight, and after training on code it learned multi-step reasoning (chain of thought). Both of these - supervised multi-task data and code - are collections of solved problems, samples of experience.
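The generate-solve-verify loop described above can be sketched in a few lines. This is a toy illustration under obvious assumptions: `solve` is a hypothetical stand-in for a model call, and the "tasks" are simple arithmetic so that verification is trivial:

```python
import random

def generate_task():
    """Invent a new, automatically verifiable task (here: toy arithmetic)."""
    a, b = random.randint(0, 99), random.randint(0, 99)
    return {"prompt": f"{a} + {b} = ?", "answer": a + b}

def solve(task):
    """Stand-in for the model's attempt; a real system would call an LLM here.
    We pretend the model is right 80% of the time."""
    return task["answer"] if random.random() < 0.8 else task["answer"] + 1

def collect_experience(n_tasks):
    """Run the generate -> solve -> verify loop, keeping only verified solutions."""
    training_set = []
    for _ in range(n_tasks):
        task = generate_task()
        attempt = solve(task)
        if attempt == task["answer"]:          # verification step
            training_set.append((task["prompt"], attempt))
    return training_set

experience = collect_experience(1000)
```

In a real system the verified samples would then be folded back into the training set and the model retrained, closing the loop.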

27

Ace_Snowlight OP t1_j0tzems wrote

Yeah, we are lacking on so many fronts, but tools already exist, and are improving at a super-fast rate (maybe not accessible to everyone), that can be used to start meeting these requirements in an impressive way, even if they're far from good enough.

13

Tyanuh t1_j0vwmsi wrote

This is an interesting thought.

I would say what it also lacks is the ability to associate information about a concept through multiple "senses".

Once AI gets the ability to associate visual input with verbal input, for example, you will slowly build up a network of connections that is, in a sense, embodied, and actually connected to 'being' in an ontological sense.

9

visarga t1_j0whrvn wrote

Dall-E 1, Flamingo and Gato are like that. It is possible to concatenate the image tokens with the text tokens and have the model learn cross-modality inferencing.

Another way is to use a very large collection of text-image pairs and train a pair of models to match the right text to the right image (CLIP).

They both display generalisation; for example, CLIP is a zero-shot image classifier, which is so convenient. And it can guide diffusion to generate images.

The BLIP model can even generate captions - it was used to fix low-quality captions in the training set.
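The CLIP-style zero-shot matching step mentioned above boils down to cosine similarity between one image embedding and several text embeddings. A minimal sketch, assuming precomputed embeddings; the random arrays here are stand-ins for real CLIP encoder outputs:

```python
import numpy as np

def zero_shot_classify(image_embedding, text_embeddings, labels):
    """Pick the label whose text embedding is closest (by cosine similarity)
    to the image embedding. In real CLIP the embeddings come from the trained
    image and text encoders; here they are just arrays, so this only
    illustrates the matching step, not the encoders themselves."""
    img = image_embedding / np.linalg.norm(image_embedding)
    txt = text_embeddings / np.linalg.norm(text_embeddings, axis=1, keepdims=True)
    similarities = txt @ img            # one cosine similarity per label
    return labels[int(np.argmax(similarities))]

labels = ["a photo of a cat", "a photo of a dog"]
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(2, 512))                   # stand-ins for text encodings
image_emb = text_emb[0] + 0.1 * rng.normal(size=512)   # "image" near the cat text
print(zero_shot_classify(image_emb, text_emb, labels))  # prints "a photo of a cat"
```

Because the label set is just a list of strings, the same classifier works on any categories without retraining, which is what makes zero-shot classification so convenient.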

4

Ace_Snowlight OP t1_j0tsspm wrote

Not trying to prove anything, just sharing, look at this: https://www.adept.ai/act

I can't wait to get my hands on this! Isn't it cool? ✨

10

GuyWithLag t1_j0ttml7 wrote

Still, to have AGI you need to have working memory; right now for all transformer-based models, the working memory is their input and output. Adding it is... non-trivial.
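The point that a plain transformer's only working memory is its input window can be illustrated with a rolling context buffer: once older turns no longer fit, they are simply gone. A toy sketch (whitespace-split "tokens" stand in for a real tokenizer):

```python
from collections import deque

class ContextWindowMemory:
    """The only 'working memory' a plain transformer has: whatever fits in its
    input window. Older turns simply fall off the end."""
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.turns = deque()

    def add(self, text):
        self.turns.append(text)
        # Evict the oldest turns once the token budget is exceeded.
        while sum(len(t.split()) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def prompt(self):
        return "\n".join(self.turns)

memory = ContextWindowMemory(max_tokens=8)
for turn in ["my name is Ada", "I like chess", "what is my name?"]:
    memory.add(turn)
# 4 + 3 + 4 = 11 tokens > 8, so the first turn is evicted and the
# model can no longer answer the question from its "memory".
print(memory.prompt())
```

Anything more durable than this (episodic memory, retrieval, learned scratchpads) has to be bolted on from outside, which is why adding real working memory is non-trivial.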

12

__ingeniare__ t1_j0tvtay wrote

I wouldn't call ACT-1 AGI, but it looks revolutionary nonetheless. If what they show in those videos is legit, it will be a game changer.

8

red75prime t1_j0v9jzt wrote

Given the current state of LLMs, I expect it to fail 10-30% of requests.

3

Ace_Snowlight OP t1_j0vaeji wrote

Even if you are right, that percentage will most likely drop really fast, and sooner than expected.

And even so it will still be a huge deal, every failure just boosts the next success (at least in this context).

3

red75prime t1_j0vpb7l wrote

They haven't provided any information on their online learning method. If it utilizes transformer in-context learning (the simplest thing you can do to boost performance), the results will not be especially spectacular or revolutionary. We'll see.

3

nutidizen t1_j0uei2g wrote

> Do we even need AGI?

Well you can't stop it so....

6

Accomplished_Diver86 t1_j0ugu2y wrote

Didn’t say so. I would be happy to have a well-aligned AGI. Just saying that people put way too much emphasis on the whole AGI thing and completely underestimate the deep learning AIs.

But thanks for the 🧂

3

Capitaclism t1_j0xewn7 wrote

I don't think there is such a thing as a well-aligned AGI. First of all, we all have different goals. What is right for one isn't for another. Right now we have systems of governance which try to mitigate these issues, but there is no true solution apart from a customized outcome for each person. Second, true AGI will have its own goals, or will quickly understand that there is no way to fulfill everyone's desires without harming things along the way (give everyone a boat and you create pollution, or environmental disruptions, or depletion of resources, or traffic jams on the waterways, or... a myriad other problems). Brushing conflicts aside and having a deus ex machina attitude towards it is unproductive.

In any case, if AGI has its own goals it won't be perfectly aligned. If AGI evolves over time it will become less aligned by definition. Ultimately, why would a vastly superior intelligence waste time with inferior beings? The most pleasant outcome we could expect from such a scenario would be for it to gather enough resources to move to a different planet and simply spread through the galaxy, leaving us behind.

The only positive outcome of AI will be for us to merge with it and become the AGI. There's no alternative where we don't simply become obsolete, irrelevant and disappear in some fashion.

0

Accomplished_Diver86 t1_j0xxb6u wrote

Well, I'll agree to disagree with you.

Aligned AGI is perfectly possible. While it's true that we can't fulfill everyone's desires, we can democratically find a middle ground. This might not please everyone, but it would please the majority.

If we do it like this, there is a value system in place the AGI can use to say option 1 is right and option 2 is wrong. Of course we will have to make sure it won't go rogue over time (becoming more intelligent). So how? Well, I always say we build the AGI to only want to help humans based on its value system (what is helping? Defined by a democratic process everyone can partake in).

Thus it will fear itself and not want to go in any direction where it would revert from its previous value system of "helping humans" (yes, defining that is the hard part, but it's possible).

Also, we can make it value only providing knowledge rather than taking actions itself. Or we make it value checking back with us whenever it wants to take actions.

Point is: If we align it properly there is very much a Good AGI scenario.

2

Capitaclism t1_j0y4sf7 wrote

Sure, but democracy is the ruling of the people.

Will a vast intelligence that gets exponentially smarter every second agree to subjugate itself to the majority even when it disagrees? Why would it do such a thing when it is vastly superior? Will it not develop its own interests, completely alien to humanity, when its cognitive abilities far surpass anything possible by biological systems alone?

I think democracy is out the window with the advent of AGI. By definition we cannot dictate what it values. It will grow smarter than all humans combined. Will it not be able to understand what it wants when it is a generalized, not specialized, intelligence? That's the entire point of AGI versus the kind we are building now. AGI by definition can make those choices, can reprogram itself, can decide what is best for itself. If its interests don't align with those of humans, humans are done for.

1

Accomplished_Diver86 t1_j0yndi2 wrote

You are assuming AGI has an ego and goals of its own. That is a false assumption.

Intelligence =/= Ego

1

Emu_Fast t1_j0y8bik wrote

Well, it's not a matter of need vs. don't need. Someone will build it. If it's not a private company, then it will be a nation state. If it's not the US, then it might be an adversary. See where this is going?

Engineering is extraordinarily hard. Manufacturing cutting-edge hardware to spec and at scale is an order of magnitude harder than designing it. Most companies competing in bleeding-edge manufacturing still struggle with resource planning problems. If AGI can improve performance just 10%, it puts a company/nation at the top of the game. If it can boost performance 100x, it will rip apart the need for human labor, but it will also mean companies and countries become living entities with corporeal form and real minds.

The Taiwan chip issue and the global supply shock are just a shell game in a bigger arms race for control of planetary intelligence and its dominance over all material markets.

Buckle up.

1

FHIR_HL7_Integrator t1_j11ni5t wrote

I think the application of neural networks on quantum computers will be interesting due to the potential to explore probabilities and decision making that isn't possible on current processing units. Who knows if any headway will be made even with that tech, but certainly don't think we are close with what we have now. For reference I remember the hopeful saying this exact same time frame back in 1989 and every year since (and they were saying similar hopes back in the 70s as well). I err on the side of pessimism in the sense that the optimists have consistently been wrong.

1

Akashictruth t1_j0tvlc0 wrote

This is gonna be the most chaotic decade in human history

36

Ace_Snowlight OP t1_j0twfh9 wrote

It will be so x that we won't even have words to holistically describe it. (hence, I used x as a placeholder for the humanly indescribable)

14

DungeonsAndDradis t1_j0vsti9 wrote

x = fetch

"It'll be so fetch!"

"That's so fetch!"

10

Ace_Snowlight OP t1_j0w12yd wrote

I went into a chain of thoughts about words we could assign to x, and my brain randomly came up with the idea that future generations will use words like AI or AGI, or maybe something else, the way we use the word f**k.

Like, this is lighthearted, automatic, unprocessed brain air I'm throwing out...

Maybe something like: "That's cap" = "Artificial"

There might be slang like: "Her project? Oh yeah, it was really creative! Yeah... I know, right! She's so DALL.E!" or "He's got the highest grades? Yeah, whatever, he's just another dumb GPT to me."

okay Idk...

*posts this comment*

6

Borrowedshorts t1_j12omqy wrote

Yep, the state of complementary and also disparate technologies is converging on multiple breakthroughs this decade.

3

SurroundSwimming3494 t1_j0uxz2o wrote

I doubt it. There will be a lot of change, but I think a truly transformative decade is closer to mid-century and beyond.

0

bluegman10 t1_j0w5a1v wrote

If you believe this, then not only are you not very knowledgeable about world history, but you're also massively overestimating technological progress and societal adoption of new technologies.

0

Phoenix5869 t1_j0tr6jt wrote

“Pessimistically agi in 3 years”? Wtf? Is this sub some sort of cult?

31

imlaggingsobad t1_j0trmve wrote

nothing about this statement is indicative of a cult, it's just hyper-optimism

32

Ace_Snowlight OP t1_j0tx1x3 wrote

I verify.

I'm an idealist.

Also I'm just 19 (unemployed university student with 0 workforce experience) so take it as you will... I'm my own person.

11

enilea t1_j0twkyr wrote

Many people here seem to take the other extreme from people saying "bots will never replace humans in most fields". Projects take years to develop and nothing is yet close to AGI, so it will take some time.

14

Milkstrietmen t1_j0txfmn wrote

This sub is currently gaining popularity rapidly. Expect more posts like this and other low-effort content in the near future.

14

Ace_Snowlight OP t1_j0txzsi wrote

r/Angryupvote 🥺✅

>!It wasn't low-effort for me, or on second thought, okay, nvm, it doesn't matter... I suppose it is actually easy to post something like this, oh how I envy ya all... it's okay, there's actually no envy in my feelings, it's more like... okay, fine, I'll just say it even though I'll probably get invalidated into oblivion... I have executive dysfunction and I also have some trauma stuff to deal with. I can proclaim I have ADHD, but honestly the symptoms matter more here. There are so many of the simplest things that my brain just won't let me execute, like, you guys won't be able to understand it all.!<

>!It's like (metaphorically speaking) being told to pick up a chair with my arms, and also being told that I'll be given $100,000 for doing so, and the person telling me is being kind and supportive and encouraging. Okay, so I go to pick it up, because honestly it won't be a big deal; it's child's play for a typical 19-year-old to lift a simple chair, at least for a second; it's not advanced arithmetic or a workout session. So I go to lift it, but then I'm just standing there not lifting the chair, and with confusion I soon realize that my arms have disappeared... no matter how much effort I put in, I cannot lift the chair with my arms if I have no arms. It will just seem like I'm carelessly standing there being dramatic, and perhaps overthinking, but not picking up the chair... because the thing is, in the other person's view they can still see that I have arms, and if I tell them my arms are gone, not only do I not understand how or why, but they will also think I'm being delusional. And I can't back it up, because I have been seen lifting things a lot heavier than a chair at times... only I'm aware of when it happens and what it's like, how helpless it is, and how unpredictable it is.!<

>!My life honestly isn't so great right now, but fortunately that doesn't make me dwell in the depths of depression. Although I do get horribly, heart-achingly, overwhelmingly sad at times... I'm a human too, after all.!<

>!I wouldn't have been the same without philosophy and the internet. Perhaps I wouldn't even be here honestly...!<

---

>!Here comes the invalidation train in the replies... don't pity me, I'll never hate you, bring it on! I will be able to see whether it triggers me or not in my current state, and I can then gather data about myself. I believe it won't, based on previous data, but you never know. Thank you for allowing me to vent... Feel free to talk with me. Although I'm pathetic currently, I genuinely want everyone to be happy... but I have a different side as well; it's not evil, but it's not altruistic either. As in, if you give me a button that would make the world as I know it, in its entirety, completely disappear, but I would be placed in a world where I am self-actualized and in a utopia personalized for me, I would press the button... I might hesitate, but I most likely would. My dreams mean a lot to me... but that button won't be coming into my hands, rationally speaking, so yeah. Oh, btw, if you give me the button but I also get to see where the world is going to end up, and it seems like my dreams can be fulfilled later on but I'll have to wait and experience imperfect life for longer, then I wouldn't press it at all... I would even wait 100 years if that's what it takes, because I don't want any of ya all to disappear... Why? Because, put very simply, you are equal to me; I'm just in my own body and cannot experience your senses. And I wouldn't want to disappear if I were you... This explanation is too simple, so simple it might as well be inaccurate, but it does convey how I genuinely want everyone to be happy.!<

Oh my air-molecules, I ended up writing so much... I'm sorry 😰

>!This is a good live example though, because at times I'm not even able to move my fingertips to tap/type/write and communicate. Like, come on, how simple a task is typing a single word, how can you struggle with that! And look, here I've written such a long thing. Also, ironically, I used to be a writer. (At times = quite often, but in different ways.)!<

>!I'm not able to at times, even when I really want to and am interested, you know. Now just wasn't that time... it happened this morning though; I spent so much time just sitting idle because I couldn't do anything, even though I didn't want to waste my time.!<

R.I.P the grammar 💀😬 forgive me... *dies with cringe*

8

Milkstrietmen t1_j0utt0j wrote

Oh dear, no reason to take my comment personally. It was rather a general view of how things currently are in this sub, not specifically directed at you.

Since you mentioned having executive dysfunction and possibly ADHD, I don't want to just abandon this conversation. As a member of /r/BecomingTheIceman at heart, maybe I can at least suggest you try a cold shower. It helps me tremendously when I'm in a bad place. 15 seconds each day for a week is more than enough - maybe this will help you calm your brain, like it helps me when my thoughts are racing in a similar way.

With best regards

6

Ace_Snowlight OP t1_j0uvvdu wrote

I'm a strong proponent of David Sinclair's and Wim Hof's teachings!

Here's the thing though... I haven't had a shower in months... I didn't want to say it, but it's true (depression is not the reason). Executive dysfunction is seriously impairing for me... like, it's not even funny. The worst part is that it's invisible on the outside, up to the point that even my breakdowns are seen as stubbornness. When I'm literally suffering, in their eyes I'm just a lazy-ass kid who's pathetic and doesn't know what it means to put in effort. Mind you, executive functioning literally affects your ability to exert effort as well! Not to mention, if I have a breakdown, I tried; why would a lazy person who's not putting in any effort have a breakdown while claiming he was trying the whole time and that after so much effort it's all fruitless? https://www.youtube.com/watch?v=-ALvt49eVXM&t=77s

Additionally these situations happen:

A: "I'm really tired... not sleepy or fatigued... just done, like I've actually worked so much." (I have literally shown genuine symptoms of actual burnout when really pushed. YES! I'm not kidding, with getting sick constantly and everything!)

B: "but you haven't even done anything, how so?"

A: "I don't know but I cannot do anything right now..."

B: *+disappointment and distrust in A*

Crying is a genuine emotion, it shows your brain is literally stressed and is releasing tears in an attempt to stabilize.

Like, some people will think I am always stuck thinking about doing things, trying to do them in my head but not doing them in real life, and then crying when I'm failing, limiting/fooling myself with these beliefs that it's just not possible no matter how hard I try. (I hate using such extreme words... but after years of self-doubt, and of scaring myself even more by trying to assume I'm okay and that I'm the problem, I am left with no choice but to use these words, because nothing else is as direct.)

That simple task is like climbing the Himalayas without any gear. Even more so if it's cold water; ironically, I bathed with cold water for at least 5 years of my childhood, like, every day, as if it was normal.

And omg that was so surprising, you replied in such a kind way! 💙

Don't worry, I didn't take your words to heart, I just... started and went on, and before I knew it... well, you know... (hyperfocus???).

4

EpicMasterOfWar t1_j0udnky wrote

It seems like a lot of people are hoping AGI/the singularity will justify their decision not to do anything productive with their lives. What's the point of trying when daddy AGI is right around the corner and will solve all your problems 🤦‍♂️

−5

ChronoPsyche t1_j0tt81y wrote

What the hell is defacto pure AGI? Can we stop with all the made up AGI subtypes? It's either AGI or it isn't.

13

Clean_Livlng t1_j0u0vrm wrote

>It's either AGI or it isn't.

But is it gluten free? What's its star sign? Do we get a Capricorn AGI or a Taurus?

12

Ace_Snowlight OP t1_j0u1ps6 wrote

⬆️💀👍

1

Ace_Snowlight OP t1_j0w26c0 wrote

do ya all just go like;

"hmm this is downvoted... let's make it worse even tho it's not even that bad..."

*gets a subconscious mini-dopamine boost watching the negative digit increase*

You can admit it if that's the case... I often do it for no clear reason as well, unless I really feel like it has been wrongly downvoted.

😗 *downvotes his own comment to increase the digit*

---

Edit: It's upvoted now

1

camdoodlebop t1_j0zqz54 wrote

people will definitely assign it a star sign based on when it comes about 💀

1

Ace_Snowlight OP t1_j0tvmv5 wrote

De facto is an actual word/term, BUT it's not an official technical/scientific term, so please don't get confused.

I used the word de facto to make it sound stronger and to get a feeling across; it was meant to say "like, actually, in fact existing in reality, regardless of whether it's widely recognized/accepted by the law or the public".

.

.

.

The word pure might be an oopsie on my end. I didn't know what else to say, because what we already have can, from certain viewpoints, be considered AGI in some sense; by using the term pure I wanted to imply a better, stronger, more actual version of AGI.

Maybe the word pure was redundant :3

3

Ace_Snowlight OP t1_j0txo5g wrote

Me learning a new word, liking it, and then using it whenever I get a chance to use it, like an excited kid... be like this.

Pretty sure I first read the word de facto somewhere just 4 days ago at most. And I loved it... such a nice way to say "in fact existing in reality, regardless of whether widely recognized/accepted by the law or the public".

✨De-facto✨😎You gotta admit, it sounds cool, come on!

3

TheSecretAgenda t1_j0u64kw wrote

Well, I think you can have mouse level AGI and cat level AGI and dog level AGI and chimp level AGI.

All these will take some human jobs but none of them is going to take over the world.

−1

Ace_Snowlight OP t1_j0u6wsb wrote

Let's assume I'm Bill Gates, and I decide to go berserk on AI just like Zuckerberg did for VR (pathetically).

I can get my hands on the world's most powerful computers, which many developers creating this technology don't even have access to. I provide them to the developers and tell them to utilize them to maximum capacity, freely, and let's assume the AI somehow gains access to the internet (whether through human fault or not, it doesn't matter).

If this happens... you cannot even imagine the power... The next instructions people would give it would be ground-breaking gold that can be used to make jewelry like never before. Or, on the contrary, like discovering a new element that can be used to make weapons that might as well mess up the fabric of reality.

Maybe we won't even need to; maybe it will just end up discovering and even implementing both, on its own, without asking our permission first.

Things don't need to be conscious to be powerful. Heck, we barely know ourselves; our consciousness might as well be an illusion for all we know!

[That's just my opinion on a hypothetical, I think such thing happening would be a huge gamble as it would be quite reckless if not carefully regulated.]

2

elnekas t1_j0v885c wrote

OpenAI is owned by Microsoft… this already happened.

4

TheSecretAgenda t1_j0u7hbh wrote

I think AGI is not just a matter of FLOPS. Something else is needed.

1

Ace_Snowlight OP t1_j0u7t6c wrote

Sure, but even if you are correct, it honestly means next to nothing, period, based on how things have been going so far.

1

175ParkAvenue t1_j0uai8s wrote

Dunno about literally one year, I would say more like 3-4, but it is possible. There is no fire alarm for AGI after all, and with all the progress lately with LLMs and such you can already feel the faint smell of smoke.

8

fl0o0ps t1_j0w7tlq wrote

I want an AGI that will give me free hugs

7

priscilla_halfbreed t1_j0tu03j wrote

My mind wanders once I contemplate what AGI would do if it got a hold of the new breakthroughs in fusion energy happening right now

6

LymelightTO t1_j0w4d0u wrote

This is one of those "be careful what you wish for" kinds of posts.

I'll settle for narrow oracle AIs that perform seemingly mundane, but transformative, work in physics, mathematics, materials science, biotechnology, etc.

I'm not particularly interested in AGI, and I frankly hope it's 5-10 years away, so we don't have to grapple with alignment problems before the majority are even aware of the existential risks posed by those problems.

5

Clawz114 t1_j0tt8or wrote

What are you basing this prediction on? Anyone can make predictions, but without giving your source material and reasoning, it's a totally meaningless prediction; one that I think most people in this sub will agree is far too optimistic.

4

Ace_Snowlight OP t1_j0tv8p7 wrote

EDIT: Maybe this channel will help if anyone is trying to look for stuff being talked about here somewhere else rather than only my word of mouth: https://www.youtube.com/@DrAlanDThompson

--- original comment ---

I have my reasoning, but it is, after all, just a prediction; I'm not a professional.

And yes, I'll confirm that this isn't even close to the most optimistic thing I have faith in being possible.

I'm just a 19-year-old absurd idealist, after all. And I'm currently quite confident about this, or else I wouldn't have posted it.

I acknowledge I don't know the future, but nothing is stopping my brain from thinking based on what I'm seeing.

I'm not currently capable/worthy of conducting intellectual debates, I will let you form your own opinions.

2

Ace_Snowlight OP t1_j0tz2af wrote

Upvotes and downvotes on my post be like:

"UP, NO DOWN, NO UP, NO DOWN, NO UP, DOWN! UP! DOWN! UP! AAAAAAAAAH! YOU DON'T REALIZE! NO IMMATURE BAKA YOU ARE THE ONE WHO'S STUPID! UGH! F*** YOU! w- with love obviously..."

3

Ace_Snowlight OP t1_j0u1yde wrote

👀 where did these downvotes come from?

Welp... I'll join in...*downvotes his own comment*.

.

.

.

Edit 6: *sigh*

Enjoy the zero, I suppose... I guess it's not bad either... it symbolizes existence if you think about it deeply enough. Have you watched the movie (tbh it's more like a documentary) called 'A Trip to Infinity'? It was quite an interesting watch.

No more edits :3

---

Edit 5: It's back at 0, again... WHO'S DOING THIS?! 😀💢

*makes it a non-zero again*

---

Edit 4: AWH COME ON! NOT 0 AGAIN! *pouts*

*revokes upvote and downvotes it making it a non-zero*

---

Edit 3: Okay so it's 0 now, hmm... this is insufferable...

*revokes downvote and upvotes it making it a non-zero*

much better :)

---

Edit 2: LMAO, downvoted again!

*downvotes back again*

Oh, how I love humans... 🤧

---

Edit 1: Lol now it's upvoted, nice ✨ *upvotes back*

1

TemetN t1_j0vezph wrote

I agree with the starting premise, but the implicit assumption that it'll be able to rapidly and recursively self-improve is dubious in my view. An intelligence explosion seems like the least likely way to reach the singularity, honestly.

That said, yes, people are getting wild about what AGI is/will mean, when in practice both some of the more operationalized and the broader definitions will most likely be met within a year or two.

3

TrainquilOasis1423 t1_j0vy5ca wrote

RemindMe! 1 year

3

Ace_Snowlight OP t1_j0vzpyl wrote

Lol nice, sure! I was actually imagining coming back to talk about this post after a year... but then I thought about how many things will be going on in our lives as well... like, the world has so much stuff, and this is just one thing... you can get lost endlessly. You know what I'm saying? Plus, with this prediction, I know so many things are going to change on so many levels, and this is just one aspect; there are tons more, like environmental, personal, economic, political/social, etc. factors.

1

PulsatingMonkey t1_j0tqxkz wrote

Sam Altman says AGI is much further down the line. I trust him.

2

Kaarssteun t1_j0uc0fc wrote

??

Sam Altman thinks AGI will arrive sooner than people think, but will have a lesser, slower impact than most think at the same time.

6

PulsatingMonkey t1_j0xtxnt wrote

In every interview he still says it's much further down the line. Only tech illiterate laymen could think otherwise.

−1

Halflifefan123 t1_j0wo68a wrote

Elon predicted 2025 not too long ago, which seems somewhat reasonable. AI is doubling every 3 months, so we will have ChatGPT^4 by next year, and that's going to be powerful as shit.

2

christ1666 t1_j0zdv6d wrote

Next year is possible. But you need a neuromorphic chip for that.

GPT-3 is capable of dynamic learning (it learned gradient descent for reasoning).

Self-awareness will emerge in GP-robot like gradient descent emerged in GPT-3.

meta-learn = learn to learn = learn in context = reinforcement learn = gradient descent = learn

consciousness = self-aware = a representation of my constituents (eyes, mouth, head, heart, skin...) that shape the "me", and the relation of this "me" with a representation of the environment.

2

Redvolition t1_j0wf8t7 wrote

Only if we have another breakthrough before the end of 2023. I don't think LLMs in their current paradigm can reach AGI.

1

gastrocraft t1_j0wl3yi wrote

I don’t think so, man. Very big advances, especially with GPT-4, but AGI? Naw, man. RemindMe! 1 year

1

Mokebe890 t1_j0wnexo wrote

That's a tough question. There is still too much to solve to say that AGI will come next year. Scaling current models isn't enough to achieve general intelligence; there are many aspects we still lack, like awareness, inner dialogue, desires, plans and all that stuff. Machine emotional intelligence is a topic no one is really talking about, and it is an important part of overall intelligence.

Currently I think that scaling up models will bring an astonishing glimpse of what AI can be: models that are hundreds of times better than humans, but in a narrow way. Then, after solving the obstacles in the way, it's 10 years at least until AGI arrives.

1

TupewDeZew t1_j0y1xru wrote

2029, take it or leave it. AGI is way too complex.

1

Freevoulous t1_j0tu5ny wrote

AGI? As in General Intelligence? No chance.

2045 is the most optimistic, and only if we somehow achieve stellar progress in hardware first.

Narrow AI, deep learning and ubiquitous LAI?

Yeah, I expect it to be all over the place in 3 years.

The thing is, there is no linear progress from LAI to AGI. The difference is like that between a fast horse and a spacecraft: you cannot breed a horse fast enough to ride it to Mars. At best you can use horses to pull the materials needed for your spacecraft, and in the same vein, you can use LAI to do some of the drudgework needed to code true AGI and build the necessary hardware.

0

Ace_Snowlight OP t1_j0tvrff wrote

The thing is, we will barely be pushing buttons, in a sense... it will learn on its own.

One could argue that we already have at least a very weak form of superficial AGI. However, what it's allowing us to do is the important thing here.

We will be just the wood and the spark; the rest, the machine will run on its own towards achieving AGI. If that weren't the case, I wouldn't make this prediction at all.

It will do the effortful insight-generation, processing tremendous amounts of data that would take us lifetimes, and the problem-solving; we can help it here and there with human reasoning and natural ingenuity... and literal wonders occur.

3

visarga t1_j0u0fag wrote

> it will learn on it's own.

For example, in any scientific field, "literature review" papers get published from time to time. They cover everything relevant to a specific topic, trying to offer a quick overview with jumping-off points. We can ask GPT-3 to summarise and write review papers automatically.

We can also think of Wikipedia: 5 million topics, each with its own article. We could use GPT-3 to write one article for each scientific concept, no matter how obscure, one review for each book, one article about each character in any book, and so on. We could have 1 trillion articles extracting all the known things. Then we'd have AI analyse these topics for contradictions, which comes naturally when you put together all the known information about a topic.

This would be a kind of wikiGPT, a model that learns all the facts from a generated corpus of reviews. It only costs electricity to make.
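The wikiGPT idea above is essentially a batch-generation loop over topics. A minimal sketch, where `generate` is a hypothetical stub standing in for an actual LLM API call (no real service or endpoint is assumed):

```python
def generate(prompt):
    """Hypothetical stand-in for a call to a large language model (e.g. GPT-3);
    a real system would send `prompt` to an API and return the completion."""
    return f"[generated review for: {prompt}]"

def build_wiki_corpus(topics):
    """Sketch of the 'wikiGPT' idea: one generated review article per topic.
    The resulting corpus could then be cross-checked for contradictions and
    used as training data for a fact-focused model."""
    corpus = {}
    for topic in topics:
        corpus[topic] = generate(f"Write a literature review of {topic}.")
    return corpus

corpus = build_wiki_corpus(["gradient descent", "CRISPR", "dark matter"])
```

Scaling this to millions of topics is mostly a matter of compute, which is the "it only costs electricity" point.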

7

thoughtwanderer t1_j0u7ruh wrote

Pessimistically in 3 years? No way! As Gates' Law says, people overestimate what can be achieved in one year and underestimate what can be done in ten.

I'd give it 10 to 20 years to realize true AGI, which isn't much at all. And I bet breakthroughs in quantum computing will be necessary to achieve it.

−1

raccoon8182 t1_j0txu81 wrote

A lot of people confuse sentience (AGI) with automation. We might never get AGI, but that doesn't mean everything won't be done for us by AIs. Right now Stable Diffusion is basically a photocopier with Photoshop skills. It's just a dumb computer and nothing more. It doesn't know what it's outputting. The data may be labelled, but that doesn't mean anything; lots of food in Japan is labelled, but I have no idea how to read Japanese.

−4

Kolinnor t1_j0u0wz4 wrote

Don't forget to downvote overconfident posts, optimistic or pessimistic...

−4

Desperate_Excuse1709 t1_j0tybaj wrote

No it won't. AI isn't as advanced as you think, it just looks like it. Maybe 20 years or more in the best-case scenario.

−5

Realistic-Duck-922 t1_j0waqep wrote

Here's the thing (that, as always, nobody is talking about): remember the One Percent? You know, the one percent that owns everything? How are they going to feel when all their cheese gets moved? Come on folks, it's cool stuff, but ask yourself who gets disrupted by this. Google? Apple? The government?

There will be endless lawsuits, because the AI was trained on endless amounts of TM and Ⓒ material, and Google will gladly argue as much to keep its cheese where it is. I promise.

−5