
izumi3682 OP t1_jdx3d7e wrote

I never said it was an AGI. But it's gonna be, and in less than 3 years, I prophesy. And very shortly after that, between 6 months and one year, it's gonna be an ASI. And ASI = TS, that is, the technological singularity.

I am going to be proven correct.

>As of this commentary there is no such thing as AGI, that is, "artificial general intelligence": a form of AI that reasons and employs "common sense" just like a human, to figure out how to do things it has never been exposed to before. And don't forget--that AGI will also have unimaginable computing power behind its human-like thinking. Something humans don't have--yet, maybe... And we don't even know if such a thing is possible. But I suspect that given enough processing power, speed, access to big data and novel AI computing architectures, a narrow AI (a computing algorithm that can only do one task, but with superhuman capability) will be able to effectively simulate or mimic the effect of AGI. Then my question is, does it matter if it is narrow AI simulating AGI or real honest-to-gosh AGI? Is there even a difference? My point being that narrow AI is very much in existence today. Consciousness and self-awareness are certainly not a requirement. And in fact a true EI (emergent intelligence--conscious and self-aware) would be very undesirable. We don't need that kind of competition.

That is a self quote from my hub essay that I wrote in 2018. I saw it coming even then, although people like you, the AI experts, said no, that is not how AI works.

But that is exactly how people who know, like Sam Altman and Geoffrey Hinton, see it. Further, they are realizing that LLMs that are comprehensive enough apparently begin to spontaneously demonstrate emergent traits. They become able to do things that they were not programmed to do.

And nobody knows why. The black box phenomenon grows larger and larger.

Don't take my word for it. Hear it from these guys yourself.

Geoffrey Hinton

https://www.youtube.com/watch?v=qpoRO378qRY

Sam Altman and his CTO

https://www.youtube.com/watch?v=540vzMlf-54

Oh. And former Google CEO Eric Schmidt

https://www.youtube.com/watch?v=Sg3EchbCcA0&t=734s

Here are some AI experts that are part of AI alignment efforts, discussing what is happening today.

https://www.youtube.com/watch?v=APliuwGYDNc

BTW all of these interviews are from less than two weeks ago. That is the impact of GPT-4.

>...CoPilot

Oh. Yer just a coder. I should have known. No, I don't doubt you use ChatGPT exponentially more than I do. It is a fantastic tool to make your tasks ever so much easier. Don't you understand that the reason it is of such incredibly good use to you is that it is itself exponentially developing into a more powerful AI? Did you know that GPT-4 can code at the level of a senior coder? Further, it continuously improves based on input to the model from coders like yourself. Then OpenAI sends out little "updates" that demonstrate improvement. Your helpful tool that takes most of the stress off you is going to replace you in about two more years, if not sooner.

Right now GPT-4 is hobbled by a training cutoff of September 2021 and very limited access to the internet. But in 3 to 6 months' time, you're gonna see some serious shizz. And one year from now? We can't model what GPT-4 will be capable of.

And that is how the TS rolls. I still maintain, somewhat reluctantly, that 2029 will be the year of the TS, but the release of GPT-4 may have profoundly changed the game. What do you imagine something like a "GPT-5", or whatever it's called, will be capable of? More importantly, when will it release? And for that matter, what kind of unimaginable craziness are we yet to see in the balance of 2023 alone? I know that Nvidia is up to some kind of novel AI right now. I state to you, as a fact, that there will be at least 4 more profound AI-related news stories that become public knowledge this year. Not all of them related to coding. But it may not matter either. My god! What we are talking about now, compared to 2018--a technological lifetime ago of 5 years. What will 5 years from today, 2028, look like? We can't model it.


GrandMasterPuba t1_jdx8emg wrote

>What do you imagine something like a "GPT-5" will be capable of?

I imagine it will be capable of predictive modeling of language, just like GPT-4, and just like GPT-3; that it will be better at it, that it will continue to confuse people who don't know what they're talking about into believing it is somehow alive or conscious, and that it will continue to just be a statistical model running on silicon in a cooled warehouse.

I imagine that it will be just good enough to convince business leaders to replace all their workers with it, and that it will be ever so slightly shittier than a normal human because it lacks any sort of foresight or higher level reasoning, and as a result the world will be just a little bit shittier for everyone.


izumi3682 OP t1_jdxukmm wrote

>I imagine that it will be just good enough to convince business leaders to replace all their workers with it, and that it will be ever so slightly shittier than a normal human because it lacks any sort of foresight or higher level reasoning, and as a result the world will be just a little bit shittier for everyone.

Doubtful. I think you are not seeing the forest for the trees. Did GPT-2 take you by surprise in 2019? How about GPT-3 in 2020, barely a year later. Did you know what it was going to be capable of when it released? The whole point of all of this discussion is that you seem to be the only one who believes that very little improvement will accrue to our efforts at AI in the balance of this decade. I suspect that it will transcend human intelligence as early as the year 2027. It's not going to be "shitty". It's going to be beyond our 2023 "stone-knives-and-bear-skins-primitive" imagination, compared to what will be the ground truth in the year 2025.

Just out of curiosity, what year do you believe humanity will achieve AGI? 2025, which is my NLT ("no later than") prediction? 2030, 2045, 2060, never? I think you are massively underestimating the improvement of these narrow-ish AIs. I'm not sure why. Could be some kind of defensiveness or denial. And that is understandable from the perspective of the human condition. But bear in mind that there is an abyss of difference between an AGI and an EI, that is, "emergent intelligence". An AGI is just more fancy computing that can do lots of different things. It has a level of computing that allows it to understand the laws of physics and, probably based on that, what we would regard as common sense. Consciousness and self-awareness are not necessary. An EI is a new form of sentience in the universe (our portion of the multiverse), conscious and self-aware. We need to work very hard to avoid bringing about an EI.

Take a look at this. I wrote it when I was wondering how we can make an AI be "motivated" to do something.

https://www.reddit.com/user/izumi3682/comments/9786um/but_whats_my_motivation_artificial_general/


GrandMasterPuba t1_jdyrevb wrote

>Just out of curiosity, what year do you believe humanity will achieve AGI?

It won't. Technology is peaking. In the coming decades sociopolitical and environmental stressors will cause technology to enter an inevitable decline and we will enter an era of degrowth and survival.

It is already happening. Software is in decline as we speak. Developers are less productive than ever. Software in general is more broken and error-prone than ever. Understanding of technological fundamentals is lower than ever. An influx of new, green engineers who never bothered to learn how any of this shit works has displaced the old retired graybeards who built it all. Web companies are failing left and right, their castles built on mountains of sand crumbling beneath them as they run out of hands to pile the grains back up.

My counter-argument to your pie-in-the-sky notions of infinite growth is that GPT, and LLMs in general, are in a period of logistic growth. They were advancing slowly, then all at once, and soon they will reach a limit.
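For the record, by "logistic growth" I mean the standard S-curve, L / (1 + e^{-k(t - t0)}): near-exponential at first, then flattening toward a ceiling L. A minimal sketch (the parameter values here are illustrative, not fitted to anything):

```python
# Logistic ("S-curve") growth: slow start, explosive middle,
# then a plateau at the ceiling L. Illustrative parameters only.
import math

def logistic(t, L=100.0, k=1.0, t0=0.0):
    """Logistic curve: L / (1 + e^{-k (t - t0)})."""
    return L / (1.0 + math.exp(-k * (t - t0)))

for t in range(-6, 7, 2):
    # Looks exponential near the start, but flattens as it nears L.
    print(f"t={t:+d}  value={logistic(t):6.2f}")
```

The early part of that curve is indistinguishable from exponential growth. That's the point: looking at the steep middle tells you nothing about where the plateau is.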


izumi3682 OP t1_jdzarvv wrote

>...soon will reach a limit.

OMG! You sound just like 2018 all over again. Well, OK, I'll look you up in a year or two, or you can look me up in a year or two, and we'll compare notes. I'm not going anywhere; I mean, unless I get hit by a truck or something. But I have been continuously in r/futurology for nearly ten years now. I pretty much have seen it all. And I will continue to breathlessly report all the latest developments in AI and anything else "futurey" that attracts my attention.

About AI winters. "Limits". AI winters occur when, for technical or even science-philosophy reasons, a wall is hit in progress. The AI winter of the mid-90s to mid-00s, about ten solid years, happened because it did not seem possible for contemporary computing to realize the long-theorized "neural network", first seen in very primitive form in Rosenblatt's perceptron of the late 1950s. Marvin Minsky, the finest AI scientist of his day, said as much: the problem seemed to be "intractable".

Rising AI scientists like Geoffrey Hinton were basically "alone in the wilderness", struggling to advance the science even a fraction of an increment. But even he had no luck. Now, the other element of AI winters is when the investors who had initially seeded these projects with considerable sums of money begin to think, hmm, I don't think this is going to pan out after all. And then the money dries up. A vicious cycle feeds itself. Virtually no progress occurs.

Hinton racked his brain trying to come up with ways to make CPUs realize that elusive neural network. Primitive ones did already exist, but new ideas were needed. I'm not sure how, but Hinton looked at the GPUs that Nvidia was using for advanced (by early-2000s standards) graphics in video games. He probably had significant insight into what he was looking for and likely realized that the same effect GPUs had on graphics could be used to realize those long-sought convolutional neural networks. Further, he took a relatively old concept, "backpropagation", and used it along with GPUs to almost literally force the CNN into existence. Many other now-renowned AI experts were instrumental in this as well.
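To make that combination concrete, here is a toy sketch (my own illustration in modern PyTorch, not Hinton's actual code; the layer sizes and fake data are made up) of a convolutional network trained by backpropagation, with the heavy matrix math offloaded to a GPU when one is available:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny CNN: one convolutional layer feeding a linear classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learns local image filters
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch of 28x28 "images".
x = torch.randn(32, 1, 28, 28, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = loss_fn(model(x), y)
loss.backward()   # backpropagation: gradients flow back through the conv layer
optimizer.step()  # gradient-descent update, executed on the GPU if present
```

The point is that every step here, the convolutions forward and the gradients backward, is parallel matrix arithmetic, which is exactly what game GPUs were already built to do fast.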

Hinton, in his typical engineering understatement, said of GPUs, "This seems to work". And from that point forward, "narrow AI" began to explode. And explode. And explode. Tens of thousands of narrow-AI-aided apps, including "Siri", suddenly came into existence.

The one that blew me away personally, about 2015 or 2016, was Google Translate. The translated font on my iPhone screen was identical to the original font. Even the color was identical if the original font was in color. When I saw how that worked, it was like magic, a miracle of technology.

Then I had this other app on my iPhone, about 2016, called "AIpoly". It was an experimental, sort of beta app for the use of blind people. You set the app to use your iPhone cam to look at objects up close, and it would state in text what it saw. I pointed the cam at my hand; the doc I worked with was right there with me. The text said, "The back of a hand". Our jaws collectively dropped. We both said "Whoaaa!!" in genuine and truthful amazement. Then I pointed it at my computer monitor and the text read "computer monitor". There was a way to turn on sound for blind users, but I could not find it, so we just relied on the text. It could not identify a candy wrapper on the desk; it said, "I am unable to identify this." But OMG! We were blown away.

Two years later, in 2018, the first GPT was released, with 117 million parameters. And the rest, of course, is history.

There will never again be an "AI winter", for two reasons. The first is that our extant AI is so inextricably entwined into all human affairs that it is essential it continuously improve, or everything fails. This leads to the second part of the first reason. When ChatGPT released on 30 Nov 22, more than a million users had scarfed it up within five days, and an estimated 100 million within two months: the fastest technological penetration of society in recorded history. Of those 100 million users, I'm pretty confident that a goodly percentage are AI developers in their own right. And I'm further confident that we shall see an absolute "Cambrian explosion" of new AI architectures and training techniques, in the vein of transformers and diffusion, to name two.

What do you think will be the next training technique to come to our attention? I mean, what will it be called? It's coming, sure as Christmas. More than likely this year, too. And it will be utterly transformational in our efforts to achieve AGI, which I maintain will be in existence NLT 2025.

The second reason is a bit more ominous. Vladimir Putin stated in 2017 that whoever becomes the leader in AI "will become the ruler of the world". The national defense of the USA, China (PRC), Russia and probably a great many other, mostly first-world countries depends utterly on ever-faster developments in AI. The money is never going to dry up again. And investors know that. BTW, Nvidia, of GPU fame, is working on its own novel form of AI. I don't know when it's going to be released, but it's on the way. Could be this year, maybe.

No, AI is going to continue to develop and evolve, some of the evolution on its own, through unanticipated emergent behaviors, but most of it through humans working as hard and as fast as they possibly can to make AGI. Because now, yes, it is a race. And everybody knows it. And like I stated earlier, it is natural and normal that this is happening. It is logical that we are at the point we are at today. Thank the Renaissance, the Reformation, the Industrial Revolution, the Enlightenment, WWII, ENIAC, H-bombs and "Eliza". Oh! And video games.

Further, the AI itself will be developing science and technology as a multiplier on top of our now exascale computing power. Today that processing speed hovers around 1-1.6 exaflops, but as soon as 2025 it is expected to be between 10 and 20 exaflops. What are quantum computers up to now? Not sure; they're a bit of a wild card in all this. But I will say this: I suspect it will take quantum computing to realize genuine consciousness in an AI. An "EI" then may come into existence, and God help us all when (hopefully if, rather) that happens.
