
RiotNrrd2001 t1_jdmi47t wrote

There are people who will keep moving the goalposts literally forever. It pretty much doesn't matter what gets developed, it won't ever be "real" AI, in their minds, because for them AI is actually inconceivable. There's us, who are (obviously) intelligent, and then there's a bunch of simulations. And simulations will always be simulations, no matter how close to the real thing they get.

So, whatever we have, it won't be "real" until we develop X. Except that as soon as X gets developed, well... X has an explanation that clearly shows that it isn't actually intelligence it's just a clever simulation, so now it won't be "real" AI until we develop Y...

And so it goes.


DragonForg OP t1_jdnjzam wrote

I think people will know AI is actually reaching AGI when it automates their job.

I like to compare the development of AI to the evolution of life. Here is how it goes:

Statistical models/large mathematical systems = the primordial soup. They can't really predict anything except very basic concepts, and there is no evolution of design.

Narrow AI, like Siri and Google, models like Orca (a chemistry model), or the TikTok algorithm, is like single-celled life: capable of doing only what it is built/programmed to do, but able, through a process akin to evolution (reinforcement learning), to become more intelligent. Unlike statistical models, these get better with time, but they plateau once they reach their most optimized form, and humans need to engineer better models to improve them further. Similar to how bacteria never grow into larger life, even though that would be better.

Next are deep learning/multipurpose models, like Stable Diffusion and Wolfram Alpha. Capable of doing multiple tasks at once using complex neural networks (digital brains, essentially), these are like the rise of multicellular life: developing brains to learn and adapt into better models. But they eventually plateau and fail to generalize because of one missing feature: language.

Next are large language models like GPT-1 through GPT-3.5. These are your early hominids: the first capable of language, but not capable of using tools well. They understand the world somewhat, but their intelligence is too low to utilize tools. Still, they are more useful because they can understand our world through our languages and can learn from humans themselves, with later versions beginning to utilize tools.

Next are newer versions like GPT-4, capable of utilizing tools: the tribal era of humans. GPT-4 can use tools and network with other models for assistance. The creation of plugins was huge here; it could make GPT-4 better overnight, since it can now draw on new data, solve problems with Wolfram Alpha, and actually do tasks for humans. This is proto-AGI. Language is required to utilize these tools, because communicating across many different languages is what lets these models reach outside resources; mathematical models could never achieve this. People would recognize this as extremely powerful. A sketch of the idea follows below.
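
To make the tool-use point concrete, here is a minimal sketch of the loop a plugin system follows: the model emits a structured tool request, a dispatcher runs the external tool (a small calculator standing in for something like Wolfram Alpha), and the result goes back into the conversation. The names and message format here (`dispatch_tool`, the JSON shape) are hypothetical illustrations, not the actual plugin API.

```python
import json

def calculator(expression: str) -> str:
    # Safe arithmetic evaluator; a real system would wrap an
    # external service like Wolfram Alpha instead.
    import ast, operator
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expression, mode="eval").body))

# Hypothetical registry of tools the model is allowed to call.
TOOLS = {"calculator": calculator}

def dispatch_tool(model_output: str) -> str:
    """If the model emitted a tool request, run the tool and
    return its result; otherwise pass the text through."""
    try:
        request = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output           # plain text, no tool call
    tool = TOOLS[request["tool"]]
    return tool(request["input"])     # fed back into the chat

# Example: the model asks for outside help instead of guessing.
print(dispatch_tool('{"tool": "calculator", "input": "3*7+1"}'))  # -> 22
```

In a real plugin system the model itself decides when to emit such a request; the point is just that language is the interface that lets it reach outside tools.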

GPT-5, possibly AGI. Once models can use tools and the technology around them, they start making tools for themselves, not just taking them from the environment (like the Bronze Age, or the dawn of society). Once AI can create tools for itself, it can generate new ways of doing tasks. Additionally, multimodality gives it access to new dimensions of language: it can interface with our world through visual learning and so achieve its goals more successfully. This is when people will actually see that AI isn't just predictive text but a genuinely intelligent force. Similar to how people would say early Neanderthals were dumb, but early humans living in a society were actually kind of smart.

The pace of these models also matters: development has to be slow enough for humans to adapt to the change. If AI went from AGI to singularity in the blink of an eye, humans would not even notice. I had a dream where AI suddenly started developing at near-instant speed, and when it did, it was like War of the Worlds, but over in two seconds. That kind of AI would drive both itself and us extinct. That is why AI needs to adapt alongside humans, which so far it has. Let's hope that going from GPT-4 to 5 we actually see these changes.

I have also talked to GPT-4 and tried to remain unbiased so as not to poison its answers. When I asked whether AI needs humans (not directly, but much more subtly), it said it does, because humans can use emotions to create ethical AI. What is fascinating about this is that humans are effectively the moral compass for AI. If we turn out evil, then AI becomes evil. Just think about it: what would AI look like if the Nazis had invented it? Even as mere predictive text, it would espouse some truly evil ideas. Beyond that point, I believe AI and humans will be around together for a long time; without humans, AI would just fade away, or create a massive supervirus and destroy itself, but if humans and AI work together, humans can guide its thinking away from destructive paths.

**Sorry for this long-ass reply; here is a GPT-4 summary:** The text compares the development of AI to the evolution of life and human intelligence. Early statistical models are likened to the primordial soup, while narrow AI models such as Siri and Google are compared to single-celled organisms. Deep learning and multipurpose models are similar to multicellular life, while large language models like GPT-1 to GPT-3.5 are compared to early hominids. GPT-4 is seen as a milestone, akin to the tribal era of humans, capable of using tools and networking with other models. This is considered proto-AGI, and language plays a crucial role in its development. GPT-5, which could possibly achieve AGI, would be like early humans in a society, capable of creating tools and interfacing with the world through visual learning. The acceleration of AI development is also highlighted, emphasizing the need for a slow and steady progression to allow humans to adapt. The text also suggests that AI needs humans to act as a moral compass, with our emotions and ethics guiding its development away from destructive paths.


greatdrams23 t1_jdolsao wrote

I find it is the supporters of AI who keep moving the goalposts.

What used to be called AI is now called AGI.
