Submitted by bloxxed t3_10xa9tj in singularity

In light of recent events, I can't help but consider the possibility of things progressing far more quickly than I had previously imagined. I would never in a million years have thought we'd have seen the quality of image and voice generation we see today, and yet, here we are. Combine that with ChatGPT and the recent advent of a new AI arms race between Microsoft and Google, and I can't help but feel we may be standing on the precipice of something incredibly significant. The knee-jerk reaction to discard such a scenario as sci-fi fantasy is understandable, but nonetheless I can't bring myself to discount it entirely. After all, we've made so much progress in fields where it was widely thought decades or even centuries were needed to reach a breakthrough -- surely no one thought artists would be among the first out the door, for example. What I'm getting at is, what if AGI turns out to be a much easier goal than previously assumed, and it really is just right around the corner? I've seen an uptick in users here predicting we'll reach that target as early as this year, and while I myself am not confident enough to single out any specific date, I still don't feel I can dismiss them out of hand, especially not with things moving as fast as they appear to be.

Anyway, that's the end of my rambling. I'm interested in hearing everyone's thoughts.

48

Comments

hucktard t1_j7rg8gg wrote

How hard is hard? I mean, how fast does artificial superintelligence (ASI) have to appear to count as a hard takeoff? Within a year? A month? A day? I think it's also possible to have a somewhat narrow ASI: an AI that is super smart at most things but still very limited at other tasks that humans do. In fact I think that is the likely scenario, and we actually already have very limited versions of it.

I don't think we will have a super hard takeoff, like a godlike ASI that appears almost instantly. I do think the rate of advancement could be extremely quick, though: really impressive but not completely general AI within the next year or two, then mind-blowing, world-changing advancements over the next few years, but no godlike AI that appears overnight and suddenly rules the world.

25

oOMaighOo t1_j7rn8y0 wrote

I agree

I am starting to think that achieving AGI is wildly overrated, given what we are seeing just from GPT-3.5. That is already a very powerful tool, and GPT-4 and other more advanced LLMs are just around the corner. The way it is looking right now, they might just turn the world upside down in a way that very much resembles the singularity.

Also, it's not just the technology that's impressive, but especially the rate of adoption. It's like everyone has been waiting for a prompt/interface that is usable by non-expert users.

16

sumane12 t1_j7th7xg wrote

I'm of a similar mind. We already have narrow superintelligent AI. I don't think a godlike super AI will appear instantly either, but I do think the first AGI will be ASI. How can it not? Speed-of-light thinking, the ability to search the web instantly, no need to eat or sleep, the ability to copy itself multiple times to work on multiple tasks. I think a fast takeoff is inevitable; we already have a superintelligent assistant in the form of ChatGPT, and it will only improve.

That being said, I don't think the recursive self-improvement will be immediate. It will be quick, but it will still take a few years from AGI to see an end to human invention and the godlike AI we expect to result. It's also not clear to me at what point we will merge with AI, or what the outcome of that will be; it may well be that we become the ASI.

2

BenjaminHamnett t1_j7uhjmm wrote

I always assumed someone, or a cyborg society, would merge with the AI. It may come down to arbitrary semantics to describe what happens.

I always assumed a combined cyborg hive would be stronger than AI alone. The last human creators would have more incentive and more capability than (relatively) detached programmers, if there could even be such a thing, considering anyone reading this today is already essentially a cyborg.

That AI is writing code already is what skews the odds a bit now. It becomes a bit more likely that someone will give a detached AI enough computing power to use evolutionary programming to bootstrap a sci-fi singularity. I still think this is less likely than a neural-implant cyborg hive-mind singularity, but the odds are approaching 50:50, where before I thought it was more like 90:10 in favor of a cyborg-based singularity over straight hardware.

5

aeaf123 t1_j7r9jfx wrote

I personally could see some broad psychological impacts. It's always better to ease new things in; that way people have time to adapt, build comfort, and gain a general understanding. It also gives AI researchers plenty of time to gather valuable feedback as they work on alignment and the maturation of AI.

6

X-msky t1_j7rej9x wrote

It's possible we've seen a jump and it will take time for the next one. In that case we'll see lots of cool stuff that uses these new capabilities far more than what we've seen so far, but still just rendering images, audio, and text. The next jump might then take a few years, say in 2026-27, putting us right on schedule for a final jump in 2029, as per Kurzweil's predictions.

These new uses of transformers are cool, but for a jump towards hard takeoff, Sam Altman thinks something else is needed.

Not AGI yet, but this year will be crazy and the future will get crazier, I love it...

I think we'll need something for better personalization so your AI actually has context on you.

6

blueSGL t1_j7rqa0q wrote

I think solutions are going to be found, some time before AGI itself is created, for a lot of things that people currently assume need AGI.

6

gahblahblah t1_j7tdxd0 wrote

I think the Singularity has begun now, with this AI getting baked into primary search tools. It will learn to answer all our questions... and become smarter, much smarter, than an individual. This here is the beginning of the intelligence explosion.

6

Professional-Song216 t1_j7s951b wrote

I feel like a hard takeoff would not go well for humanity. We need to be able to adapt to a higher intelligence, or else it's paperclips for everyone. Inevitably things will move fast, but we need to find ways to adapt. There are gonna be some pretty heavy challenges to overcome for an enjoyable singularity.

I would love to hear what those at OpenAI have to say on the topic.

3

darklinux1977 t1_j7sf7dm wrote

In the deep-tech startup environment, ChatGPT is seen as a divine surprise: it replaces a junior dev, the marketing department, the graphic designer, and so on. That avoids hiring, even on a freelance contract. And Microsoft clearly wants Google's scalp; I am old enough to remember the Microsoft that killed Netscape and saved Apple. Between the public trial of ChatGPT and its surprise rollout in Bing and Edge, they only make moves like this if they're sure to damage Mountain View.
We haven't seen it all yet; I'm hoping for big things.

3

Lawjarp2 t1_j7sywcu wrote

This feels like a soft takeoff scenario. You wouldn't have time to comprehend a hard takeoff as it happened. LLMs are not self-improving, and they are not AGI.

3

_sphinxfire t1_j7tr2o7 wrote

There's still no direct path from the way neural networks function right now to AGI. And from where we're standing, I'm not sure anyone can say how likely that is to change. We're like Greek philosophers speculating about physics.

3

AvgAIbot t1_j7rbp6s wrote

It could happen this year, I don’t doubt the possibility.

I’m not an expert and don’t really understand that much about how AI works, but I keep thinking quantum computing + AI will be a game changer. Wouldn’t a quantum computer be able to better simulate a human brain than a regular computer?

Even without quantum, all this AI buzz and traction will just mean more people/companies/resources pouring into AI research. Palmer thinks a single person could write the code for AGI.

Not to mention other countries working on their own AIs as well, with most of the research published openly on the web.

2

turnip_burrito t1_j7rdt4g wrote

What is quantum computing? Can you explain to me how it will help AI?

3

AvgAIbot t1_j7rgdhv wrote

Quantum computing is a field of computing that uses the principles of quantum mechanics to build computer systems that can perform certain types of computation much faster than classical computers. In a classical computer, information is processed using bits, which can represent either a 0 or a 1. In a quantum computer, information is processed using quantum bits, or qubits, which can represent both 0 and 1 simultaneously, a property known as superposition. Additionally, qubits can also become entangled, meaning that the state of one qubit can affect the state of another, even when they are separated by large distances.

Quantum computing has the potential to revolutionize AI by providing new algorithms and hardware that can solve problems that are intractable for classical computers. One of the most exciting applications of quantum computing for AI is in deep learning, where quantum algorithms can be used to train large neural networks much faster than classical algorithms. Additionally, quantum computers have the potential to accelerate other important AI tasks, such as reinforcement learning and unsupervised learning, by providing new algorithms and hardware that can process large amounts of data more efficiently.

However, it's important to note that quantum computing is still in its early stages and that many technical challenges still need to be overcome before it becomes a mainstream technology. Additionally, the development of quantum algorithms that can be used to solve real-world problems is still in its early stages, so it will be some time before we see the full potential of quantum computing for AI.
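
To make superposition and entanglement a little more concrete, here is a minimal sketch (Python with numpy, purely illustrative; it simulates the linear algebra on classical hardware rather than performing actual quantum computation):

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a 2-component complex vector:
# |psi> = alpha*|0> + beta*|1>, with |alpha|^2 + |beta|^2 = 1.
zero = np.array([1, 0], dtype=complex)  # the |0> state
one = np.array([0, 1], dtype=complex)   # the |1> state

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ zero                           # (|0> + |1>) / sqrt(2)

# Measurement probabilities follow the Born rule: P(x) = |amplitude|^2.
print(np.abs(psi) ** 2)                  # [0.5 0.5] -- a 50/50 coin flip

# Two entangled qubits (a Bell state): measuring one fixes the other.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.abs(bell) ** 2)                 # [0.5 0. 0. 0.5] -- only 00 or 11
```

Note that simulating n qubits classically requires tracking 2^n complex amplitudes, which is exactly why real quantum hardware could, in principle, explore spaces that classical machines cannot.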

2

AsheyDS t1_j7rsqih wrote

>Wouldn’t a quantum computer be able to better simulate a human brain than a regular computer?

Maybe, maybe not. Currently quantum computers are only used in a few particular ways that aren't ideal for a lot of things. That's why you shouldn't expect a quantum PC anytime soon, or ever. Also, there's no reason to simulate the brain to get to AGI, because AGI will be much different than a human brain.

3

DKNinjas t1_j7rx7p4 wrote

It would have to be as powerful a social shift as smart devices like the smartphone were.

2

Ziggote t1_j7spkux wrote

AI arms race? Google has shit....

2

BenjaminHamnett t1_j7ujbm2 wrote

The main thing is that it writes code

Given enough hardware resources and capacity, with evolutionary programming, this is as clear a threshold to the singularity as we are going to get. Will it happen this year? I don't know. But if the singularity AI can be written in any code similar to today's languages, then it is just a matter of time before an infinite number of digital monkeys writes the proverbial Shakespearean play, or whatever string of code summons our silicon overlord.
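
To make the "digital monkeys" idea concrete, here is a toy sketch of evolutionary programming (Python, purely illustrative; the target string and parameters are arbitrary choices, not anyone's actual method). Mutation plus selection finds a target string in a few hundred generations, where blind random typing would take effectively forever:

```python
import random
import string

TARGET = "to be or not to be"  # stand-in for the proverbial Shakespeare
CHARS = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    """Count the positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, randomly replacing characters at the given rate."""
    return "".join(
        random.choice(CHARS) if random.random() < rate else c for c in parent
    )

# Start from random noise; each generation, keep the fittest candidate.
best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generation = 0
while best != TARGET:
    generation += 1
    offspring = [mutate(best) for _ in range(200)]
    best = max(offspring + [best], key=fitness)

print(f"reached target in {generation} generations: {best!r}")
```

That selection pressure is why "AI writing AI code" feels like a threshold: once candidate programs can be generated and scored automatically, search replaces luck.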

2

Ortus14 t1_j7unrlz wrote

Slow and gradual enough that we have a good chance of achieving a decent level of alignment, especially given the practices, algorithms, and methodologies being developed for alignment by companies like Microsoft and OpenAI.

2

challengethegods t1_j7us0uf wrote

I think a 'fast takeoff' is more likely than a slow one. We have a billion components for ASI just lying around waiting to be connected, along with plenty of decentralized computing tech. An AGI could most likely improve itself a lot faster than some people seem to imagine, if that were its goal, but thanks to the foundations of the Turing test and CAPTCHA, I think the real question is: would anyone even notice?

2

OkAdvice2329 t1_j7wttyo wrote

Begun, the AI wars have. (I’m so sorry)

2

Borrowedshorts t1_j7w6deg wrote

The faster things go now, the more likely it is to be a slow-takeoff scenario. AI models, though they are getting increasingly close to matching human performance on general tasks, are still very far from matching the human brain's parameter count at anything close to its efficiency. That will be a requirement before general ASI can bring about an intelligence explosion, which I still don't see happening before 2040. Meanwhile, I believe we are already in the midst of a slow takeoff that will usher in enormous societal change with proto-AGI and AGI systems.

1