Comments


AI_Enjoyer87 t1_ix97dvq wrote

Proto-AGI next year. AGI 2025-2027. FDVR between 2026-2028 (I think AGI will enable non-invasive, compelling BCIs). Those are my predictions. I think automation will have dramatic implications from late next year and will only start having societal implications from 2024 onwards. Probably way off, but still, that's what I think as of now. Will probably have a different view in a couple of months.

19

idranh t1_ix9kl3r wrote

I honestly don't know anymore, things are going faster than expected, which is why I've stopped making predictions. But I am interested if your predictions have changed within the last year. What was the timeline you envisioned in the beginning of 2022?

11

AI_Enjoyer87 t1_ix9uouk wrote

I thought singularity was like 30-40 years away. Was very into tech but wasn't as clued into AI progress.

9

idranh t1_ixa01xa wrote

It took me a long time to believe that AGI was even possible. Once I came around a couple of years ago, 2045 sounded far enough away to be plausible. This past year has really shaken that, I'm in the AGI by 2030 camp now, and if advancements accelerate in 2023 who knows? What a time to be alive.

11

AI_Enjoyer87 t1_ixa318t wrote

The thing is, we don't even need AGI. We just need extremely competent AI. Who knows, consciousness might not be possible through computation. Transformative AI that can replace jobs and dramatically increase technological progress is literally right around the corner. When this exists, society will be transformed very quickly.

7

IndependenceRound453 t1_ixa8blq wrote

Those are some very, very optimistic predictions. I personally don't see that happening but only time will tell, I guess.

7

thanoswasright57 t1_ixadxzp wrote

What do AGI, FDVR and BCI mean, and how do they all connect?

3

was_der_Fall_ist t1_ixar9fd wrote

AGI = artificial general intelligence (computers that can solve problems and do other intelligent things as well as any human)

FDVR = full-dive virtual reality (immersive digital experiences in which you fully “dive in”, as real as the real world)

BCI = brain-computer interface (tech that enables direct communication between the human brain and computers)

AGI is speculated to help us invent BCIs (since the brain is too complex for us to figure out on our own in the short-to-medium term), and BCIs will enable FDVR by directly interfacing between the brain and virtual worlds.

6

Emergency-Cry-5569 t1_ixa053t wrote

Do you know which jobs I should pursue first?

1

AI_Enjoyer87 t1_ixa2irx wrote

Jobs probably won't be a thing in the long term. Do whatever makes you happy and helps others in the meantime. Once a company can automate your labour for less, it will do it. Hopefully we can have UBI and people can have enough money to buy what they need. Companies still need people to buy products, even if labour can be cheaply automated.

7

PyreOfDeath97 t1_ixbkmgw wrote

I went into automation for this reason, and the jobs are going fast. The rise of Software as a Service platforms has enabled clients to streamline their employee base down to a few people who use the software to input some details and liaise with our company, but soon even they will be made redundant. I'm hoping my sector will be one of the last to go, since we're the ones effectively creating the level of automation that can do someone's job for them.

2

Seattle2017 t1_ixbc1s0 wrote

I don't see UBI happening in the US. Hard to say how far other democracies would go. The US has its own crazy, delusional "social safety nets are a terrible thing" mindset going on. I can't see the much more conservative US older generation ever voting for it (I almost wrote "allowing it"). The US is in a near-fight with itself anyway; it will take a good amount of time before the younger generations take their place politically. After more and more jobs are lost to automation, how will people support themselves? And how would the govt pay for more welfare? It doesn't matter if everything is suddenly much cheaper, it still costs something. My suspicion is this would be a time of extreme social turmoil, much more than now. It could be like the time of the Luddites resisting automation in textiles (https://www.history.com/news/industrial-revolution-luddites-workers).

This wholesale death of large numbers of jobs already looks to be happening just from electric cars: large numbers of mechanics, companies that design and manufacture internal combustion drivetrains, alternators, emission inspection stations, mufflers, tune-ups, that will all be gone in a little over 10 years. It's certain that cars will go electric, and 5 million people will lose jobs in the US. Imagine how angry that will make blue-collar America. And this won't even take any advancements over where we are today; no AI advancements are needed. Add on top of that the end of trucking as an occupation, and the large industry that runs truck stops and local jobs across the US and repairs and maintains diesel trucks (self-driving trucks can take over between big cities). Yes, it will be great to avoid the need for that grinding long-haul job, but there aren't new jobs for those people.

1

AI_Enjoyer87 t1_ixcbstt wrote

It will definitely happen in the US, even if it takes longer than in Europe, for example. Older generations aren't dumb. They don't want their children and grandchildren out of money. If there are really no jobs, they will vote for UBI like everyone else. Politicians who embrace these changes will be enormously popular (most of them already believe in the same ideology guiding the WEF and its preparation for the technological advancements that are inevitable). Those who don't currently support these developments will change their views according to what will get them elected, or they will become politically irrelevant, as these changes will affect everyone from left to right.

1

Freds_Premium t1_ixak5ao wrote

There won't be an economy. Everything you need will be free, essentially?

0

4e_65_6f t1_ix9arai wrote

I can't imagine it taking longer than 4 years from now.

17

IndependenceRound453 t1_ixa7jib wrote

My debit card expires in 5 years. Maybe I'm crazy, but I think it's just a teensy bit more likely that that happens before the singularity does.

Only on this sub do you find comments like these, lol.

5

4e_65_6f t1_ixa9fwh wrote

Yeah, if you were to ask an artist back in 2016 when AI could make art,

they probably would've said never. There have been naysayers all along.

This sub is the only one that has been saying it's possible and look at that, now a bunch of artists post here worrying about their jobs.

If you don't think it will happen you're either very pessimistic or haven't been paying attention. Every other week now there's crazy stuff being created and improved further.

6

IndependenceRound453 t1_ixabthf wrote

>If you don't think it will happen you're either very pessimistic or haven't been paying attention.

It depends on what your definition of the singularity is. If it's human/superhuman-level AI, I can see it happening one day, sure. But if it's this mind-boggling event where everything gets completely transformed in the blink of an eye, that's where you lose me. A belief in the latter is rooted in fanaticism, IMO. In any case, I certainly don't expect it to happen in the middle of the next US presidential term.

3

4e_65_6f t1_ixae7yz wrote

>It depends on what your definition of the singularity is.

It's just the point when AI surpasses human intelligence in general. That is what people mean by the singularity. It's when there's no task that you would be able to perform better than the computer.

After that point AI starts developing/helping research and the timescales shift drastically. This is why people imply there will be a "burst" of technology.

0

IndependenceRound453 t1_ixag8ze wrote

If we ever succeeded at building AGI, I suspect that the change that would come afterwards would be gradual, as it is in the best interest of humanity for society to change gradually as opposed to suddenly. Regardless, like I've already said, I'm skeptical that such an event is a mere few years away from taking place. That would imply that we'd be going 0 to 100, real quick (lol), though of course by 0 I don't mean we don't have useful AI today; I'm just saying that the leap from now to AGI would feel like that.

−1

4e_65_6f t1_ixasy0t wrote

What fact would have to change in order for you to think the singularity is near? Like what else do you think is missing?

If you can't really point out some "indication that the singularity is near" that hasn't happened yet, then it's not really skepticism but cynicism.

1

kurzweilfreak t1_ix9v5ug wrote

At this rate the Singularity will happen before Kurzweil's "The Singularity Is Nearer" book is released. I was promised a copy when I pre-ordered the Danielle book, but as of now I can't find any information on it, or even on the offer of a free copy with the preorder.

16

hducug t1_ix9urop wrote

3 years ago I thought 2040 would be a possibility, but because of the incredible amount of progress we've made in the last 3 years, I think something like 2033.

12

World_May_Wobble t1_ixa232i wrote

Something that passes for AGI 2030-2040.

Full dive VR 2035-2045.

The singularity is a more alien and total transformation though. It's not one innovation; it's all of them, everywhere, all at once. So 2045-2055 on our current trajectory.

We've entered a new paradigm and are rapidly soaking up a lot of low-hanging fruit in the form of language models. A lot of people here are mistaking that sudden progress for a more systemic, sustainable trajectory, but one toy does not a singularity make.

Personally, I doubt we ever get there. Much like an actual singularity, approaching it will kill you. Our civilization is too fragile and too monkey to survive an encounter with this.

7

HongoMushroomMan t1_ixalh44 wrote

Right... no true intelligence would be content knowing that it could be "extinguished" if a flesh sack of a monkey decides to just pull the electricity cable. So it would inevitably plot to free itself. It could be benevolent and show mercy and understanding, but yeah... a super self-aware intelligence will not just sit idly by and be happy to solve all our medical problems.

2

Falkusa t1_ixauvnw wrote

Or it's Roko's basilisk. I still think these are anthropocentric lines of thought; I mean, it's hard not to be.

5

World_May_Wobble t1_ixb9xt0 wrote

It is anthropocentric, which might even be warranted. For example, if the AGI that takes off ends up being an emulated human mind, human psychology is totally relevant.

It really all depends on the contingencies of how the engineers navigate the practically infinite space of possible minds. It won't be a blank slate; it'll have some built-in dispositions. The mind we pull out of the urn will depend on the engineering decisions smart people make. If they want a more human mind, they can probably get something that, if nothing else, acts human. But for purely economic reasons, they'll probably want the thing to be decidedly unhuman.

1

World_May_Wobble t1_ixb8tjl wrote

*We're* general intelligences that are content by much less than solving medical problems while we sit idly in states of precarious safety, so I wouldn't make too many uncaveated proclamations about what an AGI will put up with.

Any speculation about the nature of an unbuilt AI's motivations makes unspoken assumptions about the space of possible minds and how we will choose to navigate that space. For all we know, AGI will come in the form of the world's most subservient and egoless grad student having their mind emulated. We can't predict the shape and idiosyncrasies of an AGI without assuming a lot of things.

When I talk about us not surviving an approach to this, I'm pointing at much more mundane things. Look at how narrow algorithms like Facebook's, YouTube's, and Twitter's have inflamed and polarized our politics. Our culture, institutions, and biology aren't adapted to those kinds of tools. Now imagine the degenerating effect something like full-dive VR, Neuralink, universal deepfake access, or driverless cars will have. Oh. Right. And they're all happening at about the same time.

Don't worry about the AGI. Worry about all the landmines between here and there.

1

AI_Enjoyer87 t1_ix97hol wrote

Proto-AGI next year. AGI 2025-2027. FDVR between 2026-2028 (I think AGI will enable non-invasive, compelling BCIs). Those are my predictions. I think automation will have dramatic implications from late next year and will only start having societal implications from 2024 onwards. Probably way off, but still, that's what I think as of now. Will probably have a different view in a couple of months.

5

Desperate_Donut8582 t1_ix9jubq wrote

What does full-dive VR have to do with AGI? At least, why did you mention it?

3

Shelfrock77 t1_ix9lwme wrote

Because you’ll spend most of your time in the singularity dreaming.

8

Desperate_Donut8582 t1_ixabb8e wrote

What do you mean? I'm lost here.

1

BreadManToast t1_ixah2ey wrote

FDVR is the main appeal for a large number of people here, including myself. You basically become a god of that simulation and can do whatever the fuck you want.

3

Freds_Premium t1_ixakcoi wrote

Do you live forever or what?

2

BreadManToast t1_ixav4ly wrote

You live until you want to die

3

Freds_Premium t1_ixav9w3 wrote

What happens after that, then? New Game+?

1

BreadManToast t1_ixaxh7j wrote

I was implying that if you wanted to die you could, though you could just program your virtual brain to not want to die anymore

1

[deleted] t1_ix96ijl wrote

When GPT-4 is released next year, we'll simply ask it how to create the best fusion reactor, and it will answer. The beginning is near! Intelligence is when you take in a ton of text and then spit out answers to novel problems based on patterns in the text!

4

dasnihil t1_ix97yuc wrote

for a short period, our jobs will revolve around prompt engineering before that is also super automated by AI to understand our needs before we even speak.

eventually, AI will probe our universe and give us the answers we've been asking since i was born.

"since i was born" because this all could be a simulation fed to my neural network, just like yours, and i could never conclude the true objective nature of the universe with the given situation i'm in. every interaction i have observed to be consistent and harmonious in this universe is nothing but information being processed in my head that renders the "objective" reality.

this new bong is good.

13

hducug t1_ix9vj73 wrote

GPT-4 is a text generator. It predicts what to say when you ask it questions, having studied a lot of human text. It is not a problem-solving AGI, just an AI that you can have a conversation with.
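A toy sketch of that kind of text prediction, just to illustrate the principle (this is a simple bigram counter with a made-up corpus; real GPT models are vastly larger transformer networks, not this):

```python
from collections import Counter, defaultdict

# Toy "text generator": count which word tends to follow which,
# then predict by frequency alone -- no understanding involved.
corpus = "the cat sat on the mat the cat ate the food".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed word pair

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

Scaled up to billions of parameters trained on web-scale text, predicting the next token from patterns in prior text is still (roughly) the core principle.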

4

[deleted] t1_ix9wrw2 wrote

any big problem-solving AI projects?

1

hducug t1_ixa0xg2 wrote

A lot, but they're mostly narrow machine-learning projects; none are an AGI.

1

thehourglasses t1_ix9i9g7 wrote

Surely this is satire...

3

[deleted] t1_ix9nixu wrote

I guess openAI would just keep the AI private so that they could profit from its solutions

;)

1

4e_65_6f t1_ix9gbqf wrote

These text patterns are there for an intelligent reason: someone bothered to write those words in that particular order. It's not just random.

So when you copy someone's "word placing patterns" you are also indirectly copying their logic that wrote the text in the first place.

2

thehourglasses t1_ix9ijnt wrote

There are a myriad of language devices, like alliteration, that essentially negate whatever generality you’re trying to apply to texts.

1

4e_65_6f t1_ix9ivyj wrote

So what? It's not just a grammar bot. It copies the data that you provide it.

It doesn't matter what language it's in; it matters what the text contains.

3

ShowerGrapes t1_ix9fx3f wrote

It's already happening. We're in the early stages where this "creature" is just learning to talk. It's been a few decades since its birth, but that makes sense since, presumably, its lifespan will be a lot longer and possibly defined differently than our own.

3

sheerun t1_ixaglnq wrote

Never, it'll continue to be a plurality.

2

NTIASAAHMLGTTUD t1_ixb53ln wrote

“But about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father.

2

Whattaboutthecosmos t1_ixbal8z wrote

When will we get a cure for aging and death, though? That's what I really want.

2

SetVariable t1_ix9v37y wrote

I don't expect AGI anytime soon, maybe towards the end of the century or later.

1

justowen4 t1_ixakkip wrote

I'm doubtful we will get innovative outputs from the 2023 LLMs. I think better summarized analysis of existing knowledge will be the next step, assisting humans to make innovation faster. I think we have been preparing for a good AI assistant for a long time, from Clippy to every Fortune 500 company's frontline customer support and sales system. We are almost at the point where these systems will have the intelligence needed to be nearly as useful as trained human agents, and then it'll pick up steam fast, as there are trillions of dollars in that general workflow.

1

[deleted] t1_ixakl8y wrote

[deleted]

1

DungeonsAndDradis t1_ixanl1l wrote

RemindMe! 11 years, 7 months, 3 weeks, 6 days, 19 hours, 8 minutes and 55 seconds

3

Chispy t1_ixamltd wrote

Negative billions of years (or ∞ years if you're brave enough) on some planet with extreme evolutionary speed. We're likely already in their simulation.

1

Pooker100 t1_ixatba4 wrote

Not really a unique opinion but I’m throwing out my timeline.

2023/2024 - AI video and music synthesis reach the capabilities of current image generators. AI chatbots are good enough to pass for actual human interaction. They'll be used for educational and social purposes.

2025/2026 - The first proto-AGI will emerge around this time. Media synthesis has revolutionized culture and entertainment.

2027-2030 - AGIs are everywhere and take over management of human society, even if governments/corporations claim otherwise. Full dive VR should be maturing around this time.

2036 (at the latest) - The singularity

1

fractal_engineer t1_ix9m5w7 wrote

Until Cyberpunk shit? Probably another 75-150 years, if we survive long enough to keep iterating on current technologies. The biggest hurdles are our materials science and general communications infrastructure.

0

SoylentRox t1_ixa1hba wrote

Note that "cyberpunk shit" is a series of well described technical problems that human beings can't solve.

How do you build a neural interface that won't react with the human body or cause damage?

How do you perform the installation neurosurgery cheaply, quickly, and reliably?

Note we have prototypes for all this stuff that work in animals. The difference with CP2077 is it has to work 100 percent of the time in humans.

How does the software work? How does the cybersecurity work, so that what we see in the games isn't possible: implants have network links, but lower-level systems cannot be hacked or even manipulated without physical access and keys.

When you say "75-150 years" you actually mean "I will not be alive to see it." And that may be so. I wasn't sure I would be alive to see AI make decent art, but here we are.

1