Comments


Primo2000 t1_j4p9vy3 wrote

Maybe from a technological standpoint, but it will take time for humans to adopt it. There are long-term contracts signed and whole business models centered around people, so this will take some time. And when it comes to biology, medicine, etc., there are blockers such as the FDA that will slow the adoption of new medicines a lot. Still, I think we are reaching some kind of threshold point where things will really start to take off.

45

blueSGL t1_j4qrbqg wrote

> Maybe from a technological standpoint, but it will take time for humans to adopt it.

Look at what happened with ChatGPT. There was no adoption on-ramp; it was released to the world, and now educational institutions are scrambling to play catch-up.

You can bet that if a headline reads 'China cures [x]', where x is anything from aging to cancer to any much sought-after medical treatment, timelines will be shortened due to public pressure.

17

Smellz_Of_Elderberry t1_j4sh6om wrote

And it will be China and countries like it that cure xyz diseases, because their regulations aren't quite so... dumb.

1

blueSGL t1_j4smtax wrote

I've heard plenty said about the societal-level benefits of a one-party country, e.g. being able to plan ahead without fear that plans will be stopped or defunded when an opposing party comes to power. This has allowed for a lot more progress in planning and infrastructure than there otherwise would have been.

However, the downside of such a system is a lack of care for the individual and, at some level, the ends justifying the means.

The rules and guidelines for a lot of safety measures are written in blood: ways to make sure that dire mistakes can never happen again.

I feel there is a very real benefit to this at the speed everything moves at: drugs can be prescribed along with a thorough list of side effects and confounding medications.

I also feel that rules and guidelines need to be updated to reflect reality, e.g. as drug simulation becomes better, it should be relied on more. Regulations should change in lockstep with how easy it is to verify drugs in silico.

5

Smellz_Of_Elderberry t1_j4sotm9 wrote

I don't like China, just FYI.

>The rules and guidelines for a lot of safety measures are written in blood: ways to make sure that dire mistakes can never happen again.

Admittedly. But when you can get an experimental vaccine in less than a year, yet at the same time have to wait 10+ years to access new cancer therapies (even though cancer will kill you), it upsets me, primarily because normal people are unable to make their own decisions and take their own risks.

>I also feel that rules and guidelines need to be updated to reflect reality.

Often, what happens instead is that the rules and guidelines are set up to dictate reality. Immunotherapy is a fine example: its original pioneers were painted as quacks, and now it has become one of the most groundbreaking developments in cancer treatment.

Also, laws are very, very rarely repealed or removed. There are still laws that say you can't have a pie cooling on your windowsill, meant to prevent attracting bears, even though the bears in said locations were eradicated lifetimes ago. Adding sunset clauses to laws would be a great first step: make it so all laws need to be renewed after a set amount of time.

1

maskedpaki t1_j4qo6vh wrote

I keep hearing this "AI will take a long time to blend into civilisation" line.

I don't buy it. We already have capitalist financial markets. If an AI-driven growth engine gets 9% ROI and the market gets 6%, then all the world's capital gets channeled into the 9% growth engine, especially when it's general-purpose and can do everything, so to speak.

Capitalism will drive the use of these AIs the moment they are past AGI level. It's just a matter of reaching it.
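To make that concrete, here's a minimal sketch of how the return gap compounds (the 9% and 6% figures are just the illustrative numbers above, not real market data):

```python
# Illustrative only: compound two hypothetical annual returns
# on the same starting capital and watch the gap widen.
def compound(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

for years in (1, 5, 10, 20):
    ai = compound(100.0, 0.09, years)      # hypothetical AI-driven engine
    market = compound(100.0, 0.06, years)  # hypothetical market baseline
    print(f"{years:>2} yrs: AI {ai:6.1f} vs market {market:6.1f} ({ai / market:.2f}x)")
```

Even a three-point edge is roughly 1.75x the market after two decades, which is why capital allocation would shift fast.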

11

korkkis t1_j4s5cy0 wrote

Unhinged capitalism is a disease; it's toxic.

2

maskedpaki t1_j4s6o8j wrote

I don't disagree, but that wasn't the point of my comment.

I was trying to demonstrate that slow-takeoff scenarios post-AGI are unlikely.

2

korkkis t1_j4s9cbp wrote

Sure, fair enough. Anyhow, I think we'll use AI everywhere, like electricity (as it helps us automate our daily tasks), without any AGI yet. If AGI ever appears on this planet, it will happen like an explosion, but on a foundation that's already there (like accelerated artificial evolution).

1

TheRidgeAndTheLadder t1_j4r6jfv wrote

I'll just note that this prediction hinges on capitalism being basically unassailable by AI.

Could be a totally fair bet.

1

ManasZankhana t1_j4qa7ku wrote

Would companies that don't adapt just end up becoming less profitable and going out of business?

2

Electronic-Jello-633 t1_j4qbmn3 wrote

Yes, but again, over time. It can take years for some businesses to feel the effects of competition, and it can take some companies decades to go out of business because of a lack of adaptability.

6

AlwaysF3sh t1_j4rt99p wrote

A lot of obsolete jobs probably exist because of PDFs.

2

Ginkotree48 t1_j4p9y29 wrote

I am in the same boat. I've been following machine learning/AI for 5 years now, and it's picking up speed so quickly. This past year alone felt like the beginning of the climb.

39

berdiekin t1_j4s3sje wrote

I've been following it for close to 20 years, and then there's the acceleration we've been seeing in the last couple of years. Hell, even just this last year.

It's fucking mind-blowing.

5

PhilosophusFuturum t1_j4pg0c0 wrote

Remember that technology advances in an S-curve. First there is the current paradigm; then a major advancement causes rapid change. After that change has been explored, the new paradigm takes hold and progress slows down.

Right now looks very exciting, much like the late 2000s and early 2010s did. And the world did change drastically after that tech revolution. But the rest of the 2010s was somewhat sleepy in comparison to that era.

We are now on a massive upward slope due to huge advances in machine learning. The exponential beginning of the new paradigm has begun, and progress will likely stagnate somewhat in a few years. We will get way more advanced in the coming years than most people expect, but less so than many people on this sub would hope for.
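For what it's worth, the usual way to model an S-curve is a logistic function; here's a tiny sketch (the ceiling, steepness, and midpoint values are made up purely for illustration):

```python
import math

def logistic(t: float, ceiling: float = 1.0, k: float = 1.0, t_mid: float = 0.0) -> float:
    """S-curve: slow start, rapid middle, saturation near the ceiling."""
    return ceiling / (1 + math.exp(-k * (t - t_mid)))

# Near the midpoint growth looks exponential; far past it, progress flattens.
for t in range(-6, 7, 2):
    print(f"t={t:+d}: progress = {logistic(t):.3f}")
```

The point being: from inside the steep middle section, it's hard to tell how close the flattening is.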

22

Redditing-Dutchman t1_j4pjhaf wrote

This. On a long time scale it looks like a neat exponential line, but in reality there are many plateaus.

Remember how we had the 3D-printing hype, and everyone on Reddit was saying that in 5 to 10 years everyone would be printing all their own plates and cups? Then it turned out there are quite a few issues with getting to that last step (ultra-fast printing with incredible precision is still very far off, for example, especially at a decent price).

As these AI models get better and better, I personally think we will hit a gap where we just can't get to the next step (for example, reliable, well-reasoning AI that can be used for purposes where failure is costly or dangerous).

17

ohimjustakid t1_j4rxwe1 wrote

>Remember how we had the 3D-printing hype, and everyone on Reddit was saying that in 5 to 10 years everyone would be printing all their own plates and cups?

https://www.cbc.ca/news/canada/3d-printed-guns-canada-increase-1.6708049 Yeah, it's not like we have literal warehouses full of 3D-printed gun parts, with the ability to print receivers in metal-filled plastics. Even then, the plastic receivers can literally pump out thousands of rounds before they begin cracking. Not like any legit revolutionary movement could use that, though, let alone criminals!

It's not like China happens to have a third of the AI companies in the world and is set on its 2030 goal. And 2024, the PRC's 75th anniversary, coincides with a US presidential election that is bound to be the most polarizing yet, which is cool, because it's not like foreign enemies take advantage of this polarization for their own autocratic ends.

Nor are there complete rogue states like North Korea or Iran, which have in the past spent millions on counterfeit production, nuclear weapons, and cyberattacks.

It's not like the Manhattan Project was so top secret that not even the US Congress knew about it until the bombs dropped.

AI and ML are just like NFTs: sure, they get a lot of traction now, but it's not like they can be used in practically any industry, let alone by big corps like Amazon, Tesla, and Microsoft, who just value their human labor WAY more than cheap, efficient computer automation!

We should all fully expect an AI winter, because this stuff is just so gimmicky, and it's not like the entire globe is getting in on the tech to keep up with an ever-expanding field of cybercrime!

Too bad there isn't a way to learn all this stuff for free before forming an informed opinion on the carrier-pigeon service we're using... OH WAIT.

2

TopicRepulsive7936 t1_j4rfxul wrote

>Remember that technology advances in an S-curve.

Single technologies do, not the whole of technology.

1

themoonpigeon t1_j4qa39v wrote

I think the bottleneck is the speed at which society CAN change. Put differently: how fast we can turn the wheels and put into action the technology and innovations offered to us by advances in AI and quantum computing. It will be a feedback loop, but it will likely take a bit of time to get off the ground. That said, I think your timeline could very well be accurate, though I tend to think it'll land toward the latter end.

7

LoquaciousAntipodean t1_j4px0ml wrote

I suppose it would depend on which 'we' the question is addressing. Certainly it seems like most 'average' people are still relatively unaware of how fast these kinds of things are advancing.

I think that however much AI actually improves from here, we've definitely reached a point where it is going to start rapidly changing the world, if only because more and more people are rushing to experiment with AI in all their myriad creative ways.

5

No_Ninja3309_NoNoYes t1_j4puw4k wrote

Deep learning started to work in 2012 thanks to GPUs. It has been a decade, and I don't expect the trend to continue into 2030 unless something changes. But we will be left with a diverse ecosystem of AI services. This will create more billionaires, but even more paupers, unless we manage to democratize AI: unless it becomes open source and easy to use for everyone on Earth.

3

Trumaex t1_j4si4pk wrote

Unfortunately, it is. I would even risk saying that Kurzweil's timeline might be wishful thinking.

Also, stuff moves fast, and change is faster than it used to be. But hype is even faster than actual change in the world. One has to consider that digital spaces can change quite fast, but the physical world is very slow to change...

1

botfiddler t1_j4pv3mh wrote

Projecting fantasies like complete post-scarcity or a global UBI is pointless. An AGI would be like adding lots of smart humans to the planet; anything less than AGI will give us the tools to do more ourselves. That's it. We'll still have climate disruption and resource scarcity, for example. Political gatekeeping also won't stop, not least because of the aforementioned things.

0

Professional_Job_307 t1_j4s9byq wrote

It doesn't stop at AGI. Since AGI is as smart as a human, and humans created AGI, I see no reason why AGI wouldn't be able to improve itself until it eventually becomes ASI.

1

botfiddler t1_j4v86oe wrote

I didn't claim it would. It still doesn't work like magic, though, and it won't happen overnight.

1

brilliancemonk t1_j4q0dp9 wrote

We'll reach a plateau soon. Progress is not linear.

−3

Evil_Patriarch t1_j4qenf9 wrote

Funny, I see us as just now getting off a plateau that we've been stuck on for a bit over a decade.

11