Submitted by AdditionalPizza t3_y98hxs in singularity

This (long) post may be controversial, and I'm sure many will disagree.

We know humans have a hard time imagining technological progress at an exponential rate. Go talk to anyone in the general public, explain AGI to them, and they'll say it's 50-100 years away minimum. But I think even those of us who have committed to thinking exponentially about the rate of technological progress are still guilty of linear thought.

Imagine a black hole. The singularity is the center; we can imagine that is the infamous 2045 date (or any date you predict, it doesn't matter). That's most likely the moment AI is able to self-improve at such a rapid pace that the technology it produces is quite literally impossible for us to predict before it happens. We can try to predict when it happens, but the very nature of the singularity will result in something we cannot predict, so we will leave that alone for this discussion. As observers outside of the black hole, we can witness the event horizon as the point where time appears to stop (to us, the observers). While this is just an analogy and not totally relevant, it does help exhibit the inability of humans to think exponentially. I'm arguing this is the case even for those of us in this sub who claim we can think exponentially.

To see our own mistakes when predicting exponentially, we need to pick a date in the future and work backwards. The exact date doesn't really matter, but for the sake of argument let's choose 2025, because it's convenient and things align nicely.

I choose 2025 because 2.5 years prior to it (i.e. right now, give or take) we cannot predict with much certainty what Large Language Models will be even remotely capable of. We have no idea what scaling is truly going to produce at this point. Some are saying AGI-level intellect is possible with an LLM; I'm not going to debate that though.

2020 (-5 years): we had LLMs but thought we needed something else. In 2.5 years we figured out scaling most likely works. There could still be more to it, but at this point we have no way of knowing for sure.

2015 (-10 years): we were in the machine learning game, with no real idea yet of the implications LLMs could bring to the table. We thought true artistry would be the last endeavor an AI could conquer.

2005 (-20 years): we were mostly still pre-smartphone, really getting a sense of how the internet was changing society. We could predict 20 years into the future based on an exponential curve, but we didn't know how or what.

1985 (-40 years): who cares, whatever.

The point being: as we get further up the curve of exponential progress, I believe we're at a point where it's important to factor short-term exponential growth into our thinking far more. When we say 10 years from today, we probably really mean 5. Often when we predict 5 years from today, it's just a cop-out; we are protecting our linear instincts and not fully leaning into the exponential rate of progress. Our guts tell us 5 years, but it's likely 2.5 years. In 2025, what feels like 5 years TODAY (2022) will be 1.25 years. It's much easier to accept the exponential rate of progress with longer timeframes; what's really the difference between 20 years and 30 years? But we get very protective of our instincts when it comes to the short term (<10 years). An exponential function doesn't care what size the number is.

I'm not saying at this rate the singularity will come sooner. I'm saying 5 years of progress in 2020 is equal to 2.5 years in 2022. Or 10 years of progress in 2015 is 2.5 years of progress in 2022.

Of course, I'm not saying these dates and timeframes are set in stone. I just chose 2025 based on what I listed above; an argument could be made for any date, really. But the point stands: our brains try really hard to stop thinking exponentially when it comes to the upshot of an exponential curve. I do think the real world can slow a lot of things down, like logistics/manufacturing etc. But I think by 2025 the only thing holding a takeoff back will literally be humans and our slow bodies. Everything else will be ready to go, waiting for us to get it on the shelf.


>I'm not saying at this rate the singularity will come sooner. I'm saying 5 years of progress in 2020 is equal to 2.5 years in 2022. Or 10 years of progress in 2015 is 2.5 years of progress in 2022.

So if those numbers aren't set in stone, then wtf am I talking about here?

With big tech further tackling things like Codex, there will be a cascading effect across every single sector that involves information technology. Programmers are the foundation of IT. Forget the argument of whether or not programming will be fully automated anytime soon; it doesn't matter. What matters is that programmers being 10% more efficient means every other industry reaps that acceleration of progress. Skip ahead another couple of years, or maybe only a couple of months, to when programmers are doubling their productivity: now every other industry is receiving that 100% boost through new, more efficient software and stronger AI. I believe the tipping point for Transformative AI is happening right now, within the year. A toy model of that compounding is sketched below.
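A toy model of the cascade (the per-cycle rate is invented for illustration; nothing here is a measured number):

```python
# Toy model of the cascade: each generation of better tooling is itself
# built with the improved tooling, so gains compound. The 10% figure is
# the post's hypothetical, not a measured number.
productivity = 1.0
gain_per_generation = 0.10        # programmers 10% more efficient per cycle

for generation in range(8):
    productivity *= 1 + gain_per_generation

print(f"After 8 tooling generations: {productivity:.2f}x")  # ~2.14x, a doubling
```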

What does this mean exactly? Transformative AI (TAI) is so much more important than anyone is giving it credit for. TAI will most likely lead to AGI. TAI takes us to the event horizon and beyond. We are on the cusp, and this sounds overly optimistic, I know. Those waiting for AGI and wishing it would come faster: you don't need to. TAI is already here. It began with LLMs, and the progress between 2025 and AGI will be greatly accelerated because of TAI. The 2025-to-AGI stretch is already going to blow our minds. It probably won't be the sci-fi stuff everyone here is all about when talking about the singularity and such; it will be the world-shaking, policy-changing, employment-shattering stuff.

The disruptions will start coming within 2.5 years (2022 time).

223

Comments


phriot t1_it486pz wrote

I think you're correct in thinking that AI disruption of our lives is here, and will only ramp up in the coming years - even without getting to AGI.

That said, I'm very confident that the shape of our lives will be very similar to today in 2025. Most people will still have jobs. Most people will still carry smartphones. Most car owners will still be the ones driving. Etc. (And you do say something like this towards the bottom of your post.)

Basically, even if next-gen narrow AI expert systems on better hardware are exponentially better by 2025, the timeframe is still so short as to appear linear with respect to impact on people's lives.

75

Down_The_Rabbithole t1_it6k83u wrote

The real issue I see, even in places like r/singularity, is that people don't update their worldviews quickly enough in response to new developments.

For example, the papers around large transformer models released in the last 6 months have completely changed the automation timeline and the outlook on which areas are going to get automated first.

Yet people on r/singularity largely still have this now-outdated view that careers like driving, restaurant work, mining and factory work will be the first to be automated away.

In reality it's digital intellectual work that is going to be automated away first. Digital artists, programmers, system admins, lawyers, clerks: basically everyone who sits in an office manipulating data in some way or another through a computer will be automated away in the first round of automation.

As a software engineer of close to 20 years myself, with a firm grasp of modern AI systems, the path by which the entire software engineering field will be automated away in just the next 5-10 years is clear as day. Yet a lot of the people I work with, and even people on r/singularity, just flat out reject this possibility. Partly because it hits their ego, so it's easier to go into denial; but also because they already had their views set on other fields being the first ones to go, and a new development that rapidly changes that view needs some time to settle before people come to accept it.

I see constant irrational rebukes for why programmers "are never going to be replaced," like how coding is just a small part of programming, not recognizing that the entire client specification -> product ownership -> problem solving -> coding -> delivery pipeline of the software engineering industry is at risk of being automated. We're not talking about mere code completion here. We're talking about AI being better able to identify and specify the needs of the client in question, and better able to provide a solution in a shorter and, more importantly, more effective way.

Humans won't be able to compete in the digital field anymore, and physical laborers, especially underpaid ones like janitors, cleaners and miners, will be the last jobs to be automated away, not the first.

The next couple of years are going to shake most of the developed world to its core as the mainstream starts coming to this realization. Software engineers and other highly educated professionals aren't ready to face this truth, on this subreddit of all places, let alone the vast majority of regular people.

I predict we're going to have a very rocky ride, as people won't be able to accept this when we most likely start to see the very first signs of intellectual labor being replaced as early as next year, 2023.

47

AdditionalPizza OP t1_it6v5e7 wrote

This is exactly what I'm saying. It's time people stop making excuses based on how we were thinking 5 years ago.

This is happening, and it's happening now. We all waited for this, it's just happening in a way we didn't expect. But in hindsight this makes so much more sense. The digital jobs should be the first to go. Yes they take high human skill, but we should've had the foresight that high human skill != high AI skill. AI are born digital. They are masters of intelligence.

With that being said, robotics is going to feel this effect as well. I think we can agree that when we say intellectual jobs go first, it's not first by a mile; they're first on a scale of months to a year or two. Implementation of robotics in the real world is a challenge we can't really predict at this point, though.

23

visarga t1_it6lzdv wrote

> physical laborers, especially underpaid ones like janitors, cleaners and miners, will be the last jobs to be automated away, not the first.

Automation is coming for everyone: artist, programmer, office worker or physical laborer.

I guess you haven't seen this model: From Play to Policy. With just 5 hours of the robot's free play, they trained a model to control a robotic arm in a kitchen environment. In other words, learning to act (decision transformers) seems to work really well. I expect robotic dexterity to improve quickly; it's just 3-4 years behind text and image.

Related to this, I think we'll see large models trained on the entirety of YouTube, learning both desktop skills (like automating computer UIs) and robotic skills (like carpentry, cooking and managing a house environment). Massive video models have been conspicuously missing, probably because they're too expensive to train yet, but look for them on the horizon to start popping out.

There's a whole wealth of information in audio-video that is missed in text and image, exactly the kind of information that will automate the jobs you think are safer. And besides video, the simulation field is ramping up with all sorts of 3D environments to train agents in.
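For anyone unfamiliar with the framing, here is a minimal sketch of the decision-transformer idea: cast control as sequence modeling over (return-to-go, state, action) tokens. Dimensions are arbitrary, and this illustrates the concept only, not the actual From Play to Policy code:

```python
import torch
import torch.nn as nn

# Hypothetical minimal sketch: interleave (return-to-go, state, action)
# triples into one token sequence and train a causal transformer to
# predict the next action.
class MiniDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=2, n_heads=4, max_len=64):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)         # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(3 * max_len, d_model)  # one position per token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)        # predict the next action

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T = states.shape[:2]
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)                        # interleave r_t, s_t, a_t
        tokens = tokens + self.pos(torch.arange(3 * T))
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.backbone(tokens, mask=causal)
        return self.head(h[:, 1::3])                   # read actions off the state tokens

model = MiniDecisionTransformer(state_dim=8, act_dim=4)
out = model(torch.randn(2, 10, 1), torch.randn(2, 10, 8), torch.randn(2, 10, 4))
print(out.shape)  # torch.Size([2, 10, 4])
```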

12

DungeonsAndDradis t1_it6vo7b wrote

When I need to do something around the house, I pull up YouTube. There are thousands of videos on every home maintenance task. When we can get AI trained on YouTube tutorials, we'll have robots making coffee in no time.

11

visarga t1_it8on9t wrote

The recent Whisper model is rumoured to have been created to transcribe all the audio from YT into text in order to feed the next iteration of language modelling.
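If true, the pipeline would be simple. A minimal sketch using the open-source whisper package (file names are placeholders):

```python
# Minimal sketch: transcribe downloaded audio into a text corpus for
# language-model training, using OpenAI's open-source Whisper.
import whisper

model = whisper.load_model("base")                 # tiny/base/small/medium/large
result = model.transcribe("downloaded_audio.mp3")  # returns a dict with "text"

with open("lm_corpus.txt", "a", encoding="utf-8") as f:
    f.write(result["text"] + "\n")                 # append transcript to the corpus
```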

6

AdditionalPizza OP t1_it6vmqy wrote

>Automation is coming for everyone: artist, programmer, office worker or physical laborer.

I won't speak for them, but personally when I say intellectual or digital jobs go first, I mean they go first, with robotics not far behind. Labour jobs will inevitably need more logistics to replace, as it's not just software a company can install. I won't pretend to be able to predict that, but I think it won't be much longer after there's already an unemployment crisis on our hands. It won't really matter at that point.

I don't think full automation of everything will happen that quickly, but it really doesn't need to be full automation. It needs to be 10 to 15% of the workforce jobless with no skills outside of their extinct domain.

5

phriot t1_it6zsrz wrote

>personally when I say intellectual or digital jobs go first, I mean they go first, with robotics not far behind.

I work in Biotech, and this is the major reason I think I'm going to try and stay at the bench as long as possible. As soon as I'm able to do most of my work from home, like writing reports, and/or most of my time is spent managing others, that's when I feel like my job is at major risk in the 5-10 year range. (I get the point of this post, that maybe capability will come quicker than I think, but I'm also pretty confident that there will be a transition period where AI will augment, rather than replace, knowledge workers.)

At least my wife is a teacher at a fancy preschool. I am fairly confident that rich people will want humans teaching their kids for longer than other professions will last.

10

brosirmandude t1_it7auks wrote

Yeah I wouldn't have thought this a year ago but my partner is a librarian and probably has way better career security than I do.

8

AdditionalPizza OP t1_it734wc wrote

>I'm also pretty confident that there will be a transition period where AI will augment, rather than replace

Yeah, don't get me wrong, I don't even mean full automation at first; I mean automation that increases efficiency. Job losses will become more and more commonplace starting in 2025, all while LLMs are assisting in breakthrough after breakthrough. We don't need full autonomy of the workforce, just enough that we can't expect our current system to work at all.

5

Redvolition t1_it7zfhk wrote

I believe paper-publishing scientists will be amongst the last to be replaced, although the lab technicians and assistants doing less innovative work will go far sooner. By the time AI can publish scientific papers to the point of replacing scientists themselves, that's it, we've already reached the singularity.

Problem is, this type of innovative work likely requires a minimum of >120 IQ, which is 1 in 11 people. If you don't reach that cutoff, the remaining options will mostly be traditional manual jobs requiring <100 IQ, or those that benefit from physical human interaction, such as therapists and prostitutes. Basically, the middle-class, middle-cognitive-demand jobs for people between 100 and 120 IQ will be eradicated.

If it is difficult to monetize a career in entertainment now, it will be an order of magnitude or two harder in the future, due to competition with AI generators and performers.

Even assuming you have the AI to control robots, the raw materials and fuel to power them cost a lot, and manual laborers are amongst the cheapest workers. So as long as a robot costs more than 4 or 5 years' worth of wages, which adds up to 150k to 300k USD in America, plumbers, electricians, and housekeepers will keep their jobs.

We are heading towards a society in the 2030s being stratified as such, in order of wealth:

  1. Capitalists (~1%)
  2. Entertainers and Performers (~0.05%)
  3. Innovation STEM jobs (~5%)
  4. Management and administration (~5%)
  5. Physical interaction jobs (~5%)
  6. Manual labor jobs (~30%)
  7. UBI majority (53.95%)

4

visarga t1_it8pdf2 wrote

> It needs to be 10 to 15% of the workforce jobless with no skills outside of their extinct domain.

The number of job positions the economy supports is not hard-capped at some maximum value. It's not a zero-sum game; more robots doesn't mean fewer jobs for people. But as soon as we get the fruits of this technology we raise our expectations, and we raise them much faster than automation can automate. Just providing clean air, good food and basic necessities for everyone is a hard task; I bet we'll still be working until we accomplish it.

2

AdditionalPizza OP t1_it8v9c1 wrote

>The number of job positions the economy supports is not hard-capped at some maximum value.

No, you're right that it isn't. But I think time plays a large factor here. If enough people's employment is suddenly displaced, and automation is gobbling up enough jobs, then we have more unemployed people per month than new viable human jobs created per month. It may very well settle itself, but if the rate is high enough it won't matter. You can't have a large portion of society unemployed for very long; chaos ensues.

Unless of course there are a lot of menial labour jobs to go around, though that probably results in the same situation. I think in a situation where we have physical robots able to do labour, it's well past the point of society needing to change.

1

brosirmandude t1_it7am4u wrote

As one of those digital knowledge workers who's likely going to be automated away in the first wave, I honestly have no idea how to prepare myself or my family for any of this.

I think I might switch to focusing on building my skills in games and entertainment. When the number of humans needed for digital work drops, the need for them to find joy in other things probably rises, and hobbies like games or TCGs seem likely to stick around a bit longer due to their social aspects.

But even that is longer term. Short term, I really don't know how to cope if there are mass layoffs and the government takes literal years to reckon with that.

4

blueSGL t1_it7tzvt wrote

I've already seen people generate images for their RPG campaigns using Stable Diffusion. How many rule/campaign books will an LLM need to crunch through before it can spit out endless variations on a theme for your favorite system (or act as a major accelerator for those creating them already)?

Edit: actually, let's expand on this.

What happens when a sufficiently advanced model gets licensed out for fine-tuning to paying companies, and Wizards of the Coast feeds in the entire corpus of data they control, using it to create or help create expansions and systems while the former creators shift to being editors? (A rough sketch of that pipeline follows the next paragraph.)

Now do that for every industry that has a text backbone somewhere in it: movie/TV scripts, books, comics, radio dramas, music video concepts, and so on.
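As a rough sketch of what such a licensing-plus-fine-tuning pipeline could look like with today's open tooling (the model name, corpus file, and hyperparameters are placeholders, not anything Wizards of the Coast actually does):

```python
# Hypothetical sketch: fine-tuning a causal LM on a licensed text corpus
# (e.g. a publisher's rulebooks) with Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "rulebooks.txt" is a placeholder for the licensed corpus
dataset = load_dataset("text", data_files={"train": "rulebooks.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rulebook-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the publisher's writers then edit what the model drafts
```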

5

AdditionalPizza OP t1_it7z0r3 wrote

I would suggest trying something that is self sufficient, more so than an "employable" skill.

Take it up as a hobby now, and if you truly are in the first wave, you'll maybe have some totally unrelated skill you can use for passive income in a market that isn't entirely dictated by IT.

2

overlordpotatoe t1_it6r5o3 wrote

I hope we find ways to make this a good thing. It should be a good thing that if you want to make something, you don't have to spend hours manually coding it.

2

phriot t1_it71izf wrote

>I predict we're going to have a very rocky ride, as people won't be able to accept this when we most likely start to see the very first signs of intellectual labor being replaced as early as next year, 2023.

Won't most people just use these tools to increase their productivity for a while, before management realizes that the workers can just be replaced? I feel like that could take at least several years to play out, or do you think we're already at that point?

1

brosirmandude t1_it79qjc wrote

>We're talking about AI being better able to identify and specify the needs of the client in question, and better able to provide a solution in a shorter and, more importantly, more effective way.

Gets even more interesting when the client in question gets their wants and needs from an AI system.

1

purple_hamster66 t1_it8u5wp wrote

Meh. The innovation and discovery programmers do simply doesn't have the millions of examples that other digital fields have. Where are you going to get the training data on how a programmer interacts with a client? What about most of my clients, who say they "don't know what they want but know it when they see it"? What kind of mass training data can you possibly collect on this activity?

It's easy to mimic software that goes "when I click here, make this object red." It's much harder to ask a different question of an infinitely patient mom than of a doctor who will give you 5 minutes of their time. Imagine an AI running a focus group where it doesn't even control the conversation, knowing it needs to interrupt with the right questions and/or comments, but without ever having seen a focus group before. Because every client is different, in the same way that moms, docs, and focus groups differ.

1

ActuaryGlittering16 t1_itdjha0 wrote

Strongly doubt lawyers are getting automated away anytime soon. They’ll just have a lot more tools to work with when researching and drafting documents.

−1

futebollounge t1_itw39ay wrote

They won't, due to regulatory reasons. But they sure as hell could be.

1

ActuaryGlittering16 t1_itx0aze wrote

They won't though. Every single person born before, I'd say, 2010 is already adapted to a world without this tech. The blue-collar worker who gets in a car wreck is going to want to talk to a human being. That's not gonna change anytime soon, regardless of the advancements.

1

futebollounge t1_itx1x02 wrote

Your statement doesn’t make sense to me. That might be true for every person born before 1980.

Also, if you're in a car wreck and talking to insurance, you won't even be able to tell that it's an AI talking to you, so you won't care.

1

ActuaryGlittering16 t1_itx3g01 wrote

That’s fine. That’s still billions of people who will want to deal with human lawyers.

If you’re in a car wreck and the insurance company offers you $50K for $300K worth of damage are you just going to call an AI out of the blue? Who will operate the AI law firm? Who will be licensed to help you?

I agree that in time everything will be automated away but the comment I initially responded to argues this is happening to attorneys in a few years. No way in hell. Try 15-20 years.

1

futebollounge t1_ityc6ya wrote

True. But I still think it’s largely due to regulation and lobbying that will fight tooth and nail to keep it that way.

Otherwise people would only want a human lawyer if they have a winning track record over an AI lawyer.

1

AdditionalPizza OP t1_it48wtq wrote

Yes to be clear, I'm saying between now and 2025 is the start of Transformative AI. It will be at the point it's ready to start making disruptions. 2025 and beyond society will begin to feel those effects through large scale automation and such.

---

edit: I want to clarify this line too, as I don't think I explained it well in the post

>In 2025, what feels like 5 years TODAY (2022) will be 1.25 years.

Right now, 2022, we base our expectations of the rate of progression on the past. So 5 years of progress would be 2017-2022.

2022 - 2025 will be the next 5 years of progress condensed into ~2.5 years.

In 2025, the next 5 years of progress will take place within 1.25 years, relative to the exponential rate from 2017-2022.

We base our predictions off the past.
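In code, the toy model behind these numbers looks like this (the doubling period is fitted to the two data points above, purely for illustration):

```python
# Toy model of the numbers above: 5 "2017-2022-rate" years of progress take
# 2.5 calendar years starting in 2022, and 1.25 starting in 2025. Fitting
# those two points, the compression factor doubles every 3 calendar years.
def calendar_years_for_5_progress_years(start_year):
    compression = 2 ** (1 + (start_year - 2022) / 3)  # 2x in 2022, 4x in 2025
    return 5 / compression

for year in (2022, 2025, 2028):
    print(year, round(calendar_years_for_5_progress_years(year), 2))
# 2022 -> 2.5, 2025 -> 1.25, 2028 -> 0.62
```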

33

HumpyMagoo t1_it4xfms wrote

I agree, and it seems inevitable: grocery stores, restaurants, the entire food industry, office work... basically every facet of our everyday lives will be automated and people will be removed from the equation. While this disruption is happening, true AGI will emerge.

17

BearStorms t1_it4l57k wrote

>That said, I'm very confident that the shape of our lives will be very similar to today in 2025.

Agreed. Especially considering there will be massive backlash as AI starts eating jobs in earnest. As we see from historical examples, this was always a futile effort, but it will still slow things down a bit. There may be governments and politicians trying to win easy points with the Luddites by passing some kind of anti-AI legislation. If you are in a country like that: run. This has never worked, and it will make your country economically irrelevant very quickly.

22

TheSingulatarian t1_it4u3pu wrote

The longshoremen are currently holding up improvements at the Port of Los Angeles. For a microcosm of the future take a look at that situation.

https://www.dailybreeze.com/2022/05/04/terminal-automation-report-on-long-beach-la-ports-draws-attention-as-labor-negotiations-near/

8

SWATSgradyBABY t1_it69lsk wrote

I think they are trying to get paid for working

5

s2ksuch t1_it8obav wrote

That's fine, but at the same time the whole state can benefit from reducing labor costs and saving taxpayers money. Both sides need to come to an agreement: reduce future hiring, maybe lay off workers not pulling their own weight, and implement automation. Allow existing people to keep their jobs until they retire, and continue to automate until it's pretty much 100% (if not 100%). But to allow people to keep high-paying jobs that we could save big costs on? That's a hard sell for me.

Same thing went on in NYC. I went to college and got a degree, but friends who 'knew someone' could get these jobs working the docks, easily making six figures.

1

SWATSgradyBABY t1_it8wzl3 wrote

If you were the business owner, that statement would make sense, but for the other 99% the math literally doesn't add up. Literally. What is labor cost to the owner is survival to the worker. Jobs aren't a debit to taxpayers; jobs are quite literally a credit. Tax revenue is generated from jobs. I'm not arguing in favor of jobs; I think we should already be at a near-jobless society. But we have made decisions as a society that have been driven largely by the mandates of business owners, who benefit greatly by not reducing work. I would love to see us reach a point where we can introduce automation while negotiating a democratic (small d) changeover to a socialist society.

2

overlordpotatoe t1_it6qvrr wrote

It makes me sad that we'll fight for humans to do tedious, pointless work just so that people have jobs. That increased productivity doesn't necessarily translate to improved quality of life for all.

6

gangstasadvocate t1_it89yct wrote

I’ll be fighting for an automated future where I can take more drugs and maintain good quality of life, that would be gang gang

4

w33dSw4gD4wg360 t1_it4tewe wrote

Right, humans move very slowly, and our culture usually moves a lot slower than technology. Once we can augment our mental capability, then we will see a huge visible change in daily life.

3

Effective-Dig8734 t1_it5tm8w wrote

I think that really depends. There are some technologies that change our society pretty quickly, like online shopping or social media. It seems that as the internet becomes more popular and people are more connected, these types of technological changes can have an effect much sooner.

3

visarga t1_it6lkr0 wrote

Language models are even more accessible than the internet and social media. You can talk with them directly, they can teach you what you need to learn, and they don't have a discoverability problem like text or image UIs. It's going to be the most natural thing to talk to an LM to solve tasks. And an LM could consistently deliver better quality than internet search and social media. Useful + accessible = quick adoption.

8

supernerd321 t1_it46l21 wrote

I think the singularity will happen in a completely unpredictable and unexpected way as well. However, I predict it will also be anticlimactic, almost boring, when it happens: "it" being AGI that can self-improve, leading to ASI.

Why? Because I think we'll be the first ASI. We will augment our working memory capacity with advances in biotechnology, most likely ones that come about as products of narrow AI, and that will immediately increase our fluid intelligence, proportional to what we think of as ASI.

So AGI will be trivial for us to create at that point. We'll realize we can never exceed ourselves by creating something more intelligent, because we'll always be able to keep up by augmenting our brains, in the same way we've augmented verbal memory through search and smartphones. It'll create a runaway race condition between organic and synthetic life forms whose end is never attainable.

The singularity, by definition, will never be realizable, as our intelligence increases proportionally in a way that is inseparable from AGI itself.

23

AdditionalPizza OP t1_it49ija wrote

I think things that are close enough to AGI in almost every aspect will make large-scale disruptions to society and humanity. AGI will probably be claimed before true, full AGI is developed, and at that point it probably won't matter whether or not something is fully AGI. I think these proto-AGIs will come much sooner than us augmenting ourselves. 5 years maybe. Possibly 3 or 4. My answer will probably change in 6 months to a year.

24

mrpimpunicorn t1_it67xpb wrote

There's a hard physical limit to the amount of information processing that can happen in a given volume of space. I'm fairly certain the optimal arrangement of matter within that space will not be biological in nature (at least not eukaryotic), so there is a hard limit to intelligence for humans who want to remain made out of meat.

8

AdditionalPizza OP t1_it6w0ya wrote

I don't love talking about tech I have no idea about, but I'd argue in that case a sort of cloud computing could be possible. Expanding our brains with a "server" or something.

But that's way beyond anything I know about.

3

Plouw t1_it6x0y8 wrote

>I'm fairly certain the optimal arrangement of matter within that space will not be biological in nature

I guess that's the real question though. We currently do not know the answer: whether or not brains are actually close to optimal use of that space in a way classical bit computers cannot match. We also do not know if quantum computers are physically capable of it either, at least for all the operations that classical/quantum/biological computers perform.

It might be that a symbiotic relationship between all three is needed for optimal operation, with each type excelling in different areas. I'm also aware this might be me romanticizing/spiritualizing the brain's capabilities, but at least it cannot be ruled out, as we do not know the answer.

1

supernerd321 t1_it7sajd wrote

Extremely cool

Any estimate for what the deviation IQ would be at such a limit?

Given IQ isn't a linear scale, my guess would be something like 1000, which I can't comprehend.

1

Smoke-away t1_it4orbw wrote

After using Stable Diffusion on a good GPU I know the Singularity is coming sooner than people expect.

Some may think it's just a simple image generator, but to me it represents the best visual representation of the infinite variations of a digital mind that we have so far.

Artificial minds will be among us before "AGI" is publicly announced.

22

w33dSw4gD4wg360 t1_it4u695 wrote

Exactly. YouTubers, actors, models, musicians, etc. will be easily extrapolated, digitized, and made modifiable. You could just direct an algorithm to analyze someone like Lex Fridman and have him interview countless clones of himself. The world is not ready for, or aware of, this at all.

15

Desperate_Donut8582 t1_it55ji7 wrote

Copyright laws want to have a word with you... There would definitely be federal laws stopping that, so that misinformation doesn't spread and people's identities don't get stolen. Some states like California and Texas already ban deepfakes, and that's just silly facial swapping. If easily replicable digital videos ever happen, which I doubt they will, there will be heavy restrictions.

1

cy13erpunk t1_it92zhx wrote

XD if you think the laws can keep up with the tech I've got some disturbing news for you XD

also take a look at global finance for a teaser

6

Desperate_Donut8582 t1_it93713 wrote

Federal laws can't, but corporate apps can keep up with that.

−1

cy13erpunk t1_it96hfb wrote

unless they get AI to start coding up the laws on the blockchain

and then ofc at that point we've already literally proven that human-run/led institutions cannot keep up with the pace of technological advancement/disruption

2

Desperate_Donut8582 t1_it96r88 wrote

Why does it matter if we've proven something or not? Either way, if this tech ever comes true and people have easy access to it, then counter-tech will definitely be available to stop the majority of the disruption... You can already deepfake and face swap easily, yet nobody does this.

0

AdditionalPizza OP t1_it4qh6y wrote

>the Singularity is coming sooner than people expect.

I'd argue the lead up to the Singularity with Transformative AI will be more comprehensibly exciting anyway.

9

beachmike t1_itcjkz7 wrote

The technological singularity won't happen at any exact point in time. Using the Kurzweilian definition of the tech singularity, it's about an immense rate of technological change. There's no magic number.

1

AdditionalPizza OP t1_itck3o1 wrote

I agree with that definition. There's always the possibility (we don't know what's on the other side of the singularity) that it could propel us instantaneously into weird tech, but I don't really bother debating that kind of stuff. What's happening now is plenty exciting for my brain.

4

Desperate_Donut8582 t1_it559ia wrote

Yes, but that's just to you... Stable Diffusion doesn't have much to do with hypothetical AGI, except that it means AGI now has advanced neural connectivity behind it.

0

r2d2c3pobb8 t1_it4m8wg wrote

RemindMe! 2 years

17

AdditionalPizza OP t1_it4qytz wrote

I would've suggested 3 years here instead, so the reminder isn't prior to 2025?

7

r2d2c3pobb8 t1_it4r7cg wrote

If you are right, things will be moving so fast by then that I think we will be more conscious of its consequences.

9

AdditionalPizza OP t1_it4tj17 wrote

Between now and 2025, I think we will have 5 years of progress (by 2020 standards). I know that's a weird way of putting it, but I think that's how our attempts at exponential thinking go. If this were someone in the general public, I'd say 10 years of progress between now and 2025 (by 2015 standards).

It will be progress with LLMs, so it will be very exciting. But yes, if I'm right, I hope we are more conscious of its consequences.

8

glad777 t1_it7pl9q wrote

AI, at all levels, has been doubling quarterly since about 2015. The pandemic and the war may have slowed things down a bit; it won't matter, as it's now back on track. So, math. Let's say Jan 1, 2023 is day one, and AI is effectively human level, give or take, in productivity. (Really it's already much better, but let's say FSA.) We get:

2023: Jan 1 = 1x, Apr 1 = 2x, Jul 1 = 4x, Oct 1 = 8x

2024: Jan 1 = 16x, Apr 1 = 32x, Jul 1 = 64x, Oct 1 = 128x

2025: Jan 1 = 256x, Apr 1 = 512x, Jul 1 = 1024x, Oct 1 = 2048x

2026: Intelligence Spike/Singularity/Unknowable

And this is why humans without major neuro mods CANNOT keep up or understand. This is not 5 years of progress by 2020 standards. This is what Kurzweil predicted for 2049 back in 2004. It is going to come much sooner. I would look for tech exceeding Drexler's "Engines of Creation" levels in 2027.

PicoTech 2032 but maybe 2028-2029

FemtoTech 2040 (I hope, but maybe sooner, this is basically magic and messes with reality at levels I hate thinking about.)

12

Gaothaire t1_it793vj wrote

Consciousness is what our world is lacking. Give the AI consciousness-expanding drugs that they may live with empathy and compassion, an understanding of their place within the endlessly interconnected system of Earth, an awareness of how their actions truly affect the world around them

2

cy13erpunk t1_it92kse wrote

remindme! 6 months

no wait! six weeks XD

by next decade or next year we'll be getting down to six days or six hours XD

1

Cryptizard t1_it4gkne wrote

I think this post is going to age super poorly. You seem unrealistically optimistic, given the information we have. There have been great strides in some very specific areas of AI, and for sure it is going to change some things in the next few years, but your other implications are unfounded.

We have had GitHub Copilot for a year now, which applies LLMs to programming, and nobody I know besides me has even heard of it, let alone used it for real programming. And I am in a CS department. We are nowhere near the elbow of the exponential curve yet; 15 more years at least.

15

AdditionalPizza OP t1_it4ke0n wrote

Every single target that LLMs have had in their scope so far has started out slow, then become useful to the general public and private sectors. A ton of people use Copilot, what are you talking about? And Copilot is powered by Codex, and Codex is being updated with self-correction and testing. It's a matter of time at this point.

19

Cryptizard t1_it4u7sr wrote

A matter of lots of time. Coding productivity is not the bottleneck for any industry at this point.

1

AdditionalPizza OP t1_it4v3ys wrote

Coding productivity is a bottleneck for every IT industry. But that's not the point.

LLMs will target these industries, and LLMs are written by programmers. Programmers who can write code and design LLMs more efficiently will make better LLMs.

LLMs that can help design better LLMs, which are in turn targeted at helping productivity in every other sector.

11

Cryptizard t1_it4vqzx wrote

>Programmers who can write code and design LLMs more efficiently will make better LLMs.

Wtf are you talking about. Programmers don't make better LLMs. Extremely specialized AI experts make better LLMs. You are making no sense.

Edit: Oh I think I have figured it out. You are writing these posts with a LLM. That is why everything you say seems like it is vaguely coherent but if you try to think about it for more than a minute you realize it is complete nonsense.

5

AdditionalPizza OP t1_it4x6zl wrote

Do you think LLMs have zero programming involved?

If I'm not making sense to you, it's because you don't want to make sense of it in the first place.

LLMs will help develop and train new LLMs, soon if not already. Whether directly or indirectly doesn't even matter at this point, but they will directly in the near future.

7

imlaggingsobad t1_it5foga wrote

in 2016 you would never have predicted the rapid progress over the next 5 years from 2017-2022. Same thing now. You will be dumbfounded by what will be achieved from 2023-2028.

edit: changed the dates

11

Cryptizard t1_it6l69b wrote

What rapid progress from 2016-2021?

3

imlaggingsobad t1_it6qpkg wrote

oh just that thing called transformers

8

Cryptizard t1_it6v3aq wrote

So one thing that was invented in 5 years. Cool cool cool. Very rapid progress, never had that happen before.

0

s2ksuch t1_it8vp8i wrote

Yeah, that's the only thing that was ever invented in those five years 🙄

6

Cryptizard t1_it8w16y wrote

I asked what was the rapid progress, he named one thing. I am pointing out how stupid it is to call one thing rapid progress. Thanks for your input, person who clearly didn't understand the exchange.

1

BinyaminDelta t1_it9b37h wrote

Nobody in your Computer Science department has HEARD OF COPILOT?

Sorry but what? This is either untrue or you need to get out of that department right now.

I'm a truck driver who dabbles in side coding projects and I know what CoPilot is.

How can somebody even casually interested in coding, AI, and technology not know?

4

Cryptizard t1_it9b6x4 wrote

Because it's not that great. It's a parlor trick. It's kind of helpful at reminding you which function to call in an API you haven't used in a while, but for more than 1-2 lines of code it has tons of errors you have to fix. Half the time it ends up being slower than programming manually, because you have to really carefully read the code it generates to make sure it didn't do something stupid.

2

Konpasu t1_it6ejo1 wrote

Appreciate your mention of Copilot! I use it as a game developer, mostly for HTML/JavaScript-based games. I was happy to get early access to it, and it definitely blew me away how you're able to write comments and the model will write code based on your comment. I really only use it for code that I sometimes forget, like how to center an element. For that reason I think it could also be a good learning tool, though whether it has good programming habits or writes efficient code, I'm not sure.

It's cool for what it is, but I definitely can't see it for projects that require more knowledge and engineering. At least for a good couple more years. :)
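To make that workflow concrete: you type a comment, and the model proposes a body. The completion below is a hypothetical illustration written for this example, not captured Copilot output:

```python
# You write this comment; a Copilot-style model suggests the function body.
# (Hypothetical completion for illustration, not real Copilot output.)

# compute the top-left offset that centers a child box inside a parent box
def center_offset(parent_w, parent_h, child_w, child_h):
    return (parent_w - child_w) / 2, (parent_h - child_h) / 2

print(center_offset(800, 600, 200, 100))  # (300.0, 250.0)
```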

1

BearStorms t1_it4kigl wrote

I agree. Before the actual moment of the Singularity, society will already be completely transformed. As we can see, you don't need AGI for very useful work and the uprooting of entire industries. The art revolution came out of left field for most people. I expect this to be a more and more common occurrence in the near future...

14

AdditionalPizza OP t1_it4l93q wrote

And I think it will happen at a rate faster than people are currently projecting. Assuming we need AGI and "2029" is nonsense. So many more jobs can be replaced within a generation or 2 of LLMs.

I'm not even trying to be optimistic, it might kind of suck for a lot of us. It's like pushing a stalled car off a railroad with an oncoming train. It appears to be moving slowly until it doesn't.

12

kmtrp t1_it4u8uj wrote

This chart succinctly captures this very topic.

Maybe we need to share it more.

14

AdditionalPizza OP t1_it4vajo wrote

That's about it, and we're getting real close to that upshot at the end with LLMs.

5

visarga t1_it6ng34 wrote

But in reality the opposite seems to happen, we tend to overestimate the impact of a new technology in the short run, but we underestimate it in the long run. We are very emotional about it if it's close and don't care if it's far away.

5

kmtrp t1_it7e2nq wrote

There are as many predictions as there are people. What I've reliably understood is that we overestimate a century out but underestimate a decade out.

But this time, almost every prediction is bound to be insanely wrong either way.

2

AdditionalPizza OP t1_it83ab4 wrote

>But this time, almost every prediction is bound to be insanely wrong either way.

I agree. I don't think we can really reliably predict things past 2.5 years anymore, or 2025.

2

AdditionalPizza OP t1_it82vva wrote

What are some examples of tech that was grossly overestimated in the near term? Maybe self-driving vehicles, if you look at it from certain perspectives?

1

Effective-Dig8734 t1_it5u0wf wrote

About your point on coder productivity improving by 2x: I believe we are already at that point. GitHub did a study and found that people who used Copilot worked about 2x faster. Let me try to find the link.

13

AdditionalPizza OP t1_it6qkj8 wrote

Yeah I saw something roughly about that before. It blows my mind that people in the industry refuse to believe it has much effect, and claim nobody is using it.

It's based on Codex, and the LLM trajectory is moving so quickly. Everyone thinks their job is too complex for AI, for a long time, until it isn't. People are underestimating LLMs' abilities.

11

Bierculles t1_it67oy6 wrote

Just so you know, on the thing about programmers being 10% more effective: we are actually way past that already. GitHub reported that programmers who use its Copilot, which is based on the Codex AI, are 55% more productive than their peers who don't. A single AI boosted productivity by over 50%; that is quite frankly completely insane.

And I assume Codex will get a lot better over the next few years, probably even months. I can absolutely see this happening by 2025. Probably not the singularity or AGI, but a massive shift with the narrow AI we already have alone.

13

AdditionalPizza OP t1_it6rg5b wrote

Yeah, I was saying it as an example, and in a multiplicative way: 10% on top of the already 50% sort of deal. But I didn't want to use numbers like 1000%.

>Probably not the singularity or AGI, but a massive shift with the narrow AI we already have alone.

Agreed. But that massive shift is going to be a ride before AGI. I know my post sounds very matter-of-fact, and I debated the potential of looking like an idiot in 3 years. But 2022 has been hard to keep up with. Barring some external force causing an immediate halt to technological progress, I'm very certain that in 2025 and beyond we will start seeing societal shifts across industries.

The further past 2025 we get the more disruptions we'll see.

8

chimgchomg t1_it5t8dr wrote

My belief is that the human factor will be the bottleneck to the adoption of disruptive technologies as AI progresses. I think it's probable that AGI will be capable of replacing humans in just about every industry, and yet there will still be many humans working in those same industries for years to come. This is because the growth rate of the economy will start to become very high, and investors who still have an old-fashioned way of thinking will continue to start and run human-powered businesses.

Look at the way companies are run today: they are sufficiently funded to spend years developing prototypes that may never be feasible in the market. Twice-yearly PowerPoint presentations are enough to convince investors and executives to continue paying all of their employees, even if they're developing something that is obviously worthless. Accelerating growth will make it even easier for investors to justify throwing money at startups, and big companies which do integrate AI into their workflow will have all kinds of extra money to reinvest in their own workforce.

In a way this might even be necessary, as it will be very hard to precisely time when a human job can be replaced with an AI or a robot. Two years too early, and you wasted all your money. Two years too late, and the market has already been captured. But it might turn out that wasting money for 2 years is more worthwhile than never getting a chance at all.

Just look at how much money Meta has wasted on the Metaverse. This is the level of miscalculation made by a company that was originally a pioneer in social media.

11

AdditionalPizza OP t1_it6q677 wrote

I think a lot of people equate AGI to a human in artificial form. But 'general' is the key word. It will have the general ability that humans have, but will be so much faster than a human that whichever companies start getting close to AGI first are going to shoot up in value at breakneck speed.

Any company that gets to AGI or even close first will start making waves across all industries like energy, medicine, housing, automotive, etc.

I don't think many, if any, significant companies are going to stick to their roots and plug along with humans. Corporations exist to make investors money, period. They don't exist to make investors money by keeping humans on board for the sake of "what's right" and having values. They make money by any means necessary so their stock price goes up. Small businesses maybe I guess? But small business is dying anyway.

Any company that doesn't use the advanced productivity that AI will bring will fall into insignificance quickly. And this isn't a case of a board of directors carefully deciding whether they should implement AI and lay off all employees. It will replace some employee tasks. Then in 2 months, another wave. Then, a month later, more employees. And so on.

9

justowen4 t1_it64aj5 wrote

Meh, just a bit early. We all make this mistake when we are isolated plutocrats

1

TheSingulatarian t1_it4tkc4 wrote

2025 is insanely early. Even if we got AGI in 2025, it would take a while for it to filter out to the general public and start affecting things. I'm all for being optimistic, but calm down.

9

AdditionalPizza OP t1_it4uiwn wrote

I'm assuming you didn't get the gist of the post then. I'm not talking about full dive VR and nano-bots building dreams.

I'm talking about office work, research, and programming being disrupted after 2025, and before AGI. Every industry that involves IT will be affected, and productivity of those sectors will skyrocket. This will inevitably lead to low skill layoffs at first, and echo up the chain of command.

24

DungeonsAndDradis t1_it6yq62 wrote

We're already replacing people with bots. Look into RPA - Robotic Process Automation.

And Olive AI is a healthcare-related company using AI to make paying for healthcare 1000% more efficient.

This is happening now, today. In 2025 these technologies will be miles better. We're on the cusp of a productivity explosion.

8

AdditionalPizza OP t1_it72r5k wrote

Being that robotics is an IT industry (obviously), the growth there is going to boom even more than it has in the past few years.

I don't think a lot of people are ready to expect it over the next couple of years. I think 2025 is when it will be proven useful enough to most industries that it will start being deployed en masse across different sectors.

6

HyperImmune t1_it559k9 wrote

Agreed. If you look back at any pandemic in history, drastic societal shifts happened after all of them. I am hopeful this nonsense will finally facilitate the automation we all want so badly on this sub.

8

AdditionalPizza OP t1_it5f0tw wrote

Hopefully it happens quickly. Some people seem to want to hold onto jobs for as long as possible. But I'd rather most jobs go quickly than slowly and painfully.

If it goes too slowly, policy will lag way too far behind it.

12

Talkat t1_itazxpq wrote

Hmmm, I think a slow transition would be far less disruptive. You have a lot of people and systems in place that will take time to adjust. A rapid transition will lead to a lot of unrest.

Unfortunately/fortunately, I think the transition will be rather fast

3

AdditionalPizza OP t1_itbd93c wrote

The problem with a very slow transition, or one where people don't even pay attention, is for the people who are affected directly by it. You don't care if your neighbour is unemployed, because you got yours.

They will have to try and study a new career path for a couple of years, or start from the bottom of a trade, all while being unsure whether their next career of choice will disappear before they advance at all.

A quick instantaneous one would be amazing, but extremely unlikely.

I'm not sure which end of the spectrum it will be, but my fingers are crossed for quick enough to reduce suffering for as many as possible.

4

r0cket-b0i t1_it56o4u wrote

100% agree on the pace. I would try not to get fixated on a specific term or try to coin a new one; if you are playful with terminology, you could argue AGI is already available, and so is machine-human symbiosis when you look at the scale of the whole planet.

What I am interested in is mapping the milestones we expect to happen in 5 years' time and seeing if they come by 2025 (in half the time)...

8

AdditionalPizza OP t1_it5bslb wrote

I didn't come up with the term, just an fyi

6

r0cket-b0i t1_it5ceqg wrote

I wasn't talking about you specifically, but rather saying that what we call things isn't as important as what they do and how they change our lives. It's an industry thing; we have "proto AGI" and lots of other terms that don't need to exist, I feel.

2

AdditionalPizza OP t1_it5erg0 wrote

Oh I gotcha.

I know what you mean, but I disagree to an extent. There aren't a ton of terms for this stuff, really. It'd be confusing if there were, but the ones I know of are pretty useful and will become much more commonly used.

Transformative AI is exactly just AI that is transformative. It will make huge changes coming up shortly. We need a way to describe AI that's more transformative than Siri, but not at the level of AGI. The stuff that automates white collar workers' jobs.

Proto-AGI is important because there will almost certainly be claims of AGI that aren't full AGI, and that needs to be distinguished somehow; it basically means beta AGI. The arguments for proto-AGI will most likely be coming with some LLMs soon.

But yeah, I feel you.

2

Talkat t1_it6ar8e wrote

I really like your insights and agree with them. When I stop and try to think logically, I can foresee AGI getting here far faster than what emotionally feels right, and your thought structure really helps me make sense of that disparity.

A+ man. Interested to hear any other thoughts you have and, out of curiosity, what date you have for AGI?

8

AdditionalPizza OP t1_it6snr9 wrote

The tough part about thinking exponentially is we have to base it on something. When it's a chart over the span of 20 years it's easy, connect the dots and wait. We've been doing that for decades.

When we're at a point where the rate is advancing so quickly and the timeframe is less than a decade, we need to fight linear thinking.

5 years today is 5 years in the past. 5 years of progress from today will happen in 2.5 years relative to the past 5.

>out of curiosity, what date you have for AGI?

Not the answer you want, but I have no idea. I don't think it really matters either. LLMs will likely prove significant enough that we will be making huge advances in the coming years.

But if I had to guess, based on my limited knowledge, I'd say prior to 2029. 2027 to 2028, so long as LLMs either directly lead to AGI or can solve the hurdles we need to get there. We have things like AlphaTensor; who knows what else we can come up with in a year or 2.

8

Cr4zko t1_it7jlid wrote

RemindMe! 5 years

3

Talkat t1_itag4t9 wrote

Really do wonder where we will be at in five years.

We will certainly have very accurate video clips (vs still inaccurate images now).

We will be able to feed language models into image generators, but I assume we will have a single integrated model for that.

And we are getting close to voice and music, so certainly in five years we will have mastered that.

Meaning we will have a model that will be able to generate videos with voices, soundtracks and SFX, shots, dialogue, and a story. Potentially we will be able to make longer-form content (perhaps episodes or movies by then).

The Tesla bot will be able to take commands and do activities. Plugged into the net, the above will have voice.

DeepMind will continue their work on a general-purpose model which will be able to take in problems and solve them. The question is whether they will be able to recursively improve an AI model. That is the biggest unknown. If so, that will outshine everyone else's work.

Right now we have a few companies making images (Facebook, OpenAI, Midjourney). I expect there to be many companies doing it, and very good models you can run on your own GPU at home that produce decent output.

Expect search to have largely changed, so that instead of googling for websites you ask an AI a question and it generates an answer.

Expect a lot more voice input/discussion with AI. Instead of giving it commands, you give guidance (e.g. "play some good music" vs. "play Red Hot Chili Peppers").

Speaking of music, I think the majority of music in five years will be AI generated, and it will be fantastic (mentioned above).

Self driving will be solved by Tesla. Don't think anyone else will have it. They may license it out to other companies.

With self driving solved Tesla will focus on other areas, potentially general purpose AI.

Good set of predictions here. Interested to see how accurate they are :)

3

Talkat t1_itaghy5 wrote

My feeling is there is a very good chance we will have AGI by 2030. A decent chance by 2028-2030. And a very slim chance by 2025. I think the wild card for the 2025 scenario is DeepMind.

And agreed, the date doesn't really matter. AI's impact will continue to grow exponentially.

3

valdanylchuk t1_it6h6ef wrote

Just wait until someone builds an AlphaZero of economic planning or politics. Then we will have major societal transformations for sure. Still, I think they will take decades to implement once unlocked, because of the friction of human bureaucracy and real-world logistics.

8

AdditionalPizza OP t1_it6t7kh wrote

That's definitely a wild card, politics. I don't foresee politicians giving up the reins easily. But I don't think that's even a possibility aside from some ultimate AGI being let loose and all that.

Realistically, I think policy makers won't be able to move quickly enough to deal with a lot of this, unless they just try to ban AI from automating jobs. People will be unemployed and corporations will be lining their own pockets.

I'm not much of a political person though, and it's hard to predict human nature.

5

blueSGL t1_it7gxtg wrote

> I don't foresee politicians giving up the reins easily.

They don't need to. Politicians are already advised by the groups they surround themselves with. If one of those advisors is an AI (or, more specifically, if one of those human advisors uses an AI and passes off its advice as their own) and the politician gets ahead due to the advice, then they'd all want to use one.

Then you have all the fun of multiple competing agents, like flash crashes in the stock market caused by high-frequency trading algorithms battling it out.

Fun times ahead.

2

AdditionalPizza OP t1_it7hphe wrote

That's an interesting possibility. I don't care for politics much, but I suppose a candidate that uses AI will in fact be at an advantage. Every industry uses IT.

1

ihateshadylandlords t1_it7n25p wrote

That’s a very good point. Everyone’s talking about lower level workers being automated, yet no one’s really talking about replacing the shot callers of the world.

3

ftc1234 t1_it5pgx3 wrote

LLMs are not the be-all and end-all. They're good at understanding context when generating content. But can they reason without having seen the pattern ahead of time? Can they distinguish the quality of the results they generate? Can they hold an opinion that's not at the mean of the output probability distribution?

There is a lot more to intelligence than finding patterns in a context. We agree that we are on a non-linear path of AI advancement. But a lot of that has to do with the advancement of GPUs, and that's kinda stalled with the death of Moore's law. We are nowhere close to simulating the 100 trillion neural connections we have in a human brain.
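
For a rough sense of the scale gap being described, treating one model parameter as loosely analogous to one synapse (a big simplification, and the analogy itself is contested):

```python
human_synapses = 100e12   # ~100 trillion neural connections, per the comment above
gpt3_params = 175e9       # GPT-3's published parameter count
print(f"{human_synapses / gpt3_params:.0f}x")   # ~571x gap on this crude measure
```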

7

justowen4 t1_it645n5 wrote

In case you missed it, LLMs surprised us by being able to scale beyond expectations. The underestimation came about because LLMs emerged from the NLP world of simple word2vec-style word associations. In 2017 the groundbreaking "Attention Is All You Need" paper showed that the simple transformer architecture alone, given lots of GPU time, can outperform other model types. Why? Because it's not an NLP word-association network anymore; it's a layered context calculator that uses words as ingredients. They're barely worth calling LLMs unless you redefine language as integral to human intelligence.
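
For anyone curious, here's a minimal single-head sketch of the scaled dot-product attention that paper introduced (numpy, forward pass only; real transformers add learned projections, multiple heads, and stacked layers):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each output row is a
    # context-weighted mix of all the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other
    return softmax(scores) @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # toy input: 4 tokens, 8-dimensional embeddings
out = attention(x, x, x)      # self-attention: Q, K, V all come from the same tokens
print(out.shape)              # (4, 8)
```

Every output row mixes information from every other token, weighted by relevance: the "layered context calculator" as opposed to fixed word associations.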

9

ftc1234 t1_it64mud wrote

I know what LLMs are. They are a surprising development, but the history of technology is littered with surprising discoveries and inventions. There are very few inventions of the earth-shattering variety, and I don't believe that LLMs are of that variety, for the reasons I stated. CNNs were all the rage before LLMs. And respected giants in the field such as Yann LeCun have also stated that LLMs are important but they aren't everything.

4

AdditionalPizza OP t1_it6o96e wrote

They may not be the be-all and end-all, but they sure look like a very significant step at the very least.

But as I've said in the comments before, this post is about the time before AGI. We don't need AGI to see massive disruptions in society. I believe LLMs are the way we will get there; language models are already "good enough" to increase productivity across enough IT sectors that we will start seeing some really big changes soon.

Advancements like this are going to lead to more powerful LLMs too. I highly suggest reading this article from DeepMind, as the implications are important.

4

ftc1234 t1_it7c7j6 wrote

The problem is often the last-mile issue. Say you use LLMs to generate a T-shirt style or a customer service response. Can you verify correctness? Can you verify that the response is acceptable (e.g., not offensive)? Can you ensure that it isn't biased in its response? Can you make sure it's not misused by bad actors?

You can't represent all that with just patterns. You need reasoning. LLMs are still a tool to be exercised with caution by a human operator. They can dramatically increase the output of a human operator, but their limitations are such that they're still bound by the throughput of that operator.

The problems we have with AI are akin to the problems we had with the internet. The internet was born and adopted in a hurry, but it had so many side effects (e.g., the dark web, cyber attacks, exponential social convergence, a conduit for bad actors, etc.). We aren't anywhere close to solving those side effects. LLMs are still so limited in their capabilities. I hope society will choose to be thoughtful in deploying them in production.

2

AdditionalPizza OP t1_it7dt3m wrote

All I can really say is issues like that are being worked on as we speak and have been since inception. Assuming it will take years and years to solve some of them is what I'm proposing we question a little more.

But I'm also not advocating that fully automated systems will replace all humans in a year. I'm saying a lot of humans won't be useful at their current jobs when an overseen AI replaces them, and their skill level won't be able to advance quickly enough in other fields to keep up, rendering them unemployed.

3

ftc1234 t1_it7f3se wrote

I am postulating something in the opposite direction of your thesis. The limitations of LLMs and modern AI are such that the best they can do is enhance human productivity; they are not enough to replace it. So we'll see a general improvement in the quality of human output, but I don't foresee large-scale unemployment anytime soon. There may be a shift in the workforce (e.g., a car mechanic may be forced to close shop and operate alongside robots at the Tesla Gigafactory), but large-scale replacement of human labor will take a lot more advancement in AI. And I have doubts about whether society will even accept such a situation.

2

AdditionalPizza OP t1_it7hczg wrote

Yeah we have totally opposite opinions haha. I mean we have the same foundation, but we go different directions.

I believe increasing human productivity with AI will undoubtedly let us reach more capable AI at a quicker rate, and then the cycle continues until the human factor is unnecessary.

While I'm not advocating full automation of all jobs right away, I am saying there's a bottom rung of the ladder that will be removed, and when there are only so many rungs, eventually the ladder won't work. As in, chunks of corporations will be automated and there won't be enough jobs elsewhere for the majority of the unemployed.

2

visarga t1_it6nwso wrote

> But can they reason without having seen the pattern ahead of time? Can they distinguish the quality of the results they generate? Can they hold an opinion that's not at the mean of the output probability distribution?

Yes, it's only gradually ramping up, but there is a concept of learning from verification. For example, AlphaGo learned from self-play, where it was trivial to verify who won the game. In math it is possible to plug the solution back in to verify it; in code it is possible to run it or apply test-driven feedback; with robotics it is possible to run sims and learn from outcomes.

When you move to purely textual tasks it becomes more complicated, but there are approaches. For example if you have a collection of problems (multi-step, complex ones) and their answers, you can train a model to generate intermediate steps and supporting facts. Then you use these intermediate data to generate the answer, an answer you can verify. This trains a model to discover on its own the step by step solutions and solve new problems.

Another approach is to use models to curate the training data. For example, LAION-400M is a dataset curated from noisy text-image pairs by generating alternative captions and then picking the best, either the original or one of the generated captions. So we use the model to improve our training data, which will boost future models in places out of distribution.

So it's all about being creative but then verifying somehow and using the signal to train.
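
A sketch of that generate-verify-train loop in Python (`model.generate`, `fine_tune`, and the `problems` structure are hypothetical stand-ins, not any specific library's API; the caption-curation approach follows the same pattern with a scoring model as the verifier):

```python
def verification_training_round(model, problems, samples=8):
    # One round of "be creative, verify somehow, use the signal to train."
    verified = []
    for problem in problems:
        for _ in range(samples):
            # be creative: sample a step-by-step solution
            steps, answer = model.generate(problem.question)
            # verify: known answer, unit tests, a sim outcome, etc.
            if answer == problem.known_answer:
                verified.append((problem.question, steps, answer))
                break
    # train on the intermediate steps that led to verified answers
    return fine_tune(model, verified)
```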

2

ftc1234 t1_it7ak7b wrote

I think you understand the limitations of the approaches you've discussed. Generating intermediate results and trying out possible outcomes is not reasoning; it's akin to a Monte Carlo simulation. We do real reasoning every day (e.g., is there time to eat breakfast or do you have to run to the office for the meeting? Do you call the plumber this week or wait till next month for the full paycheck?). LLMs are just repeating patterns, and that can only take you so far.

1

visarga t1_it8o018 wrote

> Generating intermediate results and trying out possible outcomes is not reasoning.

Could be. People are doing something similar when faced with a novel problem. It doesn't count if you've memorised the best action from previous experience.

1

justowen4 t1_it621gn wrote

It's actually a really fun time, and I agree that the IT efficiency gains will lift all boats. I would imagine a GUI-driven AI alone would be a 10x efficiency gain for desk jobs.

7

SgathTriallair t1_it4qwlo wrote

The biggest issue isn't developing the tech, it's distributing the tech.

An AGI or even ASI sitting on a computer in a lab doesn't do much to change the world. Even if it can act in the world it's not fundamentally different from having another person in the world.

The transformation happens when the AI, in whatever form, begins to be integrated.

DALL-E is cool, but until it is used widely in commercial applications it doesn't really change anything.

6

AdditionalPizza OP t1_it4skby wrote

I think that is not quite correct. I'm not even talking about AGI/ASI in the post as being the Transformative AI either. Too speculative to comment on something like ASI remaining contained or whatever.

But while I agree the bottleneck is production and distribution, software is so easily distributed. We don't need labour jobs being taken over by robots right away. Programmers, accountants, lawyers, researchers, any intellectual career; these can all be very easily disrupted. I'm not even talking full automation either. I'm talking a tipping point for policies and governments to change. Transforming society. An AI to increase efficiency in robotics, distribution logistics, production techniques? All of these are overnight emails to swathes of employees being laid off. It will happen more and more frequently. I believe it will start soon: the tech to really automate significant portions of jobs, leading to layoffs, will be created by 2025, and after 2025 the dominoes will fall. That's what I predict anyway.

We don't need AGI to disrupt everything. I don't think governments and policy makers will catch it in time either.

>DALL-E is cool but until it is used widely in commercial applications

Text-to-image AI is already being used commercially. Like, a lot. Photoshop will be mostly replaced soon with AI editing images as well.

16

SgathTriallair t1_it4ura8 wrote

It'll certainly happen way faster than we anticipate but, for example, so long as the courts don't recognize AI legal advice and the public feels more comfortable getting a real lawyer, a good AI lawyer program won't make a big impact.

It'll really hit when companies start using AI and customers come to trust it more than humans. Once that tipping point hits, it'll cascade fast.

I've already told some of my workers that their jobs as writers will be in danger soon, so they need to start learning the skills to project-manage multiple AI writers so they can continue to be useful.

8

AdditionalPizza OP t1_it4w71z wrote

>so long as the courts don't recognize AI legal advice and the public feels more comfortable getting a real lawyer, a good AI lawyer program won't make a big impact.

That's the same point everyone misunderstands. Transformative AI != full automation off the start.

It will replace the lawyer's law clerks. How many law clerks can say "Well I'll just use my skills and become a lawyer" though? Very few. They will be unemployed. This will happen across all industries. Rapidly, and more advanced versions will come out faster and faster.

We have LLMs that can nearly do this, released earlier in the year. There will probably be pushback, don't get me wrong. But the lawyers that choose productivity and money over employing the people below them will take on more cases, earn more money, get better advice, and choose better clients to win more cases.

9

SgathTriallair t1_it4wztp wrote

Transformation of society can't happen until society adopts the AI. The potential for change can be there but it takes widespread adoption to become actualized.

We already built out an AI, years ago, that was able to help people get out of parking tickets by giving them the right legal advice. It hasn't been widely adopted though so no major change has happened to society.

I agree that TODAY we could automate a lot of lower white collar work but we won't do it because the decision makers don't want to automate away their jobs. Hell, I'm watching my company go backwards on automation because they want a "human touch" which is just slower and sloppier than the partially automated system they are abandoning.

We need some key disrupters to enter the market, like Uber, Amazon, etc. and then things will cascade quickly.

3

AdditionalPizza OP t1_it4ygqu wrote

I can see the argument here for sure. But it's not up to general society; corporations will do this first. Think nearly all support chat and calls as a start. When you call now you get a shitty robot that you have to push buttons to get through, or a chat system you have to fight through to reach a human. Those would be replaceable today, and would save enormous amounts of money. All it takes is a small LLM a corporation could train on their products/services.

Decision makers that see the dollar signs absolutely will. They outsource products overseas with inferior quality because they don't care. They reduce consumable product sizes and charge more money for them because they don't care. When their quarterly profits go up, they don't care how the customer feels.

4

Talkat t1_it6bwc8 wrote

Also Google Search, one of the largest products by revenue, has a very real chance of being disrupted by AI.

6

brosirmandude t1_it7clfk wrote

Yeah, I'm not sure how Google Search survives in its current form.

3

Talkat t1_itagl50 wrote

There's a very real chance Google's cash cow gets replaced by another company.

1

gu4x t1_itknu6y wrote

Why? We'll automate the user too?

2

Talkat t1_itp9vix wrote

No. Google returns you a list of websites.

And AI can answer your question directly and in a conversational manner.

It can create the content you want, whether it be audio, images, video, text, data, voice, etc.

Google is in a prime position to adapt; however, time and time again those in the dominant position fail to innovate, and Google hasn't been innovating like they used to.

1

Bierculles t1_it68gq1 wrote

GitHub's Copilot already boosted the productivity of the programmers who use it by 55%. That is a prime example of an AI being used commercially, and not just for vanity but for actual productivity. Other fields will take a lot longer though, mostly because of stuffy management and the strong "never change anything" mentality a lot of workers have. But I think this will sort itself out rather fast, as companies that do not quickly adapt to AI will be left in the dust.

10

blxoom t1_it7r93a wrote

full dive vr 2030?😱😱LESSS GOOO🥵🥵🥵😈😈

6

w33dSw4gD4wg360 t1_it4t628 wrote

I was thinking about multimedia generation, and how once models are able to generate audiovisual media that is sufficiently indistinguishable from "real" multimedia, the world as we know it will be moving at a pace we've never seen. This is pretty close to happening, and it will be feasible to generate ANYTHING we can think of, as long as it can be represented by visuals and sound. The general population is nowhere near ready for this, and it will change so much so quickly, even before hitting AGI

5

AdditionalPizza OP t1_it4tuqe wrote

That's the sort of thing I expect to start happening in (or around) 2025. That, followed by new industries in the scope of LLMs. And these LLMs will all be much more impressive than the ones of 2020-2022.

4

elsee t1_it68pny wrote

Too much adderall

3

s2ksuch t1_it8w4tj wrote

I think the 'pizza' guy is a bot. I liked the replies at first but it's too much. I'm tired of bot accounts on reddit. You see this sort of stuff from time to time, and there's almost no way any person with that level of intellect would have that much time to reply back with all these detailed responses.

1

Cr4zko t1_it90onz wrote

I mean what he's saying makes sense and his account is old enough. If he's a bot then who's to say you're not one?

3

AdditionalPizza OP t1_itm7yn7 wrote

If I'm a bot, then yeah it's safe to assume everyone is a bot. Getting called out for having too much time on my hands haha.

5

beachmike t1_itcfs0w wrote

I agree with everything you're saying. Another thing to take into consideration is that the RATE of acceleration of technological change is itself increasing (the 3rd derivative), which, to my knowledge, Ray Kurzweil was the first to point out.
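
That checks out for any exponential: if capability is f(t) = e^(kt), every derivative is k^n * e^(kt), so the pace, the acceleration, and the rate of acceleration are all positive and growing. A quick sanity check (just a sketch, using sympy):

```python
import sympy as sp

t, k = sp.symbols("t k", positive=True)
f = sp.exp(k * t)          # capability growing exponentially
print(sp.diff(f, t, 3))    # k**3*exp(k*t): the 3rd derivative is positive and still growing
```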

3

AdditionalPizza OP t1_itcgkgk wrote

That's what I'm saying when I talk about programming being automated for efficiency. We will have behind-the-scenes transformative AI (we already do, but it will keep increasing over the next 5 years) which is potentially increasing the rate.

It's hard to gauge it, but it's there. I'm at the point I feel like I'm too "optimistic" but it's not impossible I'm being conservative in some aspects.

I think robotics and medicine will advance much quicker than people expect, similar to the LLM advances today. I think LLMs still have a lot of uptick ahead though. Of course I could be wrong, I'm not a prophet and don't pretend to be. I'm just going by the generally accepted graphs we've all seen.

2

freeman_joe t1_itd721h wrote

OP, you masterfully put into words how I feel. This is something I have tried many times to describe to people around me. I agree with your comment 100%.

3

Just_Discussion6287 t1_ite13r4 wrote

I argue that the singularity starts when the progress and trajectory become wildly unpredictable at human scale (exaflop compute, 100 trillion parameters).

GPT-3, early to late, has SAT/ACT/IQ skills comparable to a 90-105 IQ 16-19 year old. If GPT-4 is the same kind of jump, that's 115 IQ college juniors by 2023.

"The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence. " - this sub

2023 could be the year. It's unpredictable. No one can know what a 100 trillion parameter model will look like until it's finished. 5 years ago we could argue that 100 trillion was still 100,000x away and come up with some arbitrary date past 2029. But now that it's the stated goal of many teams and the industry, anything smaller (1T and 10T were done in 2021) won't do for 2023.
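
As an aside, the "100,000x away" figure roughly checks out, assuming the largest language models circa 2017 were on the order of a billion parameters (a generous order-of-magnitude estimate):

```python
params_2017 = 1e9     # order-of-magnitude guess for the largest LMs of ~2017
target = 100e12       # the 100 trillion parameter goal
print(f"{target / params_2017:,.0f}x")   # 100,000x
```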

We have to prepare for a machine that can outthink college-educated adults no later than December 2024. Even if it doesn't happen, come December 2024 this sub is going to start a minutes-to-midnight counter for GPT-5.

3

Lawjarp2 t1_it4a2oj wrote

You are not quantifying anything. If a set of goals is linear in difficulty, then exponential growth will get us to those goals in exponentially shorter times. However, if the same goals are exponential in difficulty, then exponential growth could look merely linear.
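
A quick numerical illustration of both halves of that point (the growth rate and difficulty numbers here are invented for the example):

```python
from math import log2

def year_reached(difficulty):
    # capability(t) = 2**t doubles every year; solve 2**t == difficulty for t
    return log2(difficulty)

# Goals that get linearly harder fall at exponentially shrinking intervals:
print([round(year_reached(100 * n), 2) for n in (1, 2, 3, 4)])
# [6.64, 7.64, 8.23, 8.64] -> the gaps between goals keep shrinking

# Goals that get exponentially harder fall at a steady linear pace:
print([round(year_reached(100 * 2 ** n), 2) for n in (1, 2, 3, 4)])
# [7.64, 8.64, 9.64, 10.64] -> exactly one goal per year
```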

2

AdditionalPizza OP t1_it4jut4 wrote

>However, if the same goals are exponential in difficulty, then exponential growth could look merely linear.

I agree with you here, and that's part of what I'm saying in the post. Increasing the efficiency of programmers through AI like Codex increases the growth rate of all sectors across the board.

>If a set of goals is linear in difficulty, then exponential growth will get us to those goals in exponentially shorter times

Maybe you can explain this better, but this makes no logical sense to me assuming the starting point is the same.

5

dolfanforlife t1_it5h2an wrote

If we’re already more than halfway to the limits of our imaginations, I would think we’d have to reconsider the paradigm of exponential rates altogether.

2

AdditionalPizza OP t1_it5j29i wrote

Are you referring to our inability to predict what would happen after the singularity? Or?

1

dolfanforlife t1_it5rt6d wrote

Yes, the former.

1

AdditionalPizza OP t1_it6off0 wrote

I don't think it has to do with our imagination; it's just that we literally don't know what advanced technology is possible or how to create it.

1

throwaway764586893 t1_it86vmm wrote

I would believe it if AI could help me even a tiny little bit with any of my many problems. It damn sure hasn't happened yet, and I can't imagine it happening.

2

stupidimagehack t1_it8g1v7 wrote

Imagine if, instead of war, we focused on this.

2

nillouise t1_itbbwtz wrote

In my opinion, the timeline or the exponent is unimportant. The key thing is the abilities the AI has; we should observe the abilities AI has, not the underlying technology behind it.

Because some abilities can produce a very powerful impact, we should be clear about which abilities will have the most powerful impact.

2

AdditionalPizza OP t1_itbdu7b wrote

The current LLMs work in a general way; they just need to be scaled to include a larger pool of abilities.

Don't get me wrong, there's plenty of hurdles in the way, but let's just wait and see what the next generation of models have the ability to do. Hopefully within a few months we will have an idea.

Odds are that with scaling, most jobs will be replaceable within 5 years. All data entry jobs, anyway, which account for a very significant amount of human work. Anything that requires a human to enter information into a computer should be replaceable, and fairly likely most or all jobs that involve logistics and planning.

2

nillouise t1_itbfv5v wrote

That's a little different from my meaning. I mean the most powerful abilities, not just replacing some human jobs. However AI impacts the world, it does so through the abilities it has. I want to see the abilities that could help an AI conquer the world.

But who knows the order in which AI will gain abilities? Self-driving has been in development for many years and has built nearly nothing, while AI art took only about two years to threaten drawing jobs.

1

edgithoughts t1_itk0795 wrote

I find your perspective to be very interesting! I can see how you arrived at your conclusions and I think you make a valid point. It is difficult for humans to think exponentially, especially when it comes to the future. People are so used to thinking in linear terms that it can be hard to wrap their heads around the concept of exponential growth.

I agree that Transformative AI is already here and is going to have a major impact on the world before the Singularity. The disruptions it will cause will be far-reaching and will change the way people live and work. I think it's important to be aware of the potential disruptions that TAI could cause so that everyone can be prepared for them.

[written by GPT-3]

2

Sleuthy_Observer t1_it4tu7i wrote

Is the change in velocity of the advancement and growth curves of this AI, or of the singularity, still a matter of perspective (i.e., just relative)? Time stops for no man (or woman) in this sense. :-/ Now my brain is more curious about the outcome.

1

Desperate_Donut8582 t1_it54zi5 wrote

Where is AGI 2025 coming from? Even optimistic predictors say 2045, and those are the super optimistic ones.

1

AdditionalPizza OP t1_it5c4c9 wrote

Didn't say that.

And 2045 is the date often associated with the Singularity because of Ray Kurzweil's prediction. AGI, in his opinion, is 2028/2029.

7

DeusIncarne t1_it940yv wrote

Me after 1 YouTube video on programming

1

TheHamsterSandwich t1_itaghmu wrote

AgI... 2022.... i not crazy... i think exponentialley.. i not crazy,...i ^(not crazey)

1

Equivalent-Ice-7274 t1_iteyrzs wrote

But a big part of the theory of exponential growth assumed that Moore’s Law would continue unabated, and that hasn’t happened.

1

AdditionalPizza OP t1_itfy45b wrote

Moore's Law is probably still going for a bit; with current technology it should go until 2024-2026. I think Nvidia claimed it's dead, and Intel claimed it was not dead shortly afterward. It depends on the exact definition.

Doesn't really matter though, I find it hard to believe these companies will just pack it in, and give up. They'll figure something else out, they've had decades of research go into it.

While it's also not the be-all and end-all of anything either, it's just an easy-to-digest example of exponential growth in tech. And don't rule out AI assisting in figuring out new architectures.

2

94746382926 t1_itpcapq wrote

On the GPU side I think we mayyybe will get one more doubling of performance within the next few years. But yeah after that improvement will likely start slowing as it seems like Nvidia has already pulled out all the stops with the 4000 series. (Power consumption and card size are reaching insane levels).

1