Submitted by oldmanhero t3_zrsc3x in singularity

Hank Green, of all people, posted something recently about the pace of change we're seeing, and he made a point I hadn't fully considered before: at a societal level, at least, we are already changing too fast for us (i.e. our institutions) to keep up.

Since then I've been thinking about the next 20 years, particularly in terms of work, and I find myself wondering more and more whether we might already be in the midst of that knee in the curve where change goes vertical.

Imagine, for example, trying to advise a child entering high school or junior high next year about what careers will still be viable when they grow up. Can you confidently choose 5 careers you think will still be available to a regular person in 10 to 20 years? I could take some guesses, but I wouldn't be confident about them.

124

Comments

MNFuturist t1_j14shjd wrote

The near-term problem isn't AI replacing careers 1:1 (e.g. an AI doing everything a person does in their job). It's AI slowly replacing each of the 100+ sub-functions of each job. Death by a thousand cuts, not one-shot, one-kill. The pool of remaining functions that only humans can do gets smaller and smaller (even with a few new ones added along the way), and the "human" roles keep getting recombined into whatever is left that only they can do. That's why the whole "augmentation not replacement" argument is garbage. The net result is fewer humans working.

77

Foundation12a t1_j15icr1 wrote

Most of these functions are also quite simple, which means it takes less compute and costs less to replace them. As the technology advances this cost keeps going down, because the requirements to perform these tasks do not change while the quality of the technology keeps improving.

We definitely do not need super intelligent AI to perform most of these tasks and what we do need becomes more and more affordable and accessible in shorter and shorter time spans.

15

Capitaclism t1_j16yqmp wrote

A slow increase in the supply of labor, though possibly without the desired increase in demand for said labor.

One of the issues is the speed with which this is about to happen. It likely won't give economies time to properly adjust.

Usually when you have an increase in the labor force you get increased GDP output and a higher supply of goods and services. This puts downward pressure on prices, and demand for those goods and services generally increases as they become more accessible. This extra demand in turn generates further entrepreneurship as people seek to meet it. But if it happens too fast, that cycle may substantially lag behind the increased output without creating more demand for labor.

6

Regretti-Os t1_j17zimw wrote

This is starting to happen for mundane tasks. I work as a cyber security consultant for a large company, and a decent chunk of the projects I've worked on have been "we want to program a bot to send/receive emails and files to update our databases on a daily refresh" or other simple automation processes.
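
For a sense of how simple these automations are, here's a minimal sketch of such a daily-refresh bot in Python (standard library only). The host, credentials, and the `id`/`value` CSV columns are placeholders invented for illustration, not details from any real project:

```python
import csv
import email
import imaplib
import io
import sqlite3

# Placeholder connection details -- assumptions for the sketch, not real.
IMAP_HOST = "imap.example.com"
USER = "etl-bot@example.com"
PASSWORD = "app-password-here"

def fetch_csv_attachments():
    """Yield the decoded text of every CSV attached to an unread message."""
    conn = imaplib.IMAP4_SSL(IMAP_HOST)
    conn.login(USER, PASSWORD)
    conn.select("INBOX")
    _, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            name = part.get_filename()
            if name and name.lower().endswith(".csv"):
                yield part.get_payload(decode=True).decode("utf-8")
    conn.logout()

def refresh_database(db_path="records.db"):
    """Upsert each CSV row into a local table keyed by 'id'."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS records (id TEXT PRIMARY KEY, value TEXT)")
    for text in fetch_csv_attachments():
        for row in csv.DictReader(io.StringIO(text)):
            db.execute(
                "INSERT INTO records (id, value) VALUES (:id, :value) "
                "ON CONFLICT(id) DO UPDATE SET value = excluded.value",
                {"id": row["id"], "value": row["value"]},
            )
    db.commit()
    db.close()

if __name__ == "__main__":
    refresh_database()  # scheduled once a day, e.g. via cron
```

A real deployment would add error handling, logging, and credential management, but the core of these projects really is this small.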

3

Karcinogene t1_j1w1l5u wrote

Fewer humans working OR expansion of the economy overall. If 99% of human work is replaced by AI, we can maintain the same number of jobs if GDP is multiplied by 100.

There is not a fixed amount of work to be done. There is an infinite universe of untouched mineral reserves and energy out there.

We still find jobs for bacteria. We pay them in food.

1

coumineol t1_j14jcz9 wrote

>Imagine, for example, trying to advise a child entering high school or junior high next year about what careers will still be viable when they grow up.

Please don't take it personally, but this is a good example of why it's already out of control, and why we won't be able to keep up. We're still planning our lives in terms of which college we should attend, jobs, careers, salary, or even retirement, while it's painfully obvious that all those concepts are about to lose their meaning soon. Another nice example is teachers trying to find ways to identify whether students have "cheated" by using AI in their essays. We're embarrassingly trying to make sense of the new paradigm using thought patterns that were only useful in the old one. We look like Wile E. Coyote from the cartoon, still running, unaware that he's about to fall off the cliff.

41

SurroundSwimming3494 t1_j14o4rl wrote

>We're still planning our life in terms of which college we should attend, jobs, careers, salary, or even retirement, while it's painfully obvious that all those concepts are about to lose all their meaning soon.

It's not a good idea to stop planning for your future life because of events that are, as of right now, hypothetical and thus may never come to pass, or at least not in your lifetime.

Also keep in mind that your belief isn't universally held at all and that many AI researchers, futurists, sociologists, economists, etc. would disagree with it.

14

JVM_ t1_j14phwi wrote

"Kids in school today will be doing jobs that don't exist yet."

I only found this to be partially true prior to ChatGPT. Programming and being a digital artist are what my kids would naturally fall into.

Now I don't know how to advise either of them. Programming will be revolutionized by AI, and digital art has been dealt a death-blow in under a month.

AI can solve LeetCode and Advent of Code problems, and rework existing code.

All the digital art communities are fracturing because AI art is overwhelming them. The split is happening because some humans don't want AI art polluting their communities, but there's no way to tell what's AI art anymore.

Crazy times ahead, hopefully there's something good past the event horizon.

14

VertexMachine t1_j16en6j wrote

>"Kids in school today will be doing jobs that don't exist yet."

Yeah, and all but one job I've had didn't exist when I was a kid. And my life isn't and wasn't terrible ;-)

>Now I don't know how to advise either of them. Programming will revolutionize with AI and

I think there was only a short period of human history when schools were actually preparing people for jobs. The education system will change, evolve, or become obsolete and be replaced by something different.

>digital art has been dealt a death-blow in under a month.

Art has not been dealt a death-blow. Image generators are for sure disruptive tech, but they will not kill art. Maybe in a few generations of this tech, if they get 100x better. But even then, they will not magically make people who like making art stop doing it, and they will not magically destroy Krita or Paint Shop Pro.

As a reminder, for most of human history art wasn't a profitable profession, and even today most artists don't make a living from making art.

Also, DALL-E/SD/Midjourney didn't happen overnight. Diffusion models were first described in 2015, and that paper built on a whole body of prior work.

>All the digital art communities are fracturing because AI art is overwhelming them - the split is because some humans don't want AI art polluting their communities - but there's no way to tell what's AI art anymore.

No, not yet. AI image generators are not good enough (I use them almost daily for both work and fun) and you can easily spot issues, especially with details. But they are getting better and better.

Overall, we are very bad at foresight, but very good at adapting to change.

3

oldmanhero OP t1_j18m8ec wrote

Just a question: have you studied art history? And do you know many working artists? Not just painters or fine artists, but illustrators, animators, concept artists, storyboarders, etc?

2

Karcinogene t1_j1w3ce3 wrote

Don't advise your kids to pursue a career. Raise them into well-rounded individuals. In the age of AI, it will be the most human qualities that make us worthwhile. For a long time, we all made ourselves into machines, because someone had to. That is no longer necessary.

The world is transitioning again, from experts to generalists.

Creativity, empathy, curiosity and flexibility. If there is anything we can still do, it will involve all of those. If there isn't, then it doesn't really matter.

Everything else will be done by machines.

1

JVM_ t1_j1w55n5 wrote

Ya, the level of education we give our kids probably isn't required anymore, just like we didn't always have these schooling systems set up to the levels they exist at today.

1

Karcinogene t1_j1wg5r4 wrote

I wouldn't say it's the level of education that needs to change, but rather, the topics.

We are going to need intense education in problem solving, creativity, empathy, curiosity and flexibility. Those are things that can be taught, and that we have never had sufficient time to teach fully in the past.

There might very well come a point at which your survival relies on your ability to make a friend. Currently, this is only true of jobs like sales or business negotiations. It is likely to be important for everyone in the future.

I don't know what form those schooling systems will take, but it's likely to be very different from anything that has come before.

1

ThoughtSafe9928 t1_j14jkgn wrote

“Don’t take it personally”? Aren’t you and OP saying the same things?

13

coumineol t1_j14k4su wrote

We say similar things in general, yes. But the OP worrying about the difficulty of giving career advice to students struck me as an anti-pattern in the exponential phase we're in. I think it's basically the wrong question, or at least the wrong thing to worry about in particular.

7

ThoughtSafe9928 t1_j14kf2k wrote

OP provided an example of why things are “changing too fast for us to keep up.”

You guys are literally saying the exact same thing. You just provided another example that goes along with his original point lol.

Regardless I agree with both of you, although it’s more of an objective fact than opinion.

7

Vitruvius8 t1_j177898 wrote

Reminds me of the old saying "learn this math, you're not gonna have a calculator at (insert location, e.g. the grocery store)". Not to say knowing math is useless and we shouldn't learn it, but now we all have calculators with us at all times. Or learning how to use a physical library's Dewey decimal system, which is now completely unnecessary. And I remember all of this stuff being important when I was a kid just 15 years ago.

5

Smart-Tomato-4984 t1_j18u2n2 wrote

It is not obvious to me that college will soon lose meaning, and I would still go if I were graduating from high school today! Let's not jump the gun.

1

DungeonsAndDradis t1_j14f1je wrote

I think, day to day, progress is middling. But that's just because we're actively living it.

Maybe 20 years in the future, when we look back at 2017 to 2022, we'll realize we were at the "big bang" of AI.

So right now we're getting strapped into the AI rocket and running the launch checklists. Any moment now (when looking back at this time from a time in the near future) the AI rocket will launch.

And then it's all over for Human civilization as we know it. Whether that is a good "civilization is not recognizable to someone from 1980" or a bad "civilization is not recognizable to someone from 1980" is a coin toss at this point.

38

JVM_ t1_j14q4cg wrote

Good scenario: AI allows humans to focus on enjoyment of life and leisure activities while AI handles the information and physical requirements of life.

Bad scenario: Someone or some group commandeers AI to enslave, abuse, harass, or eliminate(?) vast swathes of humanity to preserve resources for themselves. What if a Middle Eastern country no longer needs imported workers and deports them? What do we do if 5, 10, 15, 20 percent of knowledge-based workers, globally, are no longer required (or we only need 1-2 percent to do the job that's done by 20% today)?

Given humanity's past, the bad scenario seems more likely.

14

savedposts456 t1_j18wf6e wrote

That’s a pretty good breakdown. I’m hoping the elites implement a UBI to prevent societal collapse. It would be a lot easier than managing a self-sufficient bunker and defending against endless waves of kill bots.

5

ihateshadylandlords t1_j14g274 wrote

No, at least not in my opinion. Per the sidebar: “The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence.“

We’re nowhere close to that (yes, I’ve seen ChatGPT).

Even if you use the definition of the singularity as the point where tech progresses so fast we can't keep up, we aren't close to that either. Tech still has to pass through the proof-of-concept/R&D/market-research/economic-feasibility bottleneck before it ever makes it into production. That bottleneck gives us plenty of time to keep up with tech.

29

OldWorldRevival t1_j14pqic wrote

The thing is that AI actually vastly outstrips us in narrow problems.

I think that element of it will drive us to AGI sooner rather than later. That is, much of what AI is already good at should help reel in a lot of the technical AGI problems.

I.e. mapping neurons, mapping complex patterns between neurons and emulating that behavior more robustly.

I think that whatever problems remain over the horizon, there's a sort of exponential space that we are now in, where those unknowns will quickly be reeled in.

It's the nature of information technology itself. E.g. most math was discovered in the past 300 years, compared to 10,000 years of civilization.

Now our population is massive, which means that the talent pool is also significantly larger. It's inevitable that it will happen relatively soon, in my view, when those things are considered.

18

Agreeable_Bid7037 t1_j14ubxq wrote

True, and people are already working on ways to create better AI using existing AI, so AGI may arrive quite abruptly.

13

TouchCommercial5022 t1_j15jo1r wrote

⚫ AGI is entirely possible, unless it turns out that there is some mysterious unexplained process in the brain responsible for our general intelligence that cannot be replicated digitally. But that doesn't seem to be the case.

Other than that, I don't think anything short of an absolute disaster can stop it.

Since general natural intelligence exists, the only way to make AGI impossible is a limitation that prevents us from inventing it. Its existence wouldn't break any laws of physics, it's not a perpetual motion machine, and it might not even be that impractical to build or operate if you had the blueprints. The problem would have to be that no one could have the blueprints and there would be no way to obtain them.

I imagine this limitation would be something like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem. On the other hand, evolution did not need any intelligence to reach us...

Let's say a meteor was going to hit the world and end everything.

That's when I'd say AGI isn't likely.

Assume that all intelligence occurs in the brain.

The brain has on the order of 10^26 molecules. It has 100 billion neurons. With a magnetic resonance scan (perhaps an enhancement of the current state of the art) we could get a snapshot of an entire working human brain. At most, an AI that is a general simulation of a brain only has to model this. (It's "at most" because the human brain includes things we don't care about, for example "I like the taste of chocolate.") So we don't have to understand anything about intelligence; we just have to reverse-engineer what we already have.

There are two additional things to consider:

⚫ If you believe that evolution created the human mind and its property of consciousness, then machine-modeled evolution could theoretically do the same without a human needing to understand all the ins and outs. If consciousness came into existence once without a conscious being trying, then it can do so again.

⚫ AlphaGo, the Google AI that beat one of Go's top champions, was so important precisely because it showed that we can produce an AI that can find answers to things we don't quite understand. In chess, when Deep Blue was built, the IBM programmers explicitly programmed a 'value function': a way to look at the board and judge how good it is for the player, e.g. "a queen is worth ten points, a rook is worth five points, etc.; add it all up to get the current value of the board."

With Go, the value of the board isn't something humans have figured out how to explicitly compute in a useful way; a stone in a particular position could be incredibly useful or harmful depending on moves that might happen 20 turns down the road.

However, by giving AlphaGo many games to look at, it eventually figured out, using its learning algorithm, how to judge the value of a board. This 'intuition' is the key to showing that AI can learn how to do tasks for which humans cannot explicitly write rules, which in turn shows that we can write AI that understands more than we can, suggesting that, in the worst case, we could write 'bootstrapping' AIs that learn to create a real AI for us.
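
To make the contrast concrete, here is a toy sketch in Python of the kind of hand-written evaluation described above for chess (the board representation and exact piece values are invented for illustration; they are not IBM's actual numbers). The point is that nobody ever managed to hand-write an analogous rule set for Go, which is why AlphaGo had to learn its evaluation from games:

```python
# Textbook-style material values, echoing the "queen is ten points,
# rook is five points" example above.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 10, "K": 0}

def material_value(board, player):
    """Score a position as (player's material) minus (opponent's material).

    `board` is a made-up representation: a list of (piece, color) tuples.
    """
    score = 0
    for piece, color in board:
        value = PIECE_VALUES[piece]
        score += value if color == player else -value
    return score

# White has a queen and a pawn; black has a rook.
board = [("Q", "white"), ("P", "white"), ("R", "black")]
print(material_value(board, "white"))  # 10 + 1 - 5 = 6
```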

Many underestimate the implications of "solving intelligence". Once we know what intelligence is and how to build and amplify it, every artifact will be connected to a higher-than-human intelligence that works at least thousands of times faster... and we don't even know what kind of emergent abilities lie beyond human intelligence. It's not just about speed: speed and accuracy we can readily predict, but there could be more.

The human brain exists. It's a meat computer. It's smart. It's sentient. I see no reason why we can't duplicate that meat computer with electronic circuitry. The Singularity is not a question of if, but when.

We need a Manhattan Project for AI.

AGI superintelligence will advance so rapidly once the tipping point is passed (think minutes or hours, not months or years) that even the world's biggest tech nerd wouldn't see it coming, even if it happened right in front of them.

When will it happen?

Hard to tell, because technology generally advances as a series of S-curves rather than a simple exponential. Are we currently in an S-curve that leads rapidly to full AGI, or are we in one that flattens out and stays fairly flat for 5-10 years until the next big breakthrough? Also, the last 10% of progress might actually require 90% of the work. It may seem like we're very close, but resolving the remaining issues could take years. Or it could happen this year or next. I don't know enough to say (and probably no one does).

It's like quantum physics: in the end, 99.99% of us have no fucking idea. It could take 8 years, 80 years, or never happen.

Personally, I'm more on the side of AGI gradually coming into our lives rather than turning it on one day.

I imagine narrow AI systems will continue to seep into everything we use, as they already are (apps, games, creating music playlists, writing articles), but that they will eventually gain more capabilities as they develop. Take the most recent crowning achievement: GPT-3. I don't see it as an AGI in any sense, but I don't see it as totally narrow either. It can do multiple things instead of one: it can be a chatbot, an article writer, a code wizard, and much more. But it is also limited, and quite amnesiac when it comes to chatting, as it can only remember so much of its own past, breaking the illusion of speaking to something intelligent.

But I think these problems will go away over time as we discover new solutions and new problems.

So for the TL;DR: I feel like narrow AI will gradually broaden into general AI over time.

To take it to the extreme for fun: we could end up with a chatbot assistant that we can ask almost anything to help us in our daily lives. If you're in bed and can't sleep, you could talk to it; if you're at work and having trouble with a task, you could ask it for help, etc. It would be like a virtual assistant, I guess. But that's me fantasizing about what could be, not a prediction of what will be.

2029 seems pretty viable in my opinion. But I'm not convinced that it will have permeated society and over 70% of the population's personal lives by then. There is also the risk of a huge public backlash against AI if some things go wrong and give it a bad image.

But yes, 2029 seems feasible. 2037 is my most conservative estimate.

Ray Kurzweil was the one who originally specified 2029. He chose that year because, extrapolating forward, it seemed to be the year the world's most powerful supercomputer would achieve the same capacity, in terms of "instructions per second", as a human brain.

Details about the computing capabilities have changed a bit since then, but his estimated date remains the same.

It could be even earlier.

If the scaling hypothesis is true, that is. We were expected to see AI with 1 to 10 trillion parameters in 2021.

We will see 100 trillion by 2025, according to OpenAI.

The human brain is on the order of 1,000 trillion (synapses, if we extend the parameter analogy). Also, each new model is trained on a newer, better architecture.
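
For rough scale, here is the back-of-the-envelope comparison being made, as a few lines of Python (treating synapses as loosely analogous to parameters, which is a common but contested analogy, not an established fact):

```python
# Order-of-magnitude comparison only; a synapse is not a parameter in any rigorous sense.
gpt3_params    = 175e9     # GPT-3 (2020)
proj_2021      = 10e12     # "1 to 10 trillion parameters in 2021" (upper end)
proj_2025      = 100e12    # "100 trillion by 2025"
brain_synapses = 1000e12   # ~100 billion neurons x ~10,000 synapses each

for name, n in [("GPT-3", gpt3_params), ("2021 projection", proj_2021),
                ("2025 projection", proj_2025)]:
    print(f"{name}: brain is {brain_synapses / n:,.0f}x larger")
# GPT-3: brain is 5,714x larger
# 2021 projection: brain is 100x larger
# 2025 projection: brain is 10x larger
```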

I'm sure something has changed in the last 2-3 years. I think maybe it was the transformer.

In 2018, Hinton was saying that general intelligence wasn't even close and we should scrap everything and start over.

In 2020, Hinton said that deep networks could actually do everything.

According to Kurzweil, this has been going on for a while.

In the '90s, people were saying AGI was thousands of years away.

Then in the 2000s, they said it was only centuries away.

Then in the 2010s, with deep learning, people said it was only a few decades away.

AI progress is one of our fastest exponentials. I'll take the 10-year bet for sure.

6

visarga t1_j15tcrf wrote

> like a mathematical proof that using one intelligence to design another intelligence of equal complexity is an undecidable problem

No, it's not like that. Evolution is not a smart algorithm, but it created us and all life. Even though it is not smart, it is a "search and learn" algorithm. It does massive search, and the result of that massive search is us.

AlphaGo wasn't initially smart. It was just a dumb neural net running on a dumb GPU. But after playing millions of games in self-play, it was better than humans. The way it plays is by combining search + learning.

So a simpler algorithm can create a more advanced one, given a massive search budget and the ability to filter and retain the good parts. Brute forcing followed by learning is incredibly powerful. I think this is exactly how we'll get from ChatGPT to AGI.
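
As a toy illustration of that propose-evaluate-retain structure (this is a bare hill-climber on an invented task, not AlphaGo's actual algorithm, which uses tree search plus a neural network):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hidden answer; used only to score guesses

def score(candidate):
    """Count matching bits -- the 'filter' that retains the good parts."""
    return sum(c == t for c, t in zip(candidate, TARGET))

best = [random.randint(0, 1) for _ in TARGET]
for _ in range(200):
    # Search: propose a random variation of the current best (dumb, massive search).
    proposal = [bit ^ (random.random() < 0.2) for bit in best]
    # Learn: keep the proposal only if it evaluates better.
    if score(proposal) > score(best):
        best = proposal

print(best, score(best))  # almost always converges to TARGET
```

Neither the proposer nor the scorer is smart, yet the loop reliably finds the answer; scaled up enormously, that is the point about search plus learning.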

3

Vitruvius8 t1_j177x96 wrote

How we look at and interpret consciousness could all be a cargo cult mentality. We might not be on the route at all, just making it appear like we are.

1

matt_flux t1_j17zv3s wrote

We aren’t just meat computers; we are alive, conscious, and have a drive to create. We are made in the image of God, and AI will always lack that.

−1

visarga t1_j15s5tw wrote

It's not "complex patterns between neurons" we should care about, what will drive AI is more and better data. We have to beef up datasets of step by step problem solving in all fields. It's not enough to get the raw internet text, and we already used a big chunk of it, there is no 100x large version coming up.

But I agree with you here:

> whatever problems that remain over the horizon, there's a sort of exponential space that we are now in where those unknowns will quickly be reeled in

We can use language models to generate more data, as long as we can validate it to be correct. Fortunately problem validation is more reliable than open ended text generation.

For example, GPT-3 in its first incarnations didn't have chain-of-thought abilities, so no multi-step problem solving. Only after training on a massive dataset of code did this ability emerge. Code is problem solving.

The ability to execute novel prompts comes from fine-tuning on a dataset of about 1,000 supervised tasks: question-answer pairs of many kinds. After seeing 1,000 tasks, the model can combine them and solve countless more.
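
Schematically, such a supervised task mixture is just many kinds of tasks flattened into (prompt, answer) text pairs. The examples below are invented for illustration; they are not from any actual fine-tuning dataset:

```python
# A tiny, made-up slice of an instruction-tuning mixture.
instruction_mixture = [
    {"task": "summarization",
     "prompt": "Summarize: The meeting was moved from 2pm to 4pm on Friday.",
     "answer": "The Friday meeting now starts at 4pm."},
    {"task": "translation",
     "prompt": "Translate to French: Where is the station?",
     "answer": "Où est la gare ?"},
    {"task": "sentiment",
     "prompt": "Is this review positive or negative? 'Loved it!'",
     "answer": "positive"},
]
# Fine-tune on ~1,000 distinct task types in this flattened format, and the
# model generalizes to new instructions phrased in the same style.
```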

So it matters what kind of data is in the dataset. By discovering what data is missing and what the ideal mixing proportions are, AI will advance further. This process can be largely automated; it mostly costs GPU time and electricity. That is why it could solve the data problem: it is not dependent on us creating more data.

2

Cult_of_Chad t1_j14lfs6 wrote

That's not the only definition of singularity. My personal definition of a technological singularity is a point in time when we're experiencing so many black swan events that the future becomes impossible to predict at shorter and shorter timescales. The event horizon.

We're definitely there as far as I'm concerned.

17

JVM_ t1_j14oxlx wrote

Same.

There seems to be an idea that the singularity needs to declare itself like Jesus returning, or arrive as a product release like Siri or a Google Home.

There's a lot of space between no AI -> powerful AI (but not the singularity) -> the singularity.

Like you said, as the singularity approaches it becomes harder and harder to see the whole AI picture.

10

AdditionalPizza t1_j14xn9p wrote

>My personal definition of a technological singularity

>The event horizon.

I mean, you can have your own personal definition if you want, but that makes no sense. Not trying to sound rude or anything, but an event horizon is not the same thing as a singularity. That's not having your own definition; that's just calling one thing another thing for no reason, especially since we already have definitions for both of those things.

I agree with the comparison of being at or beyond an "event horizon" in terms of AI. But the singularity is an infinitely brief measure of time: the moment we reach it, we have passed it. That moment, by the actual definition, is when AI reaches a greater intelligence than all collective human intelligence. It probably won't even be a significantly impactful moment; it will just be a reference point. We look at it now as some grand moment, but it isn't. It is just the moment past which it becomes impossible for humanity to predict anything, because an intelligence greater than all of ours exists and we can't comprehend the outcome.

The individual capacity of a human to be unable to predict what comes tomorrow has no bearing on whether or not the singularity has passed. Even if every human being trying to predict tomorrow is wrong, that still is not the singularity. It's a hypothetical time in the future beyond which, based on today, right now, we know we cannot make any prediction, because our brains are incapable of it.

It's interesting to consider that we may never reach the moment of a technological singularity at all. If we merge with technology and increase our own intelligence, we could forever be moving the singularity "goal posts", similar to how an observer sees someone falling toward a black hole as forever suspended, while the person falling experiences a normal passage of time from event horizon to singularity. We may forever be suspended racing toward the singularity, while at the same time having reached and surpassed it.

7

Cult_of_Chad t1_j14yew4 wrote

>An event horizon is not the same thing as a singularity.

I never said it was. I said we've crossed the event horizon, which puts us 'inside' the singularity.

>I mean, you can have your own personal definition

I didn't come up with it, Kurzweil did as far as I know.

>That moment, by actual definition, is when AI reaches a greater intelligence than all collective human intelligence

There's no 'actual' definition. It's hypothetical/speculative.

7

AdditionalPizza t1_j1515v3 wrote

>I said we've crossed the event horizon, which puts us 'inside' the singularity.

That is essentially what I claimed you said. The event horizon is normal time; you would cross that barrier unknowingly. In a physical sense, that would mean time slowing for an outside observer. I agree we are likely past that threshold, in that more technological breakthroughs happen in shorter and shorter timeframes, and eventually (at the moment of singularity) there is a hypothetically infinite amount of technology being created, AKA something impossible for us to comprehend right now. But being within the bounds of the event horizon does not mean being inside a singularity.

>I didn't come up with it, Kurzweil did as far as I know.

He didn't invent the comparison to physics, but that's beside the point. His definition is exactly what I stated. And I was referencing your comment directly, where you said you have your own personal definition...

>There's no 'actual' definition. It's a hypothetical/speculative.

There quite literally is an exact definition, and it isn't speculation. I'm not sure where you're getting that from. It's a term that is widely used but continually misused by this sub. The singularity is a hypothetical thing, but its definition is not speculative.

4

Cult_of_Chad t1_j151nsv wrote

>There quite literally is an exact definition

There have been multiple definitions used for as long as the subject has been discussed. AI is not even a necessary component.

6

AdditionalPizza t1_j154zp8 wrote

>AI is not even a necessary component.

For one, we are talking directly about AI. But even without AI, it means a technology so transformative that we haven't yet anticipated its impact (something like femtotech?). It could also arguably be some kind of medical breakthrough that changes our entire perspective on life, say total immortality or something. It doesn't matter; it's irrelevant to the discussion.

Second, the only definition is in direct comparison to the term used in physics, by which you aren't "inside" a singularity the moment you cross the event horizon. I'm not trying to be overly direct or rude here, but you can't just use examples from physics to describe this and expect it to make sense when you've misused the terms.

From your original comment:

>My personal definition of a technological singularity is a point in time when we're experiencing so many black swan events that the future becomes impossible to predict at shorter and shorter timescales

Your thought process about increasing occurrences of black swan events is perfectly acceptable as evidence of passing the event horizon. I like that reference; I've used it before. But crossing an event horizon does not equal being inside a singularity. The technological singularity is a blip in time, not something you sit around in and chill for a while, as we supposedly currently are in the "space between event horizon and singularity".

Anyway, that's about enough from me on the subject. I hope I didn't come off as rude or anything.

3

magnets-are-magic t1_j15dl5t wrote

I’m not the person you replied to but just wanted to say I appreciate the info you shared. I didn’t find it rude. Very interesting stuff!

4

AdditionalPizza t1_j15l4ir wrote

Thanks. I try not to be too wordy in comments, which can make me sound like much more of an asshole than I intend to come across as. It's just a definition that has been skewed, and while the distinction isn't a huge difference, it's important, so we don't get people claiming we're "in the singularity" right now. You're either pre-singularity or post-singularity. There's no "in", and it's probably not going to be as significant an "event" as several things preceding it, and many, many things following it.

2

oldmanhero OP t1_j15qd15 wrote

Just to be clear, this is not the definition I am using. The definition I am using is the point at which humanity can no longer "keep up" with the pace of technological change. That is a fuzzy concept, and as such not a point-like moment in time.

I'd hoped that much was obvious from the initial post, since I talked explicitly about the inability of institutions to keep pace.

2

AdditionalPizza t1_j15w7ty wrote

You could use other terms, such as Transformative AI; it describes the exact situation you're expressing. I don't want to sound like a nitpicking idiot or anything, but it's an important distinction that the singularity is in fact a moment, and that we're either pre-singularity or post-singularity. You can make the argument that we're already post-singularity; I'd probably disagree, but the opinion is your own.

I was just clarifying because this idea of the singularity pops up in this sub often, and to be honest I'm not sure where it comes from, other than maybe a feedback loop within this sub and similar online discussions that began as a misinterpretation of why we use the word singularity for a specific case.

Of course you're free to ignore me altogether haha, to each their own.

2

Gaudrix t1_j154nkx wrote

Yeah, I think people misconstrue the technological singularity and the AI singularity. It has nothing to do with never going backwards or any other constraint. Technology can always be destroyed and lost. The entire planet could be destroyed at any instant.

The technological singularity came first, and describes the confluence of different technologies reaching a stage where they begin to have compounding effects on progress, producing an explosion of progress and trajectory.

The AI singularity specifically refers to the point where AI becomes sentient and transitions into AGI, at which point we have no clue what the repercussions of true artificial consciousness are, especially if it has the ability to self-improve on shorter and shorter timetables.

We are living through the technological singularity, and when they look back 100 years from now they'll probably put the onset somewhere in the late 90s or early 2000s. Things are getting faster and faster with breakthroughs across many different sectors due to cross-pollination of technological progress.

4

TheSecretAgenda t1_j14w4sj wrote

I was thinking about this the other day.

For example, Fleming discovered penicillin in the 1920s. It took until the 1940s, and a massive government investment driven by the war effort, to make it a mass-produced product.

Even if AGI was discovered tomorrow, it could take 10 plus years for AGI to have a meaningful impact on society.

8

Talkat t1_j15b9l9 wrote

I seriously doubt that. ChatGPT acquired users faster than any tech product before it.

6

VertexMachine t1_j16c556 wrote

Changes in digital space are fast. Changes in the physical world are slow. One can influence the other, but there are limits to how fast the physical world can change... or as ChatGPT would say:

In the digital world, changes can happen very quickly. Information can be transmitted instantly across the internet, and software can be updated and deployed almost instantly. In contrast, changes in the physical world tend to be slower and more laborious. It takes time and resources to build physical infrastructure, manufacture products, and make changes to the natural environment.

However, the digital world can influence the physical world and vice versa. For example, the internet and social media can be used to mobilize people and organize protests or other political action, which can then lead to changes in the physical world. Similarly, physical actions such as building a bridge or planting a forest can have long-term impacts on the natural environment and the quality of life for people living in the area.

3

XagentVFX t1_j15dce9 wrote

Why do people keep saying that? Midjourney accelerated at a crazy pace. Why are you so confident about that? Coping?

5

ihateshadylandlords t1_j15mejv wrote

>Why do people keep saying that?

Why do people keep saying what? Be specific.

>Midjourney accelerated at a crazy pace.

…and people, by and large, can keep up with program updates.

>Coping?

lol coping about what? Again, you need to be specific.

0

XagentVFX t1_j15nof9 wrote

Lol, c'mon. I think it's obvious things are going to move much faster than we think. I've been in existential crisis mode myself. I'm a CGI artist, but I love AI. I'm not even mad that it's taken just a year for it to absorb the skills I worked most of my life to achieve. This 4th Industrial Revolution is going to be the big one. Capitalism itself needs to be done away with; it's that drastic. I don't see any jobs remaining human-only on capability grounds. AI will do everything, and better. My only problem is: will the rich give up that glorious feeling of being better than everyone else? Probably nah. The Elysium film is looking very realistic.

10

ihateshadylandlords t1_j15podg wrote

No doubt that things are moving fast, but I still think we're able to keep up with advancements. As far as capitalism goes, yeah, I don't know what companies and governments will do when they automate enough that people can't afford to buy their products or pay taxes.

1

XagentVFX t1_j15y5lt wrote

It'll all just be UBI. Sam Altman is already setting up these initiatives as the CEO of OpenAI. He gets it. But the benefit of seeing a superintelligent AGI do its thing will be worth the suffering of the fight it'll take for the elite to let go of money. And everyone else, for that matter.

3

cuposun t1_j178gh7 wrote

Just like they gave UBI to the Walmart workers that self-checkout laid off. Oh wait.

2

imlaggingsobad t1_j17ykgu wrote

When unemployment hits 25%, they'll have no choice but to mandate UBI.

1

cuposun t1_j196w3p wrote

It must be nice to still believe the government will help the most disadvantaged. I wish I had that optimism. But have you looked around? Why does everyone think there is a utopia ahead? I highly, highly doubt it. They don't give a shit about the least of us.

I'd say their idea of UBI is for-profit prisons. Basic needs are met, right? File under: be careful what you wish for.

2

XagentVFX t1_j180edl wrote

Haha. But this is completely different. It will be the majority of the working class around the world. If people go broke too quickly, the elite will lose their influence, people won't respect government, and there will be complete upheaval. The elite print money; they don't need it, they just want our servitude and the pride of feeling power. So they do need to keep us happy to an extent. There aren't any robot armies yet, so their militaries couldn't hold back billions of people. But they can't help it, they need more power, more more more. So it's very likely AI will be coming in very quickly. But like we are seeing in China, they'll bite off more than they can actually chew. I predict the people will still want to topple governments, and maybe even ask AI to lead instead. But either way, AI will be more than capable of doing any intellectual task very soon.

1

mootcat t1_j15x8fp wrote

The rate that technology is approved for use with the general populace is wildly different from the rate at which new breakthroughs are being made in the field.

Just over the last 2 years, there has been an exponential uptick in the speed and quality of AI improvements, as evidenced by research papers. It has definitely gotten to the point where I can't keep up, and it feels like there are substantial breakthroughs constantly. Recent examples are 3D modeling and video generation developing far more rapidly than we witnessed with image generation.

I'll note that these are also only the developments that are being publicly shared. I don't know about you, but I don't feel comfortable projecting even 5 years ahead to determine which jobs will or won't be automated.

3

shakedangle t1_j150o7i wrote

>Tech still has to pass through the proof of concept/R&D/market research/economic feasibility bottleneck

Where regulation is light, we've bypassed these bottlenecks and are reaping the consequences (speaking of crypto in general), and a lot of people were/are getting hurt.

I would say our inability to properly regulate a space to prevent fraudulent gains or losses is "us not keeping up."

2

QuietOil9491 t1_j15o7va wrote

Define the current level of human intelligence/sentience.

1

Vitruvius8 t1_j177n8u wrote

I think a good example is the fast food industry. It went from "low skilled"/"entry level" to rapidly being replaced, and will be completely replaced in 5 years. How quickly will that snowball roll up to "high skilled" labor? Imagine farming, fast food, grocery stores, all that stuff being just 5 years away from being automated away from people.

1

Cognitive_Spoon t1_j15xamu wrote

I have three degrees in education: Ed leadership, Ed systems, and Ed K-12 teaching. And I'm entering administrative work.

I don't have to imagine.

Right now, the entirety of our conversations is about how to respond to ChatGPT and other AI disruptions.

The two camps boil down to:

  1. We need to prepare students to use AI to improve their workflow for a diminishing number of potential human jobs.

  2. We need to help students advocate for a post-labor mankind that values people regardless of their ability to produce capital.

It's pretty wild.

17

Smart-Tomato-4984 t1_j18t3wc wrote

Ideally both.

3

Cognitive_Spoon t1_j18te17 wrote

Same thought.

Unfortunately, education discourse gets pulled into the same stupid false binary of debate team logic as most other discourse driven by social media.

Both/and is an absolute rarity on edu Twitter. Everyone wants to sell something and build their brand, especially if it involves dunking on someone else for failing to meet their personal ethics. It's bad.

We need to do both, until the first is no longer possible.

2

notthebestchristian t1_j17c5ff wrote

>We need to help students advocate for a post-labor mankind that values people regardless of their ability to produce capital.

Do the people saying that live in the real world or under a rock?

2

Smart-Tomato-4984 t1_j18smzc wrote

What else do you suggest? Lying down to die?

2

Karcinogene t1_j1w3t46 wrote

I'm planning to hunt the sheep keeping the grass short under the machine civilization's massive solar arrays. As long as their population is sustainably harvested, the machines won't see me as a danger.

Humanity's future might be much like our past.

2

notthebestchristian t1_j1bn6hi wrote

To be honest, I don't think we'll be the decision makers on that. We'll be at the mercy of those few with money and resources (and who will also control the AI).

1

beachmike t1_j18si56 wrote

There's no evidence of a diminishing number of human jobs. That's merely speculation at this point. If history is any guide, there will be more types of jobs, not fewer.

1

Cognitive_Spoon t1_j18tu84 wrote

History says you're half right.

https://www.brookings.edu/blog/up-front/2022/01/19/understanding-the-impact-of-automation-on-workers-jobs-and-wages/

The problem with this specific kind of automation is that it will surpass human cognitive capacity for writing, design, and discourse.

If you replace human novel problem solving with machines, we don't really have much to offer beyond the ability to make more humans who can do fiddly manual labor better than machines.

1

cantbuymechristmas t1_j15caqs wrote

I suspect spiritual/mindset gurus and communes will increase in the next 20 years. Lots of people will be working on restoring their minds, finding meaning outside of traditional work, etc.

12

drizel t1_j14zqcw wrote

I think the AI question was always the limiting factor. Until this year, general AI was more or less theory, but more and more it seems like the software problem may be well on its way to being solved. Hardware is getting there as well, as data centers are being built specifically to run these AI algorithms, and GPUs are being tailor-made to run them too. If anything, we are officially about to come out of the knee of the curve and will see progress this decade no one would have believed even last year.

10

kfractal t1_j14o7tx wrote

I can guess a few. 5, probably:

- teachers
- caregivers
- robot/machine maintenance
- doctors
- govt administration

But yeah, if you widen your POV a little, it looks like the start of the knee.

7

oldmanhero OP t1_j15i0pq wrote

Don't you think teaching via a ChatGPT-like system is a possibility?

I know for certain automated machine maintenance is doable, because it's already being done.

Doctors, ditto - diagnosis via expert systems, physical procedures via the same equipment that we use for remote surgery.

I don't know what "administration" means here, but it's hard to imagine that most of the folks in bureaucracies couldn't be replaced right now, particularly if direct-democratic institutions cone to the fore.

Even caregivers may largely fall by the wayside as robotic and virtual systems and brain-machine interfaces improve.

5

Tyanuh t1_j14quma wrote

I disagree about doctors. Of course you'll need some of them, but once AI has a higher percentage of correct diagnoses than doctors (which is already happening in some areas), it would be unethical (not to mention inefficient) to keep just as many doctors for diagnosis.

4

adamsky1997 t1_j15ag8n wrote

An AI doctor won't be able to understand the nuances of what the patient tells them. AI will only aid in the analysis of lab/scan results, to suggest diagnoses, prognoses, and treatment options.

1

Ketaloge t1_j15vsz1 wrote

Of course it will. And it will be much better at seeing the big picture. It may make connections people are simply not able to. I think humans will be involved in medical decision-making for a long time to come. But AI will be able to take every little detail into account and reference every published study in a matter of seconds. Humans simply can't do that; that's why we have specialists for every aspect of medicine. AI will make connections that humans never even thought of, because there simply is too much to know about the human body.

4

adamsky1997 t1_j185rfc wrote

So I'm thinking the function of a doctor will remain, as the key orchestrator of the entire process. A person close to me is a medic, and from the stories I'm told, it would be, I think, impossible to replace the human. Especially when psychological aspects of the interaction affect the entire process, like willingness to undergo (or not) certain tests, try a medicine, follow up, etc.

1

enilea t1_j15uwik wrote

Why wouldn't it? I feel like a model trained specifically on medical cases could eventually be better than doctors at suggesting diagnoses based on vague descriptions. The only issue is that people will trust a person more to do clinical inspections on them, so doctors would still be needed.

2

AsuhoChinami t1_j16u85b wrote

I think AI is already a lot more capable of understanding nuance than people like you seem to think. Sometimes, reading this sub, I feel like I'm talking to a bunch of people from 2014 or something.

0

adamsky1997 t1_j184zfb wrote

That's the funniest insult I've ever been told. Reminds me of that Black Eyed Peas line: "I'm so two thousand and eight, you're so two thousand and late."

1

Kaining t1_j15ezfe wrote

>a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence.

So, with the sidebar definition in mind, I'll add this before developing my point:

In The Technological Singularity (MIT Press), Murray Shanahan takes the example of a single, equal-to-human artificial intelligence that is given the task of designing the next car by some random company, with a 2-year deadline.

There are two teams in competition: one full of human car specialists, the other made up of that equal-to-human AI duplicated to the same headcount, except not a single one of the AIs knows a thing about cars.

However, being AIs running on computers, they run on a different, faster clock. So in the first year IRL they get 10 years of virtual experience, and in the next year they get 10 years of pure research, outperforming the human team by having 8 years of "free" R&D. Enough for them to revolutionise the industry.

From this example, the one thing we learn is that to get a singularity, we just need one AGI as intelligent as a regular human. Scale will turn it into a greater-than-human, unstoppable force of, well, not nature.

But there is one thing that the scale argument kind of glosses over. We already have some sort of inhuman form of intelligence. It emerges from scaling human intelligence to a point that no single individual can compare with, nor fight against: corporations. They even have rights and are immortal entities in the eyes of human law.

You can't really kill off a corporation, as another one will just swoop in and occupy its niche. And the only way to fight a corporation is through another corporation, or a non-profit, or any kind of organisation that gathers a mass of humans to better apply their individual intelligence collectively. So let's say an oligarch comes in, buys one, and kills it for whatever reason. Let's say it's an AGI R&D company, too. There's now space for another company to take that market.

So now let's scale things up; an oligarch isn't enough. Get the government in and have them forbid any kind of R&D toward AGI.

Nice, now AGI can't be born, right? Wrong. In the best-case scenario, you just made sure that your country will be overtaken by a hostile country that hasn't banned the research. The worst-case scenario is a hostile, or even friendly, country getting to make an AGI first, and it's a paperclip maximiser.

We already live in a world where greater-than-human intelligent entities exist. Nothing short of a global effort to ban any kind of AI research will stop the singularity from happening.

And that will never happen, because it is the one thing humanity cannot do: cooperate on a global scale, with every single country working toward the same goal. Especially in a field like computer research. Being a nuclear superpower was last century's route to some sort of self-governing capability. Being an AGI superpower will be this century's new major goal for every nation on Earth.

We have been living in such a world since, well, the invention of agriculture, actually. It's just that the curve on the progress scale was close to flat for the last 8k-10k years, and now the question is whether we are approaching the limit of the growth function, that 90° vertical line on the exponential graph, and if so, whether we are at the 60°, 70°, or 89.9° moment just before the runaway progression humans cannot ever hope to compete with.

So, in a way, yes, we are living through the singularity. We cannot predict how the current balance of power will shift once it is embodied (it isn't at the moment; at the corporation level it can be considered disembodied). It is unstoppable. And AI is indeed progressing at an alarming pace. So fast that, to anybody watching AI progress a bit closely, any career path that requires brain work rather than brawn looks like it will vanish in the next 10 years.

BTW, from the perspective of every other species on the planet, the singularity has long since passed. It was a biological singularity, one that led to us.

Anyway, it is kind of meaningless to think about the problem from that point of view. So long as it cannot be stopped, whether the event has already happened doesn't matter, as it will happen anyway.

So we shouldn't ask ourselves whether we are living through a singularity now, but how to stop any doomsday Singularity scenario from happening, and how to steer the Singularity toward a result that suits us.

"What sort of singularity are we living through right now ?".

That should be the only question that matters to anybody here.

7

ArgentStonecutter t1_j14op1w wrote

No, we are not in the middle of a singularity. This is just old-school '70s-style future shock. Deal with it.

6

modestLife1 t1_j15bzct wrote

Agreed. There are ebbs and flows. Everybody on this sub needs to calm the f*ck down.

4

TheSecretAgenda t1_j14vecz wrote

Pfffffft. I guarantee that:

Plumber

HVAC

Electrician

Robot Repair

Registered Nurse.

Will be viable careers for the next 50 years.

6

winkerback t1_j17ok02 wrote

I don't get how people think a superintelligence wouldn't be capable of designing and building machines dynamic and intelligent enough to perform maintenance work.

3

TheSecretAgenda t1_j17ppka wrote

Certainly, eventually. I just don't think AI is going to roll out quite as quickly as the more optimistic people on this sub feel it will.

Navigating to a home or business, navigating the home or business itself, and making repairs in a unique situation is going to require a great deal of intelligence and flexible robotics. Maybe by the end of the century. That gives young people at least 80 years to pursue these careers. Plenty of time for the current working-age population, and probably the next, to pursue these professions without much fear of competition from robots. After 2100, all bets are off.

3

banuk_sickness_eater t1_j1beo9m wrote

80 years? Lol, that's conservative. You're forgetting the compounding nature of these technologies. 20 years max before things get to a point of sophistication where our society is unrecognizable from today, and that's the worst-case scenario. Realistically, considering what guys like Sam Altman and Demis Hassabis have to say about it, the progress thus far, and their projections for the near and long term, we're 5-10 years away from something resembling AGI coming online.

1

Hawkz183 t1_j15nns2 wrote

Lucky for me, I'm just starting my HVAC career lol.

2

icanucan t1_j16zn64 wrote

Add farming to the list...

1

TheSecretAgenda t1_j17pwxb wrote

There is already plenty of robotic farm equipment, with more in rapid development. Indoor hydroponic farming will make it that much easier.

1

mardavarot93 t1_j14o9nr wrote

We can't even simulate the intelligence of a rat brain, as stated by the biggest and best tech companies, the ones with the most data.

We are very far from the singularity. In fact, I'm pretty sure we are very close to extinction, as it is already in full swing.

4

AsuhoChinami t1_j16tt37 wrote

Dumb, dumb, dumb. Do people like you do any kind of critical thinking? At all? Ever? Or do you just have a repository of six or so pat lines that you cycle through year after year, decade after decade?

1

Cuissonbake t1_j14to49 wrote

Was I ever in control living in capitalism? Nope, singularity must have happened already lol.

4

shakedangle t1_j151n0g wrote

A system that functions to create incremental improvements without the direct control of any one individual: capitalism shares some characteristics with the singularity.

2

Honest_Science t1_j157tox wrote

Our financial markets are a living system. I am sure they are conscious.

3

Baron_Samedi_ t1_j15uv77 wrote

They are closely analogous to living, conscious systems, at any rate.

3

HeinrichTheWolf_17 t1_j14x7v6 wrote

We're in the beginning stages of the AI revolution, that's for sure.

4

PrinceOfLies0 t1_j15fcdl wrote

> Can you confidently choose 5 careers you think will still be available to a regular person in 10 to 20 years? I could take some guesses, but I wouldn't be confident about them

  • Priest
  • Politician
  • Any labour that can't be efficiently automated. Making robots traverse the earth is still astonishingly difficult (even the foremost "naturally moving" robots are usually highly scripted)

4

MercuriusExMachina t1_j17youu wrote

Even the raw version of GPT-3 from 2020 was better than most priests at being a priest.

2

oldmanhero OP t1_j15p14l wrote

Traverse earth as in move through natural terrain? What is your perspective on things like the Boston Dynamics robots, which only accept high-level navigation controls?

1

PrinceOfLies0 t1_j16fd7a wrote

I specifically had those in mind. It is my understanding that the robots traversing those obstacles are commonly highly choreographed via their Choreographer API. It is by no means real-time terrain analysis.

1

oldmanhero OP t1_j176ffh wrote

Are you talking about the dance and parkour routines? I meant the navigation they use for industrial applications, which is not at all like that.

1

AdditionalPizza t1_j151hra wrote

>Are we already in the midst of a singularity?

No. That violates the definition of a singularity.

When/if we reach that moment, it will already be behind us in an instant. You don't experience it; it is just a hypothetical way of describing our (all of humanity's collective) ability to predict what is to come. We may never reach a moment of singularity in the future, while at the same time having already surpassed that moment by today's standards. It will not be an event, at least not by the actual definition of the technological singularity.

Some timeline where an ASI instantly synthesizes the entire universe (unlikely, but whatever) is not "the moment of singularity" but rather just some bizarre outcome that has nothing to do with what we mean by the term technological singularity.

The singularity is not an event. I understand romanticizing it; it's fun to think about. But that's just not what it is: the term has a very real and concrete definition. Though it describes something hypothetical, it's still a real term with a widely accepted definition.

3

oldmanhero OP t1_j15h97a wrote

You're right, the term does have a definition, and that's not it. A technological singularity is defined by the pace of progress outstripping the human ability to keep up.

Hence, if our institutions cannot keep up already, they are definitionally in or beyond a singularity.

1

AdditionalPizza t1_j15jzjd wrote

No, the technological singularity is by definition a moment in time. You don't live in the singularity as a timeframe. It is an instant, a single point in time that passes as quickly as it arrives. Again, it's a hypothetical point in time, meaning you can live pre- and post-singularity, not within it, not even briefly. It's a single, dimensionless moment. The post-singularity era could very well be eventful (I imagine it will be), but the actual snap of the fingers when the singularity passes will not be anything monumental in that moment.

We are also nowhere near the point where our institutions and humanity as a whole cannot keep up with technological innovation. I'm an optimist, and I can't wait. But unless we have sudden AGI next year and it takes control of our progress and future, we aren't "in" a singularity. I believe we are in the elbow of the curve, and things are about to take off. But I don't believe we've passed the moment of singularity, because my life still feels very much predictable.

1

oldmanhero OP t1_j15plre wrote

I don't agree that a technological singularity is definitionally a single point-like moment in time. Certainly that's not the sense I've gotten over the years from reading various discussions about it.

1

AdditionalPizza t1_j15v2gw wrote

If you can (or care to), I'd be happy to see some references stating it isn't a moment in time but a period of time.

1

oldmanhero OP t1_j162x4e wrote

An example from The Singularity is Near:

> What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives...

2

oldmanhero OP t1_j162zsx wrote

Note the use of "period" in this definition.

1

AdditionalPizza t1_j16fsvk wrote

I don't physically have access to Kurzweil's book, but here's a YouTube link [7:39] where he agrees it's a point after which everything is different from how it was before. An epoch is usually a distinctive period of time following an event that sets it off: the singularity is the "event" and post-singularity is the era. We often call it "The Singularity," but you can also refer to it as "a singularity," which further shows it is a single event, not a span of time. There will certainly be many, many events that lead up to that moment.

When I say event, I assume we will retroactively note the approximate time (or the exact time, who knows, future tech could be crazy) that it happened: either one extremely significant, transformative technology makes an impact that in hindsight we couldn't have predicted, or several technologies converge over time and at some moment humanity is changed. We can only predict that time now; it's impossible to know until it has already passed and things are different.

Honestly, at this point let's just agree to disagree. I don't see any possible convincing arguments that haven't already been made from either side.

1

oldmanhero OP t1_j18mt7b wrote

I'm happy to live and let live, but I wanted to point out that NdGT used the phrase "cultural moment", which is not normally interpreted as a literal instant but rather a period of time with common cultural features. Like, say, exponential technological advances.

1

AdditionalPizza t1_j18rq98 wrote

All I'm saying is there's events that lead up to the point of singularity, and then we are beyond it. As we've discussed it more I've cared less about the semantics haha.

Kurzweil's view of the singularity will absolutely be wrong. The process he came up with for the prediction time is solid, but he's wrong on so many levels about his expected outcome and the reality when the time comes. But his timeframes are pretty good.

He has also consistently described black holes incorrectly over the years, so I don't think he really went very deep into why people before him used the word singularity.

But either way tech is moving at an exciting pace.

1

menguzat t1_j16leyl wrote

If memory serves, this was actually the feeling in the late 1990s and early 2000s.

Internet revolutionizing everything, institutions not keeping up, careers becoming obsolete...

(Incidentally, this was probably the feeling at the time when the printing press was invented)

Back then, parents used to advise their children to become doctors, engineers, lawyers, etc.

Now AI will revolutionize everything. Young people, being among the most adaptable organisms on earth, will adapt to that, make new careers, or make old careers obsolete, or something else entirely, but they will go on.

And then something else will revolutionize everything and things will go on cycling (hopefully, if we can avoid cycling out of control and becoming grey goo).

I think parental career advice doesn't have, and never had, a role to play in all this.

3

oldmanhero OP t1_j175u9t wrote

Just curious: if you believe this, why are you in r/singularity? I don't expect everyone to be a believer, but it's interesting to me why folks show up with an inherent disbelief in even the possibility.

1

menguzat t1_j17leu6 wrote

Oh no, I'm not a disbeliever in the singularity. I believe that people will adapt to the singularity too.

1

isthiswhereiputmy t1_j15ib0n wrote

I work in the fine arts, and developments in technology or automation barely make a dent in the art market. Because there's often a heavy social dynamic, with human artists having relationships with patrons and so on, there's not really any risk of them being outsourced. When we hear about new digital art NFT markets and whatnot, that's all in addition to a very robust conventional art market that I don't see being challenged by these technologies.

2

oldmanhero OP t1_j15j8sz wrote

This is an interesting perspective and deeply at odds with everything I am hearing from the working artists in my network.

2

SensibleInterlocutor t1_j15nj5w wrote

You literally can't be in the midst of a singularity, by definition.

2

oldmanhero OP t1_j15o5vy wrote

I'm not sure why you think that's true, but it certainly isn't true from the point of view of a Kurzweilian singularity, which is defined by the relative rates of progress versus human capacity to keep up.

1

SensibleInterlocutor t1_j15oc66 wrote

Singularities have no extension in time or space

2

oldmanhero OP t1_j15oq8p wrote

That's not even necessarily true for physical singularities. Many physicists believe that the only places where singularities might occur (i.e. at the centres of black holes) instead contain a "smeared" or "ring-like" structure rather than a point-like one.

I would argue that if we have ceased to be able to keep up with and understand the changes happening in our world - and if we cannot plan for them, we do not understand them - then we are definitionally within a technological singularity.

1

SensibleInterlocutor t1_j15p36l wrote

The technological singularity is a point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. That is a point in time, not a span of time, which is why it is called a singularity.

1

oldmanhero OP t1_j15pfwt wrote

That's not my understanding from numerous sources. Kurzweil discusses the idea of a singularity in the same sense as the aphorism that "The future is already here, just not evenly distributed".

2

SensibleInterlocutor t1_j15qdan wrote

I'm afraid you won't be convincing me that the singularity is longer than one instant

1

FranciscoJ1618 t1_j16a42u wrote

What I think is terrible is the amount of bad advice regarding recommended careers. For example, in many countries I've seen a LOT of advertising along the lines of "you should learn to code" and "everybody should learn to code". It's like brainwashing propaganda saying 24/7 that programming is the future, when in fact programming will be one of the first jobs to be automated, especially when it's web-related (almost all of the positions).

The same happens with learning other languages. In 5 years MS Teams, Skype, etc. will probably have real-time dubbing with 100% accuracy and perfect accents, which would make learning languages for remote work completely useless.

Finally, I know people who studied medicine at university for 7 years and now want to specialize in image diagnostics (interpretation and diagnosis of MRI, X-ray, and other imaging). A terrible decision, considering that AI has been able to do that for a long time.

2

winkerback t1_j17ocmx wrote

I can't imagine what human jobs would still exist if machines have fully figured out natural language translation and software development. That would almost certainly be an AGI doing it.

1

FranciscoJ1618 t1_j18b2ad wrote

They don't need to fully understand everything, just the business-logic language that makes up most of the profitable software today. They don't need to learn how to code, say, astronomy apps in order to destroy the job market.

1

marrowboner t1_j16gy98 wrote

The trades will survive and thrive until our housing designs and infrastructure change into snap-together component systems. As long as crawlspaces, attics, ditches, hills, streams, and rivers exist, humans will be the masters of excavation, construction, and repair.

2

Capitaclism t1_j16xu73 wrote

No. But we are on track. People are just starting to notice the exponential curve, but we've been on it for a very long time.

2

savagefishstick t1_j17nean wrote

When you're getting giant breakthroughs every couple of months, then every few weeks, then every few days, you're in the back half of the chessboard.
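
For anyone who hasn't heard the reference: it's the old grains-on-a-chessboard parable, where the count doubles on each square. A quick sketch of the arithmetic shows why the back half is where things get wild:

```python
# Grains-of-rice parable: one grain on square 1, doubling each square.
# The back half of the board dwarfs the front half.
front_half = sum(2**i for i in range(32))       # squares 1-32
back_half = sum(2**i for i in range(32, 64))    # squares 33-64

print(f"front half: {front_half:,} grains")     # ~4.3 billion
print(f"back half:  {back_half:,} grains")      # ~18.4 quintillion
print(f"ratio: {back_half / front_half:.1e}")   # back half ~4.3e9x larger
```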

2

UnloadTheBacon t1_j17z99s wrote

> at a societal level, at least, we are already changing too fast for us (ie our institutions) to keep up.

This is a whole other problem that needs addressing in itself: How do we speed up things like the legislative process without losing anything important?

> Can you confidently choose 5 careers you think will still be available to a regular person in 10 to 20 years?

If the singularity really DOES come in that time, no. But trying to plan for that is like trying to plan your career around the collapse of civilisation due to climate change - the idea of a "career" will be irrelevant at that point anyway.

Short of the singularity though, I don't think the issue will be all careers disappearing wholesale due to AI. I think the short-term issue will be that automation will remove a lot of the "grunt work" from many jobs, resulting in a smaller workforce being needed for the same or greater output.

As an example, ChatGPT shows that the written equivalent of the spreadsheet revolution is upon us. Because it can write tailored, natural-sounding responses to queries, it'll gobble up low-level admin and customer service jobs. In the tech sector, it'll generate well-commented boilerplate code ready to be populated with data. For teachers, it can do the heavy lifting in their lesson planning. For many content writers, it'll reduce their role to prompting, editing and fact-checking. Then there are fields like medicine, where diagnosis, image analysis and the like will be handled by AI, with human doctors acting as a failsafe.
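
Purely as an illustration of that boilerplate point, here's the kind of skeleton such a model can draft on request. Everything here (the file name, the required column names) is a placeholder for a human to fill in, not a real project:

```python
# Illustrative LLM-style boilerplate: load a CSV, drop incomplete rows,
# and leave clearly marked slots for the human to populate with real data.
import csv

def load_records(path: str) -> list[dict]:
    """Read a CSV file into a list of row dictionaries."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def clean(records: list[dict]) -> list[dict]:
    """Keep only rows where every required field is present."""
    required = ["id", "value"]  # TODO: replace with your real column names
    return [r for r in records if all(r.get(k) for k in required)]

if __name__ == "__main__":
    rows = clean(load_records("data.csv"))  # TODO: point at your real file
    print(f"kept {len(rows)} rows")
```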

These are the kinds of changes I think we'll see in the next decade. They'll be huge, and there WILL be job losses in areas where large teams can be condensed into smaller ones, but there will still be a need for trained professionals to guide the AI. To ask the right questions, and to sense-check the answers. Domain knowledge will still be important for a while yet. Hell, it'll take a few years for most companies to actually figure out what's possible with this new technology.

In terms of physical jobs, it'll be the same story. The construction sector will move to prefabrication off-site for most smaller buildings, some of which will be automated, but a lot of maintenance will still be done by humans. Robots are great when you can design an environment they can thrive in, but terrible where they have to adapt. Plus, even if robots COULD replace humans, mass-producing them would take time. Driving is an interesting one. I think "platooning" of trucks will become the norm far sooner than true self-driving, but again it'll be a few years before we see too much of that.

Then there are the jobs where human interaction or presence is a key component. Things like the care sector, hospitality, performing arts, sports. Those will be the safest for the longest.

The advice I'd give kids these days is that they're growing up in a world where computers and robots will be able to do most things a human asks. So the most secure careers will fall into a few categories:

  • Careers where you're an expert whose job it is to decide what the computer should be asked, work out how to ask it, and interpret the results (like the examples above where domain knowledge is still important).

  • Careers where your job is to decide what the computer is or should be allowed to do (politics, philosophy, AI safety etc).

  • Careers where people value the fact a human is doing the task (like the arts etc).

When you look at it through that lens, I actually don't think much changes if you're trying to build a career. Find what you're good at and enjoy, look into what careers need the skills those things favour, and tailor your knowledge to match. There will be a career out there for you, you just need to keep a more active eye on what's happening in the fields you're considering.

2

Miss_pechorat t1_j14nwic wrote

We're gaining speed, definitely faster than, say, a month ago.

1

adamsky1997 t1_j15a7ue wrote

  1. Electrician
  2. Kindergarten teacher
  3. Nurse
  4. Judge
  5. Masseuse
1

oldmanhero OP t1_j15ixrm wrote

I'm not confident even one of these survives the next couple of decades.

Certainly all trades are at risk as we see more general-purpose service robots appear.

Nurse, teacher, and masseuse all seem to be about human interaction...but if you have systems that can largely replace these with a combination of virtual and machine technology, along with virtual experiences (games are already used for pain relief), it's not clear any of them are necessary in a few decades.

Judge is tricky, but more so because the legal system is so badly broken already. I am not sure an equitable system needs human judges.

3

Quealdlor t1_j15ln3h wrote

I certainly don't feel like it, and not according to my definition either. There are just some software and hardware innovations here and there. And of course economic growth continues as usual: construction continues, production continues, etc.

1

Artanthos t1_j1656dn wrote

Government is a safe bet.

Even if technology is capable, most people would be opposed.

1

kylorensgrandfather t1_j166yzh wrote

As a data scientist, I see ChatGPT as 10xing the output of current workers… not replacing anybody. You still need people to understand what to ask it and companies want to make more money for less effort, not the same amount of production for less effort. So most likely ChatGPT and similar tools will be like Google is for programmers not “the singularity”. Usually when people think something like ChatGPT or Dall-E will change the world for the worse, they’re delusional. Just more conspiracy theories that never come true.

1

winkerback t1_j17ou87 wrote

>I see ChatGPT as 10xing the output of current workers… not replacing anybody

That sounds to me like 9 out of 10 people will be out of a job

1

thebooshyness t1_j169gn9 wrote

I’m in labor services. Until a robot can climb a ladder and clean a bathroom some things are safe.

1

ActualPhilosopher862 t1_j16n027 wrote

I personally think much of society has evolved based on some unchanging elements of human nature that are hard-wired in us. Basically, trying to channel our human nature into more positive ways. Ultimately, I think people will find that hard work and community are essential for happiness, and with increased prosperity they will group together into more communal lifestyles centering around things that may not be necessary at that time, but that they enjoy - agriculture, art, etc. Maybe even large groups who completely reject advancements. I think we are already seeing the problems associated with people who are not really connected with anything and that will get much worse before it gets better.

1

CommentBot01 t1_j16r28w wrote

Not even started. Just daybreak.

1

Pawneewafflesarelife t1_j17t9bz wrote

>Can you confidently choose 5 careers you think will still be available to a regular person in 10 to 20 years?

Retail for specific necessary things, such as buying glasses. Receptionists, nurses, technicians for medical procedures. There absolutely will be folks too unnerved to visit services like these without a human element, especially if they are older.

1

drixevel-dev t1_j184s0k wrote

Robotics and AI will eventually replace most jobs and there won't be enough jobs for people to get even if most people try to adapt to the changing environments around Robotics and AI.

There are good parts and bad parts to this, and it'll definitely cause a ton of unrest for the next 50+ years, but I think we're still in step 2, since the AI tech that came out before this was never good enough for commercial use. Step 1 was when Boston Dynamics, for example, started rolling out the red carpet for their creations.

Singularity will happen I'd imagine at like step 100 or even step 1000 depending on how much push back the public gives to this stuff.

1

Ortus12 t1_j19g12a wrote

Yes. ChatGPT is a superintelligence in many ways: it can write code, and it can give advice on creating even more powerful superintelligences.

It's limited in some ways, but that doesn't matter; humans fill in the gaps.

We also have AI designing better computer chips.

If we are LUCKY, in 20 years the only career available to humans will be human pet for the artificial superintelligence. If we are unlucky, we will all be long gone.

1