Submitted by googoobah t3_11cku2n in Futurology

With the singularity potentially coming within the decade, people need to seriously start reconsidering exactly what fields they should go into to ensure a successful career in the future. It's no secret that many jobs are going to be done by AI in the future (probably all jobs at some point), and it's scary to think that you may spend 4 years in college only to come out with a useless degree in a field that's populated by robots.

What do you guys think? Which degrees will gain and lose value? What about jobs? How are we going to keep from falling behind in this new age? Will any of this even matter?




khamelean t1_ja3oakp wrote

By definition, the singularity is a point that you can’t see beyond. Trying to guess what the job market will look like past the singularity is an exercise in futility.


PO0tyTng t1_ja4ny7i wrote

CEOs will never automate their own jobs. Gotta have someone to keep all the profits. Get an MBA and you’ll be fine… 😂🤣😭💀


atleastimnotabanker t1_ja56rci wrote

OP asked what to do if the singularity happens this decade - in that case it's not up to the CEOs whether their jobs will be automated, but rather up to the AGI. And what the AGI will do with this world is not possible to predict.

So probably best to just plan for the case where the singularity still takes longer (or doesn't come at all for some reason)


checker280 t1_ja4zpl4 wrote

CEOs might never automate their jobs, but unless you are at the top I doubt you will be safe.

Blue collar might be safe because there is very little standardization from one scenario to the next. Even in something like a car, how the car is used/abused and how the car is maintained will add enough variance to hinder AI from being efficient. In two homes with an identical footprint/layout, how the rooms are used and wired/plumbed will make it impossible for AI to troubleshoot or fix anything unless it can see/predict everything.

White collar jobs are about to be impacted greatly, and they will never see it coming. Disease diagnosis is becoming child's play for AI, which can identify the subtlest differences by comparison against tens of thousands of bodies before it.

I suspect law and accounting will be no different, as long as the AI can be fed enough data to compare against.


OpusChao t1_ja6ybz2 wrote

Can't be any more profits, because robots don't buy stuff.


billtowson1982 t1_ja73q1x wrote

Most big company CEOs are not the biggest shareholders in their companies. If AI really does get to the point where it can do almost every single possible job more efficiently than any and all humans, then CEO jobs are no safer than anyone else's. Zero, one, a few, or in theory (but unlikely in reality) all people will make production decisions for the AI, and no one else will do anything of economic value whatsoever. That doesn't mean some humans won't "work" - humans will still be able to make nice wooden tables, for example, but in such a world the AI could make better tables faster, cheaper, and with less waste of resources. For a person to sell a table that they made, the buyer would have to want it because it was made by a human - despite it being inferior in every other way.


jdragun2 t1_ja7dc4i wrote

I swear I saw an article on how boards would be wise to replace CEOs with AI: since all CEOs do is guess the future, and pretty much every current AI is better at that task than any human, their jobs are actually very likely to be replaced. Then boards and shareholders get THAT much more money, and they are the real decision makers. Personally I think CEOs are the biggest threat to profits when compared to AI. Their jobs will be on the block as much as anything else.


[deleted] t1_ja3n5ae wrote



IcebergSlimFast t1_ja4krgy wrote

What held true in the past won’t necessarily hold true as the pace of change and disruption becomes faster and faster.


awfullotofocelots t1_ja4wn0p wrote

The past is the only empirical evidence we have, and our remarkable ability at pattern recognition is what's helped us become apex predators and survive as a species to this point. Don't throw the baby out with the bathwater.


MikeTheGamer2 t1_ja6ap4y wrote

>Humans need work for psychological well-being

Speak for yourself. If I had everything I needed, without having to work for it, I'd be as happy as a pig in shit.


billtowson1982 t1_ja74jxn wrote

1.) The idea that humans will ALWAYS be economically productive despite all possible future technological developments is just as much blather as saying in 1950 "in all of history humans have never gone to space, so they never will!" Whether AI ever develops to the point of being able to do all jobs better than any human, I don't know. But the possibility can't be ruled out, and certainly not by "it didn't happen in the past so it never will!"

2.) Strengthening your moral and ethical character is a good thing to do. But it's silly to believe that that is the way to get ahead in a career - a weak moral character can be as much an asset, maybe even more of an asset, to a person's career as a strong one.


WoreOnFreedumb t1_ja3j3co wrote

It’s going to be a long time before plumbers aren’t needed.


IcebergSlimFast t1_ja4kgdi wrote

What do you think is going to happen to wages in the trades when tons of young, competitive, able-bodied gym-bros (of all genders) get displaced from their tech jobs and start looking for ways to earn a living? The trades require varying levels of experience to gain competence, but none of them are rocket science. And the current average standards of professionalism in residential contracting are absolute dogshit and begging to be disrupted.


googoobah OP t1_ja3le7i wrote

Trades do seem very stable. But is it wise to stay a working class citizen as the wealth gap widens like never before thanks to tech advancements?

Though I guess most people don't have a choice.


bodydamage t1_ja3uatr wrote

Do you have a quick way out of being “working class”?


googoobah OP t1_ja3x6r8 wrote

What I meant was I feel like most of those type of jobs don't have much room for financial growth unless you start a business or something.


bodydamage t1_ja3xf7f wrote

You can pretty easily make 6 figures+ in the trades and if you work for a large company you can grow and move up.


the_6th_dimension t1_ja41ha7 wrote

Some people can; most cannot. This isn't a comment about the workers' merit, it's just the empirical reality, seeing as the vast majority of trade workers are far from six figures. Here are just some examples of median wages for common trade jobs, based on data from the Bureau of Labor Statistics (BLS) as of May 2020:

  • Electricians: $56,900

  • Mechanics: $44,050 (automotive service technicians and mechanics)

  • HVAC servicers: $51,420 (heating, air conditioning, and refrigeration mechanics and installers)

  • Carpenters: $49,520

  • Architects: $87,180

  • Boilermakers: $65,360

  • Millwrights: $59,080

  • Plumbers: $55,160 (plumbers, pipefitters, and steamfitters)

  • Welders: $44,190 (welders, cutters, solderers, and brazers)

These are just some quick examples I could find with numbers attached. Certainly even within these fields some individuals make $100,000+, but these stats show that it is certainly not the norm and therefore shouldn't be described as "easy" to achieve.

And just for the sake of clarity, I want to reemphasize that I'm using the median here, i.e., the value separating the bottom 50% of earners from the top 50%. As such, half of all of the trade workers in these occupations make less than the value I provided.


bodydamage t1_ja42g4s wrote

That data is far from conclusive, and it’s too broad to actually draw any conclusions from.

I know dozens, maybe even more people in the trades who are clearing $100k and do so regularly.

Most of those annual wages = sub $30/hr pay

Union Millwrights, Electricians, Pipefitters etc are all well over $30/hr here and we’re south of the Mason Dixon line.

If you go a few hours north those same trades pay $40-50+ per hour.

I also know guys who will work 9 months out of the year, make $80-90k and then take 3 months off being “unemployed”.


kvothekevin t1_ja4ks7j wrote

I know guys that completely refute what you said. So if your argument is based on your experience of the people you know, consider yourself refuted by me.


bodydamage t1_ja4mjrv wrote

Which part?

Union Pay scales are pretty easy to find online and they’re not shy about it.


the_6th_dimension t1_ja4it8p wrote

Those data are about as conclusive as you can get. The US gov carefully and methodically tracks this specific kind of data because the US gov values having an accurate read on various aspects of the economy with earnings being a huge part of that. You can check out more about the BLS here and here if you'd like.

The claims you are making are purely anecdotal. The data I'm referencing come from researchers who are trained in the proper research methods and statistics and are provided with the resources needed to collect huge amounts of information.

Of the thousands of people in these trades, you would expect that a small proportion of them do make $100,000+, and that's what you find. But it's not the norm, nor is it even close to being common.

For example, according to the Bureau of Labor Statistics (BLS), the median annual wage for electricians was $56,180 as of May 2020. This means that half of all electricians earned more than this amount, and half earned less. Assuming a normal distribution of electrician wages, an electrician earning $100,000 would be in the top 10-15% of all electricians in terms of income.

However, income is notoriously positively skewed, meaning that this 10-15% number is actually likely to be much lower. But even if it weren't, I'd say that 15 out of 100 people being able to earn $100,000+ in a trade job does not make it "easy".
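The "top 10-15%" estimate above can be reproduced with a back-of-the-envelope normal-model calculation. Note the standard deviation here is a hypothetical assumption (the BLS median alone doesn't pin down the spread), so this is a rough sketch, not a BLS figure:

```python
from statistics import NormalDist

median_wage = 56_180   # BLS median for electricians (May 2020), per the comment above
assumed_sd = 35_000    # hypothetical spread -- NOT a BLS number, chosen for illustration

# Under a normal model, mean == median, so P(wage > $100k) is one tail area.
dist = NormalDist(mu=median_wage, sigma=assumed_sd)
share_over_100k = 1 - dist.cdf(100_000)

print(f"Estimated share of electricians over $100k: {share_over_100k:.1%}")
# A right-skewed real-world distribution would make this tail share smaller still.
```

With that assumed spread the tail share lands around one in ten, consistent with the comment's ballpark.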


>Most of those annual wages = sub $30/hr pay
>Union Millwrights, Electricians, Pipefitters etc are all well over $30/hr here and we’re south of the Mason Dixon line.
>If you go a few hours north those same trades pay $40-50+ per hour.

Assuming one works 2080 hours per year (i.e., 52 weeks * 40 hours/week), they'd make the following before taxes in a year while being paid at a rate of:

$20 - $41,600

$30 - $62,400

$40 - $83,200

$50 - $104,000
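The annual figures above follow directly from the 2,080-hour work year; a minimal sketch of the conversion:

```python
HOURS_PER_YEAR = 52 * 40  # 2,080 hours: 52 weeks at 40 hours/week

def annual_gross(hourly_rate: float) -> float:
    """Pre-tax annual pay at a given hourly rate, assuming full-time year-round work."""
    return hourly_rate * HOURS_PER_YEAR

for rate in (20, 30, 40, 50):
    print(f"${rate}/hr -> ${annual_gross(rate):,.0f}/yr")
```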

So no, I'll finish by reiterating that it is in fact not easy to make $100k+ whether you're in a trade job or not, considering that the vast majority of people will never make this much. You and anyone else can check the data I used on the BLS site; they track thousands of different jobs that you can examine for free just by searching.

Edit: fixed quotation mistake


bodydamage t1_ja4lmpx wrote

It is easy to do. If the best you can do is toss government statistics around and what you THINK goes on in the trades, then by all means do so.

Tradesmen who travel get per diem, which isn't considered "income" yet is money you get all the same.

I'm sure glad I didn't pay any mind to BLS numbers - I'd likely have stayed away from the trades. But now that I'm in them, I've found that not only are those numbers largely nonsense, I've also found that $100k isn't all that high in terms of income in the blue collar world.

Go to a HCOL area and many of the union trades are paid $60+/hr.


the_6th_dimension t1_ja4trs8 wrote

Can you provide any evidence to support your claim other than your own personal experience? And can you provide any evidence that would suggest that the information I supplied is incorrect or misleading? Because the BLS is completely transparent with their methodologies; they provide multiple sources on their website that lay this out in pretty excruciating detail.

Because if not, it seems like you just want to prop up a narrative that fits your worldview and not necessarily reality. I'd be happy to consider contradicting data if you can provide it but if you can't, I'm going to stick to the data that I do have considering it's the best (only) data that's been offered so far.

It's also a super easy google search to show that blue collar jobs make a median annual income of $39,850 so I'm not sure what the $100k comment is about.

Actually, as an afterthought, I have a question that I should probably ask. I promise I'm not meaning this in a rude way but do you understand what mean (aka average) and median actually indicate? I made an assumption that you did but that wasn't necessarily a fair assumption on my part.


bodydamage t1_ja4uvi9 wrote

I'm sure I could, but I don't care enough to go look. Feel free to go look at union pay scales in different places in the country; I know of more than a few that are over $50/hr.

I've never found BLS data on income to be particularly accurate in ANY job. It's also entirely too broad, since you're looking nationally, and that fails to take into consideration the differences in COL and thus pay.

I’ll rephrase; If you live anywhere near a medium-large sized city, making $100k+ is easy to accomplish in the trades.

Average is just all your data points added together and divided by the number of data points.

Median is the mathematical middle point between the highest and lowest data points.
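As a side note, the definition above actually describes the midrange (the midpoint of the extremes), not the median (the middle value when the data are sorted); on skewed data like incomes the three statistics diverge sharply. A quick stdlib check with purely illustrative numbers:

```python
from statistics import mean, median

# Hypothetical salaries in thousands -- one high earner skews the data.
salaries = [40, 45, 50, 55, 200]

midrange = (min(salaries) + max(salaries)) / 2  # midpoint of the extremes

print(mean(salaries))    # 78: pulled up by the outlier
print(median(salaries))  # 50: the middle value, robust to the outlier
print(midrange)          # 120.0: what the comment above actually describes
```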


the_6th_dimension t1_ja51mzl wrote

Well I'm not sure that you could because you haven't, so the answer to my first two questions is "no".

But here, let's look at multiple sources and where I can find cost of living information and adjust income with that:

With income adjusted by COL (where possible)

|Source|Median Annual Income for Trade Workers|
|:-|:-|
|BLS|N/A|
|Census Bureau|$56,464 (as of 2019, adjusted for cost of living using the CPI-U-RS)|
|Glassdoor|N/A|
|Payscale|$60,015 (as of February 2023, adjusted for cost of living using PayScale's Cost of Living Calculator)|
|Economic Policy Institute|$70,000 (as of 2021, adjusted for cost of living using the CPI-U-RS)|


Unadjusted income

|Source|Median Annual Income for Trade Workers|
|:-|:-|
|BLS|$44,840 (as of May 2020)|
|Census Bureau|$45,555 (as of 2019)|
|Glassdoor|$47,171 (as of February 2023)|
|Payscale|$50,331 (as of February 2023)|
|Economic Policy Institute|$54,000 (as of 2021)|


I mean, what can I say? 50% of trade workers are estimated to make less than these numbers (and that share only grows as the salary threshold increases), and even after adjusting for COL the vast majority of individuals are making <$100k, with many making less than half that. It wouldn't be that way if it were easy.

Yes cost of living affects pay, but not nearly enough to support your claim. Union members also tend to make more, but most people aren't in unions or benefitting from them (though I wouldn't argue with changing that). These are again just some examples I could find quickly.

  • From BLS, the union membership rate for all occupations in the United States was 10.3% in 2021. This includes both trade and non-trade workers.
    • The BLS also provides data on union membership rates for specific occupations. For example, as of 2021, the union membership rate for construction and extraction occupations (which includes many trade workers) was 12.9%.
  • From EPI, the union membership rate for construction workers specifically was 13.5% in 2020. This is slightly higher than the overall union membership rate for all construction and extraction occupations reported by the BLS.
    • The EPI also reports that the union membership rate for production and transportation workers, which includes some trade workers, was 15.4% in 2020.

So even if we assume these numbers are off, I think it's fair to say that <20% of trade workers are unionized. This certainly helps them, but it doesn't apply to most people.

Have I been able to make solid enough arguments and give enough evidence from a variety of sources to change your mind? Maybe you happen to make $100k+ working in a trade and the other people you work with or know in the trade are also doing similarly well. If that's the case, it makes sense that you'd extrapolate that most people who do a similar kind of job (e.g., trade work) would probably have a similar outcome to yourself. It's just in this case you'd be wrong specifically because you and your immediate circle of reference are all outliers. Like, I'm not trying to sway you on some political point here, I'm just trying to present the actual numbers.


lavendersmoker616 t1_ja51py9 wrote

Bro stop coping!!!


bodydamage t1_ja526j4 wrote

Yup definitely coping.

There's, idk, a dozen or so factories that I know of local to me where this is not only possible but also reality.


ApocalypseSpokesman t1_ja3v2wc wrote

I wouldn't sneeze at them, people in the trades make bank.

Check the income of say, an elevator repair technician.


treeof t1_ja4izne wrote

Plumbers can make hundreds of thousands of dollars per year, they’ll be fine.


TheFringedLunatic t1_ja4l6gz wrote

Nah. According to the BLS the real answer is the same as ever; judge, doctor, lawyer, engineer, or specialized IT.

Trades don’t even come close in the median.


essaitchthrowaway3 t1_ja444jz wrote

Holy fuck can this sub be any more cringe??

This sub has become the /blunderyears of outlandish and absolutely ridiculous ideas.


Ragnarotico t1_ja4vf59 wrote

Don't worry, it will only be another few hours before some other bored college student reads something about ChatGPT and asks "What will the future hold now that AI is here?"


essaitchthrowaway3 t1_ja5e5jh wrote

The worst thing is that there seems to be no moderation in here to limit these ridiculous posts.


themistergraves t1_ja5snta wrote

Seriously, this sub ought to be renamed r/antiAI or r/neoluddite


essaitchthrowaway3 t1_ja5x3zd wrote

Neoluddite maybe, because today it's all about the evil AI; a few months ago it was robots, and a while before that it was some other cataclysmic technology that was going to kill us all in mere days... And of course it didn't.

This shit is just embarrassing at this point.


Heap_Good_Firewater t1_ja4efm7 wrote

>With the singularity potentially coming within the decade

within 50 years, maybe


greatdrams23 t1_ja4mxvl wrote


In the '60s and '70s, AI was "just around the corner".

I studied AI in 1980, and AI was just around the corner.

Now, after another 40 years, it is just around the corner.


Cryptizard t1_ja4wsim wrote

You are trolling if you say you can't see the difference this time.


johnnymoha t1_ja4z0xk wrote

Seems arrogant to think you can see the difference this time.


Cryptizard t1_ja4zefv wrote

No, it's just uh... what is it called... objective reality? Maybe you should try it some time.


boersc t1_ja50jqu wrote

AI currently really isn't that much different from 30-40 years ago. Not really. Back then, they also did mass training of AI and also got it horribly wrong, for reasons that were difficult to explain. An AI identifying tanks based on whether the sun was shining was a prime example back then.

It hasn't progressed that much beyond that, when you actually study it. Boston Dynamics is probably the most advanced nowadays, and even those robots aren't really "smart". They can't do what they haven't been trained to do. Same with all the chatbots nowadays: they can only combine and extrapolate what they have been taught. There is no original thought.


atleastimnotabanker t1_ja576s5 wrote

Boston Dynamics specializes in robotics; there are different companies that are far more advanced when it comes to AI.


hervalfreire t1_ja79j62 wrote

Machine Learning (“mass training”?) didn’t exist 40 years ago. Cases like the tank one you described used a completely different technique that didn’t utilize RNNs or the like. Other than hardware capabilities, there’s been a big number of breakthroughs in the past 2-3 decades or so, from LSTMs to diffusion models and LLMs. It’s 100% not even close to what we did back in the 90s…


Cryptizard t1_ja51wb0 wrote

No, lol, you are completely bullshitting here. It is extremely different, even compared to a few years ago. The advent of the transformer model literally changed everything. That's not to say that it is the only advancement, or even that it is ultimately the thing that will lead to AGI, but to claim that it is "not much different" is either uninformed or trolling.


johnnymoha t1_ja6m48v wrote

Sure random redditor. You've cracked the code. You're the smartest among us. Your reaction shows you're less concerned with objectivity than you think.


ianitic t1_ja5chec wrote

Most of the models are based on the same core algorithms from decades ago. The biggest improvements have been from Moore's law, which will end in 2025 at current rates. Even without Moore's law ending, we are far away from an AGI.


Cryptizard t1_ja5d7l8 wrote

You can say that, but it doesn't make it true. The algorithms are extremely different. The attention/transformer model is what made all of this recent progress possible.


ianitic t1_ja5fsnj wrote

So says you too. Transformers are marginal in the grand scheme of technological progress. Even if transformers were 10x more efficient than CNNs or LSTMs, that would still be an improvement that came orders of magnitude slower than Moore's law - CNNs/LSTMs being decades old.

There's a reason why all articles regarding a singularity use Moore's law as their basis: it's been the largest contributor to our increase in technological advancement over the years. And that contributor is ending.


Cryptizard t1_ja5ipf2 wrote

>That contributor is ending.

Now it's my turn to point out that they have been saying that since the 80s.


ianitic t1_ja5jy61 wrote

That's true, but it was always known not to be a forever thing, and it has slowed down. I think the last big milestone where they said that was a die size of 45nm or so, because of quantum tunneling. Thing is, there is a physical limit to how small we can make transistors.

Once we're dealing with transistors that are as thin as atoms, where do we go from there? Yes, quantum computing, optical transistors, graphene, etc. exist, but do they provide a higher performance per dollar than silicon transistors? Probably not, and it's all about price per performance.


Cryptizard t1_ja5mqse wrote

Nvidia seems to disagree with you. They think it is speeding up.


ianitic t1_ja5pfbj wrote

A CEO trying to sell their products says that their products are going to be even better in the future? They're trying to make Nvidia seem relevant and ease investor concerns, with all the other big tech companies taking a hit recently.


Enzo-chan t1_ja5053v wrote

Yes, but this time we have computers many orders of magnitude denser, faster, and more efficient than those of the 60s-80s. I'm not saying it'll happen in the next decade; it's just that claiming that sounds way more credible nowadays.


hervalfreire t1_ja793k1 wrote

It always sounds more credible as things progress. We're still VERY far from a singularity or AGI; the best computers can do today is language models (something we've known about and done for decades), just faster/larger ones.

Yes, we’re about to see a big impact in professions that mostly rely on “creativity” and memorization, but I’d not worry about a “singularity” happening any time soon.


karnyboy t1_ja6cfxh wrote

Exactly. I have yet to see a Boston Dynamics robot deliver something that proves to me it can react with AI speed at things a trained human can't do faster (climbing, etc.).

Now, AI replacing certain menial jobs... yeah, that may be right around the corner. McDonald's is pretty close to a fully automated assembly line already. Soon they may only employ like 4 people per building. Maybe even one trained just to "be there".

Mailman? Maaaaybe a drone, that's about it. But a drone is not going to know wtf a black bin in the back yard by the garage is from another and open it and put my package in, so maybe not.


net_junkey t1_ja6rg48 wrote

AIs like ChatGPT have the complexity of a brain. With Moore's law predicting PERSONAL commercially available computers with computing power equal to a brain coming in 20-25 years, in 3 decades we should have the convergence of software and hardware for sentient AIs.


billtowson1982 t1_ja74aj4 wrote

1.) Whether AI is sentient or not is almost irrelevant for its impact on jobs or pretty much any other aspect of society. Something can be plenty intelligent without being sentient, and even a rather dumb being can still be sentient. AI intelligence (or in other words, capability) will be the main thing that affects society. Not sentience.

2.) No AI today has the complexity of a brain by any meaningful measurement. Even a brief chat with ChatGPT is enough to show a person how stupid it is. Further, today's AIs are all absurdly specialized compared to biological actors. Powerful, but in absurdly narrow ways.


net_junkey t1_ja7ar08 wrote

#2 - have you talked to people? ChatGPT's answers are as good as or better than the average person's. Not to mention this is after it got lobotomized to not give answers that could be considered offensive or that sound like the AI has personal opinions.


billtowson1982 t1_ja8xpvl wrote

They're only better in the sense that Google circa 2004's answers were better than the average human's - both had access to an extremely large database of reasonably written (by humans) information. ChatGPT just adds the ability to reorganize that information on the fly. It doesn't have any ability to understand the information or to produce truly new information - two abilities that literally every conscious human (and in fact every awake animal) has to varying degrees.


net_junkey t1_ja9ktk7 wrote

AIs understand. Human brains learn concepts by forming a bundle of neurons dedicated to the concept of (let's say) "cat" based on the input of our senses - sight, smell... Modern AIs are designed to replicate the same process 1 to 1 on a software level. If anything, they understand basic concepts better than humans.

The big jump right now is AIs understanding the relationships between concepts. Example: "cat" should be linked to the concept of "pet" and definitely not to the concept of "oven".

The problem is there are still kinks in the relationships-between-concepts part. AI is modeled on the human brain, and the human brain is not a perfect system. In theory, writing a simulation of the human Id, Ego, and Super-Ego and bundling it into a sentient AI package is quite doable. Making it happen while the foundations are still unstable is practically/near impossible.
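The "cat is closer to pet than to oven" idea maps onto how embedding models typically measure concept relatedness: cosine similarity between vectors. The three toy 4-dimensional vectors below are entirely made up for illustration; real models learn vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical "concept vectors" -- purely illustrative, not from any real model.
cat = [0.9, 0.8, 0.1, 0.0]
pet = [0.8, 0.9, 0.2, 0.1]
oven = [0.0, 0.1, 0.9, 0.8]

print(cosine_similarity(cat, pet))   # high: related concepts
print(cosine_similarity(cat, oven))  # low: unrelated concepts
```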


billtowson1982 t1_jaa2f0n wrote

You don't know anything about AIs, do you? I mean, you read an article in USA Today and now I'm having to hear you repeat things from it, plus some stuff you imagined to be reasonable extrapolations based on what you read.


net_junkey t1_jabo5vx wrote

The learning part of AI is based on/similar to how neurons learn. Once an AI has learned/been trained, it stores the data and filters for it on the hard drive.

How does a brain work? Data is written in neuron clusters (scientists have been able to find neuron bundles representing concepts). The filters are neural connections coming out of those bundles. The brain optimizes performance by strengthening commonly used connections and removing old unused ones.

Trained AI + continuous learning algorithm = basic brain, even if only comparable to an insect's.


TomatoBustinBronco t1_ja4bqh8 wrote

I asked ChatGPT just now what the highest paying careers would be after the singularity. It says to take its predictions with a grain of salt, but trends suggest:

  1. AI/Robotics Engineer
  2. Futurist/Strategist
  3. Neuroprosthetist
  4. VR Designer
  5. Space Tourism Guide
  6. Cybersecurity Specialist
  7. BioTech Engineer
  8. Energy Management Analyst

It gives short explanations for each. Same answers if I ask it most successful businesses.


RayHorizon t1_ja4q2k2 wrote

I'll play electric guitar in a garage, disconnected from the internet, with my friends :D


helaapati t1_ja63ayr wrote

Cybersecurity… someone will need to take down the rogue robot overlords.


ruferant t1_ja4u216 wrote

Rather than focusing on individual survival techniques, we should be focusing on how we can use this technology to make a better world for everyone.


roncoobi3 t1_ja3sobj wrote

The singularity might come within the next decade, but it will still be a long time before many jobs within a corporation are replaced. I am an OT manager who drives our organization to ride the front wave of technology. It's insanely difficult to do, because big corporations are slow to adopt.

Especially when you start talking about automation of workflows that involve AI deciding what to do with your data, security is gonna drag that down endlessly.

That being said, I'd be staying away from any kind of software development. ChatGPT can write me a very well annotated PowerShell script in 30 seconds that used to take my guys a day to build. Once AI in that space really gets going, developers are gonna be SOL.


Wolfo_ t1_ja4091v wrote

Yes, but AIs like ChatGPT are not made with traditional coding languages. The software developers in the fields making and designing these AIs may flourish.

There will still need to be people with a computer science background to direct, understand, and apply not only the concepts but also the results too. There will still need to be specialists to enhance and refine them as well as all of the other supports for the AI.

IMO developers aren't SOL. They will just have to adapt - it should be like learning a new language.


roncoobi3 t1_ja42quy wrote

Totally agree that we still need the idea people. How does it work, how does it integrate, etc. But companies won't need an agile team of 10 developers. Instead you will pay MSFT (or whomever) X amount of dollars for your code to be developed in minutes based on your requirements. I just think the # of developers required will drop 90% in 10 years.

Once again, I'm talking about the point when we get to a true singularity. I can tell you that using ChatGPT for PowerShell scripts has already freed up significant time (obviously customization of the script is still required), but it's crazy to think what future iterations of it will be able to do.

IMO, we probably still can't fully comprehend what AI will be in 5-10 years.


the_6th_dimension t1_ja3z1rb wrote

I think the real question is, once we automate enough tasks such that there won't be enough "good jobs" for people to build a career out of, are we going to support them so that they may thrive without the need for income generated by their work or will we treat them like we do anyone else who is no longer able to generate a profit?

You really only need to be worried about automation in one of those scenarios.


rileyoneill t1_ja4hde7 wrote

Depending on where you live, a lot of markets already have a minority of jobs that are "good jobs". We have a very high cost of living right now. The majority of jobs in my city do not pay well enough for a person to afford their own 1 bedroom apartment at today's prices.

Like, AI is not going to touch janitorial work. We need janitorial work as a society; allowing buildings to fall apart is not an option. However, janitorial work is not a well paying job. It's hardly a good career. People look down upon those who do it, even though it has to get done. We want janitors, we just don't want them living in our community.

I think we need to see a future where this technology drastically reduces our cost of living. We need automated food systems, automated transportation, and AI/robot-built urban housing. We need to leverage this technology to bring down expenses.


the_6th_dimension t1_ja4ua2h wrote

There you go.

Either use it to bring down expenses for everyone

or

use it to maximize profits for a select few.

It's only a threat in one of those situations. In the other it's a blessing. People argue about which outcome goes with which option, which is confusing to me but not surprising.


rileyoneill t1_ja6esx2 wrote

The general trend when going from something human-made to machine-made is a massive reduction in cost. The companies that do this profit for a while, but eventually competition shows up and drives the cost down. Markets become flooded.

Gutenberg did this with books. His original goal was just to use the printing press to sell printed Bibles at hand-written prices. But once others figured out how the technology worked, the price of books plummeted. Once people start mass-producing books, it becomes very hard to keep them as expensive items.

The big profits don't come from high prices, the big profits come from an enormous volume of sales.

Compare that to a landlord who owns 3 rental properties. The way they maximize their income is by having the rent as high as it can possibly be. They don't move to 4 properties or 5 properties. They want the absolute highest rent per property.

Whereas some sort of AI architect/engineer/builder company could go into a city and build not 3-5 units but 50,000 units, or in a place like Los Angeles, 2 million units. The goal isn't to become a landlord but to get them all sold; even at a modest profit of $20,000 per unit, that would be billions of dollars in profit.

Then figure they are going to do this in Orange County, and San Diego, and San Jose, and San Francisco, and Sydney, and Portland, and Vancouver, and Chicago, and Miami, and Austin, and Denver. Instead of trying to maximize profit off a few units, the goal will be to maximize profit by building enormous quantities of housing in city centers.

There is far more money in building millions of units, giving the real estate market total shock and awe by collapsing local prices, sucking up all the renters as buyers, and then expanding into other markets. Flood a market like Greater Los Angeles with 4 million units of housing and the price of all housing will crash.

The rush of buyers will have the all time deal of the century on a new condo in LA and then the existing home sellers will find themselves in an impossible to win situation.


ImNotYourOpportunity t1_ja3khz3 wrote

Idk but I’m wondering if computers will become the new slave labor and you have to own a fancy machine to do work and make money for you.


googoobah OP t1_ja3ljq0 wrote

How would that work? Wouldn't whoever is paying you just cut out the middleman and hire the computer?


ImNotYourOpportunity t1_ja3lp9a wrote

I think so but maybe we’d invest and own a portion of a business that owns part of the computer or part of the system.


awfullotofocelots t1_ja4x9ap wrote

They'll become the slavedrivers. Look at gig apps; maybe they're already starting.


stewartm0205 t1_ja3z0ph wrote

It’s like fusion. AI will be here in ten years. But the date just recedes away.


marketlurker t1_ja4afre wrote

This is what I am wondering. Which path will AI go down?


stewartm0205 t1_ja5xgz8 wrote

AI with enough training will do some simple, regular stuff like answering the phone. But if the task is complex and requires on-the-job training, then a person will have to do it.


[deleted] t1_ja4jpqd wrote



MikeTheGamer2 t1_ja6awpi wrote

>There would be piles of rubbish on every street corner and much more crime as police forces are closed.

replace those jobs with robots.


scrubbless t1_ja8k709 wrote

I agree with you here, there is a delicate balance in capitalist societies.

If you automate all of the workers and there are none left, then you have no-one to sell products to and companies go under. Doesn't matter how many robots you have making your products, if you have no customers.

The issues I expect to see from Automation are similar to the sort of problems we're seeing through our current iteration of capitalism - inequality. Automation and robots may speed up the process, but at some point the people that have no money and no prospects will find a way to get by, it may even involve violence.


XxMAGIIC13xX t1_ja4fpvu wrote

Ahhhhhh, I also remember the days when I feared automation and rode with the Yang Gang. Look, just because robots can replace a job doesn't mean those replaced will be out of one. Job loss is not as simple as one-to-one; eventually new lines of work can be created, or previous ones can expand due to the decreased demand for labor.

Also, there's no real way to predict which jobs will be lost and which ones won't. If I knew the answer to that, I would be shorting stocks instead of browsing this site.


just-a-dreamer- t1_ja4r9x8 wrote

Nope, a pandemic can kill people and free up labor. So can early retirement, suicides, drug overdoses, low birth rates, etc.

There are no new jobs, just less people to fill positions. And positions will get cut fast with AI.


XxMAGIIC13xX t1_ja525qu wrote

So you think humans have already invented every job that will ever exist and from this point on we will only ever have less of them.


scrubbless t1_ja8kmov wrote

When this robot uprising happens, I will make my new job - farming and defending my newly appropriated land. I guess they could make robots to solve that problem too.


just-a-dreamer- t1_ja53rrt wrote

From this point? No, give it a few years, yes.

Here is the deal: AI is growing exponentially in cognitive and robotic capabilities. But you and I are not.

Your body and brain do not improve; quite the opposite. Imagine you must learn something completely new and master it well enough for a good salary; that takes like 3-5 years. AI will catch up faster, because it learns faster than you.

AI will catch up to any job a human does, white collar or blue collar. In fact, white collar will probably go down first, in bigger numbers.

This might take decades, but I think it starts within years.


Orly_77 t1_ja5aadv wrote

To add to this debate, humans will likely merge with technology. Cyborgs!!!


RoyalT663 t1_ja579xy wrote

I've thought about this for a while. We are still teaching kids largely like we did in Victorian times.

What we need to do is teach young people how to think and how to ascertain new information, synthesise it, and disseminate it. Not what to think. We still focus on teaching facts - this is what robots can do easily.

What they do less well is creativity, humour, sensitivity, perception, nuance. We need to be orienting education toward developing emotional intelligence, empathy, social skills, and public speaking.

Robots can do the dull, dangerous, dirty, and dear (expensive). We will still want people in a range of jobs, and there are plenty of jobs where we need the human touch.


mrbittykat t1_ja4gplh wrote

Honestly I feel like this is something you can't plan for. Someone is going to have to fix the mechanical aspect of this, so I'd say learning that would be a safe bet. They say they're trying to free humans from unfulfilling, mundane, or dangerous jobs, but I don't see it stopping at that. I see artists struggling, I see engineers struggling, I see video game developers struggling. I'd say within 20 years, top-paying jobs in tech will be minimum wage. Once AI can be spoken to without being overly specific, that's when we need to worry.


just-a-dreamer- t1_ja4ri5f wrote

I put my money on plumbers to flourish before artists, engineers, software developers.


Vandosz t1_ja4xocu wrote

I'm in the same boat. I don't see how the system will work. The only way is a complete rethinking of humanity. Maybe some sort of UBI with the condition that you accomplish tasks in your hobbies and get some social contact that way. You can't just tell people to sit at home; people will absolutely get depressed.


greatdrams23 t1_ja4nenn wrote

Every incremental step forward requires an exponential growth in computer power.

If you double computer power every couple of years, you can make a small step forward. There will be no giant leap.


NVPcMan t1_ja55zkw wrote

Quantum computing has entered the conversation.


fleeyevegans t1_ja4p6q5 wrote

STEM careers will be valuable. The work of AI will have to be overseen.


Mason-B t1_ja4ppbh wrote

> With the singularity potentially coming within the decade

No, this is a delusional statement. We are not anywhere close. Not in 20 years, probably not even in 50 years. Try something like 90 years before we even get close to artificial general intelligence.

> people need to seriously start to reconsider exactly what fields they should go into to ensure a successful career in the future

This has always been the case. But the reality is that people need to consider being more active in politics to change the economic systems we live under so that work is no longer necessary for survival.

> What do you guys think? Which degrees will be gain and lose value? What about jobs?

Skipping the obvious answer of "programmers will be the last people to be programmed out of a job."

People are always going to want to interact with real people. Any sort of human interaction service job is going to keep existing in some capacity. Therapist, teacher, physical therapist, social worker, personal trainer, and so on.

And then you have all the other jobs for people who can't afford or are ideologically opposed to the fancy robots... which is like all other jobs unless the system is changed.


Cryptizard t1_ja4y67r wrote

You are really not following what is going on, or else you have closed your mind so much you can't process it. 90 years for general intelligence? Buddy, 30 years ago we didn't even have the internet. Or cell phones. Computers were .001% as fast as they are now. And technological progress speeds up, not slows down.

I don't think it is coming tomorrow or anything, but look at current AI models and tell me it will take 3 more internets worth of advancement to make them as smart as a human. Humans aren't even that smart!

>Skipping the obvious answer of "programmers will be the last people to be programmed out of a job."

This is a really terrible take. Programmers are going to be among the first to be replaced, or at least most of them. We already have AI models doing a big chunk of programming. Programming is just a language problem, and LLMs have proven to be really good at language.


Background_Agent551 t1_ja5hzot wrote

I’m sorry, but I’ve seen your comments through this thread and the only close-minded person I’ve seen is you.


Cryptizard t1_ja5ik9t wrote

Cool comment. Excellent details to back up your assertion lol


[deleted] t1_ja5j1yk wrote



Cryptizard t1_ja5jkba wrote

>it’s at least 60 years into the future.

With no argument, cool cool.

>We’re not in a courtroom, I don’t need to cite evidence

And I don't need anything to call you a dumb piece of shit with his head stuck up his ass. Miss me with your bullshit please.


Background_Agent551 t1_ja5js54 wrote

I’m the one with no argument, are you trolling or is someone in real life really this fucking dense?


[deleted] t1_ja5md9m wrote



Le_Corporal t1_ja5pybj wrote

I don't think anyone can be certain of how long it will take for something to develop. If there's anything to learn from the past, it's that it's very difficult to predict the future.


Mason-B t1_ja5xgvt wrote

> You are really not following what is going on, or else you have closed your mind so much you can't process it. 90 years for general intelligence? Buddy, 30 years ago we didn't even have the internet. Or cell phones. Computers were .001% as fast as they are now. And technological progress speeds up, not slows down.

The early internet was 40 years ago, if not longer, which is about when we got wireless phones. Further, we've had technologies like global instant communication and hyperlinked knowledge bases for at least a hundred years. I don't think technology moves quite as rapidly as you are imagining here. But even if it does...

Computer speeds can keep doubling, but at a certain point we hit atomic limits for silicon transistors. And fundamentally we have a simple problem: AI today is at least seven orders of magnitude away from the efficiency needed to even reach parity with biological intelligence, which, by the way, is beyond that atomic limit. But even if it weren't, it would take 70 years at a minimum if the computer industry managed to keep doubling at its current speed (which has been slowing down for decades; take that into account and it's closer to 90). Modern AI models are effectively cockroach brains, or thin slices of larger brains, trained on very specific problems.
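A back-of-the-envelope check on the 70-year figure in the comment above (a sketch only: the 10^7 efficiency gap and the doubling periods are the commenter's assumptions, not measurements):

```python
import math

# Claimed gap between today's AI and biological intelligence:
gap = 1e7  # seven orders of magnitude in efficiency

# Number of hardware-performance doublings needed to close that gap:
doublings = math.log2(gap)  # ~23 doublings

# Years to close the gap under different doubling periods:
for years_per_doubling in (1.5, 2.0, 3.0):
    total = doublings * years_per_doubling
    print(f"{years_per_doubling} yr/doubling -> ~{total:.0f} years")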

And fundamentally, we are at the end of this boom in AI tech. We are at the end of the sigmoid curve of adoption. Things like deepfakes and early drafts of Midjourney and GPT were being developed 15 years ago, when the latest DNN breakthroughs were made. In that time the technology has matured: it's been put into easily accessible libraries, and engineering work has gone into getting it to run efficiently on cutting-edge hardware (re: GPUs). Google even had specialized chips made in the latest fab cycle to push the envelope of what is possible. And now we are here, at the end of the curve, where it's being put into people's hands and made generally accessible.

But there is no follow-up. There have been no new theory breakthroughs in the last decade, and there are no more easy hardware performance gains to be grabbed. Everything that was imagined as possible with the tech 15 years ago is here today, with nothing much new on the horizon. And from an algorithmic-complexity standpoint, every doubling of performance you put into these models gives you a sub-linear improvement in output. So as computers get twice as fast, and the companies spend a year training instead of 6 months, and they buy twice as many servers with their capital to train on, we'll get maybe a 30% improvement. I cannot emphasize enough how, in the last 10 years, we have gone from researchers running Python code on one computer for a week to teams of engineers running hand-tuned assembly on clusters of bespoke hardware for months to get here. That's easily 20 or 30 doublings of performance that we cannot repeat, while hardware doubles every 18 months (more like 2 years).
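The sub-linear-returns claim can be sketched as a power law. This is a hypothetical model, not anything from a paper: if output quality scales as compute**alpha with alpha < 1, the "~30% per doubling" figure above corresponds to alpha around 0.38, since 2**0.38 is about 1.30.

```python
# Hypothetical power-law model of diminishing returns:
# quality ~ compute**alpha, so each doubling of compute
# multiplies quality by 2**alpha.
alpha = 0.38  # chosen only to match the ~30%-per-doubling claim above

for doublings in (1, 5, 10, 20):
    gain = 2 ** (alpha * doublings)
    print(f"{doublings:>2} doublings of compute -> {gain:.1f}x output quality")
```

Under this toy model, 20 doublings of compute (a million-fold increase) yields well under a 200x improvement in output, which is the shape of the argument being made.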

But we are at the end of this curve, not the beginning. Just because you haven't been reading AI theory papers for the last 20+ years, just because you have not been paying attention, does not mean this technology is novel or surprising or somehow going to hit big strides. Speaking of which, if you had been paying attention, you would see that we are coming up on an AI-winter scenario again, probably around 2025/2026.

> We already have AI models doing a big chunk of programming. Programming is just a language problem, and LLMs have proven to be really good at language.

Tell me you don't do programming or computer science without telling me. Sure, programming is just a language problem, like rocket science is just a math problem, or like a cow is actually an idealized point in space. What this skips is the HUGE gap in the details that actually allows anything to work. Yes, it can write code to do a thing, but actually designing a good solution, problem-solving edge cases, or debugging a specific error? None of that (and tons more) is covered by these models yet. Not even close.

So much of the complexity in non-trivial code bases is in very large language inputs. Like "crash the AI"-sized inputs. So yes, we could train the AI on the language model and the code base, but how does that get us to the next step? It just makes it a domain expert in what already exists.

Using these extant models to program something even "two steps" complex usually fails to get anywhere. Yes, it can download a website, and yes, it can put data in a database (besides, these are the rote tasks that most programmers have already abstracted into libraries anyway). But it can't put the two together, even when the specification for the data isn't so large that the model couldn't comprehend it anyway. This isn't something that can be overcome incrementally; it's the whole goddamned ball game.

Going back to the earlier point of history of technology. We are at the end of this technology, we don't have a way forward to solve these fundamental issues with the technology without going back to the drawing board. Perhaps by integrating more classic techniques we will find a path forward. But like last time, that will take an AI winter to reset the space.


Cryptizard t1_ja5yk73 wrote

It’s astonishing how you make like a dozen points and almost every single one of them is flat wrong. I don’t want to argue with you since it seems like you are not open to new information, but I will say that Moore’s law has not been slowing down for decades, transformer/attention models are explicitly a new theory that has made the current wave of AI possible and was not like anything that was done before, and I am a computer science professor and I program all the time and am well-versed in what AI can and can’t do at the moment.


Mason-B t1_ja5zzwq wrote

> It’s astonishing how you make like a dozen points and almost every single one of them is flat wrong.

Hmm, I think I'll just quote you: "Cool comment. Excellent details to back up your assertion lol"

> but I will say that Moore’s law has not been slowing down for decades

You link a story showing Moore's law is now two years instead of the original assertion of 18 months. Cool story bro.

> transformer/attention models are explicitly a new theory that has made the current wave of AI possible

Which is over 6 years old from publication now, with some lead time before that. I may have rounded up to decade, but still, no new theory on the horizon.

> and I am a computer science professor and I program all the time an am well-versed in what AI can and can’t do at the moment.

After getting my graduate degree I got distracted by my better paying side gig in the industry. But more or less, same.


Cryptizard t1_ja618dl wrote

You said moores law has been slowing for decades and would be the main bottleneck for the future, I show you actual evidence that it has only very slightly started to slow since 2010 and somehow now that was your argument the whole time lol.

You say that current AI is the same as it was 15 years ago (I am using your exact language here), I point out that transformers are very new and different, you say oh but those are 5 years old.

This is the definition of moving the goalposts. Like I said, you are not interested in an actual discussion, you want to stroke your ego. Well, you aren’t as smart as you think friend. Bye bye.


Mason-B t1_ja61gpk wrote

Getting this in before you try and block me again to snipe your responses in.

> You say that current AI is the same as it was 15 years ago (I am using your exact language here)

No, my exact language was:

> > All the things that were imagined being possible with the tech 15 years ago is now today here.

Also (edit),

> You said moores law has been slowing for decades and would be the main bottleneck for the future, I show you actual evidence that it has only very slightly started to slow since 2010 and somehow now that was your argument the whole time lol.

I said Moore's law in either incarnation would be a bottleneck, but it is also slowing (in a parenthetical, no less!). Over long periods of time the slowing trend becomes obvious.

> > it would take 70 years at a minimum if the computer industry managed to double at it's current speeds (which has been slowing down for decades, take that into account and it's closer to 90)

It's like you are trying to find enough nitpicks to justify stopping arguing over a technicality.


Cryptizard t1_ja61rh1 wrote

You said “there are no new theory breakthroughs in the last decade.”


Mason-B t1_ja624u2 wrote

I already admitted I rounded up the 6 year old by publication date paper there. I should have said half a decade.

Do you have a substantive counter point instead of nitpicks? (In something I wrote from memory in 30 minutes?)

Because I would very much enjoy being wrong on this.


Cryptizard t1_ja648ox wrote

Yes like I said everything you wrote is wrong. Moore’s law still has a lot of time left on it. There are a lot of new advances in ML/AI. You ignore the fact that we have seen a repeated pattern where a gigantic model comes out that can do thing X and then in the next 6-12 months someone else comes out with a compact model 20-50x smaller that can do the same thing. It happened with DALLE/Stable Diffusion, it happened with GPT/Chinchilla it happened with LLaMa. This is an additional scaling factor that provides another source of advancement.

You ignore the fact that there are plenty of models that are not LLMs making progress on different tasks. Some, like Gato, are generalist AIs that can do hundreds of different complex tasks.

I can’t find any reference that we are 7 orders of magnitude away from the complexity of a brain. We have neural networks with more parameters than there are neurons in a brain. A lot more. Biological neurons encode more than an artificial neuron, but not a million times more.

The rate of published AI research is rising literally exponentially. Another factor that accelerates progress.

I don’t care what you have written about programming, the statistics say that it can write more than 50% of code that people write TODAY. It will only get better.


Mason-B t1_ja67eyi wrote

> You ignore the fact that we have seen a repeated pattern where a gigantic model comes out that can do thing X and then in the next 6-12 months someone else comes out with a compact model 20-50x smaller that can do the same thing. It happened with DALLE/Stable Diffusion

DALLE/Stable Diffusion is 3.5 billion to 900 million parameters, which is about 4x. But the cost of that is the training-set size: millions of source images versus billions. Again, we are pushing the boundaries of what is possible in ways that cannot be repeated. With 3 orders of magnitude more training data we got a 4x gain in efficiency (assuming no other improvements played a role in that). I don't think we'll be able to find 5 trillion worthwhile images to train on anytime soon.

But it is a good point that I missed; I'll be sure to include it in my rant about "reasons we are hitting the limits of easy gains."

> You ignore the fact that there are plenty of models that are not LLMs making progress on different tasks. Some, like Gato, are generalist AIs that can do hundreds of different complex tasks.

If you read the paper, they discuss the glaring limitation I mentioned above: limited attention span and limited context length, with a single image taking a significant fraction (~40%) of the entire model's context. That's the whole ball game. They also point out that the fundamental limit of their design is the known one: quadratic scaling to increase context. Same issues of fundamental design here.
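The quadratic-scaling point in numbers (a minimal sketch of vanilla self-attention, ignoring heads and layers): the attention score matrix alone has one entry per (query, key) pair, so cost grows with the square of the context length.

```python
# Vanilla self-attention builds an n x n score matrix over the context,
# so cost grows with the square of context length n.
def attention_entries(n_tokens: int) -> int:
    return n_tokens * n_tokens  # one score per (query, key) pair

for n in (1_000, 10_000, 100_000):
    print(f"context {n:>7,} tokens -> {attention_entries(n):>18,} score entries")
```

A 10x longer context costs 100x the attention computation, which is why extending context length is the expensive part.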

I don't see your point here. I never claimed we can't make generalist AIs with these techniques.

> I can’t find any reference that we are 7 orders of magnitude away from the complexity of a brain. We have neural networks with more parameters than there are neurons in a brain. A lot more. Biological neurons encode more than an artificial neuron, but not a million times more.

Depends how you interpret it. Mostly I am basing these numbers on supercomputer efficiency (for the silicon side) and the lower bound of estimates made by CMU about what human brains operate at, which takes into account things like hormones and other brain chemicals acting as part of a neuron's behavior. And yes, that does get us to a million times more at the lower bound.

If you want to get into it, there are other issues, like network density and the delay of transmission between neurons, where we also aren't anywhere close at similar magnitudes. And there is the raw-physics angle of how much waste heat the different computations generate, again at a similar magnitude of difference.

To say nothing of the mutability problem.

> The rate of published AI research is rising literally exponentially. Another factor that accelerates progress.

The exact same thing happened with the boom right before the AI winter in the 80s. And also with stock market booms. In both cases, right before the hype crashed and burned.

> I don’t care what you have written about programming, the statistics say that it can write more than 50% of code that people write TODAY. It will only get better.

The GitHub statistics being put out by the for-profit company that made and is trying to sell the model? I'm sure they are very reliable and reproducible (/s).

Also, "can write the code" is far different from "would." "My quantum computer can solve problems on the first try" doesn't mean that it will. While I'm sure it can predict a lot of what people write (I am even willing to grant 50%), the actual problem is choosing which prediction to actually write. And that says nothing of the other 50%, which is likely where the broader context is.

And that lack of context is the fundamental problem. There is a limit to how much better it can get without a radical change to our methodology or decades more of hardware advancements.


Cryptizard t1_ja69a0b wrote

It seems to come down to the fact that you think AI researchers are clowns and won’t be able to fix any of these extremely obvious problems in the near future. For example, there are already methods to break the quadratic bottleneck of attention.

Just two weeks ago there was a paper that compresses GPT-3 to 1/4 the size. That’s two orders of magnitude in one paper, let alone 10 years. Your pessimism just makes no sense in light of what we have seen.


Mason-B t1_ja6cwsg wrote

> It seems to come down to the fact that you think AI researchers are clowns and won’t be able to fix any of these extremely obvious problems in the near future.

No, I think they have forgotten the lessons of the last AI winter. That despite their best intentions to fix obvious problems, many of them will turn out to be intractable for decades.

Fundamentally, what DNNs are is a very useful mechanism for approximating optimization algorithms over large domains. We know how that class of algorithms responds to exponential increases in computational power (and, re, efficiency): more accurate approximations at a sub-linear rate.

> For example, there are already methods to break the quadratic bottleneck of attention.

The paper itself says it's unclear whether it works for larger datasets. But this group of techniques is fun because it trades accuracy for efficiency. Which, yeah, that's also an option. I'd even bet that if you graphed the efficiency gain against the loss of accuracy across enough models and sizes, it would match up.

> That’s two orders of magnitude in one paper, let alone 10 years.

Uh, what now? Two doublings is little more than half of one order of magnitude, nowhere near two. And even if they had compressed them by two orders of magnitude, having to decode them eats up most of those gains. Compression is not going to get enough gains on its own, even if you get specialized hardware to remove part of the decompression cost.
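For reference, the unit conversion at issue in this exchange: a 4x shrink (the "1/4 the size" paper) expressed as doublings and as orders of magnitude.

```python
import math

# A compression factor of 4 (model shrunk to 1/4 the size),
# converted into doublings and orders of magnitude.
factor = 4
doublings = math.log2(factor)   # 2 doublings
orders = math.log10(factor)     # ~0.6 orders of magnitude, not 2
print(f"{factor}x = {doublings:.0f} doublings = {orders:.2f} orders of magnitude")
```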

And left unanalyzed is how much of that comes from getting the entire model on a single device.

Fundamentally, I think you are overlooking the fact that research into these topics has been making 2x and 4x gains all the time, but a lot of those gains are made in ways we can't repeat. We can't further compress already well-compressed stuff, for example. At some point soon (2-3 years) we are going to hit a wall where all we have is hardware gains.


Sukimay t1_ja4tshn wrote

In general we should be prioritizing education for everyone. STEM gets a lot of attention but we will always need social workers, etc. If AI can reduce or eliminate paperwork and red tape all the better.


Cryptizard t1_ja4yhua wrote

You have to just study what you personally find interesting and fulfilling. Generally, that is a good way to get a job as well because the more passionate you are about a subject, the more you want to learn about it, the more competent you will be. So if the singularity comes and nobody has a job, then at least you spent your time on something that was worthwhile to you.


Le_Corporal t1_ja5ph0w wrote

Tell that to people who end up $40k+ in debt because they decided to study what they find "interesting and fulfilling" and chose (insert useless degree here).


Cryptizard t1_ja5wze4 wrote

I didn’t say you should do it at the most expensive school you can find. Have you heard of community college?


Malakai0013 t1_ja5fm02 wrote

What we should've been doing from the get-go is utilizing automation to shorten the workday. If you have 100 workers, and automation can take over half those jobs, we should've kept the 100 workers and had everyone work half the day with the same paycheck. Then there wouldn't be any concern about any of this stuff. Instead, what we did was allow the rich company ownership to cut the workforce, overwork the remaining workers, and use machines as raw profit. Of course, there are costs with operating machines, I'm not saying they're free, before some chucklehead thinks they have a gotcha. The point is automation has been used to help the rich get richer, instead of helping all of humanity, creating an idea that workers should fear technology instead of embracing it.


Le_Corporal t1_ja5q95z wrote

Good luck saying that to your boss when another mass layoff happens


Affectionate-Aide422 t1_ja7ynac wrote

Back in the 80s, my friend’s dad wouldn’t let him go into programming because AI was going to kill all programming jobs. My buddy missed out on a huge career in tech. Proceed like the singularity is 50 years away. That’s what I did.


Important-Ability-56 t1_ja4mc1a wrote

Figure out how to convince elected officials to spread the wealth and leisure gained from automation around to more people than the assholes who happen to be in the corner offices?


awfullotofocelots t1_ja4x0h5 wrote

People hate communicating with machines, and most people are really bad at it. Learn a language or three (programming languages, that is).


Vandosz t1_ja4xakx wrote

If you're trying to organize your life around robots taking the jobs you might want to do, good luck. If a singularity does happen, probably any job you can think of doing will be replaceable. It's impossible to plan for; just do what you enjoy, make money doing it, and that's that. Nothing else you can do.


boersc t1_ja4zrlu wrote

Get into any field associated with renewables, and you're good for the next 20-30 years.


UniversalMomentum t1_ja50noe wrote

AI isn't what really changes the job market dramatically, though; it's the robotic engineering, because you have to physically be able to do the job.

The brain-power part a lot of the time isn't going to require a real AI; it's just a bunch of repetitive actions.

Like, you don't need AI to pick vegetables or pick up trash or do deliveries or mine commodities; you just need endless physical labor.

It's all going to come in waves. Industries will adopt automation at different rates, so you really don't have much to worry about anytime soon, and by the time you do have something to worry about, society will probably already be adapting in ways that make your speculation pointless right now.

There's no way you can predict all the new jobs that are created by an emerging technology. And when I say there's no way, I mean you won't even come close, so we can't really speculate about what future job markets hold with enough certainty for it to be anything but misleading.


quantumoutcast t1_ja53oms wrote

Um, if there's a singularity within the decade, there won't be any jobs or any other type of physical matter, so really no need to plan for it. Or is there some new internet definition of "singularity" that I haven't heard about yet?


hikingsticks t1_ja594h5 wrote

Own assets. Buy and renovate property, rent it out. Set up a glamping site. Cut people's hair.


DoubleCTech t1_ja5hqyc wrote

I think the last jobs to transition will be government positions, nurses, teachers, and programmers, just because of the "trust" or extremely close interaction with people involved. I honestly hope the great AI transition will happen fast and smoothly. I doubt it, though; I think it could take us a good 50-100 years to fully transition. In that time there will be tons of poverty and hardship. We should come out stronger than ever, though. So we might suffer, but at least our grandchildren or great-grandchildren should live in the paradise that society has been working towards since we started farming and settling down.


sskoog t1_ja5j5ck wrote

I believe this is already happening with radiology -- robotic vision + edge-detection can identify masses and growths better than human specialists -- medical professionals I trust say "no more radiologists after this generation."

Perhaps the temporary 'trick' is to find sub-disciplines in which "human intuition" still plays a material part -- i.e., not formulaic credit-checks or bid-pricing -- and to acknowledge that we're probably still decades away from true AI singularity, which will be human-disruptive past any predictability.


SamGropler t1_ja5mpqd wrote

There will never come a time when all jobs will be done by AI.


No-Wallaby-5568 t1_ja5qhbf wrote

The trades will always be in demand. You think a robot is going to come out to your house to figure out why your drain is clogged and fix it? Nope. And in health care, do you think anyone is going to want to see a machine for couples counseling? Nope. In STEM fields, AI is just going to be another tool to make people more efficient: eliminate the drudgery so people can focus on high-level thinking. If your job is drudgery, though, I'd be worried.


I'm sure AI will get good enough to fool a lot of people into thinking it is truly intelligent, but that just points to the gullibility of humans. There are more synaptic connections in the brain than there are stars in the Milky Way. The brain is the most complex thing in the known universe, and we do not understand it at all. To think that AI is some kind of sentient being is ridiculous.


ace5762 t1_ja5xo7k wrote

It's worth acknowledging that the definition of 'The Singularity' is when an artificial intelligence is first able to produce an artificial intelligence that is more complex than itself. I'm not necessarily convinced we're in that region yet.

Natural language processors like ChatGPT and other machine-learning tools are certainly set to drastically alter the landscape of a lot of industries in the coming years, but I struggle to see how these tools would produce The Singularity, mainly because the basis of their intelligence derives from a statistical evaluation of previous knowledge. In a sense, they repeat back what they believe the statistically likely answer to be, which doesn't leave a lot of room for apotheosis.
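A toy sketch of that "statistical evaluation of previous knowledge" idea: a bigram model trained on a hypothetical two-sentence corpus. (Real systems like ChatGPT use neural networks over enormous token corpora, not raw counts, but the principle of replaying the most likely continuation is similar.)

```python
from collections import Counter, defaultdict

# Hypothetical tiny "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict(word):
    # Repeat back the statistically most common continuation
    # seen during "training" -- no understanding involved.
    return next_counts[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat" (seen twice after "the", more than "mat" or "fish")
```

The model can only ever echo continuations it has already seen, which is the point being made above: statistically astute, but with no room to go beyond its training data.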

Then again... humans created these tools from the basis of our own observed knowledge, so... hard to say. The really interesting stuff will probably start happening once AI tools are made that can incorporate multiple vectors of information on the same platform and draw decisions based on that.


Dickmusha t1_ja64oc5 wrote

People are very confused by this modern AI stuff, because a lot of it was already possible for quite some time; no one cared or had a reason to use or develop it. The AI we are using to do stupid stuff is not really the AI you need to be scared of. So, singularity? Eventually... but it's not right now.

Also, silicon is facing its own issues. The bigger fear should be that computers are about to hit an impossible-to-pass limit that will actually STOP further advancements in AI that we actually want to happen. Moore's law is dead, and transistors are reaching a serious stall in advancement; that is why AMD and Intel are focusing on workarounds instead of actual smaller processes.

My biggest fear is that chips will hit this wall and the economic fallout from this lack of advancement will be the bigger issue. The AI we are working on now will then also hit a wall, and its positives will be stopped in their tracks, wrecking all the futurist scenarios we are hoping for that could be used to uplift the undeveloped world. Physics is at a stall, microchips are at a stall, material science is at a stall. AI may just design tech we won't even be able to act on or make real, and we will be stalled out in advancement despite knowing what the next steps should be.


beders t1_ja6jmfd wrote

Given the garbage-in/garbage-out problem of the currently popular crop of AI, I'd say you have a long, fruitful 45 years of work ahead of you in any field you choose.

And in general, the higher educated you are, the safer you are from technology-driven disruptions in the economy.


NewDad907 t1_ja6tk7w wrote

Is there a way I can get paid to call out bullshit? I think I’m pretty good at it, and I don’t mind standing in a conference room ripping into idiotic ideas.


Libro_Artis t1_ja7rjy1 wrote

The Singularity is a myth, a buzzword to sell books and attract investors.


CTDKZOO t1_ja3vcl6 wrote

Singularity preppers are faced with an impossible challenge when it comes to figuring out when it will happen and what that means for all of us.

I don't use "preppers" as a negative - it's pretty much the same task as a doomsday prepper.

  1. Figure out when the big change happens
  2. Pre-guess and prep for surviving it as well as you can

I don't truly think we can get down to brass tacks on what a post-Singularity world looks like. Yes, that's just my opinion, man... but yeah, it's a bit of tilting at windmills IMO.


Sawfish1212 t1_ja6fhci wrote

Aircraft mechanic: no robot could do my job; there's way too much subjective evaluation and too many non-standard systems.

Not to mention the FAA would never let a nonhuman approve an aircraft for return to flight status.


just-a-dreamer- t1_ja3kgya wrote

Your main goal in life must be to command a big share of the economic pie floating around at any given time.

To that end it is more important to associate with the right people than the mere job you do. Rich kids go to college, so must you. At least to the bachelor level.

I would aim for a government job in defense at the DoD, something tech-related. They have each other's backs.

With the right experience and connections it is easy to transfer to private military contractors. The zip codes around Washington, DC, are the wealthiest in the entire USA.

When you look at these guys, many are offspring of rich families, pushed in this direction with connections. If you can join their club, you may enjoy their privileges and thus their job security.


[deleted] t1_ja3risu wrote



just-a-dreamer- t1_ja3tmmk wrote

I don't get the reaction. Do you not need to eat? Are you counting on the kindness of strangers?

In capitalist society you can die in the streets and people walk right over your body. It is how the system works.

The more economic power you can wield, the more your power grows in appearance. Appearance gives you access to more power and thus more appearance. These are compounding effects that set some people massively apart from the general population.

With enough credit for example, you can settle in a nice school district. You can bond with other parents and give your kids opportunities they would never have elsewhere.

It is more important to run with the right crowd than anything else. You can be lazy, dumb, incompetent, greedy, whatever; as long as you are plugged in with the right people, your life will turn out better.


roleparadise t1_ja42nc2 wrote

He wasn't advocating for free-market capitalism, you strange, reactionary, binary-named person. He was giving his take on how to survive in a version of our current system that is more prominently occupied by AI and machines.


Iffykindofguy t1_ja3ohe8 wrote

This is just buying into a system that will never accept you. This sort of bootlicking complacency is why the ultra-rich get away with what they do. You will NEVER command a big share of the "economic pie floating around."


peadith t1_ja3qcme wrote

People who think everyone can be rich are, quite frankly, kinda scary.


roleparadise t1_ja46do7 wrote

You're just going to the opposite extreme. The reality of the situation is that lots of people end up in a better economic situation than they were born into, and continue to gain bigger pieces of the economic pie as they develop. It's not impossible, or even that uncommon, and you'd see that if you were willing to. But it takes accepting responsibility for your own growth instead of acting like the only ones in control of your fate are government leaders and the ultra rich. It takes being able to look at yourself and think "what am I doing that isn't working, and how can I do better," rather than just thinking about how much of a victim you are to the system. I know it's hard, but if you just throw your hands up and think it's 100% out of your control, then you'll be 100% right.


Iffykindofguy t1_ja4bpzg wrote

I am not going to the opposite extreme. I never said anything about giving away all your wealth or not having a job. What the fuck are you ranting about lol


roleparadise t1_ja4ekk3 wrote

I never said anything about giving away all wealth or not having a job either. I don't know where you got that from. I was responding to you suggesting that gaining a foothold under the system is strategically impossible. And if you were never disagreeing with that, then you probably misunderstood what just-a-dreamer- meant by "a big share of the economic pie" (hint: it doesn't mean being ultra rich).


googoobah OP t1_ja3ma9g wrote

This seems like good advice, especially the first 2 lines.

One question, though. Why tech? Won't most tech-related jobs not directly related to advancing AI start to lose value in the future?

Even if these jobs are secure purely through your connections, I imagine they'll become obsolete eventually.


Iffykindofguy t1_ja3ok3t wrote

It's actually really, really bad advice. It's telling you to dedicate your life to a hollow goal that is designed to suck you dry of your resources.


googoobah OP t1_ja3pv93 wrote

Well, yeah. Capitalism sucks, obviously. But what are the alternatives? Don't you want the best chance possible of making it through this climate?


Iffykindofguy t1_ja3x4u9 wrote

Well, money gives you security, so you're not dumb for wanting to focus on it, but part of that is propaganda. If you want happiness, get a job that gives you enough to provide for your needs and look elsewhere for emotional/intellectual security. Focus on your community? Raise a family? Teach yourself useful things? Get a hobby? The best chance you have of making it through whatever's coming is by having a strong sense of connection to a community (if available to you; it's not always) and by having the emotional resiliency to not crack under changes or not getting your way. No one knows what's coming. If you put all your eggs in the cash basket, eventually you will crack.