Submitted by gaudiocomplex t3_zxnskd in Futurology
neomage2021 t1_j21sx7m wrote
So weird. I worked in academia as an AI and computational perception researcher, and then in quantum computing as a researcher, and it is absolutely fine.
These posts about AI are ridiculous. We are absolutely nowhere close to AGI.
Ghostglitch07 t1_j22d8q7 wrote
Kinda feels like someone seeing the Kitty Hawk flight and panicking about what we'll do when we make it to Mars.
Spunge14 t1_j23pdr6 wrote
You don't have to be afraid of AGI to see that AI is about to entirely transform most industries.
I would say a lot of people should be ready for a shuffle this decade.
walterhartwellblack t1_j23vuug wrote
>You don't have to be afraid of AGI to see that AI is about to entirely transform most industries.
Just like the automobile and the computer.
NeverEndingCoralMaze t1_j24igc1 wrote
Real estate is one. Zillow is working on it; they even launched a brokerage that relied heavily on AI, and they lost a ton.
One good area is medicine. My psychiatrist recently retired; I started using Brightside and their providers get AI advice on which meds will work best for their patients based on symptoms and feedback about previous drugs used. My new doc recommended a change to one of the two antidepressants I use, based on my results, and the difference has been awesome. I had no idea it could get better than it was. Zero side effects now.
Spunge14 t1_j26ay8r wrote
I think about medicine a lot too. Basic radiologist will be an essentially nonexistent job.
NeverEndingCoralMaze t1_j27toga wrote
They’re not far off. Pay attention to the X-ray monitor for the carry-ons next time you go through security at the airport. You’ll see it highlight certain items that alert the TSA agent to take a closer look. I know it isn’t the same as diagnosing a human, but it is the start.
Rapscallious1 t1_j240lvb wrote
There’s a big difference between whatever “a lot” and “shuffle” mean versus what “most” and “unemployed” mean, though. I do think people will need to be more open to mid-career retraining than they have been traditionally.
MagneSTic t1_j245q0e wrote
“About to” as in, at best, a couple of decades. AI is about to be the next fusion energy.
Introsium t1_j24bcn7 wrote
As of a couple of weeks ago, a public, free-to-use LLM-based AI can just casually pass the bar exam. It has the fastest adoption curve of anything — anything — we’ve seen in our lifetime.
I’ve already seen the explosion. I’m just waiting for the shockwave.
MagneSTic t1_j24c241 wrote
That doesn’t really surprise or impress me, tbh. It’s a computer passing a standardized test, and it benefits from having far better memory and recall than a human while never drinking, partying, or getting distracted. You could program a non-AI to do something like that. An AI passing a standardized test is like you or me passing it with a searchable answer key.
Introsium t1_j24d816 wrote
You could program a non-AI to perform any given task, but the entire point of my statement is that it casually passes the exam. It was not programmed to do that, but that doesn’t stop it from passing what’s commonly regarded as a very hard test. It simultaneously crushes programming challenges. But, most importantly, it can do most people’s jobs. It can’t do all of them perfectly, but it can do them so much more cheaply than humans can that the loss in quality is worth it.
You’re looking at a Fabricator and saying “but that other machine can build a car, this isn’t really impressive”, which is entirely missing the point.
DoktoroKiu t1_j24lxlt wrote
It may have passed the test, but I would not use this as an indication that it could represent you in court. Unless it is fundamentally different from other large language models, it will confidently lie; it is only really "motivated" to produce probable responses to a given prompt.
The AI trained only on research papers was shut down very quickly when it started producing very detailed lies, citing studies that seemed plausible yet don't exist.
Now this is by no means an unsolvable problem, but solving it is not something we can just assume. AI alignment is not an easy problem.
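A toy sketch of the "probable, not true" point (all numbers are made up for illustration; no real model works off a lookup table like this, but the sampling idea is the same):

```python
import random

# Hypothetical next-token distribution a language model might assign
# after the prompt "The study was published in". The truthful answer
# ("no such study exists") is just unlikely *text*, so it rarely wins.
next_token_probs = {
    "Nature": 0.40,            # plausible-sounding, possibly false
    "Science": 0.30,           # plausible-sounding, possibly false
    "a": 0.25,                 # generic continuation
    "[no such study]": 0.05,   # truthful, but improbable as text
}

def sample_token(probs: dict) -> str:
    """Sample a continuation in proportion to its modeled probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The objective rewards probable-sounding text, not truth: a confident
# fake citation is far more likely output than an admission of ignorance.
print(sample_token(next_token_probs))
```

That's the whole failure mode in miniature: nothing in the sampling step checks whether the cited study exists.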
IsntThisWonderful t1_j24ipap wrote
Oh, yeah. You could totally just pass the bar with Google search. Sure. Tell me more of your fanciful tales of professions you know nothing about!
#ConfidentlyIncorrect
PapaverOneirium t1_j22lhmj wrote
thank you. I’m so tired of the AGI histrionics just because some highly specialized tools can make some pretty pictures and write high school grade essays.
Yes, it is very impressive and current applications are disrupting industries and will do so much more and quickly. No, it does not mean AGI is right around the corner.
CoolioMcCool t1_j23aev6 wrote
Not a true AGI, but tools powerful enough to make a significant portion of jobs obsolete feel very close. Change has been accelerating (basically forever), and we now live in a world very different from a decade ago, whereas centuries used to pass in which not much changed and most people died in a world similar to the one they were born into.
It's definitely something we should think about before it is right around the corner, and it is plausible within our lifetimes (depending, I guess, on your age).
daveescaped t1_j23rrsm wrote
Most people view my job in purchasing as a series of binary choices between A and B, where information is gathered on both alternatives, the information is evaluated, and a clear winner is selected. That could not be further from the truth.
Business is typically the activity of selecting among many mediocre options. What humans are good at is presenting the option THEY selected as the superior option when, in truth, all the options are mediocre. A good employee then ensures that the option they championed succeeds, so as to bolster their claim to have selected the best option (and not because it actually was best). This isn’t to say that all options are equal. Some are better. But the determination of which is best is often very subtle. And the skill isn’t simply selecting the best option. It is expediting that option. It is ensuring the purchase is implemented properly.
I guess my point isn’t that my job is difficult. It’s that it is a combination of subtle decisions that the employees themselves are unaware they are making. How would you ever program activities that exceed the conscious mind itself?
How would AI sell a new car using persuasion? How would AI convince a patient they are going to be OK? How would AI mediate a messy divorce? How would AI help a student struggling to grasp a difficult concept?
Honestly, I think some folks imagine some jobs are just these constant analytical, objective choices.
CoolioMcCool t1_j23tmzs wrote
I think many folks underestimate AI. We can essentially program for outcomes and let the AI figure out the rest.
Sure, people will still be needed for a lot of stuff and for the foreseeable future they will be making the high level decisions and giving the AI goals, but it will still have the power to automate a lot of jobs.
We are incremental improvements away from convincing dialogue with humans, and there go many phone-based roles (tech/customer support and sales). Driving (freight, delivery), factories, fast food, cashiers: all could easily be on their way out soon if we don't actively try to stop it. New roles will come up, but likely in much lower numbers.
daveescaped t1_j240pcx wrote
Those are pretty minor roles. Show me the AI that can provide useful marital advice.
CoolioMcCool t1_j266ly5 wrote
Pretty minor roles probably make up 50+% of the workforce.
What are all the people with no jobs going to do?
jackl24000 t1_j24sglg wrote
Yeah, but try to imagine any foreseeable future in which you’d turn it loose on, e.g., customer-facing tasks involving potentially disputed or ambiguous issues like warranty eligibility, spouting nonsensical corporate gobbledegook at good customers who are infuriated by the time it gets kicked to a human.
Or any other high value or mission critical interaction with other humans?
How do such systems, meant to replace most human interactions with AGI, deal with black-swan events not in their training sets, like natural disasters, pandemics, etc.?
CoolioMcCool t1_j267f0m wrote
Ok, so the narrow AIs that are coming in the next several years will only be able to do the job 95% of the time. It'll still take a lot of jobs. What do we do with all of the people it replaces?
Honestly a lot of these replies read like people are threatened and being defensive "there's no way it could do MY job".
Cool. It will be able to do a lot of stuff and massively reduce the number of jobs that require people is my point. What do we do about all of the unemployment?
jackl24000 t1_j26as9o wrote
Try reading it less from a worried worker bee’s perspective and more from that of their manager or line executive, who is worried about having to clean up messes caused by a possibly wrong cost-saving calculus, just like having to backstop your more incompetent employees’ mistakes or omissions today.
And maybe we’ll also figure out the other AGI piece: Universal Basic Income to share in this productivity boon if it happens, not just create a few more billionaires.
CoolioMcCool t1_j26ij1g wrote
As you hinted at, incompetent employees already make expensive mistakes. Once AI gets to a point where it makes less expensive mistakes, employers would be incentivised to replace the people with machines.
Driving is an easy example, humans crash, AI will still get involved in crashes, but if it is involved in significantly fewer crashes then it would seem almost irresponsible to have humans driving.
I think ultimately it just comes down to me having higher expectations of AI ability than others.
Have you played around with ChatGPT? I'd highly recommend it; it's pretty incredible, and a lot of its limitations are ones that have been intentionally placed on it, e.g. it doesn't have access to information from the last year or two, and there are certain topics it has been restricted from talking about (e.g. race issues and religion).
gaudiocomplex OP t1_j23r6xc wrote
Nah, people need to feel superior.
Stillwater215 t1_j23uicc wrote
The thing I keep worrying about is that AI doesn’t need to be perfect to threaten people’s jobs; it just needs to be better and cheaper. We’re definitely headed toward cheaper, and given that ChatGPT is already being used by students to help with essay writing, we’re not that far off from better.
Labrat5944 t1_j24k0xk wrote
Right? Especially in the US, where we can’t even agree that pregnant women might need to recover from giving birth.
Even if most jobs were obsolete and the majority of the population was unemployed, there are plenty of politicians who would rather Logan’s Run all of the “unproductives” rather than vote in AGI.
jamesj t1_j2365eh wrote
I wish your confidence were founded.
snowbirdnerd t1_j23xfnc wrote
Yup, people with no understanding of what they're talking about, making predictions about the future.
grillcheesehamm t1_j2368hz wrote
This.
I don't think I've gone too deep into biology or computer science, but I'm already aware that AGI is a scam.
Seriously, how can the combination of two senses compare with the combination of five-plus senses? Language is special, but still, it alone is just not enough.
Pre-general intelligence, maybe? But it's never general.
samnwck t1_j23k8cs wrote
I don't think AGI is necessary to disrupt most industries. There are a lot of jobs that rely purely on a computer interface. As long as there is a push by VCs to fund automation (there is), and there are companies that would prefer to automate (there are), it seems inevitable we'll see more and more of this.
Plenty of machine learning models already do better than humans in some disciplines, and that will only increase from here. You don't need AGI for that either; just a model that beats human performance.
uneaknayum t1_j24iiq2 wrote
Hey! I'm in QC too. Doing QML stuff.
I agree about all this hype about AGI/AI.
I keep seeing people talk about ChatGPT like aliens have fallen from the sky and told us there are multiple gods.
Why is it so hard for people to have realistic expectations regarding technology?
Macdac300 t1_j24nnoy wrote
Honestly, it feels like a mix of two things to me.
Science fiction books and movies have, for the most part, portrayed AI as this scary boogeyman figure.
There's a lack of interest in understanding AI, and a lack of communication from the innovators of AI.
uneaknayum t1_j24oh5n wrote
Great response. Thank you.
As a fan of sci-fi myself, I completely get it. Heinlein wrote a book about a computer starting a political revolution. Dope book, highly recommended to anyone. But, like, nah.
I agree totally with the lack of interest in "understanding" AI.
What good would the communication do if people are not interested in the technicalities?
TheLit420 t1_j23xqd5 wrote
So most jobs will be around in 15 years?