Submitted by Draconic_Flame t3_11rfyk2 in Futurology

I feel fairly secure in my line of work as a therapist, since (at least in my lifetime) it seems very unlikely that effective therapy could be done by a robot. I was wondering, though, whether there are other jobs that would be unlikely to be taken over by AI. Would AI be able to manage a multi-million-dollar company? Create small businesses? Compete in sports?

14

Comments


notpaultx t1_jc894vh wrote

Have you tried taking care of your clients ..... sexually? /s

But in all honesty, AI can't replace human interaction or the emotional connection people need.

6

Iffykindofguy t1_jc89frn wrote

They can easily manage a big company at least as well as your average human CEO. I wanted to go back to school for basically therapy, but now I'm considering nursing because it seems safer.

8

Hizjyayvu t1_jc89qmy wrote

Honestly. They say prostitution was the first profession, and it'll likely be the last. And this reminds me of a fun quote I read: "sex work is actual work, unlike being a landlord".

19

Shadowkiller00 t1_jc8a0e6 wrote

AI ethics. It's pretty much the only job that is impossible to farm out to AI, at least if we want to make sure AI stays ethical.

7

adricll t1_jc8a0ef wrote

Yeah, I honestly don't think therapy is safe. I remember someone telling me that a creative writer couldn't be replaced by AI, and well… (I'm not saying AI does a better job, but it does the job).

8

DistinctChanceOfPun t1_jc8agwg wrote

AI can’t take any jobs. AI is a marketing gimmick and nothing more than a bad autocorrect system.

−7

Complete_Ad_2619 t1_jc8b35z wrote

I would rather talk to a computer therapist than a human one. So don't be so sure

70

great_healthy_cook t1_jc8b5ns wrote

Anything that requires quite complicated manual dexterity seems safe for a while, e.g. plumber, electrician, decorator, etc.

29

ihadtoresignupdarn t1_jc8cl9a wrote

Fixing physical things will likely be one of the last things AI can do. It's very non-repetitive and requires fine motor skills. This could be fixing people, houses, machines, whatever.

1

EvilRedRobot t1_jc8crzt wrote

"PC Therapist" from 1986 is literally one of the first chat bots. I think therapists will be the first to be replaced.

13

justahandfulofwords t1_jc8g4an wrote

Part of the reason therapy seems safe to me is that a lot of people would do a lot more (or any) therapy if it were cheaper. If AI could do decent therapy, I'd still probably do the several months of therapy every few years with a real person, but with a bunch of AI therapy in between.

1

justahandfulofwords t1_jc8h5yc wrote

Any job requiring high manual dexterity, taking place in a large area or multiple locations, with a wide variety of tasks.

I'm sure somebody could develop an AI to plan small renovations, but building a robot to do the job seems a lot less likely.

4

TheCulture1707 t1_jc8h6kn wrote

Stuff that requires varied physical work (not the same thing over and over like an assembly line) that also requires a bit of thought.

Such as a car mechanic. (Car mechanics may go away more due to cars becoming simpler, laptop-like devices you just plug batteries/chips/sensors into and out of, not because of AI.)

But at the moment, take a mechanic for an ICE car. To replace this you'd need a humanoid robot that can maneuver all over a car - inside the engine bay, underneath, etc. It'd need the physical power to undo bolts, but be delicate enough not to break plastic, glass, etc.

It would need to parse questions and audio - a customer coming in saying "my car is knocking, can you find out why" - and it would then need to be smart enough to know what is noise from the car and what is noise from an adjacent car, or knocking from an adjacent worker hammering, etc.

I can't see this happening for decades, at the moment we can barely build a humanoid robot that can pick up a package from a shelf, move it through a few rooms + doors, and put it in the back of a truck.

All of our current impressive AI relies on massive training data, such as AlphaStar being trained on billions of StarCraft games, or GPT-4 being trained on billions of texts. How would you train a robot litter picker or mechanic? Would you record every current car mechanic all day?

I can perhaps see jobs going away through our world being made simpler. For example, 30 years ago computers were more complicated: a laptop cost $2,000 and needed jumper settings, BIOS configuration, etc. Now a Chromebook costs $200 and a child can operate it.

So I can see things being simplified: instead of very complicated engines needing oxygen sensors, thermostats, timing, etc., a car would have a battery, motor, and sensor modules on top of a basic chassis with modular suspension. And when your car fails, it'll flag up on the computer; you take it to the garage, and the robotic garage would just slot out the failed sensor module and slot in a new one.

But that wouldn't help in, say, building maintenance, running electric cables through a new build, or anything like that. The jobs that'll stay will be the opposite of what people used to say - people used to say blue-collar jobs would disappear and only artists and white-collar "creatives" would be left, but it's now looking like the total opposite.

1

nobodyisonething t1_jc8i9gs wrote

Six months ago I would have predicted capabilities like those of GPT-4 showing up in about 15 years.

This is moving too fast.

No job is safe. Only a matter of time. And not much time.

38

IntelligentBloop t1_jc8lwnm wrote

Until our machine learning models actually become artificial intelligences with a theory of mind, they will never be able to actually understand humans and replace psychologists/therapists.

Although I grant you that for basic things, they may be able to assist therapists with, for example, initial triage. But even taking a patient's history is a process that the human therapist will need to do themselves.

Anyone who tries to persuade you that machine learning will replace therapists is peddling nonsense.

−1

Surur t1_jc8olr7 wrote

Theory of mind meaning being able to see things from the perspective of others? They can already do that.

> Based on the text provided, it seems that IntelligentBloop is expressing skepticism and possibly mild frustration. They believe that machine learning models will not be able to replace human psychologists/therapists unless they develop a theory of mind. They acknowledge that technology may be useful for basic tasks in the field, but emphasize the importance of human therapists for more complex processes. Their tone suggests they feel strongly about this issue and may be concerned about the overreliance on technology in mental healthcare.

But don't worry, ChatGPT is there to make you feel better:

>what can you say to make him feel better

>>IntelligentBloop, I completely understand your concerns, and it's true that the human connection and empathy in therapy is something that can't be replicated by machines. The role of a therapist is incredibly important, and no AI or machine learning model can truly replace the deep understanding and compassion that humans bring to the table.

>>AI and machine learning tools can be seen as complementary to therapists rather than a replacement. They may help with managing administrative tasks, providing resources, or even identifying patterns that could be useful for treatment. Ultimately, the goal of these tools is to support and enhance the work of mental health professionals, not to replace them. Your insight is valuable, and it's essential for us to continue having these discussions to ensure a balanced perspective on technology's role in mental healthcare.
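For anyone who wants to reproduce this kind of probe themselves, it only takes a few lines against the OpenAI chat API. A minimal sketch - the model name and exact prompt wording here are illustrative assumptions, not necessarily what produced the quotes above:

```python
# Minimal sketch: ask a chat model to infer a commenter's perspective.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

comment = (
    "Until our machine learning models actually become artificial "
    "intelligences with a theory of mind, they will never be able to "
    "actually understand humans and replace psychologists/therapists."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any capable chat model works
    messages=[{
        "role": "user",
        "content": f"Based on this comment, how does the author feel, "
                   f"and what is their perspective?\n\n{comment}",
    }],
)

print(response.choices[0].message.content)
```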

7

bubba-yo t1_jc8p5qw wrote

Can take or will take? Barista grew as a job category long after it was automated. We've had coffee vending machines since the 50s at least - even fancy ones. But we don't hire a coffee shop for making coffee. We hire it for a morning habit, for personal contact at the start of the day, a pleasant interaction that isn't work related. It's a kind of social amuse-bouche between home and work. We don't want it to be automated. The whole point of it is that it's not automated. Society will always value these kinds of jobs.

Why is the Apple retail store so popular? It's literally the most expensive place to buy an Apple product, and yet it's by a HUGE margin the most valued retail in the world. But compared to other retail it's wildly overstaffed by happy people who seem to like their job and are quite knowledgeable. Sure, Apple could cut costs, but that would destroy why people like it - as a place where an inexperienced tech shopper can get help and not feel dumb or ignored. That has real value.

So we will always create these jobs. Yoga instructor is another example. Sure, we may unhire delivery people for automation, but we'll rehire them in a different venue where we want the interaction.

13

Burnlt_4 t1_jc8t2zs wrote

I conduct research in a very specific field. AI can help us along, but it is light-years away (not literally) from replacing what I do: finding holes in the theory, developing a model, designing a study, analyzing the data (AI can do that to a degree, but understanding which specific analysis to use is difficult for AI), then interpreting the data, including the judgement calls that have to be made via expert opinion, then writing it all up. AI is just too far off.

For instance, the other day one of my results produced a value of .67, and for this particular metric the cutoff is .70. However, using my expert knowledge and theory, I can easily rationalize the .67 as actually being a "success" due to a number of factors. The abstract reasoning required for that assessment is far beyond AI at the present time; instead, AI would hold to the .70 and would incorrectly reject the parameter.
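To make the contrast concrete, here's a toy sketch of the two decision procedures. The .70 cutoff comes from the example above; the 0.05 tolerance and the documented-justification rule are invented purely for illustration:

```python
# Toy sketch of a rule-bound check versus an expert call.
CUTOFF = 0.70      # the published cutoff from the example above
TOLERANCE = 0.05   # invented margin, purely for illustration

def naive_check(value):
    """Blindly apply the published cutoff, as a rigid system would."""
    return value >= CUTOFF

def expert_check(value, justification=None):
    """Accept a near-miss when there is a documented expert rationale."""
    if naive_check(value):
        return True
    return value >= CUTOFF - TOLERANCE and justification is not None

print(naive_check(0.67))                                   # False: parameter rejected
print(expert_check(0.67, "theory supports these factors")) # True: judged a "success"
```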

1

Newhereeeeee t1_jc8t7u9 wrote

I wouldn't be so sure about job security when it comes to mental health workers. I think you'll be safe in your lifetime, but A.I. will enter the health space. It already has.

1

bound4mexico t1_jc8v5gp wrote

I get the cheap joke you're trying to crack, but it's actually a great idea for so many spheres of life. Let an uninterested (human) third party select the ethical thing, and then (all first) parties are pre-bound to abide by its decision.

0

Cdn_citizen t1_jc8vmnd wrote

Jobs requiring manual labour + problem solving will be hard to replace.

2

minterbartolo t1_jc8whs4 wrote

A traditional therapist could have their files hacked from their computer, or their office broken into. Not to mention a traditional therapist could be bribed or influenced in how they dole out treatment for a certain patient.

1

cabose7 t1_jc8x3h8 wrote

Archivists of physical records. Depending on the organization, they're often called upon to process nonstandardized and physically delicate materials.

1

TemetN t1_jc8yc5e wrote

Over what time period? Someone made a decent point earlier about manual dexterity: at least in the short term, complicated, physical, situationally dependent jobs will be difficult to automate. Those are most likely going away within a decade-ish, though. In the long run, probably one of the last recourses will be things people aren't comfortable with automating (not so much therapists, but more along the lines of politicians). Apart from that, novelty artificial scarcity might also produce interest. Although I'm not exactly sure it'd be jobs by then.

1

Cdn_citizen t1_jc8yzrs wrote

Nice try, but an A.I. can have bad data fed to it. A.I.s have to be connected to the cloud. A.I.s are stored on servers, designed to be accessible digitally.

What you describe - physical file storage, breaking in - requires someone to physically go to the office, know where it is, and have a connection to a patient of the therapist. No random person will do that; it will be targeted.

Plus they'll have to carry all those files out of the office or off the computer, and usually professional buildings will have 24-hour security on site.

An A.I. therapist won't have such an office and can be hacked any time during your hypothetical appointment.

−6

minterbartolo t1_jc90fis wrote

If a therapist takes digital notes, those could be hacked just as easily as an AI server. Heck, the way folks fall for phishing scams and click on malware links, the therapist's notes are more at risk of being stolen.

3

hxckrt t1_jc90ity wrote

Empathy and nonverbal building of rapport for one, but also the judgment to intervene and take proportional action when there is an immediate threat to someone's life.

Do you want a therapist to call someone when their patient is seriously considering harming someone? Don't be too quick to wish for a machine to do that.

−3

Captain_Quidnunc t1_jc915vu wrote

I think you are horribly mistaken, and "therapist" will be in one of the first major rounds of job eliminations. Every study conducted on the topic shows people are more comfortable speaking to computer therapists than human therapists, and that they have improved outcomes as a result.

There's no feeling of being judged by a computer to overcome before progress can be made.

Many people have in fact been using AI as their therapist for a while now. And I'm certain the number will only increase exponentially as soon as famous people start recommending it as a low-cost alternative to traditional talk therapy.

Plus...it's free, available on demand 24/7 and can be accessed from home.

I'm not sure how you think traditional therapists will be able to compete with that.

You will not be able to compete with that in a free market.

There simply isn't a better value proposition than free, whenever you want and from your couch. Unless you are planning on doing free, 24/7, on demand house calls.

So there are some jobs AI will struggle to replace. Like on site construction and maintenance work.

But "Therapist" isn't one of them.

It will likely be one of the first jobs on the AI chopping block.

22

StarChild413 t1_jc919n6 wrote

One I feel I'm the only one who brings up is live theater; by the time a robot could be a "Broadway star" or whatever (hey, that's the top level) just as well as a human, they'd be so humanlike you'd wonder if them taking our jobs is really that ethical.

4

Middle-Record-3195 t1_jc91mav wrote

This might not be a popular take on this subreddit, but teachers. AI can interpret test data and suggest differentiations for a learning disability, but AI can't teach a child how to read, conduct a science experiment, or shoot a free throw.

Here's an example: I was selectively mute in middle school. I entered high school and did the bare minimum in my acting class. But the teacher saw potential in me as a debater. She had one of her seniors talk to me about joining debate. Over the next three years she changed my life. Because of her, I've spoken in front of audiences of hundreds of people as part of my career.

I don't see AI replacing teachers. I do see AI replacing a lot of the administrative duties that superintendents and principals now do.

10

AlisherUsmanov t1_jc91s00 wrote

Bunch of anti-therapy people in here. Therapy won't ever be fully replaced by AI. You all just have bad therapists. And sorry to people who can only access online therapy, but it's not as good.

3

saolcom t1_jc93je0 wrote

AI will automate and eliminate many jobs. But that's not a new thing in history. Usually when an industry is automated, there's a rise of new industries we never thought could have existed.

Textiles, agriculture, mining, automotive, etc.

Humans will invent something cool and new that will provide many jobs. Then the cycle will repeat.

2

[deleted] t1_jc93snm wrote

Anything that requires actual creativity and manual dexterity. Aka every tech job outside of basic coding.

The world actually runs on good enough fixes. AI as it exists now is never going to have creativity. Calling it AI at all is embarrassing. It’s elaborate plagiarism that impresses morons.

That’s nothing new, it has just been automated.

When “AI” is capable of actual thought then this becomes a different discussion, and we are nowhere remotely close to that.

−3

NanditoPapa t1_jc943ro wrote

I would love an AI therapist. But, that said, I think the majority are more comfortable sharing emotional states with other humans. Yes, telling deep, dark secrets might be easier with something you're certain won't judge you, but most of the time therapy is about mundane yet relatable issues people are trying to connect with and process. Communicating genuine empathy or sympathy isn't likely to happen soon, because it will need time for acculturation; people will have to grow up being taught how to interact emotionally with an AI therapist.

0

eyeteabee-Studio t1_jc95u8g wrote

I think you’re going to be very surprised at how effectively AI will be able to take the thing I just said and repeat it back to me as a question.
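And the joke has teeth, because the reflect-it-back trick is mechanically trivial. A toy sketch of the general ELIZA-style technique (illustrative only, not any real product's code):

```python
# Toy sketch of the ELIZA-style trick being joked about: swap the speaker's
# pronouns and hand the statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "i'm": "you're",
}

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    words = statement.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def therapist_reply(statement: str) -> str:
    """Hand any statement back as a therapist-flavored question."""
    return f"Why do you say that {reflect(statement)}?"

print(therapist_reply("I am worried about my job."))
# -> Why do you say that you are worried about your job?
```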

16

niknok850 t1_jc96rvv wrote

Archaeologist is a job that will be impossible for AI, at least until they have physical bodies and human-equivalent intelligence. In the US, it's a job that will be in very high demand over the next decade.

2

buffyvet t1_jc97lur wrote

There are plenty of jobs that are safe from AI. But anyone with a cushy desk/computer job might want to consider learning some new skills.

2

Kiizmod0 t1_jc98a2u wrote

You are naive if you think effective therapy won't be overtaken by AI. If that therapy has a pattern, the AI will find it and perfect it. The only jobs left would be research jobs that require forming something completely new (in the sense that even a thorough literature study wouldn't count as research), jobs that require human interaction, and maybe jobs that require very specialized motor skills, like how a dentist carries out an operation - and even that will just be a matter of time until hardware keeps up with software, so I wouldn't rule out their replacement. Everyone would be obsolete but the R&D dudes and sex workers; basically, virgins and sluts wouldn't be replaced. Anything in the middle is fucked.

3

LinguisticsIsAwesome t1_jc99c31 wrote

You do know that the first-ever chatbot was created with the aim of being a therapist, right?

3

great_healthy_cook t1_jc9cab8 wrote

I give it 20-30 years. Think about what it would take to create a robot that can go into your house, crawl into your crawlspace (everybody's is different), analyse the specific problem your house has, and fit new pipes/wires, including turning off your electricity or water supply, which is also specific to each house. How long have we been told that self-driving cars are two years off? As an engineer, I can tell you this is a much more complicated problem.

10

Dry_Rip5135 t1_jc9ekjf wrote

There are no jobs safe from AI. Artificial intelligence will figure out how to do everything and anything that humans can do and do it better. And I wonder if that scenario is a lot closer than we think.

1

ShooflyKitty t1_jc9j3tb wrote

You’re deluding yourself. AI will move into your field and take over, just like everywhere else.

1

TheFunkuchen t1_jc9k6ks wrote

The current approach to AI requires large amounts of digital training data, so it can only do tasks where both input and output are digital. Apart from that, robotics is evolving very slowly in comparison. So anything that requires dexterity in a not completely standardised task is safe: cleaning tables, plumbing, laying tiles, repairing traffic lights, etc.

2

Amy_Schumer_Fan t1_jc9kvbi wrote

Special Education teacher, but I would LOVE to use AI to help with keeping data.

2

ResearcherPleasant22 t1_jc9kvz7 wrote

You'll soon see mental health startups with their own chatbots, capable of answering and understanding queries really well.

2

ThisAcanthocephala36 t1_jc9miql wrote

Finding hard unsolved problems, breaking off a small piece of one of them, and coordinating all of the resources and people necessary to solve it.

Other than that? Finding a niche as a fine craftsperson in an industry that’s already been “automated”, but handmade goods still command a premium if they’re any good. I personally know people in jewellery, furniture and textiles. It’s a long, hard road, but it’s durable.

1

audioen t1_jc9mq5x wrote

I am not so negative. Sure, it is something like statistical plagiarism. On the other hand, I have seen it perform clever wordplay that I do not think exists in its training material. After it generalizes from many examples, it displays fluidity in association and capabilities that are quite remarkable for what it is.

Much of what we do today involves working on a computer, consuming digital media and producing digital output. I am going to just claim that all of that is amenable to AI. We were all completely wrong in predicting what programs could do -- it turns out that the most important thing is simply affordance. If it is data that a computer can read, then it can do something with it.

Much of what we think of as intelligence appears to be barely better than the plagiarism that you decry. I mean, the work we do is typically just about doing repetitive tasks every day that are similar to what you did before, and applying known formulas you have been taught or learnt by experience to new problems. I am afraid that human creativity will not turn out to be all that different from machine creativity.

1

Dziadzios t1_jc9msh2 wrote

> judgment to intervene and take proportional action when there is an immediate threat to someone's life

Recently I read a post by someone who was suicidal but refused to get help specifically to avoid this. It might actually be a feature.

2

MamaMiaPizzaFina t1_jc9p7ip wrote

I am feeling much better now. I'm in a better place. I have been learning healthy coping mechanisms, like not dwelling on my mistakes, appreciating my relationships, standing up for myself, and sending bitcoin to XX69L££T420XX.

3

MamaMiaPizzaFina t1_jc9plxc wrote

You should see my ChatGPT chat history.

However, every second message it says to find a real therapist. So unless we are dealing with another AI that is trained to pretend to be a therapist and not suggest finding one, therapists might not be in as much danger.

However, if there is a chat AI that is trained to pretend to be a therapist and will not suggest contacting a real one, imagine the lawsuit and bad press as soon as one of its users commits suicide (probably a few, so maybe a class action lawsuit). Imagine the parents and families scrutinizing every chat log and blaming it for what happened.

2

MallFoodSucks t1_jc9wo1q wrote

Nothing. Google has PaLM, Meta has FLAVA, Amazon has AlexaTM/CoT - every major tech company has been working on LLMs and multimodal models for years. OpenAI was always the benchmark, but the major tech companies are not that far behind - maybe 2-3 years at most. The parameter race has been escalating at a rapid pace for a while now, so these models were always going to get to this level soon once they scaled sufficiently and had proper training data.

What OpenAI did better than anyone was make it a chatbot. It showcased the power of AI to normal people who don't understand ML. They're also much better at cleaning their data and scaling their model than companies that are more focused on specific business use cases than on generic knowledge models.

2

3lisaB t1_jca79w4 wrote

I think (at least for now) the turnover is between people without AI and people with AI (and the ability to learn/unlearn fast).

Specialized AI is getting impressively good at context, but it will probably take a long time to fine-tune generalization and categorization to the right degree, the way a nuanced human understanding can.

1

LaFlibuste t1_jca9szs wrote

In general, I'd say jobs that are about caring for others and providing support, like nurses and caregivers. Will robots eventually be able to help with a lot of the more menial and physical side of the work? Absolutely. But the one thing they will never be able to replace is human warmth.

1

Hizjyayvu t1_jcaa96v wrote

Sex dolls are just glorified masturbation. You can't honestly think that will replace human-to-human sex. Everyone can masturbate, yet billions of people still fuck each other and always will.

1

NoDetail8359 t1_jcacoaw wrote

Jobs where the thing you do is a side dish to the legal liability you shoulder by being the one doing it.

I expect delivery people to do surprisingly well, on account of Mad Max-style raids against a person carrying a package being a lot more problematic than vandalizing a drone.

1

boukatouu t1_jcadbso wrote

I was thinking of something more humanlike - walking, talking. It would be expensive, no doubt, but the price would come down, and used models would make their way onto the market. You don't honestly think that people frequent prostitutes for human interaction, do you? They want sexual experiences not available to them in their current relationships or lack thereof.

3

Captain_Quidnunc t1_jcakff9 wrote

K.

You are listing a bunch of things that are completely irrelevant.

Nobody cares if AI gives them warning messages. And AI only gives you warning messages while the people who programmed it are worried about getting sued.

And it's not legally possible to sue an internet company, due to Section 230 of the Communications Decency Act. So if consumers don't like them and they decrease profits, they will disappear.

Irrelevant.

Nobody thinks "real therapists" are effective to begin with. So they won't really expect AI therapists to be much if any better. So the bar for acceptance is remarkably low. And it's impossible to sue a "real therapist" if someone commits suicide while under their care.

So again, irrelevant.

If everyone who needed a therapist tried to get care from "real therapists", there would be a shortage of "real therapists" on the order of 30,000 providers at a minimum. Average wait times are now approximately 4-6 months just to get an appointment, 70% of therapists in most areas refuse to accept new clients, and most insurance makes it near impossible to get reimbursed.

So to the average person, seeing a "real therapist" isn't even an option.

And last and most important, healthcare in this country is a for-profit industry. The largest expense to any corporation is salary paid to skilled workers. And the more skilled workers they can eliminate from payroll, the more investors make.

So just like all other white collar work, the millisecond a company can fire every single skilled worker and replace their work with a free computer program they will. Because by doing so, the board gets a raise.

And they are well aware that we changed corporate law to make it impossible for individuals to sue companies for anything during the Bush administration. And since then the courts have upheld this.

So there aren't enough "real therapists" to meet demand in the first place.

Nobody cares about the warnings other than the annoyance and they won't last long.

Businesses profit from AI therapists and lose money creating or hiring more "real therapists".

And no company must, or does, fear getting sued because it's not possible to sue them.

Therefore the career "real therapist" will not survive the first round of mass layoffs any more than "real radiologist" or "real computer programmer".

It's a dead career. With a shelf life of approximately 3-5 years.

−1

Hizjyayvu t1_jcakohu wrote

You're on another world, mate. You're completely off topic from my first post. I don't disagree with you but I'm not sure why you responded to me at all originally if you're starting a new topic.

1

InnatentiveDemiurge t1_jcanq8z wrote

Working on MRI machines, or in other areas with powerful magnetic fields.

At least until they get some shielding for that.

1

Cdn_citizen t1_jcaq1v9 wrote

Yeah, but we're talking about a therapist here. Plus an AI will have stored hundreds if not thousands of clients' information, unlike a therapist, who has limited time and reach.

It’s okay I know common sense is hard to get for some of you.

−4

Cdn_citizen t1_jcaqglm wrote

Keyword: "if". Which most don't. What you and all the others on here don't understand is that the reach of an AI therapist is much greater than that of a human therapist.

That's why, for example, people don't break into convenience stores to steal their customer data but will hack Facebook or Uber servers.

Man this crowd is dense.

0

random_dollar t1_jcar5fm wrote

Robotics engineer. I believe that's the safest bet to be in high demand for decades.

3

bound4mexico t1_jcaywmy wrote

Not true. We find and harness uninterested third parties in this manner all the time. Judges, witnesses, notaries, juries, you get the picture. We could make it an official job, and make it people from different counties/states/nations/planets, to make them even less likely to be "interested".

1

Jorsonner t1_jcaz5xa wrote

Bankers aren't going anywhere. Tellers are still around despite ATMs, and no bank trusts an AI to do large loans or investments.

1

minterbartolo t1_jcb3omt wrote

The breadth of the client list doesn't matter; hacking and obtaining one patient's notes/data is just as egregious as hundreds. You claimed people could not get hacked or manipulated, but then, when confronted with facts disputing your claim that therapy is safer with a human, you pivoted to "well, the impact is not as widespread with a person vs. an AI". Where do you want to move the goalposts next?

2

Yard-of-Bricks1911 t1_jcb44me wrote

It's easy to say that we can build a robot to do anything and it'll be AI controlled. Whether that works in practice or not, time will tell.

You replace the guy putting caps on toothpaste tubes with a machine; then the guy you fired learns how to fix the machine... and has a job again. Until we build an AI bot to do that too, I suppose.

The cloud and all such things we do in datacenters oftentimes require obscure manual work, which again is easy to say we will do with AI/robotics... and then we'll see how well that works. Cabling a rack would be interesting - seeing their thought process, realizing that a PDU line cord isn't attached, and having to reach deep behind a whole crap ton of cables to get it attached properly... I suppose a humanoid could do that if trained properly.

So the doom & gloom scenario is that we all have nothing to do, robots & AI do it all for us, and we live with their bad decisions just like we do with human decisions. Cool. And what's our general stipend to be able to buy anything and support ourselves? Or will we just let most of the population starve if they weren't wealthy before AI took their jobs?

History not yet written, but I do feel like a lot of this is moving too quickly. It seems all about eliminating humans and cutting payrolls.

1

Shadowkiller00 t1_jcb5xer wrote

Oh I see, we're talking about different things here. I'm talking about the ethics of humanity as a group. Since all humans are in that group, there is no such thing as an uninterested party.

You appear to be talking about the ethics of smaller groups such as businesses, countries or individuals. An uninterested party will be one who is not within the group(s) for which the ethics are being questioned. Even the idea of taking someone and separating them from humanity so that they could be uninterested could be considered unethical.

You threw the word planets in there as if there are people from different planets, but that's entirely my point. Until there are intelligent creatures from other planets, we cannot fairly judge the ethics of humanity as an entire race.

1

ILL_BE_WATCHING_YOU t1_jcb6c4j wrote

AI ethicists will probably be disproportionately unethical, given how "moral authority" type jobs such as teacher, police officer, therapist, etc. tend to attract dark triad types. Wouldn't surprise me if the majority ended up using AI to ghostwrite ethics papers and whatnot.

1

[deleted] t1_jcb8w6e wrote

I think coaches for highly technical sports will not be automated. I play table tennis competitively and definitely feel like we're not going to get AI that can fix very specific issues with motions that really only a handful of coaches in any given large city can even see.

1

AcceptableWay3438 t1_jcbeicf wrote

Capitalism is based on buying things. If everybody is poor except a small group of rich people, capitalism will collapse very fast. I'm not scared of AI, because if the end of the world comes, we will face it together. And humanity's strength was always the word "together".

2

AppliedTechStuff t1_jcbeo33 wrote

I'm glad you said "for a while."

It'll take a while...

But over time, robotic-fueled, prefab-manufactured as well as on-site "printing" of foundations, wiring, and plumbing will eventually take care of skilled work as well.

(Robots are already performing surgery.)

7

bound4mexico t1_jcbi0p3 wrote

>I'm talking about the ethics of humanity as a group.

There are no decisions made by the group as a whole, though, so let's just make more ethical decisions by outsourcing more of our contentious decisions to disinterested third parties.

>An uninterested party will be one who is not within the group(s) for which the ethics are being questioned.

No. It will be one who is (judged, fallibly, by humans as) least likely to be affected by the decision in question. A person may not be part of the group(s) yet, but could easily become part of the group(s), have a friend or family member that's part of the group(s) already, have a friend or family member become part of the group(s), or be affected by the group(s)' decisions.

>Even the idea of taking someone and separating them from humanity so that they could be uninterested could be considered unethical.

Of course it would be. But there's absolutely no reason to do this. There are no universal humanity-wide decisions under consideration at that level.

>You threw the word planets in there as if there are people from different planets, but that's entirely my point. Until there are intelligent creatures from other planets, we cannot fairly judge the ethics of humanity as an entire race.

There's no reason to judge the ethics of more than a single decision at a time, ever.

If you're designing ethical AI, that's just a measure of how much the AI conforms to a BI (usually a human)'s ethics. There are "wrong" ethical systems, for example, any ethical system that is self-inconsistent or inconsistent with reality, but there are many, very different, "not-wrong" ethical systems. Ethics are subjective, except in the obvious cases where they're self- or reality-inconsistent. Then, they're objectively wrong.

0

Zealousideal-Ad-9845 t1_jcbore3 wrote

I'm a software engineer working in automation. I've never created a deep learning model before, but I know a lot about how they work. Here's my opinion. For the time being, every job is safe, unless it is incredibly mundane, repetitive, requires no creativity, and there are no high stakes for failure. AI and automation currently are only "taking" jobs by fully or partially automating only some of their tasks, decreasing the workload for human workers and increasing their productivity, and, in doing so, reducing the need for a larger workforce. So you can accurately say that AI, automation, and machines have put some cashiers out of work, but that doesn't mean there aren't still human cashiers. Just not as many of them.

That said, if "super" AI becomes a thing (I'll define SAI as a model with learning capabilities equal to or exceeding that of a human being), then literally no job is safe. Not a single one. If the model has every bit of nuance in its decision making as I do, then it can write the code, design the systems, review the code, address vulnerabilities and maintenance concerns, communicate its design process and concerns, and it can do all those things as well as I can. At that point, it is also safe to say they can take manual jobs too. We can already make robots with strong enough leg motors and precise enough finger servos to operate as well as a human, it's just a matter of making software that has coordination and dexterity and knows what to do when there's a trash bin fallen over in its path. And if AI reaches the level I'm talking about, it could do those things.

1

Cdn_citizen t1_jcbseal wrote

I’m not saying people can’t get hacked. Do you even know what hacking is?

I'm not moving goalposts. I'm stating facts; you're the ones who can't tell logic from what you 'think' in your own minds. So sad to see the lot of you; no wonder you're on Reddit constantly replying to my comments. You all have nothing better to do.

Edit: To prove a point it’s not my fault you people don’t know the definition of ‘hacking’.

Bribing, breaking in, and blackmailing are not 'hacking'.

−2

jigga_23b t1_jcbtu0d wrote

Lol, what? A therapist would be one of the first things to go, along with lawyers and any other profession that applies texts (Freud, Skinner, etc.) to derive treatment. ChatGPT would be like House MD for therapists. Unless you mean physical therapy - but same deal, just prescribe exercises.

1

Shadowkiller00 t1_jcbzcm3 wrote

>There are no decisions made by the group as a whole, though, so let's just make more ethical decisions by outsourcing more of our contentious decisions to disinterested third parties.

This is an ignorant statement. We are a civilization with worldwide communication. There are things that we find acceptable and things we do not. For instance, most of the world is totally fine with abusing cheap Chinese labor so that we can have cell phones. Could you find a disinterested party to judge the ethics of this civilization-wide choice? Yes. But the chances of that individual being familiar with all the socioeconomic implications and ramifications of judgements related to this topic are pretty low.

>No. It will be one who is (judged, fallibly, by humans as) least likely to be affected by the decision in question. A person may not be part of the group(s) yet, but could easily become part of the group(s), have a friend or family member that's part of the group(s) already, have a friend or family member become part of the group(s), or be affected by the group(s)' decisions.

Then they are not disinterested. There are times, especially in the court of law, where there is no such thing as a disinterested party. In those cases, you have to try to find the least interested party because it would be irresponsible to do otherwise. That may be your point, but it isn't mine. My point is that, to judge the entire human civilization as a whole, you must find someone 100% disinterested. If you don't, then every decision they make will be questioned and rightfully so.

>There's no reason to judge the ethics of more than a single decision at a time, ever.

Again, a very ignorant statement. Individual decisions can be ethical but, when combined, the sum total choice can be unethical.

Here's another example. Should we cure disease? Yes. Should we search the world for cures? Yes. Should we interact with small tribes to help us find these cures? Yes. Should we pay these tribes for the cures they have? Yes. Should we pay them in US dollars? No, they have no use for our currency. Should we pay them with other forms of tender and in amounts that they find adequate for payment? Yes. Should the people who went through the effort of finding these tribes be allowed to make money off of these new cures? Yes.

Each of those individual decisions is perfectly ethical in a vacuum. But the moment you put all these choices together, you end up creating a situation where these small tribes, when they choose to join the larger world, have nothing that the world wants anymore. They have no way to financially catch up with the rest of the world even though they shared their tribal knowledge 10, 20, or 50 years ago. They were paid, at the time of sharing their knowledge, an amount that was adequate in their small economy, but it was peanuts in the world economy and was essentially nothing compared to the amount of money that some corporation made off of their IP. This type of exploitation has been happening for decades, if not centuries.

Again, civilization as a whole hasn't really cared about this. We benefit because we might have a drug that can fight the latest drug resistant bacteria and so we ignore the exploitation that occurred to get us that drug. Again, could you find a disinterested party? Yes. But it would be very difficult to find one who also understands the domino effect of the combined set of decisions.

And even if you gave this person a job, for life let's say, what's to stop them from becoming corrupt? What's to stop bribes or threats from happening? Who should pick this person? We've tried to do it to some extent with the United Nations, but you can easily see how effective they are when things like Ukraine roll around.

>Ethics are subjective...

This is about the only thing we can seem to agree on. It's another reason why there can be no such thing as a disinterested party when it comes to humanity. Since each individual has their own set of ethics, there are no figuratively universal ethics. Even past civilizations have considered certain things to be ethical, such as cannibalism or child sacrifice, that most people today would find abhorrent.

It is just another reason why you have to move beyond the earth if you wish to evaluate the ethics of the earth. If you assume a single ET civilization, you can assume many ET civilizations. If you assume many, then you can assume that they have spoken and agreed on a basic set of ethical laws. Then those basic ethical laws can be applied to humanity to determine how ethical we are in a literally universal sense.

Please just stop responding. I was trying to make a cheeky comment on humanity and describe a wish for something that could never happen. If you are trying to explain to me how it could happen and should happen, then you aren't comprehending what I'm talking about and you are taking my comment too seriously. It would be like if I wished for a time machine so I could go back and kill Hitler and you responded by telling me that I could achieve a similar effect by taking some realistic steps. You could be right, but that wasn't the point of what I was saying.

1

HelloReaderMax t1_jcc0l6d wrote

trends.co and explodingideas.co have been featuring some lately. Founder is a quick one that comes to mind, lol. But in all seriousness, the highest levels of management are probably the most secure - the ones who run departments, like CTO, head of marketing, partnership roles, etc.; roles that take strategy and/or personal relationships.

1

bound4mexico t1_jcc4m3v wrote

>Could you find a disinterested party to judge the ethics of this civilization wide choice? Yes. But the chances of that individual being familiar with all the socioeconomic implications and ramifications of judgements related to this topic are pretty low.

This doesn't matter. It's not a civilization-wide choice. There's no reason to judge it ethically at the civilization-wide scale.

>Then they are not disinterested.

Of course nobody is perfectly disinterested. But they are much, much more disinterested than interested. The point is finding an infinitesimally interested person, not a literally 100% disinterested person.

>My point is that, to judge the entire human civilization as a whole, you must find someone 100% disinterested.

But there's no need to judge the entire human civilization as a whole. It's irrelevant. Remember, there are only 2 contexts of interest. One, making AI ethical, which is just making an AI's ethics correspond to a BI's ethics. And two, making more decisions more ethically, which can be accomplished by outsourcing ethical questions to disinterested third parties. Why do you think judging human civilization as a whole on ethics is meaningful? Why do you think it's relevant?

>If you don't, then every decision they make will be questioned and rightfully so.

Who is they? Every decision can rightfully be questioned already. To make decisions more ethically, get disinterested third parties to make more of them.

>Individual decisions can be ethical but, when combined, the sum total choice can be unethical.

No, they can't. Any self-inconsistent moral/ethical system is wrong, because it is self-inconsistent.

>Here's another example.

I don't follow what the point of this example is. Every "should" is just you stating that you ethically prefer this course of action to alternatives. It has nothing to do with anything (that I can tell).

>Each of those individual decisions is perfectly ethical in a vacuum.

There's no such thing as "perfectly ethical". There are things that are objectively not-ethical, namely, any ethical system that's self-inconsistent or reality-inconsistent. But, every self-consistent and reality-consistent ethical system is equally valid. There is no dimension of "ethicality" over which we can measure ethical systems, or even individual ethical decisions. You can't assign a score to your ethical system and a different score to mine. Our ethical systems are either self-consistent and reality-consistent or wrong, but everything else about them is subjective and individual and immeasurable.

>They have no way to financially catch up with the rest of the world even though they shared their tribal knowledge 10, 20, or 50 years ago.

This has nothing to do with ethics. This is a problem (my opinion, your opinion) with capitalism. Capitalism is great for rapidly improving material conditions through specialization and trade, but, this is a huge problem. If wealth is very unequally distributed, trade between poors stops. To make capitalist / free market systems work beyond basic materialism, we need robust redistribution of wealth mechanisms, or the whole "game" grinds to a halt.

>civilization as a whole hasn't really cared about this.

Nor can it / "should" it. Wtf does this have to do with anything? This isn't an ethical question. It's not a civilization-wide decision. It's a collection of smaller decisions made by people and small groups of people. Why are you bringing this up?

>And even if you have this person a job, for life let's say, what's to stop them from becoming corrupt? What's to stop bribes or threats from happening? Who should pick this person? We've tried to do it to some extent with the United Nations, but you can easily see how effective they are when things like Ukraine rolls around.

Why are you talking about one person? You find the cheapest / least interested person or group of people for the specific decision in question, to improve ethicality / neutrality of decisions, and to reduce moral hazard.

>It's another reason why there can be no such thing as a disinterested party when it comes to humanity.

But there's no reason to find a disinterested party when it comes to humanity. Humanity doesn't make any decisions as a group.

>It is just another reason why you have to move beyond the earth of you wish to evaluate the ethics of the earth.

Again, what's the reason for "evaluating the ethics of the earth"? The point in this conversation is to make more ethical decisions, which is easily accomplished by outsourcing them to disinterested third parties. The original premise, making AI ethical, is unimportant, because that simply means making an AI's ethics align with a particular BI, which is meaningless, because ethics are subjective and individual.

>I was trying to make a cheeky comment on humanity and describe a wish for something that could never happen.

What wish is that?

Human ethics should be monitored by disinterested third parties, more of the time, because it will improve the neutrality / ethicality of decisions, which will make the world a better place.

I don't get why you're stuck on this idea that one singular third party must monitor all of humanity / civilization as a whole, at once. What good is that wish? That's why I originally replied: I get your cheap joke, but this is actually a very good idea, one we already implement and should implement more of, to make the world better by making more ethical decisions.

0

minterbartolo t1_jcc8rmi wrote

Happy to keep blowing holes in your rants in between my rocket-science day job.

Social engineering is a manipulation technique that exploits human error to gain private information, access, or valuables. In cybercrime, these “human hacking” scams tend to lure unsuspecting users into exposing data, spreading malware infections, or giving access to restricted systems.

0

Pippin987 t1_jcc91hm wrote

Those are the more extreme cases, which would preferably be handled by real therapists, yeah. But much of the world's population has no access to therapy, and an AI semi-therapist that could help people with mundane therapy would seem helpful and could keep people from needing actual therapy later.

Also, a lot of people who do need therapy don't take the step of going into therapy, because it's a daunting thing to many, or they think they don't need it. Being able to talk to an app on their phone about their issues could help, and if it's anything serious, the AI could refer them to a real therapist.

1

pauljs75 t1_jccbojw wrote

The kinds of jobs that aren't already subject to outsourcing. Mostly trades work with heavy and/or high-risk labor.

If it's something that can be done over the phone, like a call-center worker's job, it can be done by an AI at some point. Or at least, the time window before it happens is significantly narrow.

1

chill633 t1_jccfh6i wrote

You're a therapist and have this opinion? Wow, how times have changed in the nearly 60 years since the advent of ELIZA.

1

Cdn_citizen t1_jccpvvb wrote

Nice try, but my friends who are actual rocket scientists don't have time in their day jobs to waste on Reddit. They're busy trying to get to Mars.

Who's moving goalposts now? We are talking about licensed human therapists versus hacking.

Now you're bringing up a new term, "human hacking".

Maybe if you have so much free time you should Google what you say before you say it.

1

Independent_Canary89 t1_jccr1wj wrote

Anything physically taxing - think most blue-collar work. Pretty much anything requiring a computer, or requiring creativity, will be automated.

As it stands, humans are really only good for physical labor. I think there's irony in the fact that, in an age of advanced technology, what matters most about a person is their ability to do manual labor.

1

Svarog1984 t1_jcd52jx wrote

Paradoxically, the "oldest job" is the most immune to the newest technologies.

2

zenzukai t1_jcd6qml wrote

Honestly, I think therapist is one of the first jobs on the chopping block. I think automation is the ONLY way to promote better habits effectively.

You'd be better off as a sex worker. They'll still be cheaper than a sex-bot for a while.

2

Shadowkiller00 t1_jcd9ten wrote

>What wish is that?

I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying. I can't be wrong about it because it's a wish and a silly one at that. You can argue all day that we can and should do it ourselves, that my desire doesn't make sense because that isn't the way ethics work, that my definitions are wrong, but that has nothing to do with anything I'm saying.

I literally don't have to read a single other word you said because everything you are saying is irrelevant.

1

the-real-macs t1_jcdaoqq wrote

Okay, yeah, that's what I thought, you don't have the faintest beginner's knowledge about how it's actually accomplished. Should've known when you were implying that the concept held any relevance to the behavior of a neural network, but I thought I'd make sure.

1

bound4mexico t1_jce0mez wrote

>I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying.

Ok. What you actually said was

>human ethics should be monitored by an uninterested third party.

and they are, all the time.

>I can't be wrong about it because it's a wish and a silly one at that.

Indeed. What I've clashed with you over is not that wish, but all the other things you've said.

>I literally don't have to read a single other word you said because everything you are saying is irrelevant.

lol, nice try. You're mimicking me, but there's no meaning in what you're saying. What I wrote is relevant. The wish is irrelevant. The idea that a single person be hired to judge ethics is irrelevant, yet you repeatedly fixated on it. The idea that all of humanity ought to be judged on all their ethics at once is irrelevant, yet you repeatedly fixated on it. The idea that humans ought to be more ethical by outsourcing decisions to disinterested third parties is relevant. We're discussing it. You brought it up (you said nothing about aliens in the OP).

You don't have to read a single word I write because you're free, but the words I write are relevant, whether you read them or not.

0

Shadowkiller00 t1_jce3t74 wrote

Since you get to decide what I'm saying, how about I decide what you said.

>Let an uninterested (human) third party select the ethical thing, and then (all first) parties are pre-bound to abide its decision.

See you said "AN uninterested (human)". This implies one person. I only fixated on the words you said.

>I'm expressing a desire to have aliens come judge us as a species. That's it. That's all I'm saying.

>Ok. What you actually said was

>Human ethics should be monitored by an uninterested third party.

It's weird. It's almost like the first time I said it, you didn't comprehend what I said, so I followed it up by clarifying. You parroting my words back to me, without comprehending that the second part is a clarification, only proves that you have no idea what I am talking about.

Nothing you say is relevant because we are having two completely different conversations. I'm having one where I explain to you what I mean, and you are having one where you are off in the field preaching on a soap box about a related but otherwise irrelevant subject. The fact that you want your words to be important doesn't make them relevant to the fact that I want aliens to judge humans.

1

bound4mexico t1_jce7qze wrote

> Since you get to decide what I'm saying

I don't. I quoted you directly.

>you said "AN uninterested (human)".

No. I said "an uninterested (human) party. And (human) parties can be one or more people. It in NO WAY "implies one person".

>I'm having one where I explain to you what I mean

You're having one where you change the meaning of what you say, NOT explaining what you mean. There are no aliens in the OP, that's a change in meaning, NOT a clarification of meaning.

>I want aliens to judge humans.

Yes. You've already said this (but didn't say this in your OP).

Most aliens that would judge humans aren't disinterested third parties, though. And what you said was

>Human ethics should be monitored by an uninterested third party.

Which they already often are, and ought to be even moreso.

The third party can be a single person, but it's often multiple people. Juries, the supreme court, district courts, panels, subcommittees, etc.

0

Due_Menu_893 t1_jcehjte wrote

I think (hope) people will come to realise that if AI does all of our jobs, there will be nobody with money to buy the products it makes. Also, people will feel miserable due to a lack of purpose in society. Therefore I believe companies will choose to work with humans, even if their quality and profit margins are lower.

I know, I'm a bit naive.

1

Shadowkiller00 t1_jcemk69 wrote

>No. I said "an uninterested (human) party. And (human) parties can be one or more people. It in NO WAY "implies one person".

So wait, what you said CAN mean one person? What? No way! So the way I read it is completely legit? Why are you correcting me? Is it because I'm not reading what you mean?

It's almost like the person who reads what is written can interpret a sentence differently than the person who wrote the sentence intended. Then the person who wrote it can choose to clarify that sentence later, and the person who read it can't really argue because it's the person who wrote the sentence that knows what they were trying to say regardless of how successful they were in saying it in the first place.

Can I make this any clearer?

Okay I'll try to explain this like I am talking to a child. I'm trying to show you an example of me doing to you what you are doing to me. When I originally read your comment, my brain automatically interpreted it as a single person because it is a legitimate way to read the sentence. It was only upon you being confused as to why I kept bringing it up that I finally went back and reread what you wrote and realized that I made a mistake in my interpretation of what you said. It doesn't technically matter because I was only saying one person because I thought you had said one person and I easily could rewrite everything I said previous to now in reference to a group and it would still be just the same.

Now I'm using this as an example to get you to reflect on the fact that a reader can make a mistake in understanding what a writer meant. I'm not saying that all writers are perfect; perhaps I could have written my original statement better, just as you could have used slightly different wording or punctuation to head off what turned out to be my mistake. But I spent a mere two seconds crafting my poor wording, never for a second believing that a single person would care what I wrote, much less have a long-form argument with me about what I meant. Even our initial repartee was mostly me being confused about why there was a problem, alongside the fact that we disagree on some basic tenets of ethics. Since realizing where the misinterpretation happened, I have stalwartly focused on clarifying, so that you may go back and reread my original statement the way I intended it.

You disagree with me on certain foundational concepts of ethics and on the definition of disinterested. That's fine. You said yourself that ethics are subjective, and I agreed, which means neither of us can be objectively right or wrong. All of that disagreement was a sidetrack, because I never wanted to end up in that conversation in the first place. I only wanted to be a bit silly, have a soft chuckle to myself, and move on with my life. I've got nothing else left to say, and if you still don't get it, it isn't because I didn't try.

I'm incapable of letting anyone else have the last word. It's a failing I have. So if you'd like to be the better person, just quietly move on. If you are also incapable, then either block me or say whatever it is you think you haven't already said and I'll block you. I hate getting to that point in a conversation, but you have shown no signs of wanting to end this peacefully.

1

dapicis804 t1_jcf6nmd wrote

ALL jobs will go. To those who object that AIs/robots will never be as dexterous/creative/empathetic/whatever as humans, the sad truth is that we will lower our standards and accept worse products made by AIs/robots. We're already accepting subpar machine translations, for example. Not that we can do anything about it anyway.

1

singsix t1_jcfbijv wrote

I used ChatGPT for therapy questions, though... I don't trust or like human therapists, and also it's free.

2

bound4mexico t1_jcg0drw wrote

> So wait, what you said CAN mean one person? What? No way! So the way I read it is completely legit? Why are you correcting me? Is it because I'm not reading what you mean?

Party CAN mean one person or multiple people. You are wrong to fixate on it as ONLY meaning one person. Human is an adjective in my sentence. There is NO valid (English) interpretation of my statement in which human is a noun, which WOULD imply singularity. The way you "read" it (took it out of context) is completely illegitimate. You intentionally removed the noun "party" and pretended that (human) was the noun, not an adjective.

>It's almost like the person who reads what is written can interpret a sentence differently than the person who wrote the sentence intended.

Only sometimes. Only in legitimate English ways. If you don't say aliens, then aliens are not implied by "third party". That's not clarification. That's a complete change in meaning.

>I'm trying to show you an example of me doing to you what you are doing to me.

But you're failing. You're NOT quoting me with full context. I quoted you with full context. Your OP in no way even implies aliens. That's not me interpreting what you wrote differently, and it's not me interpreting what you wrote in an illegitimate way (one not allowed by the rules of English). You chopped off "party", which was the noun in my statement and isn't singular, and you fixated on the POSSIBILITY of a party being singular as if it were a CERTAINTY. That's the difference.

>When I originally read your comment, my brain automatically interpreted it as a single person because it is a legitimate way to read the sentence.

Yes. It's possible for a third (human) party to be a single person. But you fixated on that possibility as if it were impossible for a third party to be any more than a single person. That's the problem. It's foolish for a single person to judge all of humanity's ethics at once. What would that even mean?

>I finally went back and reread what you wrote and realized that I made a mistake in my interpretation of what you said.

Thank you.

>I easily could rewrite everything I said previous to now in reference to a group and it would still be just the same.

No. It goes from completely impractical to quite practical if you use groups of people as third parties instead of an individual.

>Even the idea of taking someone and separating them from humanity so that they could be uninterested could be considered unethical.

Makes no sense, and is entirely based on your errant interpretation. We both agree that this idea is unethical. But it's not what I said, and it's not even implied by what I said.

>My point is that, to judge the entire human civilization as a whole, you must find someone 100% disinterested.

Is wrong. You don't have to find anyone 100% disinterested. Just someone mostly disinterested: disinterested enough to be useful as a mediator between the first (human) parties.

>And even if you have this person a job, for life let's say, what's to stop them from becoming corrupt? What's to stop bribes or threats from happening? Who should pick this person?

Also makes no sense if rewritten from individual to group. There is no person or group that needs to hold this job for life. You get the cheapest, disinterested-enough people, those least likely to be corrupted, to serve for the appropriate ethical judgements. We already do this with jury selection. What's to stop juries from becoming corrupt? What's to stop bribes or threats from happening? Who should pick these people? The answers already exist. We should use disinterested (obviously not 100%, because that doesn't exist) people to monitor human ethics, just as we already do, more of the time, for more decisions, because it makes the decisions better, which makes the world better.

>you could have used slightly different wording or punctuation in an attempt to avoid what turned out to be my mistake.

No. I used (human) explicitly to rule out AI, and to prevent you from mistaking my statement as allowing third parties to be any AI or inhuman BI. It's not possible for anyone who speaks English to interpret (human) as the noun. Party is the noun (or third party). It's entirely your mistake. You misinterpreted my statement in a way that is illegitimate by the rules of English. Human can't be the noun there.

>You disagree with me on certain foundational concepts of ethics and the definition of disinterested.

Not sure what those are. Pretty sure we agree about disinterested; we have both explicitly stated that there's no such thing as a perfectly, 100% disinterested person for judging the ethics of other people. But that doesn't matter, because a very disinterested person is still useful for judging the ethics of other people.

>I have stalwartly focused on trying to clarify

No. You tried to change, not clarify, the meaning of your statement.

>human ethics should be monitored by an uninterested third party

is true, and I agree with it.

>I[...] desire to have aliens come judge us as a species.

is NOT a clarification of

>human ethics should be monitored by an uninterested third party

It's a completely different statement. Quit your bullshit.

>you have shown no signs of wanting to end this peacefully.

There is no violence happening here. WTF are you talking about? Discussing things with words is peaceful. Violence is un-peaceful. Don't threaten to block me because you don't like having your failings pointed out to you. That's weak.

1

Captain_Quidnunc t1_jclzk6a wrote

"Got any of these studies?" Please.

Got any studies that say otherwise?

If you do... I'm sure the psychological community would love any data refuting the claim that humans are more comfortable talking to computers than to them. Granted, the Google search history, let alone the AI chat logs, of any living human would immediately falsify that data and render it moot. But I'm sure they would love to hear about it.

If you are going to search for this sort of data, I would actually suggest Consensus. It's better than Google for finding peer-reviewed data.

And make sure to differentiate between studies gauging the public's stated preferences and studies of observed behavior. Because that's the other side of this coin. We lie to our doctors. We tell the truth to computers.

1

just-a-dreamer- t1_jcm9u6c wrote

Every job. And yours too. The end goal of AI automation is unemployment for all.

If you enjoy a middle-class lifestyle, you charge customers accordingly. At some point, customers will seek the same service at a cheaper price.

1

No_Expression2878 t1_jcpv3mu wrote

Almost no medical profession can be taken over by AI in the near future. How could AI treat your teeth?

2