Comments

Jewggerz t1_j74m8k4 wrote

AI can't possibly do a worse job of governing the US than the actual Congress.

72

cmVkZGl0 t1_j75ihwr wrote

It will do a better job because it hasn't been designed to be greedy and self-serving.

23

BlackBoxGamer t1_j77r3n7 wrote

Well, that depends.

There are cases of AI being discriminatory because of the data it was trained on.

I believe an article recently showed that an AI used for sifting through job applicants was heavily biased against women, because the hiring data it was trained on was itself biased.

The same thing could happen here.

3

ElectroFlannelGore t1_j741qzi wrote

"Judges worried impartiality of AI may negatively affect their ability to dole out unfair punishments based on incorrect biases."

64

Fake_William_Shatner t1_j7495lo wrote

An AI lawyer might not know its client is Black or dropped out of high school, for instance. And nobody went to the AI lawyer's fraternity, so THAT goes right out the window.

31

I_ONLY_PLAY_4C_LOAM t1_j74g5mn wrote

Au contraire, there's already AI being used in the criminal justice system, and it's incredibly biased.

35

ThMogget t1_j74loqm wrote

Which means we can easily measure and remove it. Good luck doing that with humans.

7

TrailHazer t1_j7585ix wrote

“Three arrests for a Black person could indicate the same level of risk as, say, two arrests for a white person” -- and this is a solution offered in the article. The whole argument is the same as in Weapons of Math Destruction: you can't use data like zip code to determine the best policing strategy for that zip code. Beyond dumb, and it falls apart the moment you talk to anyone outside feel-good-about-yourself land for white liberals.

−4

NoteIndividual2431 t1_j74kiam wrote

That is one interpretation of the data, but not the only one possible.

Suffice it to say that AI can't actually be biased in and of itself, but it ends up adopting whatever biases exist in its training data.

−8

I_ONLY_PLAY_4C_LOAM t1_j74n6it wrote

I have no idea what point you are trying to make here. If an AI adopts the bias of its training data, then it's biased lol.

10

Fake_William_Shatner t1_j77mj24 wrote

The bigger problem is you not understanding AI or how bias happens. If you did, the point NoteIndividual was making would be a lot more obvious.

There is not just one type of "AI" -- for the most part it's a collection of algorithms. Not only is the type of data you put in important -- even the order can change the results, because it doesn't "train on all the data all at once." So one method is to randomly sample the data over and over again as the AI "learns." Or, better to say, the algorithm abstracts the data with neural nets and Gaussian functions.
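
A toy illustration of that ordering point, with a deliberately simple made-up model and data; reshuffling each pass means each run walks a slightly different path to its final weights:

```python
import random

data = [(x, 2 * x + 1) for x in range(100)]  # made-up (input, label) pairs
w, b, lr = 0.0, 0.0, 1e-4

for epoch in range(10):
    random.shuffle(data)          # re-sample the order on every pass;
    for x, y in data:             # a different order nudges the weights
        err = (w * x + b) - y     # along a slightly different path
        w -= lr * err * x
        b -= lr * err
```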

Very easy to say "in an area where we've arrested people, the family members of convicts and their neighborhoods are more likely to commit crime." What do you do once you know this information? Arrest everyone or give them financial support? Or set up after school programs to keep kids occupied doing interesting things until their parents get home from work? There is nothing wrong with BIAS if the data is biased -- the problem comes from what you do with it and how you frame it.

There are systems that are used to determine probability. So if someone has a symptom like a cough, what are the chances they have the flu? Statistics can be compiled for every symptom, and the probability of the cause can be determined. Each new data point, like body temperature, can increase or decrease the result. The more data over more people over more time, the more predictive the model will be. If you are prescribing medicine, then an expert system can match the most likely treatment with a series of questions.
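
That kind of symptom-by-symptom updating can be sketched as a tiny naive-Bayes-style calculation; every number below is invented for illustration, not a real medical statistic:

```python
prior = 0.05                # P(flu) before seeing any symptoms (invented)
likelihood = {              # (P(symptom | flu), P(symptom | no flu)), invented
    "cough": (0.80, 0.20),
    "fever": (0.70, 0.05),
}

odds = prior / (1 - prior)
for symptom in ["cough", "fever"]:          # each data point shifts the result
    p_if_flu, p_if_not = likelihood[symptom]
    odds *= p_if_flu / p_if_not

print(f"P(flu | cough, fever) = {odds / (1 + odds):.2f}")   # ~0.75
```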

We need to compile data on "what works to help" in any given situation. The police department is a hammer and they only work on nails.

0

I_ONLY_PLAY_4C_LOAM t1_j77vilc wrote

This is the second time in a day that a redditor has accused me of not understanding technology when I disagree with them about a point regarding AI. I love seeing people condescend to me about technology that I have years of experience working with in academic and professional settings.

"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently. Biased models are not a good reason to abandon the constitutional and civic principles this country was founded on.

1

Fake_William_Shatner t1_j78i4oh wrote

>"The data says black people commit more crime" is still not a reason to build automated systems that treat them differently.

I agree with that.

However, your blanket statement about what it does and doesn't do sounded like saying "don't use a computer!" because someone used one wrong one time.

My entire point is that it's about the data they choose to measure and what their goals are. Fighting "pre-crime" is the wrong use for it. But identifying people who are at risk and sending them help? I think that would be great.

1

KickBassColonyDrop t1_j77s2o8 wrote

If the AI is unaware of the race of the client, doesn't that mean it's actually impartial? Because it's simply treating the being as a client/human and not introducing any bias?

2

Fake_William_Shatner t1_j78jorg wrote

I was just kidding. However, if you were giving someone legal advice about going to trial -- it makes a difference in venue and jury selection.

I'm not exactly sure on the stat, but I thought Black kids were given around 2x longer punishments than white kids, because judges tend to treat them as older.

And I'm sure you'd want statistics on outcomes -- just to know what your chances of winning versus pleading would be. And would an AI ask to appeal the case to another venue to find a jury of peers?

The human factor is important, but it would be nice to be more impartial.

1

needadvicebadly t1_j75eckx wrote

AI is never impartial. It just has novel biases you don't know about.

16

brain_overclocked t1_j75pak2 wrote

On the one hand, that is true. But there is a burgeoning field in understanding AI biases and developing techniques to minimize them.

Because on the other hand, we humans are riddled with biases, some stronger than others and expressed differently by each person. With training and education it is possible to minimize our various biases, though perhaps not eliminate them; but that can take time.

With an AI, if a bias is detected and a means is developed to reduce it, maybe even eliminate it, then updating the AI accordingly is far quicker. Additionally, if we are not able to eliminate or minimize every bias in an AI, then an AI that expresses fewer biases than a human may be valuable in and of itself. And if, for whatever reason, we come across a bias in an AI that cannot be reduced to an acceptable degree for its designated task, then simply being aware of it may let us mitigate or guard against it in some other, external way.

2

peabody t1_j75yhdg wrote

I wouldn't call AI unbiased. It tends to inherit the biases of the data sets it's trained on.

3

henningknows t1_j73ugi1 wrote

I haven’t seen a single thing about this that makes me think its widespread use is a benefit to society. Mostly it looks like it will cause problems.

31

ex_sanguination t1_j73y4cz wrote

Meh, this happened during the industrial age as well. It's just new technology making certain jobs/roles obsolete. It's heartbreaking for those who are being affected, but it's a step in the right direction as a society.

20

I_ONLY_PLAY_4C_LOAM t1_j74f5si wrote

I don't think it's made very much of anything obsolete. It's still pretty shit. If anything, it's degrading the quality of content on the internet and making it less useful.

9

lycheedorito t1_j7556hw wrote

You can already catch ChatGPT responses on Reddit, and ArtStation was recently flooded with AI art... They now have a filter, but it doesn't catch people being fraudulent about authenticity. Both of these things make me less inclined to engage or care. I suppose if you are completely unaware of it you might not notice, but people who are aware do. Is the idea that we'll all just tell AI to respond to everything for us, so we're just proxies for artificial conversation?

4

ex_sanguination t1_j74gk2s wrote

Oh for sure, it hasn't caused any major upheavals yet. But once it's refined it'll start to make a more noticeable impact. This is all in the future. Give it 10 years? But who knows, maybe this is the same hoopla as self-driving cars were back in 2015.

3

I_ONLY_PLAY_4C_LOAM t1_j74hkx2 wrote

−5

ex_sanguination t1_j74i8h3 wrote

Right, but the fear that taxi services, truck drivers, delivery drivers, etc. would be replaced was blown out of proportion. Can it still happen? Sure. But people were saying there would be massive change in the workforce by the early '20s.

2

Trotskyist t1_j74wmh1 wrote

I mean, self-driving taxis are a thing now in several cities/states and are actively expanding into new markets. Obviously, it hasn't taken over yet and become the norm (if it does at all) but it's absolutely a growing industry.

−1

henningknows t1_j73ycc6 wrote

Why is it a step in the right direction?

5

joanmave t1_j74ffqx wrote

Because it provides value. It generates answers to questions that would otherwise require more extensive due diligence. Instead of a human scouring the internet for answers, it can directly and comprehensively answer the question in the context it is asked, with explanations. For instance, software developers are using it to get recommendations of actual working code implementations, solving problems much faster and being more productive.

Edit: I want to add that the answers are very specific to the problem stated by the user. ChatGPT does not provide a general answer but a very specific answer for the problem at hand.

1

henningknows t1_j74fzs4 wrote

Fair enough. I can see that being useful once all the kinks are worked out.

1

ex_sanguination t1_j73zgq2 wrote

Customer service roles across the board. It frees up time for workers to handle more important/critical-thinking tasks vs. simple customer-service-based ones. It's a fantastic tool to bounce ideas off of, cultivating a person's/staff's creativity. It's also brilliant at taking information and writing articles/inquiries.

Regulation will be needed, but overall it's a net gain.

−3

henningknows t1_j73zwqa wrote

Yeah, well, when I start reading that schools are considering not having written assignments anymore, that worries me. People need to learn how to do things like that and think for themselves.

13

ex_sanguination t1_j740u29 wrote

Understandable, and like I said, regulation is going to be needed. But ask yourself this: kids' curricula nowadays (USA) are test-based and involve little critical thinking. The fact a fledgling AI can pass as a high school student is an issue, but is it an AI problem or an issue with how our schools operate/teach?

Also, software to recognize AI-generated content is already being made, and I'm sure schools will implement a submission system that verifies their students' work.

6

henningknows t1_j7414m9 wrote

Yeah, I hear that. They have my kid memorizing spellings and definitions of words. He gets an A on every test. Then he forgets all of it a few weeks later.

7

ex_sanguination t1_j741dxy wrote

It's all above my pay grade, and I don't envy you as a parent in today's climate, but I'm sure your little one's gonna be alright :)

Hell, remember cursive? 🤣

3

Art-Zuron t1_j742rzs wrote

I still write in pseudo-cursive to this day, and while people say it's pretty, it's also a bitch to read.

5

demonicneon t1_j74pfi3 wrote

You should try and learn architectural print. I also wrote cursive but switched in uni, and it's more legible and I write just as fast.

1

Jaysnewphone t1_j747nik wrote

I remember it but I don't remember why I had to learn it.

2

demonicneon t1_j74pcgn wrote

Memorisation and spelling are good. It wasn't long ago people were saying autocorrect meant you didn't need to learn spelling, which is basically the same thing as this atm, just for longer-form writing.

Memorisation and spelling without putting them into practice, i.e. writing essays and reports and fiction, is bad because, as you say, people just forget it if they don't use it.

2

Trotskyist t1_j74x4um wrote

>Also, software to recognize AI generated content is already being made and I'm sure schools will implement a submit system that verifies their students work.

I wouldn't be so sure. As soon as an algorithm is created to detect AI content, that exact same model can and will be used to further train the neural network to avoid detection. This is the basic premise behind generative adversarial networks (GANs), one of the bigger ML techniques.
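
Roughly, the adversarial loop looks like this toy PyTorch sketch (the data, sizes, and numbers are invented; it shows the shape of the idea, not a production GAN):

```python
import torch
import torch.nn as nn

# Toy GAN: the detector (discriminator) is the training signal for the
# generator, so the generator learns to evade exactly the model built
# to catch it.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # detector

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 5      # stand-in for "human-made" data
    fake = G(torch.randn(64, 8))           # generator's attempt

    # 1) train the detector to tell real from fake
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) train the generator against the detector's judgment:
    #    "make the detector say 'real'"
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```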

2

lycheedorito t1_j755lf9 wrote

And it will catch false positives and people will be punished for having done nothing.

1

AccomplishedBerry625 t1_j74bdtp wrote

It happened with Google and Wikipedia as well.

Personally I think it’s just Google search on steroids, like a natural language API for Google Dorks

2

KeepTangoAndFoxtrot t1_j740xp5 wrote

Do you mean just in general? I'm in edutech and we're working on the development of a tool that will build personalized lesson plans for teachers using something similar to ChatGPT. So there's one thing for you!

15

henningknows t1_j741a49 wrote

Well, for my job it's not there yet. Everything I tried to use it for, it gave me nothing useful.

−8

Fake_William_Shatner t1_j748na3 wrote

When you work at a law firm and the AI is doing the work of artists and writers, you might be able to tell them: "Be flexible, find another career."

When you hear about an AI creating legal documents and helping people in court. "Everybody sue this guy!!!!" Hey, and you could probably use an AI Lawyer to write that lawsuit -- make sure to send a LOT of them. Bankrupt the business before they can test it out!

9

I_ONLY_PLAY_4C_LOAM t1_j74ft6v wrote

I'm not convinced that this technology, in its current form, will replace lawyers. It lacks the precision required by legal reasoning and still gets shit wrong all the time. Furthermore, as a software engineer, I have doubts about whether this tech is capable of solving these problems without radical new ideas. I foresee a lot of people giving themselves a lot of headaches by thinking they can rely on this technology, but not much more than that.

13

likethatwhenigothere t1_j74qmyo wrote

I asked it something today and it came back with an answer that seemed correct. I then asked it to give me examples. It gave two examples, and the way they were written seemed absolutely plausible. However, I knew the examples and knew that they were wrong. It gave other examples that I couldn't verify anywhere, yet as I asked more questions it kept doubling down on the previous examples.

I won't go into detail about what I was asking, but it basically said the Nintendo logo was made up of three rings to represent three core values of the business. I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't. So fuck knows where it got the info from.

6

I_ONLY_PLAY_4C_LOAM t1_j74rwgf wrote

It's just giving you a plausible and probabilistically likely answer. It has absolutely no model of what is and isn't true.
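
In cartoon form, generation is just repeatedly sampling "what word plausibly comes next" from learned statistics; nothing in that loop consults a fact base. A toy sketch with invented probabilities:

```python
import random

# Invented next-word statistics; a real model has a distribution over a
# whole vocabulary at every step, learned from text, not from a fact base.
next_word_probs = {
    ("the", "logo", "represents"): {"quality": 0.4, "unity": 0.35, "tradition": 0.25},
}

def sample_next(context):
    dist = next_word_probs[context]
    # pick a word in proportion to how often it plausibly follows;
    # nothing here checks whether the resulting claim is true
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(sample_next(("the", "logo", "represents")))
```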

13

likethatwhenigothere t1_j76c7nb wrote

But aren't people using it as a factual tool and not just getting it to write content that could be 'plausible'? There's been talk about this changing the world, how it passed medical and law exams -- which obviously need to be factual. Surely if there's a lack of trust in the information it's providing, people are going to be uncertain about using it. If you have to fact-check everything it provides, you might as well just do the research/work yourself, because you're effectively doubling up the work: you're checking all the work ChatGPT does and then having to fix any errors it's made.

Here's what I actually asked ChatGPT in regard to my previous comment.

I asked if the Borromean symbol (three interlinked rings) was popular in Japanese history. It stated it was, and gave me a little bit of history about how it became popular. I asked it to provide examples of where it can be seen. It came back saying temple gates, family crests, etc. But it also said it was still widely used today and could be seen in Japanese advertising, branding and product packaging. I asked for an example of branding where it's used. It responded...

"One example of modern usage of the Borromean rings is in the logo of the Japanese video game company, Nintendo. The three interlocking rings symbolize the company's commitment to producing quality video games that bring people together".

Now that is something that can be easily checked and confirmed or refuted. But what if it's providing a response that can't be?

2

Fake_William_Shatner t1_j77obea wrote

These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the middle ages to tell you how a rocket works.

The comments are "evil" or "good" and don't get that "evil and good" are results based on the data and the algorithm employed and how they were introduced to each other.

Chat GPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by prompts, the dataset it is drawing from, and the vagaries of what set of algorithms they are using that day -- I'm sure it's constantly being tweaked.

And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative" -- and its understanding of human language helps it do a much better job of searching for the right code to cut and paste. There's a difference between term papers and a legal document and a fictional story.

The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that, I think, proves it can do something easier like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.

We THINK walking and talking are easy because almost everybody does them. However, for most people it's the most complicated thing they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time before they can do simpler things.

Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law and rules. It's beyond ridiculous to think THIS is going to be that hard for AI if they can converse and make good art.

1

ritchie70 t1_j75anat wrote

I played with it today. It wrote two charming children’s stories, a very simple program in C, a blog post about the benefits of children learning ballet, and a 500-word essay about cat claws, answered a “how do I” question about Excel, and composed a very typical corporate email.

The fact-based items were correct.

I may use it in the future if I need an especially ass-kissy email.

2

Fake_William_Shatner t1_j77myki wrote

>I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't.

You are working with a "creative AI" that is designed to give you a result you "like." Not one that is accurate.

AI can definitely be developed and trained on case law and give you valid answers. Whether or not they've done it with this tool is a very geeky question that requires people to look at the data and code.

Most of these discussions are off track because they judge "can it be done" by current experience -- when people don't even really know what tool was used.

1

lycheedorito t1_j754icl wrote

It won't replace artists either. Like the chatbot, it gets shit wrong, and it doesn't understand what it's making. You still need artists who understand art to curate and fix things, at the very least. Every time I explain this it feels like I'm talking to a wall, which is not surprising. Probably the same for writing, or music, or whatever.

5

KSRandom195 t1_j74njna wrote

Everyone will say this about their pet industry.

“Clearly my industry is harder than all the others because <reason>.”

No, your pet industry isn’t special, it will either be replaced or not like all the others.

Being a technical person, I don’t think AI is where it needs to be yet to replace practically any industry. If I’m wrong, it’s not really a problem I was going to be able to deal with anyway.

1

demonicneon t1_j74p1a9 wrote

It’s still very much a tool.

2

KSRandom195 t1_j74p4uz wrote

As a tool I see great potential. As a replacement I do not.

2

I_ONLY_PLAY_4C_LOAM t1_j74rg37 wrote

Having actually worked in legal technology, I'm honestly not sure what this does for existing lawyers. As I said before, legal documents require extremely specific and precise language. Lawyers are likely to have templates for common documents their firms create, and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of. The actual money to be made in legal technology is not in generative AI, but in document processing and search. Lawyers are increasingly having to deal with hundreds of gigabytes or even terabytes of documents in a given case. OCR, which is also AI and is seeing use in the industry, makes handwriting searchable. Advanced search techniques make legal review, the real driver of cost in the legal industry, faster and cheaper. Making legal arguments in court is not the reason why interaction with the legal system can be so expensive.

9

Fake_William_Shatner t1_j77p3ko wrote

>legal documents require extremely specific and precise language.

Which computer software is really good at -- even before the improvements of AI.

>and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of.

Yeah, lawyers memorize a lot of stuff and go to expensive schools. That doesn't mean it's actually all that complicated relative to programming, creating art or designing a mechanical arm.

I agree that document processing and search are going to see a lot of growth with AI. But being able to type in a few details about a case and have a legal document created -- a discovery request, and the bulk of the bread-and-butter work that reuses the same templates over and over with a few sentences changed -- that's going to be AI.

Most of what paralegals and lawyers do is repetitive and not all that creative.

1

I_ONLY_PLAY_4C_LOAM t1_j74pox9 wrote

This attitude that tech bros have about disrupting industries they don't actually understand or know anything about is pretty funny sometimes.

1

Fake_William_Shatner t1_j77pz70 wrote

"Tech bros"? There are AI developers. If they team with some lawyers to double-check and they get good case law data -- I can guarantee you it isn't a huge jump to create a disruptive AI based on that.

Revisit these comments in about a year. The main thing that will hinder AI in the legal world is humans suing it to not be allowed. Of course, all those attorneys will use it and then proof the output. And sign their names. And appear in court with nice suits and make deals. And they won't let AI be used in court because it is not allowed. For reasons.

The excuse that it can give an inaccurate result does put people at risk, so more effort is required for accuracy. But AI will be able to pass the bar exam more easily than it beat humans at chess.

It's not funny, but sad, that people are trying to convince themselves this is more complicated than writing a novel or creating art.

1

I_ONLY_PLAY_4C_LOAM t1_j77zlt0 wrote

RemindMe! one year

Has the machine consciousness supplanted the fleshy meat bags in the legal industry.

1

Fake_William_Shatner t1_j78fcnq wrote

No -- I didn't say it would replace them. The legal system won't allow it.

I'm saying it will be used to create legal documents and win cases -- albeit with the pages printed out before they go in the courthouse.

This isn't about the acceptance, but the capabilities. If there is one group that can protect their industry it's the justice system.

1

henningknows t1_j7494j0 wrote

It will be interesting to see how some of those ideas pan out. An AI lawyer will definitely be put to the test quickly, with the fastest lawsuits deciding whether it will be legal or smart to have an AI that can create legal docs. As for writing, we have already seen you can’t copyright work created by AI, and my assumption is search engines will learn to identify and de-rank AI-written articles.

0

Fake_William_Shatner t1_j74cr07 wrote

I have full confidence in America's legal system to protect itself from innovation, efficiency and fairness.

An expensive lawyer no longer making it impossible for some people to be taken to jail, no longer burying people who challenge a corporation in a two-tiered justice system -- well, that's just not going to happen on their watch.

2

henningknows t1_j74d0ii wrote

Not totally sold that AI could bring that change. I can agree on the legal system sucking.

1

RollingTater t1_j74h8q5 wrote

Eventually AI will be able to build a better legal defense than a human can, and in that case it would be unethical to give people human defense teams.

However, that day is not today. ChatGPT has no hard internal logic. You can trick it into doing bad math for example, or sometimes it writes code that is just wrong.

I'm no lawyer but I'm assuming legal defense requires some sort of presentation of factual evidence, logic, and verification of that evidence. Right now you can't guarantee the AI hasn't just spat out a huge document of gibberish that looks right but has a hidden logical flaw.

4

henningknows t1_j74hll3 wrote

What makes you think an AI can make a better legal defense? You understand winning a court case is about persuading a jury just as much as having the law on your side.

5

RollingTater t1_j74hz4u wrote

Persuasion is the one thing ChatGPT can do really well. That's something that doesn't require any hard logic. And it's also why this tech is dangerously deceptive: it will be persuasively correct until it's not.

4

VectorB t1_j74o49s wrote

Wonderful, our system is not based on rules or fairness, but on the quality of the charisma rolls your lawyer makes.

2

lycheedorito t1_j755rt6 wrote

Also, AI is trained on existing work by humans. It's not going to do better than what it was trained on.

1

LionTigerWings t1_j74xlgy wrote

Could be really beneficial for education. It could essentially act as a tutor that's available 24/7. Schools will need to adapt to avoid cheating, but it should be possible to do that. Make testing and writing live with pen/paper.

Imagine asking it for clarification on a topic you're having trouble understanding, or asking it how to solve a math problem you're struggling with. Currently it's wrong from time to time, but it can be improved upon, especially by feeding it textbooks.

4

FlackRacket t1_j74viym wrote

There will be some benefits... Even with humans making all the decisions, a legal assistant AI bot might be able to tell you what the "proper" course of action is, according to the law, and let humans leverage it for context.

An AI bot might also be able to tell you where new laws are in conflict, making it easier to keep the legal system clean(er)

2

BidnessBoy t1_j74lm7b wrote

You’re also reading about this through sources that are inherently biased, simply because this AI is a threat to the jobs of the writers who create these articles and the editors who review them.

1

PedroEglasias t1_j74wrdh wrote

It's not that different from just Googling a question and repeating the first result

1

Druggedhippo t1_j75jxy1 wrote

If you ask ChatGPT the lifespan of an arctic fox, it'll give you the same result as Google would in its first result.

> Arctic foxes live for 3 or 4 years in the wild and up to 14 years in captivity.

But the real power of ChatGPT isn't that it can output a result; it's the primarily conversational aspect and its ability to merge multiple things into a coherent discussion whilst remembering what you said earlier.

Let's say you want to expand on that, so you ask ChatGPT "what about its diet?" It gives you a good result. To formulate that in Google, you'd have to write "arctic fox diet". And you'd have to open a new tab or perform a new search.

This is important, as this new search has no context; each search is independent of the others. With ChatGPT, it remembers that you are searching about Arctic foxes.

And it can link your context with entirely different subject matters.

> write a short owners manual

Suddenly it spits out multiple levels of items about how to look after an Arctic fox. Point 4 is enrichment. So you ask

> write more detail about point 4

And it spits out a paragraph about how the Arctic fox needs toys and enclosures and such. Take note that I didn't ask it about enrichment; I typed "point 4", and it KNEW I was talking about point 4 in the result it had given me. And now you want your 10-year-old to be able to help you look after it.

> write that for a 10 year old

Bam, now your owner's manual uses words and phrases appropriate for a 10-year-old. Now you want that in German?

> Translate to german

And oh no, the new Arctic fox you bought is lost. You have to make a lost poster. How would you even do one? Well, ask ChatGPT.

> write an lost poster for an artic fox

>Here's a sample lost poster for an Arctic fox:

> LOST: Arctic Fox
>
> Description: Our beloved Arctic fox has gone missing and we are very worried. He is a small, white-furred fox with pointy ears and a bushy tail. He is friendly but shy, so please approach him slowly.

> Last seen: [Insert location and date]

> If found, please contact: [Insert phone number and/or email address]

> Reward: [Insert amount or item for reward if desired]

> We miss our furry friend and are eager to bring him back home. Any information or sightings of the Arctic fox would be greatly appreciated. Thank you for your help!

Not only did it write the poster text, it added, on its own, additional descriptive text like the color, tail, and behaviour (shy!). That is linking things I didn't ask about or even know at that point. If I had told it that my Arctic fox only had 3 legs, you would have gotten something more like this:

> my fox only has 3 legs, rewrite the poster

> Sure, here's a revised version of the lost poster for an Arctic fox with 3 legs:

> LOST: 3-Legged Arctic Fox
>
> Description: Our dear Arctic fox, who has only three legs, has gone missing and we are heartbroken. He is a small, white-furred fox with pointy ears and a bushy tail. He moves with a unique gait and may be timid, so please approach him slowly.

--

Now, I'm not saying you can't get many of these facts from Google, but the ability to ask in "plain language" and have it answer in "plain language" is the game changer here.

Also note, regarding point 4 above: Google Nest can't understand what I'm talking about when I say "turn on the bedroom lights and set them to 100%"; it doesn't understand that "them" and chokes because it has no context. Now imagine something like ChatGPT powering the text parsing; its understanding of context makes it light years ahead of existing public implementations. I fully expect personal assistants like Siri, Alexa, Bixby, and Cortana to be revolutionized.
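
Roughly, that context trick works like this minimal sketch: the model itself is stateless, but the client re-sends the whole transcript on every turn. The `generate()` function here is a hypothetical stand-in for whatever model/API actually runs:

```python
def generate(messages):
    # stand-in for the real model call; a real system would send the
    # full `messages` list to the model and return its completion
    return f"(model reply, having seen {len(messages)} prior messages)"

history = []

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = generate(history)              # model sees the whole conversation
    history.append({"role": "assistant", "content": reply})
    return reply

ask("What is the lifespan of an arctic fox?")
ask("What about its diet?")                # "its" resolves through history
ask("Write more detail about point 4")     # "point 4" refers to an earlier reply
```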

4

PedroEglasias t1_j75kqzw wrote

Oh, I 100% agree it's more powerful than a Google search. I'm a dev and I use it every day to save me sifting through StackExchange results. I'm just pointing out to the haters that it's not that different from Googling information; it just saves you converting the results into a coherent/salient argument.

1

pwalkz t1_j75hjgr wrote

We haven't seen anything yet really. Just getting started

1

this_barb t1_j73xj74 wrote

It doesn't raise ethical questions. It just raises questions.

I can see this technology, as it matures, supplanting a shit ton of jobs.

8

Jaysnewphone t1_j74850q wrote

Nobody cared when it was manufacturing and farming jobs. They all thought it was wonderful. Now all of a sudden...

2

Inconceivable-2020 t1_j741ruh wrote

Just getting everyone ready for the Robot Overlords controlling everything.

4

Fake_William_Shatner t1_j749g09 wrote

The Robot Overlords' warm guidance and thoughtful stewardship, you mean. >!Do not piss off your future overlords.!<

1

Kingdarkshadow t1_j74dcw5 wrote

Still better than the power-hungry politicians or corporate executives.

1

yParticle t1_j73x1op wrote

I love the idea of replacing 90% of lawyers (or at least 90% of what they do) with AI.

2

JenMacAllister t1_j73xfoi wrote

A non-biased, logic-based AI making arguments in polite and respectful debate, without the chance of political influence or money.

Where do I sign up?

2

Vince1128 t1_j73xz5r wrote

Are you sure an AI can't be influenced at all?

20

FacelessFellow t1_j744i33 wrote

I’m sure that true AI will have a firm grasp of objective reality. Otherwise it’s not a very good AI.

Truth is not subjective. If we can program AI to be nothing but truthful, then it cannot be corrupted. Right?

1

Fake_William_Shatner t1_j749lhc wrote

>If we can program ai to be nothing but truthful, then it cannot be corrupted.

It can be useful. It can be checked. But saying something "cannot be corrupted" is the wrong way to approach this.

8

FacelessFellow t1_j74dpzg wrote

It’s like saying a math equation can be corrupted. It can be wrong (human error), but if it’s correct, it cannot be corrupted. 2+2=4 cannot be corrupted. Can it?

0

Fake_William_Shatner t1_j74fi55 wrote

No, it isn't like saying that.

With 2+2 you already KNOW the answer. It's 4. You already know the inputted data is perfect.

Creating an AI to make decisions is drawing from HUMAN sources.

And I think your idea that "objective reality" and "facts" are certain is not really a good take. We don't even observe all of reality. Our perceptions and what we choose to pay attention to are framed by our biases. And programming an AI requires that we know what those biases are and know what data to feed it to learn from.

FACTS are just data. They are interpreted. "TRUTH" is based on the viewer's priorities and understanding of the world. The facts can be proven, but which facts do we use? And TRUTH is a variable, different for everyone who says they know it.

8

FacelessFellow t1_j74vfea wrote

You don’t think there’s an objective truth/reality?

That’s a mind blowing concept for me.

−1

Fake_William_Shatner t1_j76u03l wrote

You can't really join the ranks of the wise people until you understand this. You don't think people with different perspectives and life histories and fortunes see a different "reality?"

If you get depressed -- doesn't that change what you see? If you take hallucinogenics, that alters your perspective. Your state of mind interprets and experiences life. Do you know whether you are rich or poor until you know what other people have or don't have?

Can you see the phone signals in the air, or do you ONLY get the phone call intended for you? You answer a call and speak to someone -- you now have a different perspective and slice of reality than other people. Without the phone with that one number, you walk around as if nothing was there. But that data is there and ONLY affects some people.

Do you see in all of the EM spectrum? No. Visible light is a very small slice of it. If you had infrared or ultraviolet goggles, you would suddenly have information about your environment other people don't. Profoundly color-blind people don't see the green or the red traffic light except by position. Someone who sees colors might forget whether the red light is on the bottom or the top -- they take it for granted that they can tell. And the blind now have auditory signals at street level -- their "knowledge" of the same environment sighted people occupy has changed for the better in that regard.

That's the challenge of data and science and especially statistics: what do you measure? What is significant to evaluate is a choice. And your view of reality is always in the context of the framework you have from society, your situation, your "luck", your state of mind.

A nice sunny day, and one person gets a phone call that their mother has died -- it's a different reality and "truth."

So, I hope you continue experimenting with this notion that there is not and never has been one reality because we all have a different perspective and we can't all look at the entire thing. We can't all hear it. We can't all feel it. We interpret the data differently and choose different parts to evaluate.

1

FacelessFellow t1_j770iej wrote

So atomic mass is subjective? The table of elements is subjective?

Your comment just made it sound like a perspective thing. It’s sounds like it’s all about people and their subjective reality.

Objectively, an atom has so many electrons. Or does the number of electrons change depending on who is observing?

If I put 3 eggs on the table, it will be 3 eggs for someone else. Even if they’re blind, they can touch the eggs. Or be told by someone that it’s 3 eggs. I don’t see what can change the fact that there’s 3 eggs on the counter.

1

Fake_William_Shatner t1_j77sjzw wrote

>So atomic mass is subjective? The table of elements is subjective?

So you want to compare SOCIAL ENGINEERING, which is subjective, to atomic mass?

There's no point discussing things with a person who breaks so many rules of logic.

>It’s sounds like it’s all about people and their subjective reality.

Yes. Like your reality, where you think atomic mass being a stable number everyone can determine ALSO covers whether someone's outfit makes them look fat.

There is "objective reality" -- well, as far as you know, so far, with humanity's limited perception of the Universe. But people interpret everything. Some people do not eat eggs because they are vegan. Three eggs is an objective fact. The "truth" that what you gave me is a good thing is an interpretation. And you assume how other people think based on your experience.

Reality and truth are subjective as hell. Facts are data points and can be accurate, but WHICH FACTS are we considering? "FACT: there are three eggs -- I win!" Okay, what were the rules? "That's a secret."

1

FacelessFellow t1_j74dgpw wrote

If there’s only ONE objective/factual reality, then we can program AI to perceive only that ONE objective/factual reality.

The sun is hot. Agree? You think a good AI would be able to say, “no, the sun is cold”?

The gasses we release into the atmosphere affect the climate. Agree? You think a good AI would be able to say, “no, humans cannot affect the climate”?

Science aims to be as factual and accurate as possible. I imagine a true AI would know the scientific method and execute it perfectly.

Yes, some scientists are wrong, but the truth/facts usually prevail.

I don’t know if I’m making sense haha

−3

Outrageous_Apricot42 t1_j74jacv wrote

This is not how it works. Check out the papers on how ChatGPT was trained. If you use biased training data, you will get a biased model. This has been known since the inception of machine learning.

9

FacelessFellow t1_j74nqc6 wrote

Is AI not gonna change or improve in the near future?

Is all AI going to be the same?

−4

Sad-Combination78 t1_j74y7wa wrote

Think about it like this: Anything which learns based on its environment is susceptible to bias.

Humans have biases themselves. Each person has different life experiences and weighs their own lived experiences above hypothetical situations they can't verify themselves. We create models of perception to interpret the world based on our past experiences, and then use these models to further interpret our experiences into the future.

Racism, for example, can be a model taught by others, or a conclusion arrived at by bad data (poor experiences due to individual circumstance). I'm still talking about humans here, but all of this is true for AI too.

AI is not different. AI still needs to learn, and it still needs training data. This data can always be biased. This is just part of reality. We have no objective book to pull from. We make it up as we go. Evaluate, analyze, and expand. That is all we can do. We will never be perfect. Neither will AI.

Of course one advantage of AI is that it won't have to reset every 100 years and hope to pass on enough knowledge to its children as it can. Still, this advantage will be one seen only in age.

6

FacelessFellow t1_j75215s wrote

So if a human makes an AI, the AI will have the human's biases. What about when AIs start making AIs? Once that snowball starts rolling, won't future generations of AI be far enough removed from human biases?

Will no AI ever be able to perceive all of reality instantaneously and objectively? When computational power grows so immense that it can track every atom in the universe, won't that help AI see objective truth?

Perfection is a human construct, but flawlessness may be attainable by future AI. With enough computational power it can check and double-check and triple-check and so on, to infinity. Will that not be enough to weed out everything but true reality?

1

Sad-Combination78 t1_j75312i wrote

you missed the point

the problem isn't humans, it's the concept of "learning"

you don't know something, and from your environment, you use logic to figure it out

the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited knowledge.

AI does not and cannot solve this, it is fundamental to learning

6

FacelessFellow t1_j757skq wrote

But I thought AI was computers. And I thought computers could communicate at the speed of light. Wouldn't that mean the AI could have input from billions of devices? Scientific instruments nowadays can connect to the web. Is it far-fetched to imagine a future where all collectible data from all devices could be perceived simultaneously by the AI?

1

Fake_William_Shatner t1_j74g1qn wrote

>If there’s only ONE objective/factually reality,

There isn't though.

There can be objective facts. But there are SO MANY facts. Sometimes people lie. Sometimes they get bad data. Sometimes they look at the wrong things.

Your simplification of a social issue to a binary choice isn't really helping. And there is no "binary choice" in what AI produces for writing and art at the moment. There is no OBVIOUS answer and no right or wrong answer -- just people saying "I like this one better."

>I imagine a true AI would know the scientific method and execute it perfectly.

You don't seem to understand how current AI works. It throws in a lot of random noise and data so it can come up with INTERESTING results. An expert system is one that is more predictable. A neural net adapts, but needs a mechanism to change after it adapts -- and what are the priorities? What does success look like?

Science is a bit easier than social planning I'd assume.

4

Vince1128 t1_j7526vu wrote

An AI is influenced by its creator, in this case the human race. An incorruptible AI is something from the movies, impossible in our reality by definition. 100% objectivity is not achievable either, and if the AI could achieve it, it would include every one of us in the group of liars, evil people, or whatever you want to call it, because it would be judging us against something unreal.

3

FacelessFellow t1_j758am3 wrote

But what about AI made by AI made by AI? Would the human influence still be there?

It sounds like you're saying computers and computer software will never evolve past what we have now. We don't even understand gravity yet; maybe future computers/software will be unimaginable.

0

demonicneon t1_j74prq5 wrote

A completely objective AI kind of scares me tbh. So much of human life is in the subjectivity and nuance.

2

FacelessFellow t1_j74utrs wrote

I can’t wait for AI to be able to tell us (objectively) which humans are trash. I could be on that list.

It will help uneducated voters to never vote against their own interests again 👍🏽 the politicians of the future will literally not be able to lie, because the AI will tell us the truth.

1

Chase_the_tank t1_j75y3g1 wrote

>I’m sure that true AI will have a firm grasp of objective reality. Otherwise it’s not a very good AI.

Prompt: "Does Donald Trump weigh more than a duck?"

Actual answer by ChatGPT: *I not have current information on the weight of Donald Trump, but **it is unlikely that he would be heavier than a duck**. Ducks typically weigh between 2-4 kg, while the average weight for an adult human male is around 77 kg.* [Emphasis added.]

>If we can program ai to be nothing but truthful, then it cannot be corrupted.

The ChatGPT greeting screen warns that the program "May occasionally generate incorrect information". Getting an AI to understand what is true and what isn't is an extremely difficult thing to do.

2

FacelessFellow t1_j75z2sr wrote

ChatGPT is not true AI, though, is it? People keep saying we don't have true AI yet.

1

[deleted] t1_j73y8gl wrote

You have to be joking! The bias that has been baked into this AI is overwhelming.

12

__OneLove__ t1_j741xyx wrote

Some just don't get it. I don't think most are even vaguely aware of just how many AI projects have been cut/canceled due to the fact that ultimately 'we humans are training them' and therefore AI (at least currently) suffers from the same human traits @ this juncture. AI is moving fast & I fear too many are jumping on the AI bandwagon in full force prematurely, IMHO. ✌🏽

8

Fake_William_Shatner t1_j749s9d wrote

>The bias that has been baked into this AI is overwhelming.

You can fix these sorts of data models. It's likely SEEING the bias already in the system and not thinking like a human to obscure the unpleasantness.

1

__OneLove__ t1_j74cv72 wrote

Hmmm...who exactly is 'fix[ing] these sort of data models'? 🤔

2

Fake_William_Shatner t1_j74gyii wrote

Um, the people developing the AI.

To create art with Stable Diffusion, people find different large collections of images to get it to "learn from" and they tweak the prompts and the weightings to get an interesting result.

"AI" isn't just one thing, and the data models are incredibly important to what you get as a result. A lot of times, the data is randomized at it is learned -- because order of learning is important. And, you'd likely train more than one AI to get something useful.

In prompts, one technique is to choose words at random and have an AI "guess" what other words are there. This is yet another "type of AI" that tries to understand human language. Lot's of moving parts to this puzzle.

People are confusing very structured systems, with Neural Nets, Expert systems. Deep Data, and creative AI that use random data and "remove noise" to approach many target images. The vocabulary in the mainstream is too limited to actually appreciate what is going on.

−1

__OneLove__ t1_j74pyql wrote

Respectfully, smoke & mirrors imo...

TLDR;

Um, the people developing the AI. 🤦🏻‍♂️

2

Fake_William_Shatner t1_j77i4ch wrote

>TLDR;

It's really a shitty thing about reddit that the guy who makes that comment gets more upvotes than the person attempting to explain. "Smoke and mirrors" -- how about saying which aspect of this you think that applies to? Be specific about the situation where they used AI to determine choices in business, society, planning. These are all different problems with different challenges, and there are so many ways you can approach them with technology.

And this concept that "AI does this" really has to go. AIs are more different in their approaches than people are. They are programmed AND trained. There's a huge difference between attempts to simulate creativity, attempts to provide the most accurate response, and attempts to make predictions about cause and effect. The conversation depth on this topic is remedial at best.

AI can absolutely be a tool here. It just takes work to get right. However, the main problem is the goals and the understanding of people. What are they trying to accomplish? Do they have the will to follow through with a good plan? Do the people in charge have a clue?

0

__OneLove__ t1_j77m3wj wrote

Look, don’t take it personally, ultimately, you’re stating ‘people’ (known to be naturally prone to bias) are going to ‘program the bias’ out of AI (speaks for itself imo). That was exactly the point I was making & apparently other sub members agree. Simply put, its such a poor argument imo, to the point that I am not willing to sit here & read paragraphs of text to the contrary. I don’t state that to offend you (whom I don’t know), I’m just keeping it 💯 from my perspective. You are obviously entitled to your opinion as well, hence my keeping my response short/succinct vs. trying to convince you otherwise.

At a minimum, I might suggest not taking these casual internet discussions with strangers so personally. Nothing more then a suggestion…

Peace ✌🏽

1

Fake_William_Shatner t1_j77rmew wrote

>vs. trying to convince you otherwise.

Yes, that would require you to know more about what you are saying. "Succinct" would require you to actually connect your short observation to SOMETHING -- what you did was little more than just say "Not true!", and people didn't like my geek answer and how it made them feel, so you got the karma. I really don't care about the karma; I care about having a decent conversation. I can't do that with "smoke & mirrors" when I could apply it to at least a dozen different aspects of this situation, and I have no idea what the common person thinks. And the idea that people have one point of view at a time -- that's foreign to me as well.

>At a minimum, I might suggest not taking these casual internet discussions with strangers so personally.

Oh, you think my observation about "this is a shitty thing" is me being hurt? No. It's ANNOYING. It's annoying that ignorant comments that are popular get upvotes. Usually I'm cracking jokes and sneaking in the higher concepts for those who might catch them -- because sometimes that's all you can do when you see more than they seem to.

I could make a dick joke and get 1,000 karma and explain how to manipulate gravity and get a -2 because someone didn't read it in a textbook.

However, the ability for people to think outside the box has gotten better over time, and it's not EVERYONE annoying me with ignorance, just half of them. That's a super cool improvement right there!

0

__OneLove__ t1_j77tajo wrote

Please, by all means, keep both proving my point & justifying my unwillingness to engage with this passive-aggressive drivel 🙂

...and yet this 🤡 continues to wonder/question why he warrants downvotes 🤔🤣✌🏽

1

Fake_William_Shatner t1_j78j1zv wrote

>why he warrants downvotes

Some people seem to think up and down votes prove the quality of the point being made. No, it's just the popularity in that venue at a given moment.

You could always explain what your comment meant. You don't have to, though. It's important not to take these comments too seriously. But, if you keep commenting on everything else BESIDES what you meant by "smoke and mirrors" then I will just not worry.

I have to commend you however on some top notch emoji usage.

1

__OneLove__ t1_j78jt4s wrote

Take care of yourself & have a nice life internet stranger. In the interim/simply put, I am blocking you. ✌🏽

1

JenMacAllister t1_j73zk7c wrote

It's easy to program out the bias. We have seen just how hard that is to do with humans. (over and over and over ....)

−3

__OneLove__ t1_j74d80j wrote

So who exactly is 'program[ming] out the bias'? 🤔

7

[deleted] t1_j740ayi wrote

Yes, you are technically correct. But around half of society lives in a place where feelings are more important than facts. Remember the AI that was profiling potential criminals? Well, that feely segment of society didn't like the factual outcome, and the AI was pulled. You will never get an objective outcome while feelings beat hard facts.

2

Fake_William_Shatner t1_j74ao1i wrote

>Remember the AI that was profiling potential criminals?

Oh, it doesn't sound like you are the "rational half" of society either.

I can definitely predict the risks of who will become a criminal by zip code. Predicting crime isn't as important as mitigating the problems that lead to crime.

Feelings are important. If people feel bad, you need to convince them, or, maybe have some empathy.

It's not everyone being entitled. Some people don't feel any control or listened to. And the point of not having "bias" is because cold hard logic can create bias. If for instance, you ONLY hire people who might 'fit the culture in tech support' -- then the bias would inherently look at who already has tech support jobs and who already goes to college for it. So, you have more of those demographics and reinforce the problem.

It's not necessarily LOGIC -- it's about what you are measuring and what your goals are. What is the "outcome" you want? If you go ONLY on merit, sometimes you don't allow people who didn't yet have merit to gain skills. Kids whose parents went to college do better in college -- so, are you going to just keep sending the same families to college to maximize who logically will do better? No. The people enjoying the status quo already have the experience -- but what does it take to get other people up to speed? Ideally, we can sacrifice some efficiency now for some harmony. And over time, hopefully it doesn't matter who gets what job.

Society and the common good are not something we are factoring in -- and THAT looks like putting your finger on the scale.

1

[deleted] t1_j74hh9v wrote

Cancel the AI project, some dude on reddit can predict by zip codes. Well, I guess that one is done! (joking!)

Feelings are important? Yes they are, and that is why we should have real humans, with real families and real-life experience, acting as judges and juries; my reasoning follows.

But the tech sector DOES employ people who fit the culture, just not in the way you suggest. Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about trans issues, feel the same about gun control, feel the same about Christianity, feel the same about abortion.

THIS is the key problem: the AI is being developed and maintained exclusively by this group. Let's say they make up half of the population -- where does that lead?

I feel AI is incredible but I really think it needs to be given bounds, building better mouse traps (or cars, planes, energy generation, crop rotation etc, etc) NOT making decisions directly for human beings.

−1

Fake_William_Shatner t1_j77j8u5 wrote

>Take a wild guess on how many people employed in Silicon Valley who vote the same way, who feel the same about Trans issues, who feel the same about gun control, who feel the same about Christianity, who feel the same about abortion.

They vote the way educated people tend to vote. Yes -- it's a huge monoculture of educated people eschewing people who ascribe light switches to fairy magic.

>THIS is the key problem,

No, it's thinking like yours that is the key problem when using a TOOL for answers. Let's say the answer to the Universe and everything is 42. NOW, what do you do with that?

>NOT making decisions directly for human beings.

That I agree with. But not taking advantage of AI to plan better is a huge waste. There is no putting this genie back in the bottle. So the question isn't "AI or not AI"; the question is: what rules are we going to live by, and how do we integrate with it? Who gets the inventions of AI?

It's the same problem with allowing a patent on DNA. The concept of the COMMON GOOD and where does this go in the future has to take priority over "rewarding" someone who owns the AI device some geek made for them.

1

JenMacAllister t1_j741tz5 wrote

Yes it did. Anything created by humans will contain the biases of those humans. However, others will recognize this and point it out so it can be removed in future versions.

I don't expect this to be 100% non-biased on the first or even 100th version. I do not think all the humans on this planet could even agree on what that would mean.

But over time I'm sure we could program an AI to be far less biased than any human, and most humans would agree that it was.

−1

ElectroFlannelGore t1_j7428du wrote

Cue (or queue... they both sort of work) people reacting emotionally because of worry about their own indiscretions.

I approve of AI making suggestions and having them reviewed, with the option to override by human judges.

2

JenMacAllister t1_j743q7c wrote

I agree, the same way doctors would use AI to diagnose patients, because the AI could access the entirety of human medical knowledge to make its suggestions. No reason why lawyers and judges could not do the same right now.

Over time the AI could earn more and more trust to where we might give up on those people and listen to the AI.

3

Fake_William_Shatner t1_j74bt45 wrote

Yeah, the computer doesn't "forget," so having such a thing get you at least 90% of the way there is useful whether or not it can do 100% of the job.

1

Fake_William_Shatner t1_j748c38 wrote

This is like the debate about "getting to choose your best doctor" when you can't afford Health Insurance.

So much worry about "bad legal advice" when that could be tested and the bugs worked out. Not enough worry about all the people who plead out and forgo all legal advice because MONEY.

2

Bigdongs t1_j75vugc wrote

Nice source /s

2

herpderpomygerp t1_j74tjkt wrote

We are currently at a point where some AIs are sexually harassing people, some AIs are vegan, some AIs are sexist... I don't have faith in humans, but, like, the AI is doing a great job without sentencing people.

1

OldTimeyMedicine t1_j7502if wrote

Ask these ethical questions to the chatbot. Problem solved.

1

peabody t1_j75ytxc wrote

There's a part of me that's okay with this, provided an actual person is curating the final work. We use spell check, grammar check, and thesauruses to improve our writing. I'm okay with people "bouncing" things off an AI.

I think the issue I have is if people start going on "autopilot" and just start accepting the first thing the AI churns out. And how do we provide the oversight necessary so that doesn't happen?

1

pleasekillmerightnow t1_j768ldp wrote

Why? AI is not going to make the final decisions on things, is it? Is it unethical now to Google things to make decisions? AI would be less biased than a corrupt, racist judge.

1

Evn-erl t1_j76o5ss wrote

After playing around with ChatGPT for a few weeks... I am no longer super impressed with it.

It has major limitations with the kind of complexity required for any sort of high-level work.

Sure, it can write you some super basic, elementary-level summaries, content, and code. But that's kinda where it ends.

1

Vaniakkkkkk t1_j75h5jd wrote

So quick to voluntarily turn off brains? Wow.

0

lightknight7777 t1_j75ixm4 wrote

Dude, it just generates the shell. You read and alter it as necessary. This is such a dumb thing to worry about.

0

Jaysnewphone t1_j747e91 wrote

Lawyers are afraid because machines are coming for their jobs this time. It will be fine; I hear McDonald's is hiring, and there are 3 full-time positions available at my local Home Depot.

−1

Fake_William_Shatner t1_j74chth wrote

Lawyers will just sue anyone who tries to use AI into oblivion -- problem solved. It won't get the opportunity to prove itself for "reasons."

I mean, if we had universal healthcare / Medicaid 4 All -- you no longer need a lot of people in the insurance industry. A lot of bill collectors and accountants. You also lose about half of the attorneys making a living on personal injury.

You'd still have corporate law. But, behind the scenes, everyone would be taking advantage of AI to create documents, file motions, find relevant laws and subpoenas and the like.

3

CallFromMargin t1_j75ajwz wrote

Lawyers will be among the last to be replaced. They have provisions baked into the laws about needing a licensed practitioner, and they will argue that only applies to humans.

3

Commotion t1_j74d6a7 wrote

People who think this current generation of AI threatens attorneys’ jobs don’t understand what attorneys do.

2

Jaysnewphone t1_j77t0d5 wrote

Give it 10 or 20 years.

1

Commotion t1_j77vhbn wrote

It might happen. But it will require truly human-like AI. A kind of AI that could also practice medicine or software engineering, or pretty much any professional work.

I hope we get there. It would be a triumph of humanity if people no longer “need” to work. The challenge is ensuring the benefits of the technology are equally distributed in society.

1

Black_RL t1_j74e0zp wrote

It’s hard when it’s your time.

Can’t stop progress though, and I have little empathy for lawyers.

Soulless machines replacing soulless people.

−1