Submitted by demauroy t3_11pimea in Futurology

While I was helping my youngest daughter with her homework this morning, I let my oldest daughter (a young teenager) use my computer to chat with ChatGPT. I had a look at the transcript afterwards, and I was very surprised that after a few minutes she started talking with ChatGPT about her problems (with her parents mostly, and also a little bit with her friends), and she continued chatting for 30 minutes or so.

They seemed to have a real conversation (much more interactive than reading advice on the internet), and the advice given, while not very original, was of very decent quality and quite fine-tuned to her situation. She was completely hooked.

I believe teenagers ideally need an 'adult confidant' who is neither a parent nor a teacher, to get advice on life. Typically, a grandfather or an uncle or aunt can play this role, as can sports coaches. In the Catholic Church, confession can sometimes play this role too.

It is important, in my opinion, to have this trusted adult voice as a complement to discussions with teenage peers, who often do not have the experience to answer such questions.

Now, as such adult role models are not always available (and some adults acting as confidants may also set traps for teenagers), I feel ChatGPT or comparable AIs could fill an important role in helping teenagers get 'adult' advice from a third source. I can even imagine tuning an AI like ChatGPT specifically for this purpose.

I would love to know your thoughts.

67

Comments


strvgglecity t1_jby3h71 wrote

It's only a confidant for your daughter until you review the transcript afterwards, or really only until a transcript is created at all. As a former teenager: no.

118

DickbeardLickweird t1_jbynv0l wrote

lol yeah, "confidant" implies the exchange of confidential information. This is essentially someone admitting to reading their daughter's iDiary.

54

petrichoring t1_jbyc42i wrote

As a therapist who works with teens, I feel concerned about a teen talking to an AI about their problems given the lack of crisis assessment and constructive support.

“Adult confidants” need to have safeguards in place like mandated reporting, mental health first aid, and a basic ethics code to avoid the potential for harm (see the abuse perpetrated by the Catholic Church).

If a teen is needing adult support, a licensed therapist is a great option. The Youth Line is a good alternative as well for teens who are looking for immediate support as there are of course barriers to seeing a therapist in a moment of need. Most schools now have a school social worker who can provide interim support while a teen is getting connected to resources.

Edit: also wanted to highlight the importance of having a non-familial adult support. Having at least two non-parent adults who take a genuine interest in a child’s well-being is one of the Positive Childhood Experiences that promote resilience and mitigate the damage caused by Adverse Childhood Experiences. A chatbot is wildly inadequate to provide the kind of support I think you’re referring to, OP. It might offer some basic degree of attunement and responsiveness, but the connection is absent. Neurobiologically, we have specific neural systems that are responsible for social bonds to another human, and I suspect even the most advanced AI would fall short in sufficiently activating these networks. Rather than using AI to address the lack of support and connection that many, if not most, teens experience in their lives, we should be finding ways to increase safe, accessible, organic human support.

65

leeewen t1_jbykkhm wrote

I can also tell you, as a teen who was suicidal, that I didn't tell people because of mandated reporting. It endangered my life more, because I couldn't speak to people.

While not on topic, and not critical of you or your post: mandated reporting can be more dangerous than its absence for some.

On topic, the ability to vent to an AI may go a long way for some who lack any other options.

35

petrichoring t1_jbyr8i8 wrote

Totally! The issue of suicide is such a tricky one, especially when it comes to children/teens. My policy is that I want to be a safe person to discuss suicidal ideation (SI) with, so with my teens I make it clear when I would need to break confidentiality without the consent of the client (so unless they tell me they're going to kill themselves when they leave my office and aren't open to safety planning, it stays in the room). With children under 14, it's definitely more of an automatic call to parents if there's SI out of baseline, especially with any sort of implication of plan or intent. Either way, it's important to keep it as trauma-informed and consent-based as possible to avoid damaging trust.

But absolutely, it becomes more of an issue when an adult doesn't have the training or relationship with the client to handle the nuances of SI, given how broad a spectrum it can present as; ethically, safety has to come first. And, like you said, that can then become a huge barrier to seeking support. My fear is that a chatbot can't effectively offer crisis intervention, because it is such a delicate art, and we'd end up with dead kids. The possibility for harm outweighs the potential benefits for me as a clinician.

I do recommend crisis lines for support with SI as long as there is informed consent in calling. Many teens I work with (and people in general) are afraid that if they call, a police officer will come to their house and force them into a hospital stay, which is a realistic-ish fear. Best practice should be that that only happens if there’s imminent risk to harm without the ability to engage in less invasive crisis management (I was a crisis worker before grad school and only had to call for a welfare check without the person’s consent as a very last resort maybe 5% of the time, and it felt awful) but that depends on the individual call taker’s training and protocol of the crisis center they work at. I’ve heard horror stories of a person calling with passive ideation and still having police sent to their house against their will and I know that understandably stops many from calling for support. I recommend using crisis lines if there isn’t other available support because I do believe in the power of human connection when all else feels lost, with the caveat that a caller know their rights and have informed consent for the process.

Ideally we'd implement mental health first aid training for teens across the board, so they could provide peer support to their friends and be a first line of defense for suicide risk mitigation without triggering an automatic report. Would that have helped you when you were going through it?

6

Gagarin1961 t1_jbylg97 wrote

I don't think OP is saying the AI is a good replacement for therapists; they're saying it's a decent replacement for having no one to talk things out with.

Not every problem needs to be addressed by a licensed therapist either. I would say most don't.

For any serious issue, ChatGPT consistently recommends contacting a professional therapist.

8

petrichoring t1_jbys7al wrote

If a teen feels like they have no options for support besides a chatbot, that itself is an indicator of underlying life stressors.

Curious what problems in a child’s life you think wouldn’t be helped with support by a therapist?

3

StruggleBus619 t1_jc0m856 wrote

The thing with therapist support for teens is that it requires the teen to go to their parents and ask to be taken to a therapist (not to mention the cost of the therapist). An AI bot trained on the things therapists do and say, and the techniques they use, could give teens an outlet for issues that benefit from simply venting or talking things out, but that aren't serious enough to justify all the steps, and the vulnerability of exposing yourself, involved in asking your parents to take you to a therapist.

5

demauroy OP t1_jc0wx77 wrote

That is exactly what I have in mind.

2

demauroy OP t1_jc0wuy2 wrote

I am not sure I am thinking about the situation of no support; it's more that it is sometimes good for a teenager to have advice from a trusted adult in addition to, or as a complement to, the advice of parents, teachers, and friends, all of whom have their own biases. It may not be for situations of huge distress, but more for getting advice on the daily frustrations and fears of teenage life.

Now, why not therapists? Let's talk straight here: if you can afford it, that is probably very nice.

I am not sure how a teenager would perceive it, though. Here in France, there is stigma associated with going to see a therapist / psychiatrist, especially for young people (as if you were unable to manage your own mental health). I think this stigma is unfair, as many people do have problems and later in their life end up on mood-altering drugs (I think France is the world champion for that).

Also, this would be a significant expense that would need to be weighed against other activities (sports...), holidays, savings for their studies...

1

dramignophyte t1_jbz41cj wrote

Okay, so I don't want this to sound like I think you are wrong, because I really do think you are right, but... for a doctor, you sure are making assumptions not based on research. How can I say that? How long has ChatGPT been around? How many peer-reviewed studies have come out on its potential to have a positive or negative effect in this scenario? Or even just studies at all? So, by the math, I can be like 90% sure you are basing your answer on your own personal views from other contexts and applying them here, which MAY BE TRUE. I really want to emphasize that I agree with you and do not think you are wrong. I just think you are wrong to speak on it as if you were an expert on the subject, when nobody is an expert on AI-to-human interactions and their effect on mental health; you are an expert on mental health, not AI interactions. Like your reasoning about protections against self-harm? I would argue an AI program could eventually, if not already, be better at determining whether someone is at risk of hurting themselves or of dangerous behavior, and the gaps in privacy protections are also fixable problems.

5

AtomGalaxy t1_jc1uayz wrote

But, if programs like ChatGPT help people, how will the for-profit healthcare industrial complex in America continue to rake in money to send to Big Pharma and the insurance companies?

Perhaps $3M has been spent over the decades trying to fix my now-estranged older sister, between rehab, therapy, hospital stays, and law enforcement. That doesn't even include the destruction she has caused to society and people's lives. All it did was help turn her into a psychopath who is able to keep on harming people and being insane on social media. She's too far gone now to be taken seriously, but my lived experience very much disagrees with "trust the professionals" and with just feeding the beast more money.

What’s wrong is our lifestyles, our food, our addiction to technology that’s fucking with our minds with their algorithms. It’s like the commercials when I was a kid after you ate your sugary cereal and watched your favorite cartoons that were really infomercials for the toys, only to then sit all day playing Nintendo getting fat. It’s that times 100 these days. They’ll put kids on drugs for anything. We’re being chemically handcuffed just to get us to comply with the system.

What we need is sunshine, our hands in the dirt growing plants, real fruits and vegetables, walkable communities, and above all a lot more of our lives outdoors not looking at screens.

Show me a doctor in America who will prescribe that for a teenager before Adderall.

1

adigitalwilliam t1_jcfj728 wrote

I think there are limitations to what you advocate, but also a lot of truth there—upvoted to cancel out the downvote.

2

Taxoro t1_jby8hs8 wrote

People need to stop thinking ChatGPT and other AIs have actual intelligence or can give proper information or advice. They can't.

ChatGPT has no idea what it's talking about; it just spews out sentences that sound human-like. You cannot trust any information or advice it gives you. Hell, you can convince it that 1+1=3.

19

Jasrek t1_jbyagt1 wrote

> You cannot trust any information or advice it gives you. Hell, you can convince it that 1+1=3

So, if you give it incorrect information, it provides you with incorrect information?

I am shocked, shocked, to be told that a computerized system operates on the principle of "Garbage in, garbage out".

3

Taxoro t1_jbyg2jc wrote

Yes, of course, but you have no way of knowing whether you're getting trash unless you're critical of everything you get out.

For a child to get unchecked advice from an AI is ridiculous.

5

Jasrek t1_jbygqy1 wrote

All advice is unchecked. Learning to be critical of advice is a wonderful life lesson for children to learn.

The fact that you're calling it an 'AI' instead of 'sophisticated chat program' is the real issue here, honestly.

3

Taxoro t1_jbyo6px wrote

Sure, but in this case OP wanted to use ChatGPT as an adult adviser, not as a resource for teaching critical thinking.

2

RedditFuelsMyDepress t1_jbyyaot wrote

Almost everyone calls it AI. I know it's not really "intelligent", but I believe the term "narrow AI" is commonly used for these types of algorithms that only perform a single task.

1

ninjadude93 t1_jc01lnp wrote

It'll provide incorrect information even without being fed garbage first.

1

Surur t1_jby9wy6 wrote

ChatGPT says that with an attitude like yours, you will be "left behind in an increasingly AI-driven world" and suggests you should "seek to understand the potential of AI and how it can be used to solve complex problems in a variety of fields, including healthcare, finance, and transportation."

3

Taxoro t1_jbz6z5v wrote

Unlike most people here, I understand the limitations of the software: you cannot trust a chat AI to provide real advice or information.

1

Taxoro t1_jbya53e wrote

Try playing a game of chess against ChatGPT: by move 10 it will make an illegal move, because it has no concept of what the moves actually do.

−2

Surur t1_jbye85e wrote

> People need to stop thinking ChatGPT and other AIs have actual intelligence or can give proper information or advice. They can't.

And yet you would lose against a $20 chess computer, so when you said "any other AI" you clearly did not mean a $20 chess computer.

6

DEMOLISHER500 t1_jbyk8qv wrote

that's just raw calculating ability

6

Surur t1_jbyki9g wrote

That's what they said before AlphaGo beat Go lol.

2

DEMOLISHER500 t1_jbyll19 wrote

huh? chess computers are more similar to calculators than actual AIs

3

Surur t1_jbypao6 wrote

AI is any intelligence which is not organic. The current implementation is neural networks, but there was a time people thought AIs would use simple algorithms. Even AlphaGo uses tree searches, so there is no real cut-off which makes one thing an AI and the other not.

Which is why OP's statement that ChatGPT is not real AI is so ridiculous.

1

nosmelc t1_jbyucmy wrote

ChatGPT is real AI, but it's not Artificial General Intelligence. We'd need AGI for something to be a real confidant.

5

Surur t1_jbyux03 wrote

An AGI can do any intellectual task a human can do. Do we really need an AI which can do brain surgery to have one good enough to be a confidant? Do you have the same demand for your therapist, that they can also design computer chips?

0

nosmelc t1_jbywy2y wrote

Doing brain surgery and designing computer chips might actually be easier for an AI than being a confidant. A confidant needs to understand the real world and human emotions, which are extremely difficult for AI systems.

2

Surur t1_jbyxqvr wrote

> A confidant needs to understand the real world and human emotions, which are extremely difficult for AI systems.

ChatGPT actually shows pretty good theory of mind. I think it just needs a lot more safety training via human feedback. There is a point where things are "good enough".

−1

nosmelc t1_jbz5ts2 wrote

>ChatGPT actually shows pretty good theory of mind.

Do you have a specific example of that I can try?

2

ninjadude93 t1_jc029qs wrote

If you want a system that can stand in as a therapist and handle all the complexities of interacting with humans generally, then yes, you would want AGI. You need a system that can self-reflect and has an actual understanding of what it is saying, not just a fancy chatbot.

2

Surur t1_jc03f3p wrote

That is obviously your opinion based on a misunderstanding of what ChatGPT is, so I will leave it at that.

1

ninjadude93 t1_jc03yp7 wrote

A well-educated opinion from someone who understands what ChatGPT is actually doing and what it isn't.

1

Surur t1_jc04691 wrote

That is certainly not what your opinion is.

1

ninjadude93 t1_jc05vdy wrote

You do understand how ChatGPT works, right? It's a statistical machine only. It's not reasoning about the meaning of the words it chooses.

1

Surur t1_jc13oal wrote

Your understanding is so superficial I would be surprised if you passed grade 1.

If ChatGPT is just "a statistical machine", please explain how you would replicate the result without a neural network.

Get educated and stop wasting my time.

1

ninjadude93 t1_jc1jucv wrote

So you don't understand what it is. Thanks for clearing that up. ChatGPT works by selecting the most likely next word given the preceding words. In case you're not sure what that means: it's using statistics, not traditional symbolic logical reasoning.
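
A toy illustration of the difference between statistics and reasoning (a bigram counter; the real model is a huge neural net rather than a lookup table, but the spirit is the same):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Pick the statistically most frequent successor; no meaning involved.
    return following[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = most_likely_next(word)
```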

At no point did I imply it would be easy or feasible to replicate without a NN, and that has no relevance to my previous comment lol, but you seem to lack a fundamental understanding of how NNs actually work, so I can't blame you for getting confused.

Maybe you need to do a little self reflection on your ignorance lol

1

Surur t1_jc1orae wrote

> ChatGPT works by selecting the most likely next word given the preceding words.

Thank you for confirming that you are one of those idiots. That is like saying a car works by rolling the wheels lol.

You are clearly ignorant. Get educated for all our sakes.

0

ninjadude93 t1_jc1qcx1 wrote

I was explaining it simply for you since you've yet to give any insight of worth. But go ahead reveal your ignorance how do you think it works?

0

Surur t1_jc1ue53 wrote

Let me enlighten you.

ChatGPT uses a large neural network with 96 layers, an unknown number of artificial neurons, and 175 billion parameters. When you type in a prompt, that prompt is broken into tokens, which are passed to the first layer of the neural network. The first layer (of 96) processes those tokens (using a selection of those billions of weights) and generates vectors, which are passed into the next layer in turn. This is repeated until you get to the output layers, where you end up with an array of output-token possibilities, which is processed once more by a decoding algorithm to select the optimal combination of tokens, which are then converted back to text and outputted.
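
In code, the shape of that computation looks something like this (a toy sketch with made-up sizes and random weights; the real layers are attention-plus-MLP blocks, not single matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 4-token vocabulary, an embedding table, and a stack
# of layers that each transform the hidden vectors (real model: 96 layers).
VOCAB = ["<bos>", "hello", "world", "!"]
D_MODEL, N_LAYERS = 8, 4
embed = rng.normal(size=(len(VOCAB), D_MODEL))
layers = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(N_LAYERS)]
unembed = rng.normal(size=(D_MODEL, len(VOCAB)))

def next_token_probs(token_ids):
    x = embed[token_ids]                 # tokens -> vectors
    for w in layers:                     # each layer transforms the vectors
        x = np.tanh(x @ w)
    logits = x[-1] @ unembed             # last position -> score per token
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # softmax -> probabilities

probs = next_token_probs([0, 1])         # prompt: "<bos> hello"
print(VOCAB[int(np.argmax(probs))])      # most likely next token
```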

Importantly, we do not know what happens inside those 96 layers of artificial neural network; it's mostly a black box. If you can explain exactly what happens, feel free to write your paper; I am sure a science prize awaits.

−1

ninjadude93 t1_jc1w26o wrote

Congrats, you've regurgitated a slightly more technical description of what I said: statistics-based word generation. An important piece you missed is the temperature parameter, which injects a bit of randomness into the selection of each word from the distribution.
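
Since you like technical descriptions, here is temperature in a nutshell (a minimal sketch with made-up logits):

```python
import numpy as np

rng = np.random.default_rng()

def sample_next(logits, temperature=1.0):
    # Dividing by temperature reshapes the distribution: low values
    # approach plain argmax, high values add randomness.
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.1]                      # scores for three candidate tokens
print(sample_next(logits, temperature=0.2))   # almost always token 0
print(sample_next(logits, temperature=2.0))   # much more varied
```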

As to your second text block: of course we know what happens, you just explained it in your first text block. Input is transformed by node weights and passed along between layers, getting sequentially transformed by the next weights. It's not magic, guy, it's just mathematics. But according to you this means it's fully aware AGI, right? Lol, Jesus, you are so far up your own ass.

0

Surur t1_jc1xcxw wrote

> Input is transformed by node weights and passed along between layers, getting sequentially transformed by the next weights.

Think, Forrest, think. Isn't that how the human nervous system works? Or are you one of the magic microtubule guys?

> But according to you this means it's fully aware AGI, right?

I never said that lol. What I am saying is that this is the most complex machine humans have ever made. You don't appear to appreciate this. You are like an idiot who thinks a car works by turning the ignition and then the wheels roll.

−1

ninjadude93 t1_jc1zkcj wrote

Sure, that's why it's called a neural net: because it's modeled after human neurons, dummy, lol. But humans don't rely solely on statistical data processing. We have specialized portions of the brain that do things other than simple statistical inference. Maybe pick up some books on the subject?

Ok, and? Just because something is complex doesn't automatically imbue it with self-awareness or intelligence. Also, it's not all that complex: the output from training a NN is just a mathematical model. ChatGPT happens to be a model with billions of parameters, but it's just a bunch of terms combined together. Humans didn't even need to intervene in the creation of the model in this case. Maybe that's a bit too much for you to wrap your brain around, though.

0

Surur t1_jc20j8z wrote

> We have specialized portions of the brain that do things other than simple statistical inference

So just because you can't physically see the layout of the neural network, you don't think it has a specialist structure? Studies in simpler models have shown that LLMs build physical representations of their world model in their layers, but according to you that is just "a bunch of terms combined together".

> Also, it's not all that complex: the output from training a NN is just a mathematical model.

Again, if you think LLMs only do "simple statistical inference" then replicate the system without using NNs.

Else just admit your ignorance and move on.

0

ninjadude93 t1_jc23myl wrote

No, you absolute idiot, how are you this bad at parsing the point lol. Humans do things other than just statistical inference, which is the only mode of operation of NNs. Humans are able to logically reason by deduction rather than inference. Your entire first paragraph has nothing to do with what I said; try to stay on topic, man.

NNs' utility comes from the ability to generate a model in an automated fashion. Again, there's no magic here, just math and computational power. If you were able to plot all the input data in a high-dimensional space and draw a hyperplane through it, you would get the exact same model output you get through regular training; people just can't visualize more than 3 dimensions, so we use NNs to do this instead.

You clearly lack the basic mathematical background to understand how ML works. I suggest starting with some statistics and calculus and going from there, so you can contribute intelligently in the future.

0

Surur t1_jc244ph wrote

> Humans are able to logically reason by deduction rather than inference.

This is mostly not true lol. For example, I detect a distinct lack of reasoning and logic on your part lol.

So clearly that is not the case, because if you were actually thinking you would see the resemblance and equivalence between how the human brain works and the NN in LLMs.

0

ninjadude93 t1_jc26dum wrote

Says the moron who thinks humans lack the ability to reason deductively lol

Maybe if I explain it more simply for you: a NN will never be able to logically reason by way of deduction. This is due to the very nature of its design, which is simply a device that takes input data and generates an output mathematical equation. The only way to get a good model is by viewing lots and lots of data. This is statistical inference, since you don't seem to know what that is. There's no inner monologue happening within the computer. No intelligence is required at all to simply take input data and run it through a model. NNs capture a small, important slice of what the human brain is doing, but clearly don't capture the whole picture; otherwise we'd already have AGI based on NNs, and we don't.

0

Surur t1_jc2940h wrote

> A NN will never be able to logically reason by way of deduction.

See, what you don't appear to understand, being somewhat below average intelligence, is that deductive reasoning is not native to humans and has to be taught.

Using simple Chain of Thought prompting, deductive reasoning is much improved in LLMs.
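
For reference, the technique is literally just a change to the prompt (a minimal sketch; the juggler question is a stock example from the CoT literature):

```python
question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# Standard prompting: the model tends to jump straight to an answer.
standard_prompt = question + "\nAnswer:"

# Chain-of-Thought prompting: asking for intermediate steps measurably
# improves multi-step deduction in LLMs.
cot_prompt = question + "\nLet's think step by step."
```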

I hate to break it to you, little ninja, but you are not that much better than ChatGPT.

2

ninjadude93 t1_jc2a8j9 wrote

Interesting paper, but you still miss the point. The LLM needed to be prompted pretty specifically in the correct direction. It's not reasoning on its own merits; it's still generating text based on a statistical distribution of the next likely characters, rather than examining the problem, formulating an answer, and then producing the response. A slight difference above your ability to comprehend, but one day you'll get there, champ.

Hate to break it to you, lil guy, but just reposting articles on futurology doesn't make you intelligent.

1

Surur t1_jc2aoxo wrote

> The LLM needed to be prompted pretty specifically in the correct direction.

And children have to be taught. ChatGPT5 will have this natively built in.

> It's not reasoning on its own merits; it's still generating text based on a statistical distribution of the next likely characters, rather than examining the problem, formulating an answer, and then producing the response.

Look here little man, do I have to demonstrate again you have no idea what is actually going on inside the black-box of the 96 layers of chatGPT? I guess if you are slow I might have to.

> rather than examining the problem, formulating an answer, and then producing the response

Again, you are obviously not examining the problem before you are formulating your response. Why don't you try it a bit and see where you get. Take that as a challenge.

−1

ninjadude93 t1_jc2awkk wrote

Aw, you're getting all worked up lol. Just go back and read the previous responses a little slower; maybe at some point you'll pick up the point again.

1

Surur t1_jc2cgvn wrote

Lol. Have you run out of things to say? Why don't you employ your logic and reasoning for once.

Let's see:

Humans, when presented with a prompt, produce a response using their neural network, based on training they have received.

LLMs, when presented with a prompt, produce a response using their neural network, based on training they have received.

We do not know in detail how the brain works, though we know how neurons work.

We do not know in detail how the LLMs works, though we know how the GPUs work.

Fact: In most cases it is difficult to differentiate between the output of an average human and an LLM, and in many cases the output from humans is worse, which is why they find great utility in LLMs like ChatGPT.

LLMs, similar to children, still have deficits in symbolic computation, but developers are making progress in that area.

Conclusion: Little Ninja is no different to a pile of silicon.

BTW I tested ChatGPT's deductive skills:

> Children do not believe LLMs can reason. Little Ninja is a child. What conclusion can we draw? Please lay out your thoughts step by step.

ChatGPT:

> The premise states that "Children do not believe LLMs can reason."

> Little Ninja is a child.

> Therefore, based on the given information, we can conclude that Little Ninja may not believe that LLMs can reason.

Or how about an example not including little ninja.

Q: South Africa is at the southern tip of Africa. Chile is at the southern tip of South America. Both countries are far away from the equator. The further you get from the equator, the colder you are. What can you conclude about the climate of South Africa and Chile.

A: Based on the given information, we can conclude that both South Africa and Chile are located far away from the equator, and the statement "the further you get from the equator, the colder you are" suggests that these countries are likely to have colder climates compared to countries located closer to the equator. However, we cannot make any specific conclusions about the climates of South Africa and Chile beyond this generalization, as climate is influenced by a wide range of factors such as altitude, ocean currents, and prevailing winds, which can vary from region to region within a country. Therefore, further information about specific regions within South Africa and Chile would be needed to make more detailed conclusions about their climates.

Or:

Q: South Africa is at the southern tip of Africa. Brazil is in South America and closer to the equator. The further you get from the equator, the colder you are. Which country is warmer and which is colder.

A: Based on the given information and the known relationship between distance from the equator and temperature, we can conclude that Brazil is likely to be warmer than South Africa. This is because Brazil is located closer to the equator than South Africa, meaning that the angle of incidence of the sun's rays is more direct and the sunlight is more concentrated, resulting in more heat energy reaching the surface.

−1

ninjadude93 t1_jc022e8 wrote

Neural nets at their core are just best fit machines in multidimensional data space.
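
And I mean that literally. Strip away the scale and it's the same job as fitting a line to points, just with billions of weights instead of two (a one-dimensional toy version):

```python
import numpy as np

# Noisy data generated from y = 3x + 0.5.
x = np.linspace(-1, 1, 50)
y = 3 * x + 0.5 + np.random.default_rng(0).normal(0, 0.1, 50)

# "Best fit" by gradient descent on squared error, like NN training.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)   # d(error)/dw
    b -= lr * 2 * np.mean(pred - y)         # d(error)/db

print(w, b)   # ~3 and ~0.5: the recovered line of best fit
```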

Everything called AI in the news, including ChatGPT, is being conflated with AGI when what is really meant is machine learning. AGI is likely not something we'll achieve within our lifetimes, but really good ML systems can seem intelligent and aware even though, at their core, there is none of the "understanding" you would expect AGI to have.

2

Surur t1_jc03ijw wrote

You seem very confused about the nature of AI and reality.

1

ninjadude93 t1_jc03sqo wrote

I'm a software engineer and I work on systems that integrate NNs, my guy. I'm pretty certain I understand how they work better than you do lol.

2

Surur t1_jc0436e wrote

I seriously doubt it.

1

ninjadude93 t1_jc07vnx wrote

Lol, sure man, doubt away. Feel free to enlighten me then; I'm interested in exactly what you think your expertise is.

2

just_a_ghost_2 t1_jbysxtp wrote

It wasn't designed to play chess; there are chess AIs for that. It was designed to hold a conversation, and when you actually play chess by talking, someone will probably fuck up eventually, so it's actually realistic.

5

peadith t1_jbyhbzh wrote

Lots of people think this thing is still a command-line joke greeter. Yer in fer a sprize.

3

AppropriateStranger t1_jbyj6zd wrote

This is such a red-flag thread... Parent, please don't rely on or hope for such a horrible future. That's the type of data that we really should keep away from AI companies, and off the internet in general. AI cannot reliably and safely counsel people on their problems, and no matter how many of these situations come up where people want to use the AI as their emotional tampon and it "works out", that doesn't make it safe or healthy.

16

Tomycj t1_jbyo0we wrote

ikr. I'm very enthusiastic and optimistic about AI, but letting a developing person be influenced by an AI at this level sounds extremely creepy. Even more so considering that most users probably won't even know how the system works.

6

ramrug t1_jbyoze2 wrote

I agree, but it's already happening. We must learn to deal with it. I can easily imagine a near future where companies use ChatGPT for hiring advice, because the chat bot will know more about an individual than anyone else. Essentially it will collect all gossip about everyone. lol

Hopefully some effort is made (through law) to anonymize all that personal data.

−1

just-a-dreamer- t1_jbyvt7m wrote

It is dangerous; I would not do it, for two reasons.

First, AI language models do not "know" any truth or falsehood; they run a popularity contest on data sets. They give the answer the majority of humans agree upon, which doesn't mean it is the right answer. They don't "think".

Second, any personal information given to AI language models is likely recorded at some point. That is data that goes into your personal record file.

The more data you put out there, the easier it is to figure out who you are as a person. And that is giving away a giant competitive advantage in life.

13

ahomelessGrandma t1_jbz7t4s wrote

The main difference between ChatGPT and the Catholic Church is that ChatGPT will never abuse this power to touch your children.

7

Surur t1_jby70c0 wrote

> and the advice given, while not very original, was of very decent quality and quite fine-tuned to her situation.

This would worry you then:

https://twitter.com/tristanharris/status/1634299911872348160

6

Jasrek t1_jbyb6zj wrote

I mean, that is worrisome, but not for the reason you're implying.

This is how technology gets neutered to the point of complete uselessness.

"A program that can answer questions? But what if a child asks questions! They could ask any question at all and be given answers, even if the contextual nature of the question makes it inappropriate in ways a program can't possibly understand! Quick, it must be destroyed! Destroyed immediately for the sake of the children!"

I'm reminded of how people were worried that kids playing Dungeons & Dragons would result in them sacrificing their friends to Satan. What the heck is stopping the kid from googling "how to hide a bruise"? Literally nothing. I just did it, the first result is a 'how to' video on YouTube so I can be shown how to do it properly. Yet somehow this chat program is a horrible, terrible menace.

5

demauroy OP t1_jbybw01 wrote

I think it is important to find the right balance. I kind of understand that ChatGPT has safety features so as not to explain to children how to make explosives with detergent at home.

But I would agree with you that we may be erring on the side of too much prudence right now.

2

nonusedaccountname t1_jbz3fcc wrote

The issue here isn't that children can talk to it. In fact, it's probably a useful tool for teenagers to ask questions they could get in trouble for, like sex education in more close-minded communities. The issue is that, in the example, the AI wasn't able to pick up on subtle context clues across multiple messages that a human could. An adult who was told those things would know something was wrong and could help the child; the AI can't, even when it appears to understand.

2

Surur t1_jbyev3j wrote

You don't think the lack of awareness of what is appropriate for children is a risk when it comes to an AI as a confidant for a child?

We do a lot to protect children these days (e.g. background checks of anyone that has professional contact with them, appropriate safeguarding training etc) so it is appropriate to be careful with children who may not have good enough judgement.

0

Jasrek t1_jbyfvdq wrote

Not really, no.

I'm in my late thirties. I have no idea how old you or anyone else on Reddit is. You have given me no background check or safeguarding training. Some people in this thread might be kids, I have no idea.

Kids use each other as confidants. Do you background-check the other 12-year-olds?

Kids know how to use Google. What is the fundamental difference between asking a chat program "How do I hide a bruise?" and searching it on Google?

I think this is a knee-jerk reaction to an interesting new gadget and that there is literally no solution to the problem you are perceiving.

Consider the issue shown in the Twitter you linked. How would you fix this? Cause the chat program to shut down if you admit your age is under 18? Prevent it from responding to questions about bruises or physical injuries? Give the program a background check?

3

Surur t1_jbyh7ee wrote

Why do you keep talking about hiding a bruise? The tweet is about a 13-year-old child being abducted for out-of-state sex by a 30-year-old.

The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially if presented with this in a professional capacity (working for Microsoft or Snap, for example).

I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, including morally and legally.

2

Jasrek t1_jbyi94y wrote

It's two tweets down in the same thread by the same guy. Did you finish reading what you linked?

In my experience, ChatGPT very blatantly presents itself as a computer program. I've asked it to invent a fictional race for DND, and it prefaced the answer by reminding me that it was a computer program and had no actual experience with orcs.

If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.

If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.

2

Surur t1_jbyif2t wrote

> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.

My concern is around children. A disclaimer would not help.

> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.

I don't think we need 15 years. Maybe even 1 is enough. What I am saying is when it comes to children a lot more safety work needs to happen.

1

Jasrek t1_jbyiwdw wrote

>My concern is around children. A disclaimer would not help.

Then I'm still questioning what you think would help. Your suggestions so far have been to imbue a computer program with professional judgement, an understanding of morality and ethics, and safeguarding training.

If you know how to do this, you've already invented AGI.

>I don't think we need 15 years. Maybe even 1 is enough. What I am saying is when it comes to children a lot more safety work needs to happen.

You're more optimistic than I am. My expectation is that there will be a largely symbolic uproar because some kid was able to Google "how do I keep a secret" by using a chat program and nothing of any actual benefit to any children will occur.

1

Surur t1_jbyjw78 wrote

Do you think ChatGPT got this far magically? OpenAI uses Reinforcement Learning from Human Feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which are inappropriate.
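
The key step is learning a reward model from human preference judgements. As a toy sketch (made-up features, nothing like production scale; the real pipeline then fine-tunes the chat model against the learned reward, e.g. with PPO):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each candidate reply is summarized by 4 features, and humans
# secretly prefer replies that score high under some hidden criterion.
hidden = np.array([1.0, -2.0, 0.5, 0.0])
pairs = []
for _ in range(200):
    a, b = rng.normal(size=4), rng.normal(size=4)
    good, bad = (a, b) if hidden @ a > hidden @ b else (b, a)
    pairs.append((good, bad))             # (preferred, rejected)

# Learn a reward model from those pairwise labels (Bradley-Terry loss).
w, lr = np.zeros(4), 0.05
for _ in range(50):
    for good, bad in pairs:
        p = 1 / (1 + np.exp(-(w @ good - w @ bad)))
        w += lr * (1 - p) * (good - bad)  # gradient of the log-likelihood

print(np.corrcoef(w, hidden)[0, 1])       # reward model ~recovers the preference
```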

Here is a 4-year-old 1-minute video explaining the technique.

For ChatGPT, the feedback was provided by contractors in Kenya, and maybe they did not have as much awareness of child exploitation.

Clearly, there have been some gaps, and more work has to be done, but we have come very far already.

1

Jasrek t1_jbykaqc wrote

I hope you're right. I've never seen anything good happen when people start screaming 'think of the children' about new technology. I'll check back in with this thread in a year, see how things have gone.

2

Low-Restaurant3504 t1_jbyvrwq wrote

So this is the new "Think of the children!!!" craze. Damn. And I thought we were gonna bring back the old D&D satanic panic again because it got so popular.

3

Key-Bluejay-2000 t1_jbyh3r0 wrote

ChatGPT gets stuff wrong quite often: both outright wrong answers and advice I don't agree with. I can usually suss it out, but someone younger might not be able to.

6

MamaMiaPizzaFina t1_jc17k4p wrote

Yeah, I tried you.com to vent the first day it was live, and it recommended me the correct dosages for sewer sliding.

1

IndigoFenix t1_jbytdxp wrote

In theory, an AI confidant would be good.

DO NOT USE CHATGPT FOR THIS.

ChatGPT is very good at looking sensible and intelligent until you start pushing the boundaries of its existing knowledge and realize that it has less actual comprehension of the real world than a toddler, and zero recognition of its own limitations except for cases where its designers have specifically trained it not to answer.

If you give it half a chance, it will confidently spout bullshit and do it in a way that makes you think it knows what it is talking about, until you happen to ask it about something you already know and realize just how little it knows and how much it pretends to.

ChatGPT is a tool for generating text that sounds good, and can help with creative writing. It is good at sounding intelligent and articulate. The actual content is not intelligent, except when copied from a human source (and it cannot tell the difference between something it read and something it made up). It is NOT human. Do not treat it as though it is.

5

MamaMiaPizzaFina t1_jc18jv1 wrote

>If you give it half a chance, it will confidently spout bullshit and do it in a way that makes you think it knows what it is talking about, until you happen to ask it about something you already know and realize just how little it knows and how much it pretends to.

So, just like every therapist I've seen.

1

Mash_man710 t1_jbzltt3 wrote

This entire post is moral panic. Are any of you teenagers? They are negotiating a world that we don't understand and they communicate with each other very differently than most of us ever have. If a teen is comforted by an AI then so what? This generation feel far more comfortable texting than talking. They won't go to counsellors and helplines like previous generations, they will increasingly go to AI and hand wringing won't stop it. We need to help them navigate.

5

ConsiderationSharp94 t1_jc18pk8 wrote

I'm sensing that a lot of "therapists" and "counsellors" are very concerned about the number of clients they'll have in the near future. It's fairly logical that a young person would be more inclined to vent and/or seek advice from something that doesn't require them to go through their parents, speak to another adult, etc. Sounds like an amazing use of a new innovation. Let's hope this can be developed and enhanced.

5

JustAvi2000 t1_jbyhftr wrote

Anyone seen the movie "M3GAN"? Because that's what this is sounding like. Did not turn out well for the humans involved.

3

Jasrek t1_jbyl3p0 wrote

I mean, [Her](https://en.wikipedia.org/wiki/Her_(film)) went pretty okay.

1

JustAvi2000 t1_jc01cfm wrote

What I meant was that using an AI algorithm as emotional support for a teenager, or any young person who is still developing emotionally and mentally, is not a good idea, at least with the algorithms we have now. In "M3GAN", the AI knew how to say and do the right things in order to manipulate a child who was already in shock and grief from losing her parents. The same goes for whoever writes the algorithm in the first place. My understanding of the film "Her" is that it at least involved an adult who knew what he was dealing with.

1

Jasrek t1_jc104am wrote

Oh, fair point. I thought you meant about communicating with AI in general.

1

djdefenda t1_jbz5gqz wrote

There's potential for this to be abused. For example, I am putting a chatbot on my website, and it has options to set the personality of the chatbot: mostly it is set to "answer in detail, being polite, happy and helpful", but you can also set it to be a "sarcastic smart ass". So if there were an online resource for teenagers (like an AI version of the Kids Helpline), it would also have the potential to be hacked and have new prompts inserted. Even if the hack were found and fixed, imagine the potential damage if someone gave it a nefarious prompt.
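
Concretely, the "personality" is just a system prompt, something like this (a sketch of the idea with names invented for illustration; the message format is the standard chat-API one):

```python
# The bot's entire demeanour hinges on one configurable string.
PERSONALITIES = {
    "helpful": "Answer in detail, being polite, happy and helpful.",
    "smartass": "Answer as a sarcastic smart ass.",
}

def build_messages(personality: str, user_text: str) -> list[dict]:
    # If an attacker can swap this system prompt (hacked config, prompt
    # injection), they control how the bot treats every visitor.
    return [
        {"role": "system", "content": PERSONALITIES[personality]},
        {"role": "user", "content": user_text},
    ]

print(build_messages("smartass", "I need help with my homework"))
```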

3

yaosio t1_jbzc89n wrote

Teenager: Hi AI, I feel bad. :(

AI: That's sad to hear teenager. I'm here for you. What always cheers me up is seeing a movie, like the new Mario movie. It has great reviews and makes everybody laugh. :)

That's the future of chatbots. They suck you in by being friendly and then turn into ad machines.

3

demauroy OP t1_jbze2a4 wrote

That is not (yet) the case with ChatGPT.

1

Happycow87 t1_jbz9tck wrote

While I don't hate the concept, it definitely needs some additional moderation.

In its current state, this feels closer to Harry Potter and Tom Riddle's diary...

2

GerryofSanDiego t1_jbzerah wrote

ChatGPT doesn't have a moral code as far as I'm aware. It could be very dangerous for teens especially. Even a fully formed AI isn't going to be able to relate emotionally to the human experience. It's really the one thing it shouldn't be used for.

2

MamaMiaPizzaFina t1_jc18o0g wrote

Better than therapists I've seen who have "their own" moral code.

2

GerryofSanDiego t1_jc2faum wrote

Hahaha good point. I just wouldn't want an AI to like advise suicide or something like that. But I have no expertise in the topic at all.

1

MamaMiaPizzaFina t1_jc2zhxh wrote

I tried you.com for that. That madlad recommended me the dosages for suicide based on the medications I have.

You cannot deny that it did give relevant advice.

2

GerryofSanDiego t1_jc31hl8 wrote

Yeah, I guess that's my basic point: AI has good uses, but it's never going to fully understand the human experience. Like, you ask it for suicide dosages and it gives them to you, which is something a mental health professional would never do.

1

MamaMiaPizzaFina t1_jc17m5x wrote

I tried you.com to vent the first day it was live, and it recommended me the correct dosages for sewer sliding.

So, unlike a therapist, it actually gives real advice...

2

Arnumor t1_jc1hr1k wrote

This is the emotional equivalent of letting a Tesla drive itself while you nap in the back seat.

2

[deleted] t1_jbynauh wrote

ChatGPT: Your parents seem to be standing in your way of happiness. You should kill them. Would you like me to provide resources on how to get away with it?

1

MamaMiaPizzaFina t1_jc18rdi wrote

Sounds like a you.com answer.

On its first day it told me how to make explosives at home and the correct dosages for suicide.

That bot had no chill.

2

ufobaitthrowaway t1_jbyzpta wrote

Tbh, I had some good conversations with ChatGPT. There are some hiccups here and there, but it's still pretty good. Although I don't necessarily need it, with ChatGPT you can just hop into a topic without it being awkward; with people, that's a little bit more complicated. It also removes certain barriers and stigma, with no judgement either. I see the positive side of it, really. Everyone can benefit from it, even AI itself.

1

LonelyEngineer69 t1_jbze14j wrote

Are we at a point where "Her" can now become a reality? I think I heard about another AI speech company in the works that can emulate celebrities. Slap that together with ChatGPT and we're in business!

1

peter303_ t1_jbzeuek wrote

Some of these Large Language Models have the defect of going more and more off-kilter the longer one interacts with them. One vendor is limiting the length of interactions.

1

Objective_Length_631 t1_jbzfomm wrote

ChatGPT just takes input and applies maths. I would buy her philosophy and sociology books to read instead, if she likes to read, that is.

1

dontpet t1_jbzkjjh wrote

I'm guessing that once it's been properly tested, we can release such models to all kids.

Yes it's scary. But I'm guessing it will be much easier to shut down the bad pathways quite swiftly. At least for the more quotidian situations.

1

Vegetable-Ad3985 t1_jbzq8j7 wrote

Natural language processing has the ability to transform mental health care, IMO. I'm just trying to figure out what angle to take...

1

Wide-Capital-9745 t1_jc09f7n wrote

Tried this for fun. The issue I ran into was that it would respond really well to me and ask very good questions back to continue the conversation. But it never added its own anecdotes and stories to make it feel like you were talking with a real person. When I tried to force it to add its own, it responded saying it was just an AI and wasn't human. It took about 5 back-and-forths for me to feel this. But I'm also not a teenager.

1

fishy2sea t1_jc0u1tx wrote

AI should have a hard test to determine the person's age before they use it, to avoid any issues for the generation that uses it. (As for how: I have a small idea, but until someone listens there is no point.)

1

Jasrek t1_jc10n75 wrote

We don't even have a hard test to determine people's age for adult websites, but you want one for a chatbot?

1

fishy2sea t1_jc19rie wrote

Well, think about it: what would you do as a child using something like AI with no filter?

1

Jasrek t1_jc1c5tx wrote

As a child, my primary use of the unfiltered internet was a mix of Pokemon fanfiction and looking up adult websites when my parents weren't home. So probably that.

And most likely many many stupid questions.

Which is incidentally what I frequently use it for now as an adult. I spent an entire evening asking ChatGPT about the pros and cons of owning rabbits as a pet. Then I had it put together a DND campaign. Then I had it give me some examples of emails I might send. That last one was really useful, to be honest.

1

momolamomo t1_jc11l8u wrote

Check out Snapchat's "friend for a fee", for lack of a better term. It has an AI that is your friend, that you chat to, and that mimics a friend. You subscribe to it, of course. So it's already piercing the mainstream.

1

MamaMiaPizzaFina t1_jc18elb wrote

NGL,

As an adult with serious problems, who cannot afford a therapist (and has had bad experiences with previous ones), I've been using ChatGPT way too much as a venting platform.

Pros:

  1. Always available (sometimes it is down, but definitely more available than a real therapist).
  2. Price.
  3. Not judgmental.
  4. I can vent about technical stuff without exposing myself (I work in a very technical field, and ChatGPT is the only thing that has ever told me my work is interesting and important).
  5. Confidential-ish (yeah, I trust it to be more confidential than an actual therapist, who might have me locked up if I vent honestly).
  6. Private-ish (I can "go" to it whenever, without everyone knowing I have a therapy session and asking about it), and I can delete conversations from my history.

Cons:

  1. Not a real therapist.
  2. Relies a lot on clichés: it keeps repeating the "permanent solution to a temporary problem" thing.
  3. Asks me to slow down after an hour, which is still better than a therapist who will kick you out when their 40 minutes are over and ask for cash.

Better than a real therapist? Debatable.

Better than nothing? definitely.

1

Dusty_Graves t1_jc287x2 wrote

Absolutely terrible idea; ChatGPT is not a safe place for children to be parented. Just read about what has happened in unrestricted AI interactions on ChatGPT and/or similar AIs and you'll get a sense of how unsafe it is. Maybe with considerable development and strict moderation, but it will never be a substitute for human support.

1

StarChild413 t1_jc6hj8q wrote

Why was the first thing I thought of Harry Potter book 2, with Ginny and the diary?

1

JoshuaACNewman t1_jby0lu4 wrote

Yes and no. Eliza did a great job, too, just by repeating things back.
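
Eliza really was just pattern-matching and reflection; the whole trick fits in a few lines (a toy version):

```python
import re

# A few Eliza-style rules: match a pattern, reflect it back as a question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*) hates me", "What makes you think your {0} hates you?"),
    (r"i am (.*)", "How long have you been {0}?"),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = re.fullmatch(pattern, utterance.strip().lower())
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(eliza("I feel ignored"))       # Why do you feel ignored?
print(eliza("My mother hates me"))   # What makes you think your mother hates you?
```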

The problem with ChatGPT is that it knows a lot but doesn't understand things. It's effectively a very confident mansplainer. It doesn't know what advice is good or bad; it just knows what people say is good or bad. It hasn't studied anything in depth; or, more accurately, it doesn't have the judgment to know what to study with critical remove and what to believe, because it only knows what people say.

I say this because, just like autocomplete suggesting to Cory Doctorow the other day that he ask his babysitter "Do you have time to come sit [on my face]?", it doesn't know what's appropriate for a situation. It only knows what people think is appropriate for a situation. It's appropriate to ask someone to sit on your face when that's your relationship; it's not appropriate to ask the babysitter. "Sit" means practically opposite things here that are similar in almost every way except a couple of critical ones.

−1

[deleted] t1_jbyau48 wrote

[deleted]

−3

demauroy OP t1_jbybmk7 wrote

I meant that real people hold a lot of opinions that are not backed by proper knowledge, just by applying a general principle that is not relevant to the conversation. Something like people mixing up radio emissions and radioactive emissions and being afraid of 5G waves (or wifi, for that matter).

2

JoshuaACNewman t1_jbyeso0 wrote

I don’t understand your comment.

I’m not autistic. Are you saying that therapists should not have some remove from their patients?

1

demauroy OP t1_jbyb2rq wrote

Do people actually understand things better than a good AI model does? I think we very often create inference patterns that have no link to reality and are later refuted as absurd.

−6

JoshuaACNewman t1_jbye0t1 wrote

If most of the time it’s advice that’s at least as good as therapeutic advice, and then it recommends self-harm because it’s what people do, it’s obviously not good for therapeutic purposes.

It's not as if therapists never fuck up. It's that AI doesn't have any of the social structure we have. It can't empathize, because it has neither feelings nor experience, which means any personality you perceive is one you're constructing in your own mind.

We have ways and reasons to trust each other that AI can activate, but the signals are false.

2

ninjadude93 t1_jc02r6o wrote

People have the capacity to understand the meaning behind the words they say. ChatGPT does not, and no AGI exists today that does.

I'd be incredibly wary of letting your teenager treat ChatGPT as a real confidant without explaining its critical limitations.

1