
petrichoring t1_jbyc42i wrote

As a therapist who works with teens, I feel concerned about a teen talking to an AI about their problems given the lack of crisis assessment and constructive support.

“Adult confidants” need to have safeguards in place like mandated reporting, mental health first aid, and a basic ethics code to avoid the potential for harm (see the abuse perpetrated by the Catholic Church).

If a teen needs adult support, a licensed therapist is a great option. The Youth Line is a good alternative for teens looking for immediate support, as there are of course barriers to seeing a therapist in a moment of need. Most schools now have a school social worker who can provide interim support while a teen is getting connected to resources.

Edit: also wanted to highlight the importance of having non-familial adult support. Having at least two non-parent adults who take a genuine interest in a child’s well-being is one of the Positive Childhood Experiences that promote resilience and mitigate the damage caused by Adverse Childhood Experiences. A chatbot is wildly inadequate to provide the kind of support I think you’re referring to, OP. It might offer some basic degree of attunement and responsiveness, but the connection is absent. Neurobiologically, we have specific neural systems that are responsible for social bonds with other humans, and I suspect even the most advanced AI would fall short in sufficiently activating these networks. Rather than using AI to address the lack of support and connection that many, if not most, teens experience in their lives, we should be finding ways to increase safe, accessible, organic human support.

65

leeewen t1_jbykkhm wrote

I can also tell you, as a teen who was suicidal, that I didn't tell people because of mandated reporting. It endangered my life more because I couldn't speak to people.

While not on topic, and not critical of you or your post, mandated reporting can be more dangerous than its absence for some.

On topic, the ability to vent to an AI may go a long way for some people who lack any other options.

35

petrichoring t1_jbyr8i8 wrote

Totally! The issue of suicide is such a tricky one, especially when it comes to children/teens. My policy is that I want to be a safe person to discuss suicidal ideation with, so with my teens I make it clear when I would need to break confidentiality without the consent of the client (so unless they tell me they’re going to kill themselves when they leave my office and aren’t open to safety planning, it stays in the room). With children under 14, it’s definitely more of an automatic call to parents if there’s SI out of baseline, especially with any sort of implication of plan or intent. Either way, it’s important to keep it as trauma-informed and consent-based as possible to avoid damaging trust.

But absolutely it becomes more of an issue when an adult doesn’t have the training or relationship with the client to handle the nuances of SI, given how broad a spectrum it can present on; ethically, safety has to come first. And, like you said, that can then become a huge barrier to seeking support. My fear is that a chatbot can’t effectively offer crisis intervention because it is such a delicate art, and we’d end up with dead kids. The possibility for harm outweighs the potential benefits for me as a clinician.

I do recommend crisis lines for support with SI as long as there is informed consent in calling. Many teens I work with (and people in general) are afraid that if they call, a police officer will come to their house and force them into a hospital stay, which is a realistic-ish fear. Best practice is that that only happens if there’s imminent risk of harm and no way to engage in less invasive crisis management (I was a crisis worker before grad school and only had to call for a welfare check without the person’s consent as a very last resort, maybe 5% of the time, and it felt awful), but that depends on the individual call taker’s training and the protocol of the crisis center they work at. I’ve heard horror stories of a person calling with passive ideation and still having police sent to their house against their will, and I know that understandably stops many from calling for support.

I recommend using crisis lines when there isn’t other support available because I do believe in the power of human connection when all else feels lost, with the caveat that callers know their rights and have informed consent for the process.

Ideally we’d implement mental health first aid training for teens across the board so they could provide peer support to their friends and be a first line of defense for suicide risk mitigation without triggering an automatic report. Would that have helped you when you were going through it?

6

Gagarin1961 t1_jbylg97 wrote

I don’t think OP is saying the AI is a good replacement for therapists, they’re saying it’s a decent replacement for having no option to talk things out.

Not every problem needs to be addressed by a licensed therapist either. I would say most don’t.

For any serious issue, ChatGPT consistently recommends contacting a professional therapist.

8

petrichoring t1_jbys7al wrote

If a teen feels like they have no options for support besides a chatbot, that itself is an indicator of underlying life stressors.

Curious what problems in a child’s life you think wouldn’t be helped with support by a therapist?

3

StruggleBus619 t1_jc0m856 wrote

The thing with therapist support for teens is that it requires the teen to go to their parents and ask to be taken to a therapist (not to mention the cost of the therapist). An AI bot trained on what therapists do and say, and the techniques they use, could give teens an outlet for issues they'd benefit from simply venting or talking out, but that aren't serious enough to justify all the steps, and the vulnerability and self-exposure, involved in asking parents to take you to a therapist.

5

demauroy OP t1_jc0wx77 wrote

That is exactly what I have in mind.

2

demauroy OP t1_jc0wuy2 wrote

I am not thinking so much of a situation with no support at all; more that it is sometimes good for a teenager to have advice from a trusted adult in addition to, or as a complement to, the advice of parents, teachers, and friends, all of whom have their own biases. It may not be for situations of huge distress, but more for getting advice on the daily frustrations and fears of teenage life.

Now, why not therapists? Let's talk straight here: if you can afford it, that is probably very nice.

I am not sure how a teenager would perceive it, though. Here in France, there is a stigma attached to seeing a therapist or psychiatrist, especially for young people (as if you were unable to manage your own mental health). I think this stigma is unfair, as many people do have problems and later in life end up on mood-altering drugs (I think we are world champions in France for that).

Also, this would be a significant expense that would need to be weighed against other activities (sport...), holidays, savings for studies...

1

dramignophyte t1_jbz41cj wrote

Okay, so I don't want this to sound like I think you are wrong, because I really do think you are right, but... for a doctor, you sure are making assumptions that aren't based on research. How can I say that? ChatGPT has been around for how long? How many peer-reviewed studies have come out on its potential to have a positive or negative effect in this scenario? Or even just studies? So, by the math, I can be like 90% sure you are basing your answer on your own personal views from other contexts and applying them here, which MAY BE TRUE. I really want to emphasize that I agree with you and do not think you are wrong; I just think you are wrong to speak on it as if you were an expert on the subject, when nobody is an expert on AI-to-human interactions and their effect on mental health. You are an expert on mental health, not AI interactions. Like your reasoning about protections against self-harm? I would argue an AI program could eventually, if not already, be better at determining whether someone is at risk of hurting themselves or engaging in dangerous behavior, and putting in privacy protections is also a fixable problem.

5

AtomGalaxy t1_jc1uayz wrote

But, if programs like ChatGPT help people, how will the for-profit healthcare industrial complex in America continue to rake in money to send to Big Pharma and the insurance companies?

Perhaps $3M has been spent over the decades trying to fix my now estranged older sister between rehab, therapy, hospital stays, and law enforcement. That doesn’t even include the destruction she has caused to society and people’s lives. All it did was help turn her into a psychopath who keeps on harming people and being insane on social media. She’s too far gone now to be taken seriously, but my lived experience very much disagrees with “trust the professionals” and with just feeding the beast more money.

What’s wrong is our lifestyles, our food, our addiction to technology that’s fucking with our minds with their algorithms. It’s like the commercials when I was a kid after you ate your sugary cereal and watched your favorite cartoons that were really infomercials for the toys, only to then sit all day playing Nintendo getting fat. It’s that times 100 these days. They’ll put kids on drugs for anything. We’re being chemically handcuffed just to get us to comply with the system.

What we need is sunshine, our hands in the dirt growing plants, real fruits and vegetables, walkable communities, and above all a lot more of our lives outdoors not looking at screens.

Show me a doctor in America who will prescribe that for a teenager before Adderall.

1

adigitalwilliam t1_jcfj728 wrote

I think there are limitations to what you advocate, but also a lot of truth there—upvoted to cancel out the downvote.

2