Cdn_citizen t1_jccpvvb wrote

Nice try, but my friends who are actual rocket scientists don’t have time in their day jobs to waste on Reddit. They’re busy trying to get to Mars.

Who’s moving goal posts now? We are talking about licensed human therapists versus hacking.

Now you’re bringing up a new term, ‘human hacking’.

Maybe if you have so much free time you should Google what you say before you say it.


Cdn_citizen t1_jcbseal wrote

I’m not saying people can’t get hacked. Do you even know what hacking is?

I’m not moving goal posts. I’m stating facts; you’re the ones who can’t tell logic apart from what you ‘think’ in your own minds is equivalent. So sad to see the lot of you; no wonder you’re on Reddit constantly replying to my comments. You all have nothing better to do.

Edit: To prove a point it’s not my fault you people don’t know the definition of ‘hacking’.

Bribing, breaking in, and blackmailing are not ‘hacking’.


Cdn_citizen t1_jcaqglm wrote

Keyword “if”. Which most don’t. What you and all the others on here don’t understand is that the reach of an AI therapist is much greater than a human therapist’s.

Hence why, for example, people don’t break into convenience stores to steal their customer data but will hack Facebook or Uber servers.

Man this crowd is dense.


Cdn_citizen t1_jc8yzrs wrote

Nice try but an A.I. can have bad data fed to it. A.I.s have to be connected to the cloud. A.I.s are stored on servers, designed to be accessible digitally.

What you describe, physical file storage and breaking in: someone has to physically go to the office, know where it is, and have a connection to a patient of the therapist. No random person will do that; it would have to be targeted.

Plus they'll have to carry all those files, or the computer, out of the office, and professional buildings usually have 24-hour security on site.

An A.I. therapist won't have such an office and can be hacked any time during your hypothetical appointment.


Cdn_citizen t1_j9ihsad wrote

Starlink still sucks in urban areas; until they fix that it probably won’t be ‘Every ship, train, plane, work truck, semi, and military unit’.

Also, this thing is still huge, so it won’t work on cars, and you have to be pretty stationary to use it effectively. I feel like this will just be replacing those big old satellite dishes from the ’90s.


Cdn_citizen t1_ivytwxs wrote

I don’t have to understand the tech. I’m testing it every day with real life scenarios. It’s not anywhere near ready.

You are so sold on your beliefs that you’re not looking at reality. Have you not noticed all the car ads stopped pushing self-driving and shifted towards driver assistance this year?

There’s a reason.

P.S. You can downvote me all you want, but that does not change the facts.


Cdn_citizen t1_ivy5ozh wrote

That's the issue: SDCs don't feel fear or surprise, and they don't have intuition either.

They can't, for example, anticipate a person stepping out from behind a parked vehicle and prepare to brake in case that person actually crosses rather than just getting into their car. A human driver would have their foot over the brake or be prepared to steer away should the person unexpectedly cross.

I'm not sure where you're from but people drive and follow the rules of the road when it snows in Canada. The only exception would be the lane markings being covered but my car can't see the lane markings when it rains anyway so that's not a snow issue.

Also, SDCs can misread signs; a human is far less likely to do so. For example, HOV signs.


Cdn_citizen t1_ivw3bwm wrote

It’s less the snow sitting on the roads than the overnight melting and freezing cycles that are the issue. Plus, I’m in Canada, and sometimes a polar vortex can keep it so cold that even road salt doesn’t work and ice remains on city roads.

In that case, snow + ice = a bad time to be self-driving while lacking the intuition of a human driver.


Cdn_citizen t1_ivqn05s wrote

You either watched too many movies or have not studied A.I. before. The ones you are referring to are fictional; not only that, they are semi-sentient.

Current A.I. is nowhere near that level of capability, and likely won’t be in the near future.

The way current A.I. models function is by having humans hand-feed them massive amounts of categorized data to help them ‘learn’ and output results based on that data.

It’s why you can’t have a real conversation with Alexa or Siri; you can only tell it to do things, and it can only perform tasks it’s been programmed to do.
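The hand-fed, categorized-data pattern described above can be sketched as a toy supervised classifier. This is purely illustrative, not any real product's training pipeline; all the data points and labels below are made up:

```python
# Toy sketch of supervised learning: humans supply labeled ("categorized")
# examples, and the model can only answer based on those examples.
# All data here is made up for illustration.

def train(labeled_examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose centroid is closest.

    Note the limitation from the comment above: the model can never
    output a label it was not trained on.
    """
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: sq_dist(model[label]))

# Hand-fed, categorized data: (features, label) pairs.
examples = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
            ([5.0, 5.0], "dog"), ([4.8, 5.2], "dog")]
model = train(examples)
print(predict(model, [1.1, 1.0]))  # a point near the "cat" examples -> "cat"
```

The point of the sketch: everything the model "knows" comes from the labeled examples a human fed it, which is why it can classify new inputs into known categories but can't hold an open-ended conversation.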