
rogert2 t1_j6t4g7m wrote

One possibility is that AI could be used to manufacture evidence. As others have pointed out, that may not pose as big a danger as might be feared. But, yeah, it's a thing to be alert for.

Another possibility is that AI could be used to enhance the credibility of social engineering attacks on the humans in the system. It might be a lot easier to trick your opponent's legal team into divulging confidential info with a FaceTime call that presents an AI deepfake of their boss, claiming to be calling from a colleague's phone and asking for information about the case or legal strategy. "My phone died, I'm calling from a friend's phone; send me the email addresses for our witness list so I can [do something productive]."

Another possibility is that AI will be used to vet jurors. Instead of just asking potential jurors if they have any prejudicial opinions about EvilCorp, and having to take their word for it, you can have AI digest all of that person's published writing (including social media) and provide you with a summary. "Based on analysis of writing style, these two anonymous social media accounts belong to Juror 3, and have been critical of large corporations in general and EvilCorp's industry in particular. Boot Juror 3." Rich legal teams will have even more powerful x-ray vision that helps them keep out jurors who have justified negative opinions about their demonstrably predatory clients.
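For what it's worth, the "match anonymous accounts by writing style" part isn't science fiction; it's called stylometry, and even a toy baseline works surprisingly well. Here's a minimal sketch using character n-gram profiles compared by cosine similarity (all names and sample texts below are made up for illustration; real attribution tools use much richer features like function-word frequencies and syntax):

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Build a character n-gram frequency profile of a text (case-folded)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    shared = set(p) & set(q)
    dot = sum(p[g] * q[g] for g in shared)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def attribute(anonymous_text, candidate_texts):
    """Return the candidate whose known writing best matches the anonymous text,
    plus the full score table."""
    anon = ngram_profile(anonymous_text)
    scores = {name: cosine_similarity(anon, ngram_profile(t))
              for name, t in candidate_texts.items()}
    return max(scores, key=scores.get), scores
```

A legal team would feed in each juror's known writing (questionnaires, public posts) as the candidates and an anonymous account's posts as the query; the highest-scoring juror is the likely author.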

And probably a lot more. I guess paralegals are really worried that ChatGPT will eat their whole discipline, and since "people are policy," that's going to have an impact on outcomes.


Mayor__Defacto t1_j6uanv7 wrote

Well, that’s the thing, jurors with strong opinions about the defendant or prosecution, even if justified, aren’t supposed to be empaneled anyway, so it’s a nonissue. The prosecution could use the same tech to boot people who hate cops, for example.


BMXTKD t1_j6v2b86 wrote

Another thing people aren't going to mention is that artificial intelligence can detect people's vocal tones. Someone who is guilty has a much different tone than someone who's innocent. If you've heard of Dr. Paul Ekman, he's done research on nonverbal communication. So you can have a deepfake of someone doing something, but if the reaction to it is total surprise or righteous indignation rather than fear, then it's unlikely they committed the crime, because they are as surprised as you are that this happened.


BMXTKD t1_j6v2m1e wrote

Like this. You accuse a vegetarian of eating a steak dinner.

There is a deepfake of the vegetarian eating a steak. The artificial intelligence shows a nice, juicy steak being eaten by a vegetarian.

Turns out the vegetarian was actually eating a portobello mushroom, and the AI swapped the portobello mushroom out for a steak.

The vegetarian is surprised and righteously shocked that it shows him or her eating the steak. If the vegetarian had actually eaten the steak, he or she would instead be afraid of having been found out.