Submitted by originmsd t3_10qxr78 in Futurology

Right now there are still signs and tells that voice clones and deepfakes aren't quite real. An expert or certain software can discern them easily. But like most things, the technology is going to improve. Some jury members may already have trouble spotting the difference between real and fake evidence. How will this affect the way video and audio evidence works in court in the future?

Thoughts?

178

Comments

MrSpotgold t1_j6se5wd wrote

Nobody rich can be convicted for any crime, ever. Everybody else can be convicted for any crime, anytime.

123

stupidcasey t1_j6v4inm wrote

Huh? Maybe this is a good thing? I mean, if all video is essentially useless, you go back to the days before everyone recorded everything and had to dance around spoiled brats' unworthy opinions.

3

Orc_ t1_j6xr4da wrote

OK? That wasn't the question.

1

Fafniiiir t1_j90f6ct wrote

Uhm, Harvey Weinstein?
Bill Cosby was convicted originally too, but I think his age played a bigger part in the conviction being overturned than him being rich.
The notion that rich people are somehow immune, though, is complete bullshit.

They can hire better lawyers, yes, but lawyers can't just let you get away with any crime.
Most of the questionably legal things rich people get away with are legal loopholes (especially when it comes to stuff like taxes).
Rich people aren't murdering people and getting away with it in spite of evidence against them; that's just not happening.

1

TheLastSamurai t1_j6t76e1 wrote

It will be a nightmare. Even being accused of something with seemingly convincing evidence can cost you jobs and relationships.

92

Littleman88 t1_j6tfe66 wrote

There are some accusations that will stick to your record even when you're found innocent. Convincing evidence is just another hurdle.

The world is fucked and everything is set up so that you have to be at least somewhat paranoid about everyone around you.

38

TheLastSamurai t1_j6thcfx wrote

Absolutely. Imagine a schoolteacher: a heinous video of them is released. Do you think the school waits? No. Then, on the other hand, when something is real and legit, it's as if that evidence has lost its impact. Big challenge.

21

randallAtl t1_j6wjy88 wrote

I was thinking the opposite. You could play a fake recording for a witness and ask, "Is this Mr. Smith?" When they say, "Yes, that is Mr. Smith," you say, "False, that is AI."

7

Ikiro_o t1_j6vzh3a wrote

There will be companies dedicated to faking compelling evidence of the opposite with AI, just to make a point. :)

3

GukkiSpace t1_j6t8w37 wrote

My bet is the same companies who make the deepfake tools will make deepfake detection tools. Make a problem, maybe even make the problem free for the common man, then sell the solution to parties who need it. Pretty much straight out of Apple's playbook.

55

fruor t1_j6u59yn wrote

This is the correct answer. Training these AIs means pitting competing AIs against each other. You can't improve a model's skill at achieving a goal without also advancing the challenges it faces.

The companies holding the best algorithms to evade detection will always be the companies holding the best detection algorithms.
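
For the curious, the competition described here is essentially how a GAN (generative adversarial network) is trained. A minimal sketch in PyTorch on toy one-dimensional data, purely illustrative and nothing like a production detector: a "faker" and a "detector" take turns improving against each other.

```python
# Minimal GAN-style sketch (PyTorch, toy 1-D data): a "faker" and a
# "detector" take turns improving against each other.
import torch
import torch.nn as nn

def real_batch(n=64):
    # Toy "real data": samples the faker must learn to imitate.
    return torch.randn(n, 1) * 0.5 + 2.0

faker = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
detector = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.Adam(faker.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5000):
    # 1) Train the detector to tell real from fake.
    fake = faker(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(detector(real_batch()), torch.ones(64, 1)) +
              loss_fn(detector(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the faker to fool the current detector.
    fake = faker(torch.randn(64, 8))
    f_loss = loss_fn(detector(fake), torch.ones(64, 1))  # "call it real"
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()
```

The catch for detection vendors is right there in the loop: the same training that produces a better detector also produces a better faker.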

15

oddinpress t1_j6vghfg wrote

Yeah, except at the end of the day a video is just pixels. There's a point where it's physically impossible to distinguish an AI-altered video from a recording of a real event.

Even metadata can be bypassed if someone really wants to.

That detection solution can't be sold if it's not possible to build.

9

GukkiSpace t1_j6vi39d wrote

Well yeah, for a single still image, maybe. But when you have anything that is more than a single sample, the job becomes much easier; it's a game of sequential comparative data.
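
A rough sketch of what "sequential comparative data" could mean in practice, assuming OpenCV and NumPy: flag frames whose frame-to-frame change looks statistically out of family. The file name, metric, and threshold are all illustrative; real forensics uses far subtler cues.

```python
# Rough sketch: flag frames whose frame-to-frame change is statistically
# out of family. This is the bare-bones version of "sequential
# comparative data", not a real deepfake detector.
import cv2
import numpy as np

def frame_residuals(path):
    cap = cv2.VideoCapture(path)
    prev, scores = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            scores.append(np.mean(np.abs(gray - prev)))  # mean abs change
        prev = gray
    cap.release()
    return np.array(scores)

scores = frame_residuals("clip.mp4")  # hypothetical file
z = (scores - scores.mean()) / (scores.std() + 1e-9)
print("suspect frame pairs:", np.where(np.abs(z) > 3)[0])
```

A real detector would look at much subtler signals (lighting, blink rate, codec noise), but the principle is the same: more samples mean more chances for the fake to slip.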

2

imuniqueaf t1_j6w5e37 wrote

Bingo.

"Those ones who sell the panic, also sell the cure"

6

Nebula_Zero t1_j6wn24m wrote

They can probably hide it the same way you hide Photoshop edits. An edit is detected by comparing its pixels to the rest of the image; a computer can very easily see the differences in resolution and other small inconsistencies. But if someone makes a high-resolution image and compresses it enough that there are slight compression artifacts on everything, you can't detect it anymore. The details that reveal the Photoshop disappear into the compression artifacts.
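
As a toy illustration with Pillow (the file names are hypothetical), the "laundering" step described here is a one-liner:

```python
# Re-encoding at low JPEG quality blankets the whole image in uniform
# compression artifacts, which can wash out the local statistical traces
# a splice or clone leaves behind. File names are hypothetical.
from PIL import Image

img = Image.open("edited_highres.png")  # image containing a local edit
img.convert("RGB").save("laundered.jpg", "JPEG", quality=30)
```

Error-level-analysis-style detectors compare an image against its own re-save; after a pass like this, the edited region may no longer stand out from the rest of the frame, though forensic tools look at more than compression alone.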

4

Fafniiiir t1_j90f9ep wrote

I honestly don't think it'll matter in most cases.
Once it's out there, it's too late.

It's like when a newspaper puts out fake news: almost no one reads the correction afterwards, and the fake story keeps being perpetuated.

1

Helloscottykitty t1_j6seaw5 wrote

Chain of custody is, and will remain, a thing. Ever wonder why your dashcam footage gets just a nod from the police and a "we'll look into it"? Because it's not a CCTV system recording onto tape inside a locked box.

You still have to provide reasonable assurance that the video is from a trusted source, and you'll have metadata specialists working on less secure forms of stored media.

40

BinyaminDelta t1_j6smyuq wrote

Modern security systems don't "record onto tape," either.

13

Helloscottykitty t1_j6snk49 wrote

Let's be real, how many places are using SSDs? My work's "modern" system still uses DVDs.

The important part is that not just anyone can go and access or interfere with it: one guy with the key, and that's it.

7

Jakcle20 t1_j6tzhth wrote

Sorry but the big glaring weakness to me is that there's a human with access to it and that it's locked behind a lock. Both of those things are fallible. No matter how good of a lock you think you have, it can be exploited. No matter how trustworthy a person seems, they can be bought/coerced/eliminated.

8

Helloscottykitty t1_j6u1kvg wrote

True but that's why it's proof beyond a reasonable level of doubt.

I could take strands of your hair and plant them at a crime scene while wearing a high tech containment suit to not leave any of mine.

I could hire actors who are your body double and commit the crime in front of a bunch of witnesses.

Everything is fallible, but as video evidence stands, a key held by a trusted person, who often stands to gain little from giving others access, is the most reasonable level of assurance you can provide.

CCTV systems are also a little different in how they record onto whatever medium they choose; that itself has validation methods I'm aware of but have never had explained to me.

11

johnp299 t1_j6snrfb wrote

This. Some kind of secure hardware and image/data watermarking is needed. Have a certification process. Uncertified footage won't be trusted.

7

throwaway4abetterday t1_j6sp5en wrote

That's not the reason why cops ignore it. Cops ignore it because they're in on the abuse and don't want to rat out their friends.

7

Helloscottykitty t1_j6spmnf wrote

Then swap out dashcams for Ring doorbells. I'm not going to comment on police in the USA, but UK police like to have reasonable evidence that they can use in court.

2

throwaway4abetterday t1_j6sq7xb wrote

No, swap out your flippant attitude with a more interested and caring one.

You can't be reductive with the corrupt police issue, and the UK's are no different than the U.S.'s or anywhere else's. I bet you'll dispute that point though, and not address the fact that police dismissal of video evidence has nothing to do with technology.

−4

orincoro t1_j6tvzyi wrote

lol. The UK's issues are EXTREMELY DIFFERENT from those of the US.

2

BMXTKD t1_j6v1utv wrote

Not to mention how decentralized the US is. It's even in the country's name: United States. Which means the police force over in Bugtussle, Missouri, is going to be a lot different from the police force in Seattle, Washington. Even the police force in Bellevue, Washington, is going to be different from the one in Seattle, even though they're in the same metropolitan area.

0

Iwasahipsterbefore t1_j6xrgzj wrote

No, they're not. That's the thing. They're not different, thanks to a nationwide effort over the past century or so to homogenize and militarize the U.S. police force. Everyone gets put through the same "warrior" training that involves traumatizing new officers by making them watch videos of cops getting violently murdered. Nationwide, we cull recruits that score too highly on intake tests. Nationwide, cops are hired from outside the county or town they work in, to help them Other people.

The bare details, like "which cop gang do these cops belong to?", might change, but that's about it.

0

BMXTKD t1_j6xzql1 wrote

"nationwide, we call recruits that score to highly on intake tests.".

"Nationwide, cops outside the county or town they work in help them"

I can think of many communities where you have to have a four year criminal justice degree, and residency requirements to serve in their communities. The states where they don't have either of those things, is where you find more corrupt police. My state has a four year degree requirement, but not a residency requirement. Before they repealed The residency requirement, cops were actually quite decent.

1

Helloscottykitty t1_j6ssscf wrote

As I said, I won't comment on police in the USA, and I won't argue with you. You came to a thread discussing how deepfakes may impact the judicial system, and your position is that, even before deepfakes became troubling, police never cared about video evidence in the first place.

What point could I possibly offer that would make you feel differently? You're either looking for a non-rational argument or you are non-rational yourself.

0

throwaway4abetterday t1_j6st3sh wrote

What is it you think you're doing now?

Address the counter-claim. Address the statement: "Police don't ignore dashcam footage because its origin cannot easily be proven; it's ignored because police are corrupt and are trying to protect their friends."

Address it. If you have the courage.

−3

-The_Blazer- t1_j6txk2s wrote

Yup. I think in the future this will be expanded: there will be cryptographically verified sensors that sign their images (or other products) with a unique key that represents a "trusted" sensor. Fabricating keys or modifying sensors will carry extremely harsh penalties.
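
A minimal sketch of what such a signing sensor could look like, using Ed25519 from the Python cryptography package. The key handling and names here are illustrative assumptions, not any real camera vendor's scheme.

```python
# Sketch of a signing sensor using Ed25519 (Python "cryptography"
# package). Key handling here is illustrative, not a real vendor scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sensor_key = Ed25519PrivateKey.generate()  # burned in at manufacture
verify_key = sensor_key.public_key()       # published / registered

image_bytes = b"...raw sensor readout..."  # stand-in for a real frame
signature = sensor_key.sign(image_bytes)   # shipped alongside the file

# Later, a court checks the frame against the sensor's registered public
# key; any pixel-level tampering invalidates the signature.
try:
    verify_key.verify(signature, image_bytes)
    print("frame authentic for this sensor")
except InvalidSignature:
    print("frame altered or not from this sensor")
```

The hard part isn't the math, it's key management: keeping the private key locked inside tamper-resistant hardware is exactly where those "extremely harsh penalties" would come in.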

5

Helloscottykitty t1_j6u09t9 wrote

Yeah, these are my thoughts too. I think three key innovations/techs will play a role. The first is further development of wireless Internet to the point that it's everywhere; we will all want this for everything.

The second is quantum computing, which will provide encryption and verification.

The last is a digital passport. This will provide authentication, and by the time it exists, the idea of being anonymous or unverifiable will seem alien to us.

2

-The_Blazer- t1_j6u0vx2 wrote

Yep. We already have digital ID in my country, although using it as a third party is still hard enough that, e.g., Twitter won't allow you to use it to verify yourself. Which is a pity, because I'd much rather trust Twitter with a "verified" message from the government than with a scan of my ID card.

3

Mayor__Defacto t1_j6uabxr wrote

Yep. And to expand on this, the justice system largely operates on trust, but you can't just decide to randomly introduce your own video into evidence. If the prosecution thought the video/audio could be faked (based on their own evidence with a proven chain of custody), there would be an evidentiary hearing to figure out whether the jury should even be allowed to see it.

4

Remarkable-Way4986 t1_j6se3g1 wrote

If only AI could be taught to predict crime before it happens /s

20

Raagggeeee t1_j6tnc7s wrote

Find 3 precognitive mutants, hook em up to an AI interface.

Future Crimes Division has entered the chat

7

Suberizu t1_j6tla3k wrote

I see you are a Psycho-Pass enthusiast as well.

3

BinyaminDelta t1_j6sn36x wrote

At some point, there may be a serious push for AI TO BE THE JURY.

Not any time soon. But there's a path to this.

11

rogert2 t1_j6t0fcb wrote

I doubt it.

At least in the U.S. the stated reason for having juries is the assertion that each of us deserves to be judged by our peers. AI will never be our peer, not because it isn't as smart, but because it is categorically not a human.

I do expect AI to be used extensively during voir dire and to observe jurors during high-stakes trials.

I think we'll see almost zero AI inside the courtroom for philosophical and legal reasons, but a huge amount of it just outside for adversarial, winning-is-all-that-matters reasons.

9

bloodycups t1_j7378px wrote

At that point you would just get rid of the jury and present your case to judges instead.

Hmm, maybe AI judges but human jury pools.

1

rogert2 t1_j6t4g7m wrote

One possibility is that AI could be used to manufacture evidence. As others have pointed out, that may not pose as big a danger as might be feared. But, yeah, it's a thing to be alert for.

Another possibility is that AI could be used to enhance the credibility of social engineering attacks made against the humans in the system. It might be a lot easier to trick your legal opponent's legal team into divulging confidential info by doing a FaceTime call that presents an AI deepfake of their boss, claiming to be calling from a colleague's phone, asking for some information about the case or legal strategy. "My phone died, I'm calling from a friend's phone; send me the email addresses for our witness list so I can [do something productive]."

Another possibility is that AI will be used to vet jurors. Instead of just asking potential jurors if they have any prejudicial opinions about EvilCorp, and having to take their word for it, you can have AI digest all that person's published writing (including social media) and provide you with a summary. "Based on analysis of writing style, these two anonymous social media accounts belong to Juror 3, and have been critical of large corporations in general and EvilCorp's industry in particular. Boot juror 3." Rich legal teams will have even more powerful x-ray vision that helps them keep out jurors who have justified negative opinions about their demonstrably predatory clients.

And probably a lot more. I guess paralegals are really worried that ChatGPT will eat their whole discipline, and since "people are policy," that's going to have an impact on outcomes.

10

Mayor__Defacto t1_j6uanv7 wrote

Well, that’s the thing, jurors with strong opinions about the defendant or prosecution, even if justified, aren’t supposed to be empaneled anyway, so it’s a nonissue. The prosecution could use the same tech to boot people who hate cops, for example.

3

BMXTKD t1_j6v2b86 wrote

Another thing, too, that people aren't going to mention is that artificial intelligence can detect people's vocal tones. Someone who is guilty has a much different tone than someone who's innocent. If you've heard of Dr. Paul Ekman, he's done research on nonverbal communication. So you can have a deepfake of someone doing something, but if their reaction to it is total surprise or righteous indignation, rather than fear, then it isn't likely that they committed the crime, because they're as surprised as you are that it happened.

1

BMXTKD t1_j6v2m1e wrote

Like this: you accuse a vegetarian of eating a steak dinner.

There is a deepfake of the vegetarian eating a steak; the AI shows a nice, juicy steak being eaten by a vegetarian.

Turns out the vegetarian was actually eating a portobello mushroom, and the AI swapped the portobello out for a steak.

The vegetarian is surprised and righteously shocked that the video shows them eating steak. If the vegetarian hadn't been doing what they were supposed to be doing, they'd instead be afraid of having been found out.

2

PeacefullyFighting t1_j6sgvje wrote

Security cameras will become obsolete and the implications of that are astounding

8

Longjumping-Tie-7573 t1_j6sp0zc wrote

Security camera recordings will be obsolete, but folks will still trust their own cameras to show them their situation at hand.

10

KamikazeArchon t1_j6stbf1 wrote

No, they won't. "Use in court" is not a common use case for security camera recordings.

Further, even the "court" use case doesn't actually go away. Plenty of things can be faked today; that doesn't mean they commonly are faked.

A printed contract can be faked just by printing up a different contract and claiming it's the original. That doesn't stop contracts from being used in court. Falsifying evidence, after all, is its own separate serious crime - and getting caught makes you look incredibly guilty for the original crime (if defending) or makes your case collapse (if prosecuting).

As a more general statement, laypeople commonly overestimate the value of "hard evidence" in court. Sure, it's important, but it's neither as common nor as necessary as people seem to think. Many cases are decided on nothing more than witness testimony.

4

FawksyBoxes t1_j6swr92 wrote

Almost all printers print dots, invisible to the naked eye, that show the date, time, and IP of where a page was printed, for exactly this reason. This is why most printers require color cartridges even when printing in black and white.

4

rogert2 t1_j6t267y wrote

I find this hard to believe.

For one thing, there are very many printers out there that only print black-and-white. I happen to own one of the most popular models, which (when I bought it) was the most popular printer being bought on Amazon. It takes black toner only.

There are also many printers out there whose IP addresses are meaningless, because they're on a private network. My home printer has an IP of something like 192.168.1.4, which is what my common-as-dirt home wifi router gave it. So, it's not going to be very helpful to know the IP of the printer or even the computer that sent the print job.

Yes, there are many circumstances where these problems don't apply, and yes, there are undoubtedly people out there trying to falsify evidence who wouldn't know to take any of the simple steps necessary to defeat "hidden fingerprinting" like this. But it seems so unreliable that I would be surprised if vendors even tried.

2

Ferret_Faama t1_j6uudhn wrote

Even if all of this were true, it ignores the fact that you could just spoof the time on the printer.

2

PeacefullyFighting t1_j6t1nbs wrote

All a defense lawyer has to do is get good at using the tech, upload 20-30 versions of the exact same video with 20-30 different people to social media, and use them as evidence. I'm sure this breaks some rules, but you know it would be done, most commonly for the elites.

2

oddinpress t1_j6vgrnm wrote

Not necessarily. It may be the case that security cameras evolve to record onto physically unalterable storage, like sensitive tape in a black box, and only footage stored by those means is admissible in court.

The law would evolve.

2

Orc_ t1_j6xrbv7 wrote

Not if they're timestamped and NFT'd (yes, those NFTs). As much as many of you hate them, they're part of a chain of technology (pun intended) that will be very helpful in the future.

1

trash_burner12 t1_j6tgzyb wrote

The arms race between AI-identifying software and AI software is potentially going to make it impossible to really "know" whether the AI-identifying software is even working. I'm under the impression that most, if not all, developers do not understand exactly what is happening inside these machines at runtime or training time.

Most court systems are sluggish and highly bureaucratic. They aren't going to be able to keep up or decide which AI-identifying software to use when different systems could be rapidly coming out.

As others have said, media evidence could be accepted if it's verified as coming from a trusted source. Unfortunately this creates a sort of "who watches the watchers" scenario. It's sad, because for so long video evidence was basically an oracle of truth.

7

TheRealDestian t1_j6u1x7i wrote

On a side note, I'm still surprised voice acting exists as a profession at all at this point, but I have to imagine its days are indeed numbered.

5

Illuminaso t1_j6x24cr wrote

I'm worried for the day people start making deepfakes of world leaders. Imagine the Chinese government putting actual money into making a deepfake of Joe Biden doing or saying something terrible. With the resources of a government backing a deepfake, they could easily make it so convincing that nobody could tell it isn't real.

5

Ill_Operation3937 t1_j6trapu wrote

Isn't there a classifier (like the one we have for ChatGPT) that can tell whether a video is real or fake?

4

peter303_ t1_j6vd8qd wrote

Answer: another technology, non-counterfeitable blockchain watermarks, will identify authentic digital evidence; inauthentic evidence will be inadmissible. People have been working on this for money and votes that can't be faked. It will extend to evidence.
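
A back-of-the-envelope sketch of the anchoring idea: record each file's hash in an append-only ledger at capture time. A plain Python list stands in for a real blockchain here; everything is illustrative.

```python
# Anchor each file's hash in an append-only ledger at capture time.
# A plain list stands in for a real blockchain; purely illustrative.
import hashlib, json, time

ledger = []

def _entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def anchor(video_bytes):
    entry = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev": _entry_hash(ledger[-1]) if ledger else None,  # chain link
    }
    ledger.append(entry)
    return entry

def verify(video_bytes, entry):
    # A single flipped bit changes the digest, so any later edit fails.
    return hashlib.sha256(video_bytes).hexdigest() == entry["sha256"]

receipt = anchor(b"...camera footage...")
print(verify(b"...camera footage...", receipt))    # True
print(verify(b"...tampered footage...", receipt))  # False
```

Note this only proves a file existed unchanged at anchor time; it says nothing about whether the capture itself was genuine, which is why it would pair with the signed sensors mentioned upthread.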

3

matthra t1_j6viqr2 wrote

No such thing as a perfect fake; it might be able to fool humans, but fakes are detectable by various means. This also isn't our first go-around with fakes as a species. Forgery, for instance, has been around forever, but we still use documents. People have used Photoshop to alter photos, but none of us actually believes Obama eats babies.

3

oriiiginal t1_j6vpv47 wrote

I think we're heading deeper into dystopia than we're already in now.

3

Blodig t1_j6wciwj wrote

A voice recording may not be reliable evidence anymore, that's for sure.

3

JRocFuhsYoBih t1_j6x7o4r wrote

These comments are bumming me tf out. I can't help but feel like, as a whole, we're screwed.

3

originmsd OP t1_j6yo4l1 wrote

Don't feel that way. I asked the question in the first place to get a discussion going. The more people talk about it, the better the odds someone someday will actually address the problem. Who knows, maybe there is something you can do to help improve justice in your area.

AI is something that's going to affect every major field. Better to start talking about it now than get caught with our pants down.

1

JRocFuhsYoBih t1_j6yq1lr wrote

I agree entirely. Just thinking about the potential harms is heavy to me. It's all across the board now too, so everyone could be impacted. I try to keep a glass-half-full outlook on everything, but maybe I've read 1984 one too many times for that at this point. Things are happening at such a rapid rate that we'll know sooner rather than later what our future with AI holds.

2

WindowsDOS t1_j6utmzu wrote

Don't worry, we'll probably have the technology to retrieve a person's memory from their brain long before AI is that good. We can see if they actually did it based on their own recollection. If you refuse, you're obviously guilty, 5th Amendment be damned. We'll have a couple of years before that same technology is used to implant fake memories in people's minds. After that, it doesn't matter if you go to jail or not, because they can just put the memory of jail in your brain. :)

2

createcrap t1_j6vh1x4 wrote

AI will be used as tool to further enslave the working class and further redistribute wealth up to the 1%.

I really only see the destruction of society as we know it.

2

mileswilliams t1_j6wjzpk wrote

I think AI will be able to tell if you are a liar, from video, voice, etc.

2

Beefygopher t1_j6wmsbr wrote

It will be like Photoshop. It'll be hard to tell what's real and what's fake for a while, but now there are programs that show what's been photoshopped and what hasn't. I'm sure a similar program will be made for deepfakes and the like.

2

MuForceShoelace t1_j6wtq8k wrote

I mean, we have been able to fake photographs convincingly for decades. How is this different?

2

GCSS-MC t1_j6xao1a wrote

You still need to authenticate the source of the video. Also, when it's paired with physical evidence, like DNA, fingerprints, etc., that is where video evidence becomes damning.

Of course, this won't stop the court of public opinion from jumping to conclusions. Let's face it, you can win in a court of law and still lose everything because the court of public opinion has found you guilty.

2

PublicFurryAccount t1_j725ml4 wrote

You can’t just present evidence to a jury.

The court has to accept it first and, if there’s a serious question about whether the evidence is simply fake, the court will not accept it.

2

Affectionate-Tie-843 t1_j6sdnth wrote

I'm pretty sure videos will be decoded to determine whether or not they are AI-generated. The file will have to look identical to a real video file; everything will be traced to its source somehow, so I don't think deepfakes will become much of an issue until they are both hyperrealistic and the file data can disguise itself as actual footage.

1

Itchy_Adhesiveness59 t1_j6uz8rd wrote

I have no idea, but I firmly believe the software should cost a small fortune and should be heavily regulated. As in, you need a license to own it, and everything you make with it needs to be documented. Maybe a feature that auto-uploads anything that's been edited to a secure database.

Honestly, I think it's too dangerous for the general public to have, and I can't think of a good reason for this technology to exist beyond entertainment. I don't think entertainment is worth the risks.

1

luke_530 t1_j6vcjvb wrote

Let's do a deepfake of a cop beating a black guy! Crickets...

1

Asleep_Barracuda4781 t1_j6yp2p2 wrote

  1. Stronger defamation laws.
  2. I think it will finally force us to pass laws that punish knowingly false accusations, or the use of faked/tampered evidence, with the same punishment the accused would have faced.

Before anyone jumps on me: I don't think this will solve everything, and it would need some fine-tuning like every other law. So maybe 80 or 90% of the punishment for the real crime, but it can't be a slap on the wrist. Also, innocent until proven guilty would still apply. It would have a chilling effect on malicious accusations, or on some of the fake-evidence companies others have mentioned. Unfortunately I don't know how to solve the reputation-damage issue. The internet spreads a false accusation worldwide and lets everyone find it for all time.

1

WesMasFTP t1_j6zq115 wrote

I feel like just as AI gets better as technology advances, our tech to spot what's real will improve as well. ChatGPT is impressive, but students have already developed tools to spot whether something was written by ChatGPT. So while the technology is going to go crazy, I think we will be okay and weather the changes. I'm sure it must've been scary when telephone calls became automatic and you didn't need an operator, but things progress and new jobs get created. Maybe we'll have full-time jobs for people who decide what's real or not?

1