Submitted by awesomedan24 t3_10pcuki in singularity
BigZaddyZ3 t1_j6k2mxr wrote
Reply to comment by awesomedan24 in The legal implications of highly accurate AI-generated pictures and videos by awesomedan24
This is why AI will have to be regulated by world governments similar to how we regulate the creation of nuclear weapons. Society as we know it might collapse if this kind of technology goes unchecked.
awesomedan24 OP t1_j6k9v5u wrote
Seems correct but tough as hell to implement. It's easy to safeguard uranium, but AI is just software that anyone can copy and send easily.
BigZaddyZ3 t1_j6kbt5t wrote
True. But what exactly do we do then? Enter a post-truth world where the law can’t be enforced because nothing can be proven anymore? Will we revert to the law of the jungle, where might makes right? We have to at least try to curb and control the creation of these programs if we want to continue living in a lawful society.
My guess is we’re headed for big-brother style 24/7 computer monitoring where every single thing you do is logged and overseen by government-owned AI. If they suspect you might be up to no good, then they move in. It’s not ideal, but it’s the only real way to stop this kind of technology from ruining our entire perception of what’s real and what’s not. It’s that or lawlessness (can’t tell you which is worse tbh).
Gotisdabest t1_j6lk5tb wrote
The only thing to do is to start an arms race: AI-generated evidence fought with fake-evidence detectors of some kind. That's the only semi-feasible solution.
BigZaddyZ3 t1_j6m05rw wrote
This is risky tho as there’s no guarantee truth would win that arms race. Also those tools might not even be possible over the long term once AI images become indistinguishable from real ones. Which would lead us back to square one anyways.
Gotisdabest t1_j6m0joa wrote
If AI images become visually indistinguishable, they will probably still hold elements that can be distinguished by an AI itself. It's definitely risky but probably more effective than bans. Banning software really isn't going to work, especially in legal matters.
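To sketch what that side of the arms race could look like (purely illustrative; the dataset layout, model choice, and hyperparameters are assumptions, not anything from this thread), a detector is basically just a binary classifier trained on real vs. generated images:

```python
# Minimal sketch of a "fake-image detector": fine-tune a small classifier to
# label images as real (class 0) or AI-generated (class 1).
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes an illustrative folder layout: data/train/real/... and data/train/fake/...
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real vs. generated

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The arms-race part is the training data: every time the generators improve, a detector like this has to be retrained on their newer outputs, with no guarantee it stays ahead.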
BigZaddyZ3 t1_j6m1utg wrote
What happens when someone takes a simple screenshot of the image and spreads that around instead? What happens when there are millions of copies and screenshots of the image spreading across social media like wildfire? To the point that it becomes impossible to even find the “original” version of the image? It’s foolish to bet on some sort of metadata or other remnants to always be there to save the day. There’s no guarantee that’ll even be possible. Do you want to take the risk of finding out if your theory is actually right or wrong? The results could be catastrophic.
Regulation will have to occur at some level, and we already have historic precedent for it. Why do you think the national gun registry exists? Governments have always known that certain technologies have the ability to completely destroy society if left uncontrolled. And they act as a preventative mechanism in every single one of these cases. Why wouldn’t they do the same for AI? It’s not even in their personal best interests to let this tech run amok. It’s not in any of our best interests tbh. It’s most likely going to happen, and you can already see the seeds for government policy and collaboration being planted as we speak.
Gotisdabest t1_j6m2gr9 wrote
The inherent problem with this argument is that you're basing much of your opposition on the fact that there's no guarantee this idea works while talking about an idea that has shown no guarantees either.
> Governments have always known that certain technologies have the ability to completely destroy society if left uncontrolled
And here's the fun part: unlike with hardware like guns, you can't really just regulate what and how software is being used. They can't even kill piracy, for crying out loud. If guns could be anonymously downloaded, used in the comfort of your home, and still somehow magically accomplish their purposes anywhere in the world, I can tell you for sure that a gun registry would be a pointless practice.
I am strongly pro-regulation where it is practical. Here it's simply a waste of everyone's time. If we could stop or even just massively curtail AI misinformation with regulations, that'd be great. But we realistically can't unless you have some really innovative and detailed proposals as to how to go about it.
You can either try shutting generative AI down completely, which is impossible, or you can control it using its own abilities. Is it perfect? No. It will more or less lead to a post-truth society where the internet is just a mess of well-made, high-quality lies. But what it may at least do is keep some small oases and the legal system safer.
BigZaddyZ3 t1_j6m7omm wrote
>>If guns could be anonymously downloaded…
Which is why they will do away with the anonymous aspect of the internet, i.e. “Big-Brother” like I said. I already accounted for this.
>>If we could stop or even just massively curtail AI misinformation with regulations, that’d be great
We likely can tho. It starts with criminalizing it severely (to the point that it isn’t even worth the risk for most people). That alone will reduce the number of bad actors down to only the boldest, most lawless citizens. That’s part of what I meant by government regulation. Of course you’d have to pair this with some systems that keep tabs on when and where an image was originally created (perhaps cryptography and the blockchain might finally actually have some real use). Either way, the government will most likely reduce the technological privacy of its citizens in order to enforce this. That’s where the big-brother stuff comes in.
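As a very rough sketch of that provenance idea (illustrative only; the library, key handling, and file name here are assumptions, and the blockchain part is left out entirely), you’d sign a hash of the file when it’s created and check copies against that signature later:

```python
# Illustrative sketch: sign an image's hash at creation time so a later copy
# can be checked against the original. Key management, timestamps, and any
# blockchain anchoring are deliberately omitted.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def file_digest(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def sign_image(path: str, key: Ed25519PrivateKey) -> bytes:
    return key.sign(file_digest(path))

def verify_image(path: str, signature: bytes, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
sig = sign_image("photo.jpg", key)                       # "photo.jpg" is hypothetical
print(verify_image("photo.jpg", sig, key.public_key()))  # True only for the untouched file
```

The obvious catch is the copies problem raised earlier in the thread: this only vouches for that exact file, so a screenshot or re-encoded copy carries no signature at all.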
>> But realistically we can’t unless we have some really innovative and detailed proposals as to how to go about it.
Or they could just say “fuck all that” and decide to mandate by law that computers come pre-installed with AI that monitors your every move while on the computer. I’m not saying it’s a foregone conclusion, but it’s the most likely scenario long term in my opinion. And it’s the only scenario I’ve ever come across that could at least work conceptually.
Gotisdabest t1_j6m85tw wrote
> they could just say “fuck all that” and decide to mandate by law that computers come pre-installed with AI that monitors your every move while on the computer.
This is about as practical as shooting people on the street for protesting. It'd be considered a heinous attack on privacy and the government wouldn't last very long without some degree of martial law. The internet is quite directly many people's lives and livelihoods.
>It starts with criminalizing it severely.
No it really doesn't. See, one of the biggest reasons drop-in-the-ocean crimes aren't punished severely is their ease. If you make harsh punishments for easy-to-commit crimes, you'll be putting half the country in jail by Sunday.
Not to mention, even Big Brother can't actually fix this without running into all the problems you highlighted in my argument, unless you plan on recruiting half the population to spy on the other half.
BigZaddyZ3 t1_j6m9ibo wrote
It’d only be an attack on privacy if the government has to do it unilaterally (or by violent force). For all we know, society could welcome such a change if it protects us from an “information-apocalypse” (and the lawless anarchy that would come with it). But you’re also assuming we’ll still be in a democracy in this post-truth world. There’s no guarantee of that, my friend.
And we don’t need half the population to spy on the other half, just ultra-sophisticated, omnipresent AI that can monitor thousands of computers at a time…
And I get the drop-in-the-ocean thing, but it only takes making an example out of a few high profile offenders in order for the average joe to feel like it isn’t worth the risk.
But yeah, I agree that there’s no simple fix. So what are you suggesting? The arms race thing isn’t compelling to me because there’s no guarantee that there’ll always be a way to differentiate a “fake” image from a “real” one (same goes for audio and video). So what happens? Do we just accept that there’ll no longer be a “truth”? (Which would mean it’d be impossible to enforce “justice” or prove that any crime did or didn’t happen.) A descent into chaos before we wipe ourselves out? What’s the plan, if the arms-race idea fails? Cause other than the big-brother scenario, I’m not seeing anything that will save us from the coming paradigm shift tbh.
Gotisdabest t1_j6makdt wrote
>But you’re also assuming we’ll still be in a democracy in this post-truth world. There’s no guarantee of that, my friend.
There's not much logical reasoning to think that the entire world will suddenly revert to monarchy in such a scenario. You seem to think that people will still actually believe the internet. It'll only be post-truth in the context of the post-'80s television-and-camera world. Every other source of information could already have been replicated anyway and has obvious checks on it.
>And we don’t need half the population to spy on the other half, just ultra-sophisticated, omnipresent AI that can monitor thousands of computers at a time…
So you're saying we can have an ultra-sophisticated, omnipresent AI that simply cannot be outwitted by any other AI on the market and has unimaginable powers of detection?
>It’d only be an attack on privacy if the government has to do it unilaterally. For all we know, society could welcome such a change if it protects us from an “information-apocalypse”
Suddenly you go from it being the most logical conclusion to throwing out random, very theoretical what-if scenarios.
>And I get the drop-in-the-ocean thing, but it only takes making an example out of a few high profile offenders in order for the average joe to feel like it isn’t worth the risk.
Nope. Harsh punishments for drop-in-the-ocean crimes have rarely ever worked in history. The idea that just hanging 10 people for video game piracy or something will stop video game piracy is ludicrous. All it'll do is create a lot more unrest as people are unduly punished for minor crimes. Your weird idea of conformism is not going to occur unless they also get the AI to be so persuasive that everyone believes everything it says or shows, in which case the law is irrelevant anyway.
>The arms race thing isn’t compelling to me because there’s no guarantee that there’ll always be a way to differentiate a “fake” image from a “real” one (same goes for audio and video).
If there isn't, we won't suddenly devolve into mass murderous rage and anarchy. It'll be the death of the internet as an open forum. Small communities of families and friends will still exist, but large-scale messaging will become irrelevant. Most people won't start believing anything they see; they'll just stop believing in things they could once trust. The weight will go back onto certain channels of information, and the pressure on them to report factually will increase dramatically, with harsh pushback for misinformation once again becoming the norm. We won't suddenly become cavemen again; we'll go back to a pre-social-media era of news where trust in the source becomes more important and more harshly policed.
This battle isn't for the survival of the human race, just some large sections of the internet.
This will also be helped along by the fact that bots will become more competent and impossible to distinguish from real people. So you will not be getting real info or talking to real people.
Shockingly, society did still exist even when there was no internet or even television to get news from. I'd even argue there was a far healthier political climate in most places. It'll lead to some radicalism at the start, no doubt, and basically we won't see leaked videos or audio or pics as legitimate anymore. But eventually, the chain of information will become almost sacred. Who knows, it may even be good for society in the long term.
BigZaddyZ3 t1_j6mc8ba wrote
I wasn’t implying monarchy, my friend, but instead totalitarianism… After all, if the government has the most cutting-edge AI, and we’re suddenly in a post-truth world, could the government itself not simply control who “wins” the “elections” at that point?
And it will be a fight for survival; you’re extremely naive if you think the collapse of truth won’t have serious real-world implications. It’s not an “internet thing,” it’s a “society as we know it” thing.
You’re also not making sense with the “we’ll go back to trusting small communities” thing. Could these tools (if not regulated like I suggested) not be used to wreak havoc at the local level? If anyone can use AI to spread misinformation, why would you be able to trust those in your community? And then you mention the whole “there’ll be harsh pushback for reporting misinformation” thing. How? People will have no way of knowing which information is fake or not in many cases. You’re underestimating how much of what we take as true is just what we’re told by news outlets. What happens when we can no longer trust what we see, hear, or read in the news?
And are you seriously gonna use the fact that there was society pre-internet to justify a post-truth nightmare scenario? Bro, in the ’80s it wasn’t possible to generate photorealistic images or videos of people doing things they never actually did (complete with realistic voice cloning as well). It’s not comparable. And appealing to the past is ridiculous here. These powerful AIs didn’t exist in the past; now they do. Things change, buddy. The past is irrelevant here.
But you know what, let’s just agree to disagree at this point. Either way I got a feeling that we’ll be seeing how things play out sooner than most people expect.
Gotisdabest t1_j6md2cm wrote
Your entire post seems insistent on the idea that people will magically just believe everything they see, despite obvious proof of the existence of easy tools to make up lies. Gullible people will exist, no doubt, but most will just discount such sources entirely. And I don't think tech-illiterate 80-year-olds will be destroying society anytime soon.
Your view is so detached from practicality it's disturbing. Your "gotcha" with regard to "times change" makes no sense once you stop being arrogant and see the simple fact that too many lies doesn't mean everyone will believe them; it'll just mean the truth will also be difficult to obtain from these particular sources.
> People will have no way of knowing which information is fake or not in many cases.
How did they know back in the '80s, or at any time in human history before the existence of the internet? How do they know right now? There will still be reliable and trustworthy sources. People can write lies on the internet today, and yes, many believe them. People have always been able to tell lies, and yes, many believed them. Neither led to collapse. After this happens there'll just be no social media of this kind left, since the entire point behind it is gone. People will simply have to revert to a world where videos aren't trustworthy anymore.
BigZaddyZ3 t1_j6mebz3 wrote
>>Your entire post seems insistent on the idea that people will magically just believe everything they see, despite obvious proof of the existence of easy tools to make up lies. Gullible people will exist, no doubt, but most will just discount such sources entirely.
Wake up bruh… AI misinformation hasn’t even kicked in yet and half the country fell for the lie that Joe Biden didn’t actually win the election. Half the country believes that the COVID vaccine causes autism and heart disease. A significant number of people believe the pandemic was a “plan-demic” or whatever. Humans are incredibly susceptible to misinformation, and we haven’t even reached the era of extremely convincing misinformation yet.
>>How did they know back in the 80s or in any time in human history before the existence of the internet? How do they know right now? There will still be reliable and trustworthy sources.
Gee… Maybe it’s because they could actually rely on photographic, video, and audio evidence back in that era? Something we’re about to lose the ability to do. And like I said, many people back then didn’t know what the ultimate truth was; they just believed whatever news outlets told them. A lot of our perception of what exactly is going on around the world is totally dependent on what we are told by the media. Take this away, and many people become blind to any event that didn’t happen directly in front of them. Trust me, that’s not a world you wanna live in. But like I said, we can just agree to disagree. No point arguing about this all day/night.
Gotisdabest t1_j6mf2xe wrote
>half the country fell for the lie
The fact that you say "half the country" is already rather telling. People believe bullshit lies they want to believe. That will always be true, with or without video. There is no extremely convincing misinformation. Once it becomes blatantly obvious that anything could be false, as opposed to the current climate where there's a debate to be had about the percentage of falsehood, the medium will simply become irrelevant. The weight behind it is that it comes from a source those people trust, not that it's organically based on some kind of proof.
>Gee… Maybe it’s because they could actually rely on photographic, video, and audio evidence back in that era?
Yeah, the photographic, video, and audio evidence that was everywhere throughout all of human history.
And people believed what trustworthy media told them, yeah. That had its negatives but was all in all a functioning system that typically showed fewer cracks than the information system of today. These outlets will still exist and, as I said before, will likely get a lot more pushback for lies than in the current system, as a general standard of truth would arise from the common media standard. Opinion-based media would be crushed under the weight of fact-based media, and an outlet reporting contradictory facts would quickly be singled out and discarded. Of course crazy conspiracy theories would still exist. But it'd probably put a hard stop on dumbass conspiracy theories that are based on communities like QAnon, since social media would by and large not be a thing.
BigZaddyZ3 t1_j6mfsfp wrote
No bruh… they believed in photographic evidence in those eras because there was no convincing way to manipulate those on a large scale (especially the audio and video). That’s about to change soon.
Once we hit a point where video and audio can easily be faked, neither one will ever be believable again. You could just use the “it’s a deepfake” argument for everything. That wasn’t possible in those previous eras. So stop comparing the future to those eras. We are about to enter a completely new era in history. We aren’t simply going back to the fucking ’90s, bruh. Lmao.
Gotisdabest t1_j6mgqy1 wrote
>they believed in photographic evidence in those eras because there was no convincing way to manipulate those on a large scale.
Again, are you trying to claim they believed in photographic evidence throughout all of human history?
>neither one will ever be believable again.
Not really. It just won't be believable from anonymous sources. When, say, the NYT posts an article that Ukraine has blown up the Kerch Bridge, it's much harder to just claim the photos are from some other incident than if some dude on Reddit posts it. People can already claim anything they don't like is fake news, no matter the evidence behind it. Misinfo's strength is that most of the info received in general from the same medium has to be credible and corroborated, or otherwise trustworthy. If nothing is trustworthy on the medium, the medium dies.
In non-legal matters, people have always put more stock in individual trust and words than in actual hard proof. Your own example on the election proved this: the people believing in the "hoax" idea lost dozens of court cases because they had no proof. They believe in it because someone they trust for whatever reason (despite him being a proven liar) said so.
Audio and video proof, which is rare anyway in most disputable cases, will become mostly contingent on the source. Like it mostly is today.
BigZaddyZ3 t1_j6mgyoz wrote
Lol we’ll just have to wait and see how it plays out I guess. Time will tell. There’s no point in continuing this any further in my opinion. Agree to disagree for now.
Gotisdabest t1_j6mh1zr wrote
That is true. I do agree that it isn't particularly long till the inciting incident occurs.