Comments

zimonitrome t1_j0y7ztu wrote

In theory, yes, the concerns are valid. So far, the content we consume has always been a few steps ahead in realism of anything that can be generated. In the coming years we might start to consume 3D video, or use tools that predict whether a piece of media is generated or not.
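
To make the "detection tool" idea concrete, here's a minimal sketch (not any real product, just the basic shape of the approach): fine-tune an off-the-shelf image classifier on labeled real vs. generated images. The `data/real` and `data/fake` folder layout is a hypothetical example; real detectors are far more elaborate.

```python
# Minimal real-vs-generated image detector sketch (PyTorch/torchvision).
# Assumes a folder "data" with two subfolders, e.g. data/real and data/fake,
# which ImageFolder turns into the two class labels.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone, swap the head for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The catch, of course, is that any detector trained this way only learns the artifacts of today's generators, which is exactly the arms-race problem discussed further down the thread.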

But what if generated media catches up? It could lead to us valuing real-life experience more when determining what is true. Then again, humans also seem to like consuming content that is "false".

Generally, humans are very good at adapting to new paradigms, so the worst-case scenario might be transition periods with a lot of confusion. Media footage is used in court cases, but almost always combined with witness testimony, and it's difficult to know how reliant we actually are on its authenticity. We were already being deceived by cut-up body-cam footage and photoshopped images before DALL-E was made public.

2

Laafheid t1_j0yaqrx wrote

It might be way out there, but the whole situation makes me think of a piece by the philosopher Slavoj Zizek. The piece is about a link between belief in the existence of God, or some other transcendent Big Idea, and limitations.

If one replaces the Big Idea with the notion that "reality is and should be reality", then it stands to reason that either there will be more explicit censorship, and/or that more importance will be placed on reputation/status/interpersonal relationships and, beside these two, on other forms of verifiability.

On the flip side, if one looks at what the majority of people actually create with it, you get things that are mostly (")productive(").

In a sense it is similar to the invention of the printing press: with it, people could publish almost anything relatively quickly, but in the end the reputation of good publishers is what covers the majority of the market, and although there are still people who self-publish, how far their reach extends depends on credentials (in the broad sense of reputation, status, relationships, and accomplishments).

What I am more worried about than fake X is the opposite consequence: because it becomes easier to create fake X, it also becomes more "reasonable" to claim that a real X is actually fake. Since deepfake detection is practically an arms race with no end in sight, a solution other than AI is needed here, and the consequences depend a lot on what that solution looks like.

0

ChuckSeven t1_j0yds4b wrote

We already have realistic-looking generated media. Have you not seen the latest Hollywood movies? Almost all movies use CGI nowadays, and most people don't notice it at all.

1

Cherubin0 t1_j0yfs3a wrote

You don't even need high tech to make scam news. A German public channel once aired a fake documentary that only revealed itself as an April Fools' joke after about half an hour, to show how easy it is to make something look perfectly real. I think the long-term solution is that people get used to the fact that everything can be faked, and that you need trusted channels leading directly to the source.

2

MAVAAMUSICMACHINE t1_j0yxzh4 wrote

This may be true, but the CGI you see in Hollywood can run to hundreds of thousands of dollars. The threat with generative models is just how accessible they make it for anyone to produce fake news, regardless of budget.
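
To give a sense of how low the barrier already is, here's a rough sketch using the open-source `diffusers` library; the model id is just an illustrative example of a publicly released Stable Diffusion checkpoint, and it assumes a consumer GPU:

```python
# A few lines and a consumer GPU are enough to synthesize a
# photorealistic image from a text prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("press photo of a politician at a rally").images[0]
image.save("generated_photo.png")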

1

arg_max t1_j0z1p30 wrote

When? Probably now if someone decides to put enough money into it.
All the big text-to-image models like DALL-E, Imagen, and Stable Diffusion are not very novel in terms of methodology. They rely heavily on existing ideas and combine them with more compute, bigger datasets, and some tweaks.

Videos are not much more than 3D images with certain temporal constraints. There are already small-scale diffusion models for video, and I'm not saying it's trivial to get longer videos (recurrent generation is often a bit tricky), but I don't see why it would be impossible. It will probably take a few years before consumer hardware can run video generation, though; after all, we can only just manage images at the moment.
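
A toy sketch of that "video = images + a time axis" point: the same denoising setup used in image diffusion applies to a video tensor once you add a frame dimension. The tiny `Conv3d` stack below is a stand-in for a real spatio-temporal U-Net, and the noise schedule is simplified to a single level.

```python
# Video as a 5-D tensor: (batch, channels, frames, height, width).
import torch
import torch.nn as nn

video = torch.randn(1, 3, 16, 64, 64)   # 16 RGB frames of 64x64
t = torch.tensor(0.5)                    # single noise level in [0, 1]
noise = torch.randn_like(video)
noisy = (1 - t).sqrt() * video + t.sqrt() * noise  # toy forward process

# Conv3d kernels mix space AND time, which is where the
# "temporal constraints" come in.
denoiser = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=1),
    nn.SiLU(),
    nn.Conv3d(32, 3, kernel_size=3, padding=1),
)
predicted_noise = denoiser(noisy)        # trained to predict `noise`
print(predicted_noise.shape)             # torch.Size([1, 3, 16, 64, 64])
```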

1

Laafheid t1_j0zi3ca wrote

I wouldn't be as worried if I had solutions for this, that's kind of the issue.

It seems difficult to me because, regardless of whether a thing is fake or real, truth-finding has traditionally depended on records being available, and precisely those records are becoming easier to manufacture. So even if alibi material is presented, it's not clear whether that is real either. The only solution that would sort of solve this seems to be surveillance by default, yet that brings with it a slew of other problems.

Do you have ideas?

2

ChuckSeven t1_j0zvd2j wrote

You are saying that you and your keyboard are not capable of producing fake news?

I do get your point, but I don't think anything will be different. It might require a little less effort, but for less money than it costs to buy a gun, you can already do these things. Vilifying the tool is not the right approach. It never has been.

1

MAVAAMUSICMACHINE t1_j116csw wrote

You are right that we can already create fake news, and I do think that for the most part generative models are going to lead to some really interesting content being created. That being said, I believe they will make creating fake news significantly more accessible than it currently is, which is why we should exercise caution with the development of these tools.

1

MAVAAMUSICMACHINE t1_j117fbi wrote

You raise some really good points. I really don't know too much about this area, but one idea is to use some sort of cryptographic or blockchain technology at the point of capture to verify the integrity of images/videos.
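
A minimal sketch of that "sign at capture" idea, under some big assumptions: the capture device holds a private key (in secure hardware, in practice) whose public half is registered somewhere tamper-evident, such as a public ledger or a manufacturer registry. The ledger part is out of scope here; this only shows the hash-and-sign step.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # would live in secure hardware

def sign_capture(image_bytes: bytes) -> bytes:
    """Hash the raw capture and sign the digest with the device key."""
    digest = hashlib.sha256(image_bytes).digest()
    return device_key.sign(digest)

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the device's public key can check integrity."""
    public_key = device_key.public_key()
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

sig = sign_capture(b"raw sensor data")
print(verify_capture(b"raw sensor data", sig))   # True: untouched capture
print(verify_capture(b"edited pixels!!", sig))   # False: any edit breaks it
```

Note this only proves the bytes haven't changed since capture; it says nothing about whether the scene in front of the camera was staged, which is a separate problem.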

1