zimonitrome t1_j0y7ztu wrote

In theory, yes, the concerns are valid. So far we have always stayed a few steps ahead: the content we consume is more realistic than what can be generated. In the coming years we might start to consume 3D video or use tools that predict whether media is generated or not.

But what if generated media catches up? It could lead to us valuing real-life experiences more when determining what is true. Then again, humans also seemingly like to consume content that is "false".

Generally humans are very good at adapting to new paradigms, so the worst-case scenario might just be transition periods with a lot of confusion. Media footage is used in court cases, but almost always combined with witness testimony. It's difficult to know how reliant we actually are on its authenticity. We were already being deceived by cut-up body cam footage and photoshopped images before DALL-E was made public.


zimonitrome t1_iwc14i5 wrote

Wow thanks for the explanation, it does make sense.

I had a preconception that all optimizers dealing with linear penalty functions (like the L1 norm) still only produce values close to 0.

I can see someone disregarding tiny values when exploiting said sparsity (pruning, quantization), but I didn't think they would be exactly 0.
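For anyone else curious: the exact zeros come from the proximal (soft-thresholding) step used by L1-aware optimizers such as ISTA or proximal gradient descent, not from plain gradient steps on the penalty. A minimal sketch (the function name and the toy weight vector are just illustrative):

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of lam * ||w||_1: shrinks each coordinate
    # toward 0 and maps any coordinate with |w_i| <= lam to exactly 0.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.001])
print(soft_threshold(w, 0.1))  # small entries become exactly 0.0
```

Because the operator subtracts `lam` from the magnitude and clips at 0, coordinates inside the threshold land on 0.0 exactly (in floating point, not just approximately), which is why L1 solutions are genuinely sparse.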