Leptino

Leptino t1_j9s73ml wrote

One would have to consider the ultimate consequences (including paradoxical ones) of those things too. Would it really be catastrophic if social media became unusable for the average user? The 1990s are usually considered the last halcyon era... Maybe that's a feature, not a bug!

As far as drone swarms go, those are definitely terrifying, but then there will be drone-swarm countermeasures. Also, is it really much more terrifying than Russia throwing wave after wave of humans at machine-gun nests?

I view a lot of the ethics concerns as a bunch of people projecting their fears onto a complicated world, and then drastically over-extrapolating. The same thing happened with the industrial age, electricity, the nuclear age, and so on.

8

Leptino t1_j4zxkyn wrote

The only people who have a prayer of doing this are OpenAI themselves. It is likely they can insert an imperceptible watermark into sufficiently generic text output, over sufficiently many words, without appreciably distorting the meaning or quality.

However, there is almost no way such a watermark can survive subsequent rewording attacks, e.g. 'rewrite the previous paragraph with three new random words that don't change the meaning', or 'change all the nouns/verbs into synonyms that preserve the meaning of the paragraph'.
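To make the attack concrete, here is a toy sketch of the kind of statistical watermark being discussed (in the spirit of published "green list" schemes, not OpenAI's actual method — every name and parameter below is illustrative). The detector counts how often each word lands on a pseudo-random "green list" keyed by its predecessor; a watermarking generator would bias toward green words, while plain or synonym-substituted text hits green only about half the time.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign roughly half the vocabulary to a
    'green list' keyed on the previous word. A watermarking
    generator would prefer green words when sampling."""
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words that are green given their predecessor.
    Watermarked text scores well above 0.5; swapping a word for a
    synonym re-rolls that word's coin flip, so aggressive synonym
    substitution pulls the score back toward ~0.5."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

The point of the sketch: because the green/red assignment depends on the exact (previous word, word) pair, the comment's proposed attack — replacing nouns and verbs with synonyms — destroys the signal word by word without touching the meaning.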

I strongly suspect (and might one day try my hand at the math) that no watermarking system can work in general against this sort of attack.

2

Leptino t1_j4oxrdp wrote

It shouldn't be too difficult to produce a watermark, provided the output is something on the order of a paragraph. However, I don't think it's always possible: for instance, I could ask ChatGPT to rewrite the previous paragraph, replacing all the nouns and verbs while keeping the same meaning.

Further tweaking by a human should completely destroy any residual signal.

1