
mirrorcoloured t1_j6e1ckl wrote

Yes, I wasn't clear on the comparison. I meant it more as an analogy: it's possible to hide information in images without noticeable impact on human perception. In this space, I only have my anecdotal experience that I can use textual inversion embeddings that take up 10-20 tokens with no reduction in quality that I can notice. I'm not sure how much a quality 'watermark' would require, but based on this experience, and the fact that models are getting more capable over time, it seems reasonable to me that we could spare some 'ability' and not notice.
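For anyone unfamiliar with the analogy: the classic example of hiding information in images is least-significant-bit (LSB) steganography. The sketch below is a toy illustration of that idea (not how model-output watermarking actually works): overwriting only the lowest bit of each pixel changes any channel value by at most 1/255, which humans can't perceive.

```python
def embed(pixels, message):
    """Hide message bytes in the LSBs of a flat list of 0-255 pixel values."""
    # Expand each byte into 8 bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Recover `length` hidden bytes from the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Round-trip demo on fake grayscale data: the message survives, and no
# pixel moves by more than 1 out of 255.
pixels = list(range(256))
stego = embed(pixels, b"hi")
recovered = extract(stego, 2)
```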

I also agree with the philosophy of 'do one thing and do it well', where limitations are avoided and modularity is embraced. Protecting people from themselves is unfortunately necessary, as our flaws are well understood and fairly reliable at scale, even though we can all be rational at times. As a society, I think we're better off if our pill bottles have child-safe caps, our guns have safeties, and our products have warning labels. Even if these things marginally reduce my ability to use them (or increase their cost), it feels selfish for me to argue against them when I understand the benefits they bring to others (and to myself, when I'm less hubristic). To say, for example, that 'child-safe caps should be bought separately, optionally, only by those with children and pets' ignores the reality that not everyone would do that: friends and family visit, people forget things in places they don't belong, and so on. The magnitude of the negative impacts would be far larger than the positive, and they would often be experienced by different people.

1

dineNshine t1_j6gikpr wrote

Children and pets are not the same as adults. Guns are also different from language models and image generators. A gun is a weapon, but a language model isn't.

Adding certain protections might be necessary for objects that can otherwise cause bodily harm to the user (e.g. gun safeties), but if you think that people must be prevented from accessing information because they are too stupid to properly evaluate it, then you might as well abolish democracy.

I am not doubting that people can evaluate information incorrectly. The issue is that nobody can do it in an unbiased way. The people doing the censorship don't know all that much better, and they often don't have the right intentions, as has been demonstrated repeatedly.

It has been shown that ChatGPT has strong political biases as a result of the tampering applied to make it "safe". I find this concerning.

1