Sinity

Sinity t1_j6ehzcx wrote

Reply to comment by CypherLH in Google not releasing MusicLM by Sieventer

> this bullshit copyright anti-AI narrative has taken hold among so many people. Sorry but looking/listening to a bunch of stuff to learn what that stuff looks like and then using that learning to produce new, entirely original, works is not a f-ing copyright violation.

I think it mostly didn't actually take hold.

They just have a problem with automation itself. All this talk about copyright is simply the best argument they can come up with for halting technological progress.

1

Sinity t1_izcwah8 wrote

Yes. Though it's not actually simulating the machine - it's just superficially good at pretending.

cat "asdf" > file.txt

Works well. file.txt is visible if you ls. Then you maybe do cat "qwerasdf" > somedata.dat... and on ls it, again, shows up. But maybe ChatGPT forgot about file.txt and it doesn't show up anymore.

TBF, humans wouldn't necessarily even outperform it at "pretending to run a computer" (not just an algorithm, but an actual OS and such).

I think scale would make it way better at keeping track of things.
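
As a toy sketch (Python; obviously not how ChatGPT works internally, just a hypothetical, minimal command handler), this is roughly the explicit state a real simulator keeps and the model only approximates from its memory of the transcript:

    # Minimal in-memory "filesystem": the persistent state an actual
    # simulator has, and ChatGPT only pretends to have.
    files = {}  # filename -> contents

    def run(command: str) -> str:
        # Handles only the two commands from the example above.
        if command == "ls":
            return "  ".join(sorted(files))
        if command.startswith("echo ") and " > " in command:
            text, _, filename = command[len("echo "):].partition(" > ")
            files[filename.strip()] = text.strip().strip('"')
            return ""
        return command + ": not supported in this toy"

    run('echo "asdf" > file.txt')
    run('echo "qwerasdf" > somedata.dat')
    print(run("ls"))  # file.txt  somedata.dat - nothing silently vanishes

The point is just that every later ls consults the same dict; the model has no such dict, only whatever it recalls from earlier in the conversation.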

11

Sinity t1_ivu2pa0 wrote

Hopefully people will ignore these licences. In the end, they're just text.

Some anon can (physically) download these models from HuggingFace and reupload them elsewhere, stripped of the licence. Possibly even under a different name. It's possibly worth ensuring the file's hash is different from the original's.
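
On the hash part: any single-byte change is enough. A quick Python sketch (model.bin is just a placeholder filename) showing that appending one byte yields a completely different SHA-256:

    import hashlib

    def sha256_of(path: str) -> str:
        # Hex SHA-256 of a file, read in 1 MiB chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of("model.bin"))       # original hash
    with open("model.bin", "ab") as f:  # append one byte
        f.write(b"\0")
    print(sha256_of("model.bin"))       # a completely different hash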

Then any user can download it, and they can't possibly know that the OpenRAIL-M licence applies to this model.

Law is not morality.

Also, it might be worth making life harder for the people responsible, not necessarily through illegal means. Mobs on Twitter do it all the time, after all, and it's apparently fine.

0