Comments

Royal-Recognition493 OP t1_j6e5fra wrote

The main reason Google is hesitant and has no immediate plans to release MusicLM for public use is copyright issues.

In an internal experiment, they discovered that around 1% of the music generated was a direct replica of a piece of music it was trained on.

73

bogglingsnog t1_j6e7p7n wrote

Haha, just like a real musician :P

100

CYOA_With_Hitler t1_j6g4ews wrote

Yep, pretty much. If you've ever listened to Kevin Parker's (Tame Impala's) music, quite a few of his songs are almost entirely stolen.

A good example is Elephant from the album Lonerism.

He just stole most of the song Queens Will Play by Black Mountain.

8

chcampb t1_j6gxspc wrote

Well, luckily, we've also gotten really good at automatically fingerprinting music, so why not just two-stage it?

If a generated piece falls below a certain uniqueness threshold, i.e. it's too close to something in the training set, bin it and train away from it.
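
A minimal sketch of that two-stage idea, purely for illustration (nothing Google has described): the fingerprint below is a toy stand-in for a real system like Chromaprint or Shazam-style landmark hashing, and the function names and the 0.9 threshold are made up.

```python
import numpy as np

def toy_fingerprint(audio: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Very crude spectral fingerprint: binarized band-energy changes.

    Real systems (Chromaprint, landmark hashing) are far more robust;
    this exists only to make the two-stage pipeline concrete.
    """
    frame = 1024
    frames = np.stack([audio[i:i + frame]
                       for i in range(0, len(audio) - frame, frame)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    # Collapse to coarse bands, keep only whether energy rose or fell.
    bands = spec[:, :n_bins * 8].reshape(len(frames), n_bins, 8).mean(axis=2)
    return (np.diff(np.log1p(bands), axis=0) > 0).astype(np.uint8)

def similarity(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Fraction of matching fingerprint bits over the overlapping region."""
    n = min(len(fp_a), len(fp_b))
    return float((fp_a[:n] == fp_b[:n]).mean())

def keep_or_bin(generated_audio, training_fingerprints, threshold=0.9):
    """Stage two: reject generated clips too close to any training clip."""
    fp = toy_fingerprint(generated_audio)
    for ref in training_fingerprints:
        if similarity(fp, ref) >= threshold:
            return False  # bin it: likely a near-copy of training material
    return True
```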

12

msndrstdmstrmnd t1_j6gvsrk wrote

Why can't they just add a filter in post-processing? Is there more to it??

4

jonathanrdt t1_j6ji87b wrote

I'm surprised it's thought to be so good. I listened to most of the examples in the published research, and I found most of them unlistenable as anything but background music. The moment I focused on it, I wanted to change it.

1

frontiermanprotozoa t1_j6et75k wrote

Hmmm, I wonder what the only difference between AI drawings and AI music is.

10

Veleric t1_j6ewuvd wrote

For one thing, it's much easier to identify direct comparisons with musical components than with visual art, especially given the diffusion models art generators use, which literally rebuild the image from scratch. Leaving the sourcing of the dataset out of the debate, you will see similar styles, brushstrokes, line art, etc., but proving a direct copy from that will be very difficult in a legal argument. With music, however, we can much more easily detect a famous riff or melody that is incredibly distinctive to a particular song, even with slight variations in tempo, instrument, and so on, so there is a much more tangible, quantifiable way to identify a direct copy of a song. I don't know exactly how MusicLM or most of the other music generators work at this point, and they may still be able to argue that the output has been transformed in a meaningful way and doesn't directly use or replicate anything, but I do think it will be a tougher battle legally.
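
To make the "same riff even at a different pitch or tempo" point concrete, here is a toy sketch of transposition-invariant melody matching on symbolic notes. It is not how MusicLM or real audio-matching systems work (those operate on raw audio, not note lists), and the example pitches are just an illustration.

```python
from typing import List, Sequence

def interval_signature(midi_pitches: Sequence[int]) -> List[int]:
    """Pitch intervals between consecutive notes.

    Comparing intervals instead of absolute pitches makes the match
    transposition-invariant; ignoring note lengths makes it (crudely)
    tempo-invariant.
    """
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def contains_riff(song: Sequence[int], riff: Sequence[int]) -> bool:
    """True if the riff's interval pattern appears anywhere in the song."""
    s, r = interval_signature(song), interval_signature(riff)
    return any(s[i:i + len(r)] == r for i in range(len(s) - len(r) + 1))

# The same six-note phrase starting on two different pitches:
# identical interval pattern, so it matches despite the "key change".
phrase_from_c = [60, 60, 62, 60, 65, 64]
phrase_from_g = [67, 67, 69, 67, 72, 71]
print(contains_riff(phrase_from_g, phrase_from_c))  # True
```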

In any case, I think it's an inevitable reality that needs to be worked through rather than hidden from.

19

frontiermanprotozoa t1_j6f11e0 wrote

Drawn-art AI gets plenty of hits too, and the alleged fair use and other defenses apply to music as well. It's purely a power difference. The music industry has been much more litigious and strict about protecting its copyrights, while the drawn-art industry (if you can call it that) was permissive and benevolent to a fault. They're protected by the same laws, and these corporations know there is zero reason an argument that applies to one can't be applied to the other. They have so far been successful at muddying the waters with cringy catchphrases like "it learns like humans" (it's not a human, an important distinction in copyright law and for anyone with common sense) and "copyright is arcane anyway and should be done away with" (for small artists, though, not for us). They're burying their music AIs because they don't want to break the illusion they've created in the public's and lawmakers' eyes.

6

ATR2400 t1_j6g2bse wrote

A lot of the people who have issues with AI art right now are small individual creators on the internet who do some commissions. In comparison, like you said, the music industry consists of large multi-million- and multi-billion-dollar corporations. Even a large coalition of those artists might not be able to do much damage. Billion-dollar companies with expensive lawyers can.

6

vgf89 t1_j6g98h3 wrote

There's also just way way more aesthetic visual media than there is good music.

And the limitations music tends to hold itself to make it incredibly hard to create anything original as a whole. In art, you've got a whole canvas you can fill with whatever you want, and quantifiably more ways to express things. When you've only got so many notes you can play on an instrument, you'll randomly come across melodies that are extremely similar to things others have already made, until you go back and think "hey, wait, that sounds like something... oh, it's basically The Office theme."

Every possible 8- or 12-note melody (on a standard scale, anyway) fits in 601 GB. And despite how easy it is to land on someone else's melody, because only a small subset of melodies actually sounds good, some artists have been extremely litigious when it comes to "copying" melodies. Supposedly it's a pretty big problem in the jazz scene too. Plus, sampling pieces of other songs tends to be defensible only if you're making a parody (à la lots of Lemon Demon's discography) or reach a bar for transformativeness that's much easier to clear in visual mediums. Not to mention that instrument/SFX samples tend to come from big packs that are separately licensed, so if you sample them from another song you may be trampling the copyrights of the instrument pack.
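
Rough arithmetic behind that "fits in 601 GB" figure, assuming 12-note melodies drawn from an 8-note scale and a tight bit packing; the project behind the 601 GB number may have used a different encoding, so treat this only as an order-of-magnitude check.

```python
# Back-of-envelope: how big is "every melody" really?
pitches_per_note = 8                     # one diatonic scale, one octave
notes_per_melody = 12
melodies = pitches_per_note ** notes_per_melody
print(f"{melodies:,} possible melodies")          # 68,719,476,736

bits_per_note = 3                        # 8 pitch choices -> 3 bits each
bytes_per_melody = notes_per_melody * bits_per_note / 8   # 4.5 bytes
total_gb = melodies * bytes_per_melody / 1e9
print(f"~{total_gb:,.0f} GB if packed tightly")   # ~309 GB, same ballpark as 601 GB
```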

In visual arts, there's much more freedom. In digital, you've got 256x256x256 (about 16.7 million) possible colors per pixel, canvases can get massive, and the combinations of art styles and subjects are essentially infinite. And if you train a small 4 GB AI on hundreds of terabytes of images, it's going to learn how to reproduce common subjects, concepts, styles, etc. without physically being able to copy any one copyrightable element. There are outliers where something very exact and specific appears too often in the training data (the Getty logo, arguably the Mona Lisa, and certain characters, for instance), but that doesn't detract from the AI being able to produce wholly new pieces of art that don't infringe on copyrights, and I question whether an AI being able to reproduce Mickey Mouse constitutes the AI itself infringing copyright rather than the user infringing if they publish said image.

Point is, there's a good legal argument that image generation tends not to infringe copyrights (it lands squarely in fair use because it isn't copying or memorizing training data, barring people using it in stupid ways), while music-generation AI could frequently spit out arguably copyrighted material, due to the sheer numbers difference between training on over a billion unique images versus a much smaller number of songs with fewer distinct pieces that make them unique (and a much more litigious music industry that wrongfully protects tiny note snippets that probably shouldn't be copyrightable in the first place).

Maybe music AI will get better, but it sounds like a lot more work needs to be done there (or Google is just scared of the RIAA), and the case law on music copyright would also have to change.

6

czk_21 t1_j6je4my wrote

Well-written description of the differences.

As you said, every possible 8- or 12-note melody (on a standard scale) fits in 601 GB, and these combinations occur naturally and have been placed under zero copyright. Shouldn't that apply to Google's product, or even to the original songs? Since there is a finite number of combinations, people can come up with the same or a similar idea completely independently, and many songs already share similar melodies for whatever reason.

There should be no copyright on just a melody, only on the song as a whole.

1

SnapcasterWizard t1_j6izs2b wrote

>while the drawn-art industry (if you can call it that) was permissive and benevolent to a fault.

Ah yes, an industry with titans such as Disney is known for being permissive and benevolent to a fault!

1

Trinituz t1_j6gunnm wrote

"It's easier to bully a bunch of artists who barely make a living than musicians who've made millions and have lawyers to back them up" - AI devs, very likely.

4

Zlimness t1_j6h8x7h wrote

Not that big a difference, in the sense that AI could theoretically recreate any picture. But that's not the purpose of the tech, nor can you copyright AI-generated works, so Google is overreacting. Then again, the music industry is also incredibly overzealous and has deep pockets, so it might not be worth the risk.

2

MEMENARDO_DANK_VINCI t1_j6ilswx wrote

They're both toddler fields, likely to solve these problems only to have them replaced by much more challenging ones in the next 5 years.

1

Difficult_Bit_1339 t1_j6fpc97 wrote

I remember reading someone who worked in machine learning saying that Google's reluctance to release software has made them the company that trained an entire generation of machine learning engineers, who then went on to other companies to build great things.

This is certainly an example of that type of thinking. They're scared to innovate because they're more interested in protecting themselves from legal exposure than in pushing the technology. So the engineers who are passionate about creating new things go to fast-moving startups instead, and that leaves Google playing catch-up.

9

McDof t1_j6e5jpx wrote

I bet they are releasing some of the music it's generated, just under pseudonymous artists, in order to distance their name from the potential lawsuits that would follow.

I mean actually probably not, but that would be wild.

6

jloverich t1_j6f0aw5 wrote

It didn't seem that good. I know it will get better, but astounding is the wrong word. Maybe "almost compelling elevator music", but even then, there is something not quite right.

4

LtsThrwAwy t1_j6fu3du wrote

Sure, but everyone always focuses on the right here, right now. They've developed an AI music generator that only gets better as it goes.

I feel like with every technology someone shits on it, but the point is the development that becomes possible.

It's like shitting on the Wright Brothers because the first flight was only 59 seconds. What good is that going to do anyone?!?

12

Themasterofcomedy209 t1_j6h0mhq wrote

And OpenAI's music generator from a few years ago created out-of-tune nightmare music, and now music generators can actually create something that sounds good. It's all about progression.

4

SnapcasterWizard t1_j6j03z6 wrote

It's pretty astounding that we can generate music from text using algorithms. Like, this ability just didn't exist a few years ago, and now it does.

2

jonathanrdt t1_j6jiqk0 wrote

That's exactly how it seemed to me: it was only acceptable as background. When I actually listened and studied it, I found it off-putting and strange, just like most generated images. It has all of the musical elements, but they're assembled without understanding or art.

1

ten-million t1_j6fe7lz wrote

Yeah, I got the elevator-music vibe as well. Very blah. Could be on an old TV show.

0

Substantial_Space478 t1_j6fg2cn wrote

yeah i've also been writing and recording astoundingly good music for the last decade, but i...uh...can't release it... because...uh... concerns

yeah totally. "concerns"

2

Exel0n t1_j6h7zgj wrote

Sooner or later some open-source music generator will be released and the big three music publishers will be destroyed. They might sue, so what? Some no-name small company just declares bankruptcy, aye lmao, while the open-source code keeps being used by everyone.

The days of those copyright-abusing studios are numbered.

2

Responsible-Book-770 t1_j6ig4s7 wrote

So stupid: make an amazing breakthrough, tell the crowd, then not release it... Fuck copyright, honestly; every damn song is a remix of some other person's jam. So damn ridiculous.

2

datsmamail12 t1_j6j1fht wrote

Yeah, of course. Hey guys, I just created ASI. What? Don't believe me? Dude, trust me, it has superpowers and shit. Of course it's true! Why? Because I said so! I'm not releasing it because it would mess up the world and stuff. Y'know what I mean?

2

DEATHRETTE t1_j6em8e8 wrote

Release the MusicLM, man!!

Copyright my ass. You can thank everyone you want for 'sampling' their musical interests. What's different here!?

I guess you should copyright the letter E or the note E-flat. C'mon man... music is free.

#freethemelodies

1

DarkJayson t1_j6fmu9z wrote

Then what was the point of announcing it? Did they have a goal or just bragging rights?

1

Alex_Greene t1_j6g3p5y wrote

This has “I have a girlfriend, but she goes to a different school” vibes to it

1

bnetimeslovesreddit t1_j6i6b5o wrote

It's like they invented everything ChatGPT- or OpenAI-like but withhold it for ethical reasons.

1

zvoidx t1_j6kicpj wrote

I could see a music industry lawyer arguing that AI is not technically learning, but reshuffling finite bits to create something else.

1

TheCartoonClub t1_j6limrg wrote

I'm looking to make an aggregated search engine for MusicLM and other music generators to help artists find and remix music - would love to hear some ideas from folks in this thread on how to deal with copyright issues. https://soundsearch.net

1

SerenumUS t1_j6gv94p wrote

Yeah, this is literally every AI right now being thrown in the headlines.

It's taking other people's shit without permission and making shit that is heavily influenced by other people's stuff. And people still want to act like it can make genuine music.

0

bart9611 t1_j6ecyap wrote

So it's not MusicLM that is "astoundingly good" but the artists themselves who made the tracks the AI copied.

−5

sdric t1_j6fts3a wrote

I don't think you understand how percentages work. It happened in literally 1% of cases. It takes quite some prejudice against a technology to discard all of it for that.

Considering how many cover songs and remixes are out there, I'm honestly not surprised that a prototype AI without much training yet shows a minor degree of overfitting.

8

MrStomp82 t1_j6e94cr wrote

Google's MusicLM is astoundingly good at making facsimiles of other people's hard work and creativity.

−10

ikediggety t1_j6ebf94 wrote

Exactly. People keep talking about "artificial intelligence"; it ain't nothing but advanced copying.

−17

ATR2400 t1_j6g2rf0 wrote

How do you think it works? Honest question. If I ask an AI to make an image of a boat at sunset, what steps do you think it follows to achieve a result?

5

ikediggety t1_j6gx4ef wrote

I think it pores through tons of images created by people and copies and combines aspects of several.

Without decades of work being done by humans, there's nothing to "train" the system on. It's imitation, not intelligence

−3

ATR2400 t1_j6h5vgy wrote

That's not how it works at all. Stable Diffusion is trained on over two hundred terabytes of data, yet its download takes up 4 GB on my computer. How? Because it's not just pulling images from some database and playing mix and match with their pieces.

Although the comparison to human learning isn't perfect, it's called "training" for a reason. The imagery it views is used to teach the AI to create its own imagery. If the output bears a resemblance to someone's art style, it isn't because the model is ripping images from their DeviantArt page; it's because a great deal of what it learned about imagery came from that person. It's very loosely similar to how, while learning to draw, I might browse other people's work to learn how images of certain things are assembled and use that to gain skill and knowledge, but when I make my own art I don't directly use those images in the creation process. If I learn a lot from a specific person, my style may grow similar to theirs. Now, I must stress that humans and machines are very different, but it's closer to that than it is to the AI accessing some database of stolen images.

And no, there's no compression good enough to squeeze 250 terabytes into 4 GB without making the data supremely useless. And it doesn't connect to the internet; it works offline.
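
A rough back-of-envelope on that point, assuming roughly two billion training images (the LAION-2B scale often cited for Stable Diffusion) and the ~4 GB checkpoint mentioned above; both are ballpark figures, not exact numbers.

```python
# How many bytes of model weights exist per training image?
model_bytes = 4 * 1024**3            # ~4 GiB checkpoint
training_images = 2_000_000_000      # ~2 billion images (assumed)

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# ~2 bytes per image: nowhere near enough to store even a thumbnail,
# so the weights must encode statistical patterns, not the pictures.
```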

4

ikediggety t1_j6jclch wrote

"Because it’s not just pulling images from some database and playing mix and match with their pieces."

It's playing mix and match with not just their pieces but their characteristics. If that weren't what it was doing, the training data would not be required.

"It’s very loosely similar to how if during the process of learning to draw I browsed other peoples works to learn how images of certain things are assembled and used that to gain skill and knowledge but when I make my own art I don’t directly use those images in the creation process."

But you aren't a machine carrying out instructions with no choice. You aren't just an output-producing biological algorithm. The most important input for creation is the initial idea that the work should exist at all. When you sit down to make a painting, yes, you are employing techniques you may have learned from others, but you may also invent your own techniques that haven't been used before. Most importantly, you are the instigator of your own creative process - you are not making that painting because you are compelled to by outside forces you cannot control; you are making it because it occurred to you and you thought it was a good idea.

No machine, absent human input, has ever produced a painting, for the simple reason that no machine ever does anything absent human input. Machines simply carry out instructions given to them by humans. They are very good at that.

It's simply a calculation engine, and humans have done the hard work of figuring out how to use calculations to synthesize works of art.

Let me know when an AI, unprompted and with no input, asks a question of a human being. At that point I will call it intelligence. Until then, it's just a very advanced program processing input to produce output.

0

SnapcasterWizard t1_j6j0d4r wrote

>Without decades of work being done by humans, there's nothing to "train" the system on. It's imitation, not intelligence

If you raised a human in a dark room their whole life, do you think they could make art if you handed them a paintbrush and turned on the light?

2

ikediggety t1_j6j9tiw wrote

Well, somebody did. Somebody, somewhere, made a cave painting when nobody else had before.

Imitation is not creation. Advanced copying and pasting is not intelligence; it just looks that way if you squint real hard.

0

SnapcasterWizard t1_j6jfqh1 wrote

Yes, and cave paintings don't look anything like the kind of art produced today. Art is learned and developed through imitation.

1

ikediggety t1_j6jjjxg wrote

It can be, but that's not the only avenue.

Crucially, major developments in art are frequently reactions against what came before, not simply reiterations of it. Pointillism was unthinkable in the 1700s, for example, because nobody had thought to do it. The idea didn't come from the desire to perfect the techniques of Mannerism or Baroque painting. It came from the idea to do something different.

Many major advances in human civilization come from a similar place of not accepting the rules. Machines, on the other hand, are literally incapable of not following the rules. They are large calculators. Rules are all they do.

Left unattended, a human will assess their environment and choose to take actions that benefit them.

Left unattended, computers will rust, because it will never occur to them to do anything else, because nothing ever occurs to a computer. Computers don't have ideas.

1

SnapcasterWizard t1_j6jy87m wrote

>Left unattended, computers will rust, because it will never occur to them to do anything else, because nothing ever occurs to a computer. Computers don't have ideas.

Except if the computer is running a neural net, then yes, it actually can "come up with new ideas"; that's the entire point of machine learning algorithms.

As for your previous paragraphs: in order to have a reaction against something, there must be something there. New art styles and ideas build upon everything that came before them, even if they're a rejection of those ideas.

1

ikediggety t1_j6kbbtt wrote

But machines don't do that. AI will never invent a new genre of music, or a new style of painting. It can iterate and improve upon what it already knows. That's it.

All it's doing is running really fancy math that humans invented and programmed into it to analyze thousands of works made by humans and spit out variations on them. It's a Netflix recommendation engine on steroids.

And no, computers don't "have ideas" because ideas are spontaneous. Computers produce output, and they do so because that's what human beings instruct them to do.

ETA: show me the AI algorithm that, when trained on centuries of baroque and mannerist paintings, invents impressionism. Show me the algorithm that, when trained on centuries of Bach and Haydn, invents jazz.

0

Imaginary_Passage431 t1_j6emq25 wrote

Haha, they aren't releasing it because they are lying and OpenAI will beat them again. RIP Google.

−11

ATR2400 t1_j6g2kwd wrote

How is it RIP Google? Google isn't primarily an AI company. They're a user-data-collection company and advertisement platform.

If all their AI projects got shut down tomorrow, it would do nothing to them.

3

Imaginary_Passage431 t1_j6gd5zg wrote

AI will replace non-AI businesses.

2

ATR2400 t1_j6ggo4f wrote

OpenAI sells AI services to other companies, but they don't really seem to have a plan to use AI to dominate every sector by themselves. Worst-case scenario, Google uses some of that big money to pay someone else to lend them a good AI.

2