Submitted by Sieventer t3_10n7gj7 in singularity

I don't get it. OK, we understand that LLM smart bots can be 'dangerous' without filters. But a freaking music generator? How could it affect humanity negatively? How could it be so dangerous that they won't put it out?


And they were supposed to be on 'red alert'?

97

Comments


DadSnare t1_j67fkpg wrote

With how fast things are going, I hope someone figures it out and makes an open-source version soon.

65

Akimbo333 t1_j67hj6r wrote

I think Polymuse and Stability AI are working on open-source versions, so something should be out pretty soon

38

Tolchiee t1_j67x0ca wrote

We (Polymuse) may indeed be... 😈

41

gay_manta_ray t1_j69hljc wrote

i have a feeling that these big companies are at a very high risk of losing the AI race because of their reluctance to release anything too disruptive.

12

dayaz36 t1_j67k257 wrote

It’s never been about safety. These companies just want to control everything. Safety is the excuse. Case in point…

56

Talkat t1_j681bli wrote

So a couple questions then.

  1. If it is about control, why release these papers? Why give the industry/world updates on your progress?

  2. If they wanted to control the AI and the products it creates, they would create an AI spinoff to launch products (e.g., make an AI music company that releases hits). But there has been no word of this either.

9

jloverich t1_j681uu6 wrote

The researchers aren't interested in working in places where they can't publish. There are other places that probably aren't publishing exactly what they are doing; Midjourney and Womba, I think, are examples.

11

HenryHorse_ t1_j682w84 wrote

>MusicLM

In terms of MusicLM and Google:

  1. Maybe for community feedback and advancements/improvements.
  2. There could be a minimum baseline of quality before these models are useful; when that happens, they get integrated into existing products. YouTube, for example: MusicLM isn't currently good enough yet, but it would be a great product for content producers.

9

fingin t1_j68hdlp wrote

Not that I necessarily agree with OP but:

  1. Papers are great promotion. Think of all the buzz that has now been created for Google's MusicLM just by releasing the paper. Also, now that the paper is out, problems or limitations can be addressed by other researchers, which will ultimately help Google. Really, the information/theory behind the model is not that important compared to an actual product or tool being served.
  2. Agreed!

4

Trumaex t1_j6a8kap wrote

Or they're just doing those press releases as marketing stunts. That's what OpenAI did with GPT-2, and it worked very well for them.

1

LittleTimmyTheFifth5 t1_j67cxnb wrote

Here's a thought: how would the music industry react to that? I sense there would be a lot of legal fights and claims. Besides, they could probably privately license it to companies to get some money off it or something.

51

Spire_Citron t1_j684p96 wrote

If there's one industry you don't want to get into a fight with over copyright, it's the music industry. I think you're right. If the music industry challenges them and wins, it could impact all their other AI ventures.

17

pressurepoint13 t1_j688n2n wrote

Everyone will eventually lose to AI. It's inevitable.

28

SurroundSwimming3494 t1_j69cz5c wrote

Yeah, let's have one industry choose our future on behalf of the entirety of humanity.

Not that this sub would have a problem with that, of course.

−5

IONaut t1_j6dmw2b wrote

Copyright infringement by AI training will probably be decided in the class-action lawsuit brought by visual artists against Stability AI before the music industry even gets a chance.

2

Glittering-Neck-2505 t1_j6btcx9 wrote

I think the bigger factor here is that research prototypes aren't typically released to the public. Companies that have released public betas have done so after numerous iterations behind closed doors. These are things that are not public by default. I'm going to get pushback for saying this, but I don't know why people feel entitled to freely test out R&D prototypes.

ChatGPT has been the exception, not the rule.

1

6omph9 t1_j67bb1s wrote

Music is powerful, man. The world isn't ready for that.

14

LUNA_underUrsaMajor t1_j67t69d wrote

It's either a financial liability, or they are waiting to figure out how to make money off it.

10

GodOfThunder101 t1_j67khzp wrote

Copyright issues. It would probably spark lots of lawsuits.

8

visarga t1_j68th6d wrote

Let me show you how copyright can be sidestepped.

> In December 2014, the United States Copyright Office stated that works created by a non-human, such as a photograph taken by a monkey, are not copyrightable.

Since AI-generated content is public domain, an AI trained on AI-generated content is free from any liability. This second-generation AI cannot replicate any original human work because it never saw one in its training set.

By training on variations we can cleanly separate expression from idea. Copyright only covers expression, not the ideas themselves. A variation in the same style will capture just the style and not the contents of the original.

So, a second-generation AI can learn from what is allowed to be learned (ideas) and avoid learning what is protected (expression).

2

Superschlenz t1_j6atd0c wrote

If the output from the first-generation AI, which becomes the input to the second-generation AI, is considered illegal, then the output from the second-generation AI may be considered illegal as well.

1

visarga t1_j6c2fd0 wrote

The question is: is it illegal in itself, for simply existing, or illegal to publish but OK to train on, since it has no copyright and does not closely resemble the originals? It could be a technical way to reduce exact copyright infringement.

2

Trumaex t1_j6a91ie wrote

Nah, doubt it. They did expect trouble (and got it) when scanning all those books, yet they did it anyway. Google can afford the best lawyers. It's something else. Maybe it's not that good, maybe it's a publicity stunt, maybe they want to gauge the reaction. Maybe they don't want to be first, so the anti-AI-art hateful crowd goes after someone else first. Etc., etc.

1

Superschlenz t1_j6atxcu wrote

>It's something else.

Yes, and it's called the passing of time. Google 2004 ≠ Google 2023.

1

ElvinRath t1_j67uf8g wrote

It's Google; they don't release things, just papers.

Let's pray that they keep it that way (At least releasing papers).

7

malcolmrey t1_j68tm0t wrote

why?

1

ElvinRath t1_j68uhc9 wrote

Why pray that they keep publishing papers? Because they are very useful for other groups.

6

malcolmrey t1_j68w9cg wrote

sure, that is good

but if you are already a religious person, why not pray that they not only release the papers but the codebase?

also, I was wondering about the praying, does it ever work for you?

1

blueSGL t1_j69254v wrote

Some people use words like that as turns of phrase and are not religious.

You will find atheists uttering phrases like "God fucking damn it" and "Jesus Christ" as expletives because it's common parlance.

Much like 'pray' being synonymous with 'hope'

Same with using Yiddish expressions without being Jewish.

7

ElvinRath t1_j694ill wrote

Haha, sorry, it was a set phrase. I didn't mean literally praying; I'm an atheist.

I think it's also used in English, but maybe it is not that common? At least in my language there are a lot of expressions that include religious terms (like praying or God) that are quite common, and I don't feel particularly bad using them when they seem to fit the conversation.

It means "let's HOPE+WISH that they keep publishing the papers".

I don't mention the code because...well, I WISH that they published it, but I have no HOPE for that hahaha

4

malcolmrey t1_j695bo4 wrote

I come from Poland, and if someone says that he/she is praying - they mean it :-)

cheers!

2

Stakbrok t1_j692o60 wrote

The last time I prayed for something, it was for a job opportunity. I prayed for God to open up the right doors and provide me with the resources I needed. The next day, I received an email from a company that was interested in my qualifications and offering me an interview. I ended up getting the job.

1

HelloGoodbyeFriend t1_j67grcc wrote

They’re scared of starting the avalanche that will be Napster 2.0

5

CypherLH t1_j67jb20 wrote

It bugs me so much that this bullshit copyright anti-AI narrative has taken hold among so many people. Sorry but looking/listening to a bunch of stuff to learn what that stuff looks like and then using that learning to produce new, entirely original, works is not a f-ing copyright violation. If it is, then genres/fashions/styles can be copyrighted and fair use is dead, so basically all art and creative work is dead, since no one can learn their art from stuff they "don't own", no one can follow a fashion trend, and no one can be inspired by the style of anyone else's work ever. How can people not see what a disaster it would be if these copyright assertions are upheld by the courts??? And it all stems from a fundamental misunderstanding of how this AI tech works; these models are NOT just mashing things together like a giant collage, no matter how many people keep repeating this false assertion.

36

TwitchTvOmo1 t1_j67nccl wrote

For what it's worth, no matter how much copyright advocates scream and cry, it won't stop AI from replacing entire industries. Like it or not the music industry is next. It might slow us down, but it's happening one way or another.

14

SurroundSwimming3494 t1_j67qrov wrote

>Like it or not the music industry is next.

AI hasn't replaced any industry yet. It hasn't even made significant inroads into replacing any industry, as far as I'm concerned, and I think that'll remain the case for at least the foreseeable future.

And also, going from this model (MusicLM) to the entire music industry being replaced is just one hell of a leap to make.

My personal and humble opinion is that tools like these will help musicians flourish for a good while, before the tools become so helpful that they actually begin disrupting the industry.

4

TwitchTvOmo1 t1_j67rg0b wrote

>My personal and humble opinion is that tools like these will help musicians flourish for a good while, before the tools become so helpful that they actually begin disrupting the industry.

I never said the opposite. Industries aren't gonna go "poof" and disappear from one moment to the next. But it's already begun. Diffusion models will be remembered as the beginning of the end of the digital art industry. MusicLM and other similar tools that will surface in the near future will be remembered as the beginning of the end of the music industry. And it's not a hell of a leap to say this is gonna happen within the current decade. Everything seems like a hell of a leap to our brains because we're not very good at grasping the concept of exponential growth. Our brains think linearly, but AI growth has been exponential for years now.

2

visarga t1_j68oh3j wrote

I work on NLP, on simpler tasks like information extraction from forms. My model was based on years of painstaking labelling and architecture tweaking. But last year I put an invoice into GPT-3 and it just spat out the fields in JSON, nicely formatted. No training, it just works.

At first I panicked: here we have our own replacement! What do I do now? But now I realise it was not so simple. In order to make it work, you need to massage the input to fit into 2000 tokens and reserve the remaining 2000 for the response.

I need to check that the extracted fields really do match the document and are not hallucinated. I have to run it again to extract a few fields that came out empty for some reason. And I have to work on the evaluation of prompts; it's not just writing them, they have to be tested as well. Now I have so much work ahead of me I don't know what to do first.

I believe most AI adoptions will be similar. They will solve some task but need help, or create new capability and need new development. There is almost no AI that works without a human in the loop today; not even ChatGPT can be useful until someone vets its output, and certainly not Tesla or Waymo SDCs.
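
A minimal sketch of that "check the extracted fields" step, assuming the model has already returned its JSON (the field names, the example invoice, and the helper names here are all made up for illustration):

```python
# Hypothetical post-processing for LLM-based form extraction: verify that each
# value the model returned actually appears in the invoice text, so hallucinated
# fields get flagged and empty fields get queued for a second pass.
import json
import re


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences don't fail the check."""
    return re.sub(r"\s+", " ", text).strip().lower()


def verify_fields(invoice_text: str, model_output: str) -> dict:
    """Split the model's JSON output into verified, empty, and suspect fields."""
    fields = json.loads(model_output)  # raises if the model returned malformed JSON
    haystack = normalize(invoice_text)
    report = {"verified": {}, "empty": [], "suspect": {}}
    for name, value in fields.items():
        if value in (None, "", []):
            report["empty"].append(name)       # candidates for a re-run
        elif normalize(str(value)) in haystack:
            report["verified"][name] = value   # value is literally present in the document
        else:
            report["suspect"][name] = value    # possibly hallucinated; needs review
    return report


# Made-up example:
invoice = "Invoice INV-1042\nTotal due: 1,280.00 EUR\nSupplier: Acme GmbH"
output = '{"invoice_number": "INV-1042", "total": "1,280.00 EUR", "iban": "DE00 1234", "due_date": ""}'
print(verify_fields(invoice, output))
# {'verified': {'invoice_number': 'INV-1042', 'total': '1,280.00 EUR'},
#  'empty': ['due_date'], 'suspect': {'iban': 'DE00 1234'}}
```

Exact substring matching is crude (it misses reformatted dates or amounts), but it catches the worst hallucinations cheaply; the token budgeting and re-run logic sit around a call like this.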

8

Frumpagumpus t1_j69myrc wrote

Nice example.

It definitely does seem like "contextualization" is one of the biggest limiters on GPT performance.

https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals

You might enjoy this Copilot reverse engineering in a similar vein. If I had enough time I would probably port some of these techniques to Emacs (you can use Copilot there, but from what I've seen the extensions don't quite do all of this, though it works well enough with just the buffer).

1

SurroundSwimming3494 t1_j67s9i0 wrote

>Diffusion models will be remembered as the beginning of the end of the digital art industry. MusicLM and other similar tools that will surface in the near future will be remembered as the beginning of the end of the music industry.

I definitely think they'll be remembered as the start of revolutions in both digital art and music, but I'm not sure they'll be remembered the way you envision. We'll see.

>And it's not a hell of a leap to say this is gonna happen within the current decade.

I guess we'll find out on Jan 1, 2030, but I think humans will still be playing a role in both the art and music world by then (even if quite different).

>Our brains think linearly, but AI growth has been exponential for years now.

Good point. But it's also worth noting that AI has hit roadblocks in the past after periods of exponential improvement. I don't see why that couldn't happen to the current AI boom at some point (I think it probably will).

5

visarga t1_j68nu2s wrote

AI will make some things easier and create more expectations and work around it.

1

CypherLH t1_j67nlpz wrote

Yeah, it's the potential slowing down that bugs me. I assume all the big AI players would have to build entirely new datasets to fit within whatever the new copyright regime would allow, then re-train their large models on the new datasets, etc. It would definitely slow things down, and some players like Midjourney might have to close their doors with a setback like that.

Plus I fear the broader implications of a new copyright regime that would effectively allow genres, fashions, and styles to be copyrighted.

3

TwitchTvOmo1 t1_j67o8dg wrote

Capitalism gonna capitalise, mate. Every corporation will spend billions lobbying and doing their best to find a way to profit as much as possible off of AI, even though it should be democratized.

Let's take this post for example. OP says he has no idea why they'd keep MusicLM private and how their usual argument of "it could be dangerous" doesn't really make sense here. It's because it's bullshit, and that's not the reason they're keeping it private. It doesn't even have anything to do with the potential legal battles. The real reason is they know it's going to be a MASSIVE cash cow in the next 5 years and they'd be stupid not to milk it behind the scenes while acting like they're looking out for the world. The only chance of them releasing it is if a competitor like stability.ai releases something similar for free. Then they would be forced to release theirs too (not for free, of course) before stability.ai erodes the entire market and they can no longer make the trillions they dreamed of.

Free market competition is the only hope there is. And that still looks a bit grim, considering the huge amounts of capital needed to make progress in these areas. And we all know which are the companies with those huge amounts of capital. The same ones that wanna squeeze every profitable penny out of AI progress.

3

purgatorytea t1_j67ok6g wrote

I 100% agree with you. And the only people who stand to benefit in that scenario are the big companies and the wealthy, who will hire lawyers to enforce whatever they believe they "own". Expanding copyright will only hurt regular people and smaller artists... the artists that these "movements" claim they're advocating for... the smaller creators who are joining in on the anti-AI crap... they're the ones who will be harmed by the new copyright regime... more so than if we simply allowed AI art generators to operate without this legislation and slowdown of technology. In fact, I wouldn't be surprised if there are some big companies trying to push the anti-AI art movement because they know it's a big opportunity to gain control of the industry and increase their own profits.

2

CypherLH t1_j67q5pq wrote

Yep, the anti-AI artists are literally trying to cut off their noses to spite their faces. They will be cutting their own throats if they get their way. They should be embracing this technology as a way to augment/expand their work and welcoming all the new people showing an interest in art because of the accessibility of the AI tools.
The annoying thing is they are spreading this shit on TikTok and elsewhere, indoctrinating young budding artists to hate the "Evil AI" that is stealing all their work and trying to suck out their humanity. (Literally, my 11-year-old is spouting this stuff at me because of what she is seeing online.)

4

wavefxn22 t1_j67v35u wrote

It's not evil, it's a tool that evil people can use as well as good people. It needs copyright restrictions so you can't just straight up steal someone's unique style that they spent a lifetime developing. But it also shouldn't be so restricted that we can't make anything at all.

1

visarga t1_j68qfux wrote

Styles, by definition, are broad categories. If they were copyrightable, then the same rule would need to apply to both humans and AI. We can never know when a human has used AI, or just looked at AI output for inspiration. So we have to assume any human work might have AI in it.

If human works were exempt from the strict rules AI has to follow, what's to stop big companies from hiring people to whitewash the style copyrights? What companies need is to license some images in that style, and the images can be produced for hire at the lowest price.

2

wavefxn22 t1_j69828s wrote

They aren't necessarily broad categories. You can ask AI to do something in the style of a specific artist. Say, Van Gogh. His style was not a broad category, it was very distinct. And even he had styles within his styles, different periods.

AI can be broad or specific; when it gets too specific, as in people asking for "in the style of Van Gogh", then we need some copyright protections.

I picked Van Gogh as an example because he killed himself thinking he was worthless. He'd be even worse off today.

1

visarga t1_j6c2z5z wrote

I disagree; copyrighting styles is absurd. Countless possibilities banned in one go? We'll get to the point where humans fear creating anything because it will inevitably resemble some style somewhere.

2

wavefxn22 t1_j6d8crl wrote

I don't think you understand what I said; there's a range. A work in the style of Van Gogh is a limited style range that should have copyright protections. A work in the style of Impressionism, however, is fair use.

1

visarga t1_j6n30bh wrote

I don't think even Van Gogh can claim ownership of squiggly lines that look like fire, or of the white-blue-gold colour palette. They pre-existed and were rediscovered in many ways by many artists.

Can we agree that a style used by 3 or more artists doesn't belong to anyone and is open for AI to use? We just need to make a list of all styles that are generic enough.

0

HelloGoodbyeFriend t1_j67o4a9 wrote

Completely agree. Unfortunately, the way the AI works isn't easily explainable as of now, so uneducated critics or artists with a bias against AI content will immediately call it a "collage tool" instead of focusing on the power it has to expand art.

I'm a musician myself and I cannot fucking wait till the models are capable of producing anywhere near the musical equivalent of what the text-to-image models are doing now, because I will immediately use it as a tool to expand my art and expression, to get the most accurate output of my emotions and the story I want to tell. That's the biggest part of the conversation that's missing right now, in my mind. The great visual artists of today should be, or probably already are, using AI to amplify their talents. AI, for now, is just raising the bar of how the average person can attempt to express themselves; it's up to the artists to view it as a tool or a threat.

3

CypherLH t1_j67onsk wrote

It unlocks artistic expression for people who previously lacked the traditional talents or lacked the time/money/resources to get that training, etc. Someone who can write but never had the talent to draw can now infuse visual imagery into their products without having to spend a bunch of money and waste time going back and forth with a contractor. It's going to bring in an explosion of new creative effort, new ideas, etc. Plus, think about disabled people who couldn't physically do things like drawing/painting but can now interact with an AI tool via speech recognition.

7

BigZaddyZ3 t1_j67plhh wrote

The problem with people who have this type of utopian fantasy is that you clearly don't understand the concept of saturation and how this type of bar-lowering will simply tank the value of art and end up rendering the majority of it worthless in the future. It won't lead to some unrealistic renaissance where everyone is lauded for their artificial, AI-granted "artistic" abilities. Instead, art will be so easy and cheap to produce for even the most talentless morons that creating art won't be impressive or meaningful to anyone in the future.

0

CypherLH t1_j67puor wrote

Literally this is the same argument used against the printing press, digitization of data, etc. Oh no, the vulgar masses can now print and read whatever they want, the horror!

Suppressing this AI now is literally akin to trying to suppress movable type to save the jobs of scribes and monks.

Hell, I bet people made the same sort of complaints about _writing_ when it was first coming into use.

3

BigZaddyZ3 t1_j67qzg6 wrote

First off, nowhere in my comment did I advocate for suppressing it, smart guy… I was simply telling you what the outcome will most likely be from these innovations.

Second, stop trying to compare AI to printers, etc. AI is completely different from all those other tools. It's a dumb false equivalence that doesn't even make sense. And history doesn't always repeat, so appealing to the past is ridiculous anyway.

Lastly, do you not understand that the value of art is tied to its rarity? Do you think "The Starry Night" would have been so beloved if literally everyone could create something just as beautiful with a few text prompts? If everyone has the capability to be a great artist, no one has any reason to consume or pay for anyone else's art. Thus art will cease to have any real monetary or cultural value. (And that's not even touching on the damaging effects that market saturation will have on these industries as well.)

Deep down even you know I’m correct because you can’t even actually argue with what I’m telling you. All you can do is try to appeal to past creations that are in no way comparable to what AI is capable of. Says a lot huh..

−6

CypherLH t1_j67rv28 wrote

I'll admit I was ranting off on a tangent there.

That said, I really don't give a shit if artists don't like that AI Art makes it easy to generate art. The onus is on them to use the new tools to augment/improve their work....which they should be better at since they have the advantage of their artistic talent.

Set aside the copyright issue for a moment....would you agree that most of the anti-AI artists really just don't like AI generating art, period? Their citing of "copyright" is just a tactic, the real issue is that they just don't like AI Art and they hate the thought of dirty untalented vulgarians being able to express their ideas with a new tool.

5

BigZaddyZ3 t1_j67smhg wrote

Well, I definitely agree that artists just don't like AI art, period, and never will (for good reason). But with the copyright thing, I don't think it's a "this or that" situation. I genuinely believe it also does piss them off that AI technology is not only a threat to their industry, but is basically using their own art to eventually render them obsolete. Who wouldn't be slightly pissed in that scenario?

But like I said, I do agree that the real animosity they have stems from the fact that they can see the writing on the wall. If people can just use AI to design their own art, there's no need to ever hire "artists" as we know them. Thus the market for "artists" will disappear shortly after. Their animosity will most likely be justified in the end. But the genie's out of the bottle now so… it is what it is.

3

CypherLH t1_j67tmis wrote

Obviously "art" is going to change, there is no denying that. And yes there will be a flood of art. The skill will come in using the new tools to enhance works and create projects that are larger in scope, etc. But yes there is no avoiding that there is going to be a MASSIVE amount of art out there and it will be divided into smaller and smaller niches. Thats just the way its headed, like it or not. Add it to the pile of things AI is going to disrupt MASSIVELY.

By the way, if we get to UBI or some form of "post scarcity" then its alleviates most of the problems because artists would no longer need to earn end's meat off their work, they could just do art for the joy of it like any starving artist but without the starving. Sorry to sound all utopian but this IS /singularity ;)

4

BigZaddyZ3 t1_j67uqtz wrote

I can agree that a post-scarcity world takes the sting out of losing your career, but my concern lies more with what the value of creating art will be in a world where AI allows everyone to be just as capable as you are.

There may not actually be much fulfillment in creating art in a world where artistic skill itself is no longer scarce. You know what I mean? Sure, some may still attempt to make art when bored or whatever. But what's the point when some less talented idiot can just open up an AI and create something just as good or even better with a fraction of the time and effort it took you? How fulfilling will making art be when "making art" simply consists of typing a short description into a text prompt and then, boom… beautiful artwork?

I’m just not sure the value of making art will survive this transition into post-scarcity. I guess that’s what’s being debated here.

2

CypherLH t1_j6a4za4 wrote

I can see your point, but I optimistically assume that a larger amount of art in total will also mean a larger amount of quality art (even if it's a small percentage of the total). And the same AI tools that generate art will also be able to help people seek out art that appeals to them. The best art will still rise to the top, and there will still be skill in things like worldbuilding, setting style guides, etc.

2

visarga t1_j68rznt wrote

> If people can just use AI to design their own art, there's no need to ever hire "artists" as we know them.

So naive. The competition will not fire their artists, and they will use AI as well. Guess who will win? They might have so much volume they need to hire more.

−1

HelloGoodbyeFriend t1_j67tm9c wrote

“Deep down even you know I’m correct because you can’t even actually argue with what I’m telling you.”

I was going to write out a whole counterargument to your comment but then I read this. Fuck the fuck off dude and if you don’t understand why I’m saying this, go ask ChatGPT.

4

BigZaddyZ3 t1_j67ts6b wrote

Nice argument you got there pal… totally didn’t prove me right with that comment… nope.

−2

californiarepublik t1_j681dvo wrote

The problem with your ideas here is that you have no proof or evidence it will work out this way.

−1

BigZaddyZ3 t1_j681o9q wrote

I have no proof that the more saturated an item or skill is on the market, the lower the price it yields? I have no proof that the larger the supply of an item or skill, the lower the demand? Am I misunderstanding your question or are you just new to planet Earth? Please be more specific…

2

californiarepublik t1_j682la1 wrote

Another point: using MusicLM to create drumbeats or riffs for an electronic dance track could actually be a MORE creative process for many producers than the ways they are doing it now, since many people are simply sifting through a library of samples to find their basic musical building blocks. Using a text-to-music generator to make your beats instead seems like a potentially much more creative process, and personally I will embrace this as a tool in my own music making as soon as it's available. I don't see this as replacing my 30 years of education and experience as a musician; rather, my background as a musician and artist enables me to get much better use out of AI tools and get the results I want.

1

BigZaddyZ3 t1_j6850c9 wrote

Ahh… so you're a "would-be" artist yourself? That explains a lot. Have you not considered that you may be biased on this particular topic, buddy? Seems like you have a vested interest in the idea that human art will somehow be spared from automation (for pretty obvious reasons).

Ask yourself this: what's gonna happen when AI creates a world where there's no need for beatmakers, because AI will generate a perfect beat in seconds based on a few descriptive sentences? What happens when everyone can use these AIs to make their own beats (so there's no need for them to ever buy anyone else's beats)?

What happens when we have AI that can totally bypass the process of "making beats" and can instead simply generate fully completed songs with human vocals included? What happens to the music industry when this type of tech is available to everyone?

1

californiarepublik t1_j6828h9 wrote

Let me try.

With regard to music, it's already very easy to produce formulaic derivative music without AI; you can simply buy all the samples online and snap them together. You can buy vocal a cappellas and use them, or hire a studio singer online for cheap.

This has led to a flood of mediocre music ALREADY. We're well down that road, but somehow the best artists still manage to rise to the top by creating work that moves people, regardless of what tools were used. I believe that this situation will continue well into the future, and AI art that can replace human artists is still as far off as 100% reliable self-driving cars, another chimera.

0

BigZaddyZ3 t1_j68400s wrote

Because… at the moment, making "the best" music still requires some degree of skill and talent. What do you think will happen once we have AI that can generate music better than today's best artists from a few descriptive text prompts? What happens to the market for music when anyone can generate an entire album full of songs personally tailored to their specific tastes, for free, with AI? Do you still think people will bother listening to (or financially supporting) music created by other people?

It's completely stupid to compare the state of any industry today to what will be possible with these AIs in the future. There's never been a point in human history where we were able to create the type of technology that we're working towards now. There's no historical precedent for a world with AI, so comparing the future to the past is useless here. At the end of the day you'll always be comparing two different worlds. History doesn't always repeat itself, my friend. The past doesn't necessarily dictate the future.

3

californiarepublik t1_j684kzf wrote

> Because… at the moment, making "the best" music still requires some degree of skill and talent. What do you think will happen once we have AI that can generate music better than today's best artists from a few descriptive text prompts? What happens to the market for music when anyone can generate an entire album full of songs personally tailored to their specific tastes, for free, with AI? Do you still think people will bother listening to (or financially supporting) music created by other people?

Will this happen before or after my full self-driving Tesla can drive me to work in a snowstorm in New England?

0

BigZaddyZ3 t1_j685awm wrote

Are you under the impression that neither of these things will ever happen? Or do you just think that they’re a long time away? You’d be wrong on both counts…

3

californiarepublik t1_j694wuz wrote

I'd rather say that by the time AI can do all those things, we'll have bigger problems than musicians and artists being replaced; at that point AI will be able to do everything better than humans.

2

californiarepublik t1_j681xzd wrote

r/iamverysmart

−1

BigZaddyZ3 t1_j68340b wrote

Because… I simply disagree with you? Seems like someone just has some intellectual insecurities. 😂 But whatever, I’ll take that as a compliment. The fact that you think I’m trying to “look smart” when I’m simply giving my views on the matter is hilarious tbh. It’d be like telling a beautiful person simply having a conversation in a restaurant to “stop trying to look beautiful 😡”.

No one’s trying to look like anything buddy. Do you really think I wanna impress some random dumbass on Reddit that has yet to even provide any real argument against what I said? Lmao get over yourself. I simply gave my opinion, you proceeded to post a dumb rebuttal, and then I responded to that. That’s it. The fact that you’re now trying to resort to childish insults proves you just don’t have anything meaningful to add to the conversation. So stop wasting your own time and just move along pal. 👍

2

wavefxn22 t1_j67us8n wrote

It's not that simple. At least for my friends in animation: one has already had the experience of a person using AI to copy her specific style. Imagine training and drawing your whole life and having a body of work to show, then a robot just takes it and makes more that looks like you made it, but in 30 seconds, and someone else uses it to make money. Sucks.

3

CypherLH t1_j6a5b3i wrote

I can see how that might be frustrating, but "style" cannot, and should not, be something that can be copyrighted. The negatives of doing this would outweigh the positives.

2

Talkat t1_j681d60 wrote

Modern-day Luddites. It will get a hell of a lot worse when the AIs actually start producing a higher-quality product.

2

CypherLH t1_j6a64u6 wrote


I've actually been surprised at how rapidly, and how deeply, the anti-AI sentiment took hold in the art community. I still hope it's mostly a vocal minority.

And yeah, as the models keep improving, the anti-AI types will probably just get more shrill. It will be funny to see them keep trying to make fun of AI art as they have to get more and more picky about the flaws they point out in AI-generated content. At some point the models will figure out things like hands, and keep getting more and more consistently coherent.

3

Sinity t1_j6ehzcx wrote

> this bullshit copyright anti-AI narrative has taken hold among so many people. Sorry but looking/listening to a bunch of stuff to learn what that stuff looks like and then using that learning to produce new, entirely original, works is not a f-ing copyright violation.

I think it mostly didn't actually take hold.

They just have a problem with automation itself. All of this talk about copyright is just the best they can do to argue that technological progress should be halted.

1

CypherLH t1_j6eutp4 wrote

Hopefully as the models keep getting better, the anti-AI crowd will just fade into the background, since everyone else will be enjoying all the cool new tools and using the capability to enhance their work, or just for fun, etc.

2

AbeWasHereAgain t1_j67psxu wrote

I think it has more to do with making money off a bot trained on copyrighted content.

0

CypherLH t1_j67qb43 wrote

Don't post anything online if you don't want others to learn from it and be inspired by it, and don't want to accept the consequences of Fair Use.

2

BellyDancerUrgot t1_j67oidu wrote

Yes, but these models were trained on publicly available data without consent. That's the big legal problem, and imo entirely fair. Fair use falls flat in this argument lol.

Edit: for people replying to my last comment

The first mistake is comparing neural networks to the brain in this context.

And no,

their output is not unique, because it follows the same distribution that it learnt the representation on. Humans don't do that. You can't find a human analogy because humans do not learn things the same way as neural nets.

Neural networks can't actually extrapolate data because they don't have physical intuition, just a large associative memory. You only think they can because you are uneducated on the topic.

−2

CypherLH t1_j67pjdy wrote

No consent is needed. The training algorithm is just looking at the stuff and using that to learn how to create new stuff. It's not just ingesting it all and then mashing it back out in some giant collage. It's functionally no different from me or you looking at ArtStation to get ideas and then being inspired by those ideas to go make new stuff.

If you post your work in a public forum then other people get to look at it and be inspired to do similar works. There is no copyright on "style" or genre or fashion, etc.

In technical terms, using the works for learning/training is covered by fair use, and creating new works based on that training constitutes a transformative work and is covered under existing copyright precedent. If some judge is convinced to change this then it opens a giant can of worms, and artists will have cut their own throats, because it leads to Disney copyrighting entire genres and whatnot.

3

BellyDancerUrgot t1_j67qd3w wrote

Totally wrong. A neural network learns a representation from the data. It literally scans ur work. The entire analogy of it 'just looking' at ur data is wrong. There's a reason why artists have watermarks and signatures on artwork hosted on various websites. Circumventing measures put in place to prevent misuse doesn't mean it's legal, it just means existing laws were inadequate.

Edit: FYI, there's already work being done to trace back the datasets on which AI art generation models were trained. Quite easy to do, since most GAN and diffusion models have distributions that get replicated in the output (cuz the outputs are derived from the representations learnt from the dataset they are trained on), making them easy to trace back.

−1

CypherLH t1_j67qlmd wrote

A representation of a work is NOT the original work, lol. In this case the "representation" is just adjusted weightings in a massive neural net with billions of parameters that make up the world model. Like seriously, go read about Fair Use and Transformative Work. This stuff is well established in copyright law.

Again, if judges are convinced to accept this argument then fair use is dead and it opens the flood gates to massive new rent seeking by large holders of IP (Disney and similar).

edit: to be clear, yes the courts will decide this and yes I could be wrong. I don't think so, but we'll see.

3

BellyDancerUrgot t1_j67sn0r wrote

It isn't at all. 'Lol'

What I understand from our brief exchange:

- u have no idea about fair use or Creative Commons licensing; TDM rules apply to non-commercial uses, which is not the case here; scraping copyright-protected content is a legal infringement if used for commercial purposes and/or to generate profit.

- u make dumb analogies because u don't understand that representations in DL are equivalent to a photocopy of ur data. U can't remove an artist's watermark and use their IP to generate revenue.

"oh but I can look at someone's work and modify it a bit and that's fair use" - yes, except that's not what's happening here. Stop trying to throw random analogies trying to connect the two. Ur AI-generated art will have the same distribution as whatever input data it sourced from during inference, which is the entire foundation for the digital watermarking against generative diffusion and GAN models that has picked up in popularity.

−3

gantork t1_j6biy95 wrote

If you were right, most of the machine learning work done until today would be illegal and Google, OpenAI, Meta, etc., would have been sued to hell long ago.

0

BellyDancerUrgot t1_j6bzv82 wrote

Using scraped data for research does not violate copyright laws. Monetizing it as a product for the public does. Most of the work done by Meta, Google, Nvidia and other big tech isn't even available for public use, let alone monetized for public use. But yeah, sure, whatever u say! I've realized that the people on this sub who have no real know-how about ML/DL or about laws/legal consequences are the ones that are the loudest.

Have a good day.

1

dmit0820 t1_j69c4js wrote

> A neural network learns a representation from the data. It literally scans ur work.

The neural network best known for this ability is the human brain. Aspiring artists and musicians scan many works during training, which alters the parameters (synapses) of the neural net, allowing it to better recreate the training data or extrapolate from that data to create new and unique output. Sometimes the parameters in the neural net are configured so precisely that it becomes possible for it to re-create copyrighted works with high precision. The ability to do this does not constitute copyright infringement. Copyright infringement only occurs if the recreation isn't properly attributed.

0

Rufawana t1_j681wzh wrote

Google has shit the bed with AI.

They had the market advantage with transformers and ChatGPT-style tech but pissed it all away.

I have no idea how their CEO still has a job.

4

[deleted] t1_j68rcys wrote

[deleted]

4

tobbtobbo t1_j6a3r7u wrote

Maybe they're simply holding back and will come out with a truly innovative, finished product. Other, smaller companies have more to prove quickly.

1

Glittering-Neck-2505 t1_j6bu1fa wrote

I wouldn't say that. Based on TwoMinutePapers, they have made some remarkable advancements in AI R&D. The difference is, they didn't release a prototype to the public.

I think the big pitfalls they want/need to avoid are confident incorrectness (ChatGPT is lovely except when it confidently says nonsense) and advertiser comfort (they are a business after all). Clearly they don't have a market-ready product. But I wouldn't write them off.

Obviously, though, competition is unambiguously good.

1

Longjumping-Sky-1971 t1_j67odba wrote

They still released a huge dataset for others to train on; it's not worth it for them to deal with the legal issues.

3

ejpusa t1_j69hgv5 wrote

Google is just weird. It's a breakdown in management.

AI will put us out of business!
The Google MBA

But didn't we have a big role in inventing the latest AI?
The Google AI Scientist

AI will put us out of business!
The Google MBA

It's just a breakdown. This happens, all the time. Just an evolutionary process. I once worked at a startup in NYC with 5 MBAs running the shop. They just could not understand open source. It was incomprehensible to them. And these were 5 Ivy League grads.

"How can something be for free? That makes zero sense."

We ended up paying $25K a month for a "custom-built web server" application. It never worked; it was a disaster. I said, just use Apache. It's free!

"Free cannot be better than $25,000 a month. That's IMPOSSIBLE."

Company folded.

They just didn't get it. Same story at Google, they just don't get it. And the "business guys" there run the show. Not the coders.

3

Lawjarp2 t1_j67t11f wrote

Copyright is only supposed to protect effort when it isn't easily recreated. If AI can create good music then copyright is pointless. Why should only a few have the ability to make music and express themselves, and not everybody?

2

gthing t1_j69g69p wrote

Every decision Google makes is in regard to their legal liability and avoiding lawsuits. That's it. They are at higher risk because of their size.

2

Trumaex t1_j6a95m0 wrote

Then explain why they scanned all those books? :D

1

redroverdestroys t1_j69gsyh wrote

Who cares, someone else will do it better anyway. Google ain't shit.

2

chaddwoo t1_j6a84ji wrote

Google is choosing to die

2

luisbrudna t1_j6agp7k wrote

Give them time to sell all their Spotify shares first.

2

robustquorum09 t1_j67lpls wrote

Let's give the Magenta team time to turn red as it ripens MusicLM to its full potential.

1

malcolmrey t1_j68tqgf wrote

fuck them, eventually someone else will do it

they are on a high horse acting as if they care about humanity...

1

ArgentStonecutter t1_j68u7bf wrote

Don't get so salty about this. If it's a Google product they would have killed it just as you were starting to really get into it.

1

DukkyDrake t1_j68y4c5 wrote

It's really not worth messing with the music industry, especially for absolutely no gain.

1

Typo_of_the_Dad t1_j69bokl wrote

Companies can just let it generate music for their stores, shows, movies, games, etc., and artists stop making money completely, aside from live shows (which are already being taken over by holograms in Japan), while most people don't even notice. Unless it's made free and anyone can use it creatively and on the same level (not gonna happen).

1

lovesdogsguy t1_j6bo156 wrote

I think it's because it's probably the first thing with the true potential to bring AI to mainstream awareness. They may not want that at this point in time; it gives them time to keep working away (mostly) in private. It may or may not be a good idea, though; somebody's going to do it regardless if they don't.

If a model was released that enabled anyone to create beautiful music in any genre effortlessly (or even easily), the shift in public perception would be absolutely tectonic. Everyone would be talking about artificial intelligence. Image generators caused a small stir. This would incur societal-level awareness of what's happening.

1

prophetsguild t1_j6c2sxq wrote

There's an AI race between the US and China, and these lawsuits, which mainly fight against fair use of copyrighted material, will slow the US down significantly in that race.

1

Mixit-SingCovers t1_j6mcie6 wrote

You can try the Mixit app in the meantime; it can switch genres for any song.

1

SuperSpaceEye t1_j68jtkq wrote

Researchers in general rarely release stuff, sadly.

0

thehearingguy77 t1_j69hpku wrote

At least have the decency not to make your expletive part of my God's name, or to attach one to the name at all.

0

Sea_Emu_4259 t1_j69sllu wrote

Nothing new; they never released their Google AI assistant from 2018 that could make real phone calls on your behalf and book a haircut.

0