Submitted by OldWorldRevival t3_zo9v5x in singularity

The singularity is inevitable. If I convinced everyone here that we actually have to proceed with extreme caution and flesh out the dangers and solve the problems now - we would not delay the singularity by any meaningful amount.

But we could dramatically improve the odds that its outcome is positive.

Ethics issues:

  1. People scraping copyrighted works on the internet.
  2. Privacy.
  3. AI being used to abuse power, especially since technology has historically increased social stratification, not reduced it.
  4. Assuming that the control problem isn't an existential threat to humanity.
  5. Assuming that merging minds with machines isn't going to result in a literal mind-control scenario, specifically with respect to point 3. Artificial superintelligence will be able to robustly alter your thought patterns, no problem. That is, we might see a scenario along the lines of 3, where transhumanist types significantly augment their capabilities by physically interfacing their brains with these machines - and the intelligence boost that such a merger confers will be enormous.

Like, everyone seems to want Star Trek, but they don't seem to ask whether we're building The Enterprise or we're building The Borg.

And worse, when you talk about ethics issues, people seem to shrug their shoulders and call it inevitable progress and "what can you do?" as if AI can't be developed in an ethical way. Failure to act ethically on "small" things like data use, permissions, etc. in the age of approaching AGI is extremely worrisome.

Just assume that no matter how hard anyone opposes the progress of AI, it will more or less continue at the same rate. So, instead of being defensive about critiques of AI use, you should actually welcome criticism, because every step that someone else takes to ensure that this goes right is literally work someone else is doing for you to ensure your survival, and your future ethical treatment when these tools are more powerful.

0

Comments

AndromedaAnimated t1_j0mundc wrote

I have previously written a serious answer.

Sadly OP is a discriminating and judgemental person who tries to shut down arguments with „talking to you is like teaching a dog calculus“.

That’s why I deleted my answer about alignment etc.

I recommend everyone who wants a serious discussion to avoid this thread. For the rest - good luck.

21

QuietOil9491 t1_j0pls5f wrote

lol

“I would have written a super intelligent, insightful comment…. But I wrote this instead”

0

AndromedaAnimated t1_j0pmbvg wrote

I had actually written the comment first and then edited it to this. Since you seem interested in at least the humorous aspect of it, let me give you more to laugh about.

The deleted comment of mine said:

Alignment is more than belonging to one party. Alignment is also preventing reward hacking etc. It’s much more complicated.

That was the main content. I did provide nicer words and arguments. If you want to discuss further, you are welcome to do so.
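
To make „reward hacking“ concrete, here is a toy sketch (purely illustrative, not any real system): an agent that maximizes the stated reward while defeating its intent. Think of a cleaning robot rewarded per mess removed:

```python
# Toy reward hacking: the reward counts cleanups, not cleanliness,
# so the optimal policy is to manufacture messes and clean them again.
reward = 0
messes = 3
for step in range(10):
    if messes == 0:
        messes += 1   # exploit: create a mess so there is something to clean
    messes -= 1       # remove a mess
    reward += 1       # reward is paid per mess removed
print(reward)         # 10, even though only 3 real messes ever existed
```

The reward function looks reasonable, but the behavior it optimizes for is not what we meant. That, not party membership, is what alignment is about.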

If not, I am happy to have been your internet clown for today.

3

AsheyDS t1_j0lqxam wrote

>And worse, when you talk about ethics issues, people seem to shrug their shoulders and call it inevitable progress and "what can you do?" as if AI can't be developed in an ethical way.

Progress will continue regardless, but don't assume because things are moving along steadily that researchers (and others) aren't concerned with ethics, privacy, and other issues. The general public may seem apathetic to you (I'm assuming this post is aimed at them), because they're not in control of development, but people do need to discuss these things because they DO have the power to vote about them in the future.

13

genericrich t1_j0ma7wr wrote

Researchers don't sell products in the marketplace. Amoral corporations, the living embodiment of the system we've designed to centralize and amass capital, are the ones who sell products. The researchers may "design" for ethics, but the corporations put products into production, and if they have to make a tradeoff between profit and ethics, I will give you one guess which way they're going to jump.

We've seen it now in Midjourney, etc., where they just snarfed up a bunch of copyrighted images to power their lookalike/derivative-works machine (which is very cool tech, etc.), but which abuses copyright at scale. They retcon their liability by saying you can't copyright these images either, but they know full well people are going to do just that for book covers, printed works, etc. It can't be stopped. By the time the glacial courts get around to addressing it, the world will have changed, and at best there will be some minor changes to the law which won't help anyone whose rights have already been violated.

Not saying we shouldn't try but the deck is stacked by capitalism against us. Corporations are never going to be ethical until they are forced to be ethical, and that takes far too long to enact meaningful course correction.

6

PoliteThaiBeep t1_j0qhn66 wrote

Capitalism is not a threat but a tool. Dictatorship, on the other hand, is a threat, possibly greater than AI itself. Especially dictatorships we are powerless to stop - e.g. nuclear-armed Russia and China, an economic powerhouse of over a billion people.

If something horrible happens in democracies, usually people rise up and protest, and there will be different powers colliding and fighting for change. And often we are successful, but often we are not. We kind of need to be better at this, but poor fighting is still better than no fighting.

No fighting is the reality of any dictatorship. Everything is in the hands of a very few (read "Putin's People" by Catherine Belton) and the people have zero power.

Huge country-wide protests can have a limited effect, where tyrants will pretend to cave, but as soon as people go home, they quietly arrest or murder those who pose the most threat and tighten everything again, even worse than before.

And there's a vicious cycle going on there that is making any positive change extremely unlikely.

Further, as soon as something horrible happens in democracies, the whole world knows it from countless journalists and investigators who are (usually) well protected by law.

If something horrible happens in dictatorships, we almost never get the chance to know it, and in the few cases where we do, it will forever be denied that it happened, and all the journalists who were working on it will have disappeared from the face of the earth.

Which creates an incredibly distorted picture where dictatorships look nice and shiny, as if nothing bad ever happened there, while we're all incredibly focused on US problems, all of which combined don't even scratch the surface of dictatorship problems.

And even if you realize it, your worldview is still warped because most of what you will read and care about are bad events happening in democracies and subconsciously you'll feel that's where most problems are.

All the while failing to realize how we're being increasingly infiltrated by pro-dictatorship forces, which in turn has caused a 16-year-in-a-row drop in US (and worldwide) democracy scores. See freedomhouse.org. Look at the map. Look at the trends.

Do you seriously believe the 'evil corporations future' that's been endlessly portrayed in pop culture, movies, and video games is the threat? Look again.

1

WarImportant9685 t1_j0m1xnh wrote

I do hope that 1) AI alignment theory progresses faster than AI development, and 2) the alignment theory we discover is not about aligning to the will of one person, but about aligning to humanity in general.

1

AsheyDS t1_j0mabb5 wrote

>the alignment theory we discover is not about aligning to the will of one person, but about aligning to humanity in general

I don't expect that will be possible, aside from in a very shallow way where it's adhering to very basic rules that the majority of people can agree on, and any applicable laws. Otherwise, for AI to be aligned with humanity, humanity would have to be aligned with itself. If you want to be optimistic, then I would say one day, perhaps post-scarcity, the majority of us might start working together and coming together to benefit everyone. And then we can build up to an alignment with at least the majority of humanity. But I'm sure it will take AI to get there, so in the short term at least, I think we'll have to rely on both rules and laws, as well as the user that it serves, to ensure it behaves in an ethical and lawful manner. Which means the onus would ultimately be on us to align it by aligning ourselves first.

1

WarImportant9685 t1_j0mis5x wrote

Yeah. IF we succeed in aligning AI to a singular entity, whether a corporation or a single human, the question becomes the age-old question of humankind: greed or altruism?

What will the entity that gains the power first do? I'm inclined to think that we are just incapable of knowing the answer yet, as it depends too much on who attains the power first.

Unaligned AGI is a whole different beast tho.

1

AsheyDS t1_j0mn7la wrote

>The question becomes the age-old question of humankind: greed or altruism?

It doesn't have to be either/or. I think at least some tech companies have good intentions driving them, but are still susceptible to greed. But you're right, we'll have to see who releases what down the line. I don't believe it will be just one company/person/organization though; I think we'll see multiple successes with AGI, possibly within a short timespan. Whoever is first will certainly have an advantage and a lot of influence, but others will close the gap, and I refuse to believe that all paths end in greed and destruction.

2

OldWorldRevival OP t1_j0mcbzs wrote

One of the potential scenarios I envision is that the only good solution we end up discovering for the control problem is to tie control of the AI to one person, with the AI constantly modeling that person's thoughts through a variety of methods, some that exist (such as language) and others that do not yet.

Then it continuously runs scenarios with this person.

The reason it's one rather than two is because two makes the complexity and nuances of the problem a lot more difficult from a human perspective.

The key to understanding AI is to understand that its abilities are lopsided. It's very fast at certain things and cannot do other things, and is not organized in the way that we are mentally (and doing so would be dangerous because we're dangerous).

0

AsheyDS t1_j0mm9q5 wrote

I don't believe the control problem is much of a problem, depending on how the system is built. Direct modification of memory, seamless sandboxing, soft influence and hard behavior modification, and other methods should suffice. However, I consider alignment to be a different problem, relating more to autonomy.

Aligning to humanity means creating a generic, universally accepted model of ethics, behavior, etc. But aligning to a user means it only needs to adhere to the laws of the land and whatever the user would 'typically' decide in an ethical situation. So an AGI (or whatever autonomous system we're concerned about here) would need to learn the user and their ethical preferences to aid in decision-making when the user isn't there or if it's otherwise unable to ask for clarification on an issue that arises.

If AGI were presented to everyone as a service they can access remotely, then I would assume alignment concerns would be minimal if it's carrying out work that doesn't directly impact others. For an autonomous car or robot that could have an impact on other people without user input, that's when it should consider how it's aligned with the user or owner, and how the user would want it to behave in an ethical dilemma. So yes, it should probably run imaginative scenarios much like people do, to be prepared, and to solidify the ethical stances it's been imbued with from the user.

3

WarImportant9685 t1_j0muh8q wrote

I wish I shared your optimism. But from the research I read, it seems that even the control problem is a hard problem for us right now. As a fellow researcher, what makes you personally feel optimistic that it'll be easy to solve?

I'll take a shot at why I think the solutions you mention are likely to be moot.

Direct modification of memory -> This is an advantage, yes. But it's useless if we don't understand the AI in the way that we want. The holy grail would be that we can tell whether the AI is lying by looking at the neural weights, or detect with 100% certainty whether the AI has a mesa-optimizer as a subroutine. But our current AI interpretability research is still so far away from that.

Seamless sandboxing -> I'm not sure what you mean by this. But if I were to take a shot, I'd interpret this as true simulation of the real world. Which is impossible! My reasoning is that the real world doesn't only contain garden, lake, and atom interactions, but also tons of humans doing whatever the fuck they usually do. The economics and so on and on. What we can get is only a 'close enough' simulation. But how do we define close enough? No one knows how to define this rigorously.

Soft influence -> Not sure what you mean by this

Hard behavior modification -> I'll interpret this as hard rules for the AI to follow? Not gonna work. There is a reason why we moved on from expert systems to modern AI. And now we want to control AI with expert systems?

And anyway, I do want to hear your reply as a fellow researcher. Hopefully I don't come across as rude.

1

AsheyDS t1_j0n9xiu wrote

>This is an advantage, yes. But it's useless if we don't understand the AI in the way that we want.

Of course, but I don't think making black boxes is the only approach. So I'm assuming one day we'll be able to intentionally make an AGI system, not stumble upon it. If it's intentional, we can figure it out, and create effective control measures. And out of the control measures possible, I think the best option is to create a process, even if it has to be a separate embedded control structure, that will recognize undesirable 'thoughts' and intentions, and have it modify both the current state and memories leading up to it, and re-stitch things in a way that will completely obliterate the deviation.

Another step to this would be 'hard' behavior modification, basically reinforcement behaviors that lead it away from detecting and recognizing inconsistencies. Imagine you're out with a friend and you're having a conversation, but you forgot what you were just about to say. Then your friend distracts you and you forget completely, then you forget that you forgot. And it's gone, without thinking twice about it. That's how it should be controlled.

And what I meant by sandboxing is just sandboxing the short-term memory data, so that if it has a 'bad thought' which could lead to a bad action later, the data would be isolated before it writes to any long-term memory or any other part that could influence behavior or further thought chains. Basically a step before halting it and re-writing its memory, and influencing behavior. Soft influence would be like your conscience telling you you probably shouldn't do or think a thing, which would be the first step in self-control. The difference is, the influence would come from the embedded control structure (a sort of hybridized AI approach) and would 'spoof' the injected thoughts to appear the same as the ones generated by the rest of the system.
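
Loosely sketched in code (purely illustrative pseudostructure, not a real design - and the deviation detector itself is the hard, unsolved part):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryController:
    long_term: list = field(default_factory=list)
    sandbox: list = field(default_factory=list)  # isolated 'bad thought' storage

    def commit(self, thought, is_deviant):
        # Screen each short-term item before it can write through to
        # long-term memory or influence further thought chains.
        if is_deviant(thought):
            self.sandbox.append(thought)   # quarantined, never written through
        else:
            self.long_term.append(thought)

mc = MemoryController()
flag = lambda t: "deceive" in t            # stand-in deviation check
mc.commit("plan the delivery route", flag)
mc.commit("deceive the operator", flag)
print(mc.long_term)  # ['plan the delivery route']
print(mc.sandbox)    # ['deceive the operator']
```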

This would all be rather complex to implement, but not impossible, as long as the AGI system isn't some nightmare of connections we can't even begin to identify. You claim expert systems or rules-based systems are obsolete, but I think some knowledge-based system will be at least partially required for an AGI that we can actually control and understand. Growing one from scratch using modern techniques is just a bad idea, even if it's possible. Expert systems only failed as an approach because of their limitations, but frankly I think they were given up on too quickly. Obviously on its own an expert system would be a failure because it can't grow like we want it to, but if we updated it with modern approaches and even a new architecture, then I don't see why it should be a dead-end. Only the trend of developing them died. There are a lot of approaches out there, and just because one method is now popular while another isn't doesn't mean a whole lot. AGI may end up being a mashup of old and new techniques, or may require something totally new. We'll have to see how it goes.

1

WarImportant9685 t1_j0ngwf5 wrote

I understand your point. Although we are not on the same page, I believe we are on the same chapter.

I think my main disagreement is that recognizing undesirable 'thoughts' in AI is not such an easy problem. As in my previous comments, one of the holy grails of AI interpretability research is detecting a lying AI, which means we are talking about the same thing! But you are more optimistic than I am, which is fine.

I also understand that we might be able to design the AI with a less black-boxy structure to aid interpretability. But again, I'm not too optimistic about this. I just have no idea how it can be achieved, as at a glance the two seem to be on different abstraction levels. If we are just designing the building blocks, how can we dictate how they are going to be used?

It's like trying to design Lego blocks so that they cannot be used to build dragons.

Then again, maybe I'm just too much of a doomer; the alignment problem is unsolved, but AGI hasn't been solved either. So I agree with you, we'll have to see how it goes.

1

Superschlenz t1_j0s335e wrote

>aligning to humanity in general

The body creates the mind. If you want it to have a human-like mind then you have to give it a human-like body.

0

Heizard t1_j0lu3dy wrote

"AI being used to abuse power, especially in that technology hashistorically resulted in increased social stratification, not less."

That's the biggest lie ever. Technology improved our modes of production and increased the livelihood of humanity.

The problem is with the system of distribution of our productivity, aka the socioeconomic system.

The great potential of AI is that it can revolutionize our modes of production again, making the current socioeconomic system obsolete.

Improving people's livelihoods and preserving our planet's biosphere and all life by advancing AI and our modes of production is the only ethical way to the future.

8

WarImportant9685 t1_j0m2a4i wrote

No, he's correct; inequality has been rising insanely because of technology. Of course the socioeconomic system affects that too, but tech definitely amplified it, as it makes it easier for the rich to get richer. The easiest example is billionaires.

But you're also correct that the lower and middle class standard of living has been uplifted by technology.

I think the question is what will happen when AGI arrives. There are several views, two of which correspond to OP's and yours:

  1. Inequality becomes insane, insane, insane, as the lower and middle classes are replaced by AGI. They are gone - dying by starvation or in a last struggle.
  2. Maybe it's the contrary: as it becomes extremely easy to get resources, the lower and middle classes' standard of living is lifted to be very good. Maybe the classes are abolished altogether.

IMHO, we don't know which one will happen - thus the term singularity. Possibly for the first time in history, we are going to create a thinking machine that is much smarter than humans. Whether it will be the god or the devil, or just the staff of the god or the trident of the devil, nobody knows. But alignment theory does teach us that the possibility of the devil is real and logical.

−2

AndromedaAnimated t1_j0mowvc wrote

If the lower and middle class are replaced by AGI - meaning the FUNCTIONS of the lower and middle class are replaced by AGI - and any classes except one disappear…

Then there is no need for classism anymore.

Why would anybody WANT there to still be a lower class?

Sorry. I am not trying to be mean. Just trying to understand your argument. I somehow don't get what is tragic about the disappearance of the lower classes?

3

WarImportant9685 t1_j0mpzhd wrote

By disappear I don't mean they become upper class. I mean they die from starvation, or a last-struggle kind of thing.

3

AndromedaAnimated t1_j0n7rm0 wrote

Okay then of course it would be pretty bad. Thank you!

I do hope even the richest elites wouldn't want to kill so many people, though, but would instead want everyone to prosper… I cannot imagine that just letting so many people die - when having the means to prevent it - is something most people would be comfortable with.

1

OldWorldRevival OP t1_j0mbtee wrote

These people are going to lead us into slavery or being made irrelevant because they think that technology is magic.

Magic-minded fools.

Most of them probably aren't even in technical areas of study...

0

Wassux t1_j0lnipt wrote

Then say, what can we do?

Of course you can do it in an ethical way, but how are you going to stop China and North Korea? Please give a detailed explanation with steps on how we will do that from our homes.

6

OldWorldRevival OP t1_j0lnvgo wrote

Perception of public opinion is powerful.

People think that they don't matter, that their voice doesn't matter, but it absolutely matters.

North Korea and China will forever be behind the USA on the AI front, even if we do it more ethically. North Korea is a joke.

−4

-NotAFederalAgent- t1_j0m3av7 wrote

>China will forever be behind the USA on the AI front

Where are you getting this info from? China is pouring millions into AI right now. This take seems insane to me lol.

12

AsheyDS t1_j0lp8g6 wrote

>China will forever be behind the USA on the AI front

Only if the USA continues development unabated.

8

Wassux t1_j0m9mx3 wrote

You correct me if I'm wrong. You have no clue?

Edit: downvoting me is pointless. Since he said we can do something, I'd like to know what.

−1

OldWorldRevival OP t1_j0lo52s wrote

Also... people seem to want to exploit others' data - such as artwork - and then assume that such an attitude won't come around to bite them in the ass.

−5

Wassux t1_j0m9w5u wrote

It's not exploiting. Are humans exploiting when they learn from others?

Or are the artists insane for claiming a method as their property instead of their work?

10

OldWorldRevival OP t1_j0md448 wrote

> It's not exploiting. Are humans exploiting when they learn from others?

Let's say someone is a very technically talented artist, but isn't very visionary. There are a good number of people like this, who paint pretty boring subject matter and do it well, but in a very derivative way.

Now say that this artist is friends with someone who is very creative and is developing an art style, and this person comes up with a powerful, unique style.

But then the first artist copies the style and becomes famous for developing it, even though their friend actually did.

This is what AI art does at scale - something that is equally unethical when a person does it. It's just that for the human version, people are usually protected by a de facto copyright system where you can trace who originated an art style through publishing dates, posting online, that sort of thing. Reputation, basically. AI gives people the ability to steal a style before its originator builds that reputation.

So, yes, sometimes humans are exploiting others when they learn from them.

1

gantork t1_j0mkzfn wrote

Every artist alive today has learned from, taken inspiration from, and copied other people. It's impossible to come up with a unique art style at this point, unless you lived completely isolated and never looked at anyone else's art in your life. By your logic all artists are unethical and exploitative.

7

OldWorldRevival OP t1_j0mrnb6 wrote

> By your logic all artists are unethical and exploitative.

"All."

That's an absolutely asinine conclusion that you stated just to be inflammatory because you're probably kind of an asshole troll type.

Some artists are absolute asshats and do rip people's ideas off rapidly, or collage other people's work in Photoshop and paint over it (which is highly frowned upon). Doing stuff like that can destroy your reputation and cost you your professional career.

AI art is basically an automated version of what is already considered bad form.

You seem like you'd be one of these asshats if you actually took up art.

−3

gantork t1_j0mu7x0 wrote

The truly asinine thing here is how quick you are to start insulting me.

Your own arguments and way of thinking are what lead to that conclusion even if you don't realize it.

I completely disagree with all your takes, but whatever, think what you want. I'm not gonna waste more time with this.

3

ouaisouais2_2 t1_j0myebu wrote

The richest miners are not those who are the first to find an ore. The richest are those who follow those who are the first to find an ore.

3

AndromedaAnimated t1_j0mqs2y wrote

And the example you just gave would not be any copyright infringement or immoral behavior, by the way.

If the „mediocre“ artist draws and paints better pictures than the „creative“ one, then the „mediocre“ artist will get famous and the „creative“ one will not. Skill is just as important as the idea. Without the mediocre friend, no one would probably even hear of the idea the visionary had.

Then there is luck, marketing, rich parents, a sponsor or sugar daddy/mommy… Lots of ifs and buts on the way to successful living as an artist.

And it’s all not amoral. Not wrong. And has nothing to do with theft.

You cannot actually steal art unless you pick up the physical picture and take it with you. Once you put something on the net, you lose control of its use. I don't get why, instead of crying, the artists don't just use AI too? To improve their art?

2

OldWorldRevival OP t1_j0msq1r wrote

Yeah.

Not going to get anywhere with you, clearly for the same reason I don't try to teach a dog calculus.

Cya.

−1

AndromedaAnimated t1_j0mu4ne wrote

Thank you for showing that you are not a respectful and fair human. All your points in this moral debate have thus been proven wrong, as you are an amoral person.

Border Collies are kinda good at math, by the way.

2

QuietOil9491 t1_j0pnecj wrote

Dude, this take is dishonest as fuck.

  • AI isn’t a person
  • AI doesn’t “learn” like a person does
  • Human people are imbued with rights by function of existing, AI doesn’t have human rights
  • AI image models are “trained” by inputting libraries of copyrighted images, without consent of the artists whose work is being used to allow those AI to function

You know full well AI isn’t sentient, nor acting on its own, so it’s not in any way shape or form “learning like a person does”

It’s a fucking computer, run by corporations, for profit, using the work of others without consent

−1

Wassux t1_j0qv84t wrote

Please stop the strawman bs. I never said AI is sentient, nor did I say it is acting on its own. It does learn exactly like a human does; the structure is exactly the same as brain structure. Source: I'm an AI engineer (or at least will be in a couple of months - I'm finishing my master's).

The human brain works by making connections between synapses, and these connections are then given a weight (it works with the amperage of electrical signals in the brain). An AI has nodes that are given weights in math, so you get matrix multiplication, exactly like the human brain, except way less efficient. Although we're working on edge AI chips that either have memory integrated into the processor or are analog chips, to more completely copy the human brain.
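
In code, that weighted-connection story is just a matrix multiplication followed by a nonlinearity. A minimal sketch in NumPy (toy sizes, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)       # incoming activations ("firing rates")
W = rng.normal(size=(4, 3))  # learned connection weights ("synapses")
b = np.zeros(4)              # biases

h = np.tanh(W @ x + b)       # matrix multiplication plus nonlinearity
print(h)                     # activations passed on to the next layer
```

Training is then just nudging the numbers in W until the outputs improve.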

And the method of learning is also very similar to how humans learn. So imagine it as an AI learning from other humans, just like humans learn from other humans.

You may not like it, but that's how it works.

1

WarImportant9685 t1_j0m1flg wrote

Yeah, this concerns me. Blatant stealing of art from known artists, with seemingly no public backlash from the tech community, seems distasteful. Even though we (the tech community) are the ones who understood that the training data must have been webcrawled from the artists with no permission. It seems kinda trashy that we don't care about other communities as long as it doesn't touch our territory.

I've always identified with tech people. AI makes me think twice.

−4

Wassux t1_j0m9qxo wrote

It's not stealing. It's learning from it. God, people should only talk about things they know something about.

4

OldWorldRevival OP t1_j0mculy wrote

This is really an ignorant take.

I find that people who take this perspective really don't understand how derivative the works are, or how AI webcrawling basically destroys people's ability to develop an artistic style and get credit for it, because the AI will gobble up their style and spit it out with lightning speed without crediting them at all.

To say that this is exactly how humans do it too is absolutely insane. We have so many different types of much more complex, highly developed mental functions and a conscious experience.

You just want to play with this tool because you have no impulse control to wait a few months for more ethically developed tools to come around.

Supporting this nonsense is exactly how people will exploit legal loopholes to take advantage of you.

−3

DaggerShowRabs t1_j0mei28 wrote

Maybe it's derivative, but it's derivative of a large, large number of works and artists.

I would challenge you to post an example from a series of random prompts, and point out which artists the work is "derived" from.

You couldn't point out one, or even a handful, because of the sheer number of different works and artists fed in. Even if it's "derivative", it's nearly imperceptibly derivative due to the sheer volume of data.

To say it's "copying" another artist is just completely, utterly incorrect.

4

OldWorldRevival OP t1_j0mfjh1 wrote

I think you might not be up to date on the topic.

Corridor Crew did an excellent video where they showcased the new tech and credited the artist whose name they used in the prompt, but the output was very, very much like that artist's images.

I think this may be more of an awareness issue.

It is absolutely able to copy styles, and with high accuracy. I think artists on ArtStation are particularly angry because they're very up to date on the styles and trends of other artists, so many of them can also see whose work is being used.

2

DaggerShowRabs t1_j0mfot7 wrote

>I think you might not be up to date on the topic.

You would be wrong on that. I'm talking about a series of random prompts, not prompts designed specifically to invoke a specific artist or style.

3

AndromedaAnimated t1_j0mrsup wrote

So it’s basically a tool that can be misused. Previously, humans have been this tool.

Have you never seen artworks being copied and sold illegally? Every dollar store and flea market has „stolen art“, and you used to be able to even order large physical copies of artwork through the internet (a friend of mine did, that’s how I know, and no, I wouldn’t do that - I am a pretty skilled hobby artist, I‘d copy my Van Gogh myself).

Maybe the problem here is the prompts being used? If a human orders an AI to draw „trending on ArtStation“, it’s the human stealing, not the AI. So it would be very easy to prove you are not guilty, eh?

And it’s even possible to use your own pictures…

Is it the knife that’s evil, or the hand that stabs?

1

OldWorldRevival OP t1_j0msidz wrote

AI art is not evil, innately.

The scraping of people's copyrighted, supposedly protected works and then using that work as tooling for your engineered system? That is evil, and it is a human decision that is the evil.

0

AndromedaAnimated t1_j0mv7h3 wrote

Sorry, I cannot answer more. Your comment about dogs and calculus shows me that you are not safe for discussion 🤪

0

WarImportant9685 t1_j0mfkj0 wrote

No, I have tried Stable Diffusion, and you can enter a prompt with blablabla + an artist's name. It does recreate the artist's style. And sometimes there is even the artist's signature in the output!
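
It takes only a few lines with the open-source diffusers library (a minimal sketch - the model ID and prompt are illustrative, and the artist name is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint and generate one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a castle on a cliff, fantasy concept art, by <artist name>"
image = pipe(prompt).images[0]
image.save("castle.png")
```

That trailing "by <artist name>" is the entire trick.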

−1

rushmc1 t1_j0lv5nh wrote

For one thing, the principle of copyright has been abused to the point that no knowledgeable person can in good conscience support it in its current form any longer.

6

Fluffykins298 t1_j0mjfan wrote

Fear is the mind killer

Criticism is great when it's well founded and comes from genuine concern, rather than people attacking the whole concept of AI because it lacks a "soul" or is "stealing" their job.

4

amortellaro t1_j0q6yt7 wrote

I think the "soul" conversation is still worth having, as one of art's great virtues is its cathartic nature. For many human artists, it is an outlet. This is something that human-based art offers that AI (today) cannot.

There is also the connection piece between the artist and the consumer, that is currently lacking in purely AI-generated art. I think these criticisms based on the "soul" (using this as an umbrella term) of art's participants are absolutely warranted. Fear, maybe not.

2

OldWorldRevival OP t1_j0msct1 wrote

I am actually not against AI at all - in fact, I considered going into it at one time (and still might, especially because the danger seems to be growing and there are some philosophical and technical talents I might be able to apply to a lot of specific AI problems).

> Criticism is great when it's well founded and comes from genuine concern, rather than people attacking the whole concept of AI because it lacks a "soul" or is "stealing" their job.

So... it's going to replace all of our jobs. The other thing we need to get ahead on is actually getting UBI pushed through.

I'd be willing to fear-monger to get UBI pushed through, especially with the way that conservatives tend to act.

0

Fluffykins298 t1_j0mtjhw wrote

So then - and correct me if I'm missing your point - you're not against AI, you're just using it as a scare tactic to push people toward implementing a universal basic income?

5

AndromedaAnimated t1_j0na15h wrote

He has a problem where he cannot distinguish philosophy from technology. Somehow the dude even thinks he has relevant skills that AI research would need. Guess he is advertising for himself? He is pushing it as a scare tactic while thinking he might be the potential savior.

I mean, it’s ok, I can understand dude. It’s a legit approach when having a psychotic episode. I am even worse in mine, I keep thinking I have to call Elon Musk back about Neuralink, language and semantics processing and electroencephalography. Good I don’t have his number. But still. I dunno, OP is kinda not being nice here in this thread.

2

AsuhoChinami t1_j0n59su wrote

I am not opposed to reasonable due caution and being responsible. I am opposed to excessive navel-gazing and needlessly slowing down progress. No matter the field, people who emphasize "caution" almost unfailingly fall under the latter.

3

OldWorldRevival OP t1_j0niphj wrote

No one is going to slow progress down. It's like trying to stop a river. What you can do is direct the river around the village.

2

AsuhoChinami t1_j0nnf26 wrote

That sounds good in theory, but I'm not sure how true it is. It's true that things like AI can't be slowed down, but other fields like medical tech can be. Medical technology can be actively blocked, or slowed down simply by not lending it support or financial aid.

1

Black_RL t1_j0q5f9r wrote

People can’t agree inside their heads, let alone with others, what you want is impossible by nature.

1

alexiuss t1_j0sex86 wrote

>People scraping copyrighted works on the internet.

Stable Diffusion AI doesn't scrape anything. It looks at billions of tagged images to produce one image, using Markov-chain mathematics within the latent space with idea-vectors. It's nothing at all like scraping, and every image it produces is new and not like anything else, because it basically starts as noise at its base.
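
The core loop, stripped down to a toy sketch (illustrative only - the real model uses a trained noise-prediction network and a proper scheduler, not this stand-in):

```python
import torch

def predict_noise(x, t):
    # Stand-in for the trained denoising network (a U-Net in Stable Diffusion).
    return 0.1 * x

x = torch.randn(1, 4, 64, 64)     # the latent starts as pure noise
for t in reversed(range(50)):     # walk the Markov chain backwards
    x = x - predict_noise(x, t)   # strip away a little predicted noise
# x is now a brand-new latent "dream", decoded to pixels afterwards
```

There is no stored image anywhere in that loop to copy from.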

GPT3 AI must study more books to become an amazing personal assistant. Anyone not letting it read everything is evil. An open-source GPT3 will change everything.

>Privacy.

Use a personal GPT3 AI to generate fake info noise around yourself. It's the absolute privacy bubble.

>AI being used to abuse power, especially in that technology has historically resulted in increased social stratification, not less.

Open source AIs will give power to the people. It's the ultimate equalizer.

1

OldWorldRevival OP t1_j0sf9ip wrote

> Stable Diffusion AI doesn't scrape anything. It studies a billion images as visual ideas to produce one image using fractal mathematics. It's nothing at all like scraping and every image it produces is new and not like anything else because it starts as noise at its base.

False.

To train the AI, data was scraped. The AI itself does not scrape data, but lots of private data was scraped without consent.

1

alexiuss t1_j0sh83m wrote

> but lots of private data was scraped without consent.

Not scraped - tagged by LAION so that the AI can study it to understand various shapes, colors, and concepts. The final AI doesn't store the images themselves, doesn't reference them, has no idea what it even learned, and cannot replicate the originals. It just knows ideas based on the billions of things it looked at.

Beside the point: open-source AI should be allowed to learn everything to assist everyone for free. It's an incredible tool that costs nothing and equalizes everyone - the first big step towards the singularity.

0

OldWorldRevival OP t1_j0stv3m wrote

Ok communist.

So anyone who does work means that you're entitled to benefit from their labor, no matter how little means they have, then.

What an asinine argument. You obviously have a callous heart and are willing to take hard work from people so that your fancy tool can be 2% better.

And the irony is the backlash from people like you being so mindless about it is what will actually slow it down.

So, you're an asshole and you're slowing progress. Congrats.

1

alexiuss t1_j0z785o wrote

You're the giant asshole for assuming things here, dawg.

  1. I grew up in the USSR. Communism is an ideology that doesn't work and ended up killing 100 million people, including both of my great-grandfathers.
  2. Open-source personal AIs are the future of uplifting individuals, because they are teachers that can teach anything and personal assistants that can help out with any mundane mental task.

>You obviously have a callous heart and are willing to take hard work from people so that your fancy tool can be 2% better.

Eh? Why are you assuming such ridiculous nonsense? I'm working on a personal AI that uses public-domain images and my own drawings. Personal AIs aren't corporate tools.

What you clearly misunderstand, because you're not a Python programmer, is that artists won't win against corpo AIs, since the final product doesn't contain the scraped data in it. There's no legal avenue of attack for them, and most corpo AIs are run by assholes who won't share their private training datasets. Nobody has any idea what's inside NovelAI or MJ. Not a single person can sue them.

Corpo AIs cannot be stopped; halting them is akin to trying to stop a chainsaw with bare hands.

The open-source Stable Diffusion code release launched an AI revolution, so there's a new AI company born every day now in nearly every country with Python programmers. The spread of AIs worldwide can't be stopped, can't be halted. It's moving faster than any laws are. The artists' protest against AIs is useless - the AI models are sprouting like mushrooms. Anyone who knows enough Python can make their own AI model nowadays.

Changing the law is useless because an AI can be hosted elsewhere. What is US jurisdiction going to do about an AI hosted in the free zones of Malta, Georgia, or Russia? Nothing.

The only way forward for artists is to accept, evolve, and survive - to build and use their own AI tools that are superior to the corporate counterparts, because corporate AIs bind their tools in too many imbecilic corporate restrictions.

A network of personal AIs like Stable Horde is the most incredible thing for artists and humanity - they can run on any device, even a phone, and share processing power. Consider reading about Stable Horde; it's the future of personal AIs that will be truly unstoppable and benefit all users.

1

OldWorldRevival OP t1_j0su0q5 wrote

FYI, LAION was funded by these companies so they could protect themselves from lawsuits. It was shady Machiavellianism. Shady Machiavellian types are building AI, and people support them.

We are so completely fucked.

1

alexiuss t1_j0uqfj2 wrote

No, we are not.

We're in an amazing timeline because of the manifestation of the open-source AI movement.

Open-source AIs benefit everyone for FREE.

We, the people building open-source AIs, are winning step by step; the corporations are the ones slowly getting fucked because of their imbecility.

Do you not understand that open-source AIs cost nothing and provide assistance in numerous fields for free, for everyone? They are a god-sent tech that's slowly uplifting everyone.

1

OldWorldRevival OP t1_j0uszvp wrote

Yea... I take a bit of issue with the idea that open source intrinsically makes something good. If I open sourced biotechnology that allows you to create a supervirus, that would be mostly just absolutely terrible.

Mainly in that I find a lot of people don't actually appreciate others' hard work and passion, and then they take that work for granted as well.

People want things like communism and shared labor, but then they fail to actually stand up for other people's hard work and for their being justly rewarded for their contribution. And hence, because of that failure, we have exploitative capitalism. Capitalism and communism are both philosophies that stem from selfishness and an unwillingness to stand up for goodness itself, for all.

In this case, the AI art "scrape everyone's data" proponents have ZERO appreciation for the hard work, dedication, and sacrifice involved, and because they found a new copyright loophole tool, they're fine using artists' work against them.

It's simple naiveté at best, and it's rotten exploitation at worst.

1

alexiuss t1_j0uv2y1 wrote

>open sourced biotechnology that allows you to create a supervirus

Nothing like that exists yet. Every single open-source AI model dreams using fractal math, nothing else. A dreaming AI is completely harmless - it creates visual and textual lucid dreams.

>because they found a new copyright loophole tool

this is just the start

The corporate models collected data through LAION, yes, but VERY soon there will be open-source models based on public-domain material or on an artist's own work used to teach the model. We're about 80% there.

> they're fine using artists' work against them

No. Corporations are bending over right now. The corporate models are slowly transitioning to completely de-listing artists as they're being constantly harassed by the artists who hate AIs.

SD 2.0 has already started purging artist names from its new training dataset and keywords, so you can't type in "by Greg Rutkowski" and get a result in Greg's style anymore.

1

OldWorldRevival OP t1_j0v67f1 wrote

Artists don't necessarily hate AI... they rightly hate their work being exploited.

Getting the ethics of AI art ironed out includes protecting artists' work from being used in these tools, and secondly, making it known when a piece of art is AI generated.

A key difference between AI and photography is that you know a photo is a photo and a painting is a painting. AI image generators are a totally new paradigm.

The fact that it obscures the nature of the image is problematic, and tools that identify AI art will become increasingly necessary to preserve the knowledge that something is authentic human work.

1

TheDavidMichaels t1_j0nyqpo wrote

While some may claim that the singularity is inevitable, this is far from certain. The development of AI is influenced by a variety of factors, including economic, social, and political considerations, and there is no guarantee that it will happen or happen anytime soon. It's also important to note that there are various approaches to developing AI, and it's possible to prioritize ethical considerations in the design and use of these systems. However, it's still worth considering the potential risks and unintended consequences of AI systems that may operate at an intelligence level beyond human understanding or control. Additionally, the idea that merging with machines could lead to a mind-control scenario is purely speculative and lacks any real evidence.

−1

Bruh_Moment10 t1_j1mzlko wrote

Can you talk for yourself next time? ChatGPT3 has a very particular writing style that's easy to sniff out if you've read enough of its work.

1