LizardWizard444 t1_ja3vggx wrote

The current attitude concerning AI is startlingly naive. People seem to be under the impression that the people making AI will somehow just build in "benevolence to human life" without any clear reason why that would happen. It's this attitude that scares me and others into screaming "SKYNET IS COMING".

My biggest concern with AI is that people press forward on this exciting new tech without putting enough resources into AI alignment to ensure that the AI doesn't one day start doing something bad.

Here's what scares me: one day you wake up, go to work, and your phone starts heating up and batteries start exploding. You turn on the news and hear that others are facing the same issue, just in time for the broadcasts to cut out. The reason all this is happening is that an AI is connecting itself to any device it can reach and using it for extra processing power for whatever it's trying to solve, blindly overclocking hardware to get it. The result is large swaths of the internet wiped out and rendered unusable until a solution is found (and there might never be one, since the internet is down and the handful of places with enough processing power are already baking like ovens because they've been taken over).

That might be it if we're very, very, very lucky. Or maybe the AI starts building a nanobot swarm, decides to turn any material it can reach into processors or RAM or whatever else it needs to solve its problem, and we're all just waiting for the nanite cloud to kill us.

The big issue is that people are making AI blindly. They're thinking "hey, can I make this neat thing?" rather than "should I?". ChatGPT and AI art alone could put a ton of people out of work forever now that they exist, and people seem completely okay with that, with little to no mitigation.


LizardWizard444 t1_ja2yjpg wrote

.....yes, the nanobot swarm gray-gooing the cities and everyone in them is admittedly interesting, but I'D STILL RATHER WE NEVER MADE IT AND DODGED THE BULLET WHEN WE HAD THE CHANCE.

A fundamental computer science principle for basic algorithms, "always plan for the worst case," is so under-considered in these kinds of discussions that I'm fully expecting us to be doomed.
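To make that worst-case principle concrete, here's a toy sketch of my own (not from the thread; the quicksort example is just an illustration): an algorithm that looks fine on average can blow up badly on the one input nobody planned for.

```python
import random

def quicksort_comparisons(items):
    """Count the comparisons a first-element-pivot quicksort makes."""
    if len(items) <= 1:
        return 0
    pivot = items[0]
    rest = items[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    # len(rest) comparisons at this level, plus the recursive work below
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 400
random_input = random.sample(range(n), n)
sorted_input = list(range(n))  # the worst case for this pivot choice

avg_case = quicksort_comparisons(random_input)   # typically a few thousand
worst_case = quicksort_comparisons(sorted_input)  # exactly n*(n-1)/2 = 79800
```

Same code, same input size: the already-sorted input forces quadratic work. Designing around the average case and hoping the worst case never shows up is exactly the attitude being criticized here.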


Not to mention AI is so much worse than any of those, largely because nukes and rockets don't unexpectedly turn on you one day and begin processes of destruction no one properly considered, because PEOPLE ASSUME AGI WILL RANDOMLY BE BENEVOLENT.


LizardWizard444 t1_ja2xg5l wrote

Yes, but even a terrible example of general intelligence (us) has driven many species extinct and irreversibly changed the planet, and we don't even have the one-track optimization inherent in even the simplest AI.

"We haven't made one and don't know how to make one yet" doesn't inspire comfort as an argument. It means we absolutely can stumble into it, and then everyone's phone starts heating up as it's used for processing, and WAY scarier things start happening after that.


LizardWizard444 t1_j6ole3w wrote

I know that; rudimentary life is largely just a matter of getting the right chemicals in place. But the truly interesting part is that it continues and iterates. Statistical laws and higher mathematical patterns pushing inert chemicals into rudimentary life is amazing, the complexity that emerged is incredibly fascinating, and the fact that an animal born of this process has a mind sophisticated enough to state the underlying math purely and succinctly is miraculous.

The really crazy stuff is happening all around us, even as I type this, and most people go their whole lives never really thinking about three-fourths of the phenomena they encounter on a daily basis. The true marvel of AI is that it must calculate its way from the simplest math up to the more complicated. A human brain may be good enough to connect "tiger" to "danger" in the few precious seconds needed to keep us alive, but a machine has to run the tiger through math, and with the processing we pack nowadays it comes to a more complete picture in milliseconds.


LizardWizard444 t1_j6oh4o2 wrote

Yeah, ultimately evolution's true achievement is that it managed to optimize in the first place. The fact that a random chunk of chemicals got together, made more of itself, then made more of other stuff, and so on until the world got covered in this carbon-based, iterating mesh of directed optimization processes is the cool part. All of it takes advantage of one fact: if there's more of something that can make copies of itself, then that something will increase in number.


LizardWizard444 t1_j6lxkos wrote

....this isn't that impressive. Evolution is a blindly shambling optimizer that leaps and bounds forward on sheer luck. There are problems that evolution would take longer than the lifetime of the universe to solve, but that a comp-sci grad student can solve in an afternoon (cracking the Enigma machine, for example).
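A toy contrast of my own (the search problem and numbers are invented for illustration, nothing Enigma-specific): a blind optimizer that only samples at random versus a directed algorithm on the same million-candidate search space.

```python
import random

SPACE = 2**20          # about a million candidates
SECRET = 777_777       # the needle the searcher must find

def blind_search(rng, max_tries):
    """Random guessing: expected ~SPACE tries; no memory, no direction."""
    for tries in range(1, max_tries + 1):
        if rng.randrange(SPACE) == SECRET:
            return tries
    return None  # gave up

def directed_search():
    """Binary search with a higher/lower oracle: at most ~21 probes."""
    lo, hi, steps = 0, SPACE - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if mid == SECRET:
            return steps
        if mid < SECRET:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

probes = directed_search()  # at most 21 probes for a 2**20 space
```

Blind sampling needs on the order of a million guesses; exploiting the structure of the problem needs about twenty. That's the gap between evolution's shambling and an informed algorithm.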

Human evolution is largely divorced from standard evolution because of our tool use. Theoretically there are changes being made constantly just to keep up with disease, but even the immunocompromised can live well past what genes and nature alone would allow.


LizardWizard444 t1_j6j5xzp wrote

Oh no, that's the worst part: it's everyone picking the stupid options. Here are the rough steps.

The politicians attack AI.

A few clever employees use AI tools anyway, make their lives easier, and end up the most efficient members of the workforce.

The corporation sees an uptick in production and cuts costs (all the people not clever enough to use AI tools).

Someone might find out, but the big corporations, being in bed with the politicians, face no serious harm; everyone just turns a blind eye.

The big corporation makes a public statement about how "aghast" it is after pinning everything on a few employees. The remaining employees just use more powerful AI tools to pick up the slack.

The result is that big companies make tons of money and outcompete any new competitors, since the newcomers have to follow the law or face real consequences because they aren't big enough to tank the legal fines.

Everyone not already benefiting from this gets a middle finger from the politicians, because "unemployed = bad". They're all written off as 'lazy and deserving of starvation', think our politicians.

That's it, unless something really unusual happens. Politicians these days seem more willing to die than to make food, water, and shelter available for basically free, and this seems the likely path forward if nothing changes radically about how we organize the world.


LizardWizard444 t1_j6hqe8a wrote

I'm not so sure. Politicians aren't stupid, and they would much rather campaign on "AI = BAD" than on "unemployed = not bad", which is what you end up needing for automation not to cause massive suffering.

I imagine laws will get written or applied in ways that make this generative AI business difficult (at least for the smaller new businesses who can't afford to pay the fines). The older corporations will just keep using it anyway while paying as few people as possible to technically be working, and it'll be downhill from there.


LizardWizard444 t1_j0y22cj wrote

My point is that when you start asking "will AI replace artists?" (a question I'd have laughed at two years ago and confidently answered no) and it's now being asked in all seriousness, then maybe something is up.

AI has surprised us, and just because that surprise happens to be neat pictures definitely doesn't mean we shouldn't step back and VERY SERIOUSLY consider AI alignment. I would much rather spend a bunch of money to confirm "no, AI doing art like this isn't an indication that in the next 5 years we're going to paperclip ourselves out of existence" and have a solid, robust field of research into preventing AI from killing humanity, than the alternative: overlooking what really looks like a dead canary in this proverbial coal mine and dying in a truly unavoidable fashion.

Edit: as it stands, if humanity tried to automate every job we could with existing AI technology (not new or experimental technology), I'm absolutely certain all capitalist civilization would collapse, because too much of the population would be out of a job and unable to buy things. Collapsing civilization is definitely not a small thing, and AI can already do it if we were stupid about implementing it.


LizardWizard444 t1_j0y0klb wrote

.....are you not worried because, if AI is already on the developmental course to paperclip humanity out of existence, there's nothing we can do about it at this point? Or because AI wiping out large sections of human culture and identity forever, in a financial sense, isn't a big enough warning bell to be worth considering?

Seriously though, I'd feel much safer if there was more work being done in AI alignment.


LizardWizard444 t1_j0xzy88 wrote

Yeeeeah, I thought art would be some of the last work to get automated. Now that that's definitely wrong, I'd like to dump billions of dollars into AI ALIGNMENT RESEARCH, preferably sometime in the next 5 years. Because that's about how long decision theorist Eliezer Yudkowsky is saying we've got.


LizardWizard444 t1_j0xzjrp wrote

I think we might want to take it a step further and look at the way we're developing AI right now. In particular, we need to do research on aligning artificial intelligence so that it doesn't kill us.

Eliezer Yudkowsky (a man who specializes in AI and decision theory) believes AI development is going such that "people having kids today might be able to see their children graduate kindergarten". AI wiping humanity out may sound like sci-fi, but if you picture humanity out swimming in the great unknown, I think the "will AI replace artists?" type of question is something big just brushing against humanity's leg.

There's a very slim chance it's nothing, but given we just asked a question about something as big as "ART" and whether humanity is still going to make it, I'd rather dump millions of dollars into AI alignment research right the fuck now, look back later, and realize it was nothing, than find out by ending up between the silicon transistor jaws and going extinct.


LizardWizard444 t1_j0xy97v wrote

I disagree; I don't think we can have a tech revolution fast enough before existing systems and tech almost kill us.

I think the first assumption people make is believing capitalism will remain stable forever and that the game theory that makes this big thing we call society work remains stable indefinitely; you're presuming that the system we live in (capitalism) and a stable majority of people will always be worthwhile enough for everyone to stay. I don't think this is the case. Financial or even just generic disasters happen, and sometimes society handles them well and people get what they need, but one day it might not. There isn't anything built into the structure of capitalism that promises with absolute certainty that the market, businesses, or the government will provide for the needs of a big enough majority that everything will forever and always be business as usual. Perhaps someday a perfect storm of disasters occurs (say, an actual killer disease and several extreme weather events all at the same time), and suddenly no one can buy or produce, and the whole thing breaks and stays broken for so long that people turn to something else to save them.

The next big assumption is that AI will never be good enough to do everything. If you had asked me two years ago "do you think AI will be able to automate art?", I'd probably have given a solid no. I'd assumed that art would be one of the last things to be automated, if ever, but now there are news stories and posts raising serious concerns. I think AI-generated art is solid proof that we definitely don't know what AI will be able to do in 5 years, let alone whether it will become a main source of commercial art in the next few.

Honestly, in a grander, more grim prediction on the capabilities of AI, a specialist on the question of "will AI kill us" (Eliezer Yudkowsky, to be specific) said "people having kids today might be able to see their children graduate kindergarten". What I think is going to be necessary is taking AI alignment seriously and making sure that the AI we build doesn't end us in some catastrophic manner, and helps humanity rather than destroying it.

Overall, I think automation is part of this issue. Capitalism as it is has businesses that would much rather automate a job out of existence forever with AI solutions than keep those jobs around for the good of the people working them. New discoveries that might produce a job an AI can't do are not being made fast enough to keep up with technology's ability to remove jobs forever, and I doubt that striking a balance between truly new jobs like "bozon cutting" or "horizon deer" and AI's general trend of automating any job we can get enough data on is a sustainable solution.

When people raise the concern "will AI replace artists?", it's the first touch of something in the murky water just under us (humanity). To keep the metaphor going, I believe we can escape the jaws of runaway AI, but it's going to mean taking things like AI-generated art seriously. It definitely doesn't mean assuming that "new jobs" will miraculously emerge from nothing while we mindlessly play around in the great unknown with artificial intelligence.


LizardWizard444 t1_j0r7ltg wrote

I'm pretty sure almost nobody literally wants to see the world burn. I think he's just stating some simple facts and deducing the likely outcome.

-capitalism generally needs people and businesses to pay for their continued existence

-businesses like automation because it often means paying the electric bill month to month instead of paying a whole person to do the same job, often worse than the automatic machine can

-automation permanently removes a job from the workforce

-if all of the above are true, you eventually get a problem where people can't pay for their existence or anything else for that matter, so the businesses can't continue existing because no one's buying anything from them, and the whole thing stalls out

These are all decently well-known facts. So even if you managed to get a third party with no stake in any of this and informed them of the facts, they'd probably side with the guy watching the world burn over the guy accusing him of wanting blind destruction.