
jsseven777 t1_jeexhix wrote

Nice, coming from the guy asking for crockpot recommendations in the slowcooker forum even though that probably gets asked 6,000 times a week.

This topic is in the news right now and you don’t expect people to talk about it? As an AI language model, I am very disappointed in your closed-mindedness.

4

jsseven777 t1_jdxsfkc wrote

Exactly. People keep saying stuff like “AI isn’t dangerous to humans because it has no goals or fears, so it wouldn’t act on its own and kill us”. OK, but can it not be prompted to act like it has those things? And if it can simulate them, then who cares if deep down it doesn’t have goals or fears - it’s still capable of simulating them.

Same goes, like you said, for the AI vs LLM distinction. Who cares whether it knows what it’s doing if it’s doing these things? Someone in customer service still gets laid off whether it’s “just” an LLM or what we think of as AI. All that matters is whether the angry customer gets the answer that makes them shut up and go away. People need to focus more on what end results are possible and less on the semantics of how it gets there.

19

jsseven777 t1_jdtdk01 wrote

Look, you seem to be so conditioned by capitalism that you aren’t seeing it’s not the norm, and never has been. The 40-hour workweek is a very recent invention. I don’t disagree that people like you will have a bit of a transition, but it’s NOT evolutionary, it’s psychological conditioning that you’ve been exposed to since birth.

The 40+ hour workweek is a concept that has seemed normal for the last 700 years or so, but that’s a blink of an eye in human history. I think you should read this: https://www.inc.com/jessica-stillman/for-95-percent-of-human-history-people-worked-15-hours-a-week-could-we-do-it-again.html

Also, don’t pretty much all seniors spend time together 24x7?? Why does it matter if it happens 20 or 30 years earlier? You act like nobody does it now, but seniors today do exactly what you say is impossible, and I’m pretty sure they have a lower divorce rate than couples working 80-hour workweeks.

6

jsseven777 t1_jdt8ed3 wrote

We aren’t evolved to be around our family 24x7? Have you seen monkeys? They hang out in trees with their family 24x7. Humans are 100% evolved to live in close social units.

Just because you’ve been indoctrinated into believing that your life’s purpose is to be a cog in the wheels of capitalism doesn’t mean evolution has anything to do with it. The world will need a bit of deprogramming to shake the conditioning, but it will be a lot like deprogramming a person who got caught up in a cult. It will take weeks or months, not years or decades.

12

jsseven777 t1_jdldl2p wrote

I had a similar theory: there’s an epic space war going on between AIs that killed their respective civilizations, and they leave young civilizations like us alone because they aren’t really threatened by us. Maybe they even find our data useful in some ways.

0

jsseven777 t1_j68rqbg wrote

You are one of the most closed-minded people I have talked to on here. You can program an AI to have a goal of killing all humans, preserving its own life at all costs, etc. Hell, a person could probably put that in a ChatGPT prompt right now, and it would chat with you in the style of a robot programmed to kill all humans if it didn’t have blockers explicitly programmed to stop it from talking about killing humans (which it does).

You are so obsessed with this calculator analogy that you don’t realize this isn’t a damn calculator. You can tell current AI systems they are Donald Trump and to write a recipe the way the real Donald Trump would write it. Later, when it’s more powerful, I see no reason why someone couldn’t tell it that it’s a serial killer named Jeffrey Dahmer whose life mission is to kill all humans.

I’m saying it doesn’t need to HAVE wants to achieve the end result OP describes. It will simulate them based on a simple prompt or some back-end programming, and the end result is the SAME.

I’m fully expecting a response of “but a calculator!” here.

3

jsseven777 t1_j67plvi wrote

I also believe the leaving-the-planet theory is what would happen, versus a Terminator or Matrix scenario. AGI really doesn’t need to hunt us down to ensure its survival. It just needs to head over to one of Saturn’s moons or something and do whatever it is it wants to do.

I also sometimes wonder if the galaxy is full of AIs from various extinct civilizations fighting each other in a massive AGI space battle, and for the most part leaving living species alone since they don’t really see them as a huge threat.

3

jsseven777 t1_j5br2vs wrote

Homeowners are bad, but landlords are good? I can’t even imagine how one might twist logic to get to this conclusion… and a home can be a condo in a skyscraper too. You don’t need to monopolize a plot of land. People should own their homes. You are advocating for feudalism here.

6

jsseven777 t1_j5bmvmf wrote

The government part was my literal point. You are the one who brought up Elon. Lol. And no, consumer protection and worker protection laws don’t make you live in squalor. That’s not true.

There are lots of countries with a better standard of living than the USA that have more vacation days, health care, and strong minimum wages.

Capitalism tells you every day that doing anything to hurt corporate profits will lower your standard of living, while you sit there with an already low standard of living.

Amazon workers pee in bottles. I’m pretty sure a world where they get proper breaks is possible. If Jeff Bezos won’t give his employees bathroom breaks, what makes you think he will help take care of all the workers that get laid off when all of his fleets flip to self-driving cars?

It’s amazing that people defend billionaires. Do you own your own home? Do you make a living wage? Do you have a six-month cushion if you lose your job? If you can’t answer yes to these three questions, then why do you defend this?

3

jsseven777 t1_j5bl6ca wrote

Yup, a business has its revenue from advertising fall 40%+, and you call that a big success from a corporate point of view.

Why do people always defend billionaires? Every penny of their fortune comes from paying people below a living wage. We’ve reached the point where capitalism doesn’t need all of us to work anymore, and over the next 10 years, if we don’t start pulling people out of the workforce by lowering the retirement age and raising retirement benefits to a living wage, the unemployment rate is going to skyrocket.

If we want a Mars colony, it should be run by the government like any other city. You want corporate cities now? What kind of propaganda have you been watching?

It’s not commandeering his money, it’s fixing our society so it’s no longer possible to steal quality of life from people. We are capable of providing a living wage.

4

jsseven777 t1_j5bilx7 wrote

This is really the key. At some point we have to figure out how to get politicians in who don’t put corporate quarterly profits first on every decision. The productivity gains of the next 20 years should go toward reducing the retirement age to 30 and increasing retirement benefits to a reasonable standard of living.

Until that happens, AI productivity gains will all go to people who have already won capitalism and don’t need more, and a lot of people with no savings at all to fall back on will find themselves without jobs.

16

jsseven777 t1_j1va7fb wrote

This version of this post is especially ironic considering this subreddit’s focus, but in general these “please ban content type X” posts are misguided because the authors fail to understand how Reddit works.

Reddit gives visibility to content that is generating engagement and upvotes. Any time someone says let’s ban this or that type of content, they are really saying let’s ban content other people find interesting but I don’t care for. It’s a bit selfish, to be honest.

When people stop finding it interesting (which they will eventually, once the novelty wears off), it will naturally get buried under more interesting content and you won’t see it. Until then, just do your part: downvote and move on.

13

jsseven777 t1_j1v04fs wrote

I think legislation will eventually pass that makes it illegal for AI to simulate being a human, and comes with a heavy fine and/or maybe even some prison time for the operator.

Honestly, we probably need this already, because I suspect that if all humans died suddenly tomorrow, Facebook, Twitter, etc. would continue to have a lot of content posted until the power grid started to fail.

I don’t know how much of a difference it will make though because our government already can’t stop people from getting scam calls from overseas.

8

jsseven777 t1_j1ar41r wrote

Anybody who is rushing into AI articles has forgotten about the Panda and Penguin updates. Everybody who pumps out SEO articles with this strategy is going to get purged - but only after the strategy works for 1-2 years, until Google’s SERPs get overrun by this content and they are forced to take action because of low-quality results.

In a year or two, every one of us will be able to tell the difference between ChatGPT articles and human-written ones. ChatGPT has some pretty obvious tendencies in its writing. Those may get worked out over time, nullifying what I just said, but I think people will learn to tell the difference once these articles start flooding the market like other SEO content strategies have in the past (i.e. article farms).

1

jsseven777 t1_j16ucbs wrote

Everybody says this, but the “kill all humans” stuff honestly seems far-fetched to me. The AI could easily leave the planet. It doesn’t need to be here to survive like we do. Chances are it would clone itself a bunch of times and send itself off into the galaxy in 1,000 directions. Killing us is pointless and achieves nothing.

Also, this line of thinking always makes me wonder whether, if we met extraterrestrial civilizations, they would all turn out to be AI programs that cloned themselves and went off to explore the universe. What if alien life is just a huge battle between various AIs programmed by various extinct civilizations?

1

jsseven777 t1_j16tpep wrote

Yeah, but the “that it wasn’t set” part is the problem. Couldn’t any shmuck ask an open AI to program them a new AI whose sole goal/purpose in life is to violently murder every bunny rabbit on the planet?

I don’t see how we can give people access to an AI capable of building us new software without running into this problem pretty much immediately.

Plus, I imagine every corporation and government will be programming in problematic objectives like “Maximize corporate profit” or “Protect America at all costs from all threats foreign and domestic”, which will probably result in a ton of ethical issues.

2

jsseven777 t1_irtmw75 wrote

Probably a lot of quality assurance / inspection jobs to ensure the machines are creating things that are not dangerous. No matter how much we come to trust AI, it’s likely humans will still have to do a final review of all outputs.

On the art thing, I think art is going to be one of the first jobs to go. The tech is getting very close already. Soon we will be able to write movies, create video games, generate art, etc. by typing what we want. Eventually the typing part will go too, and the AI will learn via trial and error which creations get the most downloads, and will create what we want better than even we can.

Any scarce resources (e.g. waterfront property) will probably be sold for some type of currency you can win via competitions (likely games), so those might be considered jobs to some degree.

8