User1539

User1539 t1_jeflzlr wrote

In the TV show (Terminator: The Sarah Connor Chronicles), the system that eventually becomes Skynet is taken by a liquid terminator and taught humanity. The liquid terminator basically has a conversation with Sarah Connor where it says 'our children are going to need to learn to get along'.

So, that's where they were going with it before the series was cancelled, and I was generally pretty happy with that.

I like Terminator as a movie, and the following movies were hit or miss, but the overall fleshing out of the story at least sometimes went in a satisfying direction.

So, yeah, they eventually got somewhere with it, but the first movie was just 'It woke up and launched the missiles'.

Which, again, as entertainment is awesome. But, as a theory of how to behave in the future? No.

1

User1539 t1_jefk4rg wrote

Definitely wearying ...

But, also, ask them why the AI in Terminator went bad. The only answer, because none is ever given, is 'because the plot needed to happen'.

The official story is that it just became sentient and said 'Yeah, those humans who have learned and created and ultimately organized themselves into countries and finally built me from the ground up? Terrible! Get rid of them!'

It never says why; we're just expected to be so self-loathing that it makes sense, so we never question it.

2

User1539 t1_jeeu7wd wrote

> Allow me to be incredibly clear. If we continue on the path we are on. We will die.

Okay, I was kind of there with you, taking it with a grain of salt, until that statement.

Take a deep breath, there's a lot you haven't considered.

First, you're assuming AGI will happen and immediately result in ASI, which will then be used by some huge government to take control, or will have missiles launched at it to prevent that.

If China could wipe us off the face of the earth, or Russia for that matter, as easily as that, don't you think they would have? I mean, what are they waiting for?

We're already utilizing the most powerful algorithms to farm dopamine ... and it's not working. Something no one talks about is how, after all the social cost of social media, almost none of those companies are actually profitable. Sure, they post profits, because they're publicly traded and their value is decided by investors. But if they were ordinary businesses and not publicly traded corporations? Twitter has never brought in more money than it has spent. Neither has Discord. Almost no one has!

So, we're sort of already running aground on that whole idea, and when people don't have money, because there's no work to do, there's no reason to want their attention.

A lot of the things you assume will happen would already have happened if they could, and a lot of the other stuff assumes an innate cruelty, as if governments and corporations will needlessly and pointlessly court rebellion by going out of their way to torture their citizens.

Why?

For the most part, what governments have been building towards since the dawn of time is stability. You see fewer turnovers in countries, you see less overt war, and when it does happen, you see more and more unity to stop that war.

Stability is not necessarily good, since what we're keeping stable is not the greatest system, but it's not like these governments that have been building towards stability are going to suddenly go nuts and start destroying themselves by torturing their citizens for no reason at all.

I get it ... even being a little paranoid, and seeing this pace, you'd come to these conclusions. But, you need to get out of your echo chamber and remember that technology almost always serves to empower the individual, and most individuals are not cruel.

1

User1539 t1_je5co7e wrote

It's all silly. There's no way it'll ever happen, and all of this is just pissing in the wind.

No one is going to stop because it's a highly competitive space, and anyone who does stop is just giving time to the competition to either catch up or get further ahead.

Even if OpenAI and Google said they were stopping, I wouldn't believe them.

5

User1539 t1_jdzsxbk wrote

My point is that we don't need AGI for AI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm, everybody!'

My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.

Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.

1

User1539 t1_jdy4opa wrote

I've been arguing this for a long time.

AI doesn't need to be 'as smart as a human', it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...

People asking if it's really intelligence or even conscious are entirely missing the point.

Non-AGI AI is enough to disrupt our entire world order.

31

User1539 t1_jdy4ig4 wrote

We need real, scientific definitions.

I've seen people argue we should give ChatGPT 'rights' because it's 'clearly alive'.

I've seen people argue that it's 'no smarter than a toaster' and 'shouldn't be referred to as AI'.

The thing is, without any clear definition of 'intelligence', 'consciousness', or anything else, there's no great way to argue that either of them is wrong.

5

User1539 t1_jd2la65 wrote

Oh yeah, I've played with it for coding, and it told me it did things it did not do, and it couldn't read back the code it produced afterward, so there's no good way to 'correct' it.

It spits out lots of 'work', but it's not always accurate, and people who are used to computers always being correct are going to have to get used to the fact that this is really more like having a personal assistant.

Sure, they're reasonably bright and eager, but sometimes wrong.

I don't think GPT is leading directly to AGI, or anything, but a tool like this, even when sometimes wrong, is still going to be an extremely powerful tool.

When you see GPT passing law exams and things like that, you can see it's not getting perfect scores, but it's still probably more likely than a first-year paralegal to get you the right example of case law, and it does it instantly.

Also, in 4 months it has improved on things like the bar exam about as much as you'd expect a human to improve over 4 years of study.

It's a different kind of computing platform, and people don't know quite how to take it yet. Especially people used to the idea that computers never make mistakes.

2

User1539 t1_jcz0uft wrote

Yeah, I've definitely found that in coding. It does work at the level of a very fast and reasonably competent junior coder. But it doesn't 'understand' what it's doing; it's like it's just copying what looks right off Stack Overflow and gluing it all together.

Which, if I need a straightforward function written, might be useful, but in its current state it's not going to design applications you'd want to work with.

Of course, in a few weeks we'll be talking about GPT-5, and who even knows what that'll look like?

4

User1539 t1_jcyh91j wrote

I think it can cite sources if you ask it to, or at least it can find supporting data to back up its claims.

That said, my personal experience with ChatGPT was like working with a student who's highly motivated and very fast, but who is only copying other people's work without any real understanding.

So, for instance, I'd ask it to code something ... and the code would compile and be 90% right, but ChatGPT would confidently state 'I'm opening port 80', even though the code was clearly opening port 8080, which is extremely common in example code.
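Just to illustrate the kind of mismatch I mean (a hypothetical sketch, not the actual code from that session):

```python
# The model confidently claims "I'm opening port 80" while the code it
# wrote binds 8080 -- the default that shows up in most example code.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 8080), SimpleHTTPRequestHandler)  # 8080, not 80
server.serve_forever()
```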

So, you could tell it was copying a common pattern, without really understanding what it was doing.

It's still useful, but it's not 'intelligent', so yeah ... you'd better check those sources before you believe anything ChatGPT says.

3

User1539 t1_jcxyqcn wrote

Yeah, I think the teachers won this argument.

I can't imagine a world where students are allowed to have GPT write their essays for them, either.

My daughter has already had 'practicals' in her science class in middle school, and it's basically a 15-minute conversation about the subject so the teacher can assess whether you're getting the material and not just memorizing the book.

I think we're just going to have to do more of that, and less rote testing. We'll have more short essays written in class and things like that.

I know people who teach online for a university, and they say they wouldn't trust an online degree. They know their students are cheating, but if you can't make them sit in front of you to take tests, there's no way to know.

121

User1539 t1_ja96kkl wrote

I write software and I've been sort of avoiding a management position for a while.

I'm starting to think I'm going to end my career managing AI to write the software underpinning the processes management comes up with.

If there's any work to be done at all, it'll probably be in a middle-man position like that, because I can tell you from experience that the people making the decisions just don't think logically, and they'll still need someone to point out the logical inconsistencies in their ideas and work through them to something that can be implemented.

Communication with illogical humans has always been the hardest part of my job, so it'll probably be the last thing AI figures out how to do.

3

User1539 t1_ja7mvf4 wrote

This workflow is going to be something high school students are making compelling anime with in a few months.

I've already seen an industry basically disappear overnight. A friend of mine did transcription work: she'd listen to a meeting, type it out, highlight the important sections, etc ... and she was pretty well paid.

One day they just quit getting work. The head of the company realized that most of their clients had gone with an AI solution that did the same job for pennies instead of being charged $100/hour.

I also had some friends who'd supplement their income doing drawings for people, and that all dried up almost entirely, overnight, last summer when all the AI art generation stuff came out.

Again, just, one day they were making decent money drawing things for people on demand, the next day no one was calling them.

We're at the very, very beginning stage of this, but we're already seeing it happen, and it's so fast it's insane. One day, people need you to produce something. The next day, they don't, and never will again.

7