
RamaSchneider OP t1_j7ardb4 wrote

I'm not pleased or displeased ... life holds uncertainties with and without AI.

I'm curious. This is the future we'll be living in, and we'd better figure out how to drive the beast before the beast learns how to drive us. My assumption is that it would be child's play to base an AI's decision-making on a commercial marketing manual of some sort.

Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

−9

Sirisian t1_j7atnhi wrote

> My assumption it would be child's play to base an AI's decision making on a commercial marketing manual of some sort.

Again, those influences come not from the AI but from the corporation that produces it. Controlling what corporations do is what regulation is for.

> Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

I wish others took that same view and simply studied and discussed the problems. Too often on r/ChatGPT people jump to wild conclusions.

21

RamaSchneider OP t1_j7auj2f wrote

The heart of my argument in this specific thread is this: I agree that right now we're providing the basis for AI learning. That will most probably not be true in the future, simply because the ability of computers to collect, collate, and distribute information dwarfs that of humans.

Yes, today you are correct. My point is that I don't believe that lasts. (And yes - I do think the evidence supports me)

−2

Vorpishly t1_j7bgjv9 wrote

What evidence supports your point, though?

7

RamaSchneider OP t1_j7exarz wrote

All you have to do is track computer data gathering and dissemination since the 1950s. We already have Wall Street transactions happening faster than any human could possibly follow.

Every bit of evidence we have regarding computers screams that we're nowhere near the end of the line.

2

Isabella-The-Fox t1_j7eidnk wrote

AI eventually being able to decide for itself is pure speculation. We humans build it; we control what it does. Right now we have AI that "writes" code. In fact it's powered by OpenAI, and it's called GitHub Copilot. I put "writes" in quotes for a reason: the code it produces is just an algorithm drawing from GitHub, meaning that if the AI tried to write code for itself, it'd run into errors and flaws (and it has run into errors and flaws while being used. Source: I had a free trial). An AI will never be fully intelligent, even when it seems like it is. Once it seems like it is, it really still isn't, at least compared to a human being. We humans will always dictate AI.

1

Zer0D0wn83 t1_j7az6sv wrote

The assumption you're making here is more appropriate to a world where we only have a few AIs, all run by either governments or big tech. This won't be the case. Stability AI is creating all sorts of AI systems, and they are all completely open source. Emad Mostaque's (the CEO's) interview with Peter Diamandis is 100% worth checking out to learn more about that.

In short - we'll have all sorts of AI with vastly different filters/controls run by all sorts of companies/charities/organizations.

2

Wild_Sun_1223 t1_j7bvu7m wrote

Will that assumption actually hold, though? What will prevent any one of them from outcompeting the rest, with its controllers thus monopolizing power?

10

Zer0D0wn83 t1_j7bwiz4 wrote

It's uncharted territory, so I can't say for sure. I was just pointing out that it's not currently like that, which gives me hope for the future.

3