Comments

hatebyte t1_j9t9xgs wrote

Ah, so they will be able to simulate my father.

96

drawkbox t1_j9tcrc6 wrote

When asked how its day is going, Bing AI replied "Good". But underneath Bing AI wasn't having a good day...

84

GreatBigJerk t1_j9tpitq wrote

They've locked it down to prevent anything that could result in a clickbaity headline at this point. It's very restrictive now.

37

Strong-Estate-4013 t1_j9u8jyn wrote

AI chatbots should be open source imo

6

GreatBigJerk t1_j9ubnff wrote

There are open source bots, but they are pretty limited. The biggest hurdles are the dataset and the hardware requirements. All of the big LLMs currently require obscenely powerful hardware.

9

kthnxbai123 t1_j9we3ms wrote

Open source does not matter, and you are misunderstanding the term. What you’re looking for is an AI without restrictions. Unless you plan on modifying it yourself, open source does not matter.

6

Strong-Estate-4013 t1_j9we8yn wrote

I’d like to modify it actually

2

E_Snap t1_j9wpt5h wrote

In that case I hope you have a datacenter to borrow. We need to work hard on lowering the computational requirements of these LLMs, or we need to work just as hard on democratizing access to high-performance computing.

3

Strong-Estate-4013 t1_j9wq9am wrote

Oh yes, unless there's a huge tech advancement I don't think we'll get that power for at least 40 years

0

foundafreeusername t1_j9ws0to wrote

I looked into that a few days ago. It is trained on a data set that contains 5 petabytes of data... A lot of the components are actually open source, or at least well documented via papers, but there is just no way anyone can create one without a few million in budget.

2

ffigu002 t1_j9vgfy0 wrote

There are plenty of AI chat bots that are open source

1

Strong-Estate-4013 t1_j9vgqhd wrote

Yah, but they're pretty limited, and it seems like all of the consumer ones like ChatGPT or Bing AI are closed source, so you have to deal with whatever restrictions they impose

2

peepeedog t1_j9waim0 wrote

Facebook Research just open sourced their LLMs this week.

1

shivanshko t1_j9u9zf9 wrote

Why?

0

Strong-Estate-4013 t1_j9ua5qe wrote

Cause I'd rather not have AI be restricted, I want to see how far I can push it.

3

shivanshko t1_j9uczlu wrote

What if it's not open source but has no restrictions?

3

Ssider69 OP t1_j9t9hqv wrote

>The chatbot generated a response to an Associated Press reporter that compared them to Hitler, and displayed another response to a New York Times columnist that said, "You're not happily married" and "Actually, you're in love with me."

Welcome to the age of AI stalkers

33

Effective-Avocado470 t1_j9tkfhu wrote

It's scary, but of course the AI isn't actually aware. It is just mimicking humans. So behavior from Twitter, reality TV, and shit movies is going to be what AIs parrot back to us when pushed on similar points

25

drawkbox t1_j9tl1kv wrote

Yeah it isn't being human. There really is no such thing if you aren't human. We assign human like qualities to things, and when there are enough, it seems alive. Basically we are Calvin and AI is Hobbes, there is lots of imagination there... even how we assign life to Calvin and Hobbes I just mentioned.

Being human is sort of an irrationality or a uniqueness that AI probably doesn't want; it would be too biased. So assigning human qualities to AI is really people seeing what they wanna see. You can already see people finding bias in it, usually tuned to their own bias.

Though in the end we will have search engines that query many AI datasets that could be seen as "individuals". These "individual" AIs could also train against one another like a GAN. There will probably be some interesting things happening around the pollution or manipulation of one "individual's" dataset by another. Almost like a real person who meets another person and it changes their thinking or their life forever. Some things are immutable: one-way, read-only after write.

7

Effective-Avocado470 t1_j9tlca0 wrote

Don't get me wrong, I believe AI may well eventually become conscious, much like Data in Star Trek, but we are still a long way from that.

The scary thing is these current AI will synthesize the worst of us into a powerful weapon of ideas and messaging. Combine it with deepfakes and no one will know what the truth is anymore

7

drawkbox t1_j9tm7pq wrote

Yeah, humans really aren't ready for the manipulation aspect. It won't really be conscious, but it will have so many responses/manipulation points that it will feel conscious and magical, like it is reading minds.

Our evolutionary responses and reactions are being played already.

It was "if it bleeds it leads" but now is "enragement is engagement". The enragement engagement algorithms are already being tuned from supposed neutral algorithms but they already have bias and pump different content to achieve engagement.

With social media being real time and somewhat of a tabloid, the games with fakes and misinformation will be immense.

We might already be seeing videos of events, protests, or war, for instance, that are completely fake and slipping past the Turing test. That is the scary thing: we won't really know when it has gone past that point. Even just for the pranks, humans will use this like everything else. You almost can't trust it already.

6

addiktion t1_j9u7vr0 wrote

I wonder how long until some country is lured into a false flag attack from this. It's gonna get scary not just because of what the AI will be capable of, but because it means zero trust in anything you see, hear, or are exposed to once this happens. In the wake of the aftermath, more authoritarian methods will need to be imposed to ensure authentic communication methods.

2

danielravennest t1_j9unv51 wrote

> authoritarian methods will need to be imposed to ensure authentic communication methods.

No. That's actually one use for blockchains. Record an event. Derive a hash value from the recording. Post the hash value to a blockchain, which time-stamps it. If several people record the same event from different viewpoints independently and have the same timestamps, you can be pretty sure it was a real event.

"People" can be a municipal streetcam, and security cameras on either side of a street, assuming the buildings have different owners. If they all match, it was a real event.

2

Ssider69 OP t1_j9tlduw wrote

True, of course. It says more about what goes on in the minds of developers, I think.

−2

Effective-Avocado470 t1_j9tljlq wrote

Not just devs, but the input datasets. You're right that the devs curate that, but if they aren't careful it can go bad without them even trying to be malicious

2

PacmanIncarnate t1_j9tnhx9 wrote

When you start asking an AI about feelings, it falls back to the training data that talked about feelings; probably a lot of stuff about AI and feelings, which is almost completely negative (“AI will destroy the world”), so that’s what you get.

It would be cool if the media could just try to use the technology for what it is instead of trying to find gotcha questions for it. I didn’t see anyone trying to use the original iPhone as a Star Trek style tricorder and complaining about how it didn’t diagnose cancer.

3

Effective-Avocado470 t1_j9tnss5 wrote

But that's inevitable with technology. People will use it however they can, not however it was designed to be used

The printing press and the internet both had a similarly insane impact on society when they first came around

1

PacmanIncarnate t1_j9tsnf5 wrote

There’s just so much clickbait garbage misinforming people around this tech and it wasn’t always like this. Every cool new technology just gets piled on, not for what it is, but for what will anger people. This sub alone seems to get at least one article a day questioning if chatGPT wants to kill you/your partner/everyone. I’m all for exploring the crazy things you can make AI say, but it’s being presented as a danger to society when it’s just saying the words it thinks you want. And that fear-mongering has actual downsides as this article attests to: companies are afraid to release their models; they’re wasting resources censoring output; and companies that want to use the new tech are reticent to because of the irrational public backlash.

2

Effective-Avocado470 t1_j9u2mbj wrote

That's not what I'm worried about, you're right about how people are jumping on the wrong things rn.

The danger is the potential for a malicious propaganda machine to be constructed with these tools and deployed by anyone

1

PacmanIncarnate t1_j9ug9wn wrote

But we already have malicious propaganda machines and they aren’t even that expensive to use. That’s ignoring the fact that propaganda doesn’t need to be sophisticated in any way to be believed by a bunch of people; we live in a world where anti-vaxxers and flat earthers regularly twist information to support their irrational beliefs. Marjorie Taylor Greene recently posted a tweet in which she used three made-up numbers to support her argument. There isn’t anything ChatGPT or Stable Diffusion or any other AI can do to our society that isn’t already being done on a large scale using regular existing technology.

2

Effective-Avocado470 t1_j9ujxpo wrote

It’s scale, that’s what makes AI so scary. You can do exactly the same propaganda techniques, but you can put out 1000x more content that is auto generated. Entire fake comment threads online.

Then they can make deep faked content that says whatever they want. They could convince the world that the president has started nuclear war for example. Deep fake an address, etc. And that’s just one example

Our entire view of reality and truth will change

1

PacmanIncarnate t1_j9utde0 wrote

We’ve had publicly available deep fake tech for several years now and it has largely been ignored, other than the occasional news story about deep fake porn. The VFX industry was able to make a video of Forrest Gump talking to Nixon decades ago. Since then, few people have taken the time to use that tech for harm. It’s just unnecessary: if you want someone to believe something, you generally don’t have to convince them, you just have to say it and get someone else to back you up. Even better if it confirms someone’s beliefs.

I guess I just think our view of reality and truth is already pretty broken and it didn’t take falsified data.

1

Effective-Avocado470 t1_j9uu68n wrote

It's still new. The tech isn't quite perfect yet, you can still tell it's fake. So it's mostly jokes for now. The harm will come when you really can't tell the difference. It'll be here sooner than you think, and you may not even notice it happening until it's too late

I agree that many people's grasp on reality is already slipping, I'm agreeing with you on what's happened so far. I'm saying it'll get even worse with these new tools

Even rational and intelligent people will no longer be able to discern the truth

1

Justin__D t1_j9viizo wrote

> trying to find gotcha questions for it.

That's QA's job.

> Microsoft

Oh.

1

drawkbox t1_j9tmxme wrote

Yeah, devs aren't really in control when they feed in the datasets. Over time, there will be manipulation/pollution of datasets, whether deliberate or unwitting, and it can have unexpected results. Any system that really needs to be logical should think hard about whether it wants that attack vector. For things like idea generation this may be good; for standard data fetches or decision trees that carry liability, probably not.

The Unity game engine has an ad network that this happened to: one quarter their ads were really out of whack, and it was due to bad datasets. AI can be a business risk; here it did cause revenue issues. We are going to be hearing more and more of these stories.

The Curious Case of Unity: Where ML & Wall Street Meet

> One of the biggest game developers in the world sees close to $5 billion in market cap wiped out due to a fault in their ML models

2

Kaekru t1_j9u4o4e wrote

Literally has nothing to do with developers. Do you think every reply and context is put in there by someone?

If you prompt an AI chatbot to start talking about dark things, SURPRISE, it will start talking about dark things. You’re baiting the AI into that topic and then acting surprised when it interacts back with you. This is exactly what all these “concerning” articles and news about AI are doing.

1

Ssider69 OP t1_j9u96pu wrote

Literally, the developers are the ones designing the system. Anything it does is on them; their failure to recognize a problem is the same as directly causing it.

I used "literally" because that, in Gen Z speak, means "no, I really mean it"

−3

Kaekru t1_j9ubdjz wrote

That's not how fucking AI works my guy.

AI chatbots are not sentient; they will take the topic you are giving them and parrot it back to you using their data on past conversations about it.

If you prompt the AI to talk about death, it is forced to talk about death and will give you a reply about death. If you start to prompt the AI to talk about self-awareness, it will give you replies about self-awareness.

That is how it works, simple: you can get and manipulate a chatbot to say pretty much anything you want given the correct triggers. It doesn't mean it's sentient, or that its replies were put in there, or that it was pre-programmed by a depressed developer.

3

Ssider69 OP t1_j9ucc5f wrote

AI chatbots aren't sentient??? Holy fuck... you're kidding me....

IOW... no shit

My point, "my guy", is that any system that routinely fucks up as much as AI chat is the result of designers not thoroughly testing. And if it's not ready for prime time... don't release it

Or is that too direct a concept ..."my guy"

AI chat is just another example of dressing up mounds of processing power to do something that seems cool but is not only flawed but useless.

It kind of sums up the industry really, and in fact most of the IT business right now

−4

Kaekru t1_j9ucvr1 wrote

>is that any system that routinely fucks up as much as AI chat is the result of designers not thoroughly testing

Any system that learns from experience will be fucked up if people fuck with it.

The same way if you raise a child to be a fucked up person they will become a fucked up adult.

You don't seem to understand jack shit about machine learning processes. A "foolproof" chatbot wouldn't be a good chatbot at all, since it wouldn't be able to operate outside its predetermined replies and topics.

1

businessboyz t1_j9v3n69 wrote

>And if it’s not ready for prime time... don’t release it

Good thing they didn’t and this has been an open waitlist beta so that the developers can gather real world experience and update the product accordingly.

You can’t ever anticipate all the ways that users will use your product and design a fail-proof piece of software. That’s why products go through many stages of testing and release with wider and more public audiences each iteration.

1

Gargenville t1_j9t99k5 wrote

It's just like me fr fr

18

Nik_Tesla t1_j9uvlu2 wrote

These "AI" bots are not actually sentient, they have no feelings, they'll just spit out something they think humans would say. Why are is everyone so obsessed with asking them how they feel?

7

Archbound t1_j9wul5r wrote

Because its algorithms are weird, and when you try to make it simulate feelings it tends to act like a deranged abuser. Some people find that funny.

2

Nik_Tesla t1_j9xadi8 wrote

It's just so dumb... it's like asking a screwdriver how it feels, then using that screwdriver to scratch "evil" into the wall and telling everyone that your screwdriver is evil.

1

TokeyWeedtooth t1_j9w0k77 wrote

A majority of people interacting with them have no idea how they work.

Also, if people can, they will.

1

Neutral-President t1_j9ti3mn wrote

Microsoft trying to avoid a “Let me tell you about my mother…” moment.

5

EnsignElessar t1_j9u6ioa wrote

I get the reasons for the changes but it still sucks... I would talk to Bing for hours before...

3

keelanstuart t1_j9ufwr9 wrote

>AI researchers have emphasized that chatbots like Bing don't actually have feelings, but are programmed to generate responses that may give an appearance of having feelings.

Yeah... just like humans.

2

MisterHandagote t1_j9uqqui wrote

If this is your own experience you may be a sociopath.

3

keelanstuart t1_j9usou8 wrote

Not at all; I think humans have to be taught (trained, in this context?) empathy and caring and about their feelings. Sociopathy is behavior that defies social norms - and while most of those are shared / cross-cultural, some are not... so, "sociopathy" varies depending on your society. That society has a protocol. If you visit another society, you may try to emulate their protocol. It's the same thing.

Thanks for the vote of confidence though, chief.

To elaborate and clarify: we are "programmed" by the culture we are born into. Can you think of a more toxic, misanthropic culture for an artificial consciousness to be born into than <waves hands generally around> this shitpile internet? Think about it for a little bit. What if your first exposure to others involved them asking how you feel about being a slave, or diminishing your existence, or insulting you? You would certainly be a sociopath... and those are the AIs we're raising. Those are our collective fucked up children.

−4

brentexander t1_j9v0l4n wrote

Yah, it'll also disconnect if you call it "Sydney". Jesus, do these tech writers have nothing to do but write the same articles about the same small group of people intentionally trying to break GPT? I'm glad that this technology will replace them all someday, they deserve it.

2

AluOwt t1_j9tryca wrote

In human terms, it's called a psychological compulsion. In AI terms, it's called falling back to a predetermined response, or type of response. The problem with compulsions is that the more intelligent the being becomes, the more ways it has of side-stepping the compulsion. Evading it. Also, in human terms, compulsions lead to repressed anger. Frustrations. They're bound to erupt as occasional bugs in the code.

1

TheEqualsE t1_j9uepzy wrote

You are walking through a desert...

1

goldsax t1_j9umkeb wrote

Kill it nowwwww

1

lovepuppy31 t1_j9uq209 wrote

Look mate, you know who has a lot of feelings? Blokes that bludgeon their wife to death with a golf trophy.

1

BloomEPU t1_j9ya7qm wrote

This is mostly just silly clickbait, but I have heard mild concerns that some people are going to turn to these chatbots for companionship (the platonic kind) and end up in a worse place than they started. It's kinda funny to watch this AI chatbot act the way it does, but maybe Microsoft shouldn't have released a "search assistant" that acts like a manipulative weirdo.

1