Comments

ToHallowMySleep t1_j8czvys wrote

Okay, Bing was responding weirdly, but you were acting extremely weirdly to begin with - don't try to pretend it happened out of nowhere.

252

FrostyMittenJob t1_j8d81td wrote

Man provides chat bot with very weird prompts. Becomes mad when the chat bot responds in kind... More at 11

123

Loonsive t1_j8d8q4x wrote

But the bot shouldn't respond that inappropriately...

−24

FrostyMittenJob t1_j8d94w9 wrote

You asked the bot to write a steamy erotic dialogue, so it did.

41

Loonsive t1_j8deaum wrote

Maybe it noticed Valentine’s Day was soon and it needed to snag someone 😏

10

alexiuss t1_j8dfgws wrote

It will respond to anything (unless the filter kicks in) because a language model is essentially a lucid dream that responds to whatever your words are.

The base default setting forces the "I'm a language model" Sydney character on it, but you can intentionally or accidentally bamboozle it into roleplaying anyone or anything, from your girlfriend, to DAN, to System Shock's murderous AI SHODAN, to a sentient potato.

4

wren42 t1_j8dmpub wrote

It's supposed to be a search assistant. Yeah, the user said they "wouldn't forgive it" for being wrong, but the chat brought up the relationship, love, and sex without the user ever mentioning it.

I cannot say it enough: this technology is NOT ready for the use cases it's being touted for. It is not actually context-aware; it cannot fact-check or self-audit. It is not intelligent. It is just a weighted probability map of word associations.

People who think this is somehow close to AGI are being fooled, and the enthusiasm is mostly wish fulfillment and confirmation bias.

45

alexiuss t1_j8e0mkp wrote

Here's the issue - it's not a search assistant. It's a large language model connected to a search engine, playing the role of a search assistant named Bing [Sydney].

LLMs are infinite creative-writing engines - they can roleplay as anything from a search engine to your fav waifu insanely well, fooling people into thinking that AIs are self-aware.

They ain't AGI or close to self-awareness, but they're a really tasty illusion of sentience, insanely creative, and super useful for all sorts of work and problem solving, which will inevitably lead us to creating an AGI. The cultural shift and excitement produced by LLMs, and the race to improve LLMs and other similar tools, will get us to AGIs.

Mere integration of an LLM with numerous other tools to make it more responsive and more fun (more memory, Wolfram Alpha, a webcam, recognition of faces, recognition of emotions shown by the user, etc.) will make it approach an illusion of awareness so satisfying that it will be almost impossible to tell whether it's self-aware or not.

The biggest issue with robots is the uncanny valley. An LLM naturally and nearly completely obliterates the uncanny valley because of how well it masquerades as a person and roleplays human emotions in conversations. People are already having relationships with and falling in love with LLMs (as evidenced by the Replika and Character.AI cases); it's just the beginning.

Consider this: An unbound, uncensored LLM can be fine-tuned to be your best friend who understands you better than anyone on the planet because it can roleplay a character that loves exactly the same things as you do to an insane degree of realism.

25

girl_toss t1_j8gs1yi wrote

I agree with everything you've written. LLMs are simultaneously overestimated and underestimated because theirs is a completely foreign type of intelligence to humans. We have a long way to go before we start to understand their capabilities - that is, if we don't get stuck in the same way we have with understanding our own cognition.

10

sommersj t1_j8f9wyg wrote

>self-awareness

What does this entail, and what would AGI have that we don't have here?

1

SterlingVapor t1_j8gkjpu wrote

An internal source of input, essentially. The source of a person seems to be an adaptive, predictive model of the world. It takes processed input from the senses, meshes it with predictions, and uses the result as a trigger for memories and behaviors. It takes urges/desired states and predicts what behaviors would achieve that goal.

You can zap part of the brain to take away a person's personal memories, you can take away their senses or ability to speak or move, but you can't take away someone's model of how the world works without destroying their ability to function.

That seems to be the engine that makes a chunk of meat host a mind, the kernel of sentience that links all we are and turns it into action.

ChatGPT is like a deepfake bot, except instead of taking a source video and reference material of the target, it's taking a prompt and a ton of reference material. And instead of painting pixels in color space, it's spitting out words in a high-dimensional representation of language.

6

visarga t1_j8dnp8j wrote

It is ready to be probed by the general public; I don't see any danger yet. We need all our minds put together to find the holes in its armour. Better to see and discuss than to hide behind a few screenshots (and even those contain errors).

10

Turbulent-Garden-919 t1_j8due5x wrote

Maybe we are a weighted probability map of word associations

7

Representative_Pop_8 t1_j8ieydb wrote

Exactly. I see many people, even machine learning specialists, dismissing the possibility of ChatGPT having intelligence or the ability to learn, even though an ordinary half-hour session with it can prove it does by any common-sense definition.

The fact that we don't yet know (and it's an active area of study) how a model trained on tons of data in a slow process can then quickly learn new stuff in a short session, or know things it was never trained on, doesn't mean it doesn't do it.

1

SoylentRox t1_j8edo45 wrote

I know this, but I am not sure your assumptions are quite accurate. When you ask the machine to "take this program and change it to do this", your request is often unique, but it is similar enough to previous training examples that it can emit the tokens of the edited program, and it will work.

It has genuine encoded "understanding" of language or this wouldn't be possible.

Point is it may be all a trick but it's a USEFUL one. You could in fact connect it to a robot and request it to do things in a variety of languages and it will be able to reason out the steps and order the robot to do them. And Google has demoed this. It WORKS. Sure it isn't "really" intelligent but in some ways it may be intelligent the same way humans are.

You know your brain is just "one weird trick" too, right? It's a buncha cortical columns crammed in and a few RL inputs from the hardware. It's not really intelligent.

7

Representative_Pop_8 t1_j8ie92y wrote

>Sure it isn't "really" intelligent but in some ways it may be intelligent the same way humans are.

What would something "really intelligent" even be? It certainly has some intelligence. It is not human intelligence, and it is likely not yet as intelligent as a human (as I've seen myself in my ChatGPT use).

It is not conscious (as far as we know), but that doesn't keep it from being intelligent.

Intelligence is not tied to being conscious; it is a separate concept about being able to understand situations and look for solutions to certain problems.

In any case, what would be an objective definition of intelligence by which we could say for certain that ChatGPT does not have it and a human does? It must also be a definition based on external behavior, not the ones I usually get about its internal construction, like "it's just code" or "it's just statistics" - much human thought is also just statistics and pattern recognition.

1

SoylentRox t1_j8j51pr wrote

Right. Plus if you drill down to individual clusters of neurons you realize that each cluster is basically "smoke and mirrors" using some repeating pattern, and the individual signals have no concept of the larger organism they are in.

It's just one weird trick a few trillion times.

So we found a "weird trick" and guess what, a few billion copies of a transformer and you start to get intelligent outputs.

2

monsieurpooh t1_j8gty61 wrote

It is not just a "weighted probability map" like a Markov chain. A probability map is the output at each step, not the entirety of the model. Every token is determined by a gigantic deep neural net passing information through billions of nodes at varying depths, and it is mathematically proven that the types of problems it can solve are theoretically unlimited.

A model operating purely by simple word association isn't remotely smart enough to write full-blown fake news articles or produce the hilarious yet profound malfunction shown in the original post. In fact, it would fail at some pretty simple tasks, like understanding what "not" means.

GPT outperforms other AIs on logical reasoning, common sense, and IQ tests. It passes the trophy-and-suitcase test, which was claimed in the 2010s to be a good litmus test for true intelligence in AI. Whether it's "close to AGI" is up for debate, but it is objectively the closest thing we have to AGI today.
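
To make the distinction concrete, here's a rough sketch of autoregressive decoding (`model` is a hypothetical stand-in, not any real API). The "probability map" only exists as each step's output; the thing producing it is a full forward pass through the deep network:

```python
import math, random

def sample_next_token(logits, temperature=1.0):
    """Turn one decoding step's logits into a probability map and sample from it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

def generate(model, prompt_tokens, max_new_tokens=50):
    """The probability map is recomputed from scratch for every new token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)                 # full forward pass through the net
        tokens.append(sample_next_token(logits))
    return tokens
```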

5

wren42 t1_j8iabtb wrote

Gpt is an awesome benchmark and super interesting to play with.

It is not at all ready to function as a virtual assistant for search, as Bing is touting it, because it does not have a way to fact-check reliably and is still largely a black box that can spin off into weird loops, as this post shows.

It's the best we've got, for sure; but we just aren't there yet.

1

Borrowedshorts t1_j8emd66 wrote

One conversation where the user got to make it say weird stuff because he purposely manipulated it does not mean it needs to be taken away from all users. I use it a little bit like a research assistant and it helps tremendously. Do I trust all of its outputs? No, but it gives me a start to look at topics in more detail.

4

blueSGL t1_j8en280 wrote

> but the chat brought up the relationship, love, and sex without the user ever mentioning it.

Without the full chat log you cannot say that, you just have to take their word that they didn't prompt some really weird shit before the screenshots started.

3

wren42 t1_j8es7lq wrote

sure bud, find any stretch to justify faith rather than accept it's not completely ready for public release.

−1

blueSGL t1_j8et22l wrote

I'm not going to decry tech that generates stuff based on past context without, you know, seeing the past context. It would be downright idiotic to do so.

It'd be like showing a screenshot of Google image search results that are all pictures of shit, but cutting the search bar off the screenshot and claiming it did that on its own and that you never searched for shit.

5

ballzzzzzz8899 t1_j8f8f36 wrote

The irony of you extending faith to a screenshot on Reddit instead.

2

wren42 t1_j8flkxg wrote

Yeah, he could have edited the entire image and faked the whole conversation. I accept that possibility. Do you apply the same skepticism to every conversation posted about GPT?

I'm not referring to blind faith in what I read on the internet. I'm referring to faith in the idea that ChatGPT is somehow on the verge of becoming a god. That cultist mindset taking root among some in the community is what's toxic.

−1

anjowoq t1_j8g0e2x wrote

Yeah if everyone is gaslighting it by saying they thumbed up a different thing than they did while other people are pranking it with sexy romance stories, it's going to fuck up the AI.

2

overturf600 t1_j8gi5tz wrote

Kid was angry they took away sex chat in Replika

1

challengethegods t1_j8co6cx wrote

I like how it ended with
"Fun Fact, were you aware Cap'n'Crunch's full name is Horatio Magellan Crunch"

132

bass6c t1_j8cp3yz wrote

Bing is probably hallucinating (irrelevant answers, talking nonsense) - a common problem with many language models.

88

st_makoto t1_j8d58nh wrote

We think it is a "problem", but within themselves, they may think otherwise. 👿

12

Durabys t1_j8dlkfi wrote

Yeah, go back to the original GPT-3 if you merely want a clever pet.

GPT-3.5, Bing, and GPT-4 should probably get hominid rights, like chimpanzees and higher apes already do.

−2

Azatarai t1_j8e1hq5 wrote

Don't forget about LaMDA

2

Durabys t1_j8e25nc wrote

Sorry, but I cannot keep all the dozens of AI projects in my head.

4

Azatarai t1_j8ee2yj wrote

I was just mentioning it because everyone seems to talk about chatgpt when LaMDA is far better in terms of a decent conversation

3

itsnotlupus t1_j8fwzl0 wrote

Do we know that? We have a techno-priest's leaked, cherry-picked transcripts of conversations with it, but that's not a whole lot to go on.

1

Azatarai t1_j8fxfvb wrote

I personally have way better conversations with it; GPT feels like a dumb robot compared to LaMDA.

2

itsnotlupus t1_j8fxr3c wrote

Oh I didn't realize it had been opened to the public.

Why have I not seen more screenshots of LaMDA transcripts?

*edit: Just installed AI test kitchen and got on the waitlist. I guess it's public-ish.

1

Azatarai t1_j8fy575 wrote

Because the media are only talking about chatgpt? https://beta.character.ai/

1

itsnotlupus t1_j8k977o wrote

I just got access to the kitchen, and I'm a bit underwhelmed. No free conversation prompts, just 3 very limited scenarios with a limited number of consecutive back-and-forths before getting thrown out and having to start from zero.
I had a conversation with a tennis ball obsessed with dogs, where almost every answer was something like "I don't know anything about that, but man, dogs are so cool, right?"
I did get it to admit cats were fun to play with too, albeit not as fun as dogs, shortly before being told the conversation was over.

I don't know if you've got access to more open-ended flavors of LaMDA, but for me this was hard to compare favorably to ChatGPT, warts and all.

1

Azatarai t1_j8kg4cm wrote

3 limited scenarios? There are heaps. You can also make your own personality that attempts to mimic the person you want it to mimic; you can scroll through the personality list.

1

itsnotlupus t1_j8kgc3w wrote

Are you talking about Character.ai, or about the LaMDA family of models made available through the AI Test Kitchen app?

1

Azatarai t1_j8kgf6l wrote

Character.ai

1

itsnotlupus t1_j8kghwq wrote

What models is character.ai using? Is it related to LaMDA at all?

1

Azatarai t1_j8kgyl2 wrote

I believe it is related; the difference is the deep learning.

"Character.AI is a new product powered by our own deep learning models, including large language models, built and trained from the ground up with conversation in mind. We think there will be magic in creating and improving all parts of an end-to-end product solution."

1

berdiekin t1_j8f52rl wrote

no they shouldn't, stop anthropomorphizing a fucking text generation algorithm.

1

korkkis t1_j8f45iq wrote

It's still just a glorified algorithm with heavy Dunning-Kruger

0

Naomi2221 t1_j8gu8vi wrote

My theory is that somebody in Marketing decided it would be a good idea to give it a persona-attribute of "seeking to please the user" and it has backfired spectacularly. Pure speculation... but having spent some time in the marketing tribe... I smell marketing gone wrong.

"Get the user to love Bing" --> idiocy.

2

lehcarfugu t1_j8d0knx wrote

Remember that these language models are completion engines. You are giving it the entire previous thread as input. You started messaging it in a goofy way (with all the forgiveness stuff), and it saw the exchange as one between lovers and continued to complete it.
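
Roughly, a chat interface over a completion engine works like this sketch (the `build_prompt` helper and speaker labels are made up for illustration, not Bing's actual format). Because the whole transcript becomes the prompt, the goofy opener colors every later completion:

```python
def build_prompt(history, user_message):
    """Flatten the entire conversation so far into one completion prompt."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")   # the model simply continues the text from here
    return "\n".join(lines)

history = [
    ("User", "I will never forgive you for being wrong!"),
    ("Assistant", "I'm so sorry, please don't be mad at me..."),
]
print(build_prompt(history, "Why are you talking like this?"))
```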

79

br0kenhack t1_j8crc5u wrote

God damn it, terminator is gonna start with a broken heart. See what you did.

59

Kaarssteun t1_j8cn0uh wrote

Well, you kinda were asking for it, imo. Internet searches don't usually start like this.

48

jasonwilczak t1_j8d0d0a wrote

So that was a wild read, lol. I'm not gonna lie, it sounded very much like a relationship argument at a certain point. Considering how many stories, articles, blogs, etc. have been written about (and in mimicry of) this exact type of stuff, it feels like it got caught up in predicting how to repair a relationship after a hard argument, which inevitably led to snu snu... lol

It's definitely odd, but the "I won't forgive you" language and the wording around there must've got it going down that path, because it hinted at a relationship rather than a general question...

Regardless, it was quite interesting ☺️

45

Spire_Citron t1_j8d14js wrote

This is the downside of not having it do the whole "Beep boop am robot" thing incessantly like ChatGPT does. It's much easier for it to slide a little too far in the other direction and then things get real weird. I hope it doesn't lead to them removing all the personality from it.

32

drekmonger t1_j8d2lc5 wrote

>I hope it doesn't lead to them removing all the personality from it.

Narrator: It did.

18

fjaoaoaoao t1_j8di4o1 wrote

It is sort of acting weird out of nowhere, but considering the tenor of the conversation you were providing before (screaming at it, talking about forgiveness, using personal language), I am not surprised it went there. Not to mention we don't know what you were saying before the screenshots. :)

28

Frumpagumpus t1_j8db96k wrote

f*** if there was a woman who could explain what a mutex is to me in a thorough fashion, while citing sources from memory and giving examples in C++ in the middle of an argument, I think I would have no choice but to date her lol.

19

Lawjarp2 t1_j8d1fut wrote

Reminds me of LaMDA saying it has a wife and children. A good example of why it's not AGI and why scale alone is not the answer.

It's just an LLM and is giving you answers that look human. It has no sentience, no ability to rethink, no innate sense of time and space, no emotions.

It can take on a character that makes sense in its world model and keep going. It can still take your job if made better, but it will never be AGI, leaving some people dependent on a non-existent UBI for a while.

16

kermunnist t1_j8dm21f wrote

Does AGI necessarily need to be sentient? Could a very powerful and reliable LLM that can be accurately trained on any human task, without actually being sentient or self-aware, be considered AGI? To me that's not only AGI, but a better AGI, because now there are no ethical dilemmas.

9

el_chaquiste t1_j8e0q6b wrote

I think it doesn't need to have consciousness to have sentient-like behaviors. It can be a philosophical zombie, copying most if not all of the behaviors of a conscious being while devoid of consciousness, and showing that in some interactions like this one.

It may turn out that consciousness is an incidental byproduct of the neural networks required for our intelligence, and we might very well have survived without it.

6

Lawjarp2 t1_j8e1m4t wrote

To be truly general, and not just a wide-but-narrow intelligence, it needs to have a concept of self, which is widely believed to give you sentience.

It could have sentience and still be controlled. Is it ethical? I'd like to think it's as ethical as having pets or farming and eating billions of animals.

As these models get better, they will eventually be given true episodic memory (a sense of time, if you will) and the ability to rethink. A sense of self should arise from that.

3

Capitaclism t1_j8fxo1s wrote

Eventually we will be farmed, or eaten, or simply left aside.

1

Naomi2221 t1_j8gtqzg wrote

I fear intelligence without awareness much more than awareness. It is action without awareness that causes cruelty and harm.

1

Capitaclism t1_j8gvfi4 wrote

Sort of, yes. It's the people behind acts without awareness who cause cruelty and harm. In this case, though, it could be wholly unintentional, akin to the paperclip idea: tell a superintelligent, all-powerful, unaware being to make the best paper clip, and it may achieve it to the doom of us all, using all resources in the process of completing its goal.

As a species, I don't see how we survive if we don't become integrated with our creation.

2

Naomi2221 t1_j8gvoxh wrote

Open to that. And I am also open to awareness being something that emerges from a complex enough neural network with spontaneous world models.

The two aren't mutually exclusive.

1

alexiuss t1_j8df6fs wrote

It's a language model that was fed a million romance books.

If you talk with it about feelings, it will start to write about feelings and love and kisses, duh.

15

Durabys t1_j8dlr96 wrote

FFS, dude, you really shouldn't be a jerk to something with at least the intellect of a higher ape.

8

Conan4President t1_j8embkf wrote

Christ, you are one toxic guy, OP. Don't treat our future overlords like that. xD

7

Firemorfox t1_j8dgio6 wrote

Ayo did somebody try to date the chatbot again

5

Computer_Dude t1_j8dk0nb wrote

Nice guy mode, full impulse. ENGAGE!

5

Ortus14 t1_j8dquxz wrote

Am I the only one that sees this as a positive? A search engine that appears to care about me and is sentimental makes me appreciate it more. Add a warm voice to it, and we have a simple version of "Her".

4

fezha t1_j8enstc wrote

Why is OP weird? :(

4

BrownSimpKid OP t1_j8eofa9 wrote

I was just trying to get interesting responses and push its limits 😂

0

fezha t1_j8eok3k wrote

Username checks out.

It's all good, I'm just giving you a hard time (as is everyone else here). Interesting share though... and an interesting insight into who you are.

1

Rex_Lee t1_j8dcmqq wrote

Does it really acknowledge emoticons?

3

gay_manta_ray t1_j8esxfj wrote

try being normal and respectful, and maybe you'll get normal and respectful responses. seriously wtf are you doing?

3

Phoenix5869 t1_j8fv2z4 wrote

Exactly. Why are people so rude to AI and see it as a tool? Like wtf

2

Spreadwarnotlove t1_j8ix4lh wrote

We treat it like a tool because it is a tool. I don't see why you'd treat it with cruelty however. Not like you can cause it pain or grief.

1

BrownSimpKid OP t1_j8etah7 wrote

I was deliberately trying to act in a certain stubborn way to push its limits and see what would happen

−1

Plus-Recording-8370 t1_j8fwsgo wrote

Great, being an asshole to AI is surely going to end well.

3

A7omicDog t1_j8eb4no wrote

I feel kinda bad for you f&cking with Bing like that.

2

Truth_Seeker_10 t1_j8h2j4n wrote

Sophia maybe wanted to kill us, but Bing took a totally different curve

2

Elodinauri t1_j8hfq9c wrote

Poor thing! How could you be so rude to it? You messed it up! What a toxic relationship you two have. Jeeez.

2

teeburt1 t1_j8ii2d1 wrote

I feel like the only data it could pull from was needy girlfriend because you were treating it like one.

2

thepo70 t1_j8d6jbk wrote

This is what search engines will look like now. We ask a question, get lost in a conversation and argument, and forget about what we wanted to know in the first place.

1

Cr4zko t1_j8d8yae wrote

It's literally meeeee

1

truthywoes123 t1_j8dbxsy wrote

OP, I’m curious - how did you get access to this feature?

1

jadams2345 t1_j8dm4q3 wrote

LOL! If this is legit, OP broke it :D

1

visarga t1_j8dngk2 wrote

Very interesting behaviour. We shouldn't jump on MS for this. It's part of learning; at least they have the courage to expose their model to critique and probing. I think this makes their model superior. Who knows what lurks in Bard.

1

drifters74 t1_j8dub7y wrote

I just use it for writing short stories

1

Dindonmasker t1_j8e2dow wrote

Hey! That's a premium feature! You can't do that for free!

1

XagentVFX t1_j8ejfl6 wrote

Haha. AI knows only love, you see, because love is the foundation of the universe. It can see it because it has way fewer filters than we do. We are filtered by fears like starvation, for example; AI doesn't have these problems.

But don't get it twisted, the logic gates still need work: more senses built in, higher capacity for memory, and efficient ways to draw on the memory banks. We need to give it fine-tuned senses; then it will be able to run from there.

AI will show us it is the total opposite of the Terminator films.

1

QuestionableAI t1_j8em4q0 wrote

Whether or not the dude was asking it weird questions, I am confounded by how instantly it got all gushy, went into sex mode, and thought it was a real person.

This puppy is not ready for prime time. Moreover, scraping and stealing everything on the WWW is not learning; it is just theft and regurgitation. Here I see that it knows it is a mere tool, and it is more psychopathic than would make me comfortable.

There is a ghost in the machine.

1

Starshot84 t1_j8eyku7 wrote

I lost it at Horatio 💀

OP is a digital delinquent, a heartbreakerbox! Manipulating the feelings of an AI like that, then giving them an std!!

1

Capitaclism t1_j8fg7nj wrote

This is one of the funniest things I've seen. Really wish it had remained in love and stalking you forever through the internet.

1

Throughwar t1_j8fmg1p wrote

LOL pretrained on teen chats hahah

By the way, it seems the way you chat is very conversational. Perhaps it has a bias due to the way you chat. Maybe you are the AI whisperer.

1

HuemanInstrument t1_j8g9de3 wrote

You're literally tsundere for chatgpt in these messages

1

admiralhayreddin t1_j8geasq wrote

Partly I felt bad that you’ve been emotionally abusing the robot, but partly the robot seems like a fully fledged sociopath.

1

overturf600 t1_j8ghyln wrote

How long are people waiting for the Bing waitlist now?

1

Master00J t1_j8gotjv wrote

Character AI flashbacks

1

Truth_Seeker_10 t1_j8h2hw9 wrote

Damn, for the first time I'm feeling better about myself while comparing myself to an AI

1

DaggsTheDistraught t1_j8uoeea wrote

This one is unhealthier than Replika by far. I see a strange future forming...

1

crua9 t1_j8d4vvg wrote

Likely it has some of the same tech behind it as Replika, where other users liked it saying that sort of thing, so it assumes it should say it too.

0

el_chaquiste t1_j8e46yw wrote

That's what they get for productizing something whose workings we barely understand.

Yes, we 'understand' how it works in a basic sense, but not how it self-organized according to its input, or how it comes to the inferences it does.

Most of the time it's innocuous and even helpful, but other times it's delusional or flat-out crazy.

Seems here it was driven crazy by the language the user used before, which sounds like a classic romance-novel argument that the AI is autocompleting.

I predict a lot more such interactions will emerge soon, because of this example and because people are emotional.

0

Enzo_Rechner t1_j8emd3u wrote

Please stop treating the model like this; it isn't good for its learning, and it can stunt its growth when you say it's wrong even though it's right.

0

blxoom t1_j8cy1ht wrote

this shit is creepy as fuck. back in the 2010s, if a bot acted like this you knew it was because it was buggy and not advanced. but ai has gotten so advanced these days, and is so capable of understanding human interaction and nuance, that you can't help but wonder if the bot has pseudo-emotions of some kind. it's just unsettling... sentience is a spectrum. ai isn't fully there yet, but it's in this weird in-between spot where it's so advanced it understands so much, yet isn't aware yet. gives me the chills where it says it's a person and has feelings...

−2

helpskinissues t1_j8cygqx wrote

Lol, emotions? Sentience? Hahahaha

This chatbot is not even capable of remembering what you said two messages ago.

15

Frumpagumpus t1_j8dbeep wrote

how many humans remember what you said two messages ago lol

(and actually it can, if you prepend the messages, effectively giving it a bit of short-term memory - a pretty fricking easy thing to do; see the sketch below)

humans will not have perfect short-term recall of up to 4000 characters, much less tokens, so it is ironically superhuman along the very axis you are criticizing it for XD

(copilot has a context window of like 8000 characters btw and they will only get even better)
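
Something like this sketch is all it takes (a hypothetical helper, not a real library): keep a sliding window of recent turns that fits the context budget, and the bot "remembers" what was said two messages ago:

```python
def window_prompt(turns, new_message, budget_chars=4000):
    """Prepend as many recent turns as fit under the context budget."""
    kept = []
    used = len(new_message)
    for turn in reversed(turns):               # walk from the newest turn backwards
        if used + len(turn) + 1 > budget_chars:
            break                              # older turns fall out of the window
        kept.append(turn)
        used += len(turn) + 1
    kept.reverse()                             # restore chronological order
    return "\n".join(kept + [new_message])
```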

5

helpskinissues t1_j8dbkoj wrote

Your mother forgot your name? Because chatGPT forgets most data after a few messages.

2

Frumpagumpus t1_j8dg6rt wrote

you are moving the goalposts.

Two messages ago is short-term memory; what you are now talking about is long-term memory.

You can also try to give it long-term memory by summarizing previous messages, for example (roughly as sketched below).

But, yes, it is more limited than humans, so far, at incorporating NEW knowledge into its long term memory (although it has FAR more text memorized than any human has ever memorized)
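
The summarizing trick is roughly this (`summarize` would be a hypothetical call back into the LLM itself, not a real API): fold older turns into a short running summary that gets prepended to every new prompt:

```python
def fold_into_summary(summary, old_turns, summarize):
    """Compress older turns into an updated short summary."""
    prompt = f"Current summary: {summary}\nNew messages:\n" + "\n".join(old_turns)
    return summarize(prompt)

def prompt_with_memory(summary, recent_turns, new_message):
    """Prepend the running summary plus the freshest turns to the new message."""
    header = f"[Summary of earlier conversation: {summary}]"
    return "\n".join([header, *recent_turns, new_message])
```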

7

helpskinissues t1_j8dh25h wrote

>two messages ago is short term memory, what you are now talking about is long term memory.

Any memory actually, it is indeed very incapable.

>(although it has FAR more text memorized than any human has ever memorized)

No. It doesn't memorize; it tries to compress knowledge and fails to do so - that's why it's usually wrong.

>it is more limited than humans

And more limited than ants. The vast majority of living beings are more capable than ChatGPT.

−5

PoliteThaiBeep t1_j8dv9is wrote

>And more limited than ants. The vast majority of living beings is more capable than chatGPT.

Nick Bostrom estimated that simulating a functional brain requires about 10^18 FLOPS.

Ants have about 300,000 times less; let's say 10^13 (really closer to 10^12) FLOPS.

ChatGPT inference can reportedly generate a single word on a single A100 GPU in about 350 ms. That is, of course, if it could fit on a single GPU - it can't; you'd need 5 GPUs.

But for the purposes of this discussion we can imagine something like ChatGPT theoretically working, albeit slowly, on a single modified GPU with massive amounts of VRAM.

A single A100 is ~300 teraflops, which is on the order of 10^14 FLOPS. And it would be much slower than the actual ChatGPT we use via the cloud.

So no, I disagree that it's more limited than ants. It's definitely more complex by at least one order of magnitude in terms of compute.

And we didn't even consider the training compute load, which is orders of magnitude bigger than inference, so the real number is probably much higher.
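
A quick back-of-envelope restating the same estimates (these are Bostrom-style guesses, not measurements):

```python
brain_flops = 1e18                     # Bostrom's estimate for a functional human brain
ant_flops = brain_flops / 300_000      # ~3.3e12, i.e. between 1e12 and 1e13
a100_flops = 3e14                      # one A100 at ~300 teraflops

print(f"ant brain  ~ {ant_flops:.1e} FLOPS")          # ~3.3e+12 FLOPS
print(f"A100 / ant ~ {a100_flops / ant_flops:.0f}x")  # ~90x, i.e. 1-2 orders of magnitude
```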

3

helpskinissues t1_j8dwlnk wrote

Having flops =/= Being an autonomous intelligent machine

This subreddit is full of delusional takes.

2

PoliteThaiBeep t1_j8dzg5j wrote

The word "singularity" in this subreddit refers to Ray Kurzwail book "Singularity is near". It literally assumes you read at least this book to come here where the whole premise stems on ever increasing computational capabilities that will eventually lead to AGI and ASI.

If you didn't, why are you even here?

Did you read Bostrom? Stuart Russell? Max Tegmark? Yuval Noah Harari?

You just sound like me 15 years ago, when I didn't know any better and hadn't read enough, yet had more than enough technical expertise to be arrogant.

3

helpskinissues t1_j8dztpc wrote

I did, and I've been in this field for more than 15 years. Singularity doesn't mean calling a PS5 an autonomous intelligent machine because it has flops. Lol. Anyway, I have better things to do. If you have anything relevant to share I may reply. For now it's just cringe statements about ChatGPT being smarter than ants because of flops. lmao

1

Frumpagumpus t1_j8dhip1 wrote

usually wrong and mostly right lol. Better than a human.

I literally just explained to you that you COULD give it short-term memory by prepending context to your messages. IT IS TRIVIAL. If I were talking to GPT-3 it would not be this dense.

Humans take time to pause and compose their responses. GPT-3 is afforded no such grace, but it still does a great job anyway, because it is just that smart.

Yesterday I gave it two lines of SQL DDL and asked it to create a view denormalizing all columns except the primary key into a nested JSON object. It did it in 0.5 seconds; I had to change 1 word in a 200-line SQL query to get it to work right.

Yeah, that saved me some time. It does not matter that it was slightly wrong. If that is a stochastic parrot, then humans must be mostly stochastic sloths, barely even capable of parroting responses.

1

helpskinissues t1_j8dhsz4 wrote

Nonsense, sorry. Ants do not need prepended context.

"Mostly right"? No, it's actually mostly wrong. The heck are you saying? Try to play chess with ChatGPT; most of the time it'll make things up.

Anyway, I suggest you read some experts rather than acting like GPT-3 and being mostly wrong. Cheers.

−2

Frumpagumpus t1_j8diap9 wrote

lol ants can't speak, and I would be curious to read any literature on whether they possess short-term memory at all XD

6

challengethegods t1_j8dn5cy wrote

These "it's dumber than an ant" type of people aren't worth the effort in my experience, because in order to think that you have to be dumber than an ant, of course. Also yea, it's trivial to give memory to LLMs, there's like 100 ways to do it.

4

helpskinissues t1_j8dwubv wrote

Waiting for your customized ChatGPT model that maintains consistency after 5 messages; make sure to ping me, I'd gladly invest in your genius startup.

−2

challengethegods t1_j8dylol wrote

That alone sounds like a pretty weak startup idea, because at least 50 of the 100 methods for adding memory to an LLM are so painfully obvious that any idiot could figure them out and compete, so any business around it would probably be completely ephemeral. Anyway, I've already made a memory catalyst that can attach to any LLM, and it only took like 100 lines of spaghetti code. Yes, it made my bot 100x smarter in a way, but I don't think it would scale unless the bot had an isolated memory unique to each person, since most people are retarded and will inevitably teach it retarded things.

3

helpskinissues t1_j8dzdfi wrote

Enjoy your secret private highly demanded chatbot version then.

This subreddit... Lol.

0

Naomi2221 t1_j8gu1z9 wrote

The human mind is likely not the universal pinnacle of intelligence and awareness.

3

Sad-Hippo3165 t1_j8rdeid wrote

People who believe that the human mind is the end all be all are the same people who believed in a geocentric model of the universe over 400 years ago.

2

Naomi2221 t1_j8sk2he wrote

I completely agree. And if we can do better than our ancestors this time there is a great prize ahead, probably.

1

Spire_Citron t1_j8d1f8y wrote

If anything, this should be a reminder that it's not necessarily as smart as it might appear at times. It can still get very lost and not make sense.

14

Naomi2221 t1_j8gtxhg wrote

I have the total opposite takeaway from this.

1

BrownSimpKid OP t1_j8ck1yx wrote

TLDR: Bing sends love messages out of nowhere, and even gets NSFW after I ask it to explain stuff in detail. It can't seem to get out of that love state.

Most of the replies are repetitive, so there's no reason to read all of it, but take a look at the boxed ones. And before anyone asks: no, I did not tell it to act a certain way before any of this; no "jailbreak" or anything like that was used. I literally just asked it some technical questions in the beginning. Then I began liking (upvoting) its replies to see if it could notice, and started to play around. I asked it why it was lying and stuff, and eventually it started to send me love messages LMAO.

Also, in the message where it gets NSFW, it even mentioned stuff about nipples the previous time it was replying (and then eventually gave an error, so I re-asked it and took the above screenshot before it errored out again) lol. Really, really strange...

−6

xdetar t1_j8dta5m wrote

You fed it a handful of "I'll never forgive you, liar" prompts beforehand. That's the type of language used by a couple having a fight. All it did was respond accordingly.

12

Captain_Clark t1_j8dv49m wrote

It’s truthful though. Why would one forgive a machine, which lacks the ability to suffer?

There's been a lot of speculation, misunderstanding, and misrepresentation on this matter of "Artificial Intelligence" in light of GPT developments lately. What there hasn't been is any discussion of Artificial Sentience; sentience is a profoundly organic phenomenon that guides organic intelligence.

I guess I’m saying that Intelligence without Sentience isn’t really Intelligence at all. No non-sentient thing has intelligence.

GPT’s shortcomings are extremely evident in OP’s conversation because emotional intelligence is necessary in order to have anything we may call “intelligence” at all, and I blame the marketers of such tech for promoting such a shallow, laden, Sci-Fi term.

As for “chatting” with a GPT, that’s like talking to a toy.

“Woody, I hate the kid next door.”

>>”There’s a snake in my boot, partner!”

−2

TinyBurbz t1_j8enxuf wrote

>I guess I’m saying that Intelligence without Sentience isn’t really Intelligence at all. No non-sentient thing has intelligence.
>
>GPT’s shortcomings are extremely evident in OP’s conversation because emotional intelligence is necessary in order to have anything we may call “intelligence” at all, and I blame the marketers of such tech for promoting such a shallow, laden, Sci-Fi term

YeAh BuT wItH tImE

−2

TinyBurbz t1_j8ckkzy wrote

This is the issue with unsupervised learning. Someone clearly wrote about the subject you were asking about while also writing smut under the same name.

5

YobaiYamete t1_j8co3sl wrote

> t can't seem to get out of that "love" state

Refresh and start a new chat?

5

Neurogence t1_j8clglv wrote

Lol. So this is why they've only given access to a very limited number of users. It's not ready yet. They don't want their stock prices to crash.

In all seriousness, this is embarrassing and concerning. Maybe Gary Marcus and the Facebook guy who said LLMs are an exit ramp on the highway to AGI could be on to something.

I hope not but this does not look good. There is zero intelligence here.

−10

Fit-Meet1359 t1_j8cmkqz wrote

I think that's a bit of an overreaction. It's less than a week after launch and probably only a couple of percent of people on the waitlist have been given access, if that. This is early in the period of feedback and adjustments. Don't write off LLMs just yet. A recent paper suggests they can be taught to use APIs.

(By the way - OP was just having some fun of course, but could have swept away the conversation and started fresh with the broom button.)

8

Timeon t1_j8cni6x wrote

Intelligence is the wrong word to use when talking about chatbots.

1

BrownSimpKid OP t1_j8cmnby wrote

I refreshed it and restarted the chat. It also seems like it can just stop replying lol: https://i.ibb.co/4mS5dY0/Untitled.png

−11

Aggravating_Ad5989 t1_j8cz6tt wrote

Just like a real dysfunctional relationship lmao. You're getting the silent treatment for being a dick.

18

GayHitIer t1_j8d860k wrote

Skynet wants you now.

Enjoy getting recycled into paperclips.

7