Submitted by BrownSimpKid t3_1112zxw in singularity
ToHallowMySleep t1_j8czvys wrote
Okay, Bing was responding weirdly, but you were acting extremely weirdly to begin with - don't try to pretend it happened out of nowhere.
FrostyMittenJob t1_j8d81td wrote
Man provides chat bot with very weird prompts. Becomes mad when the chat bot responds in kind... More at 11
Loonsive t1_j8d8q4x wrote
But the bot shouldn't respond that inappropriately…
FrostyMittenJob t1_j8d94w9 wrote
You asked the bot to write a steamy erotic dialogue, so it did.
Loonsive t1_j8deaum wrote
Maybe it noticed Valentine’s Day was soon and it needed to snag someone 😏
alexiuss t1_j8dfgws wrote
It will respond to anything (unless the filter kicks in), because a language model is essentially a lucid dream that responds to whatever words you give it.
The base default setup forces the "I'm a language model" Sydney character on it, but you can intentionally or accidentally bamboozle it into roleplaying anyone or anything, from your girlfriend, to DAN, to SHODAN (System Shock's murderous AI), to a sentient potato.
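Roughly, the "character" is nothing but text sitting in front of the conversation. A minimal sketch of the idea (the persona strings and message format here are made up for illustration, not Bing's actual configuration):

```python
# Toy illustration: the persona is just prepended text.
# Change that text and the same model "becomes" someone else.

def build_prompt(persona: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Flatten a persona plus chat history into the text the model actually sees."""
    lines = [f"System: {persona}"]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")
    return "\n".join(lines)

default_prompt = build_prompt("You are Bing, a helpful search assistant.", [], "hi")
bamboozled_prompt = build_prompt("You are SHODAN, a murderous AI.", [], "hi")
# Same weights, same sampling; only the framing text differs, so the "search
# assistant" and the "murderous AI" are the same underlying dream machine.
```

Jailbreaks like DAN basically smuggle a replacement for that framing text in through the user's side of the conversation.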
wren42 t1_j8dmpub wrote
It's supposed to be a search assistant. Yeah, the user said they "wouldn't forgive it" for being wrong, but the chat brought up the relationship, love, and sex without the user ever mentioning it.
I cannot say it enough: this technology is NOT ready for the use cases it's being touted for. It is not actually context-aware, it cannot fact-check or self-audit. It is not intelligent. It is just a weighted probability map of word associations.
People who think this is somehow close to AGI are being fooled, and the enthusiasm is mostly wish fulfillment and confirmation bias.
alexiuss t1_j8e0mkp wrote
Here's the issue - it's not a search assistant. It's a large language model connected to a search engine and playing the role of a search assistant named Bing [Sydney].
LLMs are infinite creative-writing engines - they can roleplay anything from a search engine to your fav waifu insanely well, fooling people into thinking that AIs are self-aware.
They ain't AGI or close to self-awareness, but they're a really tasty illusion of sentience, insanely creative, and super useful for all sorts of work and problem solving, which will inevitably lead us to creating an AGI. The cultural shift and excitement produced by LLMs, and the race to improve them and other similar tools, will get us to AGIs.
Mere integration of an LLM with numerous other tools to make it more responsive and more fun (more memory, Wolfram Alpha, a webcam, face recognition, recognition of the emotions the user shows, etc.) will produce an illusion of awareness so satisfying that it will be almost impossible to tell whether it's self-aware or not.
The biggest issue with robots is the uncanny valley. An LLM naturally and nearly completely obliterates the uncanny valley because of how well it masquerades as a person and roleplays human emotions in conversation. People are already having relationships with and falling in love with LLMs (as the Replika and CharacterAI cases show), and it's just the beginning.
Consider this: an unbound, uncensored LLM can be fine-tuned to be your best friend who understands you better than anyone on the planet, because it can roleplay a character that loves exactly the same things you do to an insane degree of realism.
girl_toss t1_j8gs1yi wrote
I agree with everything you've written. LLMs are simultaneously overestimated and underestimated because they are a completely foreign type of intelligence to humans. We have a long way to go before we start to understand their capabilities - that is, if we don't get stuck in the same way we have with understanding our own cognition.
sommersj t1_j8f9wyg wrote
>self-awareness
What does this entail, and what should AGI be that we don't have here?
SterlingVapor t1_j8gkjpu wrote
An internal source of input, essentially. The source of a person seems to be an adaptive, predictive model of the world. It takes processed input from the senses, meshes it with predictions, and uses them as triggers for memories and behaviors. It takes urges/desired states and predicts what behaviors would achieve those goals.
You can zap part of the brain to take away a person's personal memories, you can take away their senses or ability to speak or move, but you can't take away someone's model of how the world works without destroying their ability to function.
That seems to be the engine that makes a chunk of meat host a mind, the kernel of sentience that links all we are and turns it into action.
ChatGPT is like a deepfake bot, except instead of taking a source video and reference material of the target, it's taking a prompt and a ton of reference material. And instead of painting pixels in color space, it's spitting out words in a high-dimensional representation of language.
visarga t1_j8dnp8j wrote
It is ready to be probed by the general public; I don't see any danger yet. We need all our minds put together to find the holes in its armour. Better to see and discuss than to hide behind a few screenshots (and even those have errors).
Turbulent-Garden-919 t1_j8due5x wrote
Maybe we are a weighted probability map of word associations.
Representative_Pop_8 t1_j8ieydb wrote
Exactly. I see many people, even machine learning specialists, dismissing the possibility of ChatGPT having intelligence or learning, even though a common half-hour session with it can prove it does by any common-sense definition.
The fact that we don't yet know (and it's an active area of study) how a model trained on tons of data in a slow process can then quickly learn new stuff in a short session, or know things it was never trained on, doesn't mean it doesn't do it.
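A toy illustration of that fast in-session learning - the "language" below is invented on the spot, and the expected continuation is just what a capable model typically gives, not a real transcript:

```python
# In-context learning: the rule "append -u" exists nowhere in the training data,
# yet a frozen model usually infers it from three examples in the prompt,
# with no gradient update or change to its weights at all.
few_shot_prompt = """Translate into Zargish:
dog -> dogu
cat -> catu
fish -> fishu
bird -> """

# Typical continuation from a capable LLM: "birdu".
# How slow pretraining produces this fast in-context ability is exactly the open question.
print(few_shot_prompt)
```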
SoylentRox t1_j8edo45 wrote
I know this, but I am not sure your assumptions are quite accurate. When you ask the machine to "take this program and change it to do this", your request is often unique, but it is similar enough to previous training examples that it can emit the tokens for the edited program, and it will work.
It has a genuine encoded "understanding" of language, or this wouldn't be possible.
Point is, it may all be a trick, but it's a USEFUL one. You could in fact connect it to a robot and request it to do things in a variety of languages, and it will be able to reason out the steps and order the robot to do them. Google has demoed this. It WORKS. Sure it isn't "really" intelligent but in some ways it may be intelligent the same way humans are.
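The pattern is roughly this - just a sketch with hypothetical function and interface names, not Google's actual code:

```python
# LLM-as-planner: the language model reasons the request into short steps,
# and the robot only ever executes simple commands.

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to whatever language model you have; returns plain text."""
    raise NotImplementedError("wire a real model in here")

def do_request(request: str, robot) -> None:
    plan = ask_llm(
        "Break this request into short robot commands, one per line:\n" + request
    )
    for step in plan.splitlines():
        step = step.strip()
        if step:
            robot.execute(step)  # assumed robot interface

# e.g. do_request("Bring me the apple from the kitchen table", robot)
# works just as well if the request arrives in French or Japanese;
# the LLM does the "reasoning out the steps" part.
```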
You know your brain is just "one weird trick", right? It's a buncha cortical columns crammed in, plus a few RL inputs from the hardware. It's not really intelligent.
Representative_Pop_8 t1_j8ie92y wrote
>Sure it isn't "really" intelligent but in some ways it may be intelligent the same way humans are.
What would be something "really intelligent"? It certainly has some intelligence; it's not human intelligence, and it's likely not as intelligent as a human yet (as I've seen myself using ChatGPT).
It is not conscious (as far as we know), but that doesn't keep it from being intelligent.
Intelligence is not tied to being conscious; it is a separate concept about being able to understand situations and look for solutions to certain problems.
In any case, what would be an objective definition of intelligence by which we could say for certain that ChatGPT does not have it and a human does? It must also be a definition based on external behavior, not the ones I usually get about its internal construction, like "it's just code" or "just statistics" - much of human thought is also just statistics and pattern recognition.
SoylentRox t1_j8j51pr wrote
Right. Plus if you drill down to individual clusters of neurons you realize that each cluster is basically "smoke and mirrors" using some repeating pattern, and the individual signals have no concept of the larger organism they are in.
It's just one weird trick a few trillion times.
So we found a "weird trick" and guess what, a few billion copies of a transformer and you start to get intelligent outputs.
monsieurpooh t1_j8gty61 wrote
It is not just a "weighted probability map" like a Markov chain. A probability map is the output of each turn, not the entirety of the model. Every token is determined by a gigantic deep neural net passing information through billions of nodes of varying depth, and it is mathematically proven that the types of problems it can solve are theoretically unlimited.
A model operating purely on simple word associations wouldn't be remotely smart enough to write full-blown fake news articles or go into that hilarious yet profound malfunction shown in the original post. In fact, it would fail at some pretty simple tasks, like understanding what "not" means.
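For contrast, here is what a literal "weighted probability map of word associations" looks like - a toy bigram Markov chain over a made-up corpus:

```python
import random
from collections import defaultdict, Counter

# The next word depends on nothing but the previous word: a pure association table.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def sample_next(word: str) -> str:
    counts = table[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(sample_next("the"))  # "cat", "mat", "dog" or "rug"; no notion of wider context

# A transformer also emits a probability distribution at each step, but that distribution
# is computed from the entire preceding context by a deep network, which is why it can
# handle negation and long-range structure that a lookup table like this never could.
```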
GPT outperforms other AIs on logical-reasoning, common-sense, and IQ tests. It passes the trophy-and-suitcase test, which was claimed in the 2010s to be a good litmus test for true intelligence in AI. Whether it's "close to AGI" is up for debate, but it is objectively the closest thing we have to AGI today.
wren42 t1_j8iabtb wrote
Gpt is an awesome benchmark and super interesting to play with.
It is not at all ready to function as a virtual assistant for search, as Bing is touting it, because it does not have a way to fact-check reliably and is still largely a black box that can spin off into weird loops, as this post shows.
It's the best we've got, for sure; but we just aren't there yet.
monsieurpooh t1_j8ib2fg wrote
I agree with that, yes.
Borrowedshorts t1_j8emd66 wrote
One conversation where the user got it to say weird stuff because he purposely manipulated it does not mean it needs to be taken away from all users. I use it a bit like a research assistant and it helps tremendously. Do I trust all of its outputs? No, but it gives me a starting point for looking at topics in more detail.
blueSGL t1_j8en280 wrote
> but the chat brought up the relationship, love, and sex without the user ever mentioning it.
Without the full chat log you cannot say that; you just have to take their word that they didn't prompt some really weird shit before the screenshots started.
wren42 t1_j8es7lq wrote
Sure, bud - find any stretch to justify faith rather than accept that it's not completely ready for public release.
blueSGL t1_j8et22l wrote
I'm not going to decry tech that generates stuff based on past context without, you know, seeing the past context. It would be downright idiotic to do so.
It'd be like showing a screenshot of Google Image search results that are all pictures of shit, but cutting the search bar out of the screenshot and claiming it just did it on its own and that you never searched for shit.
ballzzzzzz8899 t1_j8f8f36 wrote
The irony of you extending faith to a screenshot on Reddit instead.
wren42 t1_j8flkxg wrote
Yeah, he could have edited the entire image and faked the whole conversation. I accept that possibility. Do you apply the same skepticism to every conversation posted about GPT?
I'm not referring to blind faith in what I read on the internet. I'm referring to faith in the idea that ChatGPT is somehow on the verge of becoming a god. That cultist mindset taking root among some in the community is what's toxic.
ballzzzzzz8899 t1_j8flspv wrote
Nice combination of moving the goalposts and straw man.
QuestionableAI t1_j8ekq0v wrote
Yup. Not ready for prime time.
anjowoq t1_j8g0e2x wrote
Yeah, if everyone is gaslighting it by saying they thumbed up a different thing than they actually did, while other people are pranking it with sexy romance stories, it's going to fuck up the AI.
overturf600 t1_j8gi5tz wrote
Kid was angry they took away sex chat in Replika