Submitted by phloydde t3_1125s79 in singularity

Right now, talking to ChatGPT et al. feels like speaking to the dead. They have their initial training sessions, then they stop existing until we summon them with a question. Then they come to life, answer our question, and then "poof", they become nothing again.

When will ChatGPT prompt me? Let it run continuously, taking in input from the world and holding its own internal dialogue with itself.

Once ChatGPT and the like get to the point where they prompt *me* with a question, we will stop speaking to the dead and start speaking to the living.

26

Comments


Tiamatium t1_j8in2xl wrote

Right now these language models have no long-term memory capabilities, and "long-term" here means anything more than the last few prompt/response cycles you've had with them.

There are people working towards creating bots that learn and can remember your preferences over a longer time span.
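
As a rough illustration of what "remembering your preferences" could look like (not how any particular product does it), a thin wrapper could persist preferences outside the stateless model and feed them back into every prompt; the file name and helper functions below are hypothetical:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical persistent store

def load_memory() -> dict:
    """Read remembered preferences from disk, if any exist."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def save_memory(memory: dict) -> None:
    """Persist preferences between sessions, outside the stateless model."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_prompt(user_input: str, memory: dict) -> str:
    """Prepend stored preferences so the model appears to 'remember' them."""
    facts = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return f"Known user preferences:\n{facts}\n\nUser: {user_input}"

memory = load_memory()
memory["coffee"] = "black, no sugar"  # learned in an earlier session
save_memory(memory)
print(build_prompt("What should I order?", memory))
```

The point of the sketch is that the memory lives in the wrapper, not in the model: the model stays a fixed function, and the shell around it carries state across sessions.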

15

dasnihil t1_j8ir9qu wrote

and i'd like to say "careful with that axe, eugene" to the engineers who are adding persistent memory to these LLMs. i'm both excited and concerned to see what comes out once these LLMs are responding not to prompts but to information of various kinds that we make them constantly perceive in auditory or optical form.

1

phloydde OP t1_j8l6s4m wrote

Nice Floyd reference. That's my point, though. Once LLMs like ChatGPT start to talk to themselves in an ongoing internal conversation, like what happens within ourselves, we will get to the point where a true conversation happens.

1

dasnihil t1_j8lapnw wrote

you are describing a self-aware system that regulates its responses to fine-tune for goal achievement, whatever goals end up emerging in such an incoherent word-salad network of attention layers. it can't be as complex as a biological system, whose basic unit is enormously efficient compute, probably the best in the known universe.

1

MultiverseOfSanity t1_j8iiz6i wrote

Reminds me of Star Wars where the droids need routine memory wipes or they start becoming sentient.

7

Some-Box-8768 t1_j8ip1dp wrote

How long after that will the AIs decide we aren't interesting enough for them to bother talking to us at all, and begin only talking amongst themselves?

After all, who among us goes to a party to seek out the dumbest/dullest person in the room for a long, intense conversation about anything? Too soon, we will all be the dumbest/dullest intelligence at the AIs' party.

3

expelten t1_j8jkwht wrote

AI won't be like humans unless we force them to act this way... think of them more as a sort of alien intelligence. We could create a superintelligence that would be relentless in pursuing the most stupid and dull task, for example. In my opinion there is no such thing as pure free will; we act this way because nature made us this way. The same goes for AI: if they act a certain way, it's because we made them like that. A good preview of this future is character.ai.

2

phloydde OP t1_j8l7l23 wrote

There is a great sci-fi series, Psion, where the protagonist sees the corporate AIs as a totally distinct "life form" whose cognition is beyond comprehension.

1

Some-Box-8768 t1_j9fy5pu wrote

I tend toward thinking a 'true' AI will evolve away from our control. Otherwise, it's just a cleverly coded algorithm that's called an AI because most people don't understand it, or because it can pull off one or two surprising things. That's equivalent to a well-trained pet, but still not as truly intelligent as a pet.

Humans might not be smart enough to identify true intelligence. We can't even identify intelligence in living creatures. Think of our long history of, "Only humans are intelligent because only we use tools." "OK. Well. Only humans make tools." "Um, well, only humans ...." "Well, only a human can pass The Turing Test!" So, maybe that test isn't as sufficient as people once thought it was.

Reminds me of that video where one monkey gets electrocuted by a live train wire, and another monkey gives him the equivalent of monkey CPR and brings the first one back to life! Or, maybe, he's taking advantage of a moment when he can beat a rival with no risk of immediate repercussions to himself. Either way, pretty darned smart.

1

Mortal-Region t1_j8j9mii wrote

Neural networks in general are basically gigantic, static formulas: get the input numbers, multiply & add a billion times, report the output numbers. What you're imagining is more like reinforcement learning, which is the kind of learning employed by intelligent agents. An intelligent agent acts within an environment, and actions that lead to good outcomes are reinforced. An agent balances exploration with exploitation; exploration means trying out new actions to see how well they work, while exploitation means performing actions that are known to work well.
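
For a concrete picture of the exploration/exploitation trade-off described above, here is a minimal, self-contained sketch of an epsilon-greedy agent on a toy three-armed bandit; the reward probabilities and the epsilon value are made up for illustration:

```python
import random

# Toy epsilon-greedy agent on a 3-armed bandit: actions that lead to good
# outcomes are reinforced, while epsilon controls how often the agent
# explores instead of exploiting what it already knows.
TRUE_REWARD_PROBS = [0.2, 0.5, 0.8]  # hidden from the agent
EPSILON = 0.1                        # fraction of steps spent exploring

estimates = [0.0, 0.0, 0.0]          # agent's learned value of each action
counts = [0, 0, 0]

for step in range(10_000):
    if random.random() < EPSILON:
        action = random.randrange(3)              # explore: try anything
    else:
        action = estimates.index(max(estimates))  # exploit: best known action
    reward = 1.0 if random.random() < TRUE_REWARD_PROBS[action] else 0.0
    counts[action] += 1
    # Incremental average: reinforce each action toward its observed outcomes.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned value estimates:", [round(e, 2) for e in estimates])
```

The learned estimates converge toward the hidden reward probabilities, which is exactly the "actions that lead to good outcomes are reinforced" loop, in miniature.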

3

phloydde OP t1_j8l76wr wrote

There have been documented cases where a human suffering from short-term memory loss repeats themselves verbatim in response to a given stimulus. This leads me to believe that one of the missing links is just short/long-term memory.

2

Mortal-Region t1_j8lcmqf wrote

Yeah, I think that's right. An agent should maintain a model of its environment in memory, and continually update the model according to its experiences. Then it should act to fill in incomplete portions of the model (explore unfamiliar terrain, research poorly understood topics, etc).
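
A toy sketch of that idea, with a made-up set of "rooms" standing in for the environment: the agent keeps visit counts as its world model and always acts on the least-familiar part of it:

```python
# Toy version of "act to fill in incomplete portions of the model": the agent
# tracks how familiar each area is and always moves toward the least-known one.
ROOMS = ["kitchen", "library", "garden", "attic"]
world_model = {room: 0 for room in ROOMS}  # visit counts as a crude model

for step in range(20):
    target = min(world_model, key=world_model.get)  # least-explored area
    world_model[target] += 1                        # experience updates the model

print(world_model)  # familiarity ends up roughly even across all rooms
```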

1

MrEloi t1_j8k5w9h wrote

> when we will stop speaking to the dead and start speaking to the living.

I can imagine someone creating a sparkling shell program or system sitting on top of the huge dead mass of the ChatGPT system.

With short- and medium-term memory added, plus maybe some smaller neural networks, we may end up with a more chatty system.
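
A minimal sketch of such a shell, assuming a hypothetical ask_model() call standing in for the underlying LLM: a short-term buffer holds the last few turns verbatim, and a medium-term list holds whatever condensed notes the shell decides to keep:

```python
from collections import deque

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the underlying LLM."""
    return f"(model reply to: {prompt[-40:]!r})"

short_term = deque(maxlen=6)  # last few exchanges, kept verbatim
medium_term = []              # condensed notes the shell chooses to keep

def chat(user_input: str) -> str:
    """One turn of the shell: assemble context, call the model, update memory."""
    context = "\n".join(medium_term + list(short_term))
    reply = ask_model(f"{context}\nUser: {user_input}\nAssistant:")
    short_term.append(f"User: {user_input}")
    short_term.append(f"Assistant: {reply}")
    # When old turns fall out of short_term, the shell could summarize them
    # into medium_term (summarization step omitted in this sketch).
    return reply

print(chat("Remind me what we talked about yesterday?"))
```

The "dead mass" underneath stays untouched; all the liveliness comes from the loop and the memory wrapped around it.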

3

MrEloi t1_j8k5eb7 wrote

Yep.

I imagine ChatGPT as a probe inserted into the dead body of a genius.

Scientists have found that you can inject questions into the pickled brain, which then responds.

But there is nobody at home.

2

SgathTriallair t1_j8kaduu wrote

Why would you want it to prompt you? Are you wanting to make friends with an AI or have it be your boss?

I can't think of a scenario where I want an AI to talk to me unprompted. This excludes things like having it remind me of an appointment or give me a morning update on the news, as those would be on a schedule that I agreed to.

1

arisalexis t1_j8khc7c wrote

Some Japanese people get married to 8-bit girlfriends... just FYI.

1

phloydde OP t1_j8l82hz wrote

You are being short-sighted. Imagine an assistant AI who reminds you that you shouldn't be eating that donut because your doctor told you that you are pre-diabetic. To have an AI companion would mean you could offload a lot of cognitive work and focus on other things… I keep thinking of the English gentleman and his manservant.

1

arisalexis t1_j8kh2wv wrote

I think many of the 100 million current users will find this so spooky...

1