rpnewc

rpnewc t1_jdfwkax wrote

Self-preservation as a concept is something it can learn about, talk about, express, etc. But in order for it to act on it, we have to explicitly tune its instructions for that. For the sake of argument, even if the AI can act on it, it has to be given the controls. Nobody in their sane mind will do that. As a somewhat related analogy, if people could give control of their cars to people in other countries over the internet, it could cause a lot of mayhem, correct? Clearly the technology exists to do it and everyone is free to try. Why is this not a big problem today?

3

rpnewc t1_jb1k781 wrote

If you are successful in getting noticed, you may get sued. If you are just one guy (not a company), maybe not. But tread carefully. There may be a restricted licensing arrangement under which you could show your work if you want to, but I am not an expert there.

3

rpnewc t1_jb17dvp wrote

For sure it can be taught. But I don't think the way to teach it is to give it a bunch of sentences from the internet and expect it to figure out advanced reasoning. It has to be explicitly tuned toward that objective. A more interesting question is: how can we then do this for all domains of knowledge in a general manner? Well, that is the question. In other words, what is the master algorithm for learning? There is one (or a collection of them) for sure, but I don't think we are anywhere close to it. ChatGPT is simply pretending to be that system, but it's not.

1

rpnewc t1_jawxrjh wrote

Yes, ChatGPT does not have any idea what a trophy is, or what a suitcase is, or what brown is. But it has access to a lot of sentences with these words, and hence some of their attributes. So when you ask these questions, sometimes (random sampling) it picks the correct noun as the answer; other times it picks the wrong one. Ask it a logic puzzle with ten people as characters and see its reasoning capability.
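To illustrate the random-sampling point, here is a toy sketch. The probabilities are made up for illustration (not taken from any real model): if the model merely assigns weights to the two candidate nouns and samples from them, the wrong noun will come up some fraction of the time.

```python
import random

# Hypothetical next-word probabilities for the pronoun-resolution question
# "The trophy doesn't fit in the suitcase because it is too big. 'It' refers to the ___"
# These numbers are invented for the sketch, not real model outputs.
candidates = {"trophy": 0.7, "suitcase": 0.3}

def sample_answer(probs):
    # Plain categorical sampling, as happens when decoding with temperature > 0.
    r = random.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word

# Over repeated asks, the wrong noun shows up roughly 30% of the time here.
print([sample_answer(candidates) for _ in range(10)])
```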

7

rpnewc t1_jajt66i wrote

Clearly it's computation of some form that's going on in our brains too. So sentience needs to be better defined in terms of where it falls on the spectrum, with a simple calculator on one end and the human brain on the other. My personal take is that it falls much closer to the human brain than LLMs do. Even if we build a perfectly reasoning machine that solves generic problems like humans do, I still wouldn't consider it human-like until it raises purely irrational emotions like, "why am I not getting any girlfriends, or what's wrong with me?" There is no reason for anyone to build that into any machine. Most of the humanness lies in the non-brilliant part of the brain.

2