
audioen t1_jdz1ol1 wrote

An LLM, wired like this, is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output; it can't reserve space to think or to refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though that same capacity is what lets it do this one-shot input-to-output translation that already seems to convince so many. Yet, in some sense, it is ultimately just looking things up in something like a generalized, internalized library that holds most of human knowledge.

I think the next step in LLM technology is to address these shortcomings, and people are already trying various methods. Add tools like calculators and web search so the AI can look information up rather than trying to memorize it. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection, where it reads its own answer, judges whether that answer turned out well, detects whether it made a reasoning mistake or hallucinated part of the response, and then goes back and edits those parts to be correct.
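
Roughly, the loop I mean looks like this. A minimal sketch only: `complete` here is a stand-in for whatever text-completion call you happen to have, not any particular API, and the prompts are illustrative.

```python
from typing import Callable

def solve_with_reflection(task: str, complete: Callable[[str], str]) -> str:
    """Decompose a task, answer it, then self-check and revise once."""
    # 1. Decompose the task into subtasks.
    plan = complete(f"Break this task into numbered subtasks:\n{task}")

    # 2. Answer the main task using the subtask breakdown.
    draft = complete(
        f"Task: {task}\n\nSubtask plan:\n{plan}\n\n"
        "Work through the subtasks and give a final answer."
    )

    # 3. Self-reflection: look for reasoning mistakes or hallucinations.
    critique = complete(
        f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
        "List any factual errors or reasoning mistakes in the draft. "
        "If there are none, reply 'OK'."
    )

    # 4. Revise only if the critique found problems.
    if critique.strip().upper().startswith("OK"):
        return draft
    return complete(
        f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
        f"Problems found:\n{critique}\n\nRewrite the answer with these fixed."
    )
```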

Perhaps we will even add the ability to learn from experience somewhere along the line, where the AI runs a training pass at the end of each day on its own outputs, weighted by their self-assessed and externally observed quality, or something like that. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how the AI can drop even language internally and mostly use a language module to interface with humans and their library of written material.

2

Dizzlespizzle t1_jdzh82t wrote

How often do you interact with Bing or ChatGPT? Bing has already demonstrated the ability to recall my queries going back over a month, so I'm not sure what you mean exactly. Is 3.5 -> 4.0 not evolution? You can ask things on 3.5 that become an entirely different level of nuance and intelligence when asked on 4.0. You say it can't think to refine its answer, but when answering questions about itself it has literally flagged something mid-generation, immediately deleted what it just wrote, and replaced it all with "sorry, that's on me.. (etc)" when it changed its mind about what it can tell you. If you think I'm misunderstanding what you're saying on any of this, feel free to correct me.

2

czk_21 t1_jdzr8s1 wrote

> it always predicts the same output probabilities from the same input

It does not; you can adjust that with the "temperature" setting.

The temperature determines how greedy the generative model is.

If the temperature is low, the probability of sampling anything other than the token with the highest log probability is small, so the model tends to output the most likely text: usually the most correct, but rather boring, with little variation.

If the temperature is high, the model can output, with fairly high probability, tokens other than the most likely ones. The generated text is more diverse, but there is a higher chance of grammar mistakes and nonsense.
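
Concretely, temperature is just a rescaling of the logits before the softmax. A minimal sketch, assuming you have the raw logits for the next token:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index from logits scaled by temperature."""
    if temperature <= 0:
        # Greedy limit: always pick the highest-logit token (deterministic).
        return int(np.argmax(logits))
    scaled = logits / temperature          # low T sharpens, high T flattens
    scaled -= scaled.max()                 # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))
```

At temperature 0 this collapses to greedy decoding, which is the fully deterministic "same output from the same input" case the parent comment describes.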

1