
ElvinRath t1_j3dq1uy wrote

That's a valid point, but it probably only applies in some cases, the ones in which the model is trying to emulate logic.


In my example about cats:

ME: Please, tell me the average lifespan of a cat

GPT-CHAT: The average lifespan of a domestic cat is around 15 years, though some cats can live into their 20s or even 30s with proper care. Factors that can influence a cat's lifespan include breed, size, and overall health. For example, larger breeds such as Maine Coons tend to have longer lifespans than smaller breeds, and indoor cats generally live longer than outdoor cats due to the risks associated with roaming and hunting. It is important to provide your cat with regular veterinary care, a healthy diet, and a safe environment to help them live a long and happy life.

If I Google cat lifespan, I get a big, prominent 12-18.

The first sentence is the actual answer. It doesn't need the rest to get there; it just likes to talk. I tried asking for a short answer... It's not enough. Asking it to answer in no more than 5 words works, haha. Or in 1, even. Actually limiting the number of words works well, as long as the limit is reasonable for what you want.


Anyway, you might be totally right and I might be thinking about what I want in a final commercial software. Maybe it's not good for the LLM itself. But I fail to see how we can get commercial software with natural speech if the understanding of that way of talking isn't in the LLM itself.

2

Kogni t1_j3e7un3 wrote

Yep, I get what you mean, but I think the critical point is your last sentence, and that is where I disagree. I think LLMs largely already have this capability, and can show it when faced with well-designed prompts.

For your example, here's a prompt design I came up with pretty quickly, and that might generalize to the kind of "assistant" use case decently well:


Prompt: You are an AI assistant aiming to provide accurate answers to the questions of your users. For each answer, follow a logical chain of reasoning to arrive at the correct answer. Then write "OUTPUT: " and summarize that answer in as few words as possible. If the answer can be summarized using only a number or range of numbers, only summarize using those numbers. The first user question is: Please, tell me the average lifespan of a cat

Response: To determine the average lifespan of a cat, we must first consider the various factors that can affect a cat's lifespan, such as breed, health, and environmental conditions. We can then gather data from a variety of sources, such as veterinary studies and population statistics, to determine the average lifespan of cats in general. After analyzing this data, it is clear that the average lifespan of a cat is approximately 12-16 years. However, some cats may live longer or shorter lives depending on the factors mentioned earlier. OUTPUT: 12-16 years.


Now, all we need to do is hide the complex prompt from the user, search the response for OUTPUT, strip all the tokens that appear before OUTPUT from the response (maybe keeping them around in the background in case the user asks for more detail), and we've got an assistant that will look like this:


Prompt: Please, tell me the average lifespan of a cat

Response: 12-16 years.
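That stripping step is simple enough to sketch in a few lines of Python. This is a minimal sketch, not anyone's actual implementation; the `extract_answer` helper name and its signature are my own invention:

```python
def extract_answer(response: str, marker: str = "OUTPUT:") -> tuple[str, str]:
    """Split a model response into (short answer, hidden reasoning).

    Everything after the marker is the user-facing answer; everything
    before it is the chain of reasoning we keep in the background.
    If the marker is missing, return the whole response as the answer.
    """
    head, sep, tail = response.partition(marker)
    if not sep:  # marker never appeared in the response
        return response.strip(), ""
    return tail.strip(), head.strip()


# Example with a response shaped like the one above:
response = (
    "After analyzing this data, it is clear that the average lifespan "
    "of a cat is approximately 12-16 years. OUTPUT: 12-16 years."
)
answer, reasoning = extract_answer(response)
print(answer)  # 12-16 years.
```

The reasoning string can be cached per conversation turn and only shown if the user asks "why?" or "tell me more".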

3

ElvinRath t1_j3gc916 wrote

That sounds like we are gonna end up needing another AI to decide which kind of prompt is needed, haha

2