
Kogni t1_j3e7un3 wrote

Yep, I get what you mean, but I think the critical point is your last sentence, and that's where I disagree. I think LLMs largely already have this capability, and can show it when faced with well-designed prompts.

For your example, here's a prompt design I came up with pretty quickly, and that might generalize to the kind of "assistant" use case decently well:


Prompt: You are an AI assistant aiming to provide accurate answers to the questions of your users. For each answer, follow a logical chain of reasoning to arrive at the correct answer. Then write "OUTPUT: " and summarize that answer in as few words as possible. If the answer can be summarized using only a number or range of numbers, only summarize using those numbers. The first user question is: Please, tell me the average lifespan of a cat

Response: To determine the average lifespan of a cat, we must first consider the various factors that can affect a cat's lifespan, such as breed, health, and environmental conditions. We can then gather data from a variety of sources, such as veterinary studies and population statistics, to determine the average lifespan of cats in general. After analyzing this data, it is clear that the average lifespan of a cat is approximately 12-16 years. However, some cats may live longer or shorter lives depending on the factors mentioned earlier. OUTPUT: 12-16 years.


Now, all we need to do is hide the complex prompt from the user, search the response for "OUTPUT:", strip all tokens appearing before that marker from the response (maybe keeping them around in the background in case the user asks for more detail), and we've got an assistant that will look like this:
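To make that concrete, here's a minimal sketch of the wrapper logic in Python. The `TEMPLATE` string just reuses the prompt above, and `call_llm` is a placeholder for whatever model API you'd actually use -- both are assumptions, not a specific library:

```python
# Minimal sketch of the hidden-prompt assistant described above.
# TEMPLATE reuses the prompt from this comment; call_llm is a
# placeholder for your model API of choice (an assumption here).

TEMPLATE = (
    "You are an AI assistant aiming to provide accurate answers to the "
    "questions of your users. For each answer, follow a logical chain of "
    'reasoning to arrive at the correct answer. Then write "OUTPUT: " and '
    "summarize that answer in as few words as possible. If the answer can "
    "be summarized using only a number or range of numbers, only summarize "
    "using those numbers. The first user question is: {question}"
)

def extract_answer(response: str) -> tuple[str, str]:
    """Split a raw model response into (reasoning, short_answer).

    Everything before the OUTPUT marker is the chain of reasoning,
    kept around in case the user asks for more detail.
    """
    marker = "OUTPUT:"
    idx = response.rfind(marker)
    if idx == -1:
        # Model ignored the format; fall back to the full response.
        return "", response.strip()
    reasoning = response[:idx].strip()
    short_answer = response[idx + len(marker):].strip()
    return reasoning, short_answer

def ask(question: str, call_llm) -> str:
    """Hide the template from the user; return only the short answer."""
    raw = call_llm(TEMPLATE.format(question=question))
    _reasoning, answer = extract_answer(raw)
    return answer
```

The user only ever sees what `ask` returns, while the reasoning stays available in the background.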


Prompt: Please, tell me the average lifespan of a cat

Response: 12-16 years.


ElvinRath t1_j3gc916 wrote

That sounds like we're gonna end up needing another AI to decide which kind of prompt is needed, haha
