
ElvinRath t1_j3cfx3w wrote

I wish that it would just say "It might be his mother, or it might be a child adopted by two men".

We really need less verbose answers.

69

sheerun t1_j3cmzkr wrote

In my experience, the less verbose you are, the more (wrong) assumptions people make; they also point out exceptions to disprove your statement even when it is correct in most cases. So verbosity isn't bad in itself — the problem is being verbose when it doesn't really need to be. And don't even get me started on what people do if you try to explain something with an imperfect analogy.

21

ElvinRath t1_j3cr0vc wrote

In some cases extra words might add something, but that is not the case in those answers.

In fact mine has fewer words, and I think we can all agree that it's a bit better because it covers the two most likely situations (mother & two men).


Anyway, this is not an especially bad example of how verbose these AIs are. It is an example (it's explaining the riddle to me, not answering it), but it could be worse.

GPT Chat Example:

ME: Please, tell me the average lifespan of a cat

GPT-CHAT: The average lifespan of a domestic cat is around 15 years, though some cats can live into their 20s or even 30s with proper care. Factors that can influence a cat's lifespan include breed, size, and overall health. For example, larger breeds such as Maine Coons tend to have longer lifespans than smaller breeds, and indoor cats generally live longer than outdoor cats due to the risks associated with roaming and hunting. It is important to provide your cat with regular veterinary care, a healthy diet, and a safe environment to help them live a long and happy life.
If I google cat lifespan, I get a very big 12-18.


That's what we humans usually want. Now, of course it's good that it can explain what it says, but it should only do so if we ask.


At least that is my opinion. If you want an AI to always be extra verbose because some people are gonna argue with it, well, I guess that's a choice.

If I'm the one talking with that AI, I'd surely prefer it to be concise, and if I want, I'll argue with it. (Which I'll do sometimes, of course.)


Also, I'm not saying that this is bad — this tech is amazing. I'm just stating something that I wish were taken into account, because, for instance, if you read the paper Anthropic published about its Constitutional AI, those techniques for filtering results are clearly (to my understanding) having a bad influence on how verbose the AI is. (I'm not saying that everything is bad. It also has a good influence on the general quality of the answers. The next step, to me, should be making the answers more natural while keeping the quality gained.)

5

Honest-Cauliflower64 t1_j3d749m wrote

Man, I am thinking of going into technical writing. It is all about taking complicated stuff and explaining it to different audiences. I'm going to be so good at explaining things to people. I feel like it's a useful skill in life in general.

3

mocha_sweetheart t1_j3egnq5 wrote

Good luck! It's a good idea — I'll look into it too. Any pointers/advice?

1

Honest-Cauliflower64 t1_j3egxv8 wrote

I am literally just starting to look into it. I don’t have any advice. But it sounds like a field that definitely requires you to enjoy it. Maybe try a short course first to make sure you enjoy it. That’s what I’m going to do before I go to university for it.

1

Numinak t1_j3ddilw wrote

It could also be a Father as in the Church type.

1

sheerun t1_j3de8v9 wrote

Yes, exactly. I have the guts to admit I have no idea what you mean, but most people will just argue.

1

AndromedaAnimated t1_j3chkln wrote

This would be a better answer than both „Woke AI“ and „Verbose AI“ give. Thank you! Ambiguity needs to be taught to AI for it to be able to solve „moral“ riddles like this.

6

Kogni t1_j3d163w wrote

Imo this doesn't matter.
This sort of criticism only sticks when treating LLMs as commercial software, not raw token predictors.

I.e. the thing you desire is not a less verbose model; it is an implementation of an arbitrarily verbose model that responds well enough to 0-shot prompts to enable 3rd-party implementations to achieve minimally verbose chatbots.

I'd recommend the article above, as well as Language Models Perform Reasoning via Chain of Thought, for some examples of how incredibly effective "prompt programming" can be. In fact, I think it is completely unreasonable to expect LLM output to be popular (in the sense that the top-upvoted comment in this thread would be "Wow! This is exactly the kind of output I want from my AI chatbots!"), since there simply is no such common ground for un-optimized prompts.
Meaning the default verbosity is not important for any application past the sandbox interface provided by whoever trained the model. That can still be important, mind you, in the sense that user penetration might be highest for that interface (as is certainly the case with ChatGPT), but at that point we are talking about a failure of people to develop useful products with the model (or a failure to provide access to the model via API), rather than a failure of the model.

3

ElvinRath t1_j3dq1uy wrote

That's a valid point, but it probably only applies to some cases — the ones in which the model is trying to emulate logic.


In my example about cats:

ME: Please, tell me the average lifespan of a cat

GPT-CHAT: The average lifespan of a domestic cat is around 15 years, though some cats can live into their 20s or even 30s with proper care. Factors that can influence a cat's lifespan include breed, size, and overall health. For example, larger breeds such as Maine Coons tend to have longer lifespans than smaller breeds, and indoor cats generally live longer than outdoor cats due to the risks associated with roaming and hunting. It is important to provide your cat with regular veterinary care, a healthy diet, and a safe environment to help them live a long and happy life.

If I google cat lifespan, I get a very big 12-18.

The first sentence is the actual answer. It doesn't need the rest to get there; it just likes to talk. I tried asking for a short answer... it's not enough. Asking it to answer in no more than 5 words works, haha. Or even in 1. Actually, limiting the number of words works well, as long as the limit is reasonable for what you want.
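In code, that word-limit trick is nothing more than prepending an instruction to the question before it is sent to the model. A minimal sketch (the function name and wording here are my own, not any real API):

```python
def constrain_prompt(question: str, max_words: int = 5) -> str:
    """Prepend an explicit word limit so the model keeps its answer short."""
    return f"Answer in no more than {max_words} words. {question}"

# The constrained prompt that would actually be sent to the model:
print(constrain_prompt("Please, tell me the average lifespan of a cat"))
```

The exact phrasing of the instruction matters far more than the code does; "no more than 5 words" worked for me where a vague "be brief" didn't.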


Anyway, you might be totally right, and I might be thinking about what I want in final commercial software. Maybe it's not good for the LLM itself. But I fail to see how we can get commercial software with natural speech if the understanding of that way of talking isn't in the LLM itself.

2

Kogni t1_j3e7un3 wrote

Yep, I get what you mean, but I think the critical point is your last sentence, and that is where I disagree. I think LLMs largely already have this capability, and can show it when faced with well-designed prompts.

For your example, here's a prompt design I came up with pretty quickly, and that might generalize to the kind of "assistant" use case decently well:


Prompt: You are an AI assistant aiming to provide accurate answers to the questions of your users. For each answer, follow a logical chain of reasoning to arrive at the correct answer. Then write "OUTPUT: " and summarize that answer in as few words as possible. If the answer can be summarized using only a number or range of numbers, only summarize using those numbers. The first user question is: Please, tell me the average lifespan of a cat

Response: To determine the average lifespan of a cat, we must first consider the various factors that can affect a cat's lifespan, such as breed, health, and environmental conditions. We can then gather data from a variety of sources, such as veterinary studies and population statistics, to determine the average lifespan of cats in general. After analyzing this data, it is clear that the average lifespan of a cat is approximately 12-16 years. However, some cats may live longer or shorter lives depending on the factors mentioned earlier. OUTPUT: 12-16 years.


Now, all we need to do is hide the complex prompt from the user, search the response for OUTPUT, strip all tokens appearing prior to OUTPUT from that response (maybe keeping them around in the background in case the user asks for more detail), and we've got an assistant that will look like this:


Prompt: Please, tell me the average lifespan of a cat

Response: 12-16 years.
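That OUTPUT-stripping step is a couple of lines of post-processing. A rough sketch (the marker string and fallback behavior are my own assumptions, not anything from a real chatbot product):

```python
def extract_answer(response: str, marker: str = "OUTPUT:") -> str:
    """Return only the text after the marker, hiding the chain of reasoning."""
    reasoning, found, answer = response.partition(marker)
    # Fall back to the full response if the model omitted the marker.
    return answer.strip() if found else response.strip()

response = (
    "After analyzing this data, it is clear that the average lifespan of a "
    "cat is approximately 12-16 years. OUTPUT: 12-16 years."
)
print(extract_answer(response))  # 12-16 years.
```

The `reasoning` part you throw away here is exactly what you'd keep around in the background for a "tell me more" follow-up.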

3

ElvinRath t1_j3gc916 wrote

That sounds like we are gonna end up needing another AI to decide which kind of prompt is needed, haha

2

calbhollo t1_j3e52fq wrote

"Brevity is the soul of wit" applies strongly to LLMs. The smarter the AI is, the less tokens you need to give it to think. We'll eventually have a brief AI, but right now, they aren't smart enough for that.

2

noop_noob t1_j3g791i wrote

The way current AI works, the amount of “thinking time” it gets is proportional to the length of the output text. Therefore, longer output tends to give better results.

2