
monsieurpooh t1_j2xw6ta wrote

These models are trained to do only one thing really well: predict what word should come after an existing prompt. They learn this by reading millions of examples of text; the input is the words so far and the output is the next word, and that is the entirety of the training process. They aren't taught to look up sources, summarize, or "run nootropics through its neural network" or anything like that.
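
To make that concrete, here's a rough sketch of that training objective in PyTorch. Everything in it is a toy stand-in for illustration (a tiny made-up vocabulary, one fake example, and a crude model that averages embeddings instead of using attention), not GPT's actual architecture or pipeline:

```python
import torch
import torch.nn as nn

# Toy vocabulary; real models use tens of thousands of subword tokens.
vocab = ["<pad>", "make", "my", "brain", "better", "try", "nootropics"]
stoi = {w: i for i, w in enumerate(vocab)}

# One made-up training example: the input is the words so far,
# the output is the next word. That's the whole objective.
context = torch.tensor([[stoi["make"], stoi["my"], stoi["brain"], stoi["better"]]])
target = torch.tensor([stoi["nootropics"]])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h = self.embed(ids).mean(dim=1)  # crude context summary; GPT uses attention
        return self.head(h)              # scores for every word in the vocabulary

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):             # real training: millions of examples, not 100 steps
    logits = model(context)
    loss = loss_fn(logits, target)  # penalty for guessing the wrong next word
    opt.zero_grad()
    loss.backward()
    opt.step()
```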

From this simple directive of "what should the next word be," they've achieved some pretty unexpected breakthroughs on tasks that conventional wisdom would've held to be impossible for a model merely programmed to figure out the next word: common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All of this was possible only because the huge transformer neural network is very capable and, as it turns out, can produce emergent cognition, seeming to learn some logic and reasoning even though its only real goal is to figure out the next word.
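
For example, a common-sense Q&A benchmark needs no special machinery: you write the question as a prompt and let the model keep predicting the next word. Here's a quick sketch using the public GPT-2 weights via the Hugging Face transformers library (the prompt is just something I made up, and a small model like GPT-2 will often get this kind of question wrong):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The Q&A task, recast as "figure out the next word" after this prompt.
prompt = "Q: If you drop a glass on a stone floor, what happens?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: repeatedly pick the single most likely next token.
output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # avoids a missing-pad-token warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```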

Edit: Also, your original comment appears to be describing inference (running the already-trained model), not training.


indigoHatter t1_j2ysq1c wrote

Okay, again, I'm grossly oversimplifying the concept, but if it was trained to predict which word should come next in a response like that, then presumably it learned about nootropics at some point and absorbed a few forums and articles about them. So...

Bro: "Hey, make my brain better"

GPT: "K, check out these nootropics"

I made edits to my initial post in the hope that it makes better sense now. You're correct that my initial phrasing wasn't great and left room for others to misunderstand what I meant.


monsieurpooh t1_j2z3bt5 wrote

Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point, though: this algorithm is prone to emulating whatever is in its training data, including bro-medical-advice.


indigoHatter t1_j2zeaxf wrote

Yeah, I'm not trying very hard to be precise right now. Glad you think it's better though. ✌️ Have a great day, my dude!
