Submitted by Vegetable-Skill-9700 t3_121a8p4 in MachineLearning
blose1 t1_jdoj8kl wrote
Reply to comment by WonderFactory in [D] Do we really need 100B+ parameters in a large language model? by Vegetable-Skill-9700
GPT models struggle with out-of-distribution programming tasks, which means they can't create novel ideas. I've tested this myself many times, and it's not a prompt-engineering issue. I think LLMs could act as great teachers, but not as researchers: teachers teach what we already know, while researchers create the novel knowledge that teachers then use.