Submitted by Beautiful-Cancel6235 t3_11k1uat in singularity
hassan789_ t1_jb7hzjx wrote
Lack of quality information. There's a max of roughly 12 trillion high-quality tokens for LLMs to learn from. After that, the returns could diminish (maybe 10% new quality data is added per year). Right now, the largest models are trained on about 1T tokens.
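The arithmetic behind the comment above can be sketched as follows. All figures are the commenter's estimates, and the year-over-year doubling of training-set size is an added assumption for illustration, not a claim from the thread:

```python
# Rough back-of-envelope on the token-supply numbers from the comment.
# All constants are the commenter's estimates; DEMAND_DOUBLING is a
# hypothetical assumption for illustration only.

HIGH_QUALITY_TOKENS = 12e12   # claimed ceiling of high-quality tokens
CURRENT_TRAINING = 1e12       # tokens used by today's largest models
ANNUAL_GROWTH = 0.10          # ~10% new quality data per year (claimed)

headroom = HIGH_QUALITY_TOKENS / CURRENT_TRAINING
print(f"Headroom over current training sets: {headroom:.0f}x")

# If training-set size doubled every year (hypothetical), how long
# until demand outgrows the slowly growing pool of quality tokens?
demand, pool, years = CURRENT_TRAINING, HIGH_QUALITY_TOKENS, 0
while demand < pool:
    years += 1
    demand *= 2                   # hypothetical demand doubling
    pool *= 1 + ANNUAL_GROWTH     # ~10% new quality data per year
print(f"Demand exceeds the pool after ~{years} years")
```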
NothingVerySpecific t1_jb93eck wrote
Sounds intriguing, got a link for the T nomenclature?