Submitted by starstruckmon t3_1027geh in MachineLearning
Taenk t1_j2sgndx wrote
Reply to comment by Purplekeyboard in [R] Massive Language Models Can Be Accurately Pruned in One-Shot by starstruckmon
Compared to what? I have been playing with it for a little bit via Petals and it performs decently, although ChatGPT certainly sets a very high bar of success. Personally, I think it is a shame that OpenAI gets exclusive access to the absolutely massive dataset of interactions with actual humans; models like BLOOM could certainly profit from publicly accessible interactions.
nutpeabutter t1_j2snx76 wrote
From my personal interactions, it gave off the vibe of having been trained on websites, unlike the GPT-3 models (both base and chat), which felt much more natural. Maybe something to do with having to learn too many languages?