
blueSGL t1_jdd7maq wrote

After refusing to say how many parameters GPT-4 has, and refusing to give any details of the training dataset or methodology, all in the name of staying 'competitive', I'm taking the stance that they are going to do everything in their power to obfuscate the size of the model and how much it costs to run.

e.g. Sam Altman has said in the past that the model would be a lot smaller than people expect and that more data can be crammed into smaller models (the Chinchilla and especially the very recent LLaMA papers bear this out).
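To make the scaling-law point concrete, here is a rough sketch using the Chinchilla paper's ~20-tokens-per-parameter rule of thumb and the standard ~6·N·D training-FLOPs estimate; the model sizes and token counts are illustrative ballpark figures, not anything OpenAI has disclosed:

```python
# Rough sketch of the Chinchilla "compute-optimal" rule of thumb:
# train on roughly 20 tokens per parameter. Figures are illustrative only.

def chinchilla_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training tokens for a model with `params` parameters."""
    return params * tokens_per_param

def train_flops(params: float, tokens: float) -> float:
    """Standard ~6 * N * D estimate of total training FLOPs."""
    return 6.0 * params * tokens

# GPT-3-sized model (175B params, ~300B training tokens, per the GPT-3 paper)
gpt3_flops = train_flops(175e9, 300e9)

# A much smaller model trained Chinchilla-style on far more tokens per parameter
small_params = 13e9
small_flops = train_flops(small_params, chinchilla_optimal_tokens(small_params))

print(f"GPT-3-style training:  {gpt3_flops:.2e} FLOPs")
print(f"13B Chinchilla-style:  {small_flops:.2e} FLOPs")
# The smaller model needs an order of magnitude less training compute, and
# (per the LLaMA results) models in that size range trained on enough tokens
# can still match or beat much larger ones on many benchmarks.
```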

Would I put it past the new 'competitive', profit-driven OpenAI to rate-limit a GPT-4 that is actually similar in size to GPT-3, to give the impression the model is bigger and takes more compute to generate answers? No (since the difference in inference cost is pure profit).
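For a sense of why that gap would matter financially: inference cost scales roughly with parameter count (using the common ~2·N FLOPs-per-output-token estimate). The model sizes below are hypothetical, since OpenAI hasn't published GPT-4's size:

```python
# Back-of-the-envelope inference cost comparison, assuming the common
# ~2 * N FLOPs-per-output-token estimate. Model sizes are hypothetical.

def inference_flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2.0 * params

big = inference_flops_per_token(175e9)   # a GPT-3-sized model
small = inference_flops_per_token(13e9)  # a hypothetical much smaller model

print(f"Compute ratio per generated token: ~{big / small:.0f}x")
# If the product is priced as though it were the larger model, that ~13x
# difference in compute per token is margin -- which is the commenter's point.
```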

8

141_1337 t1_jdd8rfs wrote

This is insane, because by closed-sourcing this they are slowing the rate of advancement and discovery, since now everyone has to reinvent the wheel.

2

drawkbox t1_jddii1o wrote

ChatGPT is basically built on AI tech created at Google Brain: transformers. Those were used to build ClosedGPT, and for that reason it is NopeGPT. ChatGPT is really just datasets, which no one knows; they could swap them at any time, run some misinformation, then swap them again the next day. This is data blackboxing and gaslighting at the utmost level. Not only that, it is largely funded by authoritarian or originally authoritarian money...

Google Brain and other tech are already way more open than "Open"AI.

ChatGPT/OpenAI just front-ran the commercial side, but long term they aren't really innovating on this the way Google is. They look like a leader from the marketing/pump, but they are a follower.

5