
TemperatureAmazing67 t1_jbzcn6a wrote

>extensions of LLMs (like PALM-E) are a heck of a lot more than an abacus. I wonder what would happen if Google just said, "screw it", and scaled it from 500B to 50T parameters. I'm guessing there are reasons in the architecture that it would

The problem is that we have scaling laws for NNs, and we simply don't have the data for 50T parameters. We'd need to get that data from somewhere, and answering that question costs a lot.
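
A rough back-of-the-envelope sketch of why, assuming the Chinchilla-style rule of thumb of ~20 training tokens per parameter (the exact ratio is an approximation, not something from the comment above):

```python
# Back-of-the-envelope: compute-optimal data requirements under a
# Chinchilla-style scaling law (assumption: ~20 tokens per parameter).
TOKENS_PER_PARAM = 20

def optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal number of training tokens."""
    return TOKENS_PER_PARAM * n_params

for n_params in (500e9, 50e12):  # 500B vs. the hypothetical 50T model
    print(f"{n_params:.0e} params -> ~{optimal_tokens(n_params):.0e} tokens")

# 5e+11 params -> ~1e+13 tokens (10T), already around the scale of large web corpora
# 5e+13 params -> ~1e+15 tokens (1,000T), far beyond any text dataset available today
```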

3

TemperatureAmazing67 t1_jbzc8cc wrote

'require input to generate an output and do not have initiative' - just feed it random input, or the output of another network.

Also, the argument about next-token prediction is screwed up. For a lot of tasks, a perfectly predicted next token is everything you need.
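
A minimal sketch of that point, using greedy decoding with a Hugging Face GPT-2 model (the model, prompt, and token count are just illustrative choices, not from the comment): answering a question reduces to repeatedly predicting the next token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustration: "solving the task" is nothing but repeated next-token prediction.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What is the capital of France?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                           # generate 10 tokens, one at a time
        logits = model(input_ids).logits          # scores for every vocabulary token
        next_id = logits[0, -1].argmax()          # greedy: take the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```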

2