DamienLasseur t1_j379x65 wrote

However, the hardware needed to train the model and run inference is insanely expensive. If this were to work, we'd need someone with access to a lot of cloud compute, a supercomputer, or Google TPUs. The ChatGPT model alone requires ~350GB of GPU memory just to generate an output (i.e., to run inference). Now imagine a model capable of all that and more; it would need far more compute.
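For a rough sense of where a figure like ~350GB comes from, here's a back-of-envelope sketch. It assumes a GPT-3-scale model of 175B parameters stored in fp16, which is a common community estimate rather than anything OpenAI has confirmed, and it only counts the weights (KV cache and activations add more on top):

```python
# Back-of-envelope GPU memory estimate for serving a GPT-3-scale model.
# Assumptions (not official numbers): 175B parameters, fp16 weights (2 bytes each).
params = 175e9          # assumed parameter count
bytes_per_param = 2     # fp16 / bf16 storage

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~350 GB, before KV cache and activations
```

Since a single A100 tops out at 80GB, that's already several high-end accelerators just to hold the weights, before you think about throughput or training.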

10