
Beli_Mawrr t1_jad4r9n wrote

That's almost in the realm of something my computer can run, no?

30

curiousshortguy t1_jad9s4t wrote

It is. You can probably do 2 to 8 billion parameters on your average gaming PC, and 16 billion on a high-end one.

28

AnOnlineHandle t1_jaeshwf wrote

Is there a way to convert parameter count into VRAM requirements? Presuming that's the main bottleneck?

7

metal079 t1_jaeuymi wrote

Rule of thumb is VRAM needed = 2 GB per billion parameters, though I recall Pygmalion, which is 6B, says it needs 16 GB of RAM, so it depends.

12
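As a quick back-of-the-envelope sketch of that rule of thumb (fp16/bf16 weights take 2 bytes per parameter; the function name and figures below are only illustrative, and real usage adds activations, KV cache, and framework overhead on top):

    # Back-of-the-envelope VRAM estimate from parameter count (weights only).
    def estimate_vram_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
        """2 bytes/param for fp16/bf16, 4 for fp32, 1 for int8."""
        return n_params_billion * 1e9 * bytes_per_param / 1024**3

    print(f"{estimate_vram_gb(6):.1f} GB")     # ~11.2 GB for a 6B model in fp16
    print(f"{estimate_vram_gb(6, 1):.1f} GB")  # ~5.6 GB for the same model in int8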

curiousshortguy t1_jaf3aab wrote

Yeah, about 2-3 GB per billion parameters. You can easily offload layers of the network to disk and then load even larger models that don't fit in VRAM, BUT disk I/O will make inference painfully slow.

10
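A minimal sketch of what that offloading can look like, assuming the Hugging Face transformers and accelerate libraries are installed (the model name here is just an example, substitute whatever you run):

    # Sketch: let accelerate split a model across GPU, CPU RAM, and disk.
    # Layers that don't fit in VRAM get offloaded; inference still works, just slower.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-j-6B"  # example 6B model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",         # fill GPU first, then CPU RAM, then disk
        offload_folder="offload",  # spill remaining weights to this directory
        torch_dtype="auto",
    )

    inputs = tokenizer("The rule of thumb for VRAM is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))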

dancingnightly t1_jadj7fa wrote

Edit: Seems like for this one, yes. They do consider human instructions (similar to the goal of RLHF, which requires more RAM) by adding them directly to the text dataset, as mentioned in 3.3 Language-Only Instruction Tuning.

For other models, like the upcoming OpenAssistant, one thing to note is that although the generative model itself may be runnable locally, the reward model (the bit that "adds finishing touches" and ensures instructions are followed) can be much bigger. Even if the underlying GPT-J model is 11 GB in RAM at 6B params, the RLHF could seriously increase that.

This model is in the realm of the smaller T5, BART, and GPT-2 models released three years ago, which were runnable then on decent gaming GPUs.

7

currentscurrents t1_jaetyg1 wrote

Can't the reward model be discarded at inference time? I thought it was only used for fine-tuning.

8

currentscurrents t1_jaetvbb wrote

Definitely in the realm of running on your computer. Almost in the realm of running on high-end smartphones with TPUs.

2