Comments

Dankmemexplorer t1_iymieav wrote

For a sense of scale, GPT-NeoX, a 20-billion-parameter model, requires ~45 GB of VRAM to run. GPT-3 davinci is 175 billion parameters.
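The arithmetic here is just parameter count times bytes per parameter. A minimal back-of-envelope sketch (assuming fp16 weights; activations, KV cache, and framework overhead are ignored, which is why observed usage runs higher):

```python
# Rough VRAM needed just to hold the weights, assuming fp16
# (2 bytes per parameter). Activations, KV cache, and framework
# overhead are not counted, so real usage is somewhat higher.

def weight_vram_gb(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 1024**3

for name, n in [("GPT-NeoX-20B", 20e9), ("GPT-3 davinci", 175e9)]:
    print(f"{name}: ~{weight_vram_gb(n):.0f} GB of weights in fp16")

# GPT-NeoX-20B: ~37 GB of weights in fp16 (~45 GB observed with overhead)
# GPT-3 davinci: ~326 GB of weights in fp16
```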

Unless these models can be pared down somehow (unlikely; the whole point of training these huge models is that their performance scales with size), we will have to wait a decade or two for consumer electronics to catch up.

2

Deep-Station-1746 t1_iymmi79 wrote

> we will have to wait a decade or two

The best I can do is 4 years. Take it or leave it.

2

Dankmemexplorer t1_iymsbgo wrote

My current GPU is 4 years old 😖

The state of the art has gotten a lot better since then, but not that much better.

1

aero_oliver2 OP t1_iymivor wrote

Interesting. So you’re saying that rather than adjusting the models to work on current devices, the better option is actually designing the devices to work with these models?

1

Dankmemexplorer t1_iymjsty wrote

Running the full GPT-3 on a laptop would be like running Crysis 3 on a Commodore 64. You can’t pare it down enough to run it without ruining it.

1

StChris3000 t1_iyn5rdm wrote

There are advances such as quantization that have let edge devices run some pretty spicy models, so I wouldn’t be surprised if we got this within gaming computers’ reach pretty soon. Also, DeepMind’s Chinchilla research showed that GPT-3 was not trained compute-optimally and has more parameters than its training data warranted: their rule of thumb is roughly 20 training tokens per parameter, while GPT-3’s 175B parameters saw only ~300B tokens. So a newly designed model with far fewer parameters, trained on more data, should perform as well as GPT-3.
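To make the quantization point concrete, here is a toy sketch of symmetric int8 post-training quantization, just to illustrate the memory/precision trade-off; production schemes (e.g. the LLM.int8()/bitsandbytes line of work) use per-channel scales and outlier handling, but the memory arithmetic is the same:

```python
import numpy as np

# Toy symmetric int8 post-training quantization: store weights as int8
# plus a single fp32 scale, for a 4x memory cut vs fp32 (2x vs fp16)
# at some cost in precision.

def quantize_int8(w: np.ndarray):
    scale = float(np.abs(w).max()) / 127.0      # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # stand-in for one weight matrix
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"{w.nbytes / 2**20:.0f} MiB fp32 -> {q.nbytes / 2**20:.0f} MiB int8, "
      f"mean abs error {err:.4f}")
```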

(I am only a machine learning enthusiast and not an expert so take everything I say with a grain of salt)

1