
currentscurrents t1_j490rvn wrote

It's meaningful right now because there's a threshold of scale where LLMs become genuinely impressive, but getting there requires expensive, specialized GPUs.

I'm hoping in a few years consumer GPUs will have 80GB of VRAM or whatever and we'll be able to run them locally. While datacenters will still have more compute, it won't matter as much, since past a certain size larger models would need more training data than actually exists.
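To put rough numbers on both halves of that argument, here's a back-of-envelope sketch (assumptions, not measurements: inference VRAM taken as weights only, and the ~20 training tokens per parameter ratio is the Chinchilla-style estimate):

```python
def inference_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate VRAM just to hold the weights, in GB (ignores KV cache/activations)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def chinchilla_tokens_trillion(params_billion: float, tokens_per_param: float = 20) -> float:
    """Approximate compute-optimal training tokens, in trillions (assumed ~20 tokens/param)."""
    return params_billion * 1e9 * tokens_per_param / 1e12

for p in (7, 70, 175):
    print(f"{p}B params: "
          f"fp16 ~{inference_vram_gb(p, 2):.0f} GB, "
          f"int4 ~{inference_vram_gb(p, 0.5):.0f} GB, "
          f"Chinchilla-optimal ~{chinchilla_tokens_trillion(p):.1f}T tokens")
```

By that arithmetic a 70B model is ~140 GB in fp16 but ~35 GB at 4-bit, which is roughly where an 80GB consumer card would start to matter, while the data requirement for much larger models climbs into the multi-trillion-token range.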
