xrailgun t1_j9aq903 wrote

Did you test any larger and it wouldn't run?

Also, any comments so far among those? Good? Bad? Easy? Etc?

wywywywy t1_j9ar2tk wrote

I did test larger models, but they didn't run. I can't remember which ones; probably GPT-J. I recently got a 3090, so I can load larger models now.

As for quality, my use case is simple (writing prompts to help with writing stories & articles), nothing sophisticated, and they worked well, until ChatGPT came along. I use ChatGPT instead now.

xrailgun t1_j9avboh wrote

Thanks!

I wish model publishers would indicate rough (V)RAM requirements...

wywywywy t1_j9b2kqu wrote

So, not scientific at all, but I've noticed that checkpoint file size * 0.6 is pretty close to the actual VRAM requirement for an LLM.

But you're right, it'd be nice to have a table handy.
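In the meantime, here's a minimal sketch of that rule of thumb in Python. The 0.6 factor is just my anecdotal estimate above (not exact), it assumes the checkpoint is a single file, and the path in the example is hypothetical:

```python
import os

def estimate_vram_gb(checkpoint_path: str, factor: float = 0.6) -> float:
    """Rough VRAM estimate (in GiB) from checkpoint file size.

    The 0.6 factor is anecdotal, not a guarantee; actual usage
    also depends on precision, context length, and framework overhead.
    """
    size_gib = os.path.getsize(checkpoint_path) / 1024**3
    return size_gib * factor

# Example with a hypothetical checkpoint path:
# print(f"~{estimate_vram_gb('gpt-j-6b/pytorch_model.bin'):.1f} GiB needed")
```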
