
yaru22 t1_jdn17j5 wrote

Hello,

GPT-4 has a context length of 32K tokens, while some other models have 2-4K tokens. What decides the limit on these context lengths? Is it simply that the bigger the model, the larger the context length? Or is it possible to have a large context length even on a smaller model like LLaMA 7/13/30B?

Thank you!


LowPressureUsername t1_jdq0nsn wrote

It’s mostly about the computational power available, AFAIK. More context means more tokens to attend over, which means more compute and memory required.
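To put rough numbers on that, here's a minimal back-of-the-envelope sketch (not any particular model's actual code) assuming hypothetical LLaMA-7B-like dimensions of 32 heads and 32 layers. Vanilla self-attention forms an L x L score matrix per head, so the memory for those score matrices alone grows roughly with the square of the context length L:

```python
# Back-of-the-envelope sketch (hypothetical numbers, not any model's actual code):
# vanilla self-attention builds an L x L score matrix per head, so the memory
# for those score matrices alone grows with the square of the context length L.

def attention_score_memory_gb(context_len: int,
                              n_heads: int = 32,      # LLaMA-7B-like head count (assumed)
                              n_layers: int = 32,     # LLaMA-7B-like layer count (assumed)
                              bytes_per_val: int = 2  # fp16
                              ) -> float:
    """GB needed to materialize every layer's L x L attention score matrices."""
    scores_per_layer = n_heads * context_len * context_len
    total_values = scores_per_layer * n_layers
    return total_values * bytes_per_val / 1e9

for L in (2_048, 4_096, 32_768):
    print(f"context length {L:>6}: ~{attention_score_memory_gb(L):10.1f} GB of attention scores")
```

In practice, implementations like FlashAttention avoid materializing the full score matrices, but the compute still scales roughly quadratically with context length, which is why longer contexts get expensive fast.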


yaru22 t1_jdron1b wrote

So it's not inherently limited by the number of parameters the model has? Or is that what you meant by more processing power? Do you, or does anyone, have pointers to papers that talk about this?
