Comments

abnormal_human t1_jacjmrj wrote

Am I reading right that this is a 1.6B parameter model?

70

[deleted] t1_jacx9ai wrote

That’s about 100x smaller than what I’d expected.

39

Beli_Mawrr t1_jad4r9n wrote

That's almost in the realm where my computer can run it, no?

30

curiousshortguy t1_jad9s4t wrote

It is. You can probably do 2 to 8 billion parameters on your average gaming PC, and 16 billion on a high-end one.

28

AnOnlineHandle t1_jaeshwf wrote

Is there a way to convert parameter count into VRAM requirements? Presuming that's the main bottleneck?

7

metal079 t1_jaeuymi wrote

Rule of thumb is VRAM needed = 2 GB per billion parameters, though I recall Pygmalion, which is 6B, says it needs 16 GB of RAM, so it depends.
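A minimal sketch of that rule of thumb (assuming fp16 weights at 2 bytes per parameter; the overhead factor for activations and KV cache is my guess, not a measurement):

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights only, times a fudge factor for
    activations and the KV cache. bytes_per_param: 2 for fp16, 4 for fp32,
    1 for int8, 0.5 for 4-bit quantization."""
    return params_billion * bytes_per_param * overhead

print(estimate_vram_gb(6))    # ~14.4 GB for a 6B model in fp16, near the Pygmalion figure
print(estimate_vram_gb(1.6))  # ~3.8 GB for a 1.6B model like this one
```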

12

curiousshortguy t1_jaf3aab wrote

Yeah, about 2-3 GB per billion parameters. You can easily shove layers of the network onto disk and then load even larger models that don't fit in VRAM, but disk I/O will make inference painfully slow.
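If you want to try that kind of offloading, here's a minimal sketch using Hugging Face Transformers with Accelerate's device_map; the model name and offload folder are placeholders, and exact arguments may differ by version:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6b"  # placeholder: any causal LM checkpoint

# device_map="auto" spreads layers across GPU, then CPU RAM, then disk;
# layers placed in offload_folder are streamed back in at inference time,
# which is exactly what makes generation painfully slow.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    offload_folder="offload",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```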

10

dancingnightly t1_jadj7fa wrote

Edit: Seems like for this one, yes. They do consider human instructions (similar in goal to RLHF, which requires more RAM) by adding them directly to the text dataset, as mentioned in Section 3.3, Language-Only Instruction Tuning.

For other models, like the upcoming OpenAssistant, one thing to note is that although the generative model itself may be runnable locally, the reward model (the bit that "adds finishing touches" and ensures instructions are followed) can be much bigger. Even if the underlying GPT-J model is 11 GB in RAM and 6B params, the RLHF components could seriously increase that.

This model is in the realm of the smaller T5, BART, and GPT-2 models released three years ago, which were runnable then on decent gaming GPUs.

7

currentscurrents t1_jaetyg1 wrote

Can't the reward model be discarded at inference time? I thought it was only used for fine-tuning.

8

currentscurrents t1_jaetvbb wrote

Definitely in the realm of running on your computer. Almost in the realm of running on high-end smartphones with TPUs.

2

RetroPenguin_ t1_jad51qy wrote

For the >10B closed source models, I’d be really curious how many of those weights are zero with fp16 precision.

23

7734128 t1_jaemc4b wrote

Doesn't really change anything, does it? A zero still has an effect, so it has to be there. I assume you mean that it could use less memory, right? But is that technically feasible in a practical manner? I can't imagine a practical way to have a tensor of split-precision weights without ruinous reprocessing when trying to use them.

6

karius85 t1_jaeoyq7 wrote

Sparse matrices, but you would need quite a lot of zeros.
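To make that concrete, here's a quick sketch of why the break-even point is high: CSR-style sparse formats store indices alongside values, so moderate sparsity doesn't save anything (the 4096x4096 matrix and ~50% sparsity are just illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
dense = rng.standard_normal((4096, 4096)).astype(np.float32)
dense[rng.random(dense.shape) < 0.5] = 0.0  # zero out roughly half the weights

sparse = csr_matrix(dense)
dense_mb = dense.nbytes / 1e6
# CSR stores the nonzero values plus column indices plus row pointers
sparse_mb = (sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes) / 1e6

print(f"dense:  {dense_mb:.1f} MB")   # ~67 MB
print(f"sparse: {sparse_mb:.1f} MB")  # ~67 MB as well: ~50% zeros is only break-even
# With 4-byte values and 4-byte indices you need well over half the
# entries to be zero before the sparse representation actually shrinks.
```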

2

pawsibility t1_jaep5s5 wrote

> The MLLM component has 24 layers with 2,048 hidden dimensions, 8,192 FFN intermediate size, and 32 attention heads, resulting in about 1.3B parameters. We use Magneto’s initialization for optimization stability. For faster convergence, the image representation is obtained from a pretrained CLIP ViT-L/14 model with 1,024 feature dimensions. The images are preprocessed into 224×224 resolution during training. We freeze the parameters of the CLIP model except for the last layer during training. The total number of parameters of KOSMOS-1 is about 1.6B.

If they use CLIP to generate image representations/embeddings as input to their model, isn't that kind of cheating when reporting the number of parameters? Or is CLIP sufficiently small, and that's how they jumped from 1.3B to 1.6B?
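For what it's worth, the quoted 1.3B roughly checks out for a plain decoder-only transformer with those dimensions; here's a back-of-the-envelope sketch (the vocabulary size is my assumption, not stated in the excerpt):

```python
# Rough parameter count from the dimensions quoted above.
layers, d_model, d_ffn, vocab = 24, 2048, 8192, 64_000  # vocab is an assumed placeholder

attention = 4 * d_model * d_model   # Q, K, V, and output projections
ffn = 2 * d_model * d_ffn           # up- and down-projections
per_layer = attention + ffn         # ignoring biases and layer norms
embeddings = vocab * d_model        # token embedding table

total = layers * per_layer + embeddings
print(f"~{total / 1e9:.2f}B parameters")  # ~1.34B, close to the quoted 1.3B
```

The remaining ~0.3B is about the size of a CLIP ViT-L/14 vision encoder, which would explain the jump to 1.6B, but that's my reading rather than something the excerpt states.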

6

AnOnlineHandle t1_jaesse4 wrote

The CLIP model in the Stable Diffusion 1.5 package is 480 MB according to my directory where it was unpacked by diffusers, though I don't know how that translates into parameter count.
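A rough conversion, assuming the checkpoint stores plain fp32 weights at 4 bytes each (if it's fp16, halve the byte count and double the estimate):

```python
file_size_mb = 480      # size of the unpacked CLIP weights on disk
bytes_per_param = 4     # fp32; use 2 for an fp16 checkpoint

params_m = file_size_mb * 1e6 / bytes_per_param / 1e6
print(f"~{params_m:.0f}M parameters")  # ~120M if fp32, ~240M if fp16
```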

2

MysteryInc152 OP t1_jaccf9c wrote

>A big convergence of language, multimodal perception, action, and world modeling is a key step toward artificial general intelligence. In this work, we introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). Specifically, we train Kosmos-1 from scratch on web-scale multimodal corpora, including arbitrarily interleaved text and images, image-caption pairs, and text data. We evaluate various settings, including zero-shot, few-shot, and multimodal chain-of-thought prompting, on a wide range of tasks without any gradient updates or finetuning. Experimental results show that Kosmos-1 achieves impressive performance on (i) language understanding, generation, and even OCR-free NLP (directly fed with document images), (ii) perception-language tasks, including multimodal dialogue, image captioning, visual question answering, and (iii) vision tasks, such as image recognition with descriptions (specifying classification via text instructions). We also show that MLLMs can benefit from cross-modal transfer, i.e., transfer knowledge from language to multimodal, and from multimodal to language. In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs.

40

farmingvillein t1_jacq4fn wrote

The language-only performance was pretty meh, comparing the versions with and without images. We'll have to see whether scaling up helps here (other research suggests yes? But we still need to see proof).

22

MysteryInc152 OP t1_jacswnq wrote

There's pretty much no way it won't scale up.

11

farmingvillein t1_jadqg1l wrote

You're missing the point here, or I wasn't clear: the question isn't whether performance will improve with more params (and potentially more data); no doubt there.

The question is whether a model trained at scale on text & images will outperform a model trained at scale solely on text, in the text-only domain (or similarly, the image-only).

To date, all* of the public research on multimodal models (and Kosmos is no different) has shown, at best, multimodal models performing roughly equal to unimodal variants in unimodal domains. And often they are a shade worse (like Kosmos).

(*=unless you count code+natural language.)

The holy grail, of course, is that the two help one another, so that your multimodal variant outperforms the unimodal variants on unimodal tasks; e.g., GPT-* gets better at talking to you because it has ingested all of the YouTube videos in the world.

If you can demonstrate that (and it certainly makes intuitive human sense that this could/should be true), then of course there is a giant truckload of image (including video!) and audio data you can slam into your text models to make text-based scenarios better (and similarly for images, etc.). (And it also more plausibly suggests that massive amounts of synthetic world exploration data could be accretive, too...)

There is a bunch of research (https://arxiv.org/abs/2301.03728 being one of the most exciting) suggesting that this can occur, with enough data/params, but no one has publicly demonstrated it. (And it'd surprise no one, probably, if this was part of GPT-4's or Gato-2's mix.)

40

deliciously_methodic t1_jad1h8m wrote

What does “scale up” mean in this context? I use “scale up” vs. “scale out” in an ML hardware context to mean “making a CPU/GPU more powerful” vs. “adding more GPUs”, but I'm not clear whether the analogy applies to AI models scaling up and out, or whether you simply mean “the model will get bigger.”

−3

farmingvillein t1_jadt897 wrote

FWIW, I was trying to make a more subtle point than OP's response--see my other reply.

4

zykezero t1_jacvr1g wrote

Finally, Kosmos has arrived. We need her help to fight the Gnosis.

16

1azytux t1_jadmvbe wrote

Can we download the model weights? Is it open-sourced? Or can we maybe perform zero-shot tasks ourselves?

16

[deleted] t1_jacygxs wrote

Any idea when we will be able to use the model?

8

1azytux t1_jadp0aa wrote

Do you know which foundation models we can use, though, or which are open-sourced? It seems like every other model is either not available or its weights aren't released yet. That's the case with CoCa, Florence, Flamingo, BEiT3, FILIP, and ALIGN. I was able to find weights for ALBEF.

7

[deleted] t1_jadxu2n wrote

I mean... Google, Microsoft, and Meta have readily available models. But I understand where you are coming from, which is why I asked my question.

5

ReasonablyBadass t1_jae7zhu wrote

Can't read the paper right now; can someone summarize: is it a new model, or "just" the standard transformer but used on multimodal data? If it is new, what are the structural changes?

6