Submitted by External_Oven_6379 t3_yd0549 in MachineLearning
External_Oven_6379 OP t1_itpny3q wrote
Reply to comment by LastVariation in Combining image and text embedding [P] by External_Oven_6379
Thank you for your input!
I checked the scale of the VGG19 feature embedding. All values are in the range [0, 9.7]. So in that case, should the values of the one-hot vector be either 0 or 9.7?
The labels are textures like floral or leopard. So you are right, they are not necessarily orthogonal, but it's difficult to estimate the correlation among these classes, so one-hot vectors were the most accessible option for me.
I read about CLIP when starting this. My understanding was that CLIP's input consists of an image and a text input like an image description, e.g. "flowers in the middle of a blue floor" (which is not categorical). Could categorical text be used?
LastVariation t1_itps1fq wrote
Re: the scale of the one-hot vectors, it's a little hard to say; it probably depends on your data and task. Essentially you could scale the one-hot vectors up by sqrt(K), where K is the average similarity of two images with the same label. That way, sharing a label contributes about the same cosine similarity as two images that are averagely similar for that label. In practice you'd probably want to fit K as a hyperparameter on some training data.
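Roughly what I mean, as a minimal sketch (the K value, dimensions, and random vectors are just placeholders; you'd fit K on your own data and plug in your real VGG19 features):

```python
import numpy as np

def combine_features(img_emb, label_idx, num_classes, K=0.5):
    """Concatenate a (normalized) image embedding with a scaled one-hot label vector.

    K is a hyperparameter: roughly the average cosine similarity you expect
    between two images that share a label. Scaling the one-hot entry by
    sqrt(K) means a shared label adds about K worth of dot-product similarity.
    """
    img_emb = img_emb / np.linalg.norm(img_emb)   # unit-norm image features
    onehot = np.zeros(num_classes)
    onehot[label_idx] = np.sqrt(K)                # scaled one-hot entry
    return np.concatenate([img_emb, onehot])

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# toy usage: two images that both carry the "floral" label (index 0)
emb_a = np.random.rand(512)   # stand-ins for VGG19 feature vectors
emb_b = np.random.rand(512)
fa = combine_features(emb_a, label_idx=0, num_classes=10, K=0.5)
fb = combine_features(emb_b, label_idx=0, num_classes=10, K=0.5)
print(cosine_sim(fa, fb))
```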
Re: CLIP, you can input categorical text labels as raw text and the model is decent at interpreting them. I believe it's common practice to phrase the text a bit more like natural language in that case, e.g. "a photo of a <object>" rather than just "<object>".
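Something like this, as a rough sketch using the Hugging Face CLIP wrapper (the checkpoint name, label list, prompt template, and image path are just placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# categorical texture labels wrapped in a natural-language template
labels = ["floral", "leopard", "striped", "polka dot"]
prompts = [f"a photo of a {label} texture" for label in labels]

image = Image.open("example.jpg")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# similarity-based logits between the image and each label prompt
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```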