Submitted by Pro_RazE t3_10wk2zn in singularity
trovaleve t1_j7owfxe wrote
Reply to comment by Pro_RazE in AI Progress of February Week 1 (1-7 Feb) by Pro_RazE
I've never actually put any thought into how image similarity tests work. I wonder if that's how Google's reverse image search works.
Pro_RazE OP t1_j7ox1lr wrote
Let's say you generate an image of a cat. CLIP can convert what's in the image into an embedding (basically a description in vector form) and use it to find similar images in the LAION dataset that Stable Diffusion was trained on. So if it was, say, an orange cat, it can surface the training images of orange cats that the model learned from. Without those original pictures, Stable Diffusion couldn't generate an orange cat at all (poor example, I know lol). The matching isn't always accurate, and generated images are almost always different from the originals. But one recent paper showed that, very rarely, a generation can come out as a near-copy of a training image.
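Roughly, that kind of lookup works by embedding the generated image and the candidate training images into the same CLIP vector space and comparing them with cosine similarity. Here's a minimal sketch using Hugging Face's CLIP implementation; the model name is just the standard public checkpoint, the file paths are placeholders, and this is not the actual pipeline any particular LAION search tool uses:

    # Minimal sketch: compare a generated image against candidate training images
    # using CLIP embeddings and cosine similarity. File paths are placeholders,
    # not the real LAION search setup.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed_image(path: str) -> torch.Tensor:
        """Return a unit-length CLIP embedding for one image."""
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            features = model.get_image_features(**inputs)
        return features / features.norm(dim=-1, keepdim=True)

    generated = embed_image("generated_orange_cat.png")      # the AI-generated image
    candidates = ["laion_cat_001.jpg", "laion_cat_002.jpg"]   # hypothetical training images

    for path in candidates:
        similarity = (generated @ embed_image(path).T).item() # cosine similarity, higher = more alike
        print(f"{path}: {similarity:.3f}")

In a real search over LAION you wouldn't loop like this; you'd precompute embeddings for the whole dataset and query them with an approximate nearest-neighbor index.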
I hope this helps.
trovaleve t1_j7oz3rv wrote
Interesting! What was the paper about?