thelastpizzaslice t1_isp3rj7 wrote

Got a Colab where I can use this? I'd love to do some image matting to pre-prep masks for stable diffusion.

fraktall t1_isph71a wrote

Could you elaborate? Why would you need this? Just curious.

thelastpizzaslice t1_ispl47u wrote

For img2img, if I could generate one of those masks that matches up to the various objects in the image, that would help a lot. Otherwise, I need to draw them or make them myself every time. It would be great as a feature in automatic1111's UI to auto-generate maskable regions to use in stable diffusion.

Also, I'd be interested in removing backgrounds and then re-generating them inside different contexts to feed into dreambooth, i.e. removing an object that is part of a subject from its context and putting it in a different one. For example, if I wanted to make a prosthetic arm that sits on a table, or if I wanted to make fried rice but remove the plate from the background and give it additional possible backgrounds instead. This will probably break dreambooth instead of doing what I want, but if it works, it's going to be some awesome witchcraft that lets me turn one object into a very different one.

I could just as well run it on my computer locally, but I do work on that machine and don't like using 100% of my GPU processing in the background.
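(For the background-swap idea above: once any matting model has produced a soft alpha matte, placing the cut-out subject onto a new background is just a per-pixel blend, `out = alpha * fg + (1 - alpha) * bg`. A minimal numpy sketch, assuming the matte is a float array in [0, 1]; the `composite` helper name is mine, not from the repo:)

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend a foreground onto a new background using a soft alpha matte.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1]
    alpha:  float array of shape (H, W), values in [0, 1]
            (1.0 = fully foreground, 0.0 = fully background)
    """
    a = alpha[..., None]              # broadcast matte to (H, W, 1)
    return a * fg + (1.0 - a) * bg

# Tiny example: white foreground, black background, hard half/half matte
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
alpha = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
out = composite(fg, bg, alpha)
# top row comes from fg (all 1.0), bottom row from bg (all 0.0)
```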

Effective_Tax_2096 OP t1_israu1k wrote

The repo provides a matting module consisting of SOTA models, with open-source code for training and evaluation. It also ships well-trained, out-of-the-box human matting models, so you can use those in applications without any training. If you want to do image matting on other objects or stuff, you need to collect images, label them, and train a model. I will also pass your request on to the developers.

code: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting
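(On the original ask, masks for stable diffusion: these models output a soft alpha matte, while inpainting UIs typically want a hard binary mask, so a simple threshold bridges the two. A small sketch assuming a float matte in [0, 1]; `matte_to_mask` and the 0.5 threshold are illustrative choices, not part of the repo:)

```python
import numpy as np

def matte_to_mask(alpha, threshold=0.5):
    """Binarize a soft alpha matte into a 0/255 uint8 inpainting mask.

    alpha: float array of shape (H, W), values in [0, 1],
           as produced by a matting model.
    """
    return np.where(alpha >= threshold, 255, 0).astype(np.uint8)

# Example: pixels with matte >= 0.5 become foreground (255)
alpha = np.array([[0.9, 0.2],
                  [0.6, 0.1]])
mask = matte_to_mask(alpha)
# → [[255, 0],
#    [255, 0]]
```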
