Submitted by mikonvergence t3_11qnv4c in MachineLearning
Hi! Here's an open-source implementation I released today of masked ControlNet synthesis: you specify the region to be synthesised with a mask, and the content of that region is controlled via textual and visual guidance, as shown in the README.
https://github.com/mikonvergence/ControlNetInpaint
Here's an example with the prompt "a red panda sitting on a bench" (a minimal usage sketch follows below):
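For reference, here's a rough sketch of what a masked ControlNet inpainting call might look like. The pipeline class name, import path, model IDs, and argument names (`control_image`, `mask_image`, etc.) are assumptions for illustration only; check the repository's README for the actual entry point and API.

```python
# Hedged sketch -- class name, import path, model IDs, and argument names
# are assumptions; see the ControlNetInpaint README for the real API.
import torch
from diffusers import ControlNetModel
from diffusers.utils import load_image

# Hypothetical import of the repo's inpainting pipeline.
from src.pipeline_stable_diffusion_controlnet_inpaint import (
    StableDiffusionControlNetInpaintPipeline,
)

# ControlNet trained on Canny edges provides the visual guidance signal.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("input.png")          # original image
mask = load_image("mask.png")            # white pixels = region to synthesise
control = load_image("canny_edges.png")  # visual guidance (e.g. a Canny map)

result = pipe(
    prompt="a red panda sitting on a bench",  # textual guidance
    image=image,
    mask_image=mask,
    control_image=control,
    num_inference_steps=30,
).images[0]
result.save("output.png")
```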
PuzzledWhereas991 t1_jc4ema6 wrote
This is exactly what I have been looking for these days