Comments


TheDividendReport t1_irbihdt wrote

This is so cool. God, I wish I paid more attention in school. I’ve been trying to get these amazing programs running on my Gaming computer and am making progress but I’m so far away. Nearly got a running Stable Diffusion but ran into a memory issue (12gb GPU, I need to figure out how to dedicate a graphics card.)

The advancement in AI this year has led me to learning basic 3D printing and coding. I never had the desire to learn more about the more technical side of computing until now. If I had as much time to waste now as I did when I was a teen…

4

Smoke-away t1_ircr37c wrote

> Nearly got a running Stable Diffusion

Just use this version by /u/nmkd posted on /r/StableDiffusion. Easy to use.


6

TheDividendReport t1_ircsqc3 wrote

I appreciate the call out! Thank you!

2

Smoke-away t1_ird19cm wrote

*Update: I just tried v1.5 of their GUI and found that it doesn't run as well as v1.4 for me. I suggest trying both and see which version you like.

2

nmkd t1_irdjqtc wrote

> I just tried v1.5 of their GUI

Elaborate?

1

Smoke-away t1_irdldys wrote

I'm running on a 980 Ti with face restoration at 0.5. v1.5 seems slower to generate after I change the number-of-steps and creativeness sliders, and it sometimes freezes if I lower the creativeness too much. I think it also needed to reload Stable Diffusion mid-session. I can't really put my finger on what's causing the issues, but 1.4 just feels like a better experience for me.

Some other quality of life things:

  • The long prompt warning popup is a bit annoying. Would be useful to toggle it off.
  • There should be an 'X' to exit the preview window like in 1.4. Clicking 'X' is easier than hitting 'Esc'.

1

[deleted] t1_irdugnj wrote

[deleted]

2

JJP77 t1_ire49w3 wrote

We will be progressing faster and faster, especially with AGI.

1

JJP77 t1_ire3y9h wrote

Where'd they get 3D training data from?

1

Smearle t1_iuhhgyz wrote

They don't use any. Instead, they capture screenshots of the 3D object from various perspectives, then feed them into CLIP to determine how well the object matches the text prompt.

2
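The view-scoring idea in the last comment can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: `render_view`, `embed_image`, and `embed_text` are hypothetical stand-ins (here filled with random data) for a real renderer and CLIP's image/text encoders, which would normally come from a library like OpenAI's `clip` package.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_view(obj, azimuth_deg):
    # Placeholder: a real pipeline would rasterize the 3D object
    # from this camera angle. Here we just return a dummy image.
    return rng.random((224, 224, 3))

def embed_image(img):
    # Placeholder for CLIP's image encoder; returns a unit vector.
    v = rng.random(512)
    return v / np.linalg.norm(v)

def embed_text(prompt):
    # Placeholder for CLIP's text encoder; returns a unit vector.
    v = rng.random(512)
    return v / np.linalg.norm(v)

def clip_score(obj, prompt, n_views=8):
    """Average cosine similarity between the prompt embedding and
    embeddings of renders taken from n_views azimuth angles."""
    text_vec = embed_text(prompt)
    sims = []
    for i in range(n_views):
        img = render_view(obj, azimuth_deg=360.0 * i / n_views)
        sims.append(float(embed_image(img) @ text_vec))
    return sum(sims) / len(sims)

score = clip_score(obj=None, prompt="a photo of a chair")
print(score)
```

In a real text-to-3D loop, this averaged similarity would be the objective that gradient updates to the 3D representation try to maximize.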