cloneofsimo OP t1_izdlve0 wrote

Glad it worked for you with such small memory constraints!

2

LetterRip t1_izdm55i wrote

> Glad it worked for you with such small memory constraints!

Currently training with image size 768 and accumulation steps=2.

If steps is set to 2000, will it actually run to 4000? It didn't stop at 2000 as expected and is currently over 3500; I figured I'd wait until it passes 4000 before killing it, in case the accumulation steps act as a multiplier. (It went to 3718 and quit, right after I wrote the above.)
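
For reference, here is a minimal PyTorch-style sketch (not the actual lora training script; `max_train_steps`, `accumulation_steps`, and the loop shape are illustrative assumptions) of why a max-steps setting can behave differently under gradient accumulation: it depends on whether the counter ticks once per optimizer update or once per batch.

```python
def train(model, dataloader, optimizer, max_train_steps=2000, accumulation_steps=2,
          count_optimizer_updates=True):
    step = 0
    optimizer.zero_grad()
    for i, batch in enumerate(dataloader):
        loss = model(batch) / accumulation_steps   # average gradients over micro-batches
        loss.backward()
        if (i + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
            if count_optimizer_updates:
                step += 1   # convention A: 2000 updates means ~4000 batches are consumed
        if not count_optimizer_updates:
            step += 1       # convention B: stops after 2000 batches (only 1000 updates)
        if step >= max_train_steps:
            break
```

Which convention a given script uses determines whether the run ends at 2000 batches or 2000 weight updates.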

2

Teotz t1_izjzdve wrote

Don't leave us hanging!!! :)

How did the training go with a person?

1

LetterRip t1_izksf4k wrote

It is working, but I need to use prior-preservation loss; otherwise the concept bleeds into all of the other words in the prompt phrase. So I'm generating photos for the preservation loss now.
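
Roughly, prior preservation (DreamBooth-style) adds a second loss term computed on pre-generated "class" images so the generic class doesn't drift while the new subject is learned. A rough sketch of the assumed form (not the exact code from this repo; the batch layout and `prior_loss_weight` name are illustrative):

```python
import torch.nn.functional as F

def diffusion_loss(unet, noisy_latents, timesteps, text_embeddings, noise):
    # standard denoising objective: predict the noise that was added
    pred = unet(noisy_latents, timesteps, text_embeddings).sample
    return F.mse_loss(pred.float(), noise.float())

def prior_preservation_loss(unet, instance_batch, class_batch, prior_loss_weight=1.0):
    # instance_batch: latents/timesteps/noise for photos of the specific subject
    # class_batch: the same, but for generated generic class photos
    instance = diffusion_loss(unet, *instance_batch)
    prior = diffusion_loss(unet, *class_batch)
    return instance + prior_loss_weight * prior
```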

1

LetterRip t1_izm8rkq wrote

It did work. But now I can no longer launch LoRA training even at 768 or 512 (CUDA out of memory), only at 256; no idea what changed.
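
One thing worth checking (just a guess, not something established in this thread) is whether another process or a stale notebook kernel is still holding VRAM before the run starts:

```python
import torch

# Report free/total VRAM on the current device before launching training;
# leftover processes holding memory are a common cause of sudden OOMs.
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")
torch.cuda.empty_cache()  # releases cached blocks held by this process only
```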

1

JanssonsFrestelse t1_j0l89ve wrote

Same here with 8 GB of VRAM, although it looks like I can't use mixed_precision=fp16 with my RTX 2070, so that might be why.
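
For context, a minimal sketch of how fp16 mixed precision is usually enabled in a PyTorch loop (an assumption about the general mechanism, not this script's exact flag handling); running the forward pass in half precision is what typically makes 8 GB cards viable at larger image sizes:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def training_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(batch)          # forward pass runs mostly in fp16
    scaler.scale(loss).backward()    # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```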

1