Ttttrrrroooowwww t1_j1m7yjz wrote
Reply to comment by perception-eng in [R][P] I made an app for Instant Image/Text to 3D using PointE from OpenAI by perception-eng
Have you found a standalone model that generates these synthetic images?
They mention they fine-tuned GLIDE, but I don't see that model in the repo.
Ttttrrrroooowwww t1_j1lrvoe wrote
Reply to [R][P] I made an app for Instant Image/Text to 3D using PointE from OpenAI by perception-eng
I tested the model the other day. Image-based generation was bad unless I used synthetic images. Have you noticed any correlation between image properties and the quality of the results?
Ttttrrrroooowwww t1_iyctkhw wrote
Reply to If the dataset is too big to fit into your RAM, but you still wish to train, how do you do it? by somebodyenjoy
Normally your dataloader fetches single samples from your dataset, e.g. reading images from disk one at a time. In that case RAM is never a problem.
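For illustration, a minimal PyTorch sketch of such a per-sample dataset (the paths/labels here are hypothetical placeholders, not from the original thread):

```python
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image

class LazyImageDataset(Dataset):
    """Reads one image from disk per __getitem__, so only a batch is ever in RAM."""
    def __init__(self, paths, labels):
        self.paths = paths    # list of image file paths (placeholders here)
        self.labels = labels  # matching labels

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = read_image(self.paths[idx]).float() / 255.0  # loaded lazily from disk
        return img, self.labels[idx]

# Only one batch of images is ever materialised in memory:
loader = DataLoader(LazyImageDataset(paths=["a.png"], labels=[0]),
                    batch_size=32, shuffle=True)
```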
If that is not an option for you (though I can't think of why it wouldn't be), then numpy memmaps might be for you. Basically an array that's read from disk instead of RAM. I use them to handle arrays with billions of values.
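A minimal sketch of how that looks (file name and shape are just examples):

```python
import numpy as np

# A disk-backed array: only the pages you touch are loaded into RAM,
# so the same pattern scales to arrays with billions of values.
arr = np.memmap("big_array.dat", dtype=np.float32, mode="w+", shape=(10_000_000,))
arr[:10] = np.arange(10)  # writes go to the file on disk
arr.flush()

# Reopen read-only later with the same dtype/shape.
arr = np.memmap("big_array.dat", dtype=np.float32, mode="r", shape=(10_000_000,))
print(arr[:10])
```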
Ttttrrrroooowwww t1_iuvceuv wrote
Reply to How to solve CUDA Out of Memory error by Nike_Zoldyck
An article that only scratches the surface of a topic that has been discussed thousands of times, without adding any value. Another piece of clutter on the internet.
Ttttrrrroooowwww OP t1_it6n348 wrote
Reply to comment by suflaj in EMA / SWA / SAM by Ttttrrrroooowwww
Thanks a lot
I read PARS. It looks very interesting and is somewhat related to pseudo-label entropy minimization. I'm thinking of going in a similar direction; great tip.
Ttttrrrroooowwww OP t1_it6dz0l wrote
Reply to comment by suflaj in EMA / SWA / SAM by Ttttrrrroooowwww
Can you point me to the papers you reference?
I've only come across 2019 papers about sample selection (assuming you mean data sampling).
Ttttrrrroooowwww OP t1_it5qizp wrote
Reply to comment by suflaj in EMA / SWA / SAM by Ttttrrrroooowwww
Currently my research focuses mostly on the semi-supervised space, where EMA in particular is still relevant. Apparently it's good for reducing the confirmation bias caused by the inherent noisiness of pseudo-labels.
While that agrees with your statement and answers my question (that I should use EMA because it's relevant), I've found some repos that don't mention all of these methods in their publications even though they exist in the codebase.
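For context, a minimal sketch of the EMA teacher update I mean (mean-teacher style; the decay value is just a common default, not something from the thread):

```python
import copy
import torch

def ema_update(teacher, student, decay=0.999):
    """teacher = decay * teacher + (1 - decay) * student, parameter-wise."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

student = torch.nn.Linear(10, 2)
teacher = copy.deepcopy(student)  # teacher starts as a frozen copy
# ... after every optimizer step on the student:
ema_update(teacher, student)
```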
Submitted by Ttttrrrroooowwww t3_y9a0j3 in deeplearning
Ttttrrrroooowwww t1_ir1tfxo wrote
Reply to [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
Mini-set training: tune on a small subset of your data. This partial dataset should roughly reflect the mean/distribution of your actual dataset. Also, if it is very small, the validation set should be a little larger.
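As a sketch, carving out a random mini-set with PyTorch (the dataset below is a stand-in for your real one):

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Stand-in for your real dataset; replace with your own Dataset object.
full_dataset = TensorDataset(torch.randn(1000, 8), torch.randint(0, 2, (1000,)))

g = torch.Generator().manual_seed(0)                       # reproducible subset
n_mini = len(full_dataset) // 10                           # e.g. 10% of the data
indices = torch.randperm(len(full_dataset), generator=g)[:n_mini]
miniset = Subset(full_dataset, indices.tolist())
```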
For the learning rate, tune a "base learning rate" and scale it to your desired batch size using the sqrt_k or linear_k rule: https://stackoverflow.com/questions/53033556/how-should-the-learning-rate-change-as-the-batch-size-change. Personally, the sqrt_k rule works very well for me, but linear_k works too (depending on the problem/model).
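A sketch of both rules (the function name and numbers are illustrative):

```python
import math

def scale_lr(base_lr, base_batch, new_batch, rule="sqrt_k"):
    """Scale a tuned base learning rate from base_batch to new_batch."""
    k = new_batch / base_batch
    if rule == "sqrt_k":
        return base_lr * math.sqrt(k)  # sqrt scaling rule
    if rule == "linear_k":
        return base_lr * k             # linear scaling rule
    raise ValueError(f"unknown rule: {rule}")

# e.g. base LR 1e-3 tuned at batch size 32, training at 256:
print(scale_lr(1e-3, 32, 256, rule="sqrt_k"))   # ~2.83e-3
print(scale_lr(1e-3, 32, 256, rule="linear_k")) # 8e-3
```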
Ttttrrrroooowwww t1_iqneo00 wrote
This has been posted here a lot already.
Ttttrrrroooowwww t1_j1n1stg wrote
Reply to comment by perception-eng in [R][P] I made an app for Instant Image/Text to 3D using PointE from OpenAI by perception-eng
Thanks!