alexnasla OP t1_iujn3mm wrote on October 31, 2022 at 8:49 PM
Reply to comment by K-o-s-l-s in [D] When the GPU is NOT the bottleneck...? by alexnasla
OK, so what I did was actually max out the input buffers to the most the GPU can handle without crashing. So basically fully saturating the VRAM.
alexnasla OP t1_iujbukx wrote on October 31, 2022 at 7:33 PM
Reply to comment by BlazeObsidian in [D] When the GPU is NOT the bottleneck...? by alexnasla
I'm pretty sure it's running on the GPU. I don't remember what the GPU utilization was though; I'll take a look when I get a chance. The test that I mentioned ran for 8 hours.
alexnasla OP t1_iuj9596 wrote on October 31, 2022 at 7:14 PM
Reply to comment by fnbr in [D] When the GPU is NOT the bottleneck...? by alexnasla
Right now the bottleneck is such that I need to speed training up by about 10x so that training time matches sampling time, which would let me sample and train at the same time without the bottleneck.
alexnasla OP t1_iuj8se6 wrote on October 31, 2022 at 7:12 PM
Reply to comment by Kon-kkk in [D] When the GPU is NOT the bottleneck...? by alexnasla
Oh, my bad! PyTorch. It's 4 sequential layers: Dense + Conv1d + LSTM + Dense. Hmm, any resources you know of I can check out to learn more about doing that?
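For context, the Dense + Conv1d + LSTM + Dense stack described above could look roughly like this in PyTorch. All dimensions (batch, sequence length, feature and hidden sizes, kernel size) are hypothetical, since the comment only gives the layer order:

```python
import torch
import torch.nn as nn

# Hypothetical sizes -- the thread gives only the layer order, not shapes.
BATCH, SEQ_LEN, N_FEATURES, HIDDEN = 8, 100, 32, 64

class DenseConvLSTMDense(nn.Module):
    """Sketch of the described stack: Dense -> Conv1d -> LSTM -> Dense."""
    def __init__(self):
        super().__init__()
        self.fc_in = nn.Linear(N_FEATURES, HIDDEN)
        self.conv = nn.Conv1d(HIDDEN, HIDDEN, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.fc_out = nn.Linear(HIDDEN, 1)

    def forward(self, x):                 # x: (batch, seq, features)
        h = self.fc_in(x)                 # (batch, seq, hidden)
        # Conv1d expects (batch, channels, seq), so transpose around it.
        h = self.conv(h.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)               # (batch, seq, hidden)
        return self.fc_out(h)             # (batch, seq, 1)

model = DenseConvLSTMDense()
out = model(torch.randn(BATCH, SEQ_LEN, N_FEATURES))
print(tuple(out.shape))  # (8, 100, 1)
```

The LSTM in the middle is the usual suspect for poor GPU utilization in a stack like this, since its sequential recurrence limits parallelism compared with the dense and conv layers.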
[D] When the GPU is NOT the bottleneck...? Submitted by alexnasla t3_yikumt on October 31, 2022 at 6:45 PM in MachineLearning 12 comments 5