Submitted by pommedeterresautee t3_10xp54e in MachineLearning
clauwen t1_j7txja0 wrote
Reply to comment by SnooHesitations8849 in [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl by pommedeterresautee
You really have to wonder why everybody uses torch and tf2 then. Stupid trillion-dollar companies, could just run everything on CPU, if only they could hire C++ devs. Billions of dollars down the drain, really unlucky.
Wrandraall t1_j7u76m1 wrote
Training ≠ inference anyway. Whatever inference times can be reached on CPU, training still benefits from GPUs through parallelization and caching.
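A rough back-of-the-envelope sketch of why that distinction matters: a training step runs forward plus backward over a large batch, while an inference request is a single forward pass, so the compute per training step dwarfs the compute per inference request. The layer sizes and batch size below are illustrative assumptions, not Whisper's actual shapes.

```python
# Rough FLOP counts for one dense layer, illustrating why training
# benefits far more from GPU parallelism than inference does.
# Shapes and batch sizes are illustrative assumptions only.

def dense_flops(batch, d_in, d_out):
    # One matmul: (batch x d_in) @ (d_in x d_out) -> 2*batch*d_in*d_out FLOPs
    return 2 * batch * d_in * d_out

def inference_flops(batch, d_in, d_out):
    # Inference is a forward pass only.
    return dense_flops(batch, d_in, d_out)

def training_flops(batch, d_in, d_out):
    # Training adds a backward pass: gradients w.r.t. inputs and weights
    # cost roughly two extra matmuls, so ~3x the forward cost.
    return 3 * dense_flops(batch, d_in, d_out)

# Inference: one sample at a time (latency-bound, often fine on CPU).
per_request = inference_flops(batch=1, d_in=1024, d_out=1024)

# Training: large batches over many steps (throughput-bound, wants a GPU).
per_step = training_flops(batch=256, d_in=1024, d_out=1024)

print(per_step // per_request)  # -> 768: one training step ~768x one request
```

With a batch of 256 and the ~3x forward/backward factor, one training step costs on the order of 768 single-sample inference passes, and training repeats that for many thousands of steps.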