Submitted by pommedeterresautee t3_10xp54e in MachineLearning
Wrandraall t1_j7u76m1 wrote
Reply to comment by clauwen in [P] Get 2x Faster Transcriptions with OpenAI Whisper Large on Kernl by pommedeterresautee
Training ≠ inference anyway. Whatever speed can be reached with CPU inference, training still benefits from GPUs through parallelization and caching.