stevevaius t1_ja1vftm wrote

Very interesting. For a noob, is there any simple notebook that shows how to load a sound file and run the model on it in Google Colab?

2

pommedeterresautee OP t1_ja26tgi wrote

Our work is for GPUs with compute capability >= 8.0 (A10, A100, RTX 3090, etc.). On Colab you will likely get a T4 or similar (compute capability 7.5). Your best bet is to copy-paste the CUDA graph-related code from the Kernl library and use it with a PyTorch 2.0 nightly.
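For reference, a minimal sketch of the two pieces mentioned above, using plain PyTorch CUDA graph APIs rather than code from the Kernl repo, with a toy `nn.Linear` standing in for the real model:

```python
import torch

# Kernl's fast path needs compute capability >= 8.0 (Colab T4s report 7.5).
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")

# Toy model and static input; a real setup would use the actual network.
model = torch.nn.Linear(512, 512).cuda().eval()
static_input = torch.randn(8, 512, device="cuda")

with torch.no_grad():
    # Warm up on a side stream before capture, as PyTorch requires.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            model(static_input)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one forward pass into a CUDA graph.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_output = model(static_input)

# Replay on new data: copy it into the static input buffer, then replay.
static_input.copy_(torch.randn(8, 512, device="cuda"))
g.replay()  # static_output now holds the result for the new input
print(static_output.shape)
```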

2

stevevaius t1_ja27q2v wrote

Thanks. For simply uploading a wav file and transcribing it, is there any implementation on Colab? Sorry to bother you. I am working with whisper.cpp, but the large model is not fast enough for streaming. I am looking for faster methods to solve this issue.
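For context, a minimal sketch of the kind of Colab notebook being asked about, using the standard `google.colab` upload helper and the openai-whisper package (the plain, unoptimized path, not Kernl's; the "large" model name is just an example):

```python
# Minimal Colab cell: upload a wav file and transcribe it with openai-whisper.
# pip install -U openai-whisper
import whisper
from google.colab import files

uploaded = files.upload()           # pick a .wav file in the browser dialog
audio_path = next(iter(uploaded))   # name of the uploaded file

model = whisper.load_model("large") # smaller models ("base", "small") are faster
result = model.transcribe(audio_path)
print(result["text"])
```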

1