
thundergolfer t1_iyk3df7 wrote

Try modal.com.

Modal is an ML-focused serverless cloud. It is much more general than replicate.com, which only lets you deploy ML model endpoints, but it's still extremely easy to use.

It's the platform that this openai/whisper podcast transcriber is built on: /r/MachineLearning/comments/ynz4m1/p_transcribe_any_podcast_episode_in_just_1_minute/.

Or here's an example of doing serverless batch inference: modal.com/docs/guide/ex/batch_inference_using_huggingface.

This example from Charles Frye runs Stable Diffusion Dream Studio on Modal: twitter.com/charles_irl/status/1594732453809340416

gyurisc OP t1_izmxw61 wrote


This looks really nice. I will give it a try.

thundergolfer t1_izol4cw wrote

Please do! DM me your email and I'll approve your account.
