Submitted by doyougitme t3_yqiymz in MachineLearning

I've been building unweave.io as a zero-setup way of running your machine learning code serverlessly on GPUs.

We get asked all the time whether we support notebooks on Unweave. So we put together a serverless Jupyter Lab portal built on top of Unweave: playground.unweave.io

You can choose from five different GPUs or run on CPU only. The lab comes with an `uwstore` folder at the root of the repository; any files you add there persist across sessions.
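For example, from a notebook cell (a minimal sketch; the only assumption from the post is that `uwstore/` sits at the repo root and is the only persisted path):

```python
from pathlib import Path

# Files under uwstore/ persist across lab sessions.
persistent = Path("uwstore") / "notes.txt"
persistent.write_text("this file survives a lab restart\n")

# Anything written elsewhere is ephemeral and disappears with the session.
scratch = Path("/tmp/notes.txt")
scratch.write_text("this file is gone after the session ends\n")
```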

We've pre-loaded all labs with a set of commonly used ML dependencies. If you need additional dependencies, you can install them directly in the Jupyter Lab and record them in a `requirements.txt` or `environment.yml` file; Unweave will automatically reinstall them the next time you start a lab.
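For instance, in a notebook cell (a sketch; `einops` is just an example package, not something the labs require):

```python
# Install an extra package for the current session (IPython magic).
%pip install einops==0.6.0

# Record it so Unweave reinstalls it on the next lab start.
with open("requirements.txt", "a") as f:
    f.write("einops==0.6.0\n")
```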

Since GPUs are expensive to run, we unfortunately can't make the service free. However, we've credited each account with $5 to start with. Billing is by the minute and cheaper than AWS or GCP :)
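As a back-of-the-envelope example of per-minute billing (the rate below is made up purely for illustration; real prices are shown in the portal):

```python
# Hypothetical rate -- see the portal for actual GPU prices.
rate_per_hour = 0.60                       # $/hour, illustrative only
minutes_used = 45
cost = rate_per_hour / 60 * minutes_used   # billed by the minute
print(f"${cost:.2f}")                      # $0.45; the $5 credit lasts ~8.3 hours at this rate
```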

I'd love to hear what you think!

P.S. Here's a demo: https://www.loom.com/share/0a06836e9832472d853b75bcae334020


Comments


Mefaso t1_ivojkuh wrote

Not showing any pricing without login is a bit of a red flag for me.

Is there a way to see the prices for the different GPUs without sharing my personal information?


ReginaldIII t1_ivqkw78 wrote

TOS > Prohibited Uses > 3: there's a typo, "Umpersonate".

I couldn't find a formal statement of your data policy for data uploaded to "uwstore", which I assume, from your stack and how it appears in the notebooks, is AWS EBS?

Do you give guarantees about data sandboxing between users? Have you penetration tested the ability to leak data out of another user's environment or persistent storage?

Do you reserve the right to audit data a user has uploaded into uwstore, or do you promise not to look at users' data? This has a pretty big impact on what kinds of data people can process on your service, as we often need a clearly written data policy to know we're operating within our rules of governance.

You allow Docker images, which is a nice feature, but are there size limits on images? You also provide a build service based on a user-provided Dockerfile; how many cores, how much RAM, and how much local storage does that build process get?

If I have pushed a (potentially large) image to a repository and I base my Dockerfile FROM that image, is this allowed, or is there a set of curated base images we need to use?

Do you allow users to provide an access token so you can pull images from a private repository? If you've pulled a private image from a repository and it is now in the cache of your services, are other users able to base their Dockerfiles FROM that private image without providing an access token?
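To illustrate the scenario being asked about (hypothetical registry and image names, not anything Unweave actually ships):

```dockerfile
# registry.example.com/team/base is a private image that user A's token pulled.
# The question: once it sits in the build service's cache, can user B
# build FROM it without presenting their own credentials?
FROM registry.example.com/team/base:latest
COPY train.py /app/train.py
CMD ["python", "/app/train.py"]
```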


Nmanga90 t1_ivshzfi wrote

So this is just Google Colab then?
