
rajatarya OP t1_j0gvprn wrote

Great question! There isn't a limit on single file size. Since we chunk each file into blocks, the total size of the file isn't a limiting factor. Right now overall repo sizes can be up to 1 TB, but we have plans to scale that to 100 TB in the next year.
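For intuition, here is a minimal sketch of content-defined chunking in Python. The boundary rule, rolling window, target chunk size, and hash choice are illustrative assumptions for this sketch, not XetHub's actual implementation:

```python
import hashlib

# Illustrative only: a simplified content-defined chunker. The real block
# sizes, boundary rule, and hashes used by XetHub are not specified here.
TARGET_MASK = (1 << 13) - 1  # ~8 KiB average chunk size in this sketch


def chunk_file(path, window=48):
    """Split a file into chunks whose boundaries depend on content, not offsets."""
    chunks = []
    buf = bytearray()
    with open(path, "rb") as f:
        data = f.read()
    for byte in data:
        buf.append(byte)
        if len(buf) >= window:
            # Declare a boundary when the hash of the trailing window
            # matches a target pattern, so identical content produces
            # identical chunks even if it shifts within the file.
            h = int.from_bytes(hashlib.sha256(buf[-window:]).digest()[:4], "big")
            if h & TARGET_MASK == 0:
                chunks.append((hashlib.sha256(buf).hexdigest(), bytes(buf)))
                buf.clear()
    if buf:
        chunks.append((hashlib.sha256(buf).hexdigest(), bytes(buf)))
    return chunks
```

Because boundaries follow content rather than fixed offsets, editing one part of a large file only changes the chunks around the edit, which is what makes per-file size largely irrelevant.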

Would love it if you could try out XetHub and share your thoughts after using it a bit.

25

kkngs t1_j0gx2rj wrote

What's your model here? Is this only offered as a cloud-based service? Can we host our own? The case I'm interested in has some challenges with client data, data residency, security, etc.

10

rajatarya OP t1_j0gxo3b wrote

We are still early in our thinking on the business model, so we would love to hear your thoughts on this.

In general, we are thinking about usage-based pricing based on compute, storage, and transfer.

Right now we offer a cloud-based multi-tenant service. We can also deploy into your cloud environment (VPC) as a single-tenant offering.

I would love to hear more about the use case you are thinking about - please DM me to talk more about it (and to hear more details on the single-tenant offering).

13

waffles2go2 t1_j0hxp5k wrote

This does look very interesting, but I fear the business model hasn't had the same focus. There are a ton of toolboxes, but people want solutions...

−6

_matterny_ t1_j0jvsyr wrote

If you are splitting things into chunks, is there a file quantity limit?

1

rajatarya OP t1_j0jx9cw wrote

No specific file-count limit. Because we scan and chunk every file, the number of files in the repo doesn't matter. But for each file in the repo we leave a pointer file that references the Merkle tree entry for that file.
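To illustrate the idea, here is a hypothetical sketch in Python of writing such a pointer file. The field names (`merkle_hash`, `file_size`) and the JSON layout are assumptions for illustration only, not XetHub's real pointer format:

```python
import json
import os


def write_pointer(repo_path, rel_path, merkle_root_hash, file_size):
    """Replace a large file in the repo with a small pointer that references
    the root of its chunk Merkle tree in the block store (illustrative)."""
    pointer = {
        "merkle_hash": merkle_root_hash,  # hypothetical field: root hash of the file's chunk tree
        "file_size": file_size,           # hypothetical field: original size in bytes
    }
    with open(os.path.join(repo_path, rel_path), "w") as f:
        json.dump(pointer, f, indent=2)
```

The repo itself only stores these small pointers, so the file count adds negligible overhead per file regardless of how large the underlying data is.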

2