Submitted by nexflatline t3_zwzzbc in MachineLearning
In Japan, deep learning models are not protected as intellectual property. Because of that, I'm currently running the model in the cloud, but that has been causing multiple issues and driving up costs. Since this model requires hefty processing power, I'm planning to ship mini-PCs with powerful GPUs and everything pre-installed directly to customers. But then how do I protect the model, which took a lot of effort, time, and money to train, from being stolen?
The main issue here is probably having a market that is broad enough to make money, but at the same time niche enough that it isn't worth developing a whole new ecosystem just to protect the model. Is there any readily available OS or form of container made for this purpose, or does anyone have another suggestion?
lolillini t1_j1xt5rj wrote
If a customer is willing to buy hardware with a powerful enough GPU from you, pay for your model, set it up, and maintain it, they likely value whatever you are offering enough to pay a lot per call/inference if you offer your solution over an API.
Deploying the model on the cloud and setting up a scalable API pipeline is a pain the first time, sure, but I'd say it's waaay less pain than procuring, shipping, and maintaining the model on physical hardware. Plus there are the IP issues you mentioned.
It's probably easier for you to hire a cloud or ML architect to set up a proper cloud pipeline and API for your model than to ship physical hardware. You can give a dummy model to your temporary hire to set up the pipeline for you.
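To make the API idea concrete, here's a minimal sketch of what "serving the model behind an endpoint" looks like, using only the Python standard library. The `predict()` function is a hypothetical stand-in (it just sums the inputs); in a real deployment you'd load your trained network there, and you'd put this behind a proper framework, auth, and a load balancer. The point is that customers only ever see the API, never the weights.

```python
# Minimal JSON-over-HTTP inference endpoint (stdlib only).
# predict() is a hypothetical dummy model; swap in real inference.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Dummy stand-in for the trained model: returns the sum of the inputs.
    return {"score": sum(features)}


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"features": [1, 2, 3]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = predict(payload["features"])
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging.
        pass


def serve(port=8000):
    # Blocks forever; run behind a process manager in production.
    HTTPServer(("127.0.0.1", port), InferenceHandler).serve_forever()
```

The model weights stay on your server, so there's nothing to steal on the customer's side; they just pay per call.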
Good luck! (幸運を)