Submitted by nexflatline t3_zwzzbc in MachineLearning
lolillini t1_j1xt5rj wrote
If a customer is willing to buy hardware with a powerful enough GPU from you, pay for your model, set it up, and maintain it, they almost certainly value what you're offering enough to pay a lot per call/inference if you offered the same solution over an API.
Deploying the model in the cloud and setting up a scalable API pipeline is a pain the first time, sure, but I'd say it's waaay less pain than procuring, shipping, and maintaining the model on physical hardware. Plus there are the IP issues you mentioned.
It's probably easier for you to hire a cloud or ML architect to set up a proper cloud pipeline and API for your model than to ship physical hardware. You can give a dummy model to your temporary hire to set up the pipeline for you, as in the sketch below.
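Something like this (a minimal sketch, assuming FastAPI; all names here are hypothetical, not your actual stack) is enough for the hire to build and test the whole pipeline, and you swap the real model in after handover:

```python
# Minimal inference API built against a dummy model, so a contractor
# can wire up the pipeline without ever seeing the real weights.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DummyModel:
    """Stand-in exposing the same interface as the real model."""
    def predict(self, features: list[float]) -> float:
        return sum(features)  # placeholder logic only

model = DummyModel()  # replace with the real model after handover

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    return {"prediction": model.predict(req.features)}
```

Run it with `uvicorn app:app` and the contractor can build out scaling, auth, and monitoring around that endpoint without access to anything sensitive.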
Good luck.
nexflatline OP t1_j1xy26j wrote
Thank you for the tips. If I may give more details to make the problem clearer: we already have a cloud architect, and the cloud ML system is already up and working. But we are dealing with large amounts of real-time, high-resolution video, and that is what makes it almost impossible to profit using cloud ML (the latency is also not as good as we expected). For this application we need full-HD video decoding at high frame rates.
The end users are people with no special computer knowledge who operate everything through a mobile application (already built and working). Our idea now is to keep the mobile app but move the server on-premises (a mini-PC installed at the customer's location). The problem is that the mini-PC would have the model stored on it, and we can't find a way to keep it safe.
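For a sense of scale, here is the back-of-envelope math (the bitrate and price are assumed round numbers, not our actual figures):

```python
# Rough estimate of why cloud inference on live full-HD video gets
# expensive. All constants below are assumptions for illustration.
BITRATE_MBPS = 8       # typical 1080p30 H.264 stream
HOURS_PER_DAY = 8      # assumed usage per customer
PRICE_PER_GB = 0.05    # assumed cloud ingest/processing cost, USD

gb_per_day = BITRATE_MBPS / 8 * 3600 * HOURS_PER_DAY / 1000  # MB/s -> GB
print(f"{gb_per_day:.1f} GB/day/customer, "
      f"~${gb_per_day * PRICE_PER_GB * 30:.0f}/month each")
# -> 28.8 GB/day/customer, ~$43/month each (before any GPU time)
```

That per-customer transfer volume, on top of GPU time, is what kills the margins for us.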
HGFlyGirl t1_j2al2u4 wrote
Whatever solution you find, be mindful of how it impacts the bottom line. It's easy to spend more on protection against theft than you could ever lose to a theft.
It may be impossible to make it completely safe from theft, but it can be made difficult, and as you say, your customers have little knowledge of computers. I once had a customer actually pay a hacker to steal my software; I caught them at it, and a letter from the legal team was all I needed. I only caught it because I had legitimate remote access.
Can you encrypt the model on disk and have your software decrypt it only in memory, at the point of inference? That would make the model file useless in isolation.
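Roughly like this (a sketch assuming the `cryptography` package and a PyTorch-style model; the key delivery is the real work and is only stubbed here):

```python
# Encrypt-at-rest / decrypt-in-memory sketch. The key is fetched at
# runtime so it never sits on disk next to the encrypted weights.
import io

import torch  # assumed; the same idea works for any serialized model
from cryptography.fernet import Fernet

def fetch_key() -> bytes:
    """Hypothetical: retrieve the Fernet key at runtime, e.g. from a
    licensing server, instead of storing it on the mini-PC."""
    raise NotImplementedError

def load_encrypted_model(path: str):
    cipher = Fernet(fetch_key())
    with open(path, "rb") as f:
        plaintext = cipher.decrypt(f.read())  # decrypted only in memory
    return torch.load(io.BytesIO(plaintext))  # never written back to disk
```

You encrypt the weights once on your side with `Fernet.generate_key()` and `cipher.encrypt()`. A determined attacker can still dump process memory, so this raises the bar rather than eliminating the risk, but combined with your remote access it covers the unsophisticated customer case.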