Submitted by nexflatline t3_zwzzbc in MachineLearning
PassionatePossum t1_j231qxj wrote
We protect our models with TPMs. The model is stored on the device in encrypted form using a device-specific key. During boot-up, the state of the system is measured into hash values inside the TPM, and the TPM only releases the decryption key if those measurements match the state the key was sealed against. If the system or the software running on it has been modified, the measurements change and decryption fails.
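For concreteness, here is a minimal sketch of the boot-time step, assuming the key was previously sealed against PCR measurements with tpm2-tools and the model file is AES-GCM encrypted with the nonce prepended. The paths, PCR selection, and file layout are made up for illustration; adapt them to your platform.

```python
# Boot-time model decryption sketch. Assumes tpm2-tools is installed and a
# sealed-object context was created during provisioning (hypothetical paths).
# Requires the `cryptography` package.
import subprocess
from pathlib import Path

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SEALED_KEY_CTX = "/etc/model/seal.ctx"          # hypothetical: sealed key object
ENCRYPTED_MODEL = Path("/etc/model/model.enc")  # hypothetical: nonce || ciphertext
PCR_POLICY = "pcr:sha256:0,2,4,7"               # example PCR selection


def unseal_model_key() -> bytes:
    """Ask the TPM to release the key; fails if the PCR measurements changed."""
    result = subprocess.run(
        ["tpm2_unseal", "-c", SEALED_KEY_CTX, "-p", PCR_POLICY],
        capture_output=True,
        check=True,  # a modified system raises CalledProcessError here
    )
    return result.stdout


def decrypt_model() -> bytes:
    """Decrypt the model blob with the unsealed key (96-bit nonce prefix)."""
    blob = ENCRYPTED_MODEL.read_bytes()
    nonce, ciphertext = blob[:12], blob[12:]
    key = unseal_model_key()
    return AESGCM(key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    model_bytes = decrypt_model()
    print(f"decrypted {len(model_bytes)} bytes")
```

The key never touches the filesystem in plaintext; it only exists in memory between the unseal call and the decryption, which is the point of sealing it to the measured boot state in the first place.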
The nice thing about TPMs is that they are effectively write-only: you can provision a key into one, but you can never read it back out. Nobody gets to see the key except the TPM.
However, be careful not to use GPLv3-licensed software. GPLv3 not only requires you to release your source code (which is something I could live with), but its anti-tivoization clause also requires you to hand over the installation information, such as signing keys, that users would need to run modified software on your hardware (which is completely bonkers).
nexflatline OP t1_j23hkwn wrote
That's exactly the type of suggestion I was looking for, and backed by real-life experience. I will look into it and see how it would work in our situation. Thank you very much.