
geoffroy_lesage OP t1_jdbz94x wrote

I see. I like the black-box aspect, but I understand it makes things difficult when we need consistent output... What kind of "key" would you be able to generate, and with what models? What about mathematical or statistical ways to reduce the output and make it more stable? This might be a dumb idea, but imagine the model spits out floats: we expect 1 but get 1.1, so we could round to integers and more often land on 1... or we could do multiple runs and average them out, or use fancier math like finite fields, modulo arithmetic, working in a different base, etc.
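Something like this is what I have in mind — a rough sketch, assuming a hypothetical `model` callable that returns a float vector:

```python
import numpy as np

def stable_output(model, user_data, runs=10):
    """Run the (noisy) model several times, average, then round,
    so small fluctuations like 1.1 vs 0.9 both collapse to 1."""
    outputs = np.stack([model(user_data) for _ in range(runs)])
    return np.round(outputs.mean(axis=0)).astype(int)
```

Rounding is effectively quantisation: the coarser the grid, the more noise it absorbs, at the cost of how much entropy the result carries.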
And yeah, I get that we could use something that's on-device, but unfortunately that's not something I want to rely on... nothing that is hard-coded anywhere can be used.
The goal here is to generate this key and use it to encrypt/decrypt stuff. I never want to store this key anywhere; it needs to be generated from the user data fed into the model.
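Roughly, the flow I'm imagining (just a sketch — `key_from_features` is made up, hashing features like this isn't a real key-derivation scheme, and it obviously inherits the instability problem; it uses the third-party `cryptography` package):

```python
import base64
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def key_from_features(quantized):
    """Hash the stabilised integer vector into a 32-byte Fernet key."""
    raw = hashlib.sha256(repr(list(quantized)).encode()).digest()
    return base64.urlsafe_b64encode(raw)  # Fernet expects base64 of 32 bytes

# The same rounded vector always yields the same key, derived on demand
key = key_from_features([1, 0, 3, 2])
token = Fernet(key).encrypt(b"secret payload")
assert Fernet(key).decrypt(token) == b"secret payload"
```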

2

Jaffa6 t1_jdbzs22 wrote

This is unfortunately going to be a bit harsh, but it's worth knowing sooner rather than later: Cryptography (which this essentially is) is a VERY difficult field, and creating a secure encryption scheme is notoriously hard to get right.

Wanting to encrypt and decrypt without the key being stored anywhere is an admirable goal, but this is certainly not the way I'd recommend doing it, and it's unlikely to be secure.

If you're dead set on doing it like this, then pretty much any neural network can do it. You're just inputting numbers and wanting numbers out.

I guess your training data would be many sets of behavioural data from each user (say, at least 50 users), and you'd train the model to predict the user from that data while heavily penalising it when it matches another user.
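Something like this, as a rough PyTorch sketch with made-up feature and user counts — cross-entropy over user IDs rewards the right user and penalises putting probability on anyone else:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 32 behavioural features, 50 users
N_FEATURES, N_USERS = 32, 50

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, N_USERS),  # one logit per user
)
loss_fn = nn.CrossEntropyLoss()  # rewards the true user, penalises mass on others
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, user_ids):
    """features: (batch, N_FEATURES) float tensor of behavioural data
    user_ids: (batch,) long tensor of which user each sample came from"""
    optimizer.zero_grad()
    loss = loss_fn(model(features), user_ids)
    loss.backward()
    optimizer.step()
    return loss.item()
```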

1

geoffroy_lesage OP t1_jdc027t wrote

I see, understood. You say it's harsh because it would essentially be unreliable? If it's possible, is there no way of improving it, or will it always be unreliable due to the nature of this method?

Right, I've been thinking about this for a bit, and I'm not dead set on doing it like this, but it seemed like there was a way, so I wanted to explore it. Unfortunately I'm not as smart as all you guys and gals, but I figured I'd ask for opinions.

2

Jaffa6 t1_jdc1gz4 wrote

It's possible, but I think you'd struggle to improve it (though I freely admit that I don't know enough maths to say). But yeah, it's never going to be a reliable method at all.

To be honest, I'd expect you to have more problems with people not being able to sign in as themselves (inconsistent behaviour) than signing in as other people deliberately.

1

geoffroy_lesage OP t1_jdc1x8t wrote

I see, ok. This is encouraging, to be honest. I knew there wasn't going to be some magical, easy-to-see solution, but I think some research is needed in this department. This is something that could be huge, and maybe it's not ML but just logic gates chained together.
You said any neural net would do? Any particular one you would recommend for testing?

1

Jaffa6 t1_jdc264k wrote

For testing as a proof of concept, you could probably just use a shallow feedforward network. I don't think you need any complex or deep architecture here.
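Something like this would do, e.g. with scikit-learn and stand-in data shapes — one hidden layer, default everything:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in behavioural data: 500 samples, 32 features, 50 users
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.integers(0, 50, size=500)

# A single hidden layer is about as shallow as it gets
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
clf.fit(X, y)
print(clf.predict([X[0]]))  # predicted user ID for one sample
```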

1