
geoffroy_lesage OP t1_jdc027t wrote

I see, understood. You think it's harsh because it would essentially be unreliable? If it is possible, is there no way to improve it, or will it always be unreliable due to the nature of this method?

Right, I've been thinking about this for a bit. I'm not dead set on doing it this way, but it seemed like there was a path here, so I wanted to explore it. Unfortunately I'm not as smart as all you guys and gals, but I figured I'd ask for opinions.

2

Jaffa6 t1_jdc1gz4 wrote

It's possible, but I think you'd struggle to improve it (though I freely admit that I don't know enough maths to say for sure). But yeah, it's never going to be a truly reliable method.

To be honest, I'd expect you to have more problems with people not being able to sign in as themselves (inconsistent behaviour) than with people deliberately signing in as someone else.

1

geoffroy_lesage OP t1_jdc1x8t wrote

I see, ok. This is encouraging, to be honest. I knew there wasn't going to be a magical, obvious solution, but I think some research is needed in this area. This could be huge, and maybe it's not even ML but just logic gates chained together.
You said any neural net would do? Is there a particular one you'd recommend for testing?

1

Jaffa6 t1_jdc264k wrote

For testing as a proof of concept, you could probably just use a shallow feedforward network. I don't think you need any complex or deep architecture here.
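Something like this is roughly what I mean (a minimal PyTorch sketch, one hidden layer, treating it as "which user produced this sample" classification). The feature size and user count are placeholders, since I don't know what your input actually looks like:

```python
# Minimal sketch of a shallow feedforward classifier (PyTorch).
# NUM_FEATURES and NUM_USERS are hypothetical placeholders -- swap in
# whatever feature vector and user count your setup actually produces.
import torch
import torch.nn as nn

NUM_FEATURES = 32   # assumed size of the per-sample feature vector
NUM_USERS = 10      # assumed number of users to distinguish

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64),  # a single hidden layer is plenty for a PoC
    nn.ReLU(),
    nn.Linear(64, NUM_USERS),     # one logit per user
)

# Dummy forward pass with random data, just to show the shapes line up.
x = torch.randn(8, NUM_FEATURES)        # batch of 8 samples
logits = model(x)                       # shape: (8, NUM_USERS)
predicted_user = logits.argmax(dim=1)   # most likely user for each sample

# Training would be the usual cross-entropy loop:
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

If that can't separate your users with decent accuracy, a deeper network probably won't rescue it; the signal just isn't there.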

1