
waiting4omscs t1_j2lbbsu wrote

Not sure if this is a simple question, so I'll ask here before making a thread. How is on-device machine learning actually done? I just watched a video about "moonwalker" shoes, mechanical wheeled shoes that use AI to adapt to your stride; the developer said the shoe "learns your stride". How would that be done on-device? What would the underlying architecture be? What kinds of algorithms/models? Would there be pre-trained parameters already?

3

tdgros t1_j2m63e5 wrote

I can't say for sure, but there isn't necessarily any online training. You can imagine a hypernetwork regressing good parameters for a low-level task such as controlling the shoes' motors. It could also be a combination of good old-school sensor fusion and some nice marketing ;)
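
Purely as a sketch of what I mean (everything here is invented: the feature choices, the weights, the gains): a small offline-trained map from stride statistics to controller gains for the motors.

```python
import numpy as np

def stride_features(imu_window):
    """Hypothetical summary statistics from a window of IMU samples."""
    return np.array([imu_window.mean(), imu_window.std(),
                     np.abs(np.diff(imu_window)).mean()])

# Tiny "hypernetwork": a fixed, offline-trained linear map from stride
# features to controller gains (kp, kd). Weights here are placeholders.
W = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.2, 0.5]])
b = np.array([1.0, 0.1])

def regress_gains(features):
    # Softplus keeps the regressed gains positive.
    z = W @ features + b
    return np.log1p(np.exp(z))

def pd_control(kp, kd, error, d_error):
    """Low-level motor command from the regressed gains."""
    return kp * error + kd * d_error

imu_window = np.random.randn(200)          # stand-in for real sensor data
kp, kd = regress_gains(stride_features(imu_window))
command = pd_control(kp, kd, error=0.05, d_error=-0.01)
```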

3

waiting4omscs t1_j2ndilk wrote

It's a real task to separate marketing from implementation. Appreciate this response. I have a few things to learn more about: "hypernetwork" and "sensor fusion". Thank you.

2

tdgros t1_j2nfzj6 wrote

A hypernetwork is the term used when one network outputs the coefficients (weights) of another network.
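
In code, the idea is just that one network's output gets reshaped into the weights of a second network. A toy numpy sketch, with all shapes and numbers invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Hypernetwork": maps a 4-d conditioning input (e.g. stride features)
# to the 3*2 + 2 = 8 coefficients of a tiny target network.
H = rng.normal(size=(8, 4))

def target_net(x, coeffs):
    # Unpack the regressed coefficients into a 3 -> 2 linear layer.
    W = coeffs[:6].reshape(2, 3)
    b = coeffs[6:]
    return np.tanh(W @ x + b)

cond = rng.normal(size=4)      # conditioning input to the hypernetwork
coeffs = H @ cond              # hypernetwork output = target net's weights
y = target_net(rng.normal(size=3), coeffs)
```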

Sensor fusion is typically used with low-level sensors that are individually noisy, biased, or limited in their dynamics, but that complement each other and so can be "fused". For UAV navigation, for example, we fuse accelerometers, gyros, pressure sensors, GPS, and vision.
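
A complementary filter is about the smallest sensor fusion you can write: blend the gyro's smooth-but-drifting angle with the accelerometer's noisy-but-drift-free one. Sketch with made-up data:

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Fuse gyro angular rate (rad/s) with accelerometer-derived angle (rad)."""
    angle = accel_angle[0]
    estimates = []
    for w, a in zip(gyro_rate, accel_angle):
        # Gyro: integrate the rate (smooth, but drifts over time).
        # Accel: direct angle (no drift, but noisy / disturbed by motion).
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        estimates.append(angle)
    return np.array(estimates)

t = np.arange(0, 5, 0.01)
true_angle = 0.3 * np.sin(t)
gyro = np.gradient(true_angle, 0.01) + 0.02           # biased rate sensor
accel = true_angle + 0.05 * np.random.randn(t.size)   # noisy angle sensor
fused = complementary_filter(gyro, accel)
```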

2

comradeswitch t1_j33yet8 wrote

And what you describe can also happen partially: a model is developed offline that "learns to learn", or is simply pretrained on data that's likely to be representative, and this is then placed on the embedded system, which either has a much simpler learning task or just starts out much closer to optimal.
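
A rough sketch of that split (everything here is invented for illustration): a linear model pretrained offline on representative strides, with the device only adapting a per-user scale and offset via a few cheap gradient steps.

```python
import numpy as np

# --- offline (factory) -----------------------------------------------------
# Pretend a linear model of some stride quantity was fit on a large,
# representative dataset, then shipped to the device as constants.
W_pretrained = np.array([0.9, -0.3, 0.2])

# --- on-device ---------------------------------------------------------------
# The device only learns a per-user scale and offset: a much simpler task
# than training the whole model from scratch.
def predict(params, x):
    scale, offset = params
    return scale * (W_pretrained @ x) + offset

def adapt(params, x, target, lr=0.05):
    """One cheap SGD step on the two user-specific parameters."""
    scale, offset = params
    base = W_pretrained @ x
    err = scale * base + offset - target
    return (scale - lr * err * base, offset - lr * err)

params = (1.0, 0.0)
for x, target in [(np.array([1.0, 0.2, -0.1]), 0.8)] * 20:
    params = adapt(params, x, target)
```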

But I think you nailed it with the last sentence. I need the Scooby-Doo meme where "AI on a wearable embedded computer" is revealed to have been a Kalman filter all along.
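
For the record, the "Kalman filter all along" really can be tiny; a 1-D constant-state version with toy noise values fits in a few lines:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.05):
    """Minimal 1-D Kalman filter: constant-state model, process noise q,
    measurement noise r."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    out = []
    for z in measurements:
        p += q                      # predict: variance grows by process noise
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the measurement residual
        p *= (1 - k)
        out.append(x)
    return np.array(out)

noisy = 0.5 + 0.1 * np.random.randn(100)   # made-up sensor readings
smoothed = kalman_1d(noisy)
```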

2