
seek_it t1_iwyyo7k wrote

Can someone explain how its inference can be fast enough to process in real time?

5

Ok-Alps-7918 t1_iwz05ql wrote

Using an architecture optimised for mobile (MobileNet-style), compiling the ML model for mobile (e.g. for the AI accelerator chip on iOS devices), quantising the model, pruning, etc. I’d also imagine it’s being done locally on the device instead of in the cloud.

10

seek_it t1_iwz0epu wrote

That's why I'm even more surprised. Models like this are usually GAN-based, and GAN inference still requires serious compute power! On-device inference is even more astonishing!

1

pennomi t1_iwz8l6m wrote

It’s likely not a GAN.

7

Ok-Western2685 t1_iwzos8k wrote

It actually is, and indeed runs on the device.

They did amazing work in that regard!

3