
Hvarfa-Bragi t1_j418ses wrote

You're not seeing the raw data; you're just seeing the resulting fitted model.

That model isn't showing the data underlying it, which is a bit of a failing on OP's part.

It's like if I took a 3D textured model of something and stretched it over a kid's crayon drawing - the model here is much more detailed than the data that informs it, but all it's actually using is the positions.
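
To make the analogy concrete, here's a minimal sketch (Python, all function and variable names invented for illustration) of snapping a detailed template onto a handful of sensed joint positions: the fine surface detail in the result comes entirely from the template, not from the sparse measurements.

```python
import numpy as np

def fit_template_to_keypoints(template_joints, template_vertices, sensed_joints):
    """Rigidly align a detailed template to a few sensed joint positions
    (orthogonal Procrustes, reflection handling omitted for brevity).
    The mesh detail comes from the template, not the measurements."""
    mu_t = template_joints.mean(axis=0)
    mu_s = sensed_joints.mean(axis=0)
    # Best-fit rotation between the two centred joint sets
    u, _, vt = np.linalg.svd((template_joints - mu_t).T @ (sensed_joints - mu_s))
    rot = (u @ vt).T  # maps the template frame onto the sensed frame
    # Apply the same rigid transform to every vertex of the detailed mesh
    return (template_vertices - mu_t) @ rot.T + mu_s
```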

6

lutel t1_j419faf wrote

I get it, but you can't make out, for example, hand movements with a wavelength of 5 cm; the demo looks way too good to be real imo.
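
As a rough sanity check on that number (assuming 5 GHz WiFi, which the comment doesn't actually specify):

$$\lambda = \frac{c}{f} = \frac{3\times10^{8}\ \text{m/s}}{5\times10^{9}\ \text{Hz}} = 6\ \text{cm}$$

so ~5 cm is in the right ballpark for a single measurement.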

1

Hvarfa-Bragi t1_j41av1i wrote

It would depend on the distance from the transceiver and the analysis.

Even with a coarser (poorer) resolution, you can use parallax and time-domain information (movement and repeated sensings) to get a better-resolved result.
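
A toy illustration of the "repeated sensings" point (made-up numbers, nothing from the actual demo): averaging many noisy measurements of a roughly static point shrinks the residual error by about the square root of the number of sensings.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([1.0, 2.0, 0.5])   # metres (hypothetical target)
sigma = 0.05                           # ~5 cm per-measurement noise

for n in (1, 10, 100):
    samples = true_pos + rng.normal(0.0, sigma, size=(n, 3))
    err = np.linalg.norm(samples.mean(axis=0) - true_pos)
    print(f"{n:3d} sensings -> residual error ~{err * 100:.1f} cm")
```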

I didn't see any finger positioning, but algorithms could easily infer the position of hands via inverse kinematics and position the maquette accordingly.
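
And a minimal sketch of the inverse-kinematics idea (hypothetical two-link planar arm with made-up link lengths): from a single sensed wrist position you can recover plausible shoulder and elbow angles and pose a detailed model from them.

```python
import numpy as np

L1, L2 = 0.30, 0.25   # upper arm / forearm lengths in metres (illustrative)

def two_link_ik(x, y):
    """Analytic inverse kinematics for a planar two-link arm."""
    d2 = x * x + y * y
    # Elbow angle from the law of cosines (clamped for unreachable targets)
    cos_elbow = np.clip((d2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    elbow = np.arccos(cos_elbow)
    # Shoulder angle: direction to the target minus the offset from the bent elbow
    shoulder = np.arctan2(y, x) - np.arctan2(L2 * np.sin(elbow), L1 + L2 * np.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.4, 0.2))   # joint angles (radians) placing the wrist at (0.4, 0.2)
```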

It would be helpful if OP showed the raw data visualization underneath.

3