
NeverComments t1_iz04sea wrote

Depth sensing is used to properly spatialize digital content for AR. You can try to parse depth information from raw camera imagery with ML, but it's...not great. With the Quest Pro you need to manually tell the headset where your walls are, while Apple's ARKit can use LiDAR to automatically map out your floor plan.
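As an illustration, here's a minimal sketch of how an app opts into that automatic LiDAR mapping through ARKit's scene reconstruction API (it assumes an existing ARSession, so the `session.run` call is left as a commented placeholder):

```swift
import ARKit

// Configure world tracking with LiDAR-based scene reconstruction,
// which builds a mesh of the surrounding room automatically —
// no manual wall setup required.
let configuration = ARWorldTrackingConfiguration()

// Scene reconstruction is only available on LiDAR-equipped devices.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

// `session` would be the ARSession from your ARView or ARSCNView:
// session.run(configuration)
```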


Heliosvector t1_iz06ptb wrote

Is it similar tech to what they've been using for face unlock on phones?


NeverComments t1_iz0c7zm wrote

They're conceptually similar but measure different things. The dot projector used for Face ID acts as a sort of guide: the dots are projected in a grid, and the distortion of that grid on the surface it hits is used to infer the shape of the user's face.

A LiDAR sensor instead measures time of flight, which lets it determine the specific distance of objects relative to the sensor. The Face ID sensor can tell you that it's detected an object, but LiDAR can tell you exactly how far away that object is. That property makes LiDAR extremely useful for AR, where you need to know how far away a given surface is in order to render content at the appropriate size with the correct perspective distortion applied.
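To make the time-of-flight idea concrete, here's a back-of-the-envelope sketch: light travels to the surface and back, so the distance is half the round trip at the speed of light (the round-trip time below is a hypothetical value, not a real sensor reading):

```swift
// Time of flight in a nutshell: the sensor emits light, times how long
// it takes to bounce back, and halves the round trip to get distance.
let speedOfLight = 299_792_458.0  // meters per second
let roundTripTime = 6.67e-9       // seconds (hypothetical ~1 m reading)

let distance = speedOfLight * roundTripTime / 2.0
print(distance)                   // ≈ 1.0 meters
```

Those nanosecond-scale timings are why this needs dedicated hardware rather than a regular camera.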
