Submitted by radi-cho t3_11c8pqz in MachineLearning
Comments
londons_explorer t1_ja2m7hi wrote
If I'm understanding this paper correctly... This technique doesn't work if there are any moving objects in any of the camera scenes?
cvnh t1_ja2n6lj wrote
That's awesome. I was just getting started on something similar, but with even simpler geometries. Fantastic work.
AlbertoUEDev t1_ja39jhy wrote
Hello! I've been looking for a couple of weeks into adding segmentation to Unreal Engine! Let's see
AlbertoUEDev t1_ja39mi1 wrote
Are you on the NVIDIA devs Discord?
ITagEveryone t1_ja4fhb7 wrote
No, it probably won't model moving objects well. But IIRC this is not uncommon in 3D modeling.
spudmix t1_ja4xm0y wrote
Scan your real life environment into Minecraft
Sounds like a joke but honestly, I'm kinda tempted to implement that...
currentscurrents t1_ja5co7d wrote
Photogrammetry in general has a hard time with moving objects and especially objects that change shape.
Swing_Bishop t1_ja5yzo4 wrote
A good idea with some severe limitations it seems.
caks t1_ja6gib0 wrote
Serious question: how do you even annotate something like this?
radi-cho OP t1_ja27qnn wrote
Paper: https://arxiv.org/pdf/2302.12251.pdf
GitHub: https://github.com/nvlabs/voxformer
Abstract: Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics and reduces GPU memory during training by ~45% to less than 16GB.
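For anyone who wants a feel for the two-stage idea before digging into the repo, here is a minimal sketch of the densification stage. To be clear, this is not the actual VoxFormer implementation (the real model uses deformable attention and a depth-based query proposal stage; see the GitHub repo). The class name, shapes, and layer counts below are all hypothetical and simplified.

```python
# Minimal sketch of the "sparse visible queries -> dense voxels" idea from the
# abstract. NOT the real VoxFormer code; all names and shapes are illustrative.
import torch
import torch.nn as nn

class TwoStageCompletionSketch(nn.Module):
    def __init__(self, dim=128, num_classes=20):
        super().__init__()
        # Learnable mask token stands in for voxels with no visual evidence,
        # in the spirit of the masked-autoencoder design the abstract mentions.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.self_attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.classifier = nn.Linear(dim, num_classes)  # per-voxel semantics

    def forward(self, voxel_feats, visible_mask):
        # voxel_feats:  (B, N, dim) features for all voxel slots; only the
        #               visible/occupied ones (selected via depth estimation
        #               in stage 1) carry meaningful image features.
        # visible_mask: (B, N) bool, True where a voxel is a stage-1 query.
        B, N, D = voxel_feats.shape
        tokens = torch.where(
            visible_mask.unsqueeze(-1),        # keep the visible queries
            voxel_feats,
            self.mask_token.expand(B, N, D),   # fill occluded/empty slots
        )
        # Stage 2: self-attention propagates information from the sparse
        # visible queries to every voxel (densification).
        dense = self.self_attn(tokens)
        return self.classifier(dense)          # (B, N, num_classes) logits
```

The point of the sketch is the division of labor: image features are only attached to voxels the camera can actually see, and everything occluded or empty starts from a learned mask token and gets filled in by attention, rather than forcing 2D features onto 3D structures they never observed.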