Learning Geometrically-Grounded 3D Visual Representations for View-Generalizable Robotic Manipulation

Di Zhang1*, Weicheng Duan1*, Dasen Gu1, Hongye Lu1, Hai Zhang2,

Hang Yu1, Junqiao Zhao1, Guang Chen1
*Equal Contribution
Corresponding Author
1Tongji University, 2The University of Hong Kong
2026

We present GEM3D, a unified representation-policy learning framework for view-generalizable robotic manipulation.

Abstract

Real-world robotic manipulation demands visuomotor policies capable of robust spatial scene understanding and strong generalization across diverse camera viewpoints. While recent advances in 3D-aware visual representations have shown promise, they still suffer from several key limitations: (i) reliance on multi-view observations during inference, which is impractical in scenarios restricted to a single view; (ii) incomplete scene modeling that fails to capture the holistic and fine-grained geometric structures essential for precise manipulation; and (iii) a lack of effective policy training strategies to retain and exploit the acquired 3D knowledge. To address these challenges, we present GEM3D (Geometrically-Grounded 3D Manipulation), a unified representation-policy learning framework for view-generalizable robotic manipulation. GEM3D introduces a single-view 3D pretraining paradigm that leverages point cloud reconstruction and feed-forward Gaussian splatting under multi-view supervision to learn holistic geometric representations. During policy learning, GEM3D performs multi-step distillation to preserve the pretrained geometric understanding and effectively transfer it to manipulation skills. We conduct experiments on 12 RLBench tasks, where our approach outperforms the previous state-of-the-art (SOTA) method by 12.7% in average success rate. Further evaluation on six representative tasks demonstrates the strong zero-shot view generalization of our approach, with success rates dropping by only 22.0% and 29.7% under moderate and large viewpoint shifts, respectively, whereas the SOTA method suffers larger decreases of 41.6% and 51.5%.
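To make the pretraining objective concrete, the sketch below shows one plausible way to combine the two supervision signals named in the abstract: a point cloud reconstruction loss (here, symmetric Chamfer distance) and a multi-view photometric loss applied to images rendered from a feed-forward Gaussian splatting head. This is a minimal illustration, not the authors' released code; the function names, shapes, and the L1 photometric term are assumptions, and a real pipeline would produce rendered_views with a differentiable Gaussian rasterizer.

# Minimal sketch (hypothetical, not GEM3D's official implementation) of a
# pretraining loss that couples point-cloud reconstruction with multi-view
# rendering supervision, as described in the abstract.
import torch


def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and (B, M, 3)."""
    # Pairwise squared distances between every predicted and target point: (B, N, M).
    dists = torch.cdist(pred, target, p=2.0) ** 2
    # Nearest-neighbour distance in both directions, averaged over points and batch.
    return dists.min(dim=2).values.mean() + dists.min(dim=1).values.mean()


def pretraining_loss(
    pred_points: torch.Tensor,      # (B, N, 3) points reconstructed from a single view
    gt_points: torch.Tensor,        # (B, M, 3) reference point cloud (e.g. lifted depth)
    rendered_views: torch.Tensor,   # (B, V, 3, H, W) images splatted from predicted Gaussians
    gt_views: torch.Tensor,         # (B, V, 3, H, W) ground-truth multi-view images
    lambda_render: float = 1.0,     # assumed weighting between the two terms
) -> torch.Tensor:
    """Geometric (Chamfer) plus photometric (L1) supervision for single-view 3D pretraining."""
    geo = chamfer_distance(pred_points, gt_points)
    photo = (rendered_views - gt_views).abs().mean()
    return geo + lambda_render * photo


if __name__ == "__main__":
    # Dummy tensors only demonstrate the expected shapes; in practice rendered_views
    # comes from a differentiable Gaussian-splatting renderer, not random data.
    B, N, M, V, H, W = 2, 1024, 2048, 4, 64, 64
    loss = pretraining_loss(
        torch.randn(B, N, 3, requires_grad=True),
        torch.randn(B, M, 3),
        torch.rand(B, V, 3, H, W, requires_grad=True),
        torch.rand(B, V, 3, H, W),
    )
    loss.backward()
    print(f"pretraining loss: {loss.item():.4f}")

In this reading, the Chamfer term grounds the representation in scene geometry while the rendering term enforces view consistency; the subsequent multi-step distillation stage would then regularize the policy's features toward this pretrained encoder, though its exact form is not specified here.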

BibTeX

@misc{zhang2026learninggeometricallygrounded3dvisual,
      title={Learning Geometrically-Grounded 3D Visual Representations for View-Generalizable Robotic Manipulation}, 
      author={Di Zhang and Weicheng Duan and Dasen Gu and Hongye Lu and Hai Zhang and Hang Yu and Junqiao Zhao and Guang Chen},
      year={2026},
      eprint={2601.22988},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2601.22988}, 
}