I am currently a second-year Ph.D. student at MIL, the University of Tokyo, advised by Prof. Tatsuya Harada and Lecturer Yusuke Mukuta. I also received my B.S. and M.S. degrees from the University of Tokyo under the same advisors.

Research Interests

My research focuses on modeling human dynamics and 3D environments from multimodal data. I am particularly interested in human motion generation and prediction, feed-forward 3D/4D reconstruction, and time-series forecasting. Recently, I have been exploring multimodal large language models as a way to better understand complex human behaviors and dynamic scenes. In the long term, my goal is to build human-centered intelligent systems that anticipate human intentions and motions, enabling robust and adaptive human–robot interaction in real-world environments.

Education

  • [2024.04 - Present] Ph.D. in Computer Science, The University of Tokyo
  • [2022.04 - 2024.03] M.S. in Computer Science, The University of Tokyo
  • [2018.04 - 2022.03] B.S. in Engineering, The University of Tokyo

Internship

Research

  • Ryo Umagami, Liu Yue, Xuangeng Chu, Ryuto Fukushima, Tetsuya Narita, Yusuke Mukuta, Tomoyuki Takahata, Jianfei Yang, Tatsuya Harada. “Intend to Move: A Multimodal Dataset for Intention-Aware Human Motion Understanding.” NeurIPS 2025. [Project Page] [Video]
  • Shuhong Liu, Chenyu Bao, Ziteng Cui, Yun Liu, Xuangeng Chu, Lin Gu, Marcos V Conde, Ryo Umagami, Tomohiro Hashimoto, Zijian Hu, Tianhan Xu, Yuan Gan, Yusuke Kurose, Tatsuya Harada. “RealX3D: A Physically-Degraded 3D Benchmark for Multi-view Visual Restoration and Reconstruction.” arXiv:2512.23437, 2025. [Paper]
  • Kohei Uehara, Nabarun Goswami, Hanqin Wang, Toshiaki Baba, Kohtaro Tanaka, Tomohiro Hashimoto, Kai Wang, Rei Ito, Naoya Takagi, Ryo Umagami, Yingyi Wen, Tanachai Anakewat, Tatsuya Harada. “Advancing Large Multi-modal Models with Explicit Chain-of-Reasoning and Visual Question Generation.” arXiv:2401.10005, 2024. [Paper]
  • Ryo Umagami, Yu Ono, Yusuke Mukuta, Tatsuya Harada. “HiPerformer: Hierarchically Permutation-Equivariant Transformer for Time Series Forecasting.” arXiv:2305.08073, 2023. [Paper]

Misc