George Jiayuan Gao

gegao@seas.upenn.edu


I am a second-year Robotics Master’s student in the General Robotics, Automation, Sensing & Perception (GRASP) Laboratory at the University of Pennsylvania, advised by Prof. Nadia Figueroa and Prof. Dinesh Jayaraman.

Previously, I completed my undergraduate studies in Mathematics and Computer Science at Washington University in St. Louis, where I worked with Prof. Yevgeniy Vorobeychik.

My research broadly focuses on the generalization of learning-based robot policies 🦾.

Link to my CV (last update: April 2025).

News

Nov 09, 2024 Presented our work on Object-Centric Recovery at CoRL 2024 Workshop on Lifelong Learning for Home Robots. Presentation Video.
Oct 26, 2024 Our paper “Out-of-Distribution Recovery with Object-Centric Keypoint Inverse Policy For Visuomotor Imitation Learning” was accepted as a Spotlight ✨ at the CoRL 2024 Workshop on Lifelong Learning for Home Robots.

Publications

  1. ocr_preview.png
    Out-of-Distribution Recovery with Object-Centric Keypoint Inverse Policy For Visuomotor Imitation Learning
    George Jiayuan Gao, Tianyu Li, and Nadia Figueroa

Selected Projects

  1. eureka_manip.png
(Ongoing) Eureka for Manipulation: Real-World Dexterous Agent via Large-Scale Reinforcement Learning
    Training a skilled manipulation agent with RL in simulation that can zero-shot transfer to the real world is hard. The question is: does this get any easier when we add an LLM to the loop and harness enormous computing power, such as hundreds of Nvidia’s latest-generation data-center GPUs? 2025.
  2. vlmgineer.png
(Ongoing) VLMgineer: Vision-Language-Model Driven Gripper Add-Ons for Universal Manipulation
    Can we give robots the ability to design and use tools to solve problems? This project proposes a pipeline that leverages Vision-Language Models (VLMs) to autonomously generate gripper add-ons, enhancing robots’ ability to handle highly diverse and complex objects and scenarios, 2025.
  3. converger.gif
(Ongoing) Stable Visuomotor Policy from a Single Demo: Elastic Action Synthesis Data Augmentation
    We propose a methodology that uses our in-house Elastic-Motion-Policy, enabling the training of visuomotor policies with full spatial generalization from only a single demonstration, 2024.
  4. gdn-act.png
    Novel Environment Transfer of Visuomotor Policy Via Object-Centric Domain-Randomization
    Proposed GDN-ACT, a novel, scalable approach that enables zero-shot generalization of visuomotor policies across unseen environments, using a pre-trained state-space mapping for object localization, May 2025.
  5. gmot.gif
    Modular Gait Optimization: From Unit Moves to Multi-Step Trajectory in Bipedal Systems
    Proposed the Gait Modularization and Optimization Technique (GMOT), which leverages modular unit gaits as initialization for Hybrid Direct Collocation (HDC), reducing sensitivity to constraints and enhancing computational stability across various gaits, including walking, running, and hopping, Dec 2023.
  6. yvxaiver.gif
    Miniature City Autonomous Driving Platform Development with Real-Time Vision-Based Lane-Following
Built the drive stack for Washington University’s inaugural miniature-city autonomous driving platform, developing its real-time vision-based lane-following pipeline, May 2023.