Hao Jiang

I am a fourth-year undergraduate student at the University of Southern California, where I work with Prof. Daniel Seita. I'm double majoring in Computer Science (CS) and Applied and Computational Mathematics (AMCM). I'm broadly interested in robot learning and manipulation.

Email  /  Google Scholar  /  GitHub  /  LinkedIn

profile photo

Research

My research focuses on developing robot policy and skill learning techniques, particularly for complex tasks such as dexterous manipulation. I am fascinated by how robots can be trained to better perceive and interact with their surroundings by integrating multimodal perception that combines visual, tactile, and auditory inputs. A key focus is designing observation spaces that both improve policy learning and deepen a robot's understanding of its environment. I explore how deep reinforcement learning, imitation learning, and computer vision can be combined to push the boundaries of what robots can achieve, with the aim of creating machines capable of more intelligent, human-like interaction.

(* indicates equal contribution, † indicates equal advising)

Learning to Singulate Objects in Packed Environments using a Dexterous Hand
Hao Jiang, Yuhai Wang*, Hanyang Zhou*, Daniel Seita
International Symposium on Robotics Research (ISRR), 2024
project page / video / arXiv / code / blog

This paper presents Singulating Objects in Packed Environments (SOPE), a framework that uses a novel displacement-based state representation and a multi-phase reinforcement learning approach to singulate target objects in cluttered scenes with a 16-DOF Allegro Hand. The method achieves high success rates in both simulation and real-world experiments, outperforming alternative approaches.


Inspired by the template here.