Cherry-Picking with Reinforcement Learning
In submission for Robotics: Science and Systems (RSS), 2023
Yunchu Zhang*, Liyiming Ke*, Abhay Deshpande, Abhishek Gupta, Siddhartha Srinivasa
- Proposed a system, CherryBot, for training deep RL agents for dynamic fine manipulation without rigid surface support.
- Demonstrated over multiple random seeds that, within 30 minutes of real-world interaction, our system achieves a 100% success rate on an exacting proxy task.
- Generalized to diverse evaluation scenarios, including dynamic disturbances, randomized reset conditions, varied perception noise, and varied object shapes (a schematic training loop is sketched below).
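Real-robot RL systems of this kind typically run a sample-efficient off-policy loop that interleaves robot interaction with many gradient updates. Below is a minimal sketch of such a loop; `RobotEnv` and `OffPolicyAgent` are hypothetical placeholders, not CherryBot's actual interfaces or hyperparameters.

```python
import random
from collections import deque

class RobotEnv:
    """Hypothetical stand-in for the real-robot environment (gym-style API)."""
    def reset(self):
        return [0.0] * 8                              # proprioception + perception features
    def step(self, action):
        obs = [random.gauss(0.0, 0.1) for _ in range(8)]
        reward = -sum(a * a for a in action)          # placeholder shaped reward
        done = random.random() < 0.02
        return obs, reward, done

class OffPolicyAgent:
    """Hypothetical SAC-style agent; act/update signatures are assumptions."""
    def act(self, obs):
        return [random.uniform(-1.0, 1.0) for _ in range(3)]
    def update(self, batch):
        pass                                          # one gradient step on critic/actor

env, agent = RobotEnv(), OffPolicyAgent()
replay = deque(maxlen=100_000)
obs = env.reset()
for step in range(10_000):
    action = agent.act(obs)
    next_obs, reward, done = env.step(action)
    replay.append((obs, action, reward, next_obs, done))
    if len(replay) >= 256:
        agent.update(random.sample(replay, 256))      # many updates per robot step aid sample efficiency
    obs = env.reset() if done else next_obs
```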
Planning with Spatial-Temporal Abstraction from Point Clouds for Deformable Object Manipulation
Conference on Robot Learning (CoRL), 2022
Xingyu Lin*, Carl Qi*, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held
- Proposed a framework that PlAns with Spatial and Temporal Abstraction (PASTA) by learning a set of skill abstraction modules over a 3D set representation.
- Composed learned skills to solve complex tasks with more entities and longer horizons than those seen during training.
- Demonstrated a real-world manipulation system that uses PASTA to plan with multiple tool-use skills to solve challenging deformable object manipulation tasks (a conceptual planning sketch follows this list).
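As a rough illustration of planning over a set-based scene abstraction, the toy sketch below searches over hand-written "skills" that transform a set of entity centroids toward a goal set. The real framework learns these skill abstraction modules from point clouds, so the skills, costs, and search here are simplified stand-ins.

```python
import itertools
import numpy as np

def cost(entities, goal):
    """Match each entity centroid to its nearest goal centroid and sum the distances."""
    d = np.linalg.norm(entities[:, None, :] - goal[None, :, :], axis=-1)
    return d.min(axis=1).sum()

# Hand-written stand-ins for learned skill abstractions: each maps a set of
# entity states (here 2-D centroids) to a new set of entity states.
def split(entities):
    """Cut the first piece into two pieces, e.g. cutting dough."""
    e = entities[0]
    return np.vstack([entities[1:], e + [0.05, 0.0], e - [0.05, 0.0]])

def push_together(entities):
    """Merge the two closest pieces into one."""
    if len(entities) < 2:
        return entities
    i, j = min(itertools.combinations(range(len(entities)), 2),
               key=lambda ij: np.linalg.norm(entities[ij[0]] - entities[ij[1]]))
    merged = (entities[i] + entities[j]) / 2
    return np.vstack([np.delete(entities, [i, j], axis=0), merged])

SKILLS = {"split": split, "push_together": push_together}

def plan(entities, goal, horizon=3):
    """Greedy forward search over skill sequences using the skill models."""
    sequence = []
    for _ in range(horizon):
        name, nxt = min(((n, f(entities)) for n, f in SKILLS.items()),
                        key=lambda cand: cost(cand[1], goal))
        if cost(nxt, goal) >= cost(entities, goal):
            break
        entities, sequence = nxt, sequence + [name]
    return sequence

start = np.array([[0.0, 0.0]])
goal = np.array([[0.05, 0.0], [-0.05, 0.0]])
print(plan(start, goal))   # ['split']
```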
Spatial Reasoning as Object Graph Energy Minimization
In Submission for ICLR, 2023
Nikolaos Gkanatsios*, Ayush Jain*, Zhou Xian, Yunchu Zhang, Katerina Fragkiadaki
- Proposed a framework, IMAGGINE, for robot instruction following that maps language instructions to goal scene configurations: the relevant object and part entities and their locations in the scene.
- Designed a semantic parser that maps language commands to compositions of language-conditioned energy-based models, which generate the goal scene in a modular and compositional way.
- Modulated the pick and place locations of a robotic gripper with the predicted entity locations and a transporter network that rearranges the objects in the scene (a toy energy-minimization sketch follows this list).
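The energy-minimization idea can be illustrated with a toy example: compose per-relation energies over object locations and minimize their sum by gradient descent. The relation energies below are hand-written stand-ins for the learned, language-conditioned EBMs, and the object names are made up.

```python
import torch

def left_of(a, b, margin=0.1):
    """Energy is low when object a is at least `margin` to the left of object b."""
    return torch.relu(a[0] - b[0] + margin) ** 2

def near(a, b, dist=0.2):
    """Energy is low when a and b are roughly `dist` apart."""
    return (torch.norm(a - b) - dist) ** 2

# Initial 2-D locations of three objects (e.g. from a detector), optimized in place.
objs = {name: torch.randn(2, requires_grad=True) for name in ["mug", "plate", "fork"]}

# "Put the mug to the left of the plate, with the fork near the plate."
energies = [lambda: left_of(objs["mug"], objs["plate"]),
            lambda: near(objs["fork"], objs["plate"])]

opt = torch.optim.Adam(objs.values(), lr=0.05)
for _ in range(200):                      # descend the composed energy
    opt.zero_grad()
    total = sum(e() for e in energies)
    total.backward()
    opt.step()

print({k: v.detach().numpy().round(2) for k, v in objs.items()})
```

The final locations then serve as pick-and-place targets for the gripper.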
Visually-Grounded Library of Behaviours for Manipulating Diverse Objects across Diverse Configurations and Views
Conference on Robot Learning (CoRL), 2021
Jingyun Yang*, Hsiao-Yu Fish Tung*, Yunchu Zhang*, Gaurav Pathak, Ashwini Pokle, Chris Atkeson, Katerina Fragkiadaki
- Built a behavior selector that conditions on invariant object properties to choose the behaviors that can successfully perform the desired task on the object at hand (see the selector sketch after this list).
- Collected a library of behaviors, each of which conditions on variable object properties to predict manipulation actions over time.
- Extracted semantically rich, affordance-aware, view-invariant 3D object feature representations with self-supervised, geometry-aware 2D-to-3D neural networks.
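A schematic sketch of the selector idea: score each behavior by how similar the current object's view-invariant feature is to the features of objects the behavior was trained on, then pick the best. The feature dimensions and behavior names below are invented placeholders, not the paper's actual library.

```python
import numpy as np

# Hypothetical library: each behavior is associated with feature embeddings of
# the objects it was trained on (e.g. view-invariant 3D features).
library = {
    "top_down_grasp": np.random.randn(20, 64),
    "side_grasp":     np.random.randn(20, 64),
    "push":           np.random.randn(20, 64),
}

def select_behavior(obj_feature, library, k=5):
    """Score each behavior by the mean cosine similarity of the object's feature
    to its k most similar training examples, and return the best behavior."""
    scores = {}
    for name, feats in library.items():
        sims = feats @ obj_feature / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(obj_feature) + 1e-8)
        scores[name] = np.sort(sims)[-k:].mean()
    return max(scores, key=scores.get)

new_object = np.random.randn(64)          # feature of the object in hand
print(select_behavior(new_object, library))
```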
Oct 2018 — Mar 2019
Intelligent Aerial Manipulator HCI System: Using an Arm-Drone to Collaborate with Humans
Advisor: Xiang Anthony Chen, Assistant Professor, Department of Electrical and Computer Engineering, University of California, Los Angeles.
- Built an aerial manipulator system by mounting a robot arm on a drone, and stabilized it with impedance control and motion planning algorithms in ROS (a minimal impedance-control sketch follows this list).
- Trained an offline grasping network with deep neural networks and an online grasping network with reinforcement learning.
- Combined the online and offline learners into a model that updates itself autonomously.
- Fused information from human demonstrations with the prior knowledge of the online-offline model.
- Updated the whole model through human demonstrations when interacting on new tasks.
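The stabilization relies on an impedance law of the form force = K*(x_desired - x) + D*(v_desired - v), i.e. a virtual spring-damper around the target pose. Below is a minimal 1-D sketch against a point-mass end-effector model; the gains and mass are illustrative, not the system's actual parameters.

```python
# Virtual spring-damper around the target pose (impedance control, 1-D sketch).
K, D = 200.0, 30.0        # stiffness and damping gains (illustrative)
m, dt = 2.0, 0.001        # point-mass end-effector model and control period

x, v = 0.0, 0.0           # current position (m) and velocity (m/s)
x_des, v_des = 0.1, 0.0   # desired position and velocity

for _ in range(2000):     # 2 s of simulated control
    force = K * (x_des - x) + D * (v_des - v)   # impedance control law
    a = force / m                               # point-mass dynamics
    v += a * dt
    x += v * dt

print(f"final position: {x:.4f} m (target {x_des} m)")
```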
Apr 2018 — June 2018 Control for Robotic Systems: Solving a Rubik's Cube with a Robot Arm and Motor
Advisor: Veronica Santos, Associate Professor, Department of Mechanical Engineering, Director of the Biomechatronics Lab at UCLA
- Detected the randomly shuffled Rubik's Cube with a web camera and sent the solution as motion commands to the robot arm.
- Used inverse kinematics for trajectory and position planning of the robot arm.
- Used PID position control to rotate the cube faces and real-time gripper force control to grasp the cube (a PID sketch follows this list).
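A small sketch of a PID position loop of the kind used to rotate a cube face, simulated here against a simple first-order motor model; the gains and motor time constant are illustrative assumptions, not the project's tuned values.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Rotate one cube face by 90 degrees with a simple first-order motor model.
dt, angle, velocity = 0.01, 0.0, 0.0
pid = PID(kp=8.0, ki=0.5, kd=1.0, dt=dt)
for _ in range(300):                              # 3 s of control
    command = pid.update(target=90.0, measured=angle)
    velocity += (command - velocity) * dt / 0.1   # motor time constant of 0.1 s
    angle += velocity * dt
print(f"face angle: {angle:.1f} deg")
```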
Jan 2018 — Mar 2018 A Reinforcement Learning Approach for Locomotion
Team Members: Wandi Cui, Zeyu Zhang, Yunchu Zhang, Ziqi Yang
- This was the term project for UCLA CS275 (Winter 2018). We explored reinforcement learning approaches to locomotion, implementing Evolution Strategies and the A3C algorithm on the BipedalWalker-v2 environment from OpenAI Gym; both achieved good accumulated rewards (an Evolution Strategies sketch follows this list).
- Comparing the two, we found A3C more stable and better suited to this problem, and then ran further experiments on the harder BipedalWalkerHardcore-v2 environment, which has randomly generated terrain obstacles, where performance was relatively modest; we also analyzed the underlying reasons for these results.
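The Evolution Strategies side can be sketched compactly with a linear policy. The code below assumes the Gymnasium fork of the environment (BipedalWalker-v3, installed via gymnasium[box2d]), whereas the project itself used OpenAI Gym's BipedalWalker-v2; population size, noise scale, and learning rate are illustrative.

```python
import gymnasium as gym     # pip install "gymnasium[box2d]"
import numpy as np

env = gym.make("BipedalWalker-v3")
obs_dim = env.observation_space.shape[0]   # 24
act_dim = env.action_space.shape[0]        # 4

def rollout(theta, max_steps=500):
    """Total reward of a linear policy a = tanh(W @ obs)."""
    W = theta.reshape(act_dim, obs_dim)
    obs, _ = env.reset(seed=0)
    total = 0.0
    for _ in range(max_steps):
        action = np.tanh(W @ obs).astype(np.float32)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        if terminated or truncated:
            break
    return total

# Simple evolution strategy on the flattened policy parameters.
theta = np.zeros(act_dim * obs_dim)
sigma, lr, pop = 0.1, 0.02, 20
for gen in range(30):
    noise = np.random.randn(pop, theta.size)
    rewards = np.array([rollout(theta + sigma * n) for n in noise])
    advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += lr / (pop * sigma) * noise.T @ advantage   # ES gradient estimate
    print(f"generation {gen}: mean reward {rewards.mean():.1f}")
```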
Image-Based Object Detection System for Self-Driving Car Applications
Advisor: Shi Ruan, Deep Learning and Computer Vision Researcher, Facebook
- Implemented an object detection and tracking system for self-driving cars with deep learning (MXNet).
- Built a YOLO-style detection network with a ResNet backbone for feature extraction and a newly designed detection head and loss function; trained the network on GPUs and tuned hyperparameters until it converged (a schematic network layout follows this list).
- Optimized the feedforward inference path to run detection and tracking in real time on a Raspberry Pi with its Pi Camera.
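A schematic PyTorch sketch of the network layout: ResNet features feeding a convolutional detection head that predicts boxes, objectness, and class scores per grid cell. The original was built in MXNet, and the anchor count, class count, and head shape here are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision

NUM_CLASSES, NUM_ANCHORS = 20, 3   # illustrative values

class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        # Keep everything up to the last conv stage as the feature extractor.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, H/32, W/32)
        # YOLO-style head: per cell, NUM_ANCHORS * (4 box coords + 1 objectness + classes).
        self.head = nn.Conv2d(512, NUM_ANCHORS * (5 + NUM_CLASSES), kernel_size=1)

    def forward(self, x):
        features = self.backbone(x)
        out = self.head(features)
        b, _, h, w = out.shape
        return out.view(b, NUM_ANCHORS, 5 + NUM_CLASSES, h, w)

model = Detector().eval()
with torch.no_grad():
    preds = model(torch.randn(1, 3, 416, 416))
print(preds.shape)   # torch.Size([1, 3, 25, 13, 13])
```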
- Electronic Design Competition (research on the wind pendulum), DUT, Nationwide Second Prize, July 2015 — Aug 2015
- Researched the wind pendulum problem, built a model, and performed spatial 3-D analysis.
- Built a model combining nonlinear dispersion, composition and resolution of motion, and automatic control principles.
- Collected the wind pendulum's attitude, processed the data on a single-chip microcontroller (SCM), and regulated the wind force with closed-loop PID position control.
- Made the wind pendulum swing up, follow set trajectories, and hold still under the control of the DC blowers.
- Freescale Smart Car Competition, Regional Second Prize, Sep 2014 — June 2015
- Wrote track-recognition algorithms for the different race stages and controlled speed with a PID algorithm.
- Responsible for hardware assembly, PCB design, soldering, and debugging. Equipped the smart car with sensors such as an accelerometer to avoid obstacles and pass over the ramp.