I'm a 4th-year PhD student at Berkeley AI Research (BAIR), co-advised by Angjoo Kanazawa and Ken Goldberg. My work primarily involves 3D multi-modal reconstruction and how it can enable robotic manipulation. Lately I've been interested in active vision: how can we get robots to look around like we do to accomplish tasks? I'm also a maintainer of Nerfstudio, a large open-source, open-license repo for 3D neural reconstruction. My work is supported by the NSF GRFP.
Previously, I completed my bachelor's at CMU, where I worked with Howie Choset on multi-robot path planning, and spent time at Berkshire Grey and NASA's JPL.
Email  /  Twitter  /  Github  /  Blog
Fluorescent paint enables inexpensive (<$300), self-supervised collection of dense image annotations without altering objects' appearance.
A dual-mode sliding-and-pinching gripper enables untangling charging cables by pairing manipulation primitives that simplify perception with learned perception modules.
Reconstructing scenes with NeRF from offline-calibrated camera poses can produce graspable geometry, even for reflective and transparent objects.
Combining active perception with behavior cloning enables reliably handing a surgical needle back and forth between grippers.