I'm a 4th-year PhD student at the Berkeley AI Research (BAIR) Lab, co-advised by Angjoo Kanazawa and Ken Goldberg. My work focuses on 3D multi-modal reconstruction and how it can enable robotic manipulation.
I'm also a maintainer of Nerfstudio, a large open-source, open-license repo for 3D neural reconstruction. My work is supported by the NSF GRFP.
Previously, I completed my bachelor's at CMU, where I worked with Howie Choset on multi-robot path planning, and spent time at Berkshire Grey and NASA's Jet Propulsion Laboratory.
GARField: Group Anything with Radiance Fields
Chung Min Kim*, Mingxuan Wu*, Justin Kerr*, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa
*Equal contribution
CVPR 2024
arXiv / Website
Hierarchical grouping in 3D by training a scale-conditioned affinity field from multi-level masks
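A loose sketch of the grouping idea, assuming a hypothetical scale-conditioned feature function affinity_features (the names, the random placeholder embeddings, and the DBSCAN clustering step are illustrative, not the actual GARField pipeline): points whose embeddings are close at a chosen scale fall into the same group, and sweeping the scale yields a hierarchy.

import numpy as np
from sklearn.cluster import DBSCAN

def affinity_features(points, scale):
    """Hypothetical stand-in for a trained scale-conditioned affinity field:
    returns one embedding per 3D point (random placeholders here)."""
    rng = np.random.default_rng(int(scale * 1000))
    return rng.normal(size=(len(points), 8))

def group_at_scale(points, scale, eps=0.5):
    """Cluster points whose embeddings are mutually close at the given scale.
    In the real method, small scales yield fine groups and large scales coarse ones."""
    feats = affinity_features(points, scale)
    return DBSCAN(eps=eps, min_samples=5).fit_predict(feats)

points = np.random.rand(1000, 3)                # e.g. surface samples from a NeRF
fine_labels = group_at_scale(points, scale=0.05)
coarse_labels = group_at_scale(points, scale=0.5)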
LERF: Language Embedded Radiance Fields
Justin Kerr*, Chung Min Kim*, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik
*Equal contribution
ICCV 2023 Oral
arXiv / Website
Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D
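A rough sketch of the query side under placeholder data: cosine similarity between a text embedding and per-sample CLIP embeddings gives a 3D relevancy score. The array names and random embeddings below are illustrative, not the LERF codebase API.

import numpy as np

def cosine_relevancy(point_clip_embeds, text_embed):
    """Cosine similarity between each 3D sample's CLIP embedding and a text query."""
    p = point_clip_embeds / np.linalg.norm(point_clip_embeds, axis=-1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    return p @ t

# Placeholder data: in LERF these come from the trained field and a CLIP text encoder.
point_clip_embeds = np.random.randn(10000, 512)   # one embedding per 3D sample
text_embed = np.random.randn(512)                 # e.g. CLIP("coffee mug")
relevancy = cosine_relevancy(point_clip_embeds, text_embed)
top_points = np.argsort(relevancy)[-100:]         # most relevant samples for the query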
Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features
Justin Kerr*, Huang Huang*, Albert Wilcox, Ryan Hoque, Jeffrey Ichnowski, Roberto Calandra, Ken Goldberg
*Equal contribution
RSS 2023
arXiv / Website
We collect spatially paired vision and tactile inputs with a custom rig to train cross-modal representations. We then show these representations can be used for multiple active and passive perception tasks without fine-tuning.
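A condensed sketch of a CLIP-style cross-modal contrastive (InfoNCE) objective of the kind such pretraining uses; the batch layout, temperature, and embedding sizes are illustrative rather than the paper's exact setup.

import torch
import torch.nn.functional as F

def cross_modal_infonce(vision_emb, tactile_emb, temperature=0.07):
    """Contrastive loss pulling together spatially paired vision/tactile embeddings
    and pushing apart mismatched pairs within the batch."""
    v = F.normalize(vision_emb, dim=-1)
    t = F.normalize(tactile_emb, dim=-1)
    logits = v @ t.T / temperature              # (B, B) similarity matrix
    targets = torch.arange(len(v))              # i-th vision pairs with i-th tactile
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy batch of paired embeddings (in practice these come from two encoders).
loss = cross_modal_infonce(torch.randn(32, 128), torch.randn(32, 128))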
Evo-NeRF: Evolving NeRF for Sequential Robot Grasping
Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, Ken Goldberg
CoRL 2022, Oral Presentation
Website / OpenReview
NeRF functions as a real-time, updateable scene reconstruction for rapidly grasping table-top transparent objects.
Geometry regularization speeds up reconstruction and improves scene geometry, and a NeRF-adapted grasping network learns to ignore floaters.
Fluorescent paint enables inexpensive (<$300) and self-supervised data collection of dense image annotations without altering objects' appearance.
Autonomously Untangling Long Cables
Vainavi Viswanath*, Kaushik Shivakumar*, Justin Kerr*, Brijen Thananjeyan, Ellen Novoseller, Jeffrey Ichnowski, Alejandro Escontrela, Michael Laskey, Joseph E. Gonzalez, Ken Goldberg
*Equal contribution
RSS 2022, Best Systems Paper Award
Website / Paper
A dual-mode sliding/pinching gripper, manipulation primitives that simplify perception, and learned perception modules together enable autonomous untangling of long charging cables.
Combining active perception with behavior cloning can reliably hand a surgical needle back and forth between grippers.
Personal Projects
Miniature self-balancing robot, built from scratch using off-the-shelf parts, able to follow waypoints using model-predictive control for balance and pure pursuit for path following (see the pure-pursuit sketch below).
Indoor mapping robot, built from scratch around a cheap planar lidar, which autonomously mapped my house using a Google Cartographer-inspired SLAM algorithm, alongside a grid path planner for exploration and a DWA controller for path following (see the DWA sketch below).
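A minimal sketch of the pure-pursuit steering step used for waypoint following on the self-balancing robot (geometry only; the balance MPC is omitted, and the lookahead and speed values are illustrative):

import numpy as np

def pure_pursuit_omega(pose, path, lookahead=0.3, v=0.5):
    """Angular velocity toward the first path point at least `lookahead`
    meters away (pure pursuit). pose = (x, y, theta)."""
    x, y, theta = pose
    dists = np.hypot(path[:, 0] - x, path[:, 1] - y)
    ahead = np.nonzero(dists >= lookahead)[0]
    idx = ahead[0] if len(ahead) else len(path) - 1   # fall back to the final point
    dx, dy = path[idx, 0] - x, path[idx, 1] - y
    y_r = -np.sin(theta) * dx + np.cos(theta) * dy    # lateral offset in robot frame
    curvature = 2.0 * y_r / max(dists[idx], 1e-6) ** 2
    return v * curvature                              # omega = v * curvature

path = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.3], [1.5, 0.6]])
omega = pure_pursuit_omega((0.0, 0.0, 0.0), path)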
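A toy version of a DWA step like the one used on the mapping robot: sample (v, w) commands, roll each forward with a unicycle model, and score rollouts by goal progress and obstacle clearance. The sampling ranges, collision radius, and cost weights are illustrative, a simplified sketch rather than the exact controller.

import numpy as np

def dwa_step(pose, goal, obstacles, dt=0.1, horizon=10):
    """Pick the (v, w) command whose short rollout best trades off
    progress toward the goal against clearance from obstacles."""
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(0.0, 0.5, 6):            # sampled linear velocities
        for w in np.linspace(-1.5, 1.5, 11):      # sampled angular velocities
            x, y, th = pose
            min_clear = np.inf
            for _ in range(horizon):              # forward-simulate a unicycle model
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                if len(obstacles):
                    min_clear = min(min_clear, np.min(np.hypot(obstacles[:, 0] - x,
                                                               obstacles[:, 1] - y)))
            if min_clear < 0.2:                   # reject rollouts that get too close
                continue
            score = -np.hypot(goal[0] - x, goal[1] - y) + 0.1 * min(min_clear, 1.0)
            if score > best_score:
                best, best_score = (v, w), score
    return best

obstacles = np.array([[1.0, 0.2]])
v_cmd, w_cmd = dwa_step(pose=(0.0, 0.0, 0.0), goal=(2.0, 0.0), obstacles=obstacles)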