Justin Kerr

I am a second-year PhD student at UC Berkeley, co-advised by Ken Goldberg and Angjoo Kanazawa, working primarily on NeRF for robot manipulation, 3D scene understanding, and visuo-tactile representation learning. Recently, I have been interested in leveraging NeRF for language grounding and how it could change the way we interact with 3D. My work is supported by the NSF GRFP.

Previously, I completed my bachelor's at CMU, where I worked with Howie Choset on multi-agent path planning, and spent time at Berkshire Grey and NASA's JPL.

Email  /  Twitter  /  Github

Papers
GARField: Group Anything with Radiance Fields
Chung Min Kim*, Mingxuan Wu*, Justin Kerr*, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa
*Equal contribution
arXiv 2024
arXiv / Website

Hierarchical grouping in 3D by training a scale-conditioned affinity field from multi-level masks.
LERF-TOGO: Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping
Adam Rashid*, Satvik Sharma*, Chung Min Kim, Justin Kerr, Lawrence Yunliang Chen, Angjoo Kanazawa, Ken Goldberg
*Equal contribution
CoRL 2023 Oral, Best Paper Finalist
arXiv / Website

LERF's multi-scale semantics enables zero-shot language-conditioned part grasping for a wide variety of objects.
LERF: Language Embedded Radiance Fields
Justin Kerr*, Chung Min Kim*, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik
*Equal contribution
ICCV 2023 Oral
arXiv / Website

Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D.
Self-Supervised Visuo-Tactile Pretraining to Locate and Follow Garment Features
Justin Kerr*, Huang Huang*, Albert Wilcox, Ryan Hoque, Jeffrey Ichnowski, Roberto Calandra, Ken Goldberg
*Equal contribution
RSS 2023
arXiv / Website

We collect spatially paired vision and tactile inputs with a custom rig to train cross-modal representations. We then show these representations can be used for multiple active and passive perception tasks without fine-tuning.
Evo-NeRF: Evolving NeRF for Sequential Robot Grasping
Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, Ken Goldberg
CoRL 2022 Oral
Website / OpenReview

NeRF functions as a real-time, updateable scene reconstruction for rapidly grasping tabletop transparent objects. Geometry regularization accelerates reconstruction and improves scene geometry, and a NeRF-adapted grasping network learns to ignore floaters.
All You Need is LUV: Unsupervised Collection of Labeled Images using Invisible UV Fluorescent Indicators
Brijen Thananjeyan*, Justin Kerr*, Huang Huang, Joseph E. Gonzalez, Ken Goldberg
*Equal contribution
IROS 2022
Website / arXiv

Fluorescent paint enables inexpensive (<$300), self-supervised collection of dense image annotations without altering objects' appearance.

Autonomously Untangling Long Cables
Vainavi Viswanath*, Kaushik Shivakumar*, Justin Kerr*, Brijen Thananjeyan, Ellen Novoseller, Jeffrey Ichnowski, Alejandro Escontrela, Michael Laskey, Joseph E. Gonzalez, Ken Goldberg
*Equal contribution
RSS 2022, Best Systems Paper Award
Website / Paper

A dual-mode sliding-pinching gripper, manipulation primitives designed to simplify perception, and learned perception modules together enable untangling long charging cables.

Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects
Jeffrey Ichnowski*, Yahav Avigal*, Justin Kerr, Ken Goldberg
*Equal contribution
CoRL 2021
Website / arXiv

Reconstructing scenes with NeRF using offline-calibrated camera poses can produce graspable geometry even on reflective and transparent objects.

Learning to Localize, Grasp, and Hand Over Unmodified Surgical Needles
Albert Wilcox*, Justin Kerr*, Brijen Thananjeyan, Jeffrey Ichnowski, Minho Hwang, Samuel Paradis, Danyal Fer, Ken Goldberg
*Equal contribution
ICRA 2022
Website / arXiv

Combining active perception with behavior cloning enables reliably handing a surgical needle back and forth between grippers.

Personal Projects
Segway-style miniature self-balancing robot, built from scratch using off-the-shelf parts, able to follow waypoints using model-predictive control for balance and pure pursuit for path following.
Indoor mapping robot, built from scratch with a low-cost planar lidar, which autonomously mapped my house using a Google Cartographer-inspired SLAM algorithm, alongside a grid-based path planner for exploration and a DWA controller for path following.

Source code taken from Jon Barron's site.