Kensuke Nakamura

I am a Ph.D. student at the Carnegie Mellon University Robotics Institute where I am advised by Prof. Andrea Bajcsy. My research leverages the synergy between optimal control and generative models to allow robots to safely operate in unstructured and uncertain environments. I develop theory and algorithms grounded in systems such as autonomous vehicles that use learned trajectory forecasters during planning or robotic manipulators that use world models to understand nuanced safety constraints. I was named a 2025 HRI Pioneer and I am fortunate to be supported by the NSF Graduate Research Fellowship.

Previously, I graduated from Princeton University where I was advised by Jaime Fernández Fisac and Naomi Ehrich Leonard. I’ve also had the pleasure of collaborating with Somil Bansal.

E-mail / Google Scholar / Github / Twitter

news

Oct 01, 2025 Two new papers extending latent safety filters! In AnySafe, we explore how to parameterize latent safety filters so that constraints can be flexibly specified at deployment time. In our second work, we study temperature-based constraints to investigate how partial observability can degrade the ability of latent safety filters to prevent safety violations. We quantify partial observability in the latent space of self-supervised world models using mutual information and provide a multimodal training strategy for effective filtering even under partial observability!
May 01, 2025 Our new paper strengthens latent safety filters by accounting for out-of-distribution failures: regions where the world model is highly uncertain are treated as safety violations. This work was led by Junwon Seo and the project website is here.
Apr 23, 2025 Our paper on generalizing safety analysis for constraints beyond collision-avoidance was just accepted to RSS 2025!
Apr 07, 2025 I gave a talk to the SAIDS lab at USC! Later this month I’ll be giving a talk to the HERALD lab at TU Delft!
Feb 03, 2025 New paper on generalizing Hamilton-Jacobi reachability for constraints beyond collision-avoidance by leveraging the representations learned by world models. Check out our project website here.

selected publications

  1. What You Don’t Know Can Hurt You: How Well do Latent Safety Filters Understand Partially Observable Safety Constraints?
    Matthew Kim, Kensuke Nakamura, and Andrea Bajcsy
    arXiv preprint arXiv:2510.06492, 2025
  2. AnySafe: Adapting Latent Safety Filters at Runtime via Safety Constraint Parameterization in the Latent Space
    Sankalp Agrawal, Junwon Seo, Kensuke Nakamura, and 2 more authors
    2025
  3. Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures
    Junwon Seo, Kensuke Nakamura, and Andrea Bajcsy
    2025
  4. Generalizing Safety Beyond Collision-Avoidance via Latent-Space Reachability Analysis
    Kensuke Nakamura, Lasse Peters, and Andrea Bajcsy
    In Robotics: Science and Systems, 2025
  5. Not All Errors Are Made Equal: A Regret Metric for Detecting System-level Trajectory Prediction Failures
    Kensuke Nakamura, Ran Tian, and Andrea Bajcsy
    In 8th Annual Conference on Robot Learning, 2024
  6. Deception Game: Closing the Safety-Learning Loop in Interactive Robot Autonomy
    Haimin Hu, Zixu Zhang, Kensuke Nakamura, and 2 more authors
    In 7th Annual Conference on Robot Learning, 2023
  7. Emergent Coordination through Game-Induced Nonlinear Opinion Dynamics
    Haimin Hu, Kensuke Nakamura, Kai-Chieh Hsu, and 2 more authors
    In 2023 62nd IEEE Conference on Decision and Control (CDC), 2023
  8. Online Update of Safety Assurances Using Confidence-Based Predictions
    Kensuke Nakamura and Somil Bansal
    In 2023 International Conference on Robotics and Automation (ICRA), 2023