Hi, I'm Chan Hee (Luke) Song.
I am a CS PhD student at The Ohio State University, advised by Yu Su.
My research focuses on multimodal agents, particularly on planning, perception, and benchmarking.
During my undergraduate studies at Notre Dame, I was part of the ND NLP group.
I have interned at Nvidia Research and Adobe Research.
March 2025
Interning at Google Cloud AI Research this summer working on multimodal agents. Catch me (again) in Seattle!
Feb 2025
RoboSpatial has been accepted to CVPR 2025 with a perfect 5,5,5 score!
Feb 2025
VisualAgentBench has been accepted to ICLR 2025.
Nov 2024
Excited to present RoboSpatial, work from my internship at Nvidia: a large-scale 2D/3D spatial understanding dataset and benchmark tailored for robotics. Stay tuned for the full release!
Jun 2024
BioCLIP won the best student paper award at CVPR 2024! Honored to be part of the team.
Feb 2024
I will be interning with the Nvidia Learning and Perception Research Group this summer. Catch me in Seattle!
Jul 2023
LLM-Planner, a paper on using large language models for vision-and-language navigation, has been accepted to ICCV 2023.
Mar 2023
Our SalsaBot work for the Amazon Alexa Prize Challenge has been accepted to the Embodied AI Workshop at CVPR 2023!
Mar 2023
I will be interning at Adobe Research this summer. Catch me in San Jose!
See the full list in Publications.
RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics
CVPR 2025
VisualAgentBench: Towards Large Multimodal Models as Visual Foundation Agents
ICLR 2025
BioCLIP: A Vision Foundation Model for the Tree of Life
CVPR 2024 (Best Student Paper Award)
Dual-View Visual Contextualization for Web Navigation
CVPR 2024
LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
ICCV 2023 (Top 15 Most Cited Paper)
One Step at a Time: Long-Horizon Vision-and-Language Navigation with Milestones
CVPR 2022
Feel free to contact me if you are interested in my research or want to discuss anything :)