About Me

Hi, I'm Will, and I'm a senior at Cornell University studying Computer Science. My research interests lie primarily in computer vision, representation learning, and robotics. I'm currently working with the PoRTaL lab on learning whole-body humanoid control from video demonstrations. In the past, I've done research at the ETH Zurich Robotic Systems Lab and the Greenberg Lab, as well as engineering internships at NASA and Amazon. At Cornell, I've maintained a 4.1 GPA, I'm a teaching assistant for CS 1620: Visual Imaging in the Electronic Age, and I'm a Rawlings Presidential Research Scholar. Outside of school, I enjoy climbing, running marathons, and playing footvolley. Here are a few of my recent projects :)

Cornell PoRTaL Lab

Robots of the future should be able to quickly learn new tasks in an interactive manner. Reinforcement learning (RL) is a common approach for teaching robots difficult control tasks, but it requires tedious hand-design of reward functions. At PoRTaL, I'm investigating how reward functions can be inferred from video demonstrations. Specifically, I have found that existing sequence-based distance functions fail to respect the order of subgoals in the video demonstration, resulting in reward hacking. I am researching a more principled, temporally consistent approach to sequence matching for RL. In the future, this could help robots learn a diverse set of tasks from the multitude of human videos available on the internet.
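To make the ordering failure concrete, here is a minimal sketch (not the lab's actual method) contrasting an unordered nearest-neighbor matching reward, which an agent can hack by visiting subgoals out of sequence, with a DTW-style monotone alignment that only credits demo frames matched in order. The frame embeddings and both reward functions are illustrative assumptions:

```python
import numpy as np

def unordered_reward(agent_emb, demo_emb):
    """Credit each agent frame with its nearest demo frame, ignoring
    temporal order. An agent can hack this by hitting subgoals out
    of sequence."""
    # pairwise distances between agent frames (T, D) and demo frames (M, D)
    dists = np.linalg.norm(agent_emb[:, None] - demo_emb[None, :], axis=-1)
    return -dists.min(axis=1).mean()

def monotone_reward(agent_emb, demo_emb):
    """DTW-style alignment: matched demo frames must appear in the same
    order as the demonstration, so out-of-order subgoals score poorly."""
    dists = np.linalg.norm(agent_emb[:, None] - demo_emb[None, :], axis=-1)
    T, M = dists.shape
    dp = np.full((T + 1, M + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, M + 1):
            dp[i, j] = dists[i - 1, j - 1] + min(
                dp[i - 1, j],      # advance agent frame
                dp[i, j - 1],      # advance demo frame
                dp[i - 1, j - 1],  # advance both
            )
    return -dp[T, M] / T
```

Reversing an agent trajectory leaves the unordered reward unchanged, since it is invariant to frame permutations, while the monotone alignment cost grows sharply; that gap is exactly the reward-hacking loophole being closed.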

Publications and Presentations

Huaxiaoyue Wang*, William Huey*, Anne Wu, Yoav Artzi, and Sanjiban Choudhury. "Time Your Rewards: Learning Temporally Consistent Rewards from a Single Video Demonstration". In CoRL 2024 Workshop on Whole-body Control and Bimanual Manipulation, 2024. Available on OpenReview.

* Indicates equal contribution. 

ETH Zurich Robotic Systems Lab

Modern mobile robots need to navigate complex environments with countless types of obstacles and terrain. Currently, most traversability prediction methods rely on heuristics, human demonstrations, or pretraining on a specific set of object classes. At the Robotic Systems Lab, I investigated applications of large pretrained vision-language models to traversability prediction.

This project involved implementing existing traversability analysis methods in simulation, developing a novel approach to traversability analysis using visual semantics, building a ROS package to run the method interactively on an ANYmal D robot dog, and designing ablation experiments to test the method. We demonstrated long-horizon, unguided exploration using zero-shot traversability prediction. Additionally, we investigated how traversability knowledge can act as a prior for imitation learning.
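As a rough illustration of the zero-shot idea (the actual pipeline, prompts, and model are project-specific and not shown here), one can score image patches against natural-language descriptions of traversable and untraversable terrain with an off-the-shelf CLIP model; the prompt set below is a hypothetical placeholder:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Illustrative prompts; the real prompt set is a design choice of its own.
PROMPTS = ["flat ground a robot can walk on", "an obstacle blocking a robot"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def patch_traversability(patches):
    """Score a list of PIL image patches: probability mass assigned to
    the 'traversable' prompt under CLIP's image-text similarity."""
    inputs = processor(text=PROMPTS, images=patches,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # (num_patches, 2)
    return logits.softmax(dim=-1)[:, 0]  # per-patch traversability score
```

Per-patch scores like these can then be projected into a robot-centric elevation map, as in the figures below.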

Publications and Presentations

William Huey*, Sean Brynjolfsson*, and Donald Greenberg. "Distilling Vision-Language Models for Real-Time Traversability Prediction". Cornell Discover Undergraduate Research in Engineering Showcase, 2024. Poster · Paper · GitHub

* Indicates equal contribution. 

The ANYmal robot navigating a large construction site.

A randomly selected traversability prediction mask from a construction site rollout (red is untraversable, blue is traversable).

Visual traversability predictions projected onto a robot-centric elevation map obtained from a LiDAR scan. As expected, the obstacles (with large height) and bumpier areas in the path have lower predicted traversability.

Cornell Greenberg Lab

A major problem in mobile robotics is predicting which parts of a given terrain will be traversable for a robot and planning to avoid untraversable areas. My most recent project with the Greenberg Lab is SuperEdge, a self-supervised learning method for traversability analysis from partial environment observations. As part of my final project for CS 4756: Robot Learning at Cornell, I made a video presenting my findings.

To support this project, I developed a Python library of end-to-end motion planning algorithms that interface with legged robots in NVIDIA Isaac Sim. It includes traversability estimation, environment graph generation, graph search, and path tracking. I also implemented and benchmarked a variety of supervised and reinforcement learning methods for a quadruped performing obstacle avoidance tasks in Isaac Sim, including behavioral cloning, DAgger, Q-learning, and actor-critic policy gradient.
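To give a flavor of the graph-search piece, here is a minimal A* planner over a 2D traversability cost grid. It is a simplified stand-in, assuming untraversable cells carry infinite cost and finite cell costs are at least 1 (keeping the Manhattan heuristic admissible); the function name and interface are illustrative, not the library's actual API:

```python
import heapq

def plan_path(cost_grid, start, goal):
    """A* over a 2D traversability cost grid. Cells with infinite cost
    are untraversable. Returns a list of (row, col) waypoints or None."""
    H, W = len(cost_grid), len(cost_grid[0])

    def h(n):  # Manhattan-distance heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    frontier = [(h(start), 0.0, start, None)]  # (f, g, node, parent)
    came_from, best = {}, {start: 0.0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:   # already expanded via a cheaper path
            continue
        came_from[node] = parent
        if node == goal:        # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < H and 0 <= nc < W:
                ng = g + cost_grid[nr][nc]
                if ng < best.get(nbr, float("inf")):
                    best[nbr] = ng
                    heapq.heappush(frontier, (ng + h(nbr), ng, nbr, node))
    return None  # no traversable path exists
```

In the real pipeline the grid costs would come from the traversability estimator, and the returned waypoints would feed the path-tracking stage.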

NASA Independent Verification & Validation

As a research intern at NASA IV&V, I was tasked with applying combinatorial testing methods to HLS Starship flight control software. In collaboration with verification researchers at NIST, systems engineers at NASA, and software engineers at SpaceX, I surveyed current methods for system-level verification and proposed a workflow that integrates combinatorial testing.

To improve the efficiency of this workflow and make it viable for engineers, I developed a desktop application that automatically generates unit tests with a specified degree of combinatorial coverage. This required designing a graph optimization algorithm to find the most efficient ordering of tests in state-based systems. Ultimately, the application was able to suggest tests that improved coverage, and it significantly increased the efficiency of combinatorial coverage analysis.
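The coverage metric at the heart of this is simple to sketch. The function below (an illustration, not the IV&V tool itself, with made-up parameter names) measures 2-way combinatorial coverage: the fraction of all parameter-value pairs that at least one test exercises.

```python
from itertools import combinations, product

def pairwise_coverage(params, tests):
    """Fraction of all 2-way parameter-value combinations exercised by a
    test suite. `params` maps parameter names to their possible values;
    each test assigns one value to every parameter."""
    all_pairs = set()
    for (p1, v1s), (p2, v2s) in combinations(sorted(params.items()), 2):
        for v1, v2 in product(v1s, v2s):
            all_pairs.add(((p1, v1), (p2, v2)))
    covered = set()
    for t in tests:  # each test covers one pair per parameter pair
        for p1, p2 in combinations(sorted(t), 2):
            covered.add(((p1, t[p1]), (p2, t[p2])))
    return len(covered & all_pairs) / len(all_pairs)

# Hypothetical parameters and tests, for illustration only:
params = {"engine": ["on", "off"], "mode": ["auto", "manual"], "gear": [1, 2, 3]}
tests = [{"engine": "on", "mode": "auto", "gear": 1},
         {"engine": "off", "mode": "manual", "gear": 2}]
print(pairwise_coverage(params, tests))  # 6 of 16 pairs covered -> 0.375
```

Generating a small suite that drives this number toward 1.0, and ordering those tests efficiently for state-based systems, is the optimization problem the application tackled.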

UConn Institute of Materials Science

Atomic Force Microscopy (AFM) allows certain material properties to be measured at nanometer scale. Nanomachining by Regressively Actuated Setpoint (NRASP) is a novel method that uses an AFM to precisely machine arbitrary patterns into the surface of a material. I designed and implemented the NRASP algorithm, which uses linear regression and Gaussian smoothing to calculate and apply an optimal voltage to the material at different locations. In the process of interfacing with the AFM, I found a bug in its source code that had existed for over 10 years. The method is currently being used to reveal hidden properties of ferroelectric materials that vary with depth and location.
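The core idea can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the lab's code: fit the depth machined per volt by linear regression on calibration passes, Gaussian-smooth the measured topography to suppress sensor noise, then solve for the per-pixel voltage that drives the surface toward the target pattern.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nrasp_voltage_map(topography, target, cal_volts, cal_depths, sigma=1.0):
    """Sketch of the NRASP idea (hypothetical interface). `topography`
    and `target` are 2D height maps in nm; `cal_volts`/`cal_depths` are
    1D calibration measurements of machined depth vs. applied voltage."""
    # linear regression: machined depth ~= slope * V + intercept
    slope, intercept = np.polyfit(cal_volts, cal_depths, 1)
    # Gaussian smoothing suppresses AFM measurement noise
    smoothed = gaussian_filter(topography, sigma=sigma)
    depth_to_remove = smoothed - target        # material left to machine away
    voltages = (depth_to_remove - intercept) / slope
    return np.clip(voltages, 0.0, None)        # no negative setpoints
```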

Publications and Presentations

Coming soon...

The surface of the ErMnO3 wafer starts with a noisy topography (heatmap scale is in nm).

After performing the NRASP procedure, a dinosaur is etched into the material with nanometer depth and lateral precision.