I have released the new Diagnostic Evaluation of Video Inpainting on Landscapes (DEVIL) benchmark, which you can use to evaluate video inpainting methods. Links to the code and the arXiv paper can be found on the project website.
I am currently looking for a job starting September 2021 or later. If you have an opening, please contact me at email@example.com!
I am a Computer Science and Engineering Ph.D. candidate at the University of Michigan in Ann Arbor. I am advised by Prof. Jason J. Corso and Prof. Honglak Lee. My current research explores conditional video generation as applied to tasks like video prediction, inpainting, and style transfer.
In 2019, I did an internship at Samsung Semiconductor, Inc. in San Diego, CA. Advised by Dr. Mostafa El-Khamy, I developed a method for applying image style transfer and inpainting models to videos in a temporally consistent manner.
In 2017, I did an internship at Toyota Research Institute in Cambridge, MA, advised by Dr. Simon Stent and Dr. German Ros, where I compared the modeling and generalization performance of multiple state-of-the-art video prediction networks on a novel dataset.
In 2015, I graduated summa cum laude from the University of Massachusetts in Amherst, where I received B.S. degrees in Computer Science and Mathematics. At UMass, I did research in the RIPPLES lab under Prof. W. Richards Adrion and Prof. Paul E. Dickson. I worked on various aspects of the lab's Presentations Automatically Organized from Lectures (PAOL) system, including video conversion, whiteboard processing, multithreading, and the graphical user interface. For my Honors Thesis project, I proposed and evaluated a novel technique for segmenting whiteboard marker strokes in real time. Additionally, I was briefly a member of the Center for e-Design under the direction of Prof. Jack C. Wileden and Prof. Sundar Krishnamurty, where I worked on a program that converted models between Computer-Aided Design systems.
The DEVIL is in the Details: A Diagnostic Evaluation Benchmark for Video Inpainting
arXiv preprint arXiv:2105.05332, 2021
HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks
IEEE Winter Conference on Applications of Computer Vision, 2021
A Temporally-Aware Interpolation Network for Video Frame Inpainting
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020
A Dataset To Evaluate The Representations Learned By Video Prediction Models
International Conference on Learning Representations (Workshop Track), 2018
Click Here: Human-Localized Keypoints as Guidance for Viewpoint Estimation
IEEE International Conference on Computer Vision, 2017
Awards and Honors
- NSF Graduate Research Fellowship - Honorable Mention (UMich)
- Outstanding Achievement in Artificial Intelligence Award (UMass)
- Honors Dean's Award (UMass)
- Honors Research Grant (UMass)
- Research Assistant Fellowship (UMass)
My paper “HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks” has been accepted to WACV 2021!
The extended version of “A Temporally-Aware Interpolation Network for Video Frame Inpainting” will appear in the May 2020 issue of IEEE Transactions on Pattern Analysis and Machine Intelligence!
I submitted the paper “HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks” to arXiv.
I will be doing an internship with Samsung Semiconductor Inc. in San Diego, CA this summer!