Albert Gong

I'm a second-year PhD student in Computer Science at Cornell, where I'm fortunate to be advised by Raaz Dwivedi and Kilian Q. Weinberger. My current work uses distribution compression to alleviate bottlenecks in machine learning, including the training and inference of kernel methods and large language models. I'm also interested in applying these tools to improve agentic workflows.

Previously, I was an undergrad at Yale, where I had the privilege of working with Andre Wibisono, Zhong Shao, and Cormac O'Dea.

Email  /  Google Scholar  /  LinkedIn  /  GitHub

profile photo

Research

Supervised Kernel Thinning
Albert Gong, Kyuseong Choi, Raaz Dwivedi
NeurIPS, 2024
Code / arXiv

TL;DR: Used distribution compression to speed up kernel smoothing and kernel ridge regression.


Source code adapted from Jon Barron's website.