Albert Gong
I'm a second-year PhD student in Computer Science at Cornell, where I'm fortunate to be advised by Raaz Dwivedi and Kilian Q. Weinberger.
My current work uses thinning to speed up training and inference in machine learning methods, from kernel methods to large language models. I'm also interested in applying these techniques to improve LLM agents.
Previously, I was an undergrad at Yale, where I had the privilege of working with Andre Wibisono, Zhong Shao, and Cormac O'Dea.
Email / Google Scholar / LinkedIn / Github
Research
* = equal contribution
PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation
Albert Gong*,
Kamilė Stankevičiūtė*,
Chao Wan*,
Anmol Kabra,
Raphael Thesmar,
Johann Lee,
JT Klenke,
Carla P. Gomes,
Kilian Q. Weinberger
arXiv preprint, 2025
Code / arXiv
Tl;dr: Created a framework that automatically generates both a document corpus and question-answer pairs for benchmarking RAG pipelines and agentic workflows.
Low-Rank Thinning
Annabelle Michael Carrell,
Albert Gong,
Abhishek Shetty,
Raaz Dwivedi,
Lester Mackey
arXiv preprint, 2025
Code (coming soon) / arXiv
Tl;dr: Developed a new analysis of thinning algorithms that adapts to low-rank structure, enabling speedups in Transformer dot-product attention, stochastic gradient descent, and deep kernel hypothesis testing.
Supervised Kernel Thinning
Albert Gong,
Kyuseong Choi,
Raaz Dwivedi
NeurIPS, 2024
Code / arXiv / Slides / Poster
Tl;dr: Used distribution compression to speed up kernel smoothing and kernel ridge regression.