Albert Gong
I'm a second-year PhD student in Computer Science at Cornell, where I'm fortunate to be advised by Raaz Dwivedi and Kilian Q. Weinberger.
My current work uses thinning to speed up training and inference of machine learning methods, from kernel methods to large language models. I'm also interested in applying these techniques to improve LLM agents.
Previously, I was an undergrad at Yale, where I had the privilege of working with Andre Wibisono, Zhong Shao, and Cormac O'Dea.
Email /
Google Scholar /
LinkedIn /
GitHub
Research
* = equal contribution
N2: A Unified Python Package and Test Bench for Nearest Neighbor-Based Matrix Completion
Caleb Chin,
Aashish Khubchandani,
Harshvardhan Maskara,
Kyuseong Choi,
Jacob Feitelberg,
Albert Gong,
Manit Paul,
Tathagata Sadhukhan,
Anish Agarwal,
Raaz Dwivedi
arXiv preprint, 2025
Code
/
arXiv
Tl;dr—Introduces the N2 package and the N2-Bench test bench for nearest neighbor-based matrix completion.
PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation
Albert Gong*,
Kamilė Stankevičiūtė*,
Chao Wan*,
Anmol Kabra,
Raphael Thesmar,
Johann Lee,
JT Klenke,
Carla P. Gomes,
Kilian Q. Weinberger
ICML, 2025
Code
/
arXiv
Tl;dr—Presents a framework that automatically generates both the document corpus and question-answer pairs for benchmarking RAG and agentic workflows.
Low-Rank Thinning
Annabelle Michael Carrell,
Albert Gong,
Abhishek Shetty,
Raaz Dwivedi,
Lester Mackey
ICML, 2025
Code (see below)
/
arXiv
Tl;dr—Develops a new analysis of thinning algorithms that adapts to low-rank structure, enabling faster dot-product attention in Transformers (Thinformer), faster stochastic gradient descent (KH-SGD), and faster deep kernel hypothesis testing (DeepCTT).
Supervised Kernel Thinning
Albert Gong,
Kyuseong Choi,
Raaz Dwivedi
NeurIPS, 2024
Code
/
arXiv
/
Slides
/
Poster
Tl;dr—Uses distribution compression to speed up kernel smoothing and kernel ridge regression.