Aarshvi Gajjar


I am a fourth-year Ph.D. student at NYU, where I am part of the Theoretical Computer Science Group. My advisors are Christopher Musco and Chinmay Hegde.

Previously, I completed my master's at UMass Amherst, where I wrote my thesis with Cameron Musco. I also worked as a Strat at Goldman Sachs and earned my B.Tech. from IIIT Hyderabad.


Research

My research centers on randomized methods for sample- and compute-efficient learning. Some questions that interest me are:

  • Can techniques known to be optimal for active linear regression be extended to variants of the problem?
  • What is the right way to use adaptive sampling for active learning?
  • How is the data selection problem approached within the communities of experimental design, pure exploration, and Bayesian optimization?
  • How can sketching/sampling be applied to improve foundation model performance?

I frequently employ tools from theoretical computer science, high-dimensional statistics, and randomized numerical linear algebra.

Publications

Agnostic Active Learning of Single Index Models with Linear Sample Complexity
†Aarshvi Gajjar, †Wai Ming Tai, †Xingyu Xu, Chinmay Hegde, Christopher Musco and Yi Li
COLT, 2024
Associated poster at the minisymposium on Scientific ML for Scarce Data, SIAM MDS24
Preliminary version: Adaptive Experimental Design and Active Learning Workshop, NeurIPS, 2023

Active Learning for Single Neuron Models with Lipschitz Non-Linearities
Aarshvi Gajjar, Chinmay Hegde and Christopher Musco
AISTATS, 2023
Preliminary version: selected as a Spotlight at the DLDE Workshop, NeurIPS 2022

Subspace Embeddings under Nonlinear Transformations
Aarshvi Gajjar and Cameron Musco
ALT, 2021

Authors are listed alphabetically, except for those marked with †, indicating equal contribution.

Contact. [firstname]@nyu.edu



Website adapted from Gregory Gundersen