Research

Optimization and Robustness in Machine Learning

My group actively works on new problem formulations, methods, and efficient algorithms for robust machine learning: models that maintain performance in the presence of perturbations and can adapt to variations in data collection, task objectives, and domain shifts. We have also recently been putting effort into designing distributed architectures and algorithms for learning models from data collected by decentralized agents.

Minimax Learning

Our goal is to develop ML models that are robust to various perturbations, as well as generative models (primarily GAN formulations) that can be trained efficiently and generalize well to unseen problem instances. The canonical formulation involves a minimax (saddle point) problem, in which one seeks model parameters that minimize a loss function under the worst-case value of a perturbation or uncertainty. Our work develops efficient algorithms for minimax problems, stable GAN training procedures, optimal algorithms for stochastic minimax problems, and robust distributed optimization, and it analyzes the generalization properties of minimax learners.
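
Schematically (notation ours: θ denotes the model parameters ranging over a set Θ, u a perturbation or uncertainty ranging over a set U, and L the loss), the canonical formulation reads

    \min_{\theta \in \Theta} \max_{u \in U} L(\theta, u),

with adversarial training and GAN training both arising as instances of this template.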

  1. Mokhtari, A., A. Ozdaglar, and S. Pattathil, “A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: A Proximal Point Approach,” Proc. of AISTATS, 2020.
  2. Mokhtari, A., A. Ozdaglar, and S. Pattathil, “Convergence Rate of O(1/k) for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems,” SIAM Journal on Optimization, vol. 30, no. 4, pp. 3230-3251, 2020. **
  3. Farnia, F., and A. Ozdaglar, “Do GANs always have Nash Equilibria?” Proc. of ICML, 2020.
  4. Farnia, F., and A. Ozdaglar, “Train simultaneously, generalize better: Stability of gradient-based minimax learners,” submitted for publication to AISTATS, 2021.
  5. Golowich, N., S. Pattathil, C. Daskalakis, and A. Ozdaglar, “Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems,” Proc. of COLT, 2020.
  6. Fallah, A., A. Ozdaglar, and S. Pattathil, “Multistage Stochastic Gradient Based Methods for Minimax Problems,” Proc. of IEEE Conference on Decision and Control (CDC), 2020. **

Meta-Learning

Meta-learning uses data from previous tasks to learn model parameters that can be quickly adapted (fine-tuned) to a new task, using a small amount of task-specific data, so as to perform well on that task. Our focus is on model-agnostic meta-learning (MAML), which can be applied to any learning problem trained with gradient descent. Our work provides convergence guarantees for MAML, investigates its generalization properties using the algorithmic stability framework, and extends it to reinforcement learning problems.
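
As a concrete illustration (notation ours: T_i denotes a task drawn from the task distribution, L_{T_i} its loss, and α the adaptation step size), the MAML objective trains an initialization θ so that a single gradient step on a new task already performs well:

    \min_{\theta} \mathbb{E}_{T_i}\left[ L_{T_i}\big(\theta - \alpha \nabla_{\theta} L_{T_i}(\theta)\big) \right].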

  1. Fallah, A., A. Mokhtari, and A. Ozdaglar, “Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks,” Proc. of NeurIPS, 2021. **
  2. Fallah, A., K. Georgiev, A. Mokhtari, and A. Ozdaglar, “On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning,” Proc. of NeurIPS, 2021. **
  3. Fallah, A., A. Mokhtari, and A. Ozdaglar, “On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms,” Proc. of AISTATS, 2020. **

Federated Learning

In many machine learning applications, data are collected by a large number of devices, calling for a distributed architecture for learning models. Federated learning (FL) aims to address this challenge by providing a decentralized mechanism for leveraging the individual data and computational power of users. Classical FL relies on a single model shared by all users, which tends to perform poorly in the presence of data and task heterogeneity across users. Our recent work has developed several approaches for building multiple “personalized” models for heterogeneous users. In one project, we take a meta-learning approach, where the goal is to learn an initial shared model that each user adapts to their own task using a small number of additional local computations. In another project, we consider a cluster-based approach, which is more appropriate when there is substantial heterogeneity in user data distributions, and propose an algorithm that simultaneously learns cluster identities while operating in a fully decentralized manner.
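
In the meta-learning formulation (a stylized statement; f_i denotes user i's local loss, n the number of users, and α the local adaptation rate), the shared initialization w solves

    \min_{w} \frac{1}{n} \sum_{i=1}^{n} f_i\big(w - \alpha \nabla f_i(w)\big),

so that each user obtains a good personalized model after one (or a few) local gradient steps on their own data.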

  1. Fallah, A., K. Zhang, E. Vogelbaum, F. Farnia, and A. Ozdaglar, “A Multi-Model Framework for Personalized Federated Learning,” submitted for publication, 2022. **
  2. Fallah, A., A. Mokhtari, and A. Ozdaglar, “Personalized Federated Learning: A Meta-Learning Approach,” Proc. of NeurIPS, 2021. **

Online Platforms and Social Networks

In various projects, we study questions of data ownership, privacy, and data markets; the design of review systems for online markets; the manipulation of user behavior by online platforms using user data; and the spread of misinformation and the effects of platform interventions (e.g., the effect of algorithms that create filter bubbles on misinformation). We are particularly interested in designing incentive schemes and mechanisms that trade off multiple objectives, such as accuracy and privacy.
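
As a simple illustration of the accuracy-privacy tradeoff (a textbook local-privacy example, not the mechanism of the papers below): if each user holds data x_i ∈ [0,1] and reports x_i plus Laplace noise of scale 1/ε_i, the report is ε_i-differentially private, but the noise contributes variance 2/ε_i² to the platform's estimate, so stronger privacy guarantees (smaller ε_i) come at a direct cost in estimation accuracy.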

  1. Fallah, A., A. Makhdoumi, A. Malekian, and A. Ozdaglar, “Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms,” submitted for publication, 2022.
  2. Acemoglu, D., A. Ozdaglar, and J. Siderius, “A Model of Online Misinformation,” submitted for publication, 2022.
  3. Acemoglu, D., A. Makhdoumi, A. Malekian, and A. Ozdaglar, “Fast and Slow Learning from Reviews,” revise-resubmit in Econometrica, 2021.
  4. Acemoglu, D., A. Makhdoumi, A. Malekian, and A. Ozdaglar, “Too Much Data: Prices and Inefficiencies in Data Markets,” to appear in American Economic Journal: Microeconomics, 2021.
  5. Wai, H.T., A. Ozdaglar, and A. Scaglione, “Spread of Information with Forceful Agents and Nonlinear Interactions: Analysis and Localization,” revise-resubmit in IEEE Transactions on Control of Network Systems, Special Issue on Dynamics and Behaviors in Social Networks, 2021.
  6. Wai, H.T., Y. Eldar, A. Ozdaglar, and A. Scaglione, “Community Inference from Partially Observed Graph Signals: Algorithms and Analysis,” revise-resubmit in IEEE Transactions on Signal Processing, 2021.
  7. Mostagir, M., A. Ozdaglar, and J. Siderius, “When is Society Susceptible to Manipulation?,” to appear in Management Science, 2021 (https://doi.org/10.1287/mnsc.2021.4265).

Multi-Agent Reinforcement Learning

In recent work, we focus on novel algorithms and dynamics for multi-agent learning in dynamic environments. We presented the first stable independent learning algorithm (one requiring no coordination among the agents) for zero-sum stochastic games, in both the model-based and model-free settings, and we have since extended it to other classes of games and to dynamics with weaker information requirements that capture more general settings.
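
In stylized form (our simplified rendering, suppressing step-size schedules and two-timescale details), each agent i in the decentralized Q-learning dynamics runs a fully local update of the type

    q_i(s, a_i) ← q_i(s, a_i) + α [ r_i + γ v_i(s') − q_i(s, a_i) ],

using only its own action a_i, its realized reward r_i, and the observed transition from state s to s'; the local value estimate v_i is updated on a slower timescale from a smoothed best response to q_i, and no agent observes the other agents' actions, rewards, or policies.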

  1. Sayin, M., K. Zhang, D. Leslie, T. Basar, and A. Ozdaglar, “Decentralized Q-learning in Zero-sum Markov Games,” Proc. of NeurIPS, 2021.
  2. Sayin, M., F. Parise, and A. Ozdaglar, “Fictitious Play in Zero-Sum Stochastic Games,” revise-resubmit in SIAM Journal on Control and Optimization, 2021.
  3. Ozdaglar, A., M. Sayin, and K. Zhang, “Independent Learning in Stochastic Games,” to appear in Proc. of International Congress of Mathematicians, 2022.

Network Games

In another line of work, we focus on developing new tools for studying game-theoretic interactions over large networks. We have made significant progress both on introducing powerful techniques to study network games (a variational inequality framework that goes beyond standard convex optimization approaches) and on developing a statistical approach to analysis and interventions using graphons (a general nonparametric model for large networks that includes Erdos-Renyi and stochastic block models as special cases).
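
In the variational inequality framework (stated schematically; X_i denotes player i's strategy set and F stacks the gradients of the players' cost functions), a Nash equilibrium x* is characterized as a solution of

    F(x^*)^\top (x - x^*) \ge 0 \quad \text{for all } x \in X_1 \times \cdots \times X_n,

and monotonicity properties of the game map F then deliver existence, uniqueness, convergence, and sensitivity results.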

  1. Parise, F., and A. Ozdaglar, “Analysis and Interventions in Large Network Games” (review paper), Annual Review of Control, Robotics, and Autonomous Systems, 2021.
  2. Parise, F., and A. Ozdaglar, “Graphon Games: A Statistical Framework for Network Games and Interventions,” revise-resubmit in Econometrica, 2021.
  3. Parise, F. and A. Ozdaglar, “A variational inequality framework for network games: Existence, uniqueness, convergence and sensitivity analysis,” Games and Economic Behavior, vol. 114, pp. 47-82, 2020.