Model-Heterogeneous Federated Learning

Amanda Barnard AM

15 Feb 2024

Federated learning is a distributed machine learning paradigm that enables multiple clients with private data to collaboratively build a high-performance prediction model. The training process is coordinated by a central server, either synchronously or asynchronously. Each participating device trains the model on its local data and shares only its model parameters with the central server. The server aggregates these parameters and returns the updated model to the participating devices, and this iterative process repeats until the model converges. By eliminating the need to share local training data, federated learning safeguards user privacy. It has enabled a wide range of applications, including Google’s Gboard, anomaly detection, smart health, and recommender systems.
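To make one training round concrete, here is a minimal Python sketch of a synchronous round with FedAvg-style weighted averaging. The dict-of-arrays model representation, the `local_train` method, and the `FakeClient` demo are illustrative assumptions for this sketch, not the API of any particular federated learning framework.

```python
import numpy as np

def aggregate(client_params, client_sizes):
    """Average client parameter dicts, weighted by each client's data size."""
    total = sum(client_sizes)
    return {
        name: sum(n * p[name] for n, p in zip(client_sizes, client_params)) / total
        for name in client_params[0]
    }

def federated_round(global_params, clients):
    """Broadcast the global model, collect local updates, then aggregate."""
    updates, sizes = [], []
    for client in clients:
        # Each client trains on its private data; only parameters are shared.
        local_params, n_samples = client.local_train(dict(global_params))
        updates.append(local_params)
        sizes.append(n_samples)
    return aggregate(updates, sizes)

# Tiny demo with two fake clients that just shift the parameters.
class FakeClient:
    def __init__(self, shift, n):
        self.shift, self.n = shift, n
    def local_train(self, params):
        return {k: v + self.shift for k, v in params.items()}, self.n

clients = [FakeClient(1.0, 10), FakeClient(3.0, 30)]
print(federated_round({"w": np.zeros(3)}, clients))  # w -> [2.5, 2.5, 2.5]
```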

Most existing federated learning schemes adopt a model-homogeneous setting, in which all participating client devices train the same local model so as to produce a single global prediction model. However, this is limiting in real-world deployments, since participating devices can differ widely in on-device resources such as computing power, memory, and network bandwidth. Requiring the same model on every device restricts model selection to what the least resourceful participant can accommodate.

Research Questions and Tasks

This project aims to develop model-heterogeneous federated learning schemes that allow participating client devices to select different models for local training based on their on-device resources. We will investigate the partial training approach, in which each client trains a small sub-model extracted from the large global server model (see the sketch after the list below), and will focus on the following three research questions:

  1. How should each client device select the sub-model for local training based on its on-device resources and local data distribution?
  2. How should each client device control the training of different sub-models to ensure that the global model can be evenly trained?
  3. How could dynamic sub-model selection and client scheduling be combined to avoid client drift induced by data heterogeneity across clients? [only for 24 credits]
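As background on the partial training approach, the sketch below shows one way a resource-constrained client could extract a rolling sub-model from a global weight matrix and the server could merge the trained slice back, loosely in the spirit of FedRolex (Alam et al., 2022). The function names, the row-wise slicing, and the single-client merge are simplifying assumptions; the actual scheme aggregates overlapping sub-models from many clients.

```python
import numpy as np

def extract_submodel(global_weight, capacity, round_idx):
    """Take a wrapping window of output rows sized by the client's capacity.

    Rolling the window start by one position per round (as in FedRolex)
    means every row of the global matrix gets trained over time.
    """
    n_out = global_weight.shape[0]
    k = max(1, int(capacity * n_out))
    start = round_idx % n_out
    rows = [(start + i) % n_out for i in range(k)]
    return global_weight[rows], rows

def merge_submodel(global_weight, trained_slice, rows):
    """Write a client's trained rows back into the global weight matrix."""
    updated = global_weight.copy()
    updated[rows] = trained_slice
    return updated

# A client with capacity 0.5 trains 2 of the 4 rows; at round 3 the
# rolling window wraps around and covers rows [3, 0].
W = np.arange(8.0).reshape(4, 2)
sub, rows = extract_submodel(W, capacity=0.5, round_idx=3)
W_new = merge_submodel(W, sub - 1.0, rows)  # pretend local training changed them
```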

Supervision

In collaboration with Visiting Associate Professor Haibo Zhang (https://comp.anu.edu.au/people/haibo-zhang/).

References

  • Kilian Pfeiffer, Martin Rapp, Ramin Khalili, and Jörg Henkel. Federated Learning for Computationally Constrained Heterogeneous Devices: A Survey. ACM Computing Surveys, pp. 1–27, 2023. https://doi.org/10.1145/3596907
  • Ying Pang, Haibo Zhang, Jeremiah D. Deng, Lizhi Peng, and Fei Teng. Collaborative Learning With Heterogeneous Local Models: A Rule-Based Knowledge Fusion Approach. IEEE Transactions on Knowledge and Data Engineering, 2023. https://doi.org/10.1109/TKDE.2023.3341808
  • Samiul Alam, Luyang Liu, Ming Yan, and Mi Zhang. FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction. Proceedings of NeurIPS, 2022. https://proceedings.neurips.cc/paper_files/paper/2022/file/bf5311df07f3efce97471921e6d2f159-Paper-Conference.pdf

Requirements

Background and experience in basic machine learning (e.g. COMP3670/4670/4660/4650, STAT3040/4040). Experience with Python.

