Energy Efficient Federated Learning with Bayesian Optimized Training Pace Control


Klara Nahrstedt, University of Illinois, Urbana-Champaign (UIUC): Principal Investigator
Deming Chen, University of Illinois, Urbana-Champaign (UIUC): Principal Investigator
Hongpeng Guo, University of Illinois, Urbana-Champaign (UIUC): PhD Student

Timeline

2022 – present

Project Description

Federated learning (FL) is a machine learning paradigm that enables a cluster of decentralized edge devices to collaboratively train a shared machine learning model without exposing users’ raw data. However, model training is computation-intensive and energy-demanding, posing severe challenges to end devices’ battery life. In this project, we present BoFL, a training pace controller deployed on edge devices that adjusts the hardware operational frequencies over multiple configurations to achieve energy-efficient federated learning. BoFL operates in an explore-then-exploit manner within the limited rounds of an FL task. It strategically explores the large hardware frequency space with a tailored Bayesian optimization algorithm: BoFL first finds a set of good operational configurations within a few training rounds, and then exploits these configurations in the remaining rounds to minimize the energy consumption of model training. Experiments on multiple real-world edge devices with different FL tasks show that BoFL reduces the energy consumption of model training by around 26% and achieves near-optimal energy efficiency.
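The explore-then-exploit idea above can be sketched in a few lines. This is a minimal, self-contained illustration, not BoFL itself: the candidate frequency pairs, the synthetic energy function, the Gaussian-process surrogate with a lower-confidence-bound rule, and all parameter values are hypothetical stand-ins for the device-specific search space, measured energy, and the tailored acquisition strategy described in the paper.

```python
import numpy as np

# Hypothetical candidate space of (CPU, GPU) frequency pairs in MHz; the real
# BoFL frequency space is device-specific and much larger.
CANDIDATES = np.array([(c, g) for c in (600, 900, 1200, 1500, 1800)
                              for g in (300, 600, 900, 1200)], dtype=float)

def measure_energy(cfg):
    """Stand-in for the measured energy (J) of one local training round:
    a smooth bowl where both under- and over-clocking waste energy."""
    c, g = cfg
    return ((c - 1200.0) ** 2 + (g - 600.0) ** 2) / 1e4 + 50.0

def rbf(a, b, length_scale=400.0):
    """Squared-exponential kernel between two sets of configurations."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def explore_then_exploit(explore_rounds=8, total_rounds=30):
    X, y = [], []
    # --- Explore: fit a Gaussian-process surrogate to the energies measured
    # so far and pick the candidate with the lowest confidence bound.
    for t in range(explore_rounds):
        if t < 2:  # two fixed seed points before fitting the surrogate
            idx = [0, len(CANDIDATES) // 2][t]
        else:
            Xa, ya = np.array(X), np.array(y)
            K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
            Ks = rbf(CANDIDATES, Xa)
            mu = ya.mean() + Ks @ np.linalg.solve(K, ya - ya.mean())
            var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
            # Minimize predicted energy minus an uncertainty bonus.
            idx = int(np.argmin(mu - 2.0 * np.sqrt(np.maximum(var, 0.0))))
        cfg = CANDIDATES[idx]
        X.append(cfg)
        y.append(measure_energy(cfg))
    # --- Exploit: run all remaining rounds at the best configuration found.
    best = X[int(np.argmin(y))]
    total = sum(y) + (total_rounds - explore_rounds) * measure_energy(best)
    return best, total
```

The key design point mirrored here is that measurement is expensive (each probe is a full training round), so exploration is bounded to a few rounds and the bulk of the task runs at the best configuration discovered.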

This poster presents a more intuitive overview of this project.


Publication

Hongpeng Guo, Haotian Gu, Zhe Yang, Xiaoyang Wang, Eun Kyung Lee, Nandhini Chandramoorthy, Tamar Eilam, Deming Chen, Klara Nahrstedt (2022). BoFL: Bayesian Optimized Local Training Pace Control for Energy Efficient Federated Learning. In ACM/IFIP Middleware 2022.

Funding Agencies

This work was supported by IBM-Illinois Discovery Accelerator Institute.