SuperFedNAS: Cost-Efficient Federated Neural Architecture Search for On-Device Inference
Alind Khare*, Animesh Agrawal, Aditya Annavajjala, Payman Behnam, Myungjin Lee, Hugo M Latapie, Alexey Tumanov
Abstract
"Neural Architecture Search (NAS) for Federated Learning (FL) is an emerging field. It automates the design and training of Deep Neural Networks (DNNs) when data cannot be centralized due to privacy, communication costs, or regulatory restrictions. Recent federated NAS methods not only reduce manual effort but also help achieve higher accuracy than traditional FL methods like FedAvg. Despite the success, existing federated NAS methods still fall short in satisfying diverse deployment targets common in on-device inference including hardware, latency budgets, or variable battery levels. Most federated NAS methods search for only a limited range of neuro-architectural patterns, repeat them in a DNN, thereby restricting achievable performance. Moreover, these methods incur prohibitive training costs to satisfy deployment targets. They perform the training and search of DNN architectures repeatedly for each case. addresses these challenges by decoupling the training and search in federated NAS. co-trains a large number of diverse DNN architectures contained inside one supernet in the FL setting. Post-training, clients perform NAS locally to find specialized DNNs by extracting different parts of the trained supernet with no additional training. takes O(1) (instead of O(N )) cost to find specialized DNN architectures in FL for any N deployment targets. As part of , we introduce —a novel FL training algorithm that performs multi-objective federated optimization of DNN architectures (≈ 5 ∗ 108 ) under different client data distributions. achieves upto 37.7% higher accuracy or upto 8.13x reduction in MACs than existing federated NAS methods. Code is released at https://github.com/gatech-sysml/superfednas."