Energy-Efficient and Privacy-Preserved Incentive Mechanism for Federated Learning in Mobile Edge Computing
Liu, J., Chang, Z., Min, G., & Zhang, Y. (2023). Energy-Efficient and Privacy-Preserved Incentive Mechanism for Federated Learning in Mobile Edge Computing. IEEE International Conference on Communications, 2023, 172-178. https://doi.org/10.1109/ICC45041.2023.10279757
Published in
IEEE International Conference on Communications
Date
2023
Discipline
Engineering; Secure Communications Engineering and Signal Processing; Mathematical Information Technology
Access restrictions
Embargoed until: 2025-10-23
Copyright
© 2023, IEEE
In mobile edge computing (MEC)-assisted federated learning (FL), MEC users can train on their data locally and send the results to the MEC server to update the global model. However, the implementation of FL may be hindered by the selfish nature of MEC users, as they must contribute considerable data and computing resources while sacrificing a certain degree of data privacy during the FL process. It is therefore important to design an efficient incentive mechanism that motivates users to join FL. In this work, with explicit consideration of the impact of wireless transmission and data privacy, we design an energy-efficient and privacy-preserved incentive scheme that facilitates the FL process by investigating the interactions between the MEC server and MEC users in a MEC-assisted FL system. Using a Stackelberg game model, we explore the transmit power allocation and privacy budget determination of the MEC users and the reward strategy of the MEC server, and then analyze the Stackelberg equilibrium. Simulation results demonstrate the effectiveness of the proposed scheme.
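To make the leader-follower structure described in the abstract concrete, the toy Python sketch below illustrates backward induction in a Stackelberg game of this general kind: the MEC server (leader) announces a per-unit reward, each MEC user (follower) best-responds with a privacy budget, and the server searches for the reward that maximizes its own utility. The utility functions, cost coefficients, and grid search here are illustrative assumptions only, not the formulation or equilibrium analysis of the paper.

```python
# Toy backward-induction sketch of a two-stage Stackelberg game between an
# MEC server (leader, sets a per-unit reward) and MEC users (followers, who
# choose privacy budgets). All utilities and parameters are assumed for
# illustration and do NOT reproduce the paper's model.

import math

# Hypothetical per-user privacy/energy cost coefficients and an assumed cap
# on the privacy budget each user is willing to spend.
COSTS = [0.8, 1.0, 1.5, 2.0]
EPS_MAX = 5.0


def best_response(reward: float, cost: float) -> float:
    """Follower stage: user maximizes r*eps - c*eps^2 over eps in [0, EPS_MAX].

    The unconstrained maximizer is eps = r / (2c); clip it to the feasible range.
    """
    return min(max(reward / (2.0 * cost), 0.0), EPS_MAX)


def server_utility(reward: float) -> float:
    """Leader stage: assumed concave accuracy gain from the aggregated
    contribution minus the total payment made to the users."""
    contributions = [best_response(reward, c) for c in COSTS]
    total = sum(contributions)
    accuracy_gain = 10.0 * math.log(1.0 + total)  # assumed diminishing returns
    payment = reward * total
    return accuracy_gain - payment


def solve_leader(grid_step: float = 0.01, r_max: float = 10.0) -> float:
    """Grid-search the leader's reward; with the followers' closed-form best
    responses substituted in, this approximates the Stackelberg equilibrium."""
    best_r, best_u = 0.0, float("-inf")
    r = 0.0
    while r <= r_max:
        u = server_utility(r)
        if u > best_u:
            best_r, best_u = r, u
        r += grid_step
    return best_r


if __name__ == "__main__":
    r_star = solve_leader()
    eps_star = [best_response(r_star, c) for c in COSTS]
    print(f"equilibrium reward r* = {r_star:.2f}")
    print("equilibrium privacy budgets:", [round(e, 3) for e in eps_star])
```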
Publisher
IEEE
Conference
IEEE International Conference on Communications
ISSN
1550-3607
Keywords
Publication in research information system
https://converis.jyu.fi/converis/portal/detail/Publication/197232266
Related items
Showing items with similar title or keywords.
- Energy-Efficient Edge Computing Service Provisioning for Vehicular Networks : A Consensus ADMM Approach
  Zhou, Zhenyu; Feng, Junhao; Chang, Zheng; Shen, Xuemin Sherman (Institute of Electrical and Electronics Engineers, 2019). In vehicular networks, in-vehicle user equipment (UE) with limited battery capacity can achieve opportunistic energy saving by offloading energy-hungry workloads to vehicular edge computing nodes via vehicle-to-infrastructure ...
- Energy-Efficient Resource Optimization with Wireless Power Transfer for Secure NOMA Systems
  Lei, Lei; Chang, Zheng; Hu, Yun; Ristaniemi, Tapani; Yuan, Yaxiong; Chatzinotas, Symeon (IEEE, 2019). In this paper, we investigate resource allocation algorithm design for secure non-orthogonal multiple access (NOMA) systems empowered by wireless power transfer. With the consideration of an existing eavesdropper, the ...
- Encryption and Generation of Images for Privacy-Preserving Machine Learning in Smart Manufacturing
  Terziyan, Vagan; Malyk, Diana; Golovianko, Mariia; Branytskyi, Vladyslav (Elsevier, 2023). Current advances in machine (deep) learning and the exponential growth of data collected by and shared between smart manufacturing processes give a unique opportunity to get extra value from that data. The use of public ...
- Anonymization as homeomorphic data space transformation for privacy-preserving deep learning
  Girka, Anastasiia; Terziyan, Vagan; Gavriushenko, Mariia; Gontarenko, Andrii (Elsevier, 2021). Industry 4.0 is largely data-driven nowadays. Owners of the data, on the one hand, want to get added value from the data by using remote artificial intelligence tools as services; on the other hand, they are concerned about privacy ...
- Communication-Efficient Federated Learning in Channel Constrained Internet of Things
  Hu, Tao; Zhang, Xinran; Chang, Zheng; Hu, Fengye; Hämäläinen, Timo (IEEE, 2022). Federated learning (FL) is able to utilize the computing capability and maintain the privacy of the end devices by collecting and aggregating the locally trained learning model parameters while keeping the local personal ...