dc.contributor.author | Bai, Yu | |
dc.contributor.author | Chang, Zheng | |
dc.contributor.author | Jäntti, Riku | |
dc.contributor.editor | Valenti, Matthew | |
dc.contributor.editor | Reed, David | |
dc.contributor.editor | Torres, Melissa | |
dc.date.accessioned | 2024-11-28T07:52:21Z | |
dc.date.available | 2024-11-28T07:52:21Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Bai, Y., Chang, Z., & Jäntti, R. (2024). Deep Reinforcement Learning-enabled Dynamic UAV Deployment and Power Control in Multi-UAV Wireless Networks. In M. Valenti, D. Reed, & M. Torres (Eds.), <i>ICC 2024 : IEEE International Conference on Communications</i> (pp. 1286-1290). IEEE. IEEE International Conference on Communications. <a href="https://doi.org/10.1109/ICC51166.2024.10622465" target="_blank">https://doi.org/10.1109/ICC51166.2024.10622465</a> | |
dc.identifier.other | CONVID_242680767 | |
dc.identifier.uri | https://jyx.jyu.fi/handle/123456789/98692 | |
dc.description.abstract | Using Unmanned Aerial Vehicles (UAVs) as aerial base stations to provide services to ground users has received growing research interest in recent years. The dynamic deployment of UAVs is a significant research direction within UAV network studies. This paper introduces a highly adaptable UAV wireless network that accounts for the mobility of UAVs and users, the variability in their states, and the tunable transmission power of UAVs. The objective is to maximize energy efficiency while minimizing the number of unserved online users. This dual objective is achieved by jointly optimizing the states, transmission powers, and movement strategies of the UAVs. To address the variable-state challenges posed by the dynamic environment, user and UAV data are encapsulated within a multi-channel map, which a Convolutional Neural Network (CNN) then processes to extract key features. The deployment and power control strategies are determined by an agent trained with a Proximal Policy Optimization (PPO)-based Deep Reinforcement Learning (DRL) algorithm. Simulation results demonstrate the effectiveness of the proposed strategy in enhancing energy efficiency and reducing the number of unserved online users. | en
dc.format.mimetype | application/pdf | |
dc.language.iso | eng | |
dc.publisher | IEEE | |
dc.relation.ispartof | ICC 2024 : IEEE International Conference on Communications | |
dc.relation.ispartofseries | IEEE International Conference on Communications | |
dc.rights | In Copyright | |
dc.title | Deep Reinforcement Learning-enabled Dynamic UAV Deployment and Power Control in Multi-UAV Wireless Networks | |
dc.type | conference paper | |
dc.identifier.urn | URN:NBN:fi:jyu-202411287515 | |
dc.contributor.laitos | Informaatioteknologian tiedekunta | fi |
dc.contributor.laitos | Faculty of Information Technology | en |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | |
dc.relation.isbn | 978-1-7281-9055-6 | |
dc.type.coar | http://purl.org/coar/resource_type/c_5794 | |
dc.description.reviewstatus | peerReviewed | |
dc.format.pagerange | 1286-1290 | |
dc.relation.issn | 1550-3607 | |
dc.type.version | acceptedVersion | |
dc.rights.copyright | © 2024 IEEE | |
dc.rights.accesslevel | embargoedAccess | fi |
dc.type.publication | conferenceObject | |
dc.relation.conference | IEEE International Conference on Communications | |
dc.subject.yso | energy efficiency | |
dc.subject.yso | energy systems | |
dc.subject.yso | wireless networks | |
dc.subject.yso | unmanned aerial vehicles | |
dc.subject.yso | reinforcement learning | |
dc.format.content | fulltext | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p8328 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p22348 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p24221 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p24149 | |
jyx.subject.uri | http://www.yso.fi/onto/yso/p40315 | |
dc.rights.url | http://rightsstatements.org/page/InC/1.0/?language=en | |
dc.relation.doi | 10.1109/ICC51166.2024.10622465 | |
dc.type.okm | A4 | |
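
The abstract describes a pipeline in which user and UAV information is encoded as a multi-channel map, a CNN extracts features from that map, and a PPO-trained agent outputs deployment and power-control actions. The sketch below is a minimal illustration of that kind of CNN actor-critic network in PyTorch; it is not the authors' implementation, and the channel count, grid size, and action dimension are placeholder assumptions rather than values from the paper.

import torch
import torch.nn as nn

class MapPolicyNet(nn.Module):
    """CNN over a multi-channel map with separate policy (actor) and value (critic) heads,
    as a PPO agent typically requires. All dimensions are illustrative assumptions."""
    def __init__(self, in_channels=3, grid_size=32, n_actions=10):
        super().__init__()
        # Convolutional feature extractor applied to the multi-channel map
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * grid_size * grid_size  # spatial size preserved by padding=1
        self.policy_head = nn.Linear(feat_dim, n_actions)  # action logits (e.g. movement / power levels)
        self.value_head = nn.Linear(feat_dim, 1)           # state-value estimate for PPO

    def forward(self, x):
        h = self.features(x)
        return self.policy_head(h), self.value_head(h)

# Usage example: a batch of 4 maps with 3 channels on a 32x32 grid
net = MapPolicyNet()
maps = torch.zeros(4, 3, 32, 32)
logits, values = net(maps)
print(logits.shape, values.shape)  # torch.Size([4, 10]) torch.Size([4, 1])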