Novel Adaptive Transmission for Effective URLLC Support in 5G and Beyond Wireless Systems: Reinforcement Learning based Designs
dc.contributor.author | Saatchi, Negin Sadat | |
dc.contributor.supervisor | Yang, Hong-Chuan | |
dc.date.accessioned | 2023-08-31T23:04:44Z | |
dc.date.available | 2023-08-31T23:04:44Z | |
dc.date.copyright | 2023 | en_US |
dc.date.issued | 2023-08-31 | |
dc.degree.department | Department of Electrical and Computer Engineering | en_US |
dc.degree.level | Master of Applied Science M.A.Sc. | en_US |
dc.description.abstract | The Industrial Internet of Things (IIoT) has transformed industrial processes by connecting devices and enabling real-time data exchange. However, the increasing demands of future IIoT applications necessitate a trustworthy, ultra-reliable, and low-latency communication (URLLC) service to support critical and time-sensitive operations. This requires the development of advanced wireless technologies capable of delivering data reliably while meeting stringent latency requirements. In this work, we first propose a novel adaptive transmission design for the fifth-generation New Radio (5G NR) technology to enhance its URLLC provision capability. Our approach involves jointly selecting numerology, mini-slot size, and modulation and coding scheme (MCS) for each transmission attempt. By considering the prevailing channel conditions and the available latency budget, we aim to maximize the probability of successful data delivery while strictly adhering to latency constraints. We formulate the problem as a sequential decision-making process, which we cast as a finite-step Markov Decision Process (MDP). Our objective is to derive an optimal policy that guides the selection of transmission parameters at each step, ensuring efficient resource allocation and adaptive decision-making. To achieve this, we apply model-based reinforcement learning and model-free deep reinforcement learning techniques to obtain the optimal policy. Through selected numerical examples, we demonstrate the superior performance of our proposed joint design compared to conventional schemes. The numerical results highlight the significant performance gains achieved across a wide range of transmission scenarios, particularly in situations with stringent latency budgets and poor channel quality. While our proposed joint design is demonstrated within the context of 5G NR, its applicability extends to future generations of wireless systems that adopt similar reliability and latency mechanisms. | en_US |
dc.description.scholarlevel | Graduate | en_US |
dc.identifier.bibliographicCitation | N. S. Saatchi, H. -C. Yang and Y. -C. Liang, "Novel Adaptive Transmission Scheme for Effective URLLC Support in 5G NR: A Model-Based Reinforcement Learning Solution," in IEEE Wireless Communications Letters, vol. 12, no. 1, pp. 109-113, Jan. 2023, doi: 10.1109/LWC.2022.3218488. | en_US |
dc.identifier.uri | http://hdl.handle.net/1828/15335 | |
dc.language | English | eng |
dc.language.iso | en | en_US |
dc.rights | Available to the World Wide Web | en_US |
dc.subject | URLLC | en_US |
dc.subject | probability of successful transmission | en_US |
dc.subject | adaptive modulation and coding | en_US |
dc.subject | deep reinforcement learning | en_US |
dc.subject | reinforcement learning | en_US |
dc.subject | Markov decision process | en_US |
dc.title | Novel Adaptive Transmission for Effective URLLC Support in 5G and Beyond Wireless Systems: Reinforcement Learning based Designs | en_US |
dc.type | Thesis | en_US |
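
The abstract above frames parameter selection as a finite-step MDP solved with model-based reinforcement learning. The sketch below is only a minimal illustration of that general idea under assumed placeholder values, not the thesis's actual model: the action set, slot durations, per-attempt success probabilities, and the omission of channel state are all assumptions made here for brevity.

```python
# Minimal, illustrative finite-horizon MDP sketch (backward induction, i.e.
# model-based dynamic programming) for latency-constrained retransmission.
# All numbers are hypothetical placeholders, NOT values from the thesis:
# each "action" bundles an assumed (duration in ms, per-attempt success
# probability); the state is simply the remaining latency budget in ms.

# Hypothetical action set: (label, duration_ms, success_prob)
ACTIONS = [
    ("robust_mcs_long_slot", 0.50, 0.95),   # slow but reliable
    ("medium_mcs",           0.25, 0.80),
    ("aggressive_mcs_mini",  0.125, 0.55),  # fast but error-prone
]

def optimal_success_prob(budget_ms, step_ms=0.125):
    """Backward induction over a discretized latency budget.

    Returns V[n], the maximum achievable probability that the packet is
    delivered before the budget expires, and a greedy policy indexed by
    the number of remaining budget steps.
    """
    n = int(round(budget_ms / step_ms))
    V = [0.0] * (n + 1)          # V[b]: value with b*step_ms budget left
    policy = [None] * (n + 1)
    for b in range(1, n + 1):
        best, best_a = 0.0, None
        for label, dur, p in ACTIONS:
            cost = int(round(dur / step_ms))
            if cost > b:
                continue          # this attempt would overrun the deadline
            # succeed now with probability p, otherwise retry with the
            # reduced budget that remains after this attempt
            val = p + (1.0 - p) * V[b - cost]
            if val > best:
                best, best_a = val, label
        V[b], policy[b] = best, best_a
    return V[n], policy

if __name__ == "__main__":
    v, pol = optimal_success_prob(budget_ms=1.0)
    print(f"Optimal delivery probability within a 1 ms budget: {v:.4f}")
```

In this toy setting the policy naturally shifts toward faster, less reliable attempts as the remaining budget shrinks; the thesis additionally conditions the choice of numerology, mini-slot size, and MCS on the prevailing channel state and also trains model-free deep RL agents for the same objective.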