
                Data Driven Network Control with Reinforcement Learning

2018-12-13

Title: Data Driven Network Control with Reinforcement Learning



Speaker: Prof. Shuguang Cui, The Chinese University of Hong Kong, Shenzhen


We start with a brief introduction to Reinforcement Learning (RL) and then discuss its applications in self-organizing networks. The first application is handover (HO) control: we propose a two-layer framework to learn optimal HO controllers in possibly large-scale wireless systems supporting mobile users with heterogeneous mobility patterns. The framework first partitions the User Equipments (UEs) into clusters so that UEs within the same cluster share similar mobility patterns. Within each cluster, an asynchronous multi-user deep RL scheme is then developed to control the HO processes across the UEs, with the goal of lowering the HO rate while ensuring a certain system throughput. Each user runs a deep RL agent built on an LSTM recurrent neural network. We show that the adopted global-parameter-based asynchronous framework trains faster with more UEs, which nicely addresses the scalability issue in large systems. The second application is joint energy and access control in energy-harvesting wireless systems, where we show that a double deep RL solution can lead to significant system gains.
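The double deep RL solution mentioned above builds on the standard double-DQN idea: the online network selects the greedy next action, while a separate target network evaluates it, reducing the overestimation bias of vanilla Q-learning. The sketch below is illustrative only — the talk's actual network architecture, state space, and reward design are not given here — and shows the double-DQN target computation on a toy batch of Q-values.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute double-DQN regression targets for a batch of transitions.

    rewards:       (B,)   immediate rewards
    next_q_online: (B, A) Q-values of the *online* net at the next states
    next_q_target: (B, A) Q-values of the *target* net at the next states
    dones:         (B,)   1.0 where the episode terminated, else 0.0
    """
    # Online network picks the greedy next action (action selection)...
    greedy_actions = np.argmax(next_q_online, axis=1)
    # ...while the target network evaluates that action (action evaluation).
    evaluated = next_q_target[np.arange(len(rewards)), greedy_actions]
    # Terminal transitions bootstrap from nothing.
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch of two transitions with three actions each:
r    = np.array([1.0, 0.0])
q_on = np.array([[0.1, 0.9, 0.2],    # online net prefers action 1
                 [0.5, 0.4, 0.3]])   # online net prefers action 0
q_tg = np.array([[0.0, 0.5, 2.0],
                 [1.0, 0.0, 0.0]])
d    = np.array([0.0, 1.0])          # second transition is terminal
print(double_dqn_targets(r, q_on, q_tg, d, gamma=0.9))
```

Note that in the first transition the online net's greedy action (action 1) is evaluated by the target net at 0.5, not at the target net's own maximum of 2.0 — this decoupling is exactly what tames overestimation.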



Shuguang Cui's research papers are widely cited; he was named a Thomson Reuters Highly Cited Researcher in 2014 and listed by ScienceWatch among the World's Most Influential Scientific Minds. He received the IEEE Signal Processing Society 2012 Best Paper Award and is a two-time best conference paper award recipient. He has served as chair, area editor, or associate editor for numerous professional conferences, journals, and committees. He was elected an IEEE Fellow in 2013 and an IEEE Communications Society Distinguished Lecturer in 2014. In 2018, he was selected as a Changjiang Scholar by the Ministry of Education and as the leader of a Guangdong Pearl River Innovation Team.

National Key Laboratory of Radar Signal Processing · Faculty of Information and Communication Engineering

111 Project Base for Radar Cognitive Detection, Imaging and Recognition · Office of International Cooperation and Exchange
