Virtual Seminar by Danijela Cabric

Title: Open Set Wireless Transmitter Authorization: Deep Learning Approaches and Practical Considerations

Date and Time: November 19, 2021 at 11AM ET

Registration Process: Please register at

Abstract: As the Internet of Things (IoT) continues to grow, ensuring the security of systems that rely on wireless IoT devices has become critically important. Deep learning-based passive physical layer transmitter authorization systems, referred to as RF fingerprinting, have recently been introduced for this purpose. RF fingerprinting approaches use wireless signals directly to verify the identity of radio frequency transmitters based on imperfections in their radio hardware. However, most existing work on machine learning for RF fingerprinting has focused on classification approaches that assume a closed set of transmitters. In practice, the most serious security problems arise if malicious transmitters outside this closed set are misclassified and authorized. In this talk, we formulate the problem of recognizing authorized transmitters and rejecting new transmitters as open set recognition and anomaly detection. We consider approaches based on one or several binary classifiers, multi-class classifiers, and signal reconstruction, and study how these approaches scale with the number of authorized transmitters. We propose using a known set of unauthorized transmitters to assist the training and study its impact. The robustness of authorization against temporal changes in fingerprints is also evaluated as a function of the approach and the dataset structure. For this work, we have created a large Wi-Fi dataset consisting of about 10 million packets sent by 174 off-the-shelf Wi-Fi radios and simultaneously captured by 41 USRPs during 4 captures performed in the ORBIT testbed over the course of a month. We have also developed generative deep learning methods to emulate unauthorized signal samples for the augmentation of training datasets. We explored two different data augmentation techniques: one that exploits a limited number of known unauthorized transmitters, and another that does not require any unauthorized transmitters.
Our results indicate that data augmentation yields significant increases in open set classification accuracy, especially when the authorized set is small. Another practical problem in authentication systems based on deep learning is the training time required when new transmitters are added. We have developed a fast authentication algorithm based on information retrieval that uses feature vectors as RF fingerprints and locality-sensitive hashing (LSH) to create a database that can be quickly searched by an approximate nearest neighbor algorithm. The proposed algorithm matches the accuracy of deep learning models while allowing for more than 100x faster retraining.
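The retrieval idea can be sketched in a few lines. The following toy example (our illustration, not the authors' implementation; all names and parameters are hypothetical) hashes fingerprint feature vectors with random hyperplanes so that a query is compared only against the candidates that fall into the same bucket:

```python
import numpy as np

class LSHIndex:
    """Random-hyperplane LSH over fingerprint feature vectors."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        # Sign pattern of v against the random hyperplanes = bucket id.
        return tuple((self.planes @ v > 0).astype(int))

    def add(self, label, v):
        self.buckets.setdefault(self._key(v), []).append((label, v))

    def query(self, v):
        # Approximate nearest neighbor: linear scan only within one bucket.
        candidates = self.buckets.get(self._key(v), [])
        if not candidates:
            return None
        label, _ = min(candidates, key=lambda c: np.linalg.norm(c[1] - v))
        return label

rng = np.random.default_rng(1)
index = LSHIndex(dim=16)
centers = {t: rng.standard_normal(16) for t in ["tx0", "tx1", "tx2"]}
for t, c in centers.items():
    for _ in range(20):
        index.add(t, c + 0.05 * rng.standard_normal(16))

# Querying a stored fingerprint lands in its own bucket and matches exactly.
probe = centers["tx1"] + 0.05 * rng.standard_normal(16)
index.add("tx1", probe)
print(index.query(probe))  # -> tx1
```

Because adding a transmitter only inserts new vectors into buckets, no model retraining is needed, which is the source of the speedup the abstract describes.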

Bio: Danijela Cabric is a Professor in the Electrical and Computer Engineering Department at the University of California, Los Angeles. Her research interests include novel radio architectures, signal processing, communications, machine learning, and networking techniques for spectrum sharing, 5G millimeter-wave, massive MIMO, and IoT systems. Dr. Cabric received the Samueli Fellowship in 2008, the Okawa Foundation Research Grant in 2009, the Hellman Fellowship in 2012, the National Science Foundation Faculty Early Career Development (CAREER) Award in 2012, and the Qualcomm Faculty Award in 2020 and 2021. Dr. Cabric is an IEEE Fellow. Her research on deep learning-based RF transmitter fingerprinting is supported by the SRC/JUMP CONIX center.

About the Monthly Virtual Seminar Series:

The IEEE TCCN Security Special Interest Group conducts a monthly virtual seminar series to highlight the challenges in securing the next generation (xG) wireless networks. The talks will feature cutting edge research addressing both technical and policy issues to advance the state-of-the-art in security techniques, architectures, and algorithms for wireless communications.

Virtual Seminar by Walid Saad

Title: Brainstorming Generative Adversarial Networks (BGANs): Framework and Application to Wireless Networks

Date and Time: October 22, 2021 at 10AM EDT

Registration Process: Please register at

Abstract: Due to major communication, privacy, and scalability challenges stemming from the emergence of large-scale Internet of Things services, machine learning is witnessing a major departure from traditional centralized cloud architectures toward a distributed machine learning (ML) paradigm in which data is dispersed and processed across multiple edge devices. A prime example of this emerging distributed ML paradigm is Google’s renowned federated learning framework. Despite the tremendous recent interest in distributed ML, prior work in the area remains largely focused on the development of distributed ML algorithms for inference and classification tasks. In contrast, in this talk, we introduce the novel framework of brainstorming generative adversarial networks (BGANs), which constitutes one of the first implementations of distributed, multi-agent GAN models that do not rely on a centralized parameter server. We show how BGAN allows multiple agents to gain information from one another, in a fully distributed manner, without sharing their real datasets but by “brainstorming” their generated data samples. We then demonstrate the higher accuracy and scalability of BGAN compared to the state of the art through extensive experiments, and illustrate how BGAN can be used to address key problems in the field of wireless communications by analyzing a millimeter wave channel modeling problem for wireless networks that rely on unmanned aerial vehicles (UAVs). We conclude this talk with an overview of the future outlook of the exciting area of distributed ML and its current and future applications in wireless systems.

Bio: Walid Saad (S’07, M’10, SM’15, F’19) received his Ph.D. degree from the University of Oslo in 2010. He is currently a Professor in the Department of Electrical and Computer Engineering at Virginia Tech, where he leads the Network sciEnce, Wireless, and Security (NEWS) laboratory. His research interests include wireless networks, machine learning, game theory, security, unmanned aerial vehicles, cyber-physical systems, and network science. Dr. Saad is a Fellow of the IEEE. He is also the recipient of the NSF CAREER award in 2013 and of the Young Investigator Award from the Office of Naval Research (ONR) in 2015. He is the author/co-author of papers that received ten conference best paper awards, at WiOpt in 2009, ICIMP in 2010, IEEE WCNC in 2012, IEEE PIMRC in 2015, IEEE SmartGridComm in 2015, EuCNC in 2017, IEEE GLOBECOM in 2018, IFIP NTMS in 2019, IEEE ICC in 2020, and IEEE GLOBECOM in 2020. He is the recipient of the 2015 Fred W. Ellersick Prize from the IEEE Communications Society, the 2017 IEEE ComSoc Best Young Professional in Academia award, the 2018 IEEE ComSoc Radio Communications Committee Early Achievement Award, and the 2019 IEEE ComSoc Communication Theory Technical Committee Early Achievement Award. He is also a co-author of the 2019 and 2021 IEEE Communications Society Young Author Best Papers. He currently serves as an editor for several major IEEE Transactions.


Virtual Seminar by Ekram Hossain

Title: Federated Learning in Unreliable and Resource-Constrained Cellular Wireless Networks

Date: May 26, 2021; Time: 11AM EDT

Registration: Please register at

Abstract: Federated learning is a machine learning setting in which a central server trains a learning model by using remote devices. Federated learning algorithms cannot be employed in wireless networks unless the unreliable and resource-constrained nature of the wireless medium is taken into account. In this talk, I shall present a federated learning algorithm that is suitable for cellular wireless networks in a real-world scenario. I shall discuss its convergence properties and the effects of local computation steps and communication steps on its convergence. Through experiments on real and synthetic datasets, I shall demonstrate the convergence of the proposed algorithm.
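For readers unfamiliar with the setting, a minimal federated-averaging sketch (a generic FedAvg-style loop, not the speaker's algorithm; the data and parameters are made up) illustrates the interplay between local computation steps, communication rounds, and unreliable client participation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, split across 10 clients.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(10):
    X = rng.standard_normal((40, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(40)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, local_steps=5):
    # Several local gradient steps between communications (saves uplink rounds).
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(2)
for _ in range(20):
    # Unreliable medium: each round only a random subset of clients reports back.
    active = [c for c in clients if rng.random() < 0.6]
    if not active:
        continue
    w = np.mean([local_update(w, X, y) for X, y in active], axis=0)

print(np.round(w, 2))  # converges near true_w = [2, -1]
```

Increasing `local_steps` trades communication for computation, while the random dropout models the link unreliability the talk addresses.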

Bio: Ekram Hossain (IEEE Fellow) is a Professor in the Department of Electrical and Computer Engineering at the University of Manitoba, Winnipeg, Canada. He is a Member (Class of 2016) of the College of the Royal Society of Canada, a Fellow of the Canadian Academy of Engineering, and also a Fellow of the Engineering Institute of Canada. He received his Ph.D. in Electrical Engineering from the University of Victoria, Canada, in 2001. Dr. Hossain’s current research interests include design, analysis, and optimization of wireless communication networks (with an emphasis on beyond-5G/6G cellular), applied machine learning and game theory, and network economics. He was elevated to an IEEE Fellow “for contributions to spectrum management and resource allocation in cognitive and cellular radio networks”. He was listed as a Clarivate Analytics Highly Cited Researcher in Computer Science in 2017, 2018, 2019, and 2020. Dr. Hossain has won several research awards, including the 2017 IEEE Communications Society Best Survey Paper Award, the 2011 IEEE Communications Society Fred Ellersick Prize Paper Award, the University of Manitoba Merit Award in 2010, 2013, 2014, and 2015 (for Research and Scholarly Activities), and the IEEE Wireless Communications and Networking Conference 2012 (WCNC’12) Best Paper Award. He received the 2017 IEEE ComSoc TCGCC (Technical Committee on Green Communications & Computing) Distinguished Technical Achievement Recognition Award “for outstanding technical leadership and achievement in green wireless communications and networking”. Currently he serves as the Editor-in-Chief of the IEEE Press and an Editor for IEEE Transactions on Mobile Computing. Previously, he served as an Area Editor for the IEEE Transactions on Wireless Communications in the area of “Resource Management and Multiple Access” (2009-2011) and an Editor for the IEEE Journal on Selected Areas in Communications – Cognitive Radio Series (2011-2014).
He serves as the Director of Magazines for the IEEE Communications Society (2020-2021). Dr. Hossain was an elected Member of the Board of Governors of the IEEE Communications Society for the term 2018-2020. He is a Distinguished Lecturer of the IEEE Communications Society. He is a registered Professional Engineer in the province of Manitoba, Canada.

About the Monthly Virtual Seminar Series: The IEEE TCCN Special Interest Group for AI and Machine Learning in Security conducts a monthly virtual seminar series to highlight the challenges in securing the next generation (xG) wireless networks. The talks will feature cutting edge research addressing both technical and policy issues to advance the state-of-the-art in security techniques, architectures, and algorithms for wireless communications.


Virtual Seminar by Gene Tsudik

Title: Secure Code Execution on Untrusted Remote Devices

Date: April 28, 2021; Time: 1PM EDT

Registration: Please register using the following link. You will receive a link in your email to attend the talk online.

Abstract: Our society is increasingly reliant upon a wide range of Cyber-Physical Systems (CPS), Internet-of-Things (IoT), embedded, and so-called “smart” devices. They often perform safety-critical functions in numerous settings, e.g., home, office, medical, automotive, and industrial. Some devices are small, cheap, and specialized sensors and/or actuators. They tend to have meager resources and run simple software, sometimes upon “bare metal”. If such devices are left unprotected, the consequences of forged sensor readings or ignored actuation commands can be catastrophic, particularly in safety-critical settings. This prompts the following three questions: (1) How to trust data produced by a simple remote embedded device? (2) How to ascertain that this data was produced via execution of expected software? And, (3) Is it possible to attain (1) and (2) under the assumption that all software on the remote device might be modified or compromised? In this talk, we answer these questions by describing APEX: (Verified) Architecture for Proofs of Execution, the first result of its kind for low-end embedded systems. This work has a range of applications, especially to authenticated sensing and trustworthy actuation. APEX incurs low overhead, making it affordable even for the lowest-end embedded devices; it is also publicly available.

Bio: Gene Tsudik is a Distinguished Professor of Computer Science at the University of California, Irvine (UCI). He obtained his PhD in Computer Science from USC in 1991. Before coming to UCI in 2000, he was at the IBM Zurich Research Laboratory (1991-1996) and USC/ISI (1996-2000). His research interests include many topics in security, privacy, and applied cryptography. Gene Tsudik is a Fulbright Scholar, a Fulbright Specialist (twice), a fellow of the ACM, IEEE, AAAS, and IFIP, and a foreign member of Academia Europaea. From 2009 to 2015 he served as Editor-in-Chief of ACM Transactions on Information and System Security (TISSEC, renamed TOPS in 2016). Gene is the recipient of the 2017 ACM SIGSAC Outstanding Contribution Award. He is also the author of the first crypto-poem published as a refereed paper.


Virtual Seminar by Yalin Sagduyu

Title: Adversarial Machine Learning for Wireless Security in 5G and Beyond

Date and Time: March 26, 2021 at 10AM ET

Registration Process: Please register using the following link. You will receive a link in your email to attend the talk online.

Abstract: Machine learning provides powerful means to learn from the dynamic spectrum environment and solve complex tasks for wireless communications. Supported by recent advances in algorithmic and computational capabilities, deep learning has emerged as a viable solution to efficiently utilize the limited spectrum resources and optimize wireless communications, with 5G and beyond enhancements to meet the ever-growing demands for high-rate and low-latency communications. As deep learning is becoming a key component in emerging wireless technologies, a new security threat arises due to adversarial machine learning that exploits the vulnerabilities of deep learning to adversarial manipulations. Adversarial machine learning has been applied to different data domains ranging from computer vision to natural language processing. By considering the unique characteristics of the wireless medium, this talk will present adversarial machine learning as a new attack surface for the next-generation communication systems. Novel attack and defense mechanisms built upon adversarial machine learning will be described with examples from signal classification, dynamic spectrum access, and 5G and beyond applications related to spectrum co-existence, user authentication, covert communications, and network slicing. Research challenges and directions will be discussed for effective and safe adoption of much-needed machine learning techniques in the emerging wireless technologies.
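The core vulnerability that adversarial machine learning exploits can be illustrated with a toy fast-gradient-sign-style perturbation against a linear "signal classifier" (our illustrative sketch, not a scheme from the talk; the model and numbers are made up): a small, structured change to the input flips the decision.

```python
import numpy as np

# Toy 'signal classifier': logistic model, class 1 if sigmoid(w.x) > 0.5.
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return 1 / (1 + np.exp(-w @ x)) > 0.5

x = np.array([2.0, 0.5, 1.0])   # clean sample: w.x = 2 - 1 + 0.5 = 1.5 -> class 1

# FGSM-style attack: step against the sign of the input gradient of the score.
# For a linear score, that gradient direction is simply w itself.
eps = 0.9
x_adv = x - eps * np.sign(w)    # bounded L-infinity perturbation of the input

print(predict(x), predict(x_adv))  # True False: the classifier flips to class 0
```

In the wireless setting described in the talk, the analogous perturbation is an over-the-air signal crafted by an adversary, which makes the attack surface physical rather than purely digital.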

Bio: Dr. Yalin Sagduyu is the Director of the Networks and Security Division at Intelligent Automation, Inc. (IAI). He received his Ph.D. degree in Electrical and Computer Engineering from the University of Maryland, College Park. At IAI, he directs a division of over 50 research scientists and engineers and executes a broad portfolio of R&D projects on wireless communications, networks, security, machine learning, adversarial machine learning, and 5G and beyond. He has been a Visiting Research Professor in the Electrical and Computer Engineering Department of the University of Maryland, College Park. He served as a Conference Track Chair at IEEE PIMRC, IEEE GlobalSIP, and IEEE MILCOM, and on the organizing committee of IEEE GLOBECOM. He organized and chaired workshops at IEEE CNS, IEEE ICNP, ACM MobiCom, and ACM WiSec. He received the Best Paper Award at IEEE HST.


Virtual Seminar by Wade Trappe

Title: A Quick Look at New Risks Facing Wireless Systems

Date and Time: February 25, 2021 at 10AM ET

Registration Process: Please register using the following link. You will receive a link in your email to attend the talk online.

Abstract: Wireless networks are susceptible to a wide range of security risks. The evolution from older wireless technologies, such as 3G and 802.11, to newer technologies, such as 5G and mmWave, has not fundamentally changed the core challenges that undermine the security of wireless networks: wireless systems are easy to access, the wireless medium is easy to broadcast and eavesdrop on, and the increasingly pervasive nature of these systems means that we rely on them ever more for day-to-day functions. This talk will examine a broad sampling of wireless-based threats that will likely become more prevalent as we move toward the next generation of wireless systems, which are characterized by a closer integration of communications, computation, and the real world. The challenges we face in securing these systems require that wireless engineers and systems developers think more holistically about how they design and implement security mechanisms. In short, we must work to protect our systems “across the stack” and even “into the application.”

Bio: Wade Trappe is a Professor in the Electrical and Computer Engineering Department at Rutgers University, and Associate Director of the Wireless Information Network Laboratory (WINLAB), where he directs WINLAB’s research in wireless security. He has led several federally funded projects in the area of cybersecurity and communication systems, including projects involving security and privacy for sensor networks, physical layer security for wireless systems, a security framework for cognitive radios, the development of wireless testbed resources (the ORBIT testbed), and new RFID technologies. He was the principal investigator for the original DARPA Spectrum Challenge, in which teams battled for spectrum superiority against each other in the ORBIT testbed arena. His experience in network security and wireless spans over 20 years, and he has co-authored a popular textbook in security, Introduction to Cryptography with Coding Theory, as well as several monographs on wireless security, including Securing Wireless Communications at the Physical Layer and Securing Emerging Wireless Systems: Lower-layer Approaches.


Virtual Seminar by Rose Qingyang Hu

Title: AI and Machine Learning in Spectrum Sharing Security

Date and Time: January 29, 2021 at 10AM ET

Registration Process: Please register using the following link. You will receive a link in your email to attend the talk online.

Abstract: Dynamic spectrum sharing has been widely considered a key enabler for supporting future wireless networks with massive connectivity and pervasive communications. The complexity and dynamics of spectrum sharing systems expose them to various new attacks, which require novel security and protection mechanisms that are adaptive, reliable, and scalable. Artificial intelligence (AI) and machine learning (ML) based methods have been widely explored to address these issues. In this talk, we will present recent research advancements in AI/ML-based spectrum sharing as well as the corresponding security mechanisms. In particular, we will focus on state-of-the-art methodologies for improving the performance of spectrum sharing communication systems using AI/ML in different sharing paradigms, such as cognitive radio networks, licensed shared access/spectrum access systems, LTE-U/LAA networks, and ambient backscatter networks. We will further elaborate on how AI and ML are used to tackle spectrum-sharing-specific security issues such as primary user emulation attacks, spectrum sensing data falsification attacks, jamming/eavesdropping attacks, and privacy issues, as well as how AI/ML can be exploited to launch adversarial attacks on spectrum sharing systems. We expect this talk to highlight the challenges as well as the research opportunities in exploring AI and ML techniques to support increasingly important yet complicated spectrum sharing and the related security mechanisms.

Bio: Rose Qingyang Hu is currently a Professor in the Electrical and Computer Engineering Department and Associate Dean for Research of the College of Engineering at Utah State University. Besides more than 12 years of academic research experience, Prof. Rose Hu has more than 10 years of R&D experience with Nortel, Blackberry, and Intel as a technical manager, senior research scientist, and senior wireless system architect, leading industrial 3G and 4G technology development, 3GPP/IEEE standardization, and system-level simulation and performance evaluation. Her current research interests include next-generation wireless communications, wireless network design and optimization, the Internet of Things and cyber-physical systems, AI/ML, mobile edge computing, and wireless security. She has published over 260 papers in leading IEEE journals and conferences and holds over 30 patents in her research areas. Prof. Rose Hu is a Fellow of the IEEE, a NIST Communication Technology Laboratory Innovator 2020, an IEEE Communications Society Distinguished Lecturer 2015-2018, an IEEE Vehicular Technology Society Distinguished Lecturer 2020-2022, a member of the Phi Kappa Phi honor society, and a recipient of Best Paper Awards from IEEE GLOBECOM 2012, IEEE ICC 2015, IEEE VTC Spring 2016, and IEEE ICC 2016. She served as TPC Co-Chair for IEEE ICC 2018 and serves as TPC Co-Chair for IEEE GLOBECOM 2023. She is currently serving on the editorial boards of IEEE Transactions on Wireless Communications, IEEE Transactions on Vehicular Technology, IEEE Communications Magazine, and IEEE Wireless Communications Magazine.


Virtual Seminar by Kaushik Chowdhury

Title: Deep Convolutional Neural Networks for Device Identification

Date and Time: December 16, 2020 at 9AM ET

Registration Process: Please register using the following link. You will receive a link in your email to attend the talk online.

Abstract: Network densification is poised to enable the massive throughput jump expected in the era of 5G and beyond. In the first part of the talk, we identify the challenges of verifying the identity of a particular emitter in a large pool of similar devices based on unique distortions in the signal, or ‘RF fingerprints’, as it passes through a given transmitter chain. We show how deep convolutional neural networks can uniquely identify a radio in a large signal dataset composed of over a hundred WiFi radios with accuracy close to 99%. For this, we use tools from machine learning, namely data augmentation, attention networks, and deep architectures that have proven successful in image processing, and modify these methods to work in the RF domain. In the second part of the talk, we show how intentional injection of distortions and carefully crafted FIR filters applied at the transmitter side can enhance classification. Finally, we discuss how to detect new devices not previously seen during training using observed statistical patterns. We conclude by showing a glimpse of other applications of RF fingerprinting, such as 5G waveform detection in large-scale experimental platforms and identifying a specific UAV in a swarm.
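The last point, detecting devices unseen during training, is essentially open set recognition. A minimal sketch (our illustration with made-up data, not the talk's method) rejects a sample whose feature vector lies too far from every authorized device's fingerprint centroid:

```python
import numpy as np

def fit_centroids(features, labels):
    """Mean feature vector ('fingerprint centroid') per authorized device."""
    return {l: features[labels == l].mean(axis=0) for l in np.unique(labels)}

def classify_open_set(x, centroids, threshold):
    """Closest authorized device, or None if the distance exceeds the
    threshold (i.e., treat the sample as a new, unauthorized device)."""
    label, dist = min(((l, np.linalg.norm(x - c)) for l, c in centroids.items()),
                      key=lambda t: t[1])
    return label if dist <= threshold else None

rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])   # two authorized devices
feats = np.vstack([c + 0.1 * rng.standard_normal((50, 2)) for c in centers])
labels = np.repeat(np.array(["dev0", "dev1"]), 50)
centroids = fit_centroids(feats, labels)

print(classify_open_set(np.array([0.05, -0.02]), centroids, threshold=1.0))  # dev0
print(classify_open_set(np.array([10.0, -8.0]), centroids, threshold=1.0))   # None
```

In practice the features would come from a learned embedding (e.g., the penultimate layer of a CNN) rather than raw samples, and the threshold would be calibrated on held-out data.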

Bio: Kaushik Chowdhury is a Professor and Faculty Fellow in the ECE Department and Associate Director of the Institute for the Wireless IoT at Northeastern University, Boston. He was awarded the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2017, the DARPA Young Faculty Award in 2017, the Office of Naval Research Director of Research Early Career Award in 2016, and the NSF CAREER award in 2015. He has received best paper awards at several conferences, including Infocom, Globecom, ICC (3x), SenSys, ICNC, and DySpan. He is presently a co-director of the Platforms for Advanced Wireless Research (PAWR) project office and the Colosseum RF emulator. His current research interests span applied machine learning for wireless systems, networked robotics, wireless charging, and at-scale experimentation for emerging 5G-and-beyond networks.


Secure Cognitive Radio Networks with Multi-Phase Smart Relaying and Cooperative Jamming

Originally posted in Sec-IG blog (link to the original post)

Pin-Hsun Lin and Eduard A. Jorswieck

Dresden University of Technology, Germany

Due to the broadcast nature of wireless networks, communications are potentially subject to attacks, such as passive eavesdropping or active jamming. Instead of using traditional cryptographic approaches [2] to combat the malicious users, we consider information-theoretic secrecy. Note that the information-theoretic secrecy approach, initiated by Shannon [3] and developed by Wyner [4], can exploit the randomness of wireless channels to ensure the secrecy of transmitted messages without any assumption on the computational capabilities of the malicious users. As a performance measure for communication systems with secrecy constraints, the secrecy rate is defined as a rate at which a message can be transmitted reliably and securely between the legitimate nodes. However, similar to communication networks without secrecy constraints, the overall performance is limited by the relative channel qualities. Many signal processing and multi-user techniques, such as the use of multiple antennas, have therefore been proposed to overcome this limitation.
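For concreteness, in the scalar Gaussian wiretap channel this notion reduces to the well-known secrecy capacity expression (standard notation from the wiretap literature, not specific to this article):

```latex
C_s \;=\; \left[\,C_B - C_E\,\right]^+
    \;=\; \left[\log_2\!\left(1+\mathrm{SNR}_B\right) - \log_2\!\left(1+\mathrm{SNR}_E\right)\right]^+ ,
```

where \(C_B\) and \(C_E\) are the capacities of the legitimate and eavesdropper channels, respectively, and \([x]^+ = \max(x,0)\). This makes explicit why performance is limited by the relative channel qualities: a positive secrecy rate requires the legitimate channel to be stronger than the eavesdropper's.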

Recently, there has been substantial interest in the secrecy of multi-user systems [5], with a particular emphasis on potential cooperation between users to enhance the secrecy of communications. Cooperation in communication networks is an emerging technique to improve the reliability of wireless communication systems, and it involves multiple parties assisting each other in the transmission of messages; see, e.g., [6]. Assuming that the cooperative node(s) can be trusted and that they aim at increasing the secrecy of the original transmission in the presence of a possible external eavesdropper, several cooperative strategies have been proposed. As one kind of cooperative communication scheme, cognitive radio technology was proposed by Mitola in [7] as an efficient way to enhance spectrum efficiency and has seen considerable development over the last few decades. The concept of cooperation for secrecy, and the corresponding cooperative techniques, can naturally be applied to cognitive radio networks.

In this article, we consider a cognitive radio (CR) network including four single-antenna half-duplex nodes, where the CR receiver is treated as a potential eavesdropper with respect to the primary transmission [1]. In exchange for cooperation from the CR user to improve/maintain its own secrecy rate, the primary user allows the CR user to share part of the spectrum. Compared to some important literature in this research line, e.g., [8], [9], and [10], we additionally consider the following secure coexistence conditions:

(i) the transmission of CR transmitter does not degrade the primary user’s secrecy rate, and
(ii) the encoder and decoder at the primary transmitter and receiver, respectively, are left intact whether the CR transmits or not.

The reasons to consider the secure coexistence conditions are twofold. First, to utilize the time-frequency slots in the overlay sense, cognitive radio systems are obligated not to interfere with the primary systems, which is a common requirement in cognitive radio system design. Second, under condition (ii), cognitive radios are backward compatible with legacy systems that cannot sense and adapt to the environment agilely. These conditions make the cognitive radio capable of operating in broader usage scenarios. One possible practical scenario for the considered model is that the primary users belong to a licensed system that sells rights of spectrum usage to a femtocell system. Here, the CR transmitter and receiver are the femtocell base station and users, respectively. However, the femtocell operator may not be able to guarantee whether the femtocell users are malicious or not. Thus, to provide secure transmission to the primary users, not only must the primary base station use wiretap coding, but the femtocell base station must also help to maintain the secure transmission for the primary system.

We analyze the achievable secrecy rate, under weak secrecy, of the cognitive user in the cognitive radio network under the secure coexistence conditions. In addition, we derive the rate constraints that guarantee that the primary user's weak secrecy is unchanged as well, which requires a different analysis compared to [8], [9], [10]. For example, the relation between the channels observed by the primary transmitter before and after the cognitive transmitter becomes active should be investigated for proper relay and jamming design. Otherwise, either the reliability of the cognitive user or the secrecy of the primary user will be violated. In Fig. 1 we show two improper system designs, where the black rectangle denotes the wiretap code used by the primary user, i.e., its rows and columns are indexed by the secure and confusion messages, respectively, and each entry is a codeword. The height and width of the blue rectangle denote the capacity of the channel from the primary transmitter to the primary receiver and the capacity of the channel from the primary transmitter to the CR receiver, respectively, after the CR transmitter starts to transmit. Fig. 1(a) shows a case in which both reliability and secrecy are fulfilled. However, the cognitive transmitter may overdesign the relay power for the primary user's signal such that the capacity is too large, which is inefficient for the CR user: the CR transmitter wastes power on constructing an unnecessarily good channel for the primary user, while the power remaining for the CR's own transmission is reduced. In contrast, Fig. 1(b) shows a case in which the relaying is efficient, i.e., the new channel is efficient for the transmission of the secure message; however, the confusion rate is not high enough for the new channel, so the secrecy is violated. Therefore, the analysis of the aforementioned rate constraints is important.
In addition, we derive a capacity upper bound for the CR user under both discrete memoryless and additive white Gaussian noise (AWGN) channels to evaluate the performance of the achievable scheme.
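As a toy illustration of the two failure modes sketched in Fig. 1 (not the analysis in [1]), the following Python snippet checks the standard wiretap-coding conditions: the total code rate must fit the main channel (reliability), and the confusion rate must cover the eavesdropper's channel (secrecy). All numerical values are hypothetical.

```python
import math

def awgn_capacity(snr):
    """AWGN channel capacity in bits per channel use (linear SNR)."""
    return 0.5 * math.log2(1.0 + snr)

def check_wiretap_design(r_secret, r_confusion, c_main, c_eve):
    """Check the two conditions illustrated in Fig. 1:
    reliability -- the total code rate must fit inside the (new) main channel;
    secrecy     -- the confusion rate must cover the eavesdropper's channel."""
    reliable = r_secret + r_confusion <= c_main
    secret = r_confusion >= c_eve
    return reliable, secret

# Hypothetical capacities after the CR transmitter becomes active.
c_main = awgn_capacity(15.0)   # 2.0 bits/channel use on the primary link
c_eve = 0.8                    # capacity of the channel overheard by the CR receiver

# Efficient design: the secure rate fills the gap c_main - c_eve.
print(check_wiretap_design(1.2, 0.8, c_main, c_eve))   # (True, True)
# Fig. 1 (b)-style failure: confusion rate too low, so secrecy is violated.
print(check_wiretap_design(1.5, 0.5, c_main, c_eve))   # (True, False)
```

A Fig. 1 (a)-style overdesign corresponds to making `c_main` larger than the code actually needs, which wastes relay power without increasing the secure rate.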

Fig. 1. Improper design of the relay and jamming.

We then propose a multi-phase transmission scheme, which introduces the following additional phases. First, to accommodate the operation of practical systems, we include a first additional phase in which the CR transmitter listens to and decodes the primary's signal. Note that in the literature the primary user's signal is commonly assumed to be non-causally known at the CR transmitter. Second, we introduce another additional phase, as the third one, to endow the cognitive system with an extra degree of freedom for utilizing different transmission schemes. For AWGN channels, this degree of freedom improves performance by allowing pure relaying and jamming without simultaneously transmitting the cognitive user's own signal.
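The time-averaging implied by such a multi-phase scheme can be sketched in a few lines; the phase fractions and per-phase rate below are hypothetical, and a real design would optimize them jointly with the power allocation.

```python
def cr_rate_3phase(tau, phase_rates):
    """Time-averaged rate of the CR user under a multi-phase scheme.
    tau: time fractions of the phases (must sum to one);
    phase_rates: the CR user's rate within each phase -- zero while the
    CR transmitter only listens (phase 1) or purely relays/jams (phase 3)."""
    assert abs(sum(tau) - 1.0) < 1e-9, "phase fractions must sum to 1"
    return sum(t * r for t, r in zip(tau, phase_rates))

# Hypothetical split: 20% listening/decoding, 60% relaying plus own data,
# 20% clean relaying and jamming with no own data.
print(cr_rate_3phase([0.2, 0.6, 0.2], [0.0, 1.5, 0.0]))
```

The trade-off is visible directly: lengthening the listening or clean-relaying phases lowers the CR user's average rate, but may be necessary to protect the primary user's reliability and secrecy.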

Finally, we illustrate our results through a numerical example, shown in Fig. 2, based on a geometrical setup, which highlights the impact of the node geometry on the achievable rates and on the optimal power allocation and time splitting of the CR transmitter. We fix the locations of the primary transmitter and receiver at the coordinates (0,0) and (1,0), respectively. The CR receiver is fixed at (1,-1). We assume a path-loss channel model. The power constraints at the primary and CR transmitters are 10 dB and 20 dB, respectively. Note that we also include power control as a design parameter for the CR transmitter, i.e., the transmission power used is not necessarily fixed to its maximum. Rates are given in bits per channel use. Further numerical results in [1] show that 1) the proposed 3-phase clean relaying scheme indeed improves the cognitive user's rate; and 2) the proposed achievable scheme is close to capacity when the CR transmitter is far enough from the primary receiver and the CR receiver is close enough to the primary transmitter.

Fig. 2. Maximum achievable CR user’s rates as a function of the position of the CR transmitter.
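The geometrical setup behind Fig. 2 is easy to reproduce in outline. The sketch below uses the node coordinates and power constraints stated above, but the path-loss exponent (here 3) is an assumption, and the rate computed is only an interference-free upper bound on the CR link; the full design in [1] also splits power and time to protect the primary user.

```python
import math

def gain(a, b, alpha=3.0):
    """Path-loss power gain d^(-alpha) between two points; alpha = 3 is an
    assumed exponent, since the exponent used in [1] is not stated here."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    return d ** (-alpha)

# Node geometry from the numerical example; powers in linear scale.
pt, pr, cr_rx = (0.0, 0.0), (1.0, 0.0), (1.0, -1.0)
p_primary, p_cr = 10.0, 100.0   # 10 dB and 20 dB power constraints

def cr_direct_rate(cr_tx):
    """Interference-free upper bound on the CR link rate (bits per channel
    use) from a candidate CR transmitter position."""
    snr = p_cr * gain(cr_tx, cr_rx)
    return math.log2(1.0 + snr)

# Primary link SNR for reference, using the fixed transmitter/receiver pair.
print(round(math.log2(1.0 + p_primary * gain(pt, pr)), 2))

# Sweep the CR transmitter along a line to see the geometry's effect.
for x in (0.5, 1.0, 1.5):
    print((x, round(cr_direct_rate((x, -0.5)), 2)))
```

Sweeping `cr_tx` over a grid in this way reproduces the qualitative shape of Fig. 2: the closer the CR transmitter sits to its own receiver (and the farther from the primary receiver), the higher the achievable CR rate.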



[1] P.-H. Lin, F. Gabry, R. Thobaben, E. A. Jorswieck, and M. Skoglund, “Multi-phase smart relaying and cooperative jamming in secure cognitive radio networks,” IEEE Transactions on Cognitive Communications and Networking, vol. 2, no. 1, pp. 38–52, Mar. 2016.

[2] A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, Handbook of Applied Cryptography. Boca Raton, FL, USA: CRC Press, 1996.

[3] C. E. Shannon, “Communication theory of secrecy systems,” Bell Syst. Tech. J., vol. 28, no. 4, pp. 656–715, Oct. 1949.

[4] A. D. Wyner, “The wire-tap channel,” Bell Syst. Tech. J., vol. 54, no. 8, pp. 1355–1387, Oct. 1975.

[5] Y. Liang, A. Somekh-Baruch, H. V. Poor, S. S. Shamai, and S. Verdú, “Capacity of cognitive interference channels with and without secrecy,” IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 604–619, Feb. 2009.

[6] H. G. Bafghi, S. Salimi, B. Seyfe, and M. R. Aref, “Cognitive interference channel with two confidential messages,” in Proc. IEEE Int. Symp. Inf. Theory Appl. (ISITA), Taichung, Taiwan, 2010, pp. 952–956.

[7] R. K. Farsani and R. Ebrahimpour, “Capacity theorems for the cognitive radio channel with confidential messages,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Honolulu, HI, USA, 2014, pp. 1416–1420.

Artificial Intelligence as an Enabler for Cognitive Self-Organizing Future Networks

Originally posted in Sec-IG blog (link to the original post)

Siddique Latif¹, Farrukh Pervez¹, Muhammad Usama², Junaid Qadir²

¹National University of Science and Technology, Islamabad
²Information Technology University (ITU), Lahore

The explosive increase in the number of smart devices hosting sophisticated applications is rapidly changing the landscape of the information and communication technology industry. Mobile subscriptions, expected to reach 8.9 billion by 2022 [1], will drastically increase the demand for extra capacity, with aggregate throughput anticipated to grow by a factor of 1000 [2]. In an already crowded radio spectrum, it becomes increasingly difficult to meet the ever-growing bandwidth demands of wireless applications. It has been shown that the allocated spectrum is seldom fully utilized by the primary users and hence contains spectrum holes that may be exploited by unlicensed users for their communication. As we enter the Internet of Things (IoT) era, in which appliances of common use will become smart digital devices with stringent performance requirements (such as low latency and energy efficiency), current networks face the vexing problem of how to create sufficient capacity for such applications. The fifth generation of cellular networks (5G), envisioned to address these challenges, is thus required to incorporate cognition and intelligence to resolve the aforementioned issues. Cognitive radios (CRs) and self-organizing wireless networks are two major technologies that are envisaged to meet the needs of such next-generation wireless networks.

CRs are intelligent and fully programmable radios that can dynamically adapt to their prevailing environment. In other words, they sense the spectrum and dynamically select clearer frequency bands for better communication under the prevailing conditions. In this way, CRs can adaptively tune their internal parameters to optimize spectrum usage, transmission waveform, channel access methods, and modulation schemes with enhanced coverage. However, it is due to recent advancements in machine learning and software-defined radio (SDR) that CR has been able to move from simulation environments to real-time applications [3].

The overwhelming traffic growth, coupled with a greedy approach towards high quality of service (QoS), has been a major challenge for current wireless systems in terms of network resources and QoS. A new paradigm for wireless communication called 5G has been envisioned to address these challenges. A major component of the envisioned 5G scheme is the Self-Organizing Network (SON). SON is a relatively new concept in the context of wireless cellular networks; it refers to an intelligent network that learns from its immediate environment while autonomously adapting to ensure reliable communication. In fact, SON underlies a new approach to the automation of future networks in the 5G era.

The sensing, learning, and reasoning behavior of both CRs and SONs is achieved by extensively using artificial intelligence (AI) and machine learning techniques. CRs are an evolved form of SDRs, realized by the embodiment of a cognitive engine (CE) that exploits AI techniques to make optimal decisions.

The CR network (CRN) follows the cognitive cycle for unparalleled resource management and better network performance. The cognitive cycle, as illustrated in figure 1, begins with sensing the dynamic radio environment parameters, and then recursively observes and learns from the sensed values to reconfigure the critical parameters in order to achieve the desired objectives.

Fig. 1: Learning process in cognitive radios

The cognitive cycle is elaborated in figure 2, which highlights the parameters that a CR needs to quantify in order to utilize the available spectrum without affecting the primary user's performance. The sensed parameters are treated as stimuli for achieving different performance objectives, for instance, minimizing the bit error rate or minimizing the power consumption [4]. To achieve these objectives, the CR adaptively learns to decide optimal values for various significant variables such as transmission power and frequency band allocation [4].

Fig. 2: The cognitive cycle of CR
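The sense-observe-learn-reconfigure loop described above can be sketched as a toy Python simulation; the sensing model (random per-channel interference) and the learning rule (pick the channel with the lowest average sensed interference) are illustrative stand-ins, not any real cognitive engine.

```python
import random

def sense(n_channels=4):
    """Stand-in for spectrum sensing: returns a per-channel interference level."""
    return [random.random() for _ in range(n_channels)]

def decide(observation, history):
    """Observe and learn: store the new observation, then pick the channel
    with the lowest average sensed interference so far."""
    history.append(observation)
    n = len(observation)
    avg = [sum(obs[c] for obs in history) / len(history) for c in range(n)]
    return min(range(n), key=avg.__getitem__)

# One pass of the cycle per iteration: sense -> observe/learn -> reconfigure.
random.seed(0)
history = []
for _ in range(10):
    channel = decide(sense(), history)
# 'channel' is the band the radio would reconfigure to for the next slot.
```

Real cognitive engines close the same loop, but with richer stimuli (bit error rate, power consumption) and richer decision variables (waveform, access method, modulation).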

CR incorporates machine learning techniques for dynamic spectrum access (DSA) and capacity maximization. AI-based techniques for decision making, such as optimization theory, Markov decision processes (MDPs), and game theory, are used across a wide range of applications [3]. Popular learning techniques used in the cognitive cycle include support vector machines (SVMs), artificial neural networks (ANNs), metaheuristic algorithms, fuzzy logic, genetic algorithms, hidden Markov models (HMMs), Bayesian learning, reinforcement learning, and multi-agent systems. Fuzzy logic theory has been used for effective bandwidth, resource allocation, interference, and power management [3], [5]–[7]. Genetic algorithms (GAs) have been employed for CR spectrum and parameter optimization [8]–[10]. ANNs have been incorporated to improve spectrum sensing and to adaptively learn complex environments without substantial overhead [11], [12]. Game theory enables CRNs to learn from their history, scrutinize the performance of other CRNs, and adjust their own behavior accordingly [13], [14]. In the multi-agent domain, reinforcement learning (RL), a reward-penalty based technique that reinforces immediate rewards to maximize long-term goals, has been employed for efficient spectrum utilization [15], minimum power consumption [16], and filling spectrum holes dynamically [17]. SVM, a supervised classification model, has been utilized for channel selection [18], adaptation of transmission parameters [19], and beamforming design [20]. In CRNs, HMMs have been widely used for spectrum hole detection [21], spectrum handoff [22], and competitive spectrum access [23]. AI-based techniques are not limited to the above-mentioned applications; other applications of AI in CRNs are presented in [3], [4]. By combining the increasing spectrum agility and context-aware adaptability of CR with AI techniques, CR has become an increasingly important feature of wireless systems. IEEE 802.16h has recommended CR as one of its key features, and considerable efforts are being made to introduce CR features into 3GPP LTE-Advanced.
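To make the RL-for-spectrum-access idea concrete, here is a minimal stateless Q-learning sketch (effectively a multi-armed bandit) in the spirit of the channel-selection work in [15], [17]; the per-channel idle probabilities are hypothetical, and real schemes operate over sensed state rather than a fixed reward model.

```python
import random

def q_learning_channel_selection(idle_prob, episodes=5000,
                                 alpha=0.1, eps=0.1, seed=0):
    """Toy RL example: a secondary user learns which channel is most often
    idle. idle_prob gives hypothetical per-channel idle probabilities; the
    reward is 1 for a successful (idle) access and 0 otherwise."""
    rng = random.Random(seed)
    q = [0.0] * len(idle_prob)
    for _ in range(episodes):
        # Epsilon-greedy exploration over the channels.
        if rng.random() < eps:
            c = rng.randrange(len(q))
        else:
            c = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < idle_prob[c] else 0.0
        q[c] += alpha * (reward - q[c])   # stateless Q-update
    return q

q = q_learning_channel_selection([0.2, 0.9, 0.5])
print(max(range(3), key=q.__getitem__))  # should settle on channel 1, the most idle
```

The same reward-penalty loop generalizes to power control [16] by rewarding low-power successful transmissions instead of idle-channel hits.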

The rapid proliferation of disparate smart devices supporting multiple radio access technologies has resulted in complicated heterogeneous mobile networks, making configuration, management, and maintenance cumbersome and error-prone. 5G, expected to handle diverse devices at a massive scale, is foreseen as one of the most complicated networks yet, and hence extensive efforts are being carried out for its standardization. In recent years, SONs, as depicted in figure 3, have gained significant attention regarding the self-configuration, self-optimization, and self-healing of such complex networks. The idea behind SONs is to automate network planning, configuration, and optimization jointly in a single process in order to minimize human involvement. The planning phase, which includes ascertaining cell locations, inter-cell connecting links, and other associated network devices and parameters, precedes the configuration phase [24]. Self-configuration means that a newly deployed cell is able to automatically configure, test, and authenticate itself and adjust its parameters, such as transmission power and inter-cell interference, in a plug-and-play fashion [24]. Self-healing allows trouble-free maintenance and enables networks to recover from failures in an autonomous fashion; it also helps in routine upgrades of the equipment in order to remove legacy bugs. Self-optimization is the ability of the network to keep improving its performance with respect to various aspects including link quality, coverage, mobility, and handoff, with the objective of achieving network-level goals [24]. Since AI-based techniques are intrinsically capable of handling complex problems in large systems, these techniques are now being proposed to achieve self-organization (SO) in 5G.

Fig. 3: Illustration of AI-based self-organization in the networks

Self-configuration, in cellular networks, refers to the automatic configuration of initial parameters—the neighbouring cell list, IP addresses, and radio access parameters—by a node itself. AI techniques like dynamic programming (DP), RL, and transfer learning (TL) may be employed in 5G to automatically configure a series of parameters to render the best services. RL, as opposed to DP, which must first build a model of the environment to operate, is a model-free learning technique that iterates towards the optimal strategy and may yield superior results under dynamically changing radio conditions [25]. Self-healing is about automatic fault detection, fault classification, and initiating the necessary actions for recovery. Irregularities and anomalies in the network may be spotted in time, and the system then restored, by leveraging different AI-based sensing techniques like logistic regression (LR), SVM, and HMM [25]. Self-optimization includes the continuous optimization of parameters to achieve system-level objectives such as load balancing, coverage extension, and interference avoidance. The AI techniques that may be exploited to optimize the provisioning of QoS to various services mainly belong to the class of unsupervised learning; besides Gradient Boosting Decision Trees (a supervised learning technique), spectral clustering, one-class SVM, and recurrent neural networks are a few examples in this regard [25]. Figure 4 summarizes the AI algorithms that can be utilized to enhance cellular network performance.

Fig. 4: AI algorithms for 5G
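As a minimal stand-in for the AI-based self-healing detectors mentioned above (LR, SVM, HMM), the sketch below flags KPI samples that deviate strongly from a sliding baseline; the cell throughput trace and the z-score threshold are hypothetical, and production systems would use richer models and multiple KPIs.

```python
import statistics

def detect_anomalies(kpi_series, window=20, threshold=3.0):
    """Flag KPI samples that deviate strongly from the recent baseline.
    Returns the indices of anomalous samples; each flag would trigger
    fault classification and a recovery action in a self-healing loop."""
    anomalies = []
    for i in range(window, len(kpi_series)):
        base = kpi_series[i - window:i]
        mu, sigma = statistics.mean(base), statistics.pstdev(base)
        if sigma > 0 and abs(kpi_series[i] - mu) > threshold * sigma:
            anomalies.append(i)   # candidate cell fault
    return anomalies

# Hypothetical cell throughput trace with a sudden outage at sample 30.
trace = [50.0 + 0.1 * (i % 5) for i in range(30)] + [5.0, 5.0, 5.0]
print(detect_anomalies(trace))
```

Replacing the z-score rule with a one-class SVM or an HMM over the same KPI windows gives the learned variants cited in [25], at the cost of a training phase.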

AI techniques may also exploit network traffic patterns to predict future events and help pre-allocate network resources to avoid network overloading. Furthermore, user-centric QoS provisioning across the tiers of heterogeneous cells may also be granted using AI [25]. Similarly, GAs are employed for cell planning and coverage optimization with power adjustment [26]. GAs are also suited to the problem of finding shortest-path routes in large-scale dynamic networks [27]. Wenjing et al. [28] proposed an autonomic particle swarm compensation algorithm for cell outage compensation. The study in [29] introduces a self-optimization technique for transmission power and antenna configuration that exploits a fuzzy neural network optimization method. It integrates a fuzzy neural network with cooperative reinforcement learning to jointly optimize coverage and capacity by intelligently adjusting power and antenna tilt settings [29]. It adopts a hybrid approach in which cells individually optimize their respective radio frequency parameters through reinforcement learning in a distributed manner, while a central entity manages cooperation amongst the individual cells by sharing their optimization experience at the network level [29]. Cells iteratively learn to achieve a trade-off between coverage and capacity, since an increase in coverage leads to a reduction in capacity, while additionally improving the energy efficiency of the network [29]. ANNs can also be effectively utilized for the estimation of link quality [30]. Mobile devices in indoor environments have also been localized through the use of ANNs [31]. In fact, AI-based techniques enable network entities to automatically configure their initial parameters before becoming operational [24], adaptively learn radio environment parameters to provide optimal services [25], autonomously perform routine maintenance and upgrades, and recover from network failures [24], [25].

In view of the continued proliferation of smart devices, we anticipate that CRs and SONs will soon become basic building blocks of future wireless networks. These technologies will transform future networks into intelligent networks that encompass user preferences alongside network priorities and constraints. AI, being the basis of both of these technologies, will continue to drive ongoing 5G standardization efforts and will therefore be the cause of a major paradigm shift. AI techniques will increasingly permeate future networks, finding usage from radio resource management to the management and orchestration of networks. In fact, we anticipate that future wireless networks will be completely dominated by AI.


[1] P. Cerwall, “Ericsson mobility report, mobile world congress edition,” 2016.
[2] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. Soong, and J. C. Zhang, “What will 5G be?” IEEE Journal on selected areas in communications, vol. 32, no. 6, pp. 1065–1082, 2014.
[3] J. Qadir, “Artificial intelligence based cognitive routing for cognitive radio networks,” Artificial Intelligence Review, vol. 45, no. 1, pp. 25–96, 2016.
[4] N. Abbas, Y. Nasser, and K. El Ahmad, “Recent advances on artificial intelligence and learning techniques in cognitive radio networks,” EURASIP Journal on Wireless Communications and Networking, vol. 2015, no. 1, p. 174, 2015.
[5] P. Kaur, M. Uddin, and A. Khosla, “Fuzzy based adaptive bandwidth allocation scheme in cognitive radio networks,” in Knowledge Engineering, 2010 8th International Conference on ICT and. IEEE, 2010, pp. 41–45.
[6] S. R. Aryal, H. Dhungana, and K. Paudyal, “Novel approach for interference management in cognitive radio,” in Internet (AH-ICI), 2012 Third Asian Himalayas International Conference on. IEEE, 2012, pp. 1–5.
[7] M. Matinmikko, J. Del Ser, T. Rauma, and M. Mustonen, “Fuzzy-logic based framework for spectrum availability assessment in cognitive radio systems,” IEEE Journal on Selected Areas in Communications, vol. 31, no. 11, pp. 2173–2184, 2013.
[8] H. Qin, L. Zhu, and D. Li, “Artificial mapping for dynamic resource management of cognitive radio networks,” in Wireless Communications, Networking and Mobile Computing (WiCOM), 2012 8th International Conference on. IEEE, 2012, pp. 1–4.
[9] S. Chen, T. R. Newman, J. B. Evans, and A. M. Wyglinski, “Genetic algorithm-based optimization for cognitive radio networks,” in Sarnoff Symposium, 2010 IEEE. IEEE, 2010, pp. 1–6.
[10] M. R. Moghal, M. A. Khan, and H. A. Bhatti, “Spectrum optimization in cognitive radios using elitism in genetic algorithms,” in Emerging Technologies (ICET), 2010 6th International Conference on. IEEE, 2010, pp. 49–54.
[11] X. Tan, H. Huang, and L. Ma, “Frequency allocation with artificial neural networks in cognitive radio system,” in TENCON Spring Conference, 2013 IEEE. IEEE, 2013, pp. 366–370.
[12] T. Zhang, M. Wu, and C. Liu, “Cooperative spectrum sensing based on artificial neural network for cognitive radio systems,” in Wireless Communications, Networking and Mobile Computing (WiCOM), 2012 8th International Conference on. IEEE, 2012, pp. 1–5.
[13] Z. Han, Game theory in wireless and communication networks: theory, models, and applications. Cambridge University Press, 2012.
[14] D. Bellhouse, “The problem of waldegrave,” Electronic Journal for the History of Probability and Statistics, vol. 3, no. 2, pp. 1–12, 2007.
[15] S. S. Barve and P. Kulkarni, “Dynamic channel selection and routing through reinforcement learning in cognitive radio networks,” in Computational Intelligence & Computing Research (ICCIC), 2012 IEEE International Conference on. IEEE, 2012, pp. 1–7.
[16] P. Zhou, Y. Chang, and J. A. Copeland, “Reinforcement learning for repeated power control game in cognitive radio networks,” IEEE Journal on Selected Areas in Communications, vol. 30, no. 1, pp. 54–69, 2012.
[17] S. Arunthavanathan, S. Kandeepan, and R. J. Evans, “Reinforcement learning based secondary user transmissions in cognitive radio networks,” in 2013 IEEE Globecom Workshops (GC Wkshps). IEEE, 2013, pp. 374–379.
[18] Y. Huang, H. Jiang, H. Hu, and Y. Yao, “Design of learning engine based on support vector machine in cognitive radio,” in Computational Intelligence and Software Engineering, 2009. CiSE 2009. International Conference on. IEEE, 2009, pp. 1–4.
[19] S. Hu, Y.-d. Yao, and Z. Yang, “Mac protocol identification using support vector machines for cognitive radio networks,” IEEE Wireless Communications, vol. 21, no. 1, pp. 52–60, 2014.
[20] M. Lin, J. Ouyang, and W.-P. Zhu, “BF design in cognitive relay networks via support vector machines,” in 2013 IEEE Global Communications Conference (GLOBECOM). IEEE, 2013, pp. 3247–3252.
[21] A. Mukherjee, S. Maiti, and A. Datta, “Spectrum sensing for cognitive radio using blind source separation and hidden markov model,” in 2014 Fourth International Conference on Advanced Computing & Communication Technologies. IEEE, 2014, pp. 409–414.
[22] C. Pham, N. H. Tran, C. T. Do, S. I. Moon, and C. S. Hong, “Spectrum handoff model based on hidden markov model in cognitive radio networks,” in The International Conference on Information Networking 2014 (ICOIN2014). IEEE, 2014, pp. 406–411.
[23] X. Li and C. Xiong, “Markov model bank for heterogenous cognitive radio networks with multiple dissimilar users and channels,” in International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, 2014. IEEE, 2014, pp. 93–97.
[24] X. Wang, X. Li, and V. C. Leung, “Artificial intelligence-based techniques for emerging heterogeneous network: State of the arts, opportunities, and challenges,” IEEE Access, vol. 3, pp. 1379–1391, 2015.
[25] R. Li, Z. Zhao, X. Zhou, G. Ding, Y. Chen, Z. Wang, and H. Zhang, “Intelligent 5G: When cellular networks meet artificial intelligence.”
[26] L. T. Ho, I. Ashraf, and H. Claussen, “Evolving femtocell coverage optimization algorithms using genetic programming,” in 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications. IEEE, 2009, pp. 2132–2136.
[27] U. Mehboob, J. Qadir, S. Ali, and A. Vasilakos, “Genetic algorithms in wireless networking: techniques, applications, and issues,” Soft Computing, vol. 20, no. 6, pp. 2467–2501, 2016.
[28] L. Wenjing, Y. Peng, J. Zhengxin, and L. Zifan, “Centralized management mechanism for cell outage compensation in lte networks,” International Journal of Distributed Sensor Networks, 2012.
[29] S. Fan, H. Tian, and C. Sengul, “Self-optimization of coverage and capacity based on a fuzzy neural network with cooperative reinforcement learning,” EURASIP Journal on Wireless Communications and Networking, vol. 2014, no. 1, pp. 1–14, 2014.
[30] M. Caleffi and L. Paura, “Bio-inspired link quality estimation for wireless mesh networks,” in IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks & Workshops, 2009. WoWMoM 2009. IEEE, 2009, pp. 1–6.
[31] N. Ahad, J. Qadir, and N. Ahsan, “Neural networks in wireless networks: Techniques, applications and guidelines,” Journal of Network and Computer Applications, vol. 68, pp. 1–27, 2016.