Prospective Graduate Students / Postdocs
This faculty member is currently not actively recruiting graduate students or Postdoctoral Fellows, but might consider co-supervision together with another faculty member.
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
Future communication networks are expected to connect a large number of heterogeneous devices. For example, the fifth-generation infrastructure public-private partnership (5G PPP) aims to connect over 7 trillion devices. Such scale makes trust management among devices challenging in trust-based applications because of their centralized architecture. Blockchain is a promising technology for decentralized trust management. However, using blockchain in communication networks requires addressing delay and chain-inconsistency challenges. In this thesis, we first focus on routing protocols as one of the main network functions. We propose a blockchain-based routing protocol (BCR) to mitigate the challenges raised by a centralized network architecture. BCR uses blockchain smart contracts to transfer the routing protocol's control messages. The results show that BCR can reduce routing overhead by about 5 times compared to the AODV routing protocol, at the cost of a slightly lower packet delivery ratio. However, its performance depends on the time required to mine protocol transactions inside the blockchain. To study this issue, we propose a blockchain mining scheme, Wait-Min(D), in which miners wait for a certain number of transactions to arrive in the mining pool so as to minimize the average transaction waiting time. Numerical results show that the average waiting time can be reduced by about 10% compared to existing mining schemes. This improvement is significant for time-critical network applications. To address the requirements of different network applications, we propose a blockchain mining framework, MCBS, that provides multi-class network services in a decentralized manner. We model the mining framework as an r-priority queue and derive the average blockchain service time. The results show that MCBS enables networks to categorize applications according to their priorities.
Finally, we propose an adaptive double-spend attack (ADSA) which makes blockchains more inconsistent compared to traditional double-spend attacks (TDSA). Our results show that when attackers control 45% of the total mining processing power, to keep the successful attack probability less than 0.1%, network nodes should receive at least 547 confirmation blocks for each block compared to 340 confirmation blocks for a TDSA. Thus, to avoid inconsistency, network nodes should wait for more confirmation blocks.
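For readers who want a feel for the confirmation-block numbers quoted above, the classical success probability of a traditional double-spend attack can be sketched from Nakamoto's analysis (this is the generic TDSA calculation, not the ADSA model of this thesis; q is the attacker's fraction of mining power and z the number of confirmation blocks):

```python
from math import exp

def double_spend_success(q: float, z: int) -> float:
    """Probability that an attacker with hash-power fraction q < 0.5
    eventually overtakes the honest chain after z confirmation blocks
    (Nakamoto's classical traditional double-spend analysis)."""
    p = 1.0 - q
    lam = z * q / p              # expected attacker progress while z honest blocks are mined
    poisson = exp(-lam)          # Poisson term for k = 0
    prob = 1.0
    for k in range(z + 1):
        if k > 0:
            poisson *= lam / k   # recurrence avoids factorial overflow for large z
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob
```

With q = 0.45, this probability drops below 0.1% at around z = 340, consistent with the TDSA figure cited above.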
Cognitive radio (CR) is a promising technology designed to improve the utilization of lightly used portions of the licensed spectrum while ensuring no undue interference with incumbent users (IUs). CR networks (CRNs) employ collaborative spectrum sensing (CSS) methods to discover spectrum opportunities. Spectrum and energy overhead costs play important roles in the efficiency of CSS in CRNs.

A trust-based energy-efficient CSS (EE-CSS) protocol is proposed. The protocol achieves energy efficiency by reducing the total number of sensing reports exchanged between the secondary users (SUs) and the fusion center (FC) in the presence of misbehaving SUs (MSUs). The steady-state and transient behavior of the average number of sensing reports and trust values of SUs in EE-CSS are analyzed and compared to those in traditional CSS (T-CSS). The impact of link outages on the global false alarm (FA) probabilities, ℚf, and the global miss detection (MD) probabilities, ℚmd, in EE-CSS and T-CSS is also analyzed.

A centralized trust-based collusion attack strategy, in conjunction with integer linear programming, is proposed to compromise the decision of the FC in EE-CSS. The proposed strategy aims to attack only when it is likely to alter the decision of the FC. A mitigating scheme, based on the cross-correlation of sensing reports, is proposed to identify SUs with abnormal behaviors and to eliminate them from the decision-making process at the FC.

We also propose a trust-based spectrum- and energy-efficient CSS (SEE-CSS) scheme for the IEEE 802.22 standard wireless regional area network (WRAN). The proposed scheme aims to reduce the number of urgent coexistence situation (UCS) notifications transmitted from customer premise equipment (CPE) nodes to the WRAN base station (BS). The UCS messages inform the BS of the presence of active IUs on the licensed spectrum.
We adapt the collusion attack strategy for SEE-CSS and apply the cross-correlation method at the BS to mitigate the collusion attack. The results show that while ℚf and ℚmd remain the same in T-CSS and SEE-CSS, the SEE-CSS protocol is more energy and spectrum efficient.
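As a rough illustration of trust-weighted decision fusion at a fusion center, consider the sketch below. The function names and the simple additive trust update are illustrative assumptions, not the EE-CSS/SEE-CSS rules:

```python
def fuse_reports(reports, trust, threshold=0.5):
    """Trust-weighted majority fusion of binary sensing reports.
    reports: dict su_id -> 0/1 (1 = incumbent detected),
    trust:   dict su_id -> weight in [0, 1]."""
    total = sum(trust[su] for su in reports)
    vote = sum(trust[su] * r for su, r in reports.items())
    return 1 if vote >= threshold * total else 0

def update_trust(reports, trust, decision, delta=0.05):
    """Illustrative trust update: reward agreement with the global
    decision, penalize disagreement (not the exact EE-CSS rule)."""
    for su, r in reports.items():
        change = delta if r == decision else -delta
        trust[su] = min(1.0, max(0.0, trust[su] + change))
    return trust
```

A misbehaving SU that repeatedly disagrees with the fused decision sees its weight, and hence its influence on future decisions, shrink over time.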
The requirements of wireless sensor networks for localization applications are largely dictated by the need to estimate node positions and to establish routes to dedicated gateways for user communication and control. These requirements add significantly to the cost and complexity of such networks.

In some applications, such as autonomous exploration or search and rescue, which may benefit greatly from the capabilities of wireless sensor networks, it is necessary to guide an autonomous sensor and actuator platform to a target, for example to acquire a large data payload from a sensor node, or to retrieve the target outright. We consider the scenario of a mobile platform capable of directly interrogating individual, nearby sensor nodes. Assuming that a broadcast message originates from a source node and propagates through the network by flooding, we study applications of autonomous target search and mapping, using observations of the message hop count alone. Complex computational and communication tasks are offloaded from the sensor nodes, leading to significant simplifications of the node hardware and software.

This introduces the need to model the hop count observations made by the mobile platform to infer node locations. Using results from first-passage percolation theory and a maximum entropy argument, we formulate a stochastic jump process which approximates the message hop count at distance r from the source. We show that the marginal distribution of this process has a simple analytic form whose parameters can be learned by maximum likelihood estimation.

Target search involving an autonomous mobile platform is modeled as a stochastic planning problem, solved approximately through policy rollout. The cost-to-go at the rollout horizon is approximated by an open-loop search plan in which path constraints and assumptions about future information gains are relaxed.
It is shown that the performance is improved over typical information-driven approaches.

Finally, the hop count observation model is applied to an autonomous mapping problem. The platform is guided under a myopic utility function which quantifies the expected information gain of the inferred map. Utility function parameters are adapted heuristically such that map inference improves, without the cost penalty of true non-myopic planning.
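The basic quantity being modeled, the flooding hop count observed at a node and its relation to distance from the source, can be reproduced with a minimal sketch: plain BFS on a random geometric network. This does not reproduce the thesis's stochastic jump process, only the underlying observable:

```python
from collections import deque
from math import dist

def flood_hop_counts(nodes, radius, source=0):
    """BFS emulation of broadcast flooding: hop count of the message
    at each reachable node when it propagates from `source`.
    nodes: list of (x, y) positions; links exist within `radius`."""
    n = len(nodes)
    adj = [[j for j in range(n) if j != i and dist(nodes[i], nodes[j]) <= radius]
           for i in range(n)]
    hops = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:       # first arrival fixes the hop count
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops
```

Since each hop covers at most `radius`, the hop count at a node is always at least its Euclidean distance from the source divided by `radius`, which is the kind of distance information the search and mapping algorithms exploit.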
With the rapid growth in demand for wireless communications, service providers are expected to provide always-on, seamless and ubiquitous wireless data services to a large number of users with different applications and different Quality of Service (QoS) requirements. The multimedia traffic is envisioned to be a concurrent mix of real-time traffic and non-real-time traffic. However, radio spectrum is a scarce resource in wireless communications. In order to adapt to the changing wireless channel conditions and meet the diverse QoS requirements, efficient and flexible packet scheduling algorithms play an increasingly important role in radio resource management (RRM).

Much of the published work in RRM has focused on exploiting multi-user and multi-channel diversities. In this thesis, we adopt an adaptive cross-layer approach to exploit multi-application diversity in single-carrier communication systems and, additionally, multi-bit diversity in multi-carrier communication systems. Efficient and practical resource allocation (RA) algorithms with finer scheduling granularity and increased flexibility are developed to meet QoS requirements. Specifically, for single-carrier communication systems, we develop RA algorithms with flow and user multiplexing while jointly considering physical-layer time-varying channel conditions as well as application-layer QoS requirements. For multi-carrier communication systems, we propose a bitQoS-aware RA framework to adaptively match the QoS requirements of the user application bits to the characteristics of the narrowband channels.

The performance gains achievable from the proposed bitQoS-aware RA framework are demonstrated with suboptimal algorithms using water-filling and bit-loading approaches. Efficient algorithms to obtain optimal and near-optimal solutions to the joint subcarrier, power and bit allocation problem with continuous and discrete rate adaptation, respectively, are developed.
The increased control signaling that may be incurred, as well as the computational complexity as a result of the finer scheduling granularity, are also taken into consideration to establish the viability of the proposed RA framework and algorithms for deployment in practical networks. The results show that the proposed framework and algorithms can achieve a higher system throughput with substantial performance gains in the considered QoS metrics compared to RA algorithms that do not take QoS requirements into account or do not consider multi-application diversity and/or multi-bit diversity.
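A generic water-filling allocator of the kind such suboptimal algorithms build on can be sketched as follows. This omits the bitQoS weighting entirely; `gains` are channel gain-to-noise ratios and the rate on channel k is log2(1 + g_k p_k):

```python
def water_filling(gains, power_budget):
    """Classical water-filling over parallel channels: allocate
    p_k = max(0, mu - 1/g_k) with sum p_k = power_budget."""
    # Sort channels from best to worst and search for the water level mu.
    order = sorted(range(len(gains)), key=lambda k: gains[k], reverse=True)
    inv = [1.0 / gains[k] for k in order]       # noise "floor" heights
    active, mu = 0, 0.0
    for m in range(1, len(gains) + 1):
        level = (power_budget + sum(inv[:m])) / m
        if level > inv[m - 1]:                  # channel m-1 stays above water
            active, mu = m, level
        else:
            break                               # remaining channels get no power
    powers = [0.0] * len(gains)
    for k in range(active):
        powers[order[k]] = mu - inv[k]
    return powers
```

For example, with gains [4.0, 1.0, 0.25] and a unit power budget, the worst channel is switched off and the remaining power is tilted toward the best channel.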
Cognitive radio (CR) is a novel wireless communication approach that may alleviate the looming spectrum-shortage crisis. Orthogonal frequency division multiplexing (OFDM) is an attractive modulation candidate for CR systems. In this thesis, we study resource allocation (RA) for OFDM-based CR systems using both aggressive and protective sharing.

In aggressive sharing, cognitive radio users (CRUs) can share both non-active and active primary user (PU) bands. We develop a model that describes aggressive sharing, and formulate a corresponding multidimensional knapsack problem (MDKP). Low-complexity suboptimal RA algorithms are proposed for both single and multiple CRU systems. A simplified model is proposed which provides a faster suboptimal solution. Simulation results show that the proposed suboptimal solutions are close to optimal, and that aggressive sharing of the whole band can provide a substantial performance improvement over protective sharing, which makes use of only the non-active PU bands.

Although aggressive sharing generally yields a higher spectrum-utilization efficiency than protective sharing, aggressive sharing may not be feasible in some situations. In such cases, sharing only non-active PU bands is more appropriate. When there are no fairness or quality of service (QoS) considerations among CRUs, both theoretical analysis and simulation results show that plain equal power allocation (PEPA) yields similar performance as optimal power allocation in a multiuser OFDM-based CR system. We propose a low-complexity discrete-bit PEPA algorithm.
To improve spectrum-utilization efficiency, while considering the time-varying nature of the available spectrum as well as the fading characteristics of wireless communication channels, and providing QoS provisioning and fairness among users, this thesis introduces the following novel algorithms: (1) a distributed RA algorithm that provides both fairness and efficient spectrum usage for ad hoc systems; (2) an RA algorithm for non-real-time (NRT) services that maintains average user rates proportionally on the downlink of multiuser OFDM-based CR systems; and (3) cross-layer RA algorithms for the downlink of multiuser OFDM-based CR systems for both real-time (RT) services and mixed (RT and NRT) services. Simulation results show that the proposed algorithms provide satisfactory QoS to all supported services and perform better than existing algorithms designed for multiuser OFDM systems.
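PEPA itself is straightforward to sketch: the power budget is split equally across the available (non-active PU) subcarriers and per-subcarrier rates follow from the Shannon formula. This is the continuous-rate version; the thesis's discrete-bit variant would round these rates to a bit-loading alphabet:

```python
from math import log2

def pepa_rates(gains, total_power):
    """Plain equal power allocation (PEPA): split the power budget
    equally across the available subcarriers and return per-subcarrier
    rates in bits/s/Hz. gains are channel gain-to-noise ratios."""
    p = total_power / len(gains)
    return [log2(1 + g * p) for g in gains]
```

Its appeal is the trivial complexity: no sorting or water-level search, yet (as the abstract notes) near-optimal sum rate in the no-fairness, no-QoS regime.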
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Age of Information (AoI), namely the time that has elapsed since the most recently delivered packet was generated, is receiving increasing attention with the emergence of many real-time applications that rely on the exchange of time-sensitive information. AoI captures the freshness of the information from the perspective of the destination. The term “accuracy of information” is introduced in this thesis to assess how close the information estimated by the destination is to the information monitored by the sensor. The mean squared error (MSE) between the estimate and the information being tracked by the sensor is used to evaluate the accuracy of information.

In this thesis, we consider a single-sensor wireless network with this sensor monitoring a time-sensitive physical process, which is modelled as a random walk. The update scheme is that whenever the state of the underlying random walk changes by more than a specified amount, the sensor generates a status update packet and transmits it instantly to the destination over an error-free channel. When there are no status updates, the destination assumes the status is the same as what it received most recently. We address the problem of finding the minimum update rate under AoI and accuracy of information constraints. More specifically, we develop methods to derive analytical expressions for the three metrics: the update rate, the AoI, and the MSE. As expected, the AoI and the MSE decrease with the update rate. The analytical results are verified by using Monte Carlo simulation experiments.
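A Monte Carlo sketch of this update scheme, assuming a ±1 random walk and a simple threshold trigger (the exact trigger condition and metric definitions in the thesis may differ slightly), shows the expected trade-off between update rate and MSE:

```python
import random

def simulate_threshold_updates(delta, steps=200_000, seed=0):
    """Monte Carlo sketch: a +/-1 random walk is monitored, and a
    status update is sent whenever the state drifts by at least
    `delta` from the last delivered value. The channel is error-free
    and instantaneous. Returns (update_rate, mean_squared_error)."""
    rng = random.Random(seed)
    state, last_sent = 0, 0
    updates, sq_err = 0, 0.0
    for _ in range(steps):
        state += rng.choice((-1, 1))
        if abs(state - last_sent) >= delta:
            last_sent = state            # instant delivery resets the error
            updates += 1
        sq_err += (state - last_sent) ** 2
    return updates / steps, sq_err / steps
```

Raising the threshold lowers the update rate (for a ±1 walk the mean time to drift by delta is about delta squared steps) but increases the MSE, which is exactly the tension the thesis's constrained rate-minimization problem captures.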
Diabetic Retinopathy (DR) is a diabetic complication that affects the eyes and may lead to blurred vision or even blindness. The diagnosis of DR through retinal fundus images is traditionally performed by ophthalmologists who inspect for the presence and significance of many subtle features, a process which is cumbersome and time-consuming. As there are many undiagnosed and untreated cases of DR, DR screening of all diabetic patients is a huge challenge. Deep convolutional neural network (CNN) has rapidly become a powerful tool for analyzing medical images. There have been previous works which use deep learning models to detect DR automatically. However, these methods employed very deep CNNs which require vast computational resources. Thus, there is a need for more computationally efficient deep learning models for automatic DR diagnosis. The primary objective of this research is to develop a robust and computationally efficient deep learning model to diagnose DR automatically.

In the first part of this thesis, we propose a computationally efficient deep CNN model MobileNet-Dense which is based on the recently proposed MobileNetV2 and DenseNet models. The effectiveness of the proposed MobileNet-Dense model is demonstrated using two widely used benchmark datasets, CIFAR-10 and CIFAR-100. In the second part of the thesis, we propose an automatic DR classification system based on the ensemble of the proposed MobileNet-Dense model and the MobileNetV2 model. The performance of our system is evaluated and compared with some of the state-of-the-art methods using two independent DR datasets, the EyePACS dataset and the Messidor database. On the EyePACS dataset, our system achieves a quadratic weighted kappa (QWK) score of 0.852 compared to a QWK score of 0.849 achieved by the benchmark method while using 32% fewer parameters and 73% fewer multiply-adds (MAdds).
On the Messidor database, our system outperforms the state-of-the-art method on both Normal/Abnormal and Referable/Non-Referable classification tasks.
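The quadratic weighted kappa (QWK) score reported above is a standard agreement metric for ordinal ratings and can be computed from scratch as follows (generic definition, independent of the thesis's models):

```python
def quadratic_weighted_kappa(actual, predicted, n_classes):
    """Quadratic weighted kappa between integer ratings in [0, n_classes)."""
    # Observed confusion matrix and marginal histograms.
    O = [[0] * n_classes for _ in range(n_classes)]
    for a, p in zip(actual, predicted):
        O[a][p] += 1
    hist_a = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    n = len(actual)
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic disagreement penalty
            E = hist_a[i] * hist_p[j] / n             # expected count under independence
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den
```

A score of 1 means perfect agreement, 0 means chance-level agreement, and large disagreements between distant DR severity grades are penalized quadratically.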
Human Activity Recognition (HAR) serves a diverse range of human-centric applications in healthcare, smart homes, and security. Recently, Wi-Fi-based solutions have attracted a lot of attention. The underlying principle of these solutions is the effect that human bodies have on nearby wireless signals. The presence of static objects such as ceilings and furniture causes reflections, while dynamic objects such as humans result in additional propagation paths. These effects can be empirically observed by monitoring the Channel State Information (CSI) between two Wi-Fi devices. As different human postures induce different signal propagation paths, they result in unique CSI signatures, which can be mapped to corresponding human activities. However, there are some limitations in current state-of-the-art solutions. First, the performance of CSI-based HAR systems degrades in complex environments. To overcome this limitation, we propose Wi-HACS: Leveraging Wi-Fi for Human Activity Classification using Orthogonal Frequency Division Multiplexing (OFDM) Subcarriers. In our work, we propose a novel signal segmentation method to accurately determine the start and end of a human activity. We use several signal pre-processing and noise attenuation techniques, not commonly used in CSI-based HAR, to improve the features obtained from the amplitude and phase signals. We also propose novel features based on subcarrier correlations and autospectra of principal components. Our results indicate that Wi-HACS can outperform the state-of-the-art method in both precision and recall by 8% in simple environments, and by 14.8% in complex environments. The second limitation of existing CSI-HAR solutions is their poor performance in new/untrained environments. Since accurate Wi-Fi-based fall detectors can greatly benefit the well-being of the elderly, we propose DeepFalls: Using Wi-Fi Spectrograms and Deep Convolutional Neural Nets for Fall Detection.
We utilize the Hilbert Huang Transform spectrograms and train a Convolutional Neural Network to learn the features automatically. Our results show that DeepFalls can outperform the state-of-the-art RT-Fall in untrained environments with improvements in sensitivity and specificity by 11% and 15% respectively.
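A toy version of activity segmentation illustrates the general idea of finding the start and end of a burst in a noisy channel trace: flag the region where sliding-window variance exceeds a multiple of the idle baseline. This is an illustrative stand-in, not the Wi-HACS segmentation method:

```python
def segment_activity(signal, win=25, k=5.0):
    """Illustrative start/end detection for an activity burst: flag
    windows whose variance exceeds k times the variance of the
    (assumed idle) first window. Returns (start, end) or None."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    base = var(signal[:win]) + 1e-12      # idle baseline variance
    active = [i for i in range(len(signal) - win)
              if var(signal[i:i + win]) > k * base]
    if not active:
        return None
    return active[0], active[-1] + win
```

Real CSI segmentation must cope with drifting baselines and multiple subcarriers, which is why the thesis uses more elaborate pre-processing; the sketch only conveys the window-statistic principle.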
As the global population is ageing, the demand for elderly care facilities and services is expected to increase. Assisted living technologies for detecting medical emergencies and assessing the wellness of the elderly are becoming more popular. A person normally performs activities of daily living (ADLs) on a regular basis, and the ability to perform recurring ADLs indicates a certain wellness level. Anomalies in the activity patterns of a person might indicate changes in the person's wellness. A method is proposed in this thesis for detecting anomalies in the activity patterns of a lone occupant using electricity consumption measurements of his/her residence. The proposed method infers anomalies in the activity patterns of an occupant from electricity consumption patterns without the need to explicitly monitor the underlying individual activities. The proposed method provides a score which is a quantitative assessment of anomalies in the electricity consumption pattern of an occupant for a given day. A survey was conducted to obtain the hourly activities of three lone occupants for a month. The level of suspicion values, which are quantitative assessments of anomalies in the daily activity patterns of the occupants, were deduced from the survey. Using Fuzzy C-Means (FCM) clustering with a Euclidean distance measure, the scores and the level of suspicion values were each clustered. A day was then classified as regular or irregular based on the clustering results of the scores and of the level of suspicion values, respectively. The results showed that anomalies in electricity consumption patterns can effectively reflect anomalies in the underlying activity patterns. The results also showed that the proposed feature- and model-based method outperforms a chosen raw-data-based approach. The performance of the proposed method was improved when subsets of features were considered based on the minimum Redundancy Maximum Relevance (mRMR) feature selection.
A supervised learning method based on the Curious Extreme Learning Machine (C-ELM) was then proposed. The proposed method based on C-ELM (PM-CELM) outperforms the proposed method based on FCM (PM-FCM), but PM-FCM can operate without labelled training data.
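The FCM step can be sketched as the standard alternating update of soft memberships and centroids. This is generic FCM with a simple deterministic initialization, not the thesis's exact configuration:

```python
from math import dist

def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Core Fuzzy C-Means loop: soft membership u[i][k] of point i in
    cluster k with fuzzifier m, alternating membership and centroid
    updates. Initial centers are spread over the input for illustration."""
    n, dim = len(points), len(points[0])
    centers = [points[round(i * (n - 1) / (c - 1))] for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        u = []
        for p in points:
            d = [max(dist(p, ctr), 1e-12) for ctr in centers]
            inv = [dk ** (-2.0 / (m - 1.0)) for dk in d]
            s = sum(inv)
            u.append([v / s for v in inv])
        # Centroid update: mean weighted by u_ik^m.
        centers = []
        for k in range(c):
            w = [u_i[k] ** m for u_i in u]
            tot = sum(w)
            centers.append(tuple(
                sum(w[i] * points[i][j] for i in range(n)) / tot
                for j in range(dim)))
    return u, centers
```

Unlike hard k-means, each day's score keeps a graded membership in both the "regular" and "irregular" clusters, which suits the fuzzy nature of anomaly assessment.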
Long Term Evolution (LTE) has been standardized by the 3GPP consortium since 2008, with 3GPP Release 12, finalized in March 2015, being the latest iteration of LTE Advanced. High Efficiency Video Coding (HEVC) has been standardized by the Moving Picture Experts Group since 2012 and is the video compression technology targeted to deliver High-Definition video content to users. With video traffic projected to represent the lion's share of mobile data traffic in the next few years, providing video and non-video users with high Quality of Experience is key to designing 4G systems and future 5G systems.

In this thesis, we present a cross-layer scheduling framework which delivers video content to video users by exploiting encoding features used by the HEVC standard, such as coding structures and motion-compensated prediction. We determine which frames are referenced the most within the coded video bitstream, and hence have higher utility for the HEVC decoder located at the user's device, and evaluate the performance of best effort and video users in 4G networks using finite buffer traffic models. We examine throughput for best effort users and packet loss for video users to assess Quality of Experience. Our results demonstrate that there is significant potential to improve the Quality of Experience of best effort and video users using our proposed Frame Reference Aware Proportional Fair scheme compared to the baseline Proportional Fair scheme.
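The baseline Proportional Fair scheduler, optionally biased by a per-user frame-utility weight, can be sketched as follows. The weight argument is a hypothetical stand-in for the Frame Reference Aware term; the thesis's exact weighting may differ:

```python
def pf_schedule(instant_rates, avg_throughputs, frame_weights=None):
    """Pick the user maximizing the proportional-fair metric r_i / T_i,
    optionally scaled by a per-user frame-utility weight."""
    best, best_metric = None, float("-inf")
    for i, (r, t) in enumerate(zip(instant_rates, avg_throughputs)):
        w = 1.0 if frame_weights is None else frame_weights[i]
        metric = w * r / max(t, 1e-9)       # avoid division by zero at start-up
        if metric > best_metric:
            best, best_metric = i, metric
    return best

def update_avg(avg_throughputs, scheduled, instant_rates, tc=100.0):
    """Standard exponential moving-average throughput update with
    time constant tc; unscheduled users contribute zero rate."""
    return [(1 - 1 / tc) * t + (1 / tc) * (instant_rates[i] if i == scheduled else 0.0)
            for i, t in enumerate(avg_throughputs)]
```

Raising a video user's weight while its heavily referenced frames are pending is one way such a scheduler could prioritize frames with high decoder utility.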
Wireless communication has experienced tremendous growth over the past three decades. This growth has led to the development of many novel technologies aimed at enhancing system performance given the limited availability of radio resources. Cooperative relaying is a promising technology which enhances transmission reliability using simple hardware. However, the extra power consumed in relaying information may be an issue. Recent advances in wireless energy transfer have made it possible to build self-sustainable relays that power themselves by capturing ambient energy wirelessly. In this thesis we focus on two technologies: cooperative relaying, which enhances energy efficiency and reliability by allowing multi-hop communication with low-power nodes, and Radio Frequency (RF) energy harvesting, which obviates the need for a battery by capturing ambient RF energy and using it as a power source.

In the first part of the thesis, we study RF energy harvesting in a Decode-and-Forward (DF) Wireless Relay Network (WRN) in the presence of an interferer node. We consider the Time Switching Relaying (TSR) protocol and the Power Splitting Relaying (PSR) protocol, and we propose a new hybrid TSR-PSR protocol. We derive expressions for the outage probability and throughput in the delay-sensitive transmission mode for the three relaying protocols, and compare their performances. For simplicity, we neglect the energy harvested from the interferer signal. In the second part, we study the general case in which we include the effect of harvesting energy from the interferer signal. Expressions for the outage probability and throughput in the delay-sensitive transmission mode are derived for the three relaying protocols. Numerical results are presented to illustrate the effect of including RF energy harvesting from the interferer. In the third part, we study shared and non-shared power allocation schemes for a two-hop DF WRN with multiple source-destination pairs.
The pairs communicate via a single relay which harvests RF energy from the source transmissions in the presence of an interfering signal. The studied schemes are compared in terms of outage probability, throughput in the delay-sensitive transmission mode and fairness.
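A Monte Carlo sketch of the TSR protocol in a simplified single-relay DF link (Rayleigh fading, no interferer) gives a feel for the harvesting-time trade-off. The parameter names and SNR bookkeeping are illustrative assumptions, not the thesis's exact model:

```python
import random
from math import log2

def tsr_throughput(alpha, snr_src, snr_thresh=2.0, eta=0.7,
                   trials=100_000, seed=1):
    """Time-switching relaying sketch: a fraction alpha of each block
    harvests energy at the relay, the remainder is split between the
    two DF hops. Returns delay-sensitive throughput
    (1 - P_out) * R * (1 - alpha) / 2 under Rayleigh fading."""
    rng = random.Random(seed)
    rate = log2(1 + snr_thresh)            # target rate matching the SNR threshold
    outages = 0
    for _ in range(trials):
        h1 = rng.expovariate(1.0)          # |h|^2 source->relay (Rayleigh => exponential)
        h2 = rng.expovariate(1.0)          # |g|^2 relay->destination
        snr_hop1 = snr_src * h1
        harvested = eta * alpha * snr_src * h1          # energy-conversion efficiency eta
        snr_hop2 = harvested * h2 * 2 / (1 - alpha)     # energy spent over T(1-alpha)/2
        if min(snr_hop1, snr_hop2) < snr_thresh:        # DF: weakest hop limits the link
            outages += 1
    p_out = outages / trials
    return (1 - p_out) * rate * (1 - alpha) / 2
```

Too small an alpha starves the relay of energy (high outage), while too large an alpha leaves little time for transmission, so throughput peaks at an intermediate harvesting fraction.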
Sleep arousals are sudden awakenings from sleep which can be identified as an abrupt shift in EEG frequency and can be manually scored from various physiological signals by sleep experts. Frequent sleep arousals can degrade sleep quality, result in sleep fragmentation and lead to daytime sleepiness. Visual inspection of arousal events from polysomnography (PSG) recordings is cumbersome, and manual scoring results can vary widely among different expert scorers. The main goal of this project is to design and evaluate the performance of an effective and efficient algorithm to automatically detect sleep arousals using a single-channel EEG.

In the first part of the thesis, a detection model based on a Curious Extreme Learning Machine (C-ELM) using a set of 22 features is proposed. The performance was evaluated using the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) and the Accuracy (ACC). The proposed C-ELM based model achieved an average AUC and ACC of 0.85 and 0.79 respectively. In comparison, the average AUC and ACC of a Support Vector Machine (SVM) based model were 0.69 and 0.67 respectively. This indicates that the proposed C-ELM based model works well for the sleep arousal detection problem.

In the second part of the thesis, an improved detection model is proposed by adding a Minimum Redundancy Maximum Relevance (MRMR) feature selection step to the C-ELM based model proposed in the first part. The efficiency of the model is improved by reducing the dimensionality (the number of features) of the dataset while the performance is largely unaffected. The achieved average AUC and ACC were 0.85 and 0.80 when a reduced set of 6 features was used, while the AUC and ACC were 0.86 and 0.79 for the full set of 22 features. The results indicate that the MRMR feature selection step is important for sleep arousal detection. With the improved sleep arousal detection model, the system runs faster and achieves good performance on the dataset utilized in our study.
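The plain Extreme Learning Machine that C-ELM builds on, a random hidden layer with output weights solved by least squares, can be sketched as follows (generic ELM, not the Curious ELM variant used in the thesis):

```python
import numpy as np

def elm_train(X, y, hidden=100, seed=0):
    """Basic ELM: random input weights and biases, tanh hidden layer,
    hidden-to-output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only the output weights are trained, fitting reduces to one linear solve, which is why ELM-family models are fast enough for feature-selection sweeps like the MRMR experiments above.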
In a wireless relay network, the source and the destination are not able to communicate directly with each other because they are out of radio communication range. Instead, they communicate with the aid of intermediate node(s) which relay the signals.

In the first part of the thesis, we analyze the lifetime distribution in a variable gain amplify-and-forward (VG-AF) opportunistic wireless relay network (OWRN) in the presence of Rayleigh fading. Two different methods are proposed to derive the probability density function (pdf) of the network lifetime. The first method is the Pearson method, which is useful for obtaining approximate expressions for probability distributions. Using a relatively short simulation for the network lifetime of interest, we obtain estimates of the first four moments of the network lifetime. Based on these moments, a fairly accurate approximation of the lifetime distribution is derived. The second method is based on the central limit theorem (CLT): since one network lifetime comprises a sufficiently large number of transmissions, the lifetime distribution is close to a normal distribution. To verify our methods, we use large Monte Carlo simulations to obtain good approximations for the network lifetime distribution.

In the second part of the thesis, we analyze the lifetime distribution in the presence of Weibull fading as well as Nakagami fading. First, exact outage probability expressions for both cases with the opportunistic amplify-and-forward relaying strategy are derived. Simulation results verify our theoretical analysis. We then obtain the lifetime distributions for both cases with the methods used in the first part of the thesis.

In the third part of the thesis, an algorithm based on an energy-saving adaptive transmit power threshold is proposed to improve the relay network lifetime.
This transmit power threshold is computed from the residual energy information, the QoS requirements and the number of relays, and adapts to the residual energy of each relay. Simulation results verify that the algorithm prolongs the lifetime of the relay network compared with a number of other existing relay selection strategies while satisfying the QoS requirements.
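The moment/CLT approach to approximating a lifetime distribution can be sketched with a toy lifetime simulator. The energy-drain model below is an illustrative stand-in for the VG-AF OWRN simulation, not the thesis's system model:

```python
import random
from math import erf, sqrt

def lifetime_moments(samples):
    """First two moments (mean, unbiased variance) from a short simulation."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var

def normal_cdf(x, mean, var):
    """CLT-style approximation: lifetime ~ Normal(mean, var)."""
    return 0.5 * (1.0 + erf((x - mean) / sqrt(2.0 * var)))

def simulate_lifetime(relays=5, battery=100.0, rng=random):
    """Toy lifetime: the network dies when any relay's battery is
    exhausted; each transmission drains a random Exp(1) amount from a
    randomly selected relay (stand-in for opportunistic selection)."""
    energy = [battery] * relays
    t = 0
    while all(e > 0 for e in energy):
        i = rng.randrange(relays)
        energy[i] -= rng.expovariate(1.0)
        t += 1
    return t
```

Fitting only low-order moments from a short simulation and plugging them into a parametric family (normal here, the richer Pearson family in the thesis) is far cheaper than estimating the full distribution by brute-force Monte Carlo.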