Andre Ivanov

Professor

Graduate Student Supervision

Doctoral Student Supervision

Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay before the latest dissertations are added.

Machine learning based techniques for routing interconnects in very large scale integrated (VLSI) circuits (2022)

Global routing is a significant challenge in Integrated Circuit (IC) design due to circuits' increasing numbers of metal layers and transistors and the resulting growth in design complexity. Congestion is a crucial factor in routing completion: the required interconnects of a design with no congestion can be routed easily, whereas a congested circuit poses challenges during routing and may require re-placement, which lengthens the design cycle and may delay time to market. Congestion also affects routing complexity, potentially increasing wire length and the number of vias and detours in a layout and thereby degrading overall circuit performance. Furthermore, congested areas in a layout may cause manufacturing yield and reliability problems: they have a higher potential for creating shorts and opens, which can ultimately lead to unworkable chips. Traditional congestion estimation algorithms use simple, fixed models that do not change as technology nodes scale to finer dimensions. To address this shortcoming, we investigate Machine Learning (ML) based congestion estimation approaches. By training on previously routed circuits, an ML-based estimator implicitly learns the advanced design rules of a particular technology node as well as the routing behaviours of routers. In this investigation, three ML-based approaches for congestion estimation are explored. First, a regression model to estimate congestion is developed, which is on average 9X faster than traditional approaches while maintaining a similar quality of routing solution. Second, a Generative Adversarial Network (GAN) based model is developed to accurately predict the congestion of fixed-size tiles of a circuit. Third, a customized Convolutional Neural Network (CNN) is designed for congestion estimation, which uses a sliding-window approach to smooth tile-based discontinuities. This CNN produces the best correlations with actual post-routing congestion compared with other state-of-the-art academic routers. Furthermore, an improved global routing heuristic is developed into which congestion estimators can be merged. Results show that my work achieves a 14% reduction in runtime on average compared with other routers and achieves significantly lower runtimes on difficult-to-route circuits. Overall, this work demonstrates the feasibility of using ML-based approaches to improve routing algorithms for complex ICs implemented in nanometer technologies.
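
To give a concrete flavour of the regression approach, the sketch below fits a stand-in regressor on hypothetical per-tile features and then smooths the predicted map with a sliding window, echoing the CNN's sliding-window smoothing; the features, data, and model choice are illustrative assumptions, not the thesis's actual method.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical per-tile features: pin density, net crossings, local wire demand.
    X_train = rng.random((1000, 3))
    # Stand-in target: post-routing congestion per tile (in practice taken from
    # previously routed designs in the target technology node).
    y_train = X_train @ np.array([0.5, 1.2, 0.8]) + 0.05 * rng.standard_normal(1000)

    model = GradientBoostingRegressor().fit(X_train, y_train)

    # Predict a 64x64 grid of tiles, then smooth tile-based discontinuities
    # with a sliding window.
    tiles = rng.random((64 * 64, 3))
    congestion_map = model.predict(tiles).reshape(64, 64)
    smoothed = uniform_filter(congestion_map, size=5)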

Addressing security in drone systems through authorization and fake object detection (2020)

There now exist more than eight billion IoT devices, with growth expected to exceed 22 billion by 2025. IoT devices comprise sensor and actuator components which generate live-stream data and share information via a common communication link, e.g., the Internet. For example, in a smart home, a number of IoT devices such as a Google Home/Amazon Alexa, smart plugs, security cameras, a garage door, and a thermostat connect to the WiFi network to routinely communicate with each other, share information, and take actions accordingly. However, a main security challenge is protecting the information shared between authorized devices/users while distinguishing real objects in the network from fake ones. Such a challenge aggravates man-in-the-middle and denial-of-service vulnerabilities. To address these concerns, in this thesis we first propose an authorization framework called Dynamic Policy-based Access Control (DynPolAC) as a model for protecting information in dynamic and resource-constrained IoT systems. We focus our experiments with DynPolAC on an IoT environment comprised of drones. DynPolAC achieves more than 7x speed improvement in authorization when compared to previously proposed methods for resource-constrained IoT platforms such as drones. Second, we implement a method called Phoenix to distinguish fake drones from real ones in an IoT network. We experimentally train and derive Phoenix from a control function called the Lyapunov stability function. We evaluate Phoenix for drones using an autopilot simulator as well as by flying a real drone. We find that Phoenix takes about 50 ms to distinguish real drones from fake ones, while by asymmetry it could take days for motivated attackers to reconstruct Phoenix. Phoenix also achieves a precision of 99.55% in detecting real drones and a recall of 99.84% in detecting fake drones.
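
A rough sketch of the intuition behind Phoenix (not its actual derivation) follows: telemetry from a drone under a stabilizing controller should drive a Lyapunov-like function downward over time, while fabricated telemetry that ignores the control law tends not to. The state variables and energy function here are hypothetical.

    import numpy as np

    def lyapunov_like(state):
        # Hypothetical energy-like function of drone state (position error, velocity).
        pos_err, vel = state
        return pos_err ** 2 + vel ** 2

    def looks_real(trajectory):
        # A stabilizing controller should decrease the function on average;
        # fabricated or replayed telemetry tends to violate this.
        v = np.array([lyapunov_like(s) for s in trajectory])
        return np.mean(np.diff(v)) < 0.0

    real = [(1.0 / (t + 1), 0.5 / (t + 1)) for t in range(50)]  # converging flight
    fake = [(0.1 * t, 0.05 * t) for t in range(50)]             # ignores the control law
    print(looks_real(real), looks_real(fake))                   # True False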

Design space exploration in ion channels using fine grained brownian dynamics (2020)

Design Space Exploration (DSE) in the context of ion channels refers to the systematic exploration of a design space whose dimensions correspond to channel characteristics. The goal of DSE here is to find points within the space that maximize a figure of merit related to conduction. An efficient means of performing DSE for ion channels holds promise in several application areas, such as nano-medicine and drug development, where it is commonly desirable to reverse-engineer drugs or channels in order to determine which channel structures or drugs lead to a particular conduction behavior. One example of DSE would have the dimensions of the design space specified by dielectric constants throughout the channel and the figure of merit defined by conduction. The primary roadblock to using DSE for ion channels is computational complexity, as evaluating each channel characteristic (design point) requires 10¹⁰ simulation iterations. If, for example, the design space is defined by 5 parameters each having 10 possible values, evaluating all possible combinations of these parameters exhaustively would require 10⁵ × 10¹⁰ = 10¹⁵ simulation iterations. Depending on the time each iteration takes, a DSE study could take years or even decades. As a result, it is critical that the approach used for evaluating each design point be both fast and efficient in order to save simulation time and computational resources. This thesis proposes a two-fold strategy for improving the efficiency of DSE for ion channels. First, it proposes an approach for improving the speed of DSE by systematically reducing the design space size using statistics-based inference; it shows how this methodology can reduce the design space size by orders of magnitude in two different scenarios, with and without the presence of a drug in the channel. Second, it proposes a novel Fine-Grained Brownian Dynamics framework for evaluating design points. Using both approaches together, the framework achieves accuracy consistent with Molecular Dynamics (R² = 82%), a significantly higher-resolution modeling technique, at a fraction of the cost.
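
The cost arithmetic, and the effect of screening the space first, can be sketched as follows; the cheap surrogate score is purely hypothetical, standing in for the statistics-based inference developed in the thesis.

    import itertools

    # 5 parameters with 10 values each: 10**5 design points; at 10**10
    # simulation iterations per point, exhaustive DSE costs 10**15 iterations.
    space = list(itertools.product(range(10), repeat=5))
    print(len(space) * 10 ** 10)  # 10**15

    def cheap_score(point):
        # Hypothetical low-cost proxy for the conduction figure of merit.
        return sum(point)

    survivors = [p for p in space if cheap_score(p) > 30]
    print(f"{len(survivors)} of {len(space)} points kept for full simulation")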

An investigation of age-causing molecular phenomena at the gate-dielectric channel interface of MOSFET devices (2018)

Complementary metal-oxide-semiconductor (CMOS) scaling has led to numerous reliability challenges. A major source of such challenges is the molecular phenomena at the channel/dielectric interface in Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs). In this work, the MOSFET dielectric/channel interface is investigated, and hydrogen diffusion is characterized as the cause behind one of the major MOSFET reliability issues, namely NBTI. Within the realm of device simulation, classical molecular dynamics bridges the gap between accurate but complex quantum chemical methods and crude but straightforward statistical/Monte-Carlo methods. In particular, classical molecular dynamics alongside customized forcefield parameters was used to study hydrogen dissociation and diffusion at the silicon/silicon dioxide interface; such processes govern BTI-like MOSFET aging. For the first time, a full molecular-level characterization of hydrogen dissociation and diffusion at the gate-dielectric/channel interface was developed. We also showed how some mechanical alterations may improve MOSFET devices' resilience to long-term NBTI aging. Further, new forcefield parameters were developed and used to predict some of the characteristics of high-k dielectrics and their interface with silicon dioxide. This work sets the grounds for a systematic, simulation-driven approach towards engineering reliable nano-scale MOSFET devices.
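
As background, a standard way to quantify diffusion from a molecular dynamics trajectory (a generic analysis, not this work's specific pipeline) is to fit the mean-squared displacement to the Einstein relation, MSD(t) ≈ 6Dt in three dimensions:

    import numpy as np

    # Synthetic 3D random walk standing in for a hydrogen trajectory from MD output.
    rng = np.random.default_rng(1)
    pos = np.cumsum(rng.standard_normal((1000, 3)), axis=0)  # steps x dimensions

    def msd(traj, lag):
        d = traj[lag:] - traj[:-lag]
        return np.mean(np.sum(d * d, axis=1))

    lags = np.arange(1, 100)
    msds = np.array([msd(pos, lag) for lag in lags])
    D = np.polyfit(lags, msds, 1)[0] / 6.0  # slope of MSD(t) = 6 D t
    print(f"estimated diffusion coefficient: {D:.3f} (arbitrary units)")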

Optical Absorption in Carbon Nanotubes (2014)

Due to their unique optical properties, carbon nanotubes have been widely investigated for use in photonic and optoelectronic devices, and optical absorption and emission with nanotubes have been achieved in experiments. At the same time, the structural characteristics of nanotubes, e.g., chirality, diameter, and length, as well as other factors such as the polarization of the incident light, the presence of a magnetic field, and mechanical deformation, can significantly affect the optical properties of these structures. Some of these effects have been studied theoretically at the tight-binding approximation level. However, a systematic first-principles-based study of nanotubes that addresses these effects did not exist in the literature prior to the present work. This thesis aims to perform such a fundamental study. We first describe a method for calculating the dipole moments and transition rates in nanotubes. This also enables the study of selection rules, based on which a modified set of rules is defined. The probability of absorption is studied across the full infrared-visible-ultraviolet range. We show that π-σ*, σ-π*, and σ-σ* transitions, which are neglected in previous works, are allowed and can lead to high transition probabilities. We then investigate several effects caused by the curvature of the nanotube sidewall and their impact on the optical properties. The overall effect is shown to depend not only on the diameter but also on the chirality of the nanotube. Through the study of the light polarization effect, we show that the overall transition rate spectrum of perpendicularly polarized light is suppressed for smaller-diameter nanotubes in the IR/VIS range. In the UV region, however, perpendicular polarization is generally absorbed at a higher rate than parallel polarization. Finally, we show how the absorption spectra of short nanotube segments can differ from those of long nanotubes. We examine the effect of length on individual absorption peaks and also investigate the effect of spin on the optical properties of nanotube segments. The calculation method described in this thesis and the results can be used to estimate the effects of structural and environmental factors on the optical absorption properties of nanotubes.
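
For reference, dipole-allowed transition rates of the kind computed in this thesis are conventionally written via Fermi's golden rule (standard textbook form; the first-principles treatment in the thesis adds nanotube-specific detail):

    \Gamma_{i \to f} = \frac{2\pi}{\hbar}
        \bigl| \langle f \,|\, \hat{H}' \,|\, i \rangle \bigr|^{2}
        \, \delta(E_f - E_i - \hbar\omega),
    \qquad \hat{H}' = -\hat{\mathbf{d}} \cdot \mathbf{E}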

New Signal Enhancement Algorithms for One-Dimensional and Two-Dimensional Spectroscopic Data (2011)

In research, the usefulness of analytical results is crucially dependent upon the quality of the measurements. However, data measured by instruments is always corrupted. The desired information, the "signal", may be distorted by a variety of effects, especially noise. In spectroscopy, there are two additional significant causes of signal corruption: instrumental blurring and baseline distortion. Consequently, signal processing is required to extract the desired signal from the undesired components. Thus, there is an ongoing need for signal enhancement algorithms which collectively can 1) perform high-quality signal-to-noise-ratio (SNR) enhancement, especially in very noisy environments, 2) remove instrumental blurring, and 3) correct baseline distortions. Also, there is a growing need for automated versions of these algorithms. Furthermore, the spectral analysis technique Two-Dimensional Correlation Spectroscopy (2DCoS) needs similar solutions to these same problems. This dissertation presents the design of four new signal enhancement algorithms, plus the application of two others, to address these measurement problems and issues. Firstly, methods for one-dimensional (1D) data are introduced, beginning with an existing algorithm, the Two-Point Maximum Entropy Method (TPMEM). This regularization-based method is very effective in low-SNR signal enhancement and deconvolution. TPMEM is re-examined to clarify its strengths and weaknesses, and to provide ways to compensate for its limitations. Next, a new regularization method, based on the chi-squared statistic, is introduced, and its ability to perform noise reduction and deconvolution is demonstrated. Then, two new automated 1D algorithms are introduced and demonstrated: a noise filter and a baseline correction scheme. Secondly, a new two-dimensional (2D) regularization method (Matrix-MEM or MxMEM), derived from TPMEM, is introduced and applied to SNR enhancement of images. Lastly, MxMEM and 2D wavelets are applied to 2DCoS noise reduction. The main research contributions of this work are 1) the design of three new high-performance signal enhancement algorithms for 1D data which collectively address noise, instrumental blurring, baseline correction, and automation, 2) the design of a new high-performance SNR enhancement method for 2D data, and 3) the novel application of 2D methods to 2DCoS.
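
The basic trade-off that such regularization methods tune can be seen in a minimal quadratic-smoothness denoiser (illustrative only; TPMEM and the chi-squared method optimize different functionals):

    import numpy as np

    # Minimize ||x - y||^2 + lam * ||D2 x||^2; the closed-form solution solves
    # (I + lam * D2^T D2) x = y, trading fidelity to the data against smoothness.
    def denoise(y, lam=50.0):
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference operator
        return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

    t = np.linspace(0.0, 1.0, 200)
    clean = np.exp(-((t - 0.5) / 0.05) ** 2)  # a single spectral peak
    noisy = clean + 0.1 * np.random.default_rng(2).standard_normal(t.size)
    smoothed = denoise(noisy)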

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay before the latest theses are added.

Machine learning techniques for routability-driven routing in application-specific integrated circuits design (2022)

Routing is a challenging stage of the Integrated Circuit (IC) design process. A routing algorithm often adopts the two-stage approach of global routing followed by detailed routing. One of the routing objectives is routability, which requires completing all the required connections without causing routing overflows or wire-shorts; otherwise, the chip may not function well and may even fail. Moreover, detours must be taken to avoid overflows and wire-shorts, which may increase the wire length and the number of vias in the physical design, affecting the overall performance of the circuit. Predicting the existence and locations of routing overflows and wire-shorts before routing takes place helps the router improve routability and circuit performance. Here, we present two Machine Learning (ML) techniques that improve routability by predicting the number and locations of overflows and wire-shorts. First, we design and develop GlobalNet, a Fully Convolutional Network (FCN) based global routing congestion predictor that estimates the density of wires and vias of global routing in three dimensions (3D) from a placed netlist; the locations of overflows are derived from the prediction result. A global router, UBC-GR, is also implemented to utilize the congestion estimation result and improve the performance of global routing. Second, a Convolutional Neural Network (CNN) based wire-short predictor, VioNet, is developed. VioNet replaces the global router with global routing congestion estimation (GlobalNet), significantly reducing the runtime. To improve the prediction accuracy, we adopt a top-down iterative strategy in which a low-resolution prediction first gives the approximate locations of wire-shorts, followed by a high-resolution prediction that determines the precise locations of wire-shorts. Experimental results show that both GlobalNet and VioNet achieve high accuracy on the ISPD 2018 and ISPD 2019 benchmarks. Moreover, UBC-GR increases the routability of global routing by reducing the number of overflows.
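
A minimal fully convolutional model of the general kind GlobalNet represents might look as follows; the input features, channel counts, and depth are assumptions for illustration, not the actual architecture.

    import torch
    import torch.nn as nn

    class TinyCongestionFCN(nn.Module):
        # Placement-derived feature maps in, per-tile congestion map out.
        def __init__(self, in_ch=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),  # 1x1 convolution: one congestion value per tile
            )

        def forward(self, x):
            return self.net(x)

    model = TinyCongestionFCN()
    features = torch.randn(1, 4, 64, 64)  # batch, channels, height, width
    print(model(features).shape)          # torch.Size([1, 1, 64, 64])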

Methodology to forecast a BTI-induced accelerated aging test result (2022)

Integrated circuits are stressed at temperature and voltage levels beyond their nominal ratings for long durations (hundreds of hours) during their development stage to study their reliability under nominal conditions. This is crucial to understanding their operating lifetimes (typically years) in their actual fields of use. The conventional HTOL test (an industry-standard reliability test to determine the intrinsic failure rate of ICs) is remarkably short compared to an IC's operating lifetime but still requires 1,000 hrs of elapsed test time. Future IC development may allow less time for such reliability tests, given the concerns recently highlighted by most semiconductor manufacturers about reducing a product's time to market. To partially address such concerns, I introduce a methodology that models the results of such reliability tests. To verify the feasibility of this method, several reliability experiments were conducted on Zynq-7000 FPGAs. To perform those experiments successfully, a reliability test platform was developed that can sustainably execute a high-temperature test for 1,000 hrs and requires minimal human intervention during the experiment. This platform is built on a commercial PYNQ-Z1 board that embeds the Zynq-7000 FPGA chip. To quantify the impact of thermal stress, several copies of a ring-oscillator-based test structure were implemented on the chip; their free-running frequency was taken as a reference parameter to measure degradation. I leverage an existing transistor-level aging model to develop a circuit-level aging model that mathematically describes a circuit parameter's degradation as a function of time. This circuit-level aging model is then fitted to the degradation data collected over a relatively short time frame to compute its parametric constants. Finally, with the parametric constants known, the model is used to determine how well it fits the actual degradation data of the entire experiment. An analysis reveals that the first ~400 hrs of degradation data contain sufficient information to forecast, within a 3% accuracy margin, the degree of degradation reached by the end of a 1,000-hour experiment. Subsequently, the analysis is applied to other test durations to study the effectiveness of this approach for other industry-standard reliability tests that are shorter than HTOL.
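
The forecasting step can be illustrated with a generic power-law aging form, Δf(t) = A·tⁿ, commonly used for BTI-induced degradation (the thesis develops its own circuit-level model): fit the constants on early data, then extrapolate.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(t, A, n):
        return A * t ** n

    t_hrs = np.linspace(1.0, 1000.0, 200)
    true = power_law(t_hrs, 0.8, 0.2)  # synthetic "measured" degradation
    meas = true + 0.01 * np.random.default_rng(3).standard_normal(t_hrs.size)

    mask = t_hrs <= 400.0  # fit only the first ~400 hrs of data
    (A, n), _ = curve_fit(power_law, t_hrs[mask], meas[mask], p0=(1.0, 0.2))

    forecast = power_law(1000.0, A, n)  # extrapolate to the end of the test
    print(f"forecast at 1000 h: {forecast:.3f} vs actual {true[-1]:.3f}")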

A Molecular Dynamics Approach to Studying Gate Oxides in Ge-MOSFETs (2019)

As silicon-based transistors reach their performance limit, a growing need for a new semiconductor material has arisen. Germanium has been suggested as a potential substitute for silicon in Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs). This dissertation focuses on the sources of reliability issues in MOSFETs fabricated on germanium and offers several ways to cope with them. Due to the miniaturization of electronic devices, especially MOSFETs, reliability challenges such as higher threshold voltages and increased gate leakage current have arisen. This device downscaling has led to poor interface quality at the dielectric/substrate interface of Ge-MOSFETs. I employed Molecular Dynamics (MD) tools to investigate the nature of the dielectric material structure on the germanium substrate and the types of defects responsible for electrical degradation. This dissertation proposes several solutions that would enable the semiconductor industry to mitigate the reliability issues of Ge-MOSFETs which hold back their commercialization. A reactive molecular dynamics force field was employed in this research, enabling the simulation of ongoing bond breaking and formation. In addition to determining the effect of oxidation temperature on the density of interfacial defects, this research sheds light on the effect of oxide thickness on interface quality. The need to stabilize the native oxide of germanium led to a novel approach for improving both interface quality and dielectric constant: dilute concentrations of aluminum were doped into the oxide network, and as a result an improved dielectric constant and enhanced dielectric/substrate interface quality were obtained.

CORGIDS: a correlation-based generic intrusion detection system (2019)

Cyber-Physical Systems (CPS) consist of software and physical components which collaborate and interact with each other continuously. CPS deployed in security-critical scenarios such as medical devices, autonomous cars, and smart homes have been targets of security attacks due to their safety-critical nature and relative lack of protection. Anomaly-based Intrusion Detection Systems (IDS) using data, temporal, and logical correlations have been proposed in the past, but none of these approaches, except the ones using logical correlations, take into account the main ingredient in the operation of CPS, namely the use of physical properties. On the other hand, IDS that use physical properties either require the developer to define invariants manually or are designed for a specific CPS. This study proposes a Correlation-based Generic Intrusion Detection System (CORGIDS), a generic IDS capable of detecting security attacks by inferring the logical correlations of the physical properties of a CPS and checking if they adhere to the predefined framework. A CORGIDS-based prototype is built and used for detecting attacks on two example CPS: an Unmanned Aerial Vehicle (UAV) and a Smart Artificial Pancreas (SAP). It is found that CORGIDS achieves a precision of 95.70% and a recall of 87.90%, while detecting attacks with modest memory and performance overheads.
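
A toy version of correlation-based detection (illustrative, not the CORGIDS algorithm itself) learns the pairwise correlations between physical properties during normal operation and flags windows that break them; the property names and threshold are assumptions.

    import numpy as np

    def corr_signature(window):
        # Pairwise correlations between physical properties over a time window
        # (columns might be e.g. altitude, throttle, battery draw).
        return np.corrcoef(window, rowvar=False)

    def is_anomalous(train_windows, test_window, thresh=0.5):
        baseline = np.mean([corr_signature(w) for w in train_windows], axis=0)
        return np.abs(corr_signature(test_window) - baseline).max() > thresh

    rng = np.random.default_rng(4)
    normal = []
    for _ in range(20):  # normal operation: two properties strongly correlated
        x = rng.standard_normal(100)
        normal.append(np.column_stack((x, x + 0.1 * rng.standard_normal(100))))
    attack = rng.standard_normal((100, 2))  # injected data breaks the correlation
    print(is_anomalous(normal, normal[0]), is_anomalous(normal, attack))  # False True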

Experiments and Simulations on Negative/Positive Bias Temperature Instability in 28nm CMOS Devices (2015)

CMOS transistors come with scaling potential, which brings along challenges such as process variation and NBTI/PBTI (Negative/Positive Bias Temperature Instability). My objectives during this project are to investigate the effects of aging on CMOS devices and to present experimental results in order to model the effect of N/PBTI, specifically targeting the 28nm technology node. The direct effect of transistor aging is a degradation of the device threshold voltage, which can lead to performance degradation or malfunctions. Settings such as server farms, data centers, and spacecraft, where long-term device reliability is significant and accessibility is an issue, can benefit from an aging-reversal process. In addition, as transistor channel lengths become smaller, devices are more prone to a reduced lifetime. The exact causes of aging are not entirely known to this day, and as a result no real mechanism to reverse the process has been fully implemented on FPGAs or ASICs. I believe the true solution to these scalability challenges lies within the device structure and materials used in CMOS transistors; however, accelerated recovery at high temperatures can also help reverse the effect of aging by a noticeable amount. I have used this technique to reverse the effect of threshold voltage degradation in FPGAs. In this thesis, I present experimental results on the effects of degradation and recovery on a commercial FPGA. I then use the experimental results to calculate degradation parameters of transistor aging in this technology node and propose experimental setups for a 28nm ASIC.

Reducing Post-Silicon Coverage Monitoring Overhead with Emulation and Statistical Analysis (2015)

With increasing design complexity, post-silicon validation has become a critical problem. In pre-silicon validation, coverage is the primary metric of validation effectiveness, but post-silicon, the lack of observability makes coverage measurement problematic. On-chip coverage monitors are a possible solution, where a coverage monitor is a hardware circuit that is set to one when its coverage event of interest occurs. However, prior research has shown that the overhead is prohibitive for anything beyond a small number of coverage monitors. In this thesis, I explore techniques that reduce the number of instrumented coverage monitors while still being able to imply full coverage with high probability. These techniques use a deterministic forward feature selection algorithm with objective functions based on statistical information. To gather the required statistical information, the method relies on emulation, where all coverage monitors of interest are instrumented on a version of the design. On this emulator, such as an FPGA, I stress the design with randomly generated tests to collect data from the instrumented coverage monitors. I propose three objective functions for the feature selection algorithm: the first estimates the probability of a coverage monitor being set during a test; the second builds a Bayesian Network (BN) and then takes advantage of the relationship information between nodes (coverage monitors) which the network provides; the last directly estimates the conditional probability of coverage from the gathered data. I demonstrate these techniques on a non-trivial system-on-chip by measuring the code coverage achieved after executing randomly generated test programs. Depending on the objective function, results show a 67.7% to 95.5% reduction in the number of required coverage monitors. In an ASIC implementation, this would translate into an impact of 0.33-1.96% in silicon area overhead and 0.40-2.70% in static power overhead. These results show that my technique works by proving it is possible to track a smaller number of coverage events that statistically represent the whole set.
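
A much-simplified stand-in for the first objective function is sketched below: greedily keep the monitors least likely to be set, on the intuition that hitting rare events statistically implies the common ones were hit as well. The data and selection rule are illustrative; the thesis's algorithm and objectives are more involved.

    import numpy as np

    rng = np.random.default_rng(5)
    # cov[t, m] = 1 if coverage monitor m fired during emulation test t.
    cov = (rng.random((500, 40)) < rng.random(40)).astype(int)

    def select_monitors(cov, k):
        hit_prob = cov.mean(axis=0)      # estimated probability of being set
        return np.argsort(hit_prob)[:k]  # keep the k rarest monitors

    kept = select_monitors(cov, 5)
    print("instrument only monitors:", kept)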

Post-silicon code coverage for functional verification of systems-on-chip (2012)

Post-silicon validation requires effective techniques to better evaluate the functional correctness of modern systems-on-chip. Coverage is the standard measure of validation effectiveness and is used extensively pre-silicon; however, there is little data evaluating the coverage of post-silicon validation efforts on industrial-scale designs. This thesis addresses this knowledge gap. We employ code coverage, one of the most frequently used coverage techniques in simulation, and apply it post-silicon. To show our coverage methodology in practice, we employ an industrial-size open-source SoC that is based on the SPARC architecture and is synthesizable to FPGA. We instrument code coverage in a number of IP cores and boot Linux as our coverage-evaluation experiment, since booting an OS is a typical industrial post-silicon test. We also compare coverage between pre-silicon directed tests and the post-silicon Linux boot. Our results show that in some blocks the pre-silicon and post-silicon tests can achieve markedly different coverage figures: in one block we measured over a 50 percentage point coverage difference between the pre- and post-silicon results, which signifies the importance of post-silicon coverage. Moreover, we calculate the area overhead imposed by the additional coverage circuitry on-chip and apply state-of-the-art software analysis techniques to reduce the excessively large overhead while preserving data accuracy. The results in this thesis provide valuable data to guide future research in post-silicon coverage.
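
Conceptually, the pre/post comparison reduces to diffing per-block coverage bitmaps, as in this toy example (the bitmaps are invented):

    import numpy as np

    # Per-line coverage bitmaps for one IP core (True = line exercised).
    pre_silicon = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=bool)   # directed tests
    post_silicon = np.array([1, 0, 1, 1, 1, 1, 0, 0], dtype=bool)  # Linux boot

    def pct(bitmap):
        return 100.0 * bitmap.mean()

    delta = abs(pct(pre_silicon) - pct(post_silicon))
    print(f"pre: {pct(pre_silicon):.1f}%, post: {pct(post_silicon):.1f}%, "
          f"difference: {delta:.1f} percentage points")
    print("covered only post-silicon:", np.flatnonzero(post_silicon & ~pre_silicon))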
