Andre Ivanov

Professor

Graduate Student Supervision

Doctoral Student Supervision (Jan 2008 - May 2019)
An investigation of age-causing molecular phenomena at the gate-dielectric channel interface of MOSFET devices (2018)

Complementary metal-oxide-semiconductor (CMOS) scaling has led to numerous reliability challenges. A major source of such challenges is the molecular phenomena at the channel/dielectric interface in Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs). In this work, the MOSFET dielectric/channel interface is investigated, and hydrogen diffusion is characterized as the cause behind one of the major MOSFET reliability issues, namely negative bias temperature instability (NBTI). Within the realm of device simulation, classical molecular dynamics bridges the gap between accurate but complex quantum-chemical methods and crude but straightforward statistical/Monte Carlo methods. In particular, classical molecular dynamics alongside customized forcefield parameters was used to study hydrogen dissociation and diffusion at the silicon/silicon dioxide interface; such processes govern BTI-like MOSFET aging. For the first time, a full molecular-level characterization of hydrogen dissociation and diffusion at the gate-dielectric/channel interface was developed. We also showed how some mechanical alterations may improve the resilience of MOSFET devices to long-term NBTI aging. Further, new forcefield parameters were developed and used to predict some of the characteristics of high-k dielectrics and their interface with silicon dioxide. This work lays the groundwork for a systematic, simulation-driven approach towards engineering reliable nanoscale MOSFET devices.
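
As a concrete illustration of the diffusion characterization described above: once a molecular dynamics trajectory is available, a diffusion coefficient is conventionally extracted from the mean-squared displacement via the Einstein relation, MSD(t) ≈ 6Dt in three dimensions. The sketch below is generic post-processing under that standard relation; the trajectory array, frame spacing, and units are hypothetical placeholders, not output of the thesis's customized forcefield simulations.

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate D from an MD trajectory via the Einstein relation:
    MSD(t) ~ 6*D*t in three dimensions.

    positions : (n_frames, n_atoms, 3) unwrapped hydrogen coordinates
    dt        : time between stored frames
    """
    n_frames = positions.shape[0]
    # Mean-squared displacement relative to the first frame, averaged
    # over all tracked atoms.
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=2).mean(axis=1)
    t = np.arange(n_frames) * dt
    # Fit only the later, linear (diffusive) regime, skipping the
    # short-time ballistic onset.
    start = n_frames // 4
    slope, _ = np.polyfit(t[start:], msd[start:], 1)
    return slope / 6.0

# Hypothetical stand-in for MD output: a 3D random walk of 50 atoms.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(scale=0.1, size=(2000, 50, 3)), axis=0)
print(f"D ~ {diffusion_coefficient(traj, dt=0.001):.3g} (trajectory units)")
```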

Optical absorption in carbon nanotubes (2014)

Due to their unique optical properties, carbon nanotubes have been widely investigated for use in photonic and optoelectronic devices, and optical absorption and emission with nanotubes have been demonstrated experimentally. On the other hand, the structural characteristics of nanotubes, e.g. the chirality, diameter, and length, as well as other factors such as the polarization of the incident light, the presence of a magnetic field, and mechanical deformation, can significantly affect the optical properties of these structures. Some of these effects have been theoretically studied at the tight-binding level of approximation. However, a systematic first-principles-based study of nanotubes that addresses these effects did not exist in the literature prior to the present work. This thesis aims to perform such a fundamental study. We first describe a method for calculating the dipole moments and transition rates in nanotubes. This also enables the study of selection rules, based on which a modified set of rules is defined. The probability of absorption is studied over the full infrared-visible-ultraviolet range. We show that π-σ*, σ-π*, and σ-σ* transitions, which were neglected in previous works, are allowed and can lead to high transition probabilities. We then investigate several effects caused by the curvature of the nanotube sidewall and their impact on the optical properties. The overall effect is shown to depend not only on the diameter but also on the chirality of the nanotube. Through the study of the light-polarization effect, we show that the overall transition-rate spectrum of perpendicularly polarized light is suppressed for smaller-diameter nanotubes in the IR/VIS range. In the UV region, however, perpendicular polarization is generally absorbed at a higher rate than parallel polarization. Finally, we show how the absorption spectra of short nanotube segments can differ from those of long nanotubes. We examine the effect of length on individual absorption peaks and also investigate the effect of spin on the optical properties of nanotube segments. The calculation method described in this thesis and the results can be used to estimate the effects of structural and environmental factors on the optical absorption properties of nanotubes.
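
For context, the dipole-moment and transition-rate calculation that this abstract describes rests on first-order time-dependent perturbation theory. A standard textbook form (Fermi's golden rule; prefactors and conventions vary, and this is not claimed to be the exact expression used in the thesis) is:

```latex
% Transition rate from band state |i> to |f> under light of angular
% frequency w, field amplitude E0, and polarization unit vector e;
% mu-hat is the electric dipole operator:
W_{i \to f} = \frac{2\pi}{\hbar}
  \left| E_0 \, \langle f \,|\, \hat{\boldsymbol{\mu}} \cdot \hat{\mathbf{e}} \,|\, i \rangle \right|^{2}
  \delta\!\left(E_f - E_i - \hbar\omega\right)
```

Selection rules arise precisely when the matrix element vanishes by symmetry, which is why the allowed π-σ*, σ-π*, and σ-σ* transitions reported above matter.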

New signal enhancement algorithms for one-dimensional and two-dimensional spectroscopic data (2011)

In research, the usefulness of the analytical results is crucially dependent upon the quality of the measurements. However, data measured by instruments are always corrupted. The desired information (the "signal") may be distorted by a variety of effects, especially noise. In spectroscopy, there are two additional significant causes of signal corruption: instrumental blurring and baseline distortion.

Consequently, signal processing is required to extract the desired signal from the undesired components. Thus, there is an ongoing need for signal enhancement algorithms which collectively can 1) perform high-quality signal-to-noise-ratio (SNR) enhancement, especially in very noisy environments, 2) remove instrumental blurring, and 3) correct baseline distortions. There is also a growing need for automated versions of these algorithms. Furthermore, the spectral analysis technique Two-Dimensional Correlation Spectroscopy (2DCoS) needs similar solutions to these same problems.

This dissertation presents the design of four new signal enhancement algorithms, plus the application of two others, to address these measurement problems. First, methods for one-dimensional (1D) data are introduced, beginning with an existing algorithm, the Two-Point Maximum Entropy Method (TPMEM). This regularization-based method is very effective in low-SNR signal enhancement and deconvolution. TPMEM is re-examined to clarify its strengths and weaknesses, and to provide ways to compensate for its limitations. Next, a new regularization method, based on the chi-squared statistic, is introduced and its ability to perform noise reduction and deconvolution is demonstrated. Then, two new automated 1D algorithms are introduced and demonstrated: a noise filter and a baseline correction scheme.

Second, a new two-dimensional (2D) regularization method (Matrix-MEM, or MxMEM), derived from TPMEM, is introduced and applied to SNR enhancement of images. Lastly, MxMEM and 2D wavelets are applied to 2DCoS noise reduction.

The main research contributions of this work are 1) the design of three new high-performance signal enhancement algorithms for 1D data which collectively address noise, instrumental blurring, baseline correction, and automation, 2) the design of a new high-performance SNR enhancement method for 2D data, and 3) the novel application of 2D methods to 2DCoS.
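
To make the regularization theme concrete, here is a minimal generic sketch of a regularization-based denoiser (a Whittaker-style smoother) that balances a chi-squared-like fidelity term against a roughness penalty. It illustrates the general approach only; it is not TPMEM, MxMEM, or the thesis's chi-squared method, and the test spectrum and smoothing weight are hypothetical.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def regularized_smooth(y, lam=50.0):
    """Whittaker-style smoother: minimize ||y - z||^2 + lam*||D2 z||^2,
    where D2 is the second-difference operator. The first term plays the
    role of a chi-squared fidelity constraint; the second enforces
    smoothness. Illustrative only; this is not TPMEM.
    """
    n = len(y)
    D2 = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = sparse.eye(n) + lam * (D2.T @ D2)
    return spsolve(A.tocsc(), y)

# Hypothetical noisy spectrum: two Gaussian peaks plus white noise.
x = np.linspace(0.0, 10.0, 500)
clean = np.exp(-(x - 3.0) ** 2) + 0.6 * np.exp(-((x - 7.0) ** 2) / 0.5)
noisy = clean + np.random.default_rng(1).normal(scale=0.1, size=x.size)
smoothed = regularized_smooth(noisy)
print(f"residual std: {np.std(smoothed - clean):.4f}")
```

Increasing `lam` trades fidelity for smoothness, which is the same trade-off the maximum-entropy methods above navigate with an entropy functional instead of a quadratic penalty.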

Master's Student Supervision (2010 - 2018)
Experiments and simulations on negative/positive bias temperature instability in 28nm CMOS devices (2015)

CMOS transistors continue to scale down, which brings along challenges such as process variation and NBTI/PBTI (Negative/Positive Bias Temperature Instability). My objectives in this project are to investigate the effects of aging on CMOS devices and to present experimental results that model the effect of N/PBTI, specifically targeting the 28nm technology node. The direct effect of transistor aging is a degradation of the device threshold voltage, which can lead to performance degradation or malfunctions. Settings such as server farms, data centers, and spacecraft, where long-term device reliability is critical and accessibility is an issue, can benefit from an aging-reversal process. In addition, as transistor channel lengths become smaller, devices are more prone to a reduced lifetime. The exact causes of aging are still not entirely understood, and as a result, no real mechanism to reverse the process has been fully implemented on FPGAs or ASICs. I believe the true solution to these scalability challenges lies within the device structure and materials used in CMOS transistors; however, accelerated recovery at high temperatures can also help reverse the effect of aging by a noticeable amount. I have been able to use this technique to reverse the effect of threshold voltage degradation in FPGAs. In this thesis, I present experimental results on the effect of degradation and recovery on a commercial FPGA. I then use the experimental results to calculate degradation parameters of transistor aging in this technology node and propose experimental setups for a 28nm ASIC.
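
The degradation-parameter extraction mentioned above can be illustrated with the widely used empirical BTI power law, ΔVth(t) = A·t^n, fitted in log-log space. The sketch below uses hypothetical placeholder data, not the thesis's 28nm measurements, and the simple power law omits voltage and temperature acceleration terms.

```python
import numpy as np

# Hypothetical stress measurements (placeholders, not the thesis data):
t = np.array([1e1, 1e2, 1e3, 1e4, 1e5])        # stress time, seconds
dvth = np.array([2.1, 3.4, 5.2, 8.0, 12.5])    # threshold-voltage shift, mV

# Fit dVth(t) = A * t**n by linear regression in log-log space:
# log(dVth) = log(A) + n * log(t).
n_exp, log_a = np.polyfit(np.log(t), np.log(dvth), 1)
a = np.exp(log_a)
print(f"A = {a:.2f} mV, n = {n_exp:.3f}")

# Extrapolate to a 10-year lifetime (the usual reliability target).
ten_years = 10 * 365 * 24 * 3600.0
print(f"projected 10-year shift: {a * ten_years ** n_exp:.1f} mV")
```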

Reducing post-silicon coverage monitoring overhead with emulation and statistical analysis (2015)

With increasing design complexity, post-silicon validation has become a critical problem. In pre-silicon validation, coverage is the primary metric of validation effectiveness, but in post-silicon, the lack of observability makes coverage measurement problematic. On-chip coverage monitors are a possible solution, where a coverage monitor is a hardware circuit that is set to one when the coverage event of interest occurs. However, prior research has shown that the overhead is prohibitive for anything beyond a small number of coverage monitors. In this thesis, I explore techniques that reduce the number of instrumented coverage monitors while still being able to imply full coverage with high probability. These techniques use a deterministic forward feature selection algorithm with objective functions based on statistical information. To gather the required statistical information, the method relies on emulation, where all coverage monitors of interest are instrumented on a version of the design. On this emulator, such as an FPGA, I stress the design with randomly generated tests to collect data from the instrumented coverage monitors. I propose three objective functions for the feature selection algorithm: the first estimates the probability of a coverage monitor being set during a test; the second builds a Bayesian Network (BN) and then takes advantage of the relationship information between nodes (coverage monitors) that the network provides; the last directly estimates the conditional probability of coverage from the gathered data. I demonstrate these techniques on a non-trivial system-on-chip by measuring the code coverage achieved after executing randomly generated test programs. Depending on the objective function, results show a 67.7% to 95.5% reduction in the number of required coverage monitors. In an ASIC implementation, this would translate into an impact of 0.33-1.96% in silicon area overhead and 0.40-2.70% in static power overhead. These results show that my technique works by demonstrating that it is possible to track a smaller number of coverage events that statistically represent the whole set.
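
Below is a minimal sketch of the deterministic forward-selection skeleton this abstract describes, using a deliberately simplified stand-in objective (marginal firing probability); the thesis's three objective functions, including the Bayesian-network one, are richer. The emulation data and monitor count are hypothetical.

```python
import numpy as np

def forward_select(hits, k):
    """Deterministic forward feature selection over coverage monitors.

    hits : (n_tests, n_monitors) boolean matrix from emulation runs;
           hits[t, m] is True if monitor m fired during test t.
    k    : number of monitors to instrument on silicon.
    """
    chosen = []
    remaining = list(range(hits.shape[1]))
    for _ in range(k):
        # Stand-in objective: pick the rarest-firing remaining monitor,
        # i.e. the one hardest to infer and most informative to observe.
        probs = hits[:, remaining].mean(axis=0)
        best = remaining[int(np.argmin(probs))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical emulation data: 1000 random tests over 64 monitors,
# each monitor with its own firing probability.
rng = np.random.default_rng(2)
hits = rng.random((1000, 64)) < rng.uniform(0.05, 0.95, size=64)
print(forward_select(hits, k=8))
```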

A study of two wideband CMOS LC-VCO structures (2012)

Phase-locked loops (PLLs) are widely used in telecommunication, radio, and computer applications. This thesis focuses on the study of wide-band PLLs, as they are a critical building block of many wireless and wireline systems. In particular, wide tuning range, low phase noise, and low power are desirable attributes for multi-standard and multi-band communication systems. One of the most critical components in a PLL is the voltage-controlled oscillator (VCO). In this work, two techniques for implementing a wide-tuning-range LC-VCO are presented. As a proof of concept, the techniques are used to design and lay out two 13-GHz LC-VCOs, which are fabricated in a 90-nm CMOS technology and successfully tested. One design (Design A) uses two VCO cores and has an extra source-follower buffer, while the other (Design B) uses one VCO core with a bank of switched capacitors. The 90-nm CMOS prototypes operate from a 1.2-V supply. The Design A prototype has a 28.20% tuning range and a phase noise of −90.98 dBc/Hz at 1 MHz offset from the carrier, while the Design B prototype has a 24.42% tuning range and a phase noise of −94.20 dBc/Hz at 1 MHz offset. This measured performance is comparable with state-of-the-art wide-tuning-range VCOs. The total chip size, excluding pads, is 0.335 × 0.750 mm² and 0.316 × 0.425 mm² for Designs A and B, respectively. It was found that the addition of the source-follower buffer allows the VCO to function at a higher frequency, while the presence of the switched capacitors tends to deteriorate phase noise.
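
For reference, the two standard relations behind the quoted figures: the oscillation frequency of an LC tank and the usual fractional (percentage) tuning-range definition. These are textbook formulas; that the thesis uses exactly this tuning-range convention is an assumption.

```latex
% Oscillation frequency of an LC tank, and the fractional tuning range
% commonly quoted for VCOs:
f_{\mathrm{osc}} = \frac{1}{2\pi\sqrt{LC}},
\qquad
\mathrm{TR} = \frac{2\,(f_{\max} - f_{\min})}{f_{\max} + f_{\min}} \times 100\%
% A switched-capacitor bank (Design B) steps C between discrete values,
% widening the achievable f_max/f_min ratio and hence TR.
```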

Post-silicon code coverage for functional verification of systems-on-chip (2012)

Post-silicon validation requires effective techniques to better evaluate the functional correctness of modern systems-on-chip. Coverage is the standard measure of validation effectiveness and is used extensively pre-silicon. However, there is little data evaluating the coverage of post-silicon validation efforts on industrial-scale designs. This thesis addresses this knowledge gap. We employ code coverage, one of the most frequently used coverage techniques in simulation, and apply it post-silicon. To show our coverage methodology in practice, we employ an industrial-size open-source SoC that is based on the SPARC architecture and is synthesizable to FPGA. We instrument code coverage in a number of IP cores and boot Linux as our experiment to evaluate coverage; booting an OS is a typical industrial post-silicon test. We also compare coverage between pre-silicon directed tests and the post-silicon Linux boot. Our results show that in some blocks the pre-silicon and post-silicon tests can achieve markedly different coverage figures: in one block we measured a coverage difference of over 50 percentage points between the pre- and post-silicon results, which underscores the importance of post-silicon coverage. Moreover, we calculate the area overhead imposed by the additional coverage circuitry on-chip. We apply state-of-the-art software analysis techniques to reduce this excessively large overhead while preserving data accuracy. The results in this thesis provide valuable data to guide future research in post-silicon coverage.
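
A minimal sketch of the pre- versus post-silicon comparison described above: given the sets of code-coverage points hit by each test suite, report per-block coverage and the percentage-point gap. Block names and coverage-point counts are hypothetical placeholders, not the thesis's measurements.

```python
def coverage_pct(covered, total):
    """Percentage of a block's coverage points hit by a test suite."""
    return 100.0 * len(covered) / total

# block: (total coverage points, pre-silicon hits, post-silicon hits)
blocks = {
    "iu_pipeline": (400, set(range(300)), set(range(120, 400))),
    "mmu":         (250, set(range(90)),  set(range(200))),
}

for name, (total, pre, post) in blocks.items():
    delta = coverage_pct(post, total) - coverage_pct(pre, total)
    print(f"{name}: pre {coverage_pct(pre, total):.1f}%, "
          f"post {coverage_pct(post, total):.1f}% "
          f"({delta:+.1f} pp)")
```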
