Panos Nasiopoulos

Professor


Graduate Student Supervision

Doctoral Student Supervision (Jan 2008 - May 2019)
Complexity reduction schemes for video compression (2017)

With consumers having access to a plethora of video-enabled devices, efficient transmission of video content at different quality levels and specifications has become essential. The primary way of achieving this is the simulcast approach, where different versions of the same video sequence are encoded and transmitted separately. This approach, however, requires significantly large amounts of bandwidth. Another solution is Scalable Video Coding (SVC), where a single bitstream consists of a base layer (BL) and one or more enhancement layers (ELs). At the decoder side, depending on bandwidth or the type of application, the appropriate part of the SVC bitstream is decoded. While SVC delivers different versions of the same video content within one bitstream at a reduced bitrate compared to the simulcast approach, it significantly increases coding complexity. However, the redundancies between the different versions of the same stream allow for complexity reduction, which in turn results in simpler hardware and software implementations and facilitates the wide adoption of SVC. This thesis addresses complexity reduction for spatial scalability, SNR/quality/fidelity scalability, and multiview scalability in the High Efficiency Video Coding (HEVC) standard. First, we propose a fast motion-estimation method for spatial scalability, followed by a probabilistic method for predicting block partitioning for the same scalability. Next, for quality scalability, we propose a content-adaptive complexity reduction method, a mode-prediction approach based on statistical studies, a Bayesian-based mode-prediction method, and an online-learning-based mode-prediction method. For the same bitrate and quality, our methods reduce coding complexity relative to the original SVC approach by 39% for spatial scalability and by 45% for quality scalability.
Finally, for 3D (multiview) scalability, we propose a content-adaptive complexity reduction scheme and a Bayesian-based mode-prediction scheme, followed by an online-learning-based complexity reduction scheme that incorporates the other two. Results show that our methods reduce complexity by approximately 23% compared to the original 3D approach at the same quality/bitrate. In summary, our methods can significantly reduce the complexity of SVC, enabling its market adoption.
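
The flavor of mode prediction described in this abstract can be sketched as follows. This is an illustrative simplification, not the thesis implementation: the training counts, mode names, and threshold are all hypothetical stand-ins. Given the mode of the co-located base-layer block, only enhancement-layer modes with a sufficiently high posterior probability are kept for full rate-distortion evaluation.

```python
# Illustrative sketch of Bayesian-style enhancement-layer (EL) mode prediction.
# All statistics and mode names below are hypothetical.
from collections import Counter

# Hypothetical training statistics: (base_layer_mode, el_mode) -> count
training_counts = Counter({
    ("SKIP", "SKIP"): 900, ("SKIP", "INTER_2Nx2N"): 80, ("SKIP", "INTRA"): 20,
    ("INTER_2Nx2N", "SKIP"): 300, ("INTER_2Nx2N", "INTER_2Nx2N"): 600,
    ("INTER_2Nx2N", "INTRA"): 100,
    ("INTRA", "SKIP"): 50, ("INTRA", "INTER_2Nx2N"): 250, ("INTRA", "INTRA"): 700,
})

def candidate_modes(base_mode, threshold=0.05):
    """Return EL modes with P(el_mode | base_mode) >= threshold."""
    total = sum(c for (b, _), c in training_counts.items() if b == base_mode)
    posterior = {e: c / total
                 for (b, e), c in training_counts.items() if b == base_mode}
    return {e for e, p in posterior.items() if p >= threshold}

# A base-layer SKIP block rarely becomes INTRA, so INTRA can be pruned
# from the EL search, saving the cost of evaluating it:
modes = candidate_modes("SKIP")
```

Pruning unlikely modes this way trades a small risk of suboptimal mode choice for a large reduction in the number of rate-distortion evaluations.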


Compression efficiency improvement for 2D and 3D video (2017)

Advances in video compression technologies have resulted in high visual quality at constrained bitrates. This is crucial for video transmission and storage, considering the limited bandwidth of communication channels and the limited capacity of storage media. In this thesis, we propose new methods for improving the compression efficiency of HEVC and its 3D extension for stereo and multiview video content.

To achieve high video quality while keeping the bitrate within certain constraints, the characteristics of the human visual system (HVS) play an important role. Utilizing video quality metrics based on the HVS and integrating them within the video encoder can improve compression efficiency. We therefore propose to measure distortion using a perceptual video quality metric (instead of the sum of squared errors) inside the coding-unit structure and for mode selection in the rate-distortion optimization process of HEVC. Experiments show that our method improves HEVC compression efficiency by 10.21%.

Next, we adjust the trade-off between perceptual distortion and bitrate based on the characteristics of the video content. The value of the Lagrange multiplier is estimated from the first frame of every scene in the video. Experimental results show that the proposed approach further improves the compression efficiency of HEVC (up to 2.62%, with an average of 0.60%).

Furthermore, we extend our work to the HEVC extension for 3D video. First, we integrate the perceptual video quality metric in the rate-distortion optimization process of stereo video coding, where the dependencies between the two views are exploited to improve coding efficiency. Next, we extend our approach to multiview video coding for auto-stereoscopic displays (where 3D content can be viewed without 3D glasses). In this case, two or three views and their corresponding depth maps need to be coded.
Our proposed perceptual 3D video coding increases the compression efficiency of multiview video coding by 2.78%.

Finally, we show that the compression efficiency of stereoscopic videos improves if we take advantage of asymmetric video coding. The proposed approach reduces the bitrate required for transmitting stereoscopic video while maintaining the stereoscopic quality.
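
The rate-distortion optimization this work plugs into can be sketched in a few lines. Lagrangian mode decision, J = D + λ·R, is standard in HEVC; the per-mode (distortion, rate) numbers below are hypothetical stand-ins, and the thesis's contribution is replacing the SSE distortion D with a perceptual metric.

```python
# Minimal sketch of Lagrangian mode decision: pick the mode minimizing
# J = D + lambda * R. Candidate costs below are hypothetical.
def best_mode(costs, lam):
    """costs: {mode: (distortion, rate_bits)}; return argmin of D + lam*R."""
    return min(costs, key=lambda m: costs[m][0] + lam * costs[m][1])

# Hypothetical candidates for one coding unit:
candidates = {
    "SKIP":        (120.0, 8),     # high distortion, very cheap
    "INTER_2Nx2N": (40.0, 96),     # good prediction, moderate rate
    "INTRA":       (25.0, 400),    # lowest distortion, expensive
}

# A small lambda favors quality; a large lambda favors low rate.
low_lambda_choice = best_mode(candidates, lam=0.01)    # -> "INTRA"
high_lambda_choice = best_mode(candidates, lam=2.0)    # -> "SKIP"
```

Estimating λ per scene, as the abstract describes, amounts to tuning this quality/rate balance to the content rather than using a fixed QP-derived value.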


Energy efficient video sensor networks for surveillance applications (2016)

Video sensor networks (VSNs) provide rich sensing information and coverage, both beneficial for applications requiring visual information, such as smart homes, traffic control, healthcare systems, and monitoring/surveillance systems. Since a VSN-based surveillance application is usually assumed to have limited resources, energy efficiency has become one of the most important design aspects of such networks. However, unlike common sensor-network platforms, where power consumption mostly comes from wireless transmission, the encoding process in a video sensor network contributes a significant portion of the overall power consumption. There is a trade-off between encoding complexity and bitrate, in the sense that increasing compression performance, i.e., achieving a lower bitrate, requires a more complex encoding process. The coding complexity and video bitrate determine the overall encoding and transmission power consumption of a VSN. Thus, choosing the configuration and parameter settings that lead to optimal encoding performance is of primary importance for controlling power consumption in VSNs. The coding complexity and bitrate also depend on the complexity of the video content, as spatial detail and high motion tend to lead to higher computation costs or increased bitrates. In a video surveillance network, each node captures an event from a different point of view, such that each captured video stream has unique spatial and temporal information. This thesis investigates the trade-off between encoding complexity and communication power consumption in a video surveillance network, taking into account the effects of video encoding parameters, content complexity, and network topology. To account for the effect of content complexity, we created a video surveillance dataset consisting of a large number of captured videos with different levels of spatial information and motion.
We then propose models that estimate coding complexity and bitrate, and use them in an algorithm that minimizes the surveillance network's power consumption for different scene settings, estimating for each node the encoding parameters that yield the minimum possible power consumption for the entire network.
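
The encoding/transmission trade-off described above can be illustrated with a toy model. Every constant and functional form here is hypothetical, not from the thesis: a wider motion search costs more CPU power but lowers the bitrate, so transmission power drops, and an intermediate setting minimizes the total.

```python
# Toy power model for one VSN node (arbitrary units; all constants hypothetical).
def total_power(search_range, content_activity=1.0,
                p_cpu_per_sad=1e-4, p_tx_per_kbit=0.5):
    """Encoding power grows with the motion-search window; transmission
    power shrinks as a wider search yields a lower bitrate."""
    sad_ops = (2 * search_range + 1) ** 2               # SAD evaluations per block
    encoding_power = p_cpu_per_sad * sad_ops * 1000     # assumed per-frame scaling
    bitrate_kbps = 800 * content_activity / (1 + 0.1 * search_range)
    transmission_power = p_tx_per_kbit * bitrate_kbps
    return encoding_power + transmission_power

# Exhaustive search over the one encoder knob (the thesis optimizes jointly
# over several parameters and all nodes; one knob suffices to show the idea):
best = min(range(1, 65), key=total_power)
```

With these made-up constants the minimum falls strictly inside the parameter range, which is the whole point: neither the cheapest encoder nor the most aggressive compressor minimizes total node power.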


3D video quality assessment (2015)

A key factor in designing 3D systems is understanding how different visual cues and distortions affect the perceptual quality of 3D video. The ultimate way to assess video quality is through subjective tests. However, subjective evaluation is time consuming, expensive, and in many cases not even possible. An alternative is objective quality metrics, which attempt to model the Human Visual System (HVS) in order to assess perceptual quality. The potential of 3D technology to significantly improve the immersiveness of video content has been hampered by the difficulty of objectively assessing Quality of Experience (QoE). A no-reference (NR) objective 3D quality metric, which could help determine capturing parameters and improve playback perceptual quality, would be welcomed by camera and display manufacturers. Network providers would embrace a full-reference (FR) 3D quality metric, as they could use it to ensure efficient QoE-based resource management during compression and Quality of Service (QoS) during transmission.

In this thesis, we investigate the objective quality assessment of stereoscopic 3D video. First, we propose a full-reference Human-Visual-System-based 3D (HV3D) video quality metric, which efficiently takes into account the fusion of the two views as well as depth-map quality. Subjective experiments verified the performance of the proposed method. Next, we investigate no-reference quality assessment of stereoscopic video. To this end, we study the importance of various visual saliency attributes in 3D video. Based on the results of our study, we design a learning-based visual saliency prediction model for 3D video. Eye-tracking experiments helped verify the performance of the proposed 3D Visual Attention Model (VAM). A benchmark dataset containing 61 captured stereo videos, their eye-fixation data, and performance evaluations of 50 state-of-the-art VAMs has been created and made publicly available online.
Finally, we incorporate the saliency maps generated by our 3D VAM in the design of state-of-the-art no-reference (NR) and full-reference (FR) 3D quality metrics.


Capturing and post-processing of stereoscopic 3D content for improved quality of experience (2013)

3D video can offer a real-life viewing experience by providing depth impression. However, 3D technology has not yet been widely adopted due to challenging 3D-related issues, ranging from capturing to post-processing and display. At the capturing side, a lack of guidelines may lead to artifacts that cause viewers headaches and nausea. At the display side, 3D content that is not customized to a certain aspect ratio, display size, or display technology may result in reduced quality of experience. Combining 3D with high-dynamic-range imaging technology adds exciting features toward a real-life experience, but conventional low-dynamic-range content often suffers from color-saturation distortion when shown on high-dynamic-range displays. This thesis addresses three important issues in capturing and post-processing 3D content to achieve improved quality of experience. First, we provide guidelines for capturing and displaying 3D content. We build a 3D image and video database with content captured at various distances from the camera lenses and under different lighting conditions. We conduct comprehensive subjective tests on 3D displays of different sizes to determine the influence of these parameters on the quality of 3D images and videos before and after horizontal-parallax adjustment. Next, we propose a novel and complete pipeline for automatic content-aware 3D video reframing. We develop a bottom-up 3D visual attention model that identifies the prominent regions in a 3D video frame. We further provide a dynamic bounding box that crops the video and avoids annoying problems such as jittering and window violation. Experimental results show that our algorithm is both effective and robust. Finally, we propose two algorithms for correcting saturation in color images and videos. One algorithm uses a fast Bayesian-based approach that exploits images' strong spatial correlation and the correlations between the R, G, and B color channels.
The other algorithm takes advantage of the strong correlation between the chroma of saturated pixels and that of their surrounding unsaturated pixels. Experimental results show that our methods effectively correct saturated 2D and 3D images and videos. Our algorithms significantly outperform the existing state-of-the-art method in both objective and subjective quality, producing plausible content that resembles real-world scenes.


Correcting capturing and display distortions in 3D video (2012)

3D video systems provide a sense of depth by showing slightly different images to the viewer's left and right eyes. 3D video is usually generated by capturing a scene with two or more cameras, and 3D displays need to be able to concurrently show at least two different images. The use of multiple cameras and multiple display channels creates problems that are not present in 2D video systems. At the capturing side, there can be inconsistencies among the videos captured with the different cameras; for example, the videos may differ in brightness, colour, sharpness, etc. At the display side, crosstalk is a major problem. Crosstalk is an effect in which there is incomplete separation of the images intended for the two eyes, so the left eye sees a portion of the image intended for the right eye and vice versa. In this thesis, we develop methods for correcting these capturing and display distortions in 3D video systems through new digital video processing algorithms.

First, we propose a new method for correcting the colour of multiview video sets. Our method modifies the colour of all the input videos to match the average colour of the original set of views. Experiments show that applying our method greatly improves the efficiency of multiview video coding. We present a modification of our colour-correction algorithm that also corrects vignetting (darkening of an image near its corners), which is useful when images are stitched together into a panorama.

Next, we present a method for making stereo images match in sharpness, based on scaling the discrete cosine transform coefficients of the images. Experiments show that our method can greatly increase the accuracy of depth maps estimated from two images that differ in sharpness, which is useful in 3D systems that use view rendering.

Finally, we present a new algorithm for crosstalk compensation in 3D displays.
Our algorithm selectively adds local patches of light to regions that suffer from visible crosstalk, while considering temporal consistency to prevent flickering. Results show that our method greatly reduces the appearance of crosstalk while preserving image contrast.
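
The problem this algorithm solves can be seen in the standard subtractive baseline, sketched below. This is not the thesis method, just the textbook approach it improves on: with crosstalk ratio c, the left eye perceives L + c·R, so the display drives L − c·R; where that goes negative, the correction cannot be applied, which is exactly why the thesis adds light locally to lift the floor only where needed.

```python
# Subtractive crosstalk compensation for one pixel pair (baseline sketch,
# not the thesis algorithm). Pixel values are normalized to [0, 1].
def compensate(left, right, c=0.05):
    """c is the crosstalk ratio. Returns (value to drive, residual error)."""
    driven = max(0.0, left - c * right)     # cannot drive below black
    perceived = driven + c * right          # what the left eye actually sees
    return driven, perceived - left         # residual > 0: crosstalk remains

d1, err1 = compensate(left=0.6, right=0.8)   # bright pixel: fully cancellable
d2, err2 = compensate(left=0.0, right=1.0)   # black pixel: residual remains
```

The second case shows the failure mode: a dark pixel opposite a bright one keeps a visible residual (here c·R = 0.05), motivating the selective light patches described in the abstract.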


Tone-mapping high dynamic range images and videos for bit-depth scalable coding and 3D displaying (2012)

High dynamic range (HDR) images and videos provide superior picture quality by allowing a larger range of brightness levels to be captured and reproduced than their traditional 8-bit low dynamic range (LDR) counterparts. Even with existing 8-bit displays, picture quality can be significantly improved if the content is first captured in HDR format and then converted to LDR format. This conversion process is called tone-mapping. In this thesis, we address different aspects of tone-mapping.

HDR video formats are unlikely to be broadly accepted without backward compatibility with LDR devices. We first consider the case where only the tone-mapped LDR content is transmitted and the HDR video is reconstructed at the receiver by inversely tone-mapping the encoded-decoded LDR video. We show that an appropriate choice of tone-mapping operator can result in a reconstructed HDR video of good quality. We develop a statistical model of the distortion resulting from tone-mapping, compressing, decompressing, and inversely tone-mapping the HDR video. This model is used to formulate an optimization problem that finds the tone curve minimizing the distortion in the reconstructed HDR video. We also derive a simplified version of the model that leads to a closed-form solution of the optimization problem.

Next, we consider the case where the HDR content is transmitted using an LDR layer and an enhancement layer. We formulate an optimization problem that minimizes the transmitted bitrate of a video sequence while producing a tone-mapped video that satisfies a desired perceptual appearance. The problem formulation also contains a constraint that suppresses temporal flickering artifacts.

We also propose a technique that tone-maps an HDR video directly in a compression-friendly color space (e.g., YCbCr), without the need to convert it to the RGB domain.

Finally, we study the design of 3D HDR-LDR tone-mapping operators.
To find the tone-mapping characteristics that contribute to a good 3D representation, we perform subjective psychophysical experiments to i) evaluate existing tone-mapping operators on 3D HDR images and ii) investigate how the preferred levels of brightness and detail differ between 3D and 2D images. The results are analyzed to identify the desired attributes.


3D-TV Content generation and multi-view video coding (2010)

The success of 3D technology, and the speed at which it penetrates the entertainment market, will depend on how well the challenges facing the 3D-broadcasting system are resolved. The three main 3D-broadcasting system components are 3D content generation, 3D video transmission, and 3D display. One obvious challenge is the unavailability of a wide variety of 3D content. Thus, besides generating new 3D-format videos, it is equally important to convert existing 2D material to the 3D format, because the generation of new 3D content is highly demanding and, in most cases, involves post-processing correction algorithms. Another major challenge is the transmission of a huge amount of data, a problem that becomes much more severe for multiview video content. This thesis addresses three aspects of these 3D-broadcasting system challenges.

First, we address the problem of converting 2D-acquired video to a 3D format. We propose two new and efficient methods that exploit the relationship between the motion of objects and their distance from the camera to estimate the depth map of the scene in real time. These methods can be used at the transmitter or receiver end; employing them at the receiver end is especially advantageous, since they do not increase transmission bandwidth requirements. Performance evaluations show that our methods outperform the existing technique by providing better depth approximation and thus a better 3D visual effect.

Second, we study one of the problems caused by unsynchronized zooming in stereo-camera video acquisition and develop an effective algorithm for correcting unsynchronized zoom in 3D videos. The proposed scheme finds corresponding pairs of pixels between the left and right views and the relationship between them, which is used to estimate the amount of scaling and translation needed to align the views.
Experimental results show that our method produces videos with negligible scale difference and vertical parallax.

Lastly, we address the problem of transmitting 3D content and propose two schemes for multiview video coding (MVC). While both methods outperform the current MVC standard, one of them introduces significantly less random-access delay than the MVC standard.
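
The motion-to-depth cue underlying the 2D-to-3D conversion can be sketched in a few lines. This is a deliberately simplified illustration, not the thesis methods: under a moving camera, nearby objects show larger apparent motion, so a block's motion-vector magnitude can be mapped to a relative depth. The clipping threshold is a hypothetical constant.

```python
# Toy motion-to-depth mapping for one block (illustration only).
import math

def relative_depth(mv_x, mv_y, max_motion=16.0):
    """Map block motion magnitude to a relative depth in [0, 1] (1 = near)."""
    mag = min(math.hypot(mv_x, mv_y), max_motion)   # clip outliers
    return mag / max_motion

near = relative_depth(12.0, 9.0)   # large apparent motion -> near
far = relative_depth(0.5, 0.0)    # small apparent motion -> far
```

A real converter must also handle independently moving objects and static scenes, which is where the thesis methods go well beyond this sketch.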


Advances in medical image compression: novel schemes for highly efficient storage, transmission and on-demand scalable access for 3D and 4D medical imaging data (2010)

Three-dimensional (3D) and four-dimensional (4D) medical images are increasingly being used in many clinical and research applications. Due to their huge file size, 3D and 4D medical images pose heavy demands on storage and archiving resources. Lossless compression methods facilitate access to such data and reduce its storage burden, while avoiding any loss of valuable clinical information. In this thesis, we propose novel methods for highly efficient storage and scalable access of 3D and 4D medical imaging data that outperform the state-of-the-art. Specifically, we propose (1) a symmetry-based technique for scalable lossless compression of 3D medical images; (2) a 3D scalable medical image compression method with optimized volume-of-interest (VOI) coding; (3) a motion-compensation-based technique for lossless compression of 4D medical images; and (4) a lossless functional magnetic resonance imaging (fMRI) compression method based on motion compensation and customized entropy coding. The proposed symmetry-based technique for scalable lossless compression of 3D medical images employs wavelet transform technology and a prediction method that reduces the energy of the wavelet sub-bands based on a set of axes of symmetry. We achieve VOI coding by employing an optimization technique that maximizes the reconstruction quality of a VOI at any bitrate, while incorporating partial background information and allowing for a gradual increase in peripheral quality around the VOI. The proposed lossless compression method for 4D medical imaging data employs motion estimation and compensation to exploit the spatial and temporal correlations of 4D medical images.
Similarly, the proposed fMRI lossless compression method employs a motion compensation process that uses a 4D search, bi-directional prediction, and variable-size block matching for motion estimation, together with a new context-based adaptive binary arithmetic coder to compress the residual and motion-vector data generated by the motion compensation process. We demonstrate that the proposed methods achieve superior compression performance compared to the state-of-the-art, including JPEG2000 and 3D-JPEG2000.


Computationally efficient techniques for H.264/AVC transcoding applications (2010)

Providing universal access to end-users is the ultimate goal of the communications, entertainment, and broadcasting industries. H.264/AVC has become the coding choice for broadcasting and entertainment (i.e., DVD/Blu-ray), meaning that the latest set-top boxes and playback devices support this video standard. Since many existing videos were encoded using previous video coding standards (e.g., MPEG-2), playing them back on the new devices is possible only if they are converted, or transcoded, into the H.264/AVC format. In addition, even when videos are compressed using H.264/AVC, transmitting them over different networks for different user applications (e.g., mobile phones, TV) requires transcoding to adapt them to different bandwidth and resolution requirements. This thesis tackles H.264/AVC transcoding problems in three respects. First, we propose algorithms that improve the resulting video quality of the transform-domain MPEG-2 to H.264/AVC transcoding structure. Transform-domain transcoding offers the least complexity; however, it produces transcoded videos that suffer from some inherent distortions. We provide a theoretical analysis of these distortions and propose algorithms that compensate for them. Performance evaluation shows that the proposed algorithms greatly improve the quality of the transcoded video at reasonable computational complexity. Second, we develop an algorithm that speeds up pixel-domain MPEG-2 to H.264/AVC transcoding. Motion re-estimation is the most time-consuming process in this type of transcoding. The proposed algorithm accelerates motion re-estimation by predicting the H.264/AVC block-size partitioning. Performance evaluation shows that the proposed algorithm significantly reduces computational complexity compared to the existing state-of-the-art method while maintaining the same compression efficiency.
Finally, we propose an algorithm that accelerates the transcoding of a coded H.264/AVC video to a lower resolution with an arbitrary downscaling ratio. To accelerate the encoding of the downscaled video, the proposed algorithm derives accurate initial motion vectors for it, greatly reducing the computational complexity of the motion re-estimation process. Compared to other state-of-the-art downscaling methods, the proposed method requires the least computation while yielding the best compression efficiency.
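
A common baseline for deriving such initial motion vectors, which the thesis method refines, is to area-weight the motion vectors of the source blocks that a downscaled block covers and then scale the result by the downscaling ratio. The sketch below illustrates that baseline with hypothetical numbers, not the exact thesis algorithm.

```python
# Area-weighted initial motion-vector derivation for downscaling transcoding
# (baseline sketch; overlap areas and MVs below are hypothetical).
def derive_mv(covered, ratio):
    """covered: list of (mv_x, mv_y, overlap_area) for the source blocks a
    downscaled block overlaps; ratio: e.g. 0.5 for 2:1 downscaling."""
    area = sum(a for _, _, a in covered)
    avg_x = sum(x * a for x, _, a in covered) / area
    avg_y = sum(y * a for _, y, a in covered) / area
    return avg_x * ratio, avg_y * ratio

# A downscaled block overlapping two source blocks, 3/4 and 1/4 of its area:
mv = derive_mv([(8.0, 4.0, 48), (16.0, 4.0, 16)], ratio=0.5)
```

Seeding motion re-estimation with such a vector shrinks the search window dramatically, which is where most of the transcoding speed-up comes from.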


Fast motion estimation methods for H.264 video coding on mobile devices (2010)

Digital video is becoming an increasingly widespread application on a multitude of devices, ranging from mobile devices to digital cinema. Technological advancements in processing speed and available bandwidth, along with substantial improvements in compression techniques, enable completely new applications and services for digital video content.

The most demanding task in video encoding is the motion estimation process, which aims to identify similarities to previously transmitted video frames. Up to 90% of the processing requirements are attributable to this element.

In this thesis, we present three methods for encoding new and transcoding existing video content with significantly reduced computational complexity while maintaining both quality and bitrate. The first method reduces the number of steps required to perform motion estimation by adaptively adjusting the search accuracy needed in distortion measurement. The second method addresses mode decision in video encoding and provides an algorithm that allows an early decision on the most probable modes without the need to evaluate all 259 combinations of block sizes. The third method provides a multi-dimensional measure that facilitates evaluating only the most likely modes for efficiently transcoding existing pre-encoded content to lower resolutions with an arbitrary downscaling ratio. This is an important factor for the ever-growing number of devices and application scenarios that access existing pre-encoded content. Our method supplements existing fast transcoding schemes that primarily focus on efficiently determining motion vectors in transcoding.


Modeling of scalable video content for multi-user wireless transmission (2009)

This thesis addresses different aspects of wireless transmission of scalable video content to multiple users over lossy and under-provisioned channels. Modern wireless video transmission systems, such as the Third Generation Partnership Project (3GPP)'s high-speed packet access (HSPA) networks and IEEE 802.11-based wireless local area networks (WLANs), allow common bandwidth resources to be shared among multiple video users. However, the unreliable nature of the wireless link results in packet losses and fluctuations in the available channel capacity. This calls for flexible encoding, error protection, and rate control strategies implemented at the video encoder or base station.

The scalable video coding (SVC) extension of the H.264/AVC video standard delivers quality-scalable video bitstreams that help define and provide quality-of-service (QoS) guarantees for wireless video transmission applications. We develop real-time rate and distortion estimation models for the coarse/medium granular scalability (CGS/MGS) features of SVC. These models allow mobile video encoders to predict the packet size and corresponding distortion of a video frame using only the residual mean absolute difference (MAD) and the quantization parameter (QP).

This thesis employs cross-layer resource allocation techniques that jointly optimize the video bitrate, error protection, and latency control algorithms in pre-encoded and real-time streaming scenarios. In the first scenario, real-time multi-user streaming with dynamic channel throughput and packet losses is handled by controlling the base- and enhancement-layer quality as well as the unequal erasure protection (UXP) overhead to minimize frame-level distortion. The second scenario considers pre-encoded scalable video streaming over capacity-limited wireless channels suffering from latency problems and packet losses.
We develop a loss-distortion model for hierarchical predictive coders and employ dynamic UXP allocation with a delay-aware, non-stationary rate-allocation streaming policy. The third scenario addresses the problem of efficiently allocating multi-rate IEEE 802.11-based network resources among multiple scalable video streams under temporal fairness constraints. We present joint link adaptation at the physical (PHY) layer and a dynamic packet-dropping mechanism at the network or medium access control (MAC) layer for multi-rate wireless networks. We demonstrate that these methods result in significant performance gains over existing schemes.
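
A rate model driven only by MAD and QP, as described above, can be sketched as follows. The quadratic form and the QP-to-step-size relation are standard for H.264/AVC (the quantization step size doubles every 6 QP); the model coefficients below are hypothetical, not the ones fitted in the thesis.

```python
# Sketch of a MAD/QP-driven frame-rate model in the spirit described above.
def qstep(qp):
    """H.264/AVC quantization step size: Qstep(4) = 1.0, doubling every +6 QP."""
    return 2.0 ** ((qp - 4) / 6.0)

def predicted_bits(mad, qp, c1=3000.0, c2=1500.0):
    """Quadratic rate model; c1, c2 are hypothetical fitted coefficients."""
    q = qstep(qp)
    return c1 * mad / q + c2 * mad / (q * q)

# Coarser quantization (larger QP) -> fewer predicted bits for the same MAD:
r_fine = predicted_bits(5.0, qp=22)
r_coarse = predicted_bits(5.0, qp=34)
```

Because MAD is available before the transform/entropy stages run, a model of this shape lets the encoder size packets and steer rate control in real time, which is what the streaming scenarios above rely on.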


Master's Student Supervision (2010 - 2018)
High quality virtual view synthesis for immersive video applications (2018)

Advances in image and video capturing technologies, coupled with the introduction of innovative multiview displays, present new opportunities and challenges to content providers and broadcasters. New technologies that allow multiple views to be displayed to the end-user, such as Super Multiview (SMV) and Free Viewpoint Navigation (FN), aim at creating an immersive experience by offering additional degrees of freedom to the user. Since transmission bitrates are proportional to the number of cameras used, reducing the number of capturing devices and synthesizing/generating intermediate views at the receiver end is necessary for decreasing the required bandwidth and paving the path toward practical implementation.

View synthesis is the common approach for creating new virtual views, either for expanding the coverage or for closing the gap between existing real camera views, depending on the type of Free Viewpoint TV application, i.e., SMV or a 2D walk-around-scene-like (FN) immersive experience. In these implementations, the majority of the cameras commonly have dissimilar characteristics and different viewpoints, often yielding significant luminance and chrominance discrepancies among the captured views. As a result, synthesized views may have visual artifacts caused by incorrect estimation of missing texture in occluded areas and by brightness and color differences between the original real views.

In this thesis, we propose view synthesis methods that address the inefficiencies of conventional approaches by eliminating background leakage and by using edge-aware background warping and inter-pixel color interpolation to avoid deformation of foreground objects. Improved occlusion filling is achieved by using information from a temporally constructed background.
We also propose a new view synthesis method specifically designed for FN applications, addressing the challenge of brightness and color transitions between consecutive virtual views. Subjective and objective evaluations show that our methods significantly improve the overall quality of the synthesized videos.


Tone mapping of high dynamic range video for video gaming applications (2018)

High Dynamic Range (HDR) technology is regarded as the latest revolution in digital multimedia, as it aims at capturing, distributing, and displaying a range of luminance and color values that better corresponds to what the human eye can perceive. Inevitably, physically based rendering in HDR has recently gained a lot of interest in the video gaming industry. However, the limited availability of commercial HDR displays on one hand, and the large installed base of Standard Dynamic Range (SDR) displays on the other, impose the need for techniques to efficiently display HDR content on SDR TVs. Several such techniques, known as Tone-Mapping Operators (TMOs), have been proposed, but all of them are designed for natural content. As such, these TMOs fail to address the unique characteristics of HDR gaming content, causing loss of detail and introducing visual artifacts such as brightness and color inconsistencies.

In this thesis, we propose an automated, low-complexity, content-adaptive video TMO specifically designed for video gaming applications. The proposed method uses the distribution of HDR light information in the perceptual domain and takes advantage of the unique properties of rendered HDR gaming content to calculate a global piecewise-linear tone-mapping curve that efficiently preserves the global contrast and texture details of the original HDR scene. A flickering-reduction method is also introduced that eliminates brightness inconsistencies caused by the tone-mapping process while successfully detecting scene changes. Subjective and objective evaluations have shown that our method outperforms existing TMOs, offering better overall visual quality for video gaming content.
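
A histogram-driven global piecewise-linear tone curve of the general kind described above can be sketched as follows. This is a hypothetical simplification, not the thesis TMO: luminance bins that hold more pixels receive a proportionally larger share of the SDR output range, so contrast is preserved where the content actually lives.

```python
# Histogram-driven piecewise-linear tone curve (illustrative sketch only).
def build_curve(hist):
    """hist: pixel counts per equal-width HDR luminance bin.
    Returns cumulative output levels in [0, 1] at each bin edge."""
    total = sum(hist)
    edges, acc = [0.0], 0.0
    for count in hist:
        acc += count / total       # slope of a bin ~ its pixel share
        edges.append(acc)
    return edges

def tone_map(l_norm, edges):
    """Piecewise-linear lookup of normalized HDR luminance l_norm in [0, 1]."""
    n = len(edges) - 1
    i = min(int(l_norm * n), n - 1)
    frac = l_norm * n - i
    return edges[i] + frac * (edges[i + 1] - edges[i])

edges = build_curve([10, 70, 15, 5])   # most pixels sit in the second bin
mid = tone_map(0.5, edges)             # steep slope through the popular bin
```

Recomputing such a curve per frame is what introduces the temporal flickering that the abstract's flickering-reduction step then suppresses, e.g. by smoothing the curve across frames within a scene.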


Improving non-constant luminance color encoding efficiency for high dynamic range video applications (2017)

Non-Constant Luminance (NCL) and Constant Luminance (CL) are the two common methods for converting RGB values to luma and chroma for compression efficiency. CL coefficients are derived from the luminous efficacy of the color gamut's primaries in the linear-light domain. NCL applies the same coefficients to non-linear inputs, i.e., values perceptually encoded with a transfer function, leading to reduced compression efficiency and color shifts. However, since legacy cameras capture perceptually encoded values of light, it is common practice to use NCL in existing video distribution pipelines. Although color distortion was not a serious problem in legacy Standard Dynamic Range (SDR) systems, this is not the case for High Dynamic Range (HDR) applications, where color shifts become much more visible and stand in the way of delivering high-quality HDR. In this thesis, we propose methods that address the inefficiencies of the conventional NCL method by optimizing NCL luma values to be as close as possible to those of CL, thus improving compression performance and color accuracy while maintaining the current pipeline infrastructure. First, we develop a global optimization method for deriving new optimum coefficients that bring NCL values closer to those of the CL approach. Then, we improve upon this approach through content-based optimization. This adaptive optimization method takes the content's pixel density into consideration and optimizes only over these color distributions. Finally, we propose a weighted global optimization method, which separates chromaticity into three categories (Red, Green, and Blue) and assigns weights based on their contributions to luminance. Evaluations show that the proposed methods improve color quality and compression efficiency over conventional NCL.
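The NCL/CL discrepancy the thesis targets can be demonstrated in a few lines. The sketch below uses the standard BT.709 luma coefficients and a simple power law as a stand-in for the actual HDR transfer function (e.g. PQ); the divergence between the two luma values for saturated colors is the error that the proposed optimized coefficients try to minimize.

```python
import numpy as np

# BT.709 luma coefficients, derived from the luminous efficacy
# of the gamut's primaries in the linear-light domain.
KR, KG, KB = 0.2126, 0.7152, 0.0722
GAMMA = 2.4  # simple power law standing in for the real transfer function

def oetf(x):
    """Simplified perceptual encoding (pure power law, not full PQ)."""
    return np.power(x, 1.0 / GAMMA)

def cl_luma(r, g, b):
    """Constant luminance: weight LINEAR light, then perceptually encode."""
    return oetf(KR * r + KG * g + KB * b)

def ncl_luma(r, g, b):
    """Non-constant luminance: weight the already-encoded R'G'B' values."""
    return KR * oetf(r) + KG * oetf(g) + KB * oetf(b)
```

For neutral grays the two agree exactly (the encoding commutes with a weighted sum of equal values), but for a saturated color such as (0.9, 0.1, 0.1) they diverge noticeably, which is what manifests as luma error and color shift after chroma subsampling and compression.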


Automatic real-time 2D-to-3D video conversion (2016)

The generation of three-dimensional (3D) videos from monoscopic two-dimensional (2D) videos has received a lot of attention in the last few years. Although the concept of 3D has existed for a long time, research on converting 2D to 3D in real time is still ongoing. Current conversion techniques are based on generating an estimated depth map for each frame from different depth cues, and then using Depth Image Based Rendering (DIBR) to synthesize additional views. Efficient interactive techniques have been developed in which multiple depth factors (monocular depth cues) are utilized to estimate the depth map using machine-learning algorithms. The challenge with such methods is that they cannot be used for real-time conversion. We address this problem by proposing an effective scheme that generates high-quality depth maps for indoor and outdoor scenes in real time. In our work, we classify the 2D videos into indoor or outdoor categories using machine-learning-based scene classification. Subsequently, we estimate the initial depth maps for each video frame using different depth cues based on the classification results. Then, we fuse these depth maps, and the final depth map is computed in two steps. First, depth values are estimated at edges. Then, these depth values are propagated to the rest of the image using an edge-aware interpolation method. Performance evaluations show that our method outperforms existing state-of-the-art 2D-to-3D conversion methods.
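The two-step edge-then-propagate idea can be sketched with a bilateral-style interpolation: depth values known at edge pixels are spread to every pixel with weights that respect both spatial distance and intensity similarity, so depth discontinuities stay aligned with image edges. This brute-force O(pixels × edges) version is for clarity only and is not the thesis's real-time implementation; `propagate_edge_depth` is a hypothetical name.

```python
import numpy as np

def propagate_edge_depth(image, edge_mask, edge_depth, sigma_s=5.0, sigma_r=0.1):
    """Spread depth values estimated at edge pixels to the whole image
    using bilateral-style (spatial + intensity) weights."""
    h, w = image.shape
    ys, xs = np.nonzero(edge_mask)          # edge pixel coordinates
    d = edge_depth[ys, xs]                  # known depth at edges
    vals = image[ys, xs]                    # image intensity at edges
    depth = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            spatial = ((ys - i) ** 2 + (xs - j) ** 2) / (2 * sigma_s ** 2)
            ranged = (vals - image[i, j]) ** 2 / (2 * sigma_r ** 2)
            wgt = np.exp(-spatial - ranged)
            depth[i, j] = np.sum(wgt * d) / (np.sum(wgt) + 1e-12)
    return depth
```

Pixels on either side of an intensity edge inherit depth only from edge samples with similar intensity, so the propagated depth map keeps a sharp step at the object boundary instead of smearing across it.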


A visual attention model for high dynamic range (HDR) video content (2015)

High dynamic range (HDR) imaging is gaining widespread acceptance in computer graphics, photography and the multimedia industry. Representing scenes with values corresponding to real-world light levels, HDR images and videos provide superior picture quality and a more life-like visual experience than traditional 8-bit Low Dynamic Range (LDR) content. In this thesis, we present several attempts to assess and improve the quality of HDR using subjective and objective approaches. We first conducted in-depth studies of HDR compression and HDR quality metrics. We show that High Efficiency Video Coding (HEVC) outperforms the previous compression standard on HDR content and could be used as a platform for HDR compression if provided with some necessary extensions. We also find that, compared to other quality metrics, the Visual Information Fidelity (VIF) metric has the highest correlation with subjective opinions on HDR videos. These findings contributed to the development of methods that optimize existing video compression standards for HDR applications. Next, the viewing experience of HDR content is evaluated both subjectively and objectively. The study shows a clear subjective preference for HDR content when individuals are given a choice between HDR and LDR displays. Eye-tracking data were collected from individuals viewing HDR content in a free-viewing task and were utilized in the development of a visual attention model for HDR content. Last but not least, we propose a computational approach to predict visual attention for HDR video content, the only one of its kind, as all existing visual attention models are designed for HDR images. The proposed approach simulates the characteristics of the Human Visual System (HVS) and makes predictions by combining spatial and temporal visual features. Analysis using eye-tracking data affirms the effectiveness of the proposed model. Comparisons employing three well-known quantitative metrics show that the proposed model substantially improves predictions of visual attention for HDR content.
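The combination of spatial and temporal features into one attention map can be sketched as below. This is an illustrative toy, not the thesis's model: spatial saliency is approximated by center-surround contrast on log-compressed luminance, temporal saliency by the frame difference, and the mixing weights are arbitrary placeholders rather than learned values.

```python
import numpy as np

def box_blur(x, k=7):
    """Cheap local mean via a separable moving average."""
    kern = np.ones(k) / k
    x = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, x)

def hdr_saliency(prev_lum, cur_lum, w_spatial=0.6, w_temporal=0.4):
    """Combine spatial center-surround contrast with temporal frame
    difference into one normalized attention map (illustrative weights)."""
    log_l = np.log1p(cur_lum)                    # compress the HDR range first
    spatial = np.abs(log_l - box_blur(log_l))    # center-surround contrast
    temporal = np.abs(log_l - np.log1p(prev_lum))
    sal = w_spatial * spatial + w_temporal * temporal
    return sal / (sal.max() + 1e-12)             # normalize to [0, 1]
```

A bright object appearing between two frames scores high on both channels, so the combined map peaks on it, which is the qualitative behavior a spatio-temporal attention model is meant to capture.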


Perceptually Based Compression of Emerging Digital Media Content (2014)

No abstract available.

Improvements of interpolation and extrapolation view synthesis rendering for 3D and multiview displays (2013)

To display video content in 3D, traditional stereoscopic televisions require two views of the same scene filmed at a small distance from one another. Unfortunately, having the required number of views is not always possible due to the complexity of obtaining them and the required transmission bandwidth. In cases where more advanced auto-stereoscopic televisions require more than two views, the issue of obtaining and transmitting those additional views becomes even more complex. These issues led to the idea of having a small number of real views and their corresponding depth maps, showing the distance of each object from the viewing plane, which together can be used to generate virtual intermediate views. These virtual synthesized views are generated by moving different objects in the real views by a specific number of pixels based on their distance from the viewing plane. The need for synthesizing virtual views is more pronounced with the introduction of stereoscopic and autostereoscopic (multiview) displays to the consumer market. In this case, as it is not practical to capture all the required views for different multiview display technologies, a limited number of views are captured and the remaining views are synthesized from the available ones. View synthesis is also important in converting existing 2D content to 3D, a development deemed vital for faster adoption of 3D technology. In this thesis, a new hybrid approach for synthesizing views for stereoscopic and multiview applications is presented. This approach utilizes a unique and effective hole-filling method that generates high-quality 3D content. First, we present a new method for view interpolation where missing-texture areas are filled with data from the other available view, together with a unique warping approach that stretches background objects to fill in these areas.
Second, a view extrapolation method is proposed where small areas of the image are filled using nearest-neighbor interpolation and larger areas are filled with the same image warping approach. Subjective evaluations confirm that this approach outperforms current state-of-the-art pixel-interpolation-based as well as warping-based techniques.
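The pixel-shifting step that both abstracts describe (moving objects horizontally in proportion to their distance from the viewing plane) can be sketched as a minimal DIBR forward warp. This is a teaching toy, not the thesis's renderer: the camera parameters are illustrative placeholders and no z-buffering is done, so occlusions are resolved simply by write order; the NaN holes it leaves are exactly what the hole-filling/warping methods above address.

```python
import numpy as np

def dibr_shift(view, depth, baseline=0.05, focal=100.0, z_near=1.0, z_far=10.0):
    """Forward-warp one view into a virtual view: each pixel moves
    horizontally by a disparity proportional to its inverse depth
    (illustrative camera parameters, no occlusion handling)."""
    h, w = view.shape
    z = z_near + (1.0 - depth) * (z_far - z_near)   # depth map: 1 = near
    disparity = np.round(baseline * focal / z).astype(int)
    virtual = np.full((h, w), np.nan)               # NaN marks disocclusions
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                virtual[y, nx] = view[y, x]
    return virtual
```

Near objects shift further than far ones, uncovering background regions behind them; those uncovered NaN pixels are the "missing-texture areas" that the interpolation and background-stretching warping methods fill.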

