Full metadata record
DC Field | Value | Language
dc.contributor.author | 蔡長廷 | en_US
dc.contributor.author | Tsai, Chang-Ting | en_US
dc.contributor.author | 杭學鳴 | en_US
dc.contributor.author | Hang, Hsueh-Ming | en_US
dc.date.accessioned | 2014-12-12T02:34:32Z | -
dc.date.available | 2014-12-12T02:34:32Z | -
dc.date.issued | 2012 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070050213 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/72271 | -
dc.description.abstract | Over the past decade or more, extensive research has been done on 2D image quality assessment. With the rapid development of 3D stereoscopic video, mathematical models that can assess 3D image quality have become urgently needed tools. The ISO/IEC Moving Picture Experts Group (MPEG) is standardizing 3DVC (3D Video Coding), which enables the decoder to synthesize free-viewpoint color images from the decoded color images and their corresponding depth maps. How to effectively measure the quality of virtual views is thus one of the key topics in 3D image quality research. Most existing 3D quality assessment models directly apply 2D methods to predict human subjective 3D perception. However, for 3D free-viewpoint synthesized images, distortion in the depth maps or color images at the decoder produces artifacts in the synthesized image such as object shift, ghosting, and unnatural transitions at foreground-background boundaries. These artifacts are quite different from conventional 2D noise, so it is inappropriate to evaluate the quality of 3D synthesized images with 2D quality assessment models. This thesis presents a subjective quality database of images synthesized from distorted depth maps and builds a mathematical model to evaluate these synthesized images. The model has two parts: the IQS (Image Quality Score), which measures the color quality of the shift-compensated content, and the ESD (Edge Structural Distortion), which uses the Hausdorff distance to quantify the degree of ghosting. Experimental results show that our 3D image quality model predicts human subjective results better than conventional 2D models. This thesis also builds a subjective quality database of free-viewpoint images synthesized from distorted color images, mainly to investigate how the order of the distortion and synthesis steps affects the synthesized image. We find experimentally that, for Gaussian noise, distorting before synthesis yields higher 2D quality scores and better subjective perception than synthesizing before distorting; in addition, if the color image suffers blur distortion, synthesizing it with an undistorted depth map produces unnatural artifacts at the foreground-background boundaries of the synthesized image. | zh_TW
dc.description.abstract | 2D image/video quality assessment has been studied extensively over the past decades. With the growing popularity of 3D video, quality assessment methods for 3D content have drawn increasing attention as well. The ISO/IEC Moving Picture Experts Group (MPEG) is in the process of specifying the 3D video coding (3DVC) standard based on the multiple-view plus depth (MVD) format. With the standardization of 3D virtual-view systems, how to predict the quality of synthesized views becomes an important issue. Most existing 3D image quality metrics use conventional 2D image quality assessment (IQA) models to predict 3D subjective quality. But in a free-viewpoint television (FTV) system, depth map or color image errors often produce novel artifacts on the synthesized pictures, such as object shift, ghost artifacts, and sticker artifacts, owing to the use of the Depth Image Based Rendering (DIBR) technique. These artifacts are very different from ordinary 2D distortions such as blur, Gaussian noise, and compression errors, and pixel-based 2D IQA metrics are overly sensitive to them. Thus, we describe a 3D database with depth map errors and propose an objective image QA model for depth map distortion. The proposed algorithm evaluates two scores, the Image Quality Score (IQS) and the Edge Structural Distortion (ESD). IQS computes the 2D color quality of the synthesized image with object shift compensation; ESD estimates the degree of structural error by applying the Hausdorff distance. The final score of the proposed model is obtained by combining IQS and ESD in the pooling stage. The experimental results show that the proposed method improves the correlation of the objective quality score with the 3D subjective scores. We also describe a 3D database with color distortion. It contains two sets of view-synthesized images: Distortion-Synthesis (D-S) images and Synthesis-Distortion (S-D) images. The key difference between these two kinds of images is whether the distortion is applied before or after the rendering process. In our collected data, the SSIM scores of the D-S images with Gaussian noise are much higher than those of the S-D images; that is, the view synthesis process can mask Gaussian noise distortion. The D-S images with Gaussian blur produce sticker artifacts around the boundaries between objects at different depths. | en_US
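The ESD score in the abstract quantifies edge-structure error with the Hausdorff distance between edge-pixel sets of the reference and synthesized images. As an illustration only (the function name and the brute-force NumPy formulation below are our own sketch, not the thesis implementation), the symmetric Hausdorff distance between two 2D point sets can be computed as:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2D point sets.

    `a`, `b`: iterables of (row, col) coordinates, e.g. edge pixels
    extracted from the reference and the synthesized image.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances between every point in `a` and `b`.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: the farthest nearest-neighbor in each direction.
    h_ab = d.min(axis=1).max()
    h_ba = d.min(axis=0).max()
    return max(h_ab, h_ba)

# Identical edge sets give 0; a rigidly shifted set gives the shift size,
# which is why this distance responds to object-shift/ghosting artifacts.
edges_ref = [(0, 0), (0, 1), (0, 2)]
edges_syn = [(3, 0), (3, 1), (3, 2)]
print(hausdorff_distance(edges_ref, edges_syn))  # → 3.0
```

The brute-force pairwise matrix is O(N·M) in memory; for full-resolution edge maps a KD-tree nearest-neighbor query would be the more practical choice.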
dc.language.iso | en_US | en_US
dc.subject | Synthesized view | zh_TW
dc.subject | Quality assessment | zh_TW
dc.subject | 3D video coding | en_US
dc.subject | Synthesized view | en_US
dc.subject | Quality assessment | en_US
dc.title | Quality Assessment of 3D Synthesized Views with Depth or Color Distortion | zh_TW
dc.title | Quality Assessment of 3D Synthesized View with Depth or Color Distortion | en_US
dc.type | Thesis | en_US
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | zh_TW
Appears in Collections:Thesis
Files in This Item:

  1. 021301.pdf