Title: 基於虛擬視角的立體影片合成
Virtual-view-based Stereo Video Composition
Author: Chang, Chun-Kai (張鈞凱)
Hang, Hsueh-Ming (杭學鳴)
Department of Electronics Engineering and Institute of Electronics
Keywords: image composition; view synthesis; segmentation; mismatch; depth competition; camera motion
Issue Date: 2012
Abstract: Stereoscopic digital content has been receiving growing attention. New technologies such as Free Viewpoint Television (FTV) and Augmented Reality (AR) rely on arbitrary-view synthesis as their key enabling technique. Many view synthesis algorithms have been proposed; typically, they use multiple images together with their associated depth maps to render an image at a virtual viewpoint, achieving the effect of an arbitrary viewpoint. We adopt this Depth Image-based Rendering (DIBR) approach to produce stereo video content with a substituted background. Given two sets of videos captured by two sets of multiple cameras, we wish to combine these inputs into a new stereo scene consisting of the foreground objects from one input set and the background scene from the other. To this end, we examine the mismatches between the two scenes; this thesis focuses on camera parameter and camera positioning mismatches. Once the user selects a landing point in the background image, we must adjust the relevant camera parameters to synthesize the corresponding virtual view of the background (matching the foreground camera), thereby accomplishing background substitution. This approach greatly increases the freedom of content creation. In contrast to conventional image composition, the above process requires depth (geometry) information: the background scene to be composited must be synthesized from the computed virtual camera parameters. Moreover, to preserve the mutual occlusion relationships among scene objects, depth competition during background substitution is another issue we investigate. When we extend the method from still images to video, camera motion information is needed to compensate for the camera motion mismatch between the two scenes. Experimental results show that we can achieve satisfactory visual quality.
3D video has been gaining popularity recently. In addition to conventional left-right-view 3D pictures, new forms of 3D video such as free viewpoint TV (FTV) and augmented reality (AR) have been introduced. Depth Image-based Rendering (DIBR) is one of the enabling techniques behind these applications; typically, it uses multiple views with depth information to generate an intermediate view at any arbitrary viewpoint. We use DIBR techniques to produce new stereo videos with background substitution. Given two sets of videos captured by two sets of multiple cameras, we would like to combine them to create a new stereo scene with the foreground objects from one set of videos and the background from the other set. In this thesis, we study several mismatch issues between the two scenes, such as camera parameter mismatch and camera orientation mismatch. We propose a floor model to adjust the camera orientation. Once the landing point of the foreground is selected in the background scene, we adjust the background camera parameters (position, etc.) to match the foreground object, which enriches the freedom of composition. In contrast to conventional 2D composition methods, depth information is used in the above calculation; thus, the new background scenes may have to be synthesized from the calculated virtual camera parameters and the given background pictures. The depth competition problem is another issue addressed to maintain the inter-occlusion relationships in the composite scene. When we extend this 3D composition from still pictures to motion pictures, we also need camera movement information: the camera motion is estimated for each scene individually to resolve the camera motion mismatch between the two scenes. Plausible results are demonstrated using the proposed algorithms.
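The DIBR warping at the core of this pipeline can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the thesis implementation: both views are assumed to share one intrinsic matrix `K`, the relative pose (`R_rel`, `t_rel`) is given rather than estimated, and colors are splatted to the nearest pixel with no hole filling. The z-buffer test also shows how depth competition is resolved: when two surfaces land on the same virtual pixel, the nearer one wins.

```python
import numpy as np

def dibr_warp(color, depth, K, R_rel, t_rel):
    """Forward-warp a reference view into a virtual view (minimal DIBR sketch).

    color : (H, W, 3) reference image
    depth : (H, W) per-pixel depth along the reference camera's z-axis
    K     : (3, 3) intrinsic matrix, assumed shared by both views
    R_rel, t_rel : rotation/translation taking reference-camera coordinates
                   to virtual-camera coordinates (an illustrative convention)
    Returns the warped image and its z-buffer (holes keep depth = inf).
    """
    H, W = depth.shape
    warped = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)

    # Homogeneous pixel grid of the reference view.
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)])

    # Back-project to 3D in the reference frame, then move to the virtual frame.
    pts = (np.linalg.inv(K) @ pix) * depth.ravel()
    pts_v = R_rel @ pts + t_rel.reshape(3, 1)

    # Re-project onto the virtual image plane.
    proj = K @ pts_v
    z = proj[2]
    valid = z > 0
    zv = z[valid]
    u2 = np.round(proj[0, valid] / zv).astype(int)
    v2 = np.round(proj[1, valid] / zv).astype(int)
    src = np.flatnonzero(valid)

    inside = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    flat_color = color.reshape(-1, 3)
    for s, u, v, d in zip(src[inside], u2[inside], v2[inside], zv[inside]):
        # Z-buffer test: when two surfaces compete for the same virtual
        # pixel, the nearer one is kept (depth competition).
        if d < zbuf[v, u]:
            zbuf[v, u] = d
            warped[v, u] = flat_color[s]
    return warped, zbuf
```

With an identity pose the virtual view coincides with the reference view, so the warp reproduces the input image; background substitution would run such a warp on the background set with camera parameters adjusted to the chosen landing point.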
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT070050206
http://hdl.handle.net/11536/72284
Appears in Collections:Thesis


Files in This Item:

  1. 020601.pdf
  2. 020602.pdf