Title: People Localization Using Multiple Cameras
Authors: Lo, Kuo-Hua (羅國華); Chuang, Jen-Hui (莊仁輝); Chen, Hua-Tsung (陳華總)
Institute of Computer Science and Engineering
Keywords: vanishing point; 2D/3D line sampling; multi-camera; people localization; real-time
Issue Date: 2013
Abstract:
Occlusion has been an important and challenging problem in vision-based people localization and tracking. To handle it, this thesis proposes several multi-camera people localization methods. Some existing methods check for the presence of people at reference planes of different heights by projecting image foreground from multiple views onto these planes; such approaches deal with occlusion better than using only a single reference plane, but their computational cost grows rapidly with the number of reference planes and camera views. To reduce the cost of image projection, we first propose a sample line-based method. For each person in each view, the method estimates 2D line samples that emanate from the vanishing point of lines perpendicular to the ground plane; localization on each reference plane then reduces to computing intersections of these line samples, so the computation of previous projection-based work is greatly reduced. The resulting intersection points are analyzed and linked across planes to form 3D line samples, which, after a quality evaluation that discards unsuitable ones, are grouped by a clustering algorithm; people locations are then derived by integrating the 3D line samples within each group. Because the above method still requires considerable computation to reconstruct 3D line samples, we propose a second, non-reconstruction-based method that avoids projecting all foreground pixels onto multiple reference planes. In particular, a footstep analysis first finds potential people locations, and 3D line samples are then generated to verify them. This method yields a significant improvement in computational efficiency, with people's heights estimated as a by-product.
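The core geometric step, intersecting 2D line samples that have been transferred to a common reference plane, can be sketched in a few lines of homogeneous-coordinate algebra. This is a minimal illustration, not the thesis implementation: the vanishing points, body pixels, and plane homographies below are hypothetical values chosen so that both views observe a person standing at ground position (2, 3).

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line joining two 2D points (cross product of the points).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def transfer_line(H, l):
    # If points map as x_plane = H @ x_image, lines map by the
    # inverse-transpose: l_plane = H^{-T} @ l_image.
    return np.linalg.inv(H).T @ l

def intersect(l1, l2):
    # Intersection of two homogeneous lines, dehomogenized to (x, y).
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# --- hypothetical two-view setup (illustrative numbers only) ---
# View 1: image-to-plane homography is the identity.
H1 = np.eye(3)
# View 2: image coordinates are the plane rotated by 90 degrees.
H2 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])

# Each 2D line sample joins the view's vanishing point of vertical
# lines with a foreground pixel on the person's body.
vp1, body1 = (2.0, 100.0), (2.0, 3.0)   # view-1 pixels
vp2, body2 = (3.0, 50.0), (3.0, -7.0)   # view-2 pixels

l1 = transfer_line(H1, line_through(vp1, body1))
l2 = transfer_line(H2, line_through(vp2, body2))
print(intersect(l1, l2))   # recovered ground position of the person
```

Repeating this intersection for line samples transferred to reference planes of several heights is what replaces the per-pixel foreground projection of earlier approaches.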
We further propose a third method that improves the first with (i) a new reconstruction, based on the intersection of two vertical triangles, together with refinement procedures for possible 3D (vertical) line samples of the human body, and (ii) two new geometric rules, associated with the head level of a person, for screening these samples. While (i) reconstructs a 3D line sample directly and efficiently, both offer valuable improvements in localization performance, in terms of precision and recall, with (ii) also saving the computation otherwise spent on invalid samples. In addition, we propose a view-invariant correspondence measure for 2D line segments in two different views; this quantitative measure can handle line segments of arbitrary configuration in the 3D scene. Applying the measure further improves the efficiency of people localization without sacrificing localization correctness. Finally, possibilities of using the correspondence of line samples and the difference between a pair of viewing angles to reduce people localization error are studied, with some promising results obtained.
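The vertical-triangle intersection in (i) can be sketched as follows: each 2D line sample, back-projected through its camera center, spans a vertical plane (the supporting plane of the triangle), and two such planes from different views meet in a vertical 3D line sample. The camera positions and the person's ground point below are hypothetical illustrative values; in the method itself the planes are derived from image line samples rather than given directly.

```python
import numpy as np

def vertical_plane(cam_center, foot_point):
    # Plane through the camera center containing the vertical 3D line
    # rising from foot_point on the ground; this is the back-projection
    # of a 2D line sample through the vanishing point of verticals.
    up = np.array([0.0, 0.0, 1.0])
    v = np.asarray(foot_point, float) - np.asarray(cam_center, float)
    n = np.cross(v, up)
    return n, n @ np.asarray(cam_center, float)   # plane: n . x = d

def plane_intersection(p1, p2):
    # 3D line (point, unit direction) where two non-parallel planes meet.
    (n1, d1), (n2, d2) = p1, p2
    direction = np.cross(n1, n2)
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, [d1, d2, 0.0])
    return point, direction / np.linalg.norm(direction)

# Hypothetical two-camera setup; the person stands at (2, 3) on the ground.
c1, c2 = (0.0, 0.0, 1.5), (10.0, 0.0, 1.5)
pl1 = vertical_plane(c1, (2.0, 3.0, 0.0))
pl2 = vertical_plane(c2, (2.0, 3.0, 0.0))
point, direction = plane_intersection(pl1, pl2)
print(point, direction)   # vertical 3D line sample through (2, 3, 0)
```

Note that the construction degenerates when the two camera centers and the person are collinear in the ground plane (the two planes coincide), which is one reason screening rules such as those in (ii) are useful for rejecting unreliable samples.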
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT079555819
http://hdl.handle.net/11536/41425
Appears in Collections:Thesis


Files in This Item:

  1. 581901.pdf