In-car Tour Guidance in Outdoor Parks Using Augmented Reality and Omni-vision Techniques with an Automatic Learning Capability
Keywords: outdoor parks; augmented reality; omni-image; automatic learning; in-car tour guidance
Abstract: This study uses vehicle and computer vision techniques to build an outdoor-park tour guidance system that is based on augmented reality (AR) and has an automatic learning capability. With this system, a user can easily construct a tour guidance map for the system's use. Passengers riding in the guidance vehicle receive tour information about the park, such as the names of the buildings around the vehicle along the path. This guidance information is augmented onto the buildings in the image displayed on each in-car passenger's mobile device.
In this study, an augmented-reality (AR) based in-car tour guidance system with an automatic learning capability, designed for outdoor park areas and built on computer vision techniques, is proposed. With the proposed system, a user can construct a tour guidance map for a park area in a simple and clear way, and use the map to provide tour guidance information to in-car passengers. When a passenger rides in a vehicle driven through a park area, he/she receives from the system tour guidance information, mainly the names of the nearby buildings appearing along the guidance path. The building names are augmented onto the passenger-view image displayed on the mobile device held by the passenger.

To implement the proposed system, an environment map is first generated in the learning phase; it includes information about the tour path and the along-path buildings (mainly the building names). All the data are learned either manually or by programs and saved into a database for use in the navigation phase. Secondly, a method is proposed for automatic learning of along-path vertical-line features, mainly the edges of light poles. In this feature-learning stage, a vehicle equipped with a GPS device and a two-camera omni-imaging device is driven along a pre-selected guidance path. At each visited spot on the path, the system analyzes the omni-image pair taken by the upper and lower cameras of the imaging device to detect nearby vertical-line features and, with the aid of the GPS device, computes their positions and heights. The learned features are then added to the map as landmarks for vehicle localization in the navigation phase. Next, a method for vehicle localization is proposed for use by the system.
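The position-and-height computation for a vertical-line feature from an omni-image pair can be sketched as a two-camera triangulation: the upper and lower cameras see the feature's top point at different elevation angles, and the known vertical separation between the cameras fixes the horizontal distance. The function name, parameters, and geometry below are illustrative assumptions, not the thesis's exact derivation:

```python
import math

def locate_vertical_feature(alpha_u, alpha_l, cam_sep, cam_height_l):
    """Triangulate a vertical-line feature (e.g., a light-pole edge) seen
    by a two-camera omni-imaging device mounted on the vehicle.

    alpha_u, alpha_l: elevation angles (radians) of the feature's top
        point as seen from the upper and lower cameras, respectively.
    cam_sep: vertical separation between the two camera centers (m).
    cam_height_l: height of the lower camera above the ground (m).

    Returns (horizontal_distance, feature_height) in meters.
    """
    # With the feature top at height h and horizontal distance d:
    #   tan(alpha_l) = (h - h_lower) / d
    #   tan(alpha_u) = (h - h_lower - cam_sep) / d
    # Subtracting the two relations gives d directly:
    d = cam_sep / (math.tan(alpha_l) - math.tan(alpha_u))
    # Back-substitute to recover the feature height above the ground.
    h = cam_height_l + d * math.tan(alpha_l)
    return d, h
```

The GPS reading at the visited spot would then translate this vehicle-relative (distance, height) pair into an absolute map position for the landmark.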
The localization method analyzes the omni-image taken by the upper camera to detect the learned features, using both the learned information about them and the GPS device, and then computes the vehicle position from the geometric relation between the features and the vehicle. Finally, a method for AR-based guidance is proposed. It first generates a passenger-view image by transforming the omni-image acquired from the upper omni-camera onto the screen of the user's mobile device, and then augments the building names onto this image before it is displayed. To accomplish this, the system computes the position of each building on the passenger-view image using the vehicle localization result. Good experimental results are presented to show the feasibility of the proposed methods for real applications.
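The final augmentation step amounts to projecting each building's known map position onto the passenger-view image using the localized vehicle pose. A minimal sketch, assuming a linear angle-to-column mapping over the passenger view's horizontal field of view (the names and FOV model are assumptions, not the thesis's exact formulation):

```python
import math

def building_screen_x(vehicle_xy, heading, building_xy, img_width, hfov):
    """Map a building's map position to a pixel column on the
    passenger-view image, given the localized vehicle pose.

    vehicle_xy, building_xy: (x, y) map coordinates in meters.
    heading: viewing direction of the passenger view (radians).
    img_width: passenger-view image width in pixels.
    hfov: horizontal field of view of the passenger view (radians).

    Returns the pixel column for the building-name label, or None if
    the building lies outside the current view.
    """
    dx = building_xy[0] - vehicle_xy[0]
    dy = building_xy[1] - vehicle_xy[1]
    # Bearing of the building relative to the viewing direction,
    # normalized into (-pi, pi].
    rel = math.atan2(dy, dx) - heading
    rel = math.atan2(math.sin(rel), math.cos(rel))
    if abs(rel) > hfov / 2:
        return None  # building not visible in the passenger view
    # Linear mapping: center of view -> center column; positive
    # (counterclockwise) bearings move the label toward the left edge.
    return (0.5 - rel / hfov) * img_width
```

In the actual system the vertical placement would follow from the building's distance and the passenger-view projection, but the horizontal mapping above captures the core use of the localization result.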