Indoor AR-based Multi-user Navigation for Merchandise Shopping Using Down-looking Omni-cameras
Keywords: indoor navigation; augmented reality; user localization; omni-vision; fisheye camera; mobile device
This study also proposes a method for merchandise recognition and guidance based on the speeded-up robust features (SURF) algorithm. The client side sends the image captured on the mobile device to the server side, which matches it against a pre-constructed merchandise information database for recognition. Finally, the server side transmits the guidance information to the client side on the mobile device, including the merchandise information, the localization result, the locations of the surrounding environment, and the path to the searched merchandise item, so that the client side can overlay the guidance information onto the corresponding real objects in the mobile-device image to provide an AR navigation interface.
When people enter an unfamiliar indoor environment, such as a shopping mall, supermarket, or grocery store, they generally have to rely on staff members to guide them to the locations of desired merchandise items. In this study, an indoor multi-user navigation system based on augmented reality (AR) and computer vision techniques, using a mobile device such as an HTC Flyer, is proposed. At first, an indoor vision infrastructure is set up by attaching fisheye cameras to the ceiling of the navigation environment. The locations and orientations of multiple users are detected by a remote server-side system from the images acquired by the fisheye cameras, and the analysis results are sent to the client-side system on each user's mobile device. Meanwhile, the server-side system also analyzes the acquired images to recognize merchandise items and sends the information about the surrounding environment and the merchandise items, as well as the navigation path, to the client-side system. The client-side system then displays this information on the mobile device in an AR manner, providing clear guidance for each user to navigate. For multi-user identification, a method is proposed in which a multicolor edge mark is attached to the top of each user's mobile device; the server-side system analyzes each consecutive image frame captured by the closest fisheye camera and classifies the edge mark according to its color pattern to obtain the identification number of each user. For multi-user localization, a method is proposed to analyze the omni-images captured by the fisheye cameras and detect human activities in the environment: the server-side system separates the foreground from the background in each image and detects the location of each user. Furthermore, three techniques are proposed and integrated to detect user orientations effectively. The first technique is analysis of user motions in consecutive images.
The second is utilization of the orientation sensor on the user's mobile device. The last is estimation of the direction of the multicolor edge mark attached to the top of the mobile device using the omni-image. For AR-based merchandise guidance, the client-side system sends the image captured by the client device's camera to the server-side system. The server-side system then analyzes it with the SURF algorithm, matches the resulting features against a pre-constructed merchandise image database, and transmits the corresponding information to the client-side system for display in an AR manner. Also, a path-planning technique is used to generate a collision-free path from the user's current position to a selected merchandise item via the use of an environment map. Finally, the navigation and merchandise information is overlaid onto the images shown on the mobile devices. In this way, the system accomplishes the AR functions and provides convenient guidance for merchandise shopping and other similar activities. Good experimental results show the feasibility of the proposed system and methods.
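The multi-user identification step above (classifying a multicolor edge mark by its color pattern to obtain a user ID) can be sketched as a lookup over cyclic shifts, since the mark may appear at any rotation in the omni-image. The color names and the ID table here are hypothetical, not the thesis's actual coding scheme.

```python
# Hypothetical pattern table: each color sequence encodes one user ID.
MARK_PATTERNS = {
    ("red", "green", "blue"): 1,
    ("red", "blue", "yellow"): 2,
    ("green", "yellow", "red"): 3,
}

def classify_edge_mark(observed_colors):
    """Map the color pattern read from a user's edge mark to a user ID.

    The mark may be seen at any rotation in the omni-image, so every
    cyclic shift of the observed sequence is tried against the table.
    """
    n = len(observed_colors)
    for shift in range(n):
        rotated = tuple(observed_colors[shift:] + observed_colors[:shift])
        if rotated in MARK_PATTERNS:
            return MARK_PATTERNS[rotated]
    return None  # unknown mark
```

Making the table rotation-invariant in this way means the server need not know which side of the mark faces the camera, at the cost of requiring that no pattern be a cyclic shift of another.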
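The collision-free path planning described above can be illustrated with a breadth-first search over an occupancy-grid environment map. This is a minimal sketch under the assumption that the map is a grid of free and blocked cells; the thesis's actual planner and map representation may differ.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free path on an
    occupancy grid (0 = free cell, 1 = obstacle), from the user's
    current cell to the selected merchandise item's cell.

    Returns the path as a list of (row, col) cells, or None if the
    goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the back-pointers to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable
```

Because BFS expands cells in order of distance from the start, the first time the goal is dequeued the reconstructed path is a shortest one; the cell path would then be projected onto the mobile-device image for the AR overlay.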