Title: The Design and Implementation of Interface Interfacing Generator for Integrating and Bridging Front-End Recognizers and Back-End Application Software Systems
Author: CHEN DENG-JYI (陳登吉)
Department of Computer Science, National Chiao Tung University
Keywords: See-through interface; Parser generator; Interface generator; Recognizers; Speech recognizers; Software engineering
Date of Publication: 2009
Abstract: According to [31], nearly 80% of the program code in most application software systems is related to the interface system. The cost of the interface system is an unavoidable issue in application development, and users often judge an application by the quality of its human-machine interface. How to develop or modify interface systems quickly and simply is therefore an important research topic. In general, interface systems fall into two categories: human-machine interface systems, and integration interfaces between application software systems. The former is the more commonly studied topic, and we developed a user-interface generator for it in our previous three-year NSC project; the latter emphasizes interface systems designed for integrating two kinds of application software. This proposal lays out a three-year project targeting the latter, aiming at a flexible and easily extensible Interface Interfacing System architecture (see Figure 1) that gives software developers a more flexible and maintainable way to build interface systems for their applications. Upon completion, the project will provide a simple bridge that helps system developers integrate two kinds of application software (a front-end recognizer and a back-end GUI application) into an application system that uses the front-end recognizer as its human-machine interface. Combined with our previous NSC project, it forms a seamless, complete solution for software interface systems and will be an important contribution to software interfacing.

The figure below shows the interface interfacing system that integrates the Windows Hearts card game (傷心小棧) as the back-end application with a front-end speech recognition application. Through this interface system, a software developer can more easily build a voice-controlled version of the Hearts game; in other words, the existing Hearts software can be operated through a speech-based human-machine interface. Other applications can be voice-controlled in the same way. Compared with traditional approaches, this interface interfacing system makes interface construction faster and less error-prone, and it reduces the maintenance burden on interface programmers.

Figure 1. The proposed Interface Interfacing System

We have already conducted a pilot study of this Interface Interfacing System; the preliminary results are described in Appendix 1, and part of the work has been published in the Journal of Information Science and Engineering. Building on these results, this project re-plans and implements the complete architecture and system.

In the first year, we will re-plan the preliminary results and define the system architecture, including the front-end voice command language, its parser, and the back-end application interface integration module. We will build the basic framework, define the required script language and GUI modules, and test first with the mouse; we will then provide a systematic method for combining speech recognition with the mouse module so that applications can be operated by voice-driven mouse interaction.

In the second year, we will adapt the conceptual architecture developed on the PC to handheld devices (e.g., PDAs and smart phones), focusing on the design and implementation of (a) an application interface loader that loads a Java application on the PC and handles control commands coming from the phone-side interface program, and (b) an interface generator for phone-side Java programs that produces the interface configuration of the phone-side Java application so it can remotely control the same application on the PC. The front-end recognizer is replaced by remote control, and the back-end application is first validated in the PC environment before being ported to the handheld environment. Using a Java MIDlet application as an example, applications are remotely controlled from a smart phone. Figure 2 shows the architecture of the Remote Interfacing System and its modules.

Figure 2. Remote Control Interfacing System Overview

In the third year, we will apply the results of the first two years to multimedia content authoring and to different multimedia playback platforms, for example a mobile multimedia name-card template editor and player, and an e-card template editor and player. Finally, we will consider synchronization techniques that allow the authored multimedia content to be played on large LCD and LED platforms, and we will use these applications to validate the benefits of the proposed interface integration system. For details, see Section 12 (research plan).
It has been shown that the major effort in the design and implementation of application software systems goes into the user interface (UI) [31], also called the human-machine interface (HMI). If UIs can be developed quickly, development time for application software systems drops substantially; many researchers have therefore sought better solutions to help UI designers create UI systems. In general, there are two kinds of interface system: the human-machine interface, and the interface that bridges separate application software systems into one. The former concerns GUI design and implementation for application software. The latter concerns integrating a recognizer and an application program into a new application that uses the recognizer as its front end. In this proposal we lay out a three-year integration project that focuses on the latter interface technology, called the generic Interface Interfacing system; Figure 1 depicts its components.

Figure 1. The proposed Interface Interfacing System

Application systems that utilize recognition technologies such as speech, gesture, and color recognition provide human-machine interfacing to users who are physically unable to interact with computers through traditional input devices such as the mouse and keyboard. Current solutions, however, are ad hoc and lack a generic, systematic way of interfacing application systems with recognizers. The common approach is to interface with recognizers through low-level programmed wrappers that are application dependent and require detailed system design and programming knowledge both to perform the interfacing and to modify it. A generic, systematic approach to bridging the interface between recognizers and application systems is therefore needed.
In the first year of this integration project, we propose a generic, visual interfacing framework that bridges application systems and recognizers through the application system's front end. The interfacing is performed at the visual level, without requiring detailed knowledge of the application's system design or implementation; modifications to an interfacing environment can be made on the fly; and, more importantly, third-party applications can be interfaced without access to their source code. Specifically, an interfacing script language for building the interfacing framework is designed and implemented. The framework uses a see-through grid layout mechanism to position the graphical user interface icons defined in the interfaced application system. The framework is then used to bridge the visual interface commands defined in application systems to the voice commands trained in speech recognizers. The proposed system realizes the vision of interface interfacing by providing a see-through grid layout together with a visual interfacing script language with which users perform the interfacing process. Moreover, the method can be applied to commercial applications without access to their internal code, and it allows macros to be composed that reduce interaction overhead by automating tasks. Figure 1 also shows an example: after integration using the proposed approach, a Windows card game (Hearts) or an authoring system can be operated through a speech recognizer running under Windows.
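The see-through interfacing idea described above can be sketched as a small program: a transparent grid is laid over the target application's window, an interfacing script binds trained voice commands to grid cells, and each recognized command is converted into a synthetic mouse click at the cell's centre. This is an illustrative sketch only; the script syntax and all names here are assumptions for exposition, not the project's actual interfacing language (and the project's implementation targets Java, not Python).

```python
# Transparent grid over the target window: a recognized voice command is
# looked up in the interfacing script and mapped to a click coordinate.
# Window geometry, grid size, and script syntax are illustrative assumptions.

WINDOW = (0, 0, 800, 600)          # x, y, width, height of the target window
ROWS, COLS = 6, 8                  # see-through grid resolution

# A toy interfacing script: "voice command -> grid cell (row, col)".
SCRIPT = """
play   2 3
pass   0 7
quit   5 0
"""

def parse_script(text):
    """Parse the toy script into {command: (row, col)}."""
    bindings = {}
    for line in text.strip().splitlines():
        name, row, col = line.split()
        bindings[name] = (int(row), int(col))
    return bindings

def cell_centre(row, col, window=WINDOW, rows=ROWS, cols=COLS):
    """Pixel coordinates of the centre of a grid cell over the window."""
    x0, y0, w, h = window
    return (x0 + (col + 0.5) * w / cols, y0 + (row + 0.5) * h / rows)

def dispatch(command, bindings):
    """Map a recognized voice command to a click point, or None if unbound."""
    if command not in bindings:
        return None
    return cell_centre(*bindings[command])

bindings = parse_script(SCRIPT)
print(dispatch("play", bindings))   # → (350.0, 250.0)
print(dispatch("undo", bindings))   # unbound command → None
```

Because the binding lives in a script rather than in compiled wrapper code, retargeting the same recognizer to another application only means writing a new script, which is the maintenance advantage the proposal claims.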
The main contributions of the interface interfacing system are: 1) reasonable productivity: system developers do not need to trace low-level code (no detailed system design or programming knowledge is required) when integrating a recognizer with application software; 2) low maintenance effort: modifications to an interfacing environment can be made on the fly; and 3) good flexibility: third-party applications can be interfaced without access to their source code. In the second-year project, we carry the first-year concept over to handheld environments such as PDAs and smart phones. Here, the remote-control capability of the smart phone serves as the front-end recognizer, and a Java program serves as the back-end application software; Java is chosen as the implementation language for its adaptability across heterogeneous platforms. Specifically, we will propose an interface generator, similar in concept to a parser generator, that automatically generates remote-control programs for a specific multimedia application on the smart phone. With this generator, the designer does not need to write the textual remote-control programs on the phone, which simplifies development and makes the control system more flexible to build and modify. Figure 2 depicts the components: after the back-end application in the PC environment has been integrated with the remote-control module using the proposed approach, the interface generator (the interface interfacing system) performs code generation, producing a Java MIDlet application. The remote-control module can of course be replaced by a Wii-like recognizer if needed.
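The parser-generator analogy above can be made concrete with a small sketch: a generator takes a declarative table of an application's remote-control commands and emits the phone-side control program, so the designer never writes it by hand. This is a hypothetical illustration, not the project's actual generator; the command table, the emitted class shape, and the `send` helper it assumes are all invented names, and the generator itself is written in Python for brevity even though it emits a skeletal Java MIDlet.

```python
# A toy "interface generator": from a declarative command table, emit the
# source of a phone-side remote-control program (a skeletal Java MIDlet).
# COMMANDS, the template, and the generated send() call are illustrative.

COMMANDS = [("next", "KEY_RIGHT"), ("prev", "KEY_LEFT"), ("play", "KEY_FIRE")]

TEMPLATE = """public class {name}Remote extends javax.microedition.midlet.MIDlet {{
{methods}    public void startApp() {{}}
    public void pauseApp() {{}}
    public void destroyApp(boolean u) {{}}
}}"""

METHOD = ('    // send the "{cmd}" command to the back-end application on the PC\n'
          '    void {cmd}() {{ send("{code}"); }}\n')

def generate(name, commands):
    """Emit phone-side remote-control source for the given command table."""
    methods = "".join(METHOD.format(cmd=c, code=k) for c, k in commands)
    return TEMPLATE.format(name=name, methods=methods)

src = generate("Slideshow", COMMANDS)
print(src)   # prints the generated SlideshowRemote MIDlet source
```

As with a parser generator, changing the control vocabulary means editing the command table and regenerating, rather than hand-modifying control code per application.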
Figure 2. Remote Control Interfacing System Overview

With the rapid advance of technology, the screens of digital TVs and mobile systems have become increasingly refined and can present fine, vivid multimedia content. Most multimedia content, such as advertisements, motion pictures, and messages, can be displayed on many kinds of platform. If users can employ simple instruments (such as a smart phone or PDA) to communicate remotely with the multimedia application module in a display device (such as a PC monitor or digital TV), the control becomes lively and interesting. But there are various control instruments, various display devices, and different kinds of control method. To write the control program, or partially modify the control features, for the multimedia application module in a display device, one must know the source code of the multimedia application being remotely controlled and custom-design a set of remote-control programs for each multimedia application. With many multimedia applications, a custom design for each becomes time-consuming and inefficient. Once the interface interfacing system has been built for both the PC environment (the first-year project) and the smart phone environment (the second-year project), we are ready to author various kinds of multimedia presentation, such as a mobile name-card template system or an e-card presentation, and to use a smart phone to remotely synchronize the presentation on large LCD and LED displays. This is the major effort of the third year. A more detailed elaboration of this part is given in Section 12.
Official Document #: NSC97-2221-E009-062-MY3
URI: http://hdl.handle.net/11536/101085
https://www.grb.gov.tw/search/planDetail?id=1751756&docId=298592
Appears in Collections: Research Plans


Files in This Item:

  1. 972221E009062MY3(第1年).PDF
  2. 972221E009062MY3(第2年).PDF
  3. 972221E009062MY3(第3年).PDF