Friday, December 28, 2012

Multi-Touch Software Architecture:
Once the hardware is complete, another equally important task remains: organizing the raw input image data into processed data suitable for gesture detection and, ultimately, gesture implementation. This requires multi-touch software. A generic architecture for multi-touch software can be detailed as follows:



This framework establishes a link between the hardware input and gesture recognition. Multi-touch programming is a two-fold process: reading and translating the "blob" input from the camera or other input device, and relaying this information through pre-defined protocols to frameworks that assemble the raw blob data into gestures which a high-level language can then use to interact with an application.
1.      Input Hardware layer: This is the lowest layer of the framework. It gathers raw input from the hardware in the form of video or electrical signals; the data may come from an optical sensor or a physical mouse.
2.      Hardware Abstraction layer: At this layer the raw input is run through image processing to generate a stream of finger, hand, or object positions.
3.      Transformation layer: This layer converts image coordinates into screen coordinates. The raw data is calibrated here and made ready for interpretation by the next layer. Calibrating the touchscreen means the software learns which blob position on the touch sensor (the image coordinates) corresponds to which spot on the touchscreen.
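The calibration step above can be sketched as a simple coordinate mapping. The function below is a minimal illustration in Python; the names and the assumption of an axis-aligned, linear mapping (no rotation or lens distortion) are mine, not taken from any particular framework:

```python
def make_calibration(img_tl, img_br, scr_w, scr_h):
    """Return a function mapping camera-image coordinates to screen
    coordinates, assuming a simple axis-aligned linear calibration
    (no rotation, skew, or lens distortion)."""
    ix0, iy0 = img_tl  # image coords of the screen's top-left corner
    ix1, iy1 = img_br  # image coords of the screen's bottom-right corner

    def to_screen(ix, iy):
        sx = (ix - ix0) / (ix1 - ix0) * scr_w
        sy = (iy - iy0) / (iy1 - iy0) * scr_h
        return sx, sy

    return to_screen

# Example: the camera sees the touch surface between (40, 30) and
# (600, 450), and the display is 1920x1080.
to_screen = make_calibration((40, 30), (600, 450), 1920, 1080)
print(to_screen(320, 240))  # → (960.0, 540.0)
```

Real calibration routines typically sample several reference touches and fit a full affine or perspective transform; the idea, however, is the same.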

The combined process in layers 2 and 3 is called blob tracking. For blob tracking, raw data packets are sent to the server according to a set of rules known as the TUIO protocol.
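A minimal sketch of blob tracking, assuming the frame arrives as a grayscale list of pixel rows: bright connected regions are flood-filled, their centroids extracted, and each centroid packed following the argument layout of a TUIO 1.1 `/tuio/2Dcur` "set" message. This builds only the message contents; a real tracker would also encode them as OSC packets and match session IDs across frames:

```python
def find_blobs(frame, threshold=128):
    """Label bright connected regions in a grayscale frame (a list of
    pixel rows) and return each blob's centroid in image coordinates."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one 4-connected region.
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px + 1, py), (px - 1, py),
                                   (px, py + 1), (px, py - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cx, cy))
    return blobs

def tuio_set_args(blobs, w, h):
    """Pack centroids with the argument layout of a TUIO 1.1
    /tuio/2Dcur 'set' message: (session_id, x, y, X, Y, m), positions
    normalised to [0, 1]. The velocities X, Y and acceleration m are
    zeroed here; a real tracker derives them from previous frames."""
    return [("/tuio/2Dcur", "set", sid, cx / w, cy / h, 0.0, 0.0, 0.0)
            for sid, (cx, cy) in enumerate(blobs)]
```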
4.      Interpretation layer: The interpretation layer already has knowledge of the regions on the screen. When the calibrated data reaches this layer, it assigns a meaning to each gesture performed on the screen; this process is known as gesture recognition. A gesture is defined by its starting point, its end point, and the motion between the two. A multi-touch device should be capable of detecting combinations of such gestures. Gesture recognition can be subdivided into three steps:
·         Detection of Intention: The first step confirms whether the touch falls within the application window; only touches inside the window need to be deciphered. The TUIO protocol then relays the touch event to the server so the target application can be identified.
·         Gesture Segmentation: The touch events are partitioned into segments according to the object of intention.
·         Gesture Classification: The segmented data is mapped to its correct command.
The various models available for gesture recognition include Hidden Markov Models, Artificial Neural Networks, Finite State Machines, etc.
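As a toy stand-in for those models, even a simple rule-based classifier can separate the most common two-finger gestures by how the distance and angle between the fingers change from the start point to the end point. The function name and thresholds below are illustrative assumptions:

```python
import math

def classify_two_finger_gesture(start, end, dist_tol=0.1, angle_tol=0.15):
    """Heuristic two-finger gesture classifier (a toy stand-in for the
    HMM/ANN/FSM approaches). 'start' and 'end' each hold the two (x, y)
    touch points at the beginning and end of the gesture."""
    (a0, b0), (a1, b1) = start, end
    d0, d1 = math.dist(a0, b0), math.dist(a1, b1)
    ang0 = math.atan2(b0[1] - a0[1], b0[0] - a0[0])
    ang1 = math.atan2(b1[1] - a1[1], b1[0] - a1[0])
    if abs(d1 - d0) / d0 > dist_tol:        # inter-finger distance changed
        return "pinch-in" if d1 < d0 else "pinch-out"
    if abs(ang1 - ang0) > angle_tol:        # inter-finger angle changed
        return "rotate"
    return "translate"                      # both fingers moved together
```

For example, fingers starting at (0, 0) and (1, 0) and ending at (0, 0) and (2, 0) classify as a pinch-out. The statistical models listed above generalise this idea to arbitrary, noisy gesture trajectories.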
5.      Widget layer: The widget layer generates the visible output for the user; this is the layer at which the user actually experiences the touch interface. The most popular multi-touch gestures are scaling, rotating, and translating images with two fingers, along with many other innovative and interesting gestures.
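The scale/rotate/translate manipulation mentioned here can be derived directly from the start and end positions of the two touch points. This sketch (the function name and the midpoint convention are my assumptions) computes the transform a widget layer would apply to the image:

```python
import math

def two_finger_transform(p1s, p2s, p1e, p2e):
    """Derive the scale factor, rotation (radians), and translation
    implied by a two-finger manipulation, from each finger's start (s)
    and end (e) position."""
    scale = math.dist(p1e, p2e) / math.dist(p1s, p2s)
    rotation = (math.atan2(p2e[1] - p1e[1], p2e[0] - p1e[0])
                - math.atan2(p2s[1] - p1s[1], p2s[0] - p1s[0]))
    # Translate by the movement of the midpoint between the two fingers.
    tx = (p1e[0] + p2e[0]) / 2 - (p1s[0] + p2s[0]) / 2
    ty = (p1e[1] + p2e[1]) / 2 - (p1s[1] + p2s[1]) / 2
    return scale, rotation, (tx, ty)
```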

·         Sensor-based multi-touch technology
Multi-touch systems are also designed around various other sensing technologies, such as:
o   Resistance-based touch surfaces
o   Capacitance-based touch surfaces
o   Surface acoustic wave (SAW) touch surfaces

It is to be noted that these technologies cater to high-precision design and accuracy, and hence comply with industry standards. Since resistive and capacitive touchscreens have already been discussed, let us have a glance at the SAW technique. In SAW hardware, transmitting and receiving piezoelectric transducers are mounted on a glass faceplate for the detection of the X and Y coordinates. Ultrasonic waves are created on the surface and directed by reflectors. Whenever a touch is made, part of the wave energy is absorbed; the change in received wave intensity is observed and interpreted by the processor to calculate the position of interaction.
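The last step, locating the touch from the dip in received wave intensity, can be illustrated with a toy calculation. Assuming the sampled burst maps linearly onto one screen axis (a deliberate simplification of real SAW controllers, which also filter noise and process both axes), the sample index where the amplitude drops well below the baseline gives the coordinate:

```python
def saw_coordinate(received, baseline, axis_length, dip_threshold=0.5):
    """Toy position estimate for one axis of a SAW touchscreen.
    'received' and 'baseline' are sampled amplitudes of the ultrasonic
    burst; a touch absorbs part of the wave, so the first sample dipping
    well below the baseline maps linearly onto the axis."""
    n = len(received)
    for i, (r, b) in enumerate(zip(received, baseline)):
        if r < b * dip_threshold:
            return i / (n - 1) * axis_length
    return None  # no touch detected on this axis
```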

There is one comment:

  1. Hi Khlood ^_^
    Thank you for the helpful information.
    I like your blog design so much.
    Good luck
