
Response to [OmniTouch: wearable multitouch… ] by Harrison et al.

One Sentence

OmniTouch provides a wearable, depth-sensing mechanism for turning any surface into an interactive multitouch interface.

(Original: “OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces.”)

More Sentences

OmniTouch uses a depth-sensing camera to obtain depth information about the scene, from which it recognizes the pointing finger and its state (e.g., touching, hovering). It also spatially tracks and defines the projected surface and calibrates the projector/camera pair. The system is demonstrated in a number of example applications and tested in a user study that examined whether OmniTouch correctly registers touch events and how accurately they can be localized.
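A rough sketch of the core idea (my own illustration, not the authors' implementation): fingers show up in the depth map as narrow runs of pixels standing off the surface, so one can scan each depth-image row for runs whose physical width matches a finger, then decide touch vs. hover from the depth gap between fingertip and surface. All names and thresholds below are assumed, not the paper's values.

```python
import numpy as np

FINGER_WIDTH_MM = (5, 25)   # plausible finger widths (assumption, not the paper's values)
TOUCH_GAP_MM = 10           # fingertip-to-surface gap that still counts as "touching"

def find_finger_slices(depth_row, surface_depth, mm_per_px):
    """Find runs of pixels closer than the surface whose width is finger-like."""
    above = depth_row < surface_depth - 3  # pixels standing off the surface
    slices, start = [], None
    for i, a in enumerate(np.append(above, False)):  # trailing False closes open runs
        if a and start is None:
            start = i
        elif not a and start is not None:
            width_mm = (i - start) * mm_per_px
            if FINGER_WIDTH_MM[0] <= width_mm <= FINGER_WIDTH_MM[1]:
                slices.append((start, i))
            start = None
    return slices

def classify_touch(fingertip_depth, surface_depth):
    """'touch' if the fingertip is within TOUCH_GAP_MM of the surface, else 'hover'."""
    return "touch" if surface_depth - fingertip_depth <= TOUCH_GAP_MM else "hover"
```

The real system does this in 2-D with template matching and clustering across rows, plus flood-filling toward the surface to disambiguate touch from hover; the sketch only captures the per-row slice test.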

Key Points

  • Wording
    • “Diminutive screens and buttons mar the user experience, and otherwise prevent us from realizing their full potential;”
    • “The ordering of the surfaces was randomized to compensate for any order effects;”
  • “A key contribution is our depth-driven template matching and clustering approach to multitouch finger tracking. This enables on-the-go interactive capabilities, with no calibration, training or instrumentation of the environment or the user, creating an always-available interface;”
  • Five streams of related work, including:
    • Interactive projection interfaces
    • Pico-projector-enabled interfaces
    • Detecting fingers, hands, touches, and gestures
    • Depth-camera tracking
  • (For Kinect) a minimum sensing distance of ~50 cm necessitated awkward placement high above the head to capture the hands;
  • Three ways to define, present, and track interactive areas:
    • One size fits all (hand size)
    • Classification-Driven Placement
    • User-Specified Placement
  • Very nice way of writing up user study:
    • “At a high level, can OmniTouch correctly register touch events and how accurately can they be localized? At a meta-level, how large would interface elements have to be to enable reliable operation of an ad hoc interface rendered on the hand?”

Inspiration

  • This paper: multitouch on any surface; next step: on non-planar surface; future steps: in/on/around any objects;
  • Idea: setup can be cumbersome (for the moment); but the tech can still be awesome;
  • Other than thinking about the next step, maybe try thinking about the ‘previous steps’?

To Do

  • Learn Kalman filter;
  • Learn POSIT algorithm (to find the position and orientation of the projector)
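
While I'm at it, a minimal 1-D constant-velocity Kalman filter, the kind of smoothing one could apply to a tracked fingertip position; every matrix and noise value below is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def kalman_1d(measurements, dt=1.0, process_var=1e-3, meas_var=0.5):
    """Smooth a noisy 1-D track with a constant-velocity Kalman filter."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: position += velocity * dt
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = process_var * np.eye(2)            # process noise covariance
    R = np.array([[meas_var]])             # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])  # initial state: first reading, zero velocity
    P = np.eye(2)                          # initial state covariance
    out = []
    for z in measurements:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        y = z - (H @ x)[0, 0]              # innovation (measurement residual)
        S = H @ P @ H.T + R                # innovation covariance (1x1 here)
        K = P @ H.T / S[0, 0]              # Kalman gain, scalar measurement case
        x = x + K * y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return out
```

For a perfectly constant input the filter reproduces it exactly; on noisy input it trades a little lag for a much steadier estimate, which is exactly what jittery per-frame fingertip detections need.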

About Xiang 'Anthony' Chen

Making an Impact in Your Life
