This paper presents a novel way of designing pen and touch input by dividing the labor – pen writes and touch manipulates – and synthesizing them to yield new tools.
- The pen writes, touch manipulates, and the combination of pen+touch yields new tools;
- (A reusable design solution) the nonpreferred hand frames the action of the preferred hand;
- The observational study:
- We asked each participant to illustrate their ideas for a hypothetical short film by pasting and annotating clippings in a paper notebook;
- We videotaped the sessions and looked for patterns in how participants worked, gestured, held objects, and structured their working space.
- … pen strokes made in reference to an object while the user holds it are recognized as gestural commands, rather than as ink strokes that would otherwise mark the object.
- B1. Tuck the pen -> what does this imply? That tools can be temporarily hidden, suspended, or set aside?
- One (possibly controversial) issue: in the real world we mostly use a pen to write, yet on an interactive surface the pen's function is multiplexed well beyond writing or inking. Do we have to adhere to the real-world metaphor? If not, how can a person understand and learn the non-writing functions of a pen? Could we let users define the gestures while using the interface?
- The activation force of touch might not be zero. I suspect we subtly apply different forces with the nonpreferred hand, e.g., pressing harder to hold an artifact still while writing on it.
- What kinds of tasks need pen + touch? This might be another interesting question. While touch requires nothing but the user's hands, the pen is an 'external' tool deliberately carried with the device. Why a pen? Why not other tools? Or why not just touch? What is it that touch cannot do but pen can? And what is it that none of touch, pen, or pen+touch can do?
- I suspect pen and touch are interchangeable only for tapping (not dragging, crossing, etc.; see also the Fig. 3 summary).
- In a sense, this paper re-defines people's natural gestures for using a pen.
- It remains an open question whether it is worth mimicking the physical properties of paper, books, etc. on a flat display. Such a display by its nature (2D, lacking haptic feedback, etc.) resists such mimicry. Why not instead create new paradigms for drawing, reading, etc.?
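The "nonpreferred hand frames the action of the preferred hand" rule noted above — a pen stroke on an object becomes a gestural command while touch holds that object, and plain ink otherwise — can be sketched as a tiny dispatcher. This is a minimal sketch of my own, not the paper's implementation; all names (`Surface`, `pen_stroke`, etc.) are hypothetical.

```python
# Hypothetical sketch of the pen+touch dispatch rule: touch (nonpreferred
# hand) sets the context in which a pen stroke (preferred hand) is
# interpreted. Not the paper's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Surface:
    # Object ids currently pinned by a touch contact.
    held_objects: set = field(default_factory=set)

    def touch_down(self, obj_id: str) -> None:
        self.held_objects.add(obj_id)

    def touch_up(self, obj_id: str) -> None:
        self.held_objects.discard(obj_id)

    def pen_stroke(self, obj_id: str) -> str:
        # While the nonpreferred hand holds the object, the pen stroke is
        # a gestural command; otherwise it marks the object with ink.
        return "gesture" if obj_id in self.held_objects else "ink"

s = Surface()
assert s.pen_stroke("photo") == "ink"      # pen alone writes
s.touch_down("photo")
assert s.pen_stroke("photo") == "gesture"  # touch + pen yields a tool
s.touch_up("photo")
assert s.pen_stroke("photo") == "ink"      # releasing touch restores inking
```

The appeal of this framing is that the mode switch needs no explicit button or menu: releasing the nonpreferred hand silently returns the pen to its default writing role.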