This paper describes a visual tagging system called CyberCode, in which a variety of applications are derived from the design and implementation of printed, camera-readable tags. In this system, CCD/CMOS cameras or other environmental cameras serve as readers, so that the ID as well as the position and orientation of each code can be recognized. Sample applications include physically embedded links, an indoor navigation system, and uses in ubiquitous computing environments.
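To make the "ID plus orientation" idea concrete, here is a minimal sketch of decoding a toy 2D tag. This is not the actual CyberCode format (which uses a guide bar, corner cells, and error checking, and recovers full 3D position from the tag corners with a calibrated camera); it only illustrates the principle that an asymmetric printed pattern lets a reader recover both an identifier and a rotation. In this toy format, the top row of a binarized bit grid is a solid "guide bar" and the remaining cells encode the ID in binary; all names here are made up for the example.

```python
# Toy 2D-tag decoder: recover (ID, rotation) from a binarized bit grid.
# Simplified illustration only -- not the real CyberCode layout.

def rotate_cw(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def decode(grid):
    """Return (tag_id, orientation_degrees) for a binarized tag image.

    orientation is the clockwise rotation (in 90-degree steps) needed
    to bring the guide bar back to the top row, i.e. the tag's pose.
    """
    for turns in range(4):
        if all(grid[0]):                      # solid guide bar found on top
            bits = [b for row in grid[1:] for b in row]
            tag_id = int("".join(map(str, bits)), 2)
            return tag_id, turns * 90
        grid = rotate_cw(grid)
    raise ValueError("no guide bar found: not a valid tag")

# A canonical tag encoding ID 0b010101 = 21, then the same tag as the
# camera would see it after a 90-degree clockwise rotation.
canonical = [[1, 1, 1],
             [0, 1, 0],
             [1, 0, 1]]
print(decode(canonical))             # (21, 0)
print(decode(rotate_cw(canonical)))  # (21, 270)
```

The key design point the sketch preserves is that one visual feature (the guide bar) breaks the tag's rotational symmetry, which is what turns a plain barcode-like ID into a pose-carrying marker.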
What is important in this paper, viewed from today's perspective, is the question it raises: given a piece of technology (say, a camera that can read such a code), how do you develop it into a systematic variety of usage scenarios?
The paper presents seven applications, and we can see some inherent axes of variation between them, namely:
- Physical context (e.g. indoor navigation system) vs. digital context (e.g. object recognition and registration);
- ID information (e.g. embedded links) vs. position and orientation information (e.g. annotating the real world);
- Nouns (e.g. retrieving embedded links) vs. verbs (e.g. direct manipulation).