
Prototype

The second and last week of the workshop began with Luca Mezzalira’s training session on Flash and ActionScript 3. Flash is a key program for developing multimedia applications, while ActionScript is its scripting language, used to control how an application responds to its users’ input.

During the workshop’s first week the students had chosen to prototype a concept, ‘Cinderella’, which uses a tangible user interface (TUI). So the students learnt the basics of connecting sensors to a computer so that it can perceive users’ physical actions.

In Cinderella, customers trying on a shoe can tap it on various parts of an interactive floor in front of them. The floor then displays information about the shoe (its size, price, materials etc.) and the availability of alternatives (different sizes or colours of the same style, for instance). Pressing the shoe on the interactive floor can also, if the customer wishes, complete the purchase.

The prototype made in this workshop used reacTIVision, an open-source image-recognition system which allows the computer to identify and track the movement of simple amoeba-like graphic images. These images (‘fiducials’), a sort of 2D barcode, are a few centimetres wide and easily printed on paper.
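
To make this concrete: reacTIVision broadcasts the identity and position of every fiducial it is tracking as TUIO messages (OSC packets sent over UDP, port 3333 by default). A program on the computer only has to listen for these messages. The snippet below is an illustrative sketch in Python using the python-osc package, not the workshop’s actual code; it simply prints the ID and position of each tracked fiducial.

    # Illustrative sketch (not the workshop's code): listen for the TUIO
    # messages that reacTIVision broadcasts over UDP port 3333 and print
    # the ID and position of each tracked fiducial.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_2dobj(address, *args):
        # "set" messages carry the live state of one fiducial:
        #   set <session_id> <fiducial_id> <x> <y> <angle> ...
        # (x and y are normalised to the 0..1 range of the camera image)
        if args and args[0] == "set":
            session_id, fiducial_id, x, y, angle = args[1:6]
            print("fiducial %d at (%.2f, %.2f)" % (fiducial_id, x, y))

    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dobj", on_2dobj)
    BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()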

Each particular shoe’s specification (style, colour and size) is assigned a unique fiducial. For this prototype the fiducials were printed onto adhesive labels and stuck to the shoe’s sole. This allows the computer to ‘read’ the shoe and display the appropriate information.
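
In software, this amounts to little more than a lookup table keyed by fiducial ID. A minimal sketch, with IDs and shoe data invented purely for illustration, might look like this:

    # Hypothetical catalogue: each fiducial ID stuck to a sole maps to one
    # shoe specification. The IDs and data below are invented examples.
    SHOE_CATALOGUE = {
        12: {"style": "Ballerina", "colour": "red",   "size": 37, "price_eur": 89.0},
        13: {"style": "Ballerina", "colour": "black", "size": 38, "price_eur": 89.0},
        27: {"style": "Sandal",    "colour": "tan",   "size": 39, "price_eur": 75.0},
    }

    def describe_shoe(fiducial_id):
        """Return the text to project when a given fiducial is recognised."""
        shoe = SHOE_CATALOGUE.get(fiducial_id)
        if shoe is None:
            return "Unknown shoe - please ask a member of staff"
        return "%(style)s, %(colour)s, size %(size)d - EUR %(price_eur).2f" % shoe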

To demonstrate the principle, the students constructed their prototype using whatever materials were at hand. A Perspex (Plexiglass) sheet, representing the interactive floor, was placed on a table frame and covered with tracing paper to reduce its transparency.

Under the Perspex the students hung an ordinary video projector and connected it to their computer so that it would project the information. Because the projector was close to the Perspex ‘floor’, an inclined mirror was used to double the length of the beam and thus enlarge the projected image. Finally, they placed an ordinary video camera underneath, facing upwards towards the interactive floor.

The prototype was presented for initial evaluation to a group of IUAV interaction design students and professors, and to an expert who had made an installation, similar in principle to this, at Venice’s Architecture Biennale. One of the students sat in front of the prototype interactive floor, put on a shoe with its fiducial sticker, and tapped it on various areas of the floor in order to elicit various kinds of information.

The video camera sent an image of the floor to the computer which, through the software, identified the fiducial and its position and caused the projector to beam the desired information up onto the floor.
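
The application logic behind this loop is therefore small: read the fiducial ID and position reported by reacTIVision, decide which area of the floor the shoe is touching, and choose what to project. The sketch below assumes a simple three-zone layout (an assumption for illustration, not the students’ actual design) and reuses the lookup sketched earlier:

    # Illustrative interaction logic: the fiducial's normalised position on
    # the floor selects what to project. The three-zone layout is assumed
    # here for illustration only.
    def zone_for_position(x, y):
        if x < 0.33:
            return "details"       # size, price, materials
        if x < 0.66:
            return "alternatives"  # other sizes or colours of the same style
        return "purchase"          # confirm a self-service purchase

    def on_fiducial(fiducial_id, x, y):
        zone = zone_for_position(x, y)
        shoe = describe_shoe(fiducial_id)  # lookup table sketched above
        print("Projecting the '%s' panel for: %s" % (zone, shoe))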

The concept could be extended to identify regular registered customers. It could allow such customers to complete a self-service purchase by placing loyalty or credit cards on the interactive floor. It could also give shops and large shoe-manufacturing firms real-time statistics about customers’ preferences and purchasing habits.

This low-budget prototype used a low-quality video camera and, to illuminate the fiducials, normal rather than infra-red lights, so it did not work reliably in all lighting conditions. But it successfully demonstrated Cinderella’s technical feasibility and the interactive experience it could offer the customer. It will be prototyped by H-umus in early 2008 and take its place alongside the other ACRIB project prototypes.