5. The Development of Auto-ID Technologies
As I have shown in the Literature Review (ch. 2), a thorough assessment of auto-ID indicates that a large number of techniques and devices are available. While studying each of these auto-ID technologies in depth is beyond the scope of this investigation, the more prominent ones will be examined using a qualitative case study methodology. In this chapter the story behind the development of individual auto-ID technologies will be explored: first, to highlight the importance of incremental innovation within auto-ID; second, to show that the auto-ID selection environment has grown to encompass more than just bar code and magnetic-stripe technology; third, to point to the notion of technological trajectory as applied to auto-ID; fourth, to highlight the creative symbiosis taking place between various auto-ID devices; and fifth, to establish a setting in which the results of the forthcoming chapters can be interpreted. The high-level drivers that led to each invention will also be presented here as a way to understand innovation in the auto-ID industry.
5.1. Bar Codes
5.1.1. Revolution at the Check-out Counter
Of all the auto-ID technologies in the global market today, bar code is the most widely used. Ames (1990, p. G-1) defines the bar code as:
an automatic identification technology that encodes information into an array of adjacent varying width parallel rectangular bars and spaces.
The technology’s popularity can be attributed to its application in retail, specifically in the identification and tracking of consumer goods. Before the bar code, only manual identification techniques existed: handwritten labels or carbon-copied paper were attached or stuck to ‘things’ needing identification. In 1932 the first study on the automation of supermarket checkout counters was conducted by Wallace Flint. Subsequently, in 1934, Kermode and his colleagues filed a patent presenting bar code-type concepts (Palmer 1995, p. 11). The patent described the use of four parallel lines as a means to identify different objects. Yet it was not until the mid-1950s, when digital computers began to be used more widely for information storage, that the introduction of automated identification and data collection techniques became feasible. In 1959 a group of railroad research and development (R&D) managers (including GTE Applied Research Lab representatives) met in Boston to solve some of the rail industry’s freight problems. By 1962 Sylvania (along with GTE) had designed a system which was implemented in 1967 using colour bar code technology (Collins & Whipple 1994, p. 8). In 1968, concentrated efforts began to develop a standard for supermarket point-of-sale, which culminated in RCA developing a bull’s eye symbol to be operated in the Kroger store in Cincinnati in 1972 (Palmer 1995, p. 12). Until then, bar codes in retail were only used for order picking at distribution centres (Collins & Whipple 1994, p. 10). But it was not the bull’s eye bar code that would dominate, but the Universal Product Code (UPC) standard. The first UPC bar code to cross the scanner was on a packet of Wrigley’s chewing gum at Marsh’s supermarket in Ohio in June 1974 (Brown 1997, p. 5). Within two years the vast majority of retail items in the United States carried a UPC.
Bar code technology increased in popularity throughout the 1980s as computing power and memory became more affordable, and consumer acceptance increased. An explosion of useful applications was realised. Through the retail industry alone, the bar code reached a global population in just a short period of time. The changes in the check-out process did not go unnoticed: they changed the way consumers bought goods, the way employees worked and how businesses functioned. In terms of bar code developments, the 1990s have been characterised by an attempt to evolve standards and encourage uniformity. This has been particularly important in the area of supply chain management (SCM). For a history of bar code see table 5.1 on the following page.
Table 5.1 Timeline of the History of Auto-ID
Year | Event
1642 | Pascal’s numbering machine
1800 | Infrared radiation
1801 | Ultraviolet radiation
1803 | Accumulator
1833 | Babbage’s proposed analytical engine
1850 | Faraday’s Thermistor had many of the elements needed for auto-ID
1859 | Hollerith’s tabulating machine used punched cards for data input and was used to enter data for the 1890 US census
1890 | P. G. Nipkow invented sequential scanning, whereby an image was analysed line by line
1932 | Wallace Flint’s thesis on auto identification for supermarkets using punched cards
1934 | Frequency standards
1939 | Digital computers with card and switch input
1943 | ENIAC computer using punched card input
1946 | CRT input from pulses on the face of the CRT
1947 | Quality amplifier circuits
1948 | Information theory
1949 | Patent applied for by Norm Woodland for a circular bar code
1960 | Light-emitting diodes
1960 | Improved photo-conductive detectors
1961 | Bar codes on rail cars, invented by F. H. Stites
1968 | Two-of-five (2-of-5) code by Jerry Wolfe
1970 | Charged coupled devices
1970 | Modern industrial applications of bar code
1972 | Codabar
1972 | Interleaved 2 of 5 invented by David Allais
1972 | First major multi-facility installation, at General Motors, in which engines and axles were bar coded with Interleaved 2 of 5. Initial installations by David Collins and Computer Identics, the first significant company devoted entirely to bar codes, followed by Al Wurz and AccuSort
1973 | U.P.C adopted
1974 | Marsh supermarket in Troy, Ohio, the first store using U.P.C. bar codes regularly
1974 | Code 39, the first practical alphanumeric bar code invented by David Allais and Ray Stevens of INTERMEC Corporation
1977 | EAN adopted; Codabar selected by the American Blood Commission
1979 | General Motors developed identification and traceability program for automobile parts using Code 39 and Interleaved 2 of 5 with autodiscrimination
1981-1982 | Code 93 and Code 128 introduced
1982 | British Army develops bar code system for military items. U.S. Department of Defense LOGMARS program for replacement of parts using Code 39
1984 | US Health Industry bar code standard using Code 39
1987 | Code 49 and Code 16K, high-density, stacked codes developed
This table has been compiled using numerous sources, but primarily LaMoreaux (1998, pp. 52-53). It is not meant to be exhaustive but it does highlight the major bar code related developments.
5.1.2. The Importance of Symbologies
When examining the technical features of the bar code it is important to understand symbologies, also known as configurations. There are many different types of symbologies that can be used to implement bar codes, each with its own distinct characteristics, and new symbologies are still being introduced today. As Cohen (1994, p. 55) explains, a symbology is a language with its own rules and syntax that can be translated into ASCII code. Common to all symbologies is that the bar code is made up of a series of dark and light contiguous bars (Collins & Whipple 1994, pp. 20-24). When the bar code is read (by a device called a scanner), light is shone onto the bars. The pattern of black and white spaces is then reflected (like an OFF/ON series) and decoded using an algorithm. This pattern equates to an identification number but can be implemented using any specification. For instance, the major linear bar code symbologies include: Interleaved 2 of 5, Code 39 (also known as Code 3-of-9), EAN 13, U.P.C. 8 and Code 128. Major two-dimensional symbologies, known also as area symbologies, include Data Matrix, MaxiCode, and PDF417. The 2D bar code configuration has overcome the physical data limitations of the linear configurations; end-users are now able to store larger quantities of information on bar codes with many company-defined fields. By contrast, linear bar codes should never extend beyond 20 characters as they become difficult for scanners to read. Other linear and 2D bar code symbologies include: Plessey Code, Matrix 2 of 5, Nixdorf Code, Delta Distance A, Codabar, Codablock, Code 1, Code 16K, Code 11, Code 39, Code 49, Code 93, Code 128, MSI Code, USD-5, Vericode, ArrayTag and Dotcode.
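The relationship between a symbology’s bar pattern and the identification number it encodes can be illustrated with the EAN-13 check digit, which guards against misreads. The following sketch computes the check digit from the first twelve digits; the example number is arbitrary:

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit for the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1 and digits
    in even positions are weighted 3; the check digit brings the
    weighted sum up to the next multiple of 10.
    """
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# The 12 digits 590123412345 yield check digit 7,
# so the full EAN-13 number is 5901234123457.
print(ean13_check_digit("590123412345"))
```

A scanner performs the same calculation after decoding the bars; if the recomputed digit does not match the printed one, the read is rejected.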
Among the significant incremental innovations to bar code technology have been the bar coding of small-sized objects and the reading of different symbologies using a single hardware device. In 1996 the UCC and EAN recognised the need for a symbology that could be applied to small-sized products such as microchips and health care products. The UCC and EAN Symbol Technical Advisory Committee (STAC) identified a solution that was able to incorporate the benefits of both linear and 2D bar codes. The symbol class is called Composite Symbology (CS), and the family of bar codes is called Reduced Space Symbology (RSS). It has been heralded as the new generation of bar codes because it allows for the co-existence of symbologies already in use (Moore & Albright 1998, pp. 24-25). The biggest technical breakthrough (conceived prior to the 1990s), however, was autodiscrimination. This is the ability of a bar code system to read more than one symbology by automatically detecting which symbology has been used and converting the data to a relevant locally-used symbology using look-up tables. This not only allows the use of several different types of symbologies by different companies but has enormous implications for users trading their goods across geographic markets.
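A minimal sketch of the autodiscrimination idea follows. The start-pattern signatures below are purely hypothetical placeholders; real scanners discriminate on the measured bar-width patterns defined in each symbology’s specification, but the dispatch logic is the same in spirit:

```python
# Hypothetical start-pattern signatures keyed by symbology name.
# Real signatures come from the relevant symbology specifications.
SIGNATURES = {
    "code39":  "100101101101",   # placeholder for the Code 39 '*' start character
    "codabar": "101100100101",   # placeholder for a Codabar start character
    "i2of5":   "1010",           # placeholder for the Interleaved 2-of-5 start
}

def discriminate(raw_pattern: str) -> str:
    """Return the name of the first symbology whose start pattern
    matches the scanned bar/space pattern, or 'unknown'."""
    for name, start in SIGNATURES.items():
        if raw_pattern.startswith(start):
            return name
    return "unknown"

print(discriminate("100101101101000"))   # matched as code39
```

Once the symbology is identified, the decoded characters can be mapped through a look-up table into whatever locally-used symbology the receiving system expects.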
5.1.3. Bar Code Limitations
A technical drawback of the bar code itself is that it cannot be updated. Once a bar code is printed, it is the identifier for life. In many applications this does not present a problem; however, it does make updating the database where the data is stored a logistical nightmare. Unlike other auto-ID technologies that can be reprogrammed, a bar code database once set up is difficult to change; it is easier (in some instances) to re-label products. It should also be noted that label print quality can decline with age, depending on the quality of the material used for the label, the number of times the label has been scanned, environmental conditions and packaging material. “[I]t is possible (especially with marginal quality bar codes) for the bar code read today… not to be read by the same reader tomorrow” (Cohen 1994, p. 93). Verification, also known as quality assurance, is required during the production process to ensure that bar codes are made without defects. Problems that can be encountered include: undersized quiet zones, underburn/overburn, voids, ribbon wrinkling, short or long bar codes, transparent or translucent backgrounds, missing human-readable information, incorrect symbol size or font, spread or overlays, location on packaging, roughness and spots. For this reason, quality analysis should be seen as compulsory.
5.2. Magnetic-Stripe Cards
At almost the same time as the retail industry was undergoing revolutionary changes with the introduction of the bar code, the financial industry adopted magnetic-stripe card technology. What is of interest is that both the bar code and the magnetic-stripe card enjoyed limited exposure when they were first introduced in the late 1960s. It took about a decade for the technologies to become widespread, and each overcame a variety of obstacles. Taken together, the two techniques were major innovations that affected the way consumers carried out their day-to-day tasks. The technologies went hand in hand: on the one side were the actual commodities consumers purchased and on the other was the means with which to purchase them (see exhibit 5.1 on the following page). Yet the bar code differed from the magnetic-stripe card in that it was more a service-enabler offered by retailers to consumers, in addition to being effective in business back-end operations. The magnetic-stripe card, however, had a more direct and personal impact on the cardholder, as it was the individual’s responsibility to maintain it. The consumer had to carry it, use it appropriately, and was liable for it in every way.
5.2.1. The Virtual Banking Revolution (24x7)
Plain card issuing became popular in the 1920s when some United States retailers and petrol companies began to offer credit services to their customers. McCrindle (1990, p. 15) outlines the major developments that led to the first magnetic-stripe being added to embossed cards in 1969.
By the 1920s the idea of a credit card was gaining popularity... These were made of cardboard and engraved to provide some security... The 1930s saw the introduction of some embossed metal and plastic cards... Embossed cards could be used to imprint information on to a sales voucher... Diners Club introduced its charge card in 1950 while the first American Express cards date from the end of the 1950s.
Magnetic-stripe cards made their debut more than a decade after computer technology was introduced into the banking system in the 1950s. Until that time computers were mainly used for automating formerly manual calculations and financial processes rather than offering value-added benefits to bank customers (Essinger 1999, p. 66). One of the first mass mail-outs of cards to the public was by credit card pioneer, Chuck Russell who launched the Pittsburgh National Charge Plan. Out of the one hundred thousand cards that were sent to households about fifty per cent of them were returned, primarily because consumers did not know what to do with them or how to use them. Cash remained the preferred method of payment for some time.
Historically, embossed cards had made an impact on the market, particularly on the financial services industry. Financial transaction cards (FTC) were widespread by the late 1970s and large firms that had invested heavily in embossed-character imprinting devices needed time to make technological adjustments (Bright 1988, p. 13). Jerome Svigals (1987, p. 28f) explained the integration of the embossed card and the new magnetic-stripe as something that just had to happen:
It would take a number of years before an adequate population of magnetic-stripe readers became available and were put into use. Hence, providing both the embossing and stripe features was a transition technique. It allowed issued cards to be used in embossing devices while the magnetic-stripe devices built up their numbers.
Today magnetic-stripe cards are the most widely used card technology in the world (Kaplan 1996, p. 68), and they still have embossed characters on them for the cardholder’s name, card expiry date, and account or credit number. This is just one of many examples showing how historical events have influenced future innovations. As Svigals (1987, p. 29) noted fifteen years ago, it is not clear when, or even if, embossing will eventually be phased out. Hence his prediction that the smart card would start its life as “...a carrier of both embossed and striped media.” These recombinations are in themselves new innovations even though they are considered interim solutions at the time of their introduction; they are a by-product of a given transition period that continues for longer than expected. Perhaps here can also be found the reason why so many magnetic-stripe cards still carry bar codes. Essinger (1999, p. 80) describes this phenomenon by describing technology as being in a constant state of change: no sooner has a major new innovation been introduced than yet another incremental change causes a more powerful, functional, and flexible innovation to be born. Essinger uses the example of the magnetic-stripe card and subsequent smart card developments, cautioning however, that one should not commit the “cardinal sin of being carried away by the excitement of new technology and not stopping to pause to ask whether there is a market for it.” He writes (1999, p. 80) that “what matters is not the inherent sophistication of technology but the usefulness it offers to customers and, in extension, the commercial advantage it provides”.
5.2.2. Encoding the Magnetic-stripe
Magnetic-stripe technology had its beginnings during World War II (Svigals 1987, p. 170). Magnetic-stripe cards are composed of a core material such as paper, polyester or PVC. Typically, plastic card printers use either thermal transfer or dye sublimation technology. The process, as outlined on a manufacturer’s web page, is quite basic:
...you simply insert the ribbon and fill the card feeder. From there, the cards are pulled from the card feeder to the print head with rollers. When using a 5 panel colour ribbon the card will pass under the print head and back up for another pass 5 times. When all the printing is complete, the card is then ejected and falls into the card hopper.
Finally, the magnetic-stripe (similar to that of conventional audio tape) is applied to the card and a small film of laminated patches is overlaid. The stripe itself is divided laterally into three tracks, each designed for a different function (see table 5.2 on the following page). Track 1, developed by IATA, is used for transactions where a database needs to be accessed, such as an airline reservation. Track 2, developed by the ABA, contains account or identification number(s). This track is commonly used for access control applications and is written to before the card is despatched to the cardholder, so that every time it is presented it is first interrogated by the card reading device. As Bright (1988, p. 14) explains:
...[t]he contents, including the cardholder’s account number, are transferred directly to the card issuer’s computer centre for identification and verification purposes. This on-line process enables the centre to confirm or deny the terminal’s response to the presenter...
Finally, Track 3 is used for applications that require data to be updated with each transaction. It was introduced some time after Tracks 1 and 2. It contains an encoded version of the personal identity number (PIN) that is private to each individual card. The cardholder must key in the PIN at a terminal; it is then compared with the PIN verification value (PVV) to verify a correct match.
Table 5.2 Magnetic-stripe Track Description
Track Number | Description
Track 1 (read only)
210 bits/inch; 79 characters (alpha/numeric)
Used mainly by airline developers (IATA)
First field for account number (up to 19 digits)
Second field for name (up to 26 alphanumerics)
Track 2 (read only)
75 bits/inch; 40 digits (numeric only)
Developed by American Bankers Association on-line
First field for account number (up to 19 digits)
Track 3 (read/write)
210 bits/inch; 107 digits (numeric only)
Higher density achieved by later technology
Rewritten each use. Suitable for off-line
Uses PIN verification value (encoded)
* This table has been compiled using Bright (1988, p. 14).
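The track layouts summarised in table 5.2 can be illustrated with a simplified Track 2 parser. The sketch below assumes the ISO 7811 conventions of a ‘;’ start sentinel, ‘=’ field separator and ‘?’ end sentinel, and uses a made-up account number purely for illustration:

```python
def parse_track2(track: str) -> dict:
    """Parse an ISO 7811-style Track 2 string (a simplified sketch).

    Track 2 is numeric only: a start sentinel ';', the primary account
    number (up to 19 digits), a field separator '=', the expiry date
    (YYMM) plus further discretionary data, then an end sentinel '?'.
    """
    if not (track.startswith(";") and track.endswith("?")):
        raise ValueError("missing start or end sentinel")
    body = track[1:-1]
    pan, _, rest = body.partition("=")
    return {
        "account_number": pan,
        "expiry": rest[:4],          # YYMM
        "discretionary": rest[4:],
    }

# A made-up card number for illustration:
print(parse_track2(";4539148803436467=25121010000000000?"))
```

A real reader would also verify per-character parity and the trailing longitudinal redundancy check before trusting the fields, which this sketch omits.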
Each magnetic-stripe card is magnetically encoded with a unique identification number. This unique number is represented in binary on the stripe; this is known as biphase encodation. When the stripe is queried, the 1s and 0s are sent to the controller in their native format and converted into decimal digits for visual display only. When magnetic-stripe cards are manufactured they do not have any specific polarity. Data is encoded by creating a sequence of polarised vertical positions along the stripe. Mercury Security Corporation explains this process in detail. When choosing a magnetic-stripe card for an application, the following issues should be taken into consideration. First, should the magnetic-stripe be low-coercivity (loco) or high-coercivity (hico)? Hico stripes can typically withstand 10 times the magnetic field strength of loco stripes; most stripes today are hico so that they are not damaged by heat, exposure to sunlight or other magnets. Second, which track should the application use to encode data: track one, two or three? One should be guided here by ANSI/ISO standards, which recommend particular tracks for particular applications. Other considerations include whether the card requires lamination, embossing or watermarking, and whether the card will follow ISO card dimensions. The cost of the card chosen should also be considered as it can vary significantly (see table 5.3).
Table 5.3 Magnetic Stripe Card Types
Type | Feature | Typical Cost
7 mm paper | Cheap | 1 cent
10 mm PET | Durable | 8 cents
30 mm PVC | Emboss | 25 cents
PET laminate | Versatile | 50 cents
PVC D2T2 | Graphics | 75 cents
* This table is based on 1999 price estimates.
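The biphase encodation described above can be sketched as two-frequency (F2F) recording: every bit cell begins with a flux reversal, and a ‘1’ adds an extra reversal in mid-cell. A minimal illustration, representing the magnetisation of each half-cell as ‘N’ or ‘S’:

```python
def f2f_encode(bits: str) -> str:
    """Encode a bit string as two-frequency coherent-phase (F2F/biphase)
    magnetisation, returned as a string of half-cell levels ('N'/'S').

    Every bit cell begins with a flux reversal; a '1' carries an extra
    reversal in mid-cell, a '0' does not. A reader recovers the data by
    timing the reversals, which makes the scheme self-clocking and
    tolerant of variations in swipe speed.
    """
    level, out = "N", []
    for bit in bits:
        level = "S" if level == "N" else "N"      # reversal at cell start
        out.append(level)
        if bit == "1":
            level = "S" if level == "N" else "N"  # extra mid-cell reversal
        out.append(level)
    return "".join(out)

print(f2f_encode("0101"))
```

Note how each ‘0’ produces a uniform cell ("SS" or "NN") while each ‘1’ produces a split cell; the reader distinguishes them purely by the spacing of the reversals, not by absolute timing.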
5.2.3. Magnetic-stripe Drawbacks
The durability of magnetic-stripe cards often comes into question. “Magnetic stripes can be damaged by exposure to foreign magnetic fields, from electric currents or magnetised objects, even a bunch of keys” (Cohen 1994, p. 27). This is one reason why so many operators have expiry dates on the cards they issue. According to Svigals (1987, p. 185), “[m]agnetic stripes have been tested and are generally specified to a two-year product life by the card technology standards working groups.” Another drawback is that once a magnetic-stripe has been damaged, data recovery is impossible (Cohen 1994, p. 29). A magnetic-stripe card can also be worn out if it has been read too many times. Svigals (1987, p. 36) is more explicit in describing the limitations of the magnetic-stripe, writing that “[m]ost knowledgeable tape experts readily admit that the magnetic stripe content is: readable, alterable, modifiable, replaceable, refreshable, skimmable, counterfeitable, erasable, simulatable.” The magnetic-stripe has rewrite capability and a data capacity ranging from 49 to 300 characters. The latter is clearly a handicap when a chosen application requires the addition of new data or features. While linear bar codes are even more limited, as has been explained above, the magnetic-stripe may still not be suitable for a particular solution. Another issue that requires some attention is security. As Bright explains (1988, p. 15):
[t]he primary problem may be described with one word ‘passivity’; lacking any above board intelligence, the magnetic stripe card must rely on an external source to conduct the positive checking/authentication of the card and its holder. This exposes the system to attack. The scale of the problem [is] exacerbated by the relative ease of obtaining a suitable device with which to read and amend the data stored in the stripe.
There are however, numerous innovators that continue to believe that magnetic-stripe technology still has a future and they are researching means to make the technology more secure. For example, “ValuGard from Rand McNally relies on imperfections and irregularities of standard magnetic stripes... XSec from XTec employs the natural jitter of the encoded data to produce a security signature of the card... Watermark Magnetics from Thorn EMI involves modifications in the structure of the magnetic medium” (Jose & Oton 1994, p. 21f).
5.3. Smart Cards
5.3.1. The Evolution of the Chip-in-a-Card
The history of the smart card begins as far back as 1968. By that time magnetic-stripe cards, while not widespread, had been introduced into the market. Momentum from these developments, together with advancements in microchip technology, made the smart card a logical progression. Two German inventors, Jürgen Dethloff and Helmut Grötrupp, applied for a patent to incorporate an integrated circuit into an ID card (Rankl & Effing 1997, p. 3). This was followed by a similar patent application by Japanese academic, Professor Kunitaka Arimura, in 1970. Arimura was interested in incorporating “one or more integrated circuit chips for the generation of distinguishing signals” in a plastic card (Zoreda & Oton 1994, p. 36). His patent focused on how to embed the actual micro circuitry (Lindley 1997, p. 13). In 1971 Ted Hoff of the Intel Corporation also succeeded in assembling a computer on a tiny piece of silicon (Allen & Kutler 1997, p. 2). McCrindle (1990, p. 9) made the observation that the evolution of the smart card was made possible through two parallel product developments, the microchip and the magnetic-stripe card, which merged into one product in the 1970s. However, it was not until 1974 that previous chip card discoveries were consolidated. Roland Moreno’s smart card patents and vision of an electronic bank manager triggered important advancements, particularly in France. In that year, Moreno successfully demonstrated his electronic payment product by simulating a transaction using an integrated circuit (IC) card. What followed for Moreno, and his company Innovatron, was a batch of patents, among which was a stored-value application mounted on a ring which connected to an electronic device. Other subsequent important chip card patents can be seen in table 5.4.
Table 5.4 Significant Chip Card Patents After 1974
Innovator | Year | Country | Patent Description
Moreno | 1975 | France | Covering PIN and PIN comparator within chip. Patent assigned to Innovatron.
Ugon | 1978 | France | Covering automatic programming of microprocessor.
Billings | 1987 | France | Covering flexible inductor for contactless smart cards, AT&T.
LeRoux | 1989 | France | Covering a system of payment on information transfer by money card with an electronic memory. Assigned to Gemplus.
Hennige | 1989 | Germany | Covering method and device for simplifying the use of a plurality of credit cards, or the like.
Lawlor | 1993 | USA | Covering method and system for remote delivery of retail banking services.
* This table has been compiled using Kaplan (1996, p. 228).
By the late 1970s the idea of a chip-in-a-card had made a big enough impression that large telecommunications firms were committing research funds towards the development of IC cards. In 1978 Siemens built a memory card around its SIKART chip which could function as an identification and transaction card (see exhibit 5.2 on the following page). Despite early opposition to the new product it did not take long for other big players to make significant contributions to its development. In 1979 Motorola supplied Bull with a microprocessor and memory chip for the CP8 card. In July of that year Bull CP8’s two-chip card was publicly demonstrated in New York at American Express. French banks were convinced that the chip card was the way of the future and called a bid for tender by the seven top manufacturers at the time: CII-HB, Dassault, Flonic-Schlumberger, IBM, Philips, Transac and Thomson. Ten French banks with the support of the Posts Ministry created the Memory Card Group in order to launch a new payment system in France. Such was the publicity generated by the group that more banks began to join in 1981, afraid they would be left behind as the new technology was trialled in Blois, Caen and Lyon. Additionally, the US government awarded a tender to Philips to supply them with IC identification cards. By 1983 smart cards were being trialled in the health sector to store vaccination records and to grant building access to hemodialysis patients.
It was during this period in the early 1980s that the French recognised the potential of smart cards in the provision of telephony services. The first card payphones were installed by Flonic Schlumberger for France Telecom and were called Telecarte. By 1984 Norway had launched Telebank, Italy the Tellcard, and Germany the Eurocheque. A number of friendly alliances began between the large manufacturers who realised they could not achieve their goals in isolation. Bull and Philips signed agreements with Motorola and Thomson respectively. Meanwhile, MasterCard International and Visa International made their own plans for launching experimental applications in the United States. In 1986 Visa published the results of its collaborative trials with the Bank of America, the Royal Bank of Canada and the French CB group. The “...study show[ed] that the memory card [could] increase security and lower the costs of transactions” (Cardshow 1996, p. 1). Visa quickly decided that the General Instrument Corporation Microelectronics Division would manufacture their smart cards. The two super smart card prototypes were supplied by Smart Card International and named Ulticard (see exhibit 5.2 above). In 1987 MasterCard decided to spend more time reviewing the card’s potential and continued to conduct market research activities. Issues to do with chip card standardisation between North America and Europe became increasingly important as more widespread diffusion occurred.
Today it can be said that a microprocessor explosion has occurred. “Smart cards are part of the new interest in ‘wearable’ computing. That’s computing power so cheap and small it’s always with you” (Cook 1997, p. xi). The progress toward the idea of ubiquitous computing is quite difficult to fathom when one considers that the credit-card sized smart card possesses more computing power than the 1945 ENIAC computer which:
“...weighed 30 tonnes, covered 1500 square feet of floor space, used over 17000 vacuum tubes... 70000 resistors, 10000 capacitors, 1500 relays, and 6000 manual switches, consumed 174000 W of power, and cost about $500000” (Martin 1995, p. 3f).
Today’s smart card user is capable of carrying a ‘mental giant’ in the palm of their hand. Smart cards can be used as payment vehicles, access keys, information managers, marketing tools and customised delivery systems (Allen & Kutler 1997, pp. 10-11). Many large multinational companies have supported smart card technology because its benefits are manifold over other technologies. It was projected that the volume of smart-card related transactions would exceed twenty billion annually by the year 2000 (Kaplan 1996, p. 10). Michael Ugon, a founding father of the smart card, said in 1989 that the small piece of plastic with an embedded chip was destined to “...invade our everyday life in the coming years, carrying vast economical stakes” (Ugon 1989, p. 4). McCrindle (1990, p. ii) likewise commented that the smart card “...ha[d] all the qualities to become one of the biggest commercial products in quantity terms this decade”. And the French in 1997 were still steadily pursuing their dream of a smart city, “...a vision made real by cards that [could] replace cash and hold personal information” (Amdur 1997, p. 3). Currently, while there is a movement by the market to espouse smart card technology, numerous countries and companies continue to use magnetic-stripe cards.
5.3.2. Memory and Microprocessor Cards
As Lindley (1997, p. 15f) points out, there is generally a lack of agreement on how to define the smart card. This can probably be attributed to the differences not only in functionality but also in price among the various types of smart cards. According to Rankl and Effing (1997, pp. 12-14), smart cards can be divided into two groups: memory cards and microprocessor cards (contact/contactless). As described by Allen and Kutler (1997, p. 4), memory cards are:
...primarily information storage cards that contain stored value which the user can “spend” in a pay phone, retail, vending, or related transaction.
Memory cards are less flexible than microprocessor cards because they possess simpler security logic, and only basic coding can be carried out on the more advanced memory cards. However, what makes them particularly attractive is their low cost per unit to manufacture, hence their widespread use in pre-paid telephone and health insurance cards. The other type of smart card, the microprocessor card, is defined by the International Standards Organisation (ISO) and the International Electrotechnical Commission (IEC) as any card that contains a semiconductor chip and conforms to ISO standards (Hegenbarth 1990, p. 3). The microprocessor card actually contains a central processing unit (CPU) which
...stores and secures information and makes decisions, as required by the card issuer’s specific application needs. Because intelligent cards offer a read/write capability, new information can be added and processed (Allen & Kutler 1997, p. 4).
The CPU is surrounded by four additional functional blocks: read-only memory (ROM), electrically erasable programmable ROM (known as EEPROM), random access memory (RAM) and the input/output (I/O) port. The Smart Card Forum Committee (1997, p. 237) outlines that the card is:
...capable of performing calculations, processing data, executing encryption algorithms, and managing data files. It is really a small computer that requires all aspects of software development. It comes with a Card Operating System (COS) and various card vendors offer Application Programming Interface (API) tools.
One further variation to note is that microprocessor cards can be contact, contactless (passive or active) or a combination of both. Thus users carrying contactless cards need not insert their card in a reader device but simply carry them in their purse or pocket. While the contactless card is not as established as the contact card it has revolutionised the way users carry out their transactions and perceive the technology. For an exhaustive discussion on different types of smart cards from ROM to FRAM to EEPROM see Rankl and Effing (1997, pp. 40-60).
5.3.3. Standards and Security
Smart card dimensions are typically 85.6 mm by 54 mm. This standard format, ‘ID-1’, stipulated in ISO 7810, was first created in 1985 for magnetic-stripe cards. As smart cards became more popular, ISO made allowances for the microchip to be included in the standard. Smaller smart cards have been designed for special applications such as GSM handsets; these are the ID-000 format, known as the ‘plug-in’ card, and ID-00, known as the ‘mini-card’ (Rankl & Effing 1997, p. 21). A contact smart card requires physical contact with the reader for both power supply and data transfer. The tiny gold-plated contacts (six to eight of them) are defined in ISO 7816-2. As a rule, if a contact smart card also contains a magnetic-stripe, the contacts and the stripe must never appear on the same side. Each contact plays an important role. Two of the eight contacts (C4 and C8) have been reserved for future functions, but the rest serve purposes such as supply voltage (C1), reset (C2), clock (C3), ground (C5), external voltage for programming (C6), and I/O (C7). Contactless smart cards, on the other hand, work on the same technical principles as animal transponder implants. For simple solutions the card only needs to be read, so that transmission can be carried out by frequency modulation, for instance.
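The contact assignments described above can be summarised programmatically. The following Python sketch is purely illustrative (the dictionary name and the wording of the labels are my own); it simply maps each ISO 7816-2 contact position to its function:

```python
# Contact assignments for a contact smart card per ISO 7816-2.
# C4 and C8 are reserved for future functions. Labels are illustrative.
ISO7816_CONTACTS = {
    "C1": "Vcc (supply voltage)",
    "C2": "RST (reset)",
    "C3": "CLK (clock)",
    "C4": "reserved",
    "C5": "GND (ground)",
    "C6": "Vpp (external programming voltage)",
    "C7": "I/O (serial input/output)",
    "C8": "reserved",
}

def describe(contact: str) -> str:
    """Return the function of a given contact pad."""
    return ISO7816_CONTACTS[contact]

print(describe("C7"))  # I/O (serial input/output)
```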
Several different types of materials are used to produce smart cards. The first well-known material (also used for magnetic-stripe cards) is PVC (polyvinyl chloride). PVC smart cards, however, were noticeably non-resistant to extreme temperature changes, so ABS (acrylonitrile-butadiene-styrene) has been used for smart cards for some time. PVC cards have been known to melt in climates that reach consistent temperatures of 30 degrees Celsius. For instance, when the ERP system was launched in Singapore in 1998, many people complained that melting smart cards had destroyed their card readers. Among the group who reported the most complaints to local newspapers were taxi drivers, who were driving for long periods of time. Similarly, card errors often occur in mobile handsets that have been left in high temperatures. PET (polyethylene terephthalate) and PC (polycarbonate) are other materials also used in the production of smart cards. The two most common techniques for mounting a chip on the plastic foil are the TAB (tape automated bonding) technique and the wire bond technique. The former is more expensive but is considered to have a stronger chip connection and a flatter finish; the latter is more economical because it uses processes similar to those used by the semiconductor industry for packaging, but is thicker in appearance. New processes have recently been developed to allow a card to be manufactured in a single process. Rankl and Effing (1997, p. 40) explain, “[a] printed foil, the chip module and a label are inserted automatically into a form, and injected in one go”.
Just as in magnetic-stripe technology, the most common method of user identification for smart cards is the PIN. The PIN is usually four digits in length (even though ISO 9564-1 permits up to twelve characters), and is compared with the reference number stored in the card. The result of the comparison is then sent to the terminal, which triggers a transaction: accept or reject. In addition to the PIN, a password may be stored in a file on the card and transparently verified by the terminal. While the magnetic-stripe card relies solely on the PIN, smart card security is implemented at numerous hierarchical levels (Ferrari et al. 1998, pp. 11f). There are technical options for chip hardware (passive and active protective mechanisms), as well as software and application-specific protective mechanisms. With all these types of protection against a breach of security, logical and physical attacks are extremely difficult (Rankl & Effing 1997, pp. 261-272). The encryption in smart cards is far more sophisticated than that of the magnetic-stripe. Crypto-algorithms can be built into smart cards that ensure both the secrecy and the authenticity of information. External security features that can be added to the card include: signature strip, embossing, watermarks, holograms, biometrics, microscript, multiple laser image (MLI) and lasergravure. While the smart card is a secure auto-ID technology, it has been argued that the device is still susceptible to damage, loss and theft. This has led to biometrics being stored on the smart card for additional security purposes (see exhibit 5.3 on the following page).
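The on-card PIN comparison described above can be sketched as a toy model. This is not any real card operating system: the class name, return values and retry limit are illustrative assumptions (real cards do maintain a retry counter that blocks the card after repeated failures):

```python
class PinVerifier:
    """Toy model of on-card PIN comparison. Names and behaviour are
    illustrative, not taken from any real card operating system."""

    def __init__(self, reference_pin: str, max_attempts: int = 3):
        self.reference_pin = reference_pin   # reference number held on the card
        self.max_attempts = max_attempts
        self.attempts_left = max_attempts

    def verify(self, entered_pin: str) -> str:
        """Compare an entered PIN with the reference; the result is what
        the card would report to the terminal."""
        if self.attempts_left == 0:
            return "blocked"                 # card refuses further comparisons
        if entered_pin == self.reference_pin:
            self.attempts_left = self.max_attempts  # reset counter on success
            return "accept"                  # terminal triggers the transaction
        self.attempts_left -= 1
        return "reject"

card = PinVerifier("4629")
print(card.verify("1234"))  # reject
print(card.verify("4629"))  # accept
```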
5.4.1. Leaving Your Mark
Biometrics is considered not only a more secure way to identify an individual but also a more convenient technique, whereby the individual does not necessarily have to carry an additional device, such as a card. As defined by the Association for Biometrics (AFB), a biometric is “...a measurable, unique physical characteristic or personal trait to recognise the identity, or verify the claimed identity, of an enrollee.” The technique is not a recent discovery. There is evidence to suggest that fingerprinting was used by the ancient Assyrians and Chinese as early as 7000 to 6000 BC (O’Gorman 1999, p. 44). The practice of using fingerprints in place of signatures for legal contracts is hundreds of years old (Shen & Khanna 1997, p. 1364). See table 5.5 on the following page for a history of fingerprint developments. As early as 1901, Scotland Yard introduced the Galton-Henry system of fingerprint classification (Halici et al. 1999, p. 4; Fuller et al. 1995, p. 14). Since that time fingerprints have traditionally been used in law enforcement. Limitations in computing power and storage long prevented automated biometric checking systems from reaching their potential, although the FBI, the UK Home Office and the Paris Police Department began automated fingerprint identification studies as early as 1960 (Halici et al. 1999, p. 5). It was not until the late 1980s, when personal computers and optical scanners became more affordable, that automated biometric checking had an opportunity to establish itself as an alternative to smart card or magnetic-stripe auto-ID technology.
Table 5.5 History of Fingerprint Identification
Year | Name | Achievement
1684 | N. Grew | Published a paper reporting the systematic study on the ridge, furrow, and pore structure in fingerprints, which is believed to be the first scientific paper on fingerprints.
1788 | Mayer | A detailed description of the anatomical formations of fingerprints... in which a number of fingerprint ridge characteristics were identified.
1809 | T. Bewick | Began to use his fingerprint as his trademark, which is believed to be one of the most important contributions in the early scientific study of finger identification.
1823 | Purkinje | Proposed the first fingerprint classification scheme.
1880 | H. Fauld | First scientifically suggested the individuality and uniqueness of fingerprints.
1880 | Herschel | Asserted that he had practiced fingerprint identification for about 20 years.
1888 | Sir F. Galton | Conducted an extensive study of fingerprints. He introduced the minutiae features for single fingerprint classification.
1899 | E. Henry | Established the famous ‘Henry System’ of fingerprint classification, an elaborate method of indexing fingerprints very much tuned to facilitating the human experts performing (manual) fingerprint identification.
1920s | Law Enforcement | Fingerprint identification formally accepted as a valid personal-identification method by law-enforcement agencies and a standard routine in forensics.
1960s | FBI UK & Paris Police | Invested a large amount of effort in developing AFIS.
* This table has been compiled using Jain et al. (1997, pp. 1367-1368).
According to Parks (1990, p. 99), the personal traits that can be used for identification include: “facial features, full face and profile, fingerprints, palmprints, footprints, hand geometry, ear (pinna) shape, retinal blood vessels, striation of the iris, surface blood vessels (e.g., in the wrist), electrocardiac waveforms.” Keeping in mind that the above list is not exhaustive, it is impressive to consider that a human being or animal can be uniquely identified in so many different ways. Unique identification, as Zoreda and Oton (1994, p. 165) point out, is only a matter of measuring a permanent biological trait whose variability exceeds the population size where it will be applied. As a rule however, human physiological or behavioural characteristics must satisfy the following requirements as outlined by Jain et al. (1997, pp. 1365f):
1) universality, which means that every person should have the characteristic;
2) uniqueness, which indicates that no two persons should be the same in terms of the characteristic;
3) permanence, which means that the characteristic should be invariant with time; and
4) collectability, which indicates that the characteristic can be measured quantitatively.
Currently nine biometric techniques are being used or are under investigation in mainstream applications: face, fingerprint, hand geometry, hand vein, iris, retinal pattern, signature, voice print, and facial thermogram. Most of these major techniques satisfy the following practical requirements (Jain et al. 1997, p. 1366):
1) performance, which refers to the achievable identification accuracy, the resource requirements to achieve acceptable identification accuracy, and the working or environmental factors that affect the identification accuracy;
2) acceptability, which indicates to what extent people are willing to accept the biometric system; and
3) circumvention, which refers to how easy it is to fool the system by fraudulent techniques.
5.4.2. Biometric Diversity
Since there are several popular biometric identification devices (see exhibit 5.4), some space must be dedicated to each. While some devices are further developed than others, there is not one single device that fits all applications. “Rather, some biometric techniques may be more suitable for certain environments, depending on among other factors, the desired security level and the number of users... [and] the required amount of memory needed to store the biometric data” (Zoreda & Oton 1994, p. 167f). Dr J. Campbell, a National Security Agency (NSA) researcher and chairman of the Biometrics Consortium, agrees that no one biometric technology has emerged as the perfect technique suitable for all applications (McManus 1996). See table 5.6 for a comparison of biometric technologies based on different criteria.
Exhibit 5.4 Biometric Device Suite: Fingerprint, Hand, Iris and Facial Recognition
Table 5.6 Biometric Comparison Chart
The brief technical description offered below for each major biometric system only takes into consideration the basic manner in which the biometric transaction and verification work, i.e., what criteria are used to recognise the individual, eventuating in the acceptance or rejection of an enrolee. For each technique, verification depends upon the person’s biological or behavioural characteristic having previously been stored as a reference value. This value takes the form of a template, a data set representing the biometric measurement of an enrolee, against which live samples are compared. In summary, fingerprint systems work with the Galton-defined features and ridge information; hand geometry works with measurements of the distances between fingers and joints; iris systems work with the orientation of patterns of the eye; and voice recognition uses voice patterns (IEEE 1997, p. 1343). See table 5.7 for a brief description of various biometric techniques.
Table 5.7 Biometric Techniques and Criteria Used for Verification
Biometric | Description of criteria used to identify an enrolee against a previously stored value
Fingerprint | “[U]sed for both the classification and subsequent matching of fingerprints. Classification is based upon a number of fingerprint characteristics or unique pattern types, which include arches, loops and whorls. A match or positive identification is made when a given number of corresponding features are identified... The analysis stages include: feature extraction, classification, matching” (Cohen 1994, p. 228).
Hand geometry | “[I]ndividual hands have unique features such as finger lengths, skin web opacity and radius of curvature of fingertips. Systems have been produced which measure hand geometries by scanning with photo-electric devices. The hand is positioned on a faceplate and a capacitive switch senses the presence of the hand and initiates scanning. The measurements are then compared to… stored data” (McCrindle 1990, p. 101).
Signature | “Signature verification is a typical example of so-called behavioural features (i.e., biometric data not based on anatomical features). Devices for signature recording range from bar code scanners to digitising pads. Signatures are usually analysed as prints... the input systems instead detect motion, relative trajectories, speed, and/or acceleration of the penlike device given to the user. The precise algorithm used by each manufacturer is generally kept secret” (Zoreda & Oton 1994, p. 170).
Retina | “Retina scan is being used for both access control and for identifying and releasing felons from custody. Retina identification is based on a medical finding in 1935 that no two persons have the same pattern of blood vessels in their retinas. The retina scan device was developed by an ophthalmologist and is used to capture the unique pattern of blood vessels in a person’s eye. The data are converted to an algorithm and then stored in a computer or in a scanner’s memory… For identity verification, an individual would enter a PIN and place his or her eye over the lens in proper alignment for scanning. The reading is compared with the eye signature stored with the PIN in the system. If there is a match the individual is identified” (Steiner 1995, p. 14).
Voice | “Bell Laboratories began work on speaker verification about 1970... The Bell approach operates in the time domain, based on extraction of ‘contours’ from the speech signal. These contours correspond to the time function of: (1) pitch period (2) gain (intensity)... A sentence long utterance is used which is sampled at 10kHz rate... Reference utterances collected at enrolment are combined after time registration (using intensity contour and dynamic programming methods) and are length standardised. Each contour is reduced to 20 equispaced samples for storage as a sequence of 80 means and variances after time registration and length standardisation” (Parks 1990, p. 122).
Face | “Facial recognition is an attempt to make computers mimic human capabilities. Special computation techniques like neural networks are being investigated… current results, however, are far from those of the human brain, since the systems usually lack tolerances in position, lighting, and orientation of the face” (Zoreda & Oton 1994, p. 170f).
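Common to every technique in table 5.7 is the comparison of a live measurement against a stored template, with a threshold deciding acceptance or rejection. The following minimal sketch illustrates that step; the feature vectors, the cosine similarity measure and the threshold are illustrative assumptions, not taken from any of the cited systems:

```python
import math

def similarity(template, sample):
    """Cosine similarity between a stored template and a live sample,
    both represented here as simple feature vectors."""
    dot = sum(t * s for t, s in zip(template, sample))
    norm = (math.sqrt(sum(t * t for t in template))
            * math.sqrt(sum(s * s for s in sample)))
    return dot / norm

def verify(template, sample, threshold=0.95):
    """Accept the enrolee if the live sample is close enough to the template."""
    return "accept" if similarity(template, sample) >= threshold else "reject"

enrolled = [0.61, 0.12, 0.88, 0.34]   # reference value captured at enrolment
live     = [0.60, 0.13, 0.86, 0.35]   # measurement at verification time
print(verify(enrolled, live))  # accept
```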
5.4.2.1. Fingerprint Recognition
If one inspects the epidermis layer of the fingertips closely, one can see that it is made up of ridge and valley structures forming a unique geometric pattern. The ridge endings are given a special name: minutiae. Identifying an individual using the relative position of minutiae and the number of ridges between minutiae is the traditional algorithm used to compare pattern matches (Jain, L. C. et al. 1999). The alternative to the traditional approach is using correlation matching (O’Gorman 1999, pp. 53-54) or the pores of the hand, though the latter is still a relatively new method. Pores have a higher density on the finger than minutiae, which may further increase the accuracy of identifying an individual. The four main components of an automatic fingerprint authentication system are “acquisition, representation (template), feature extraction, and matching” (Jain et al. 1997, p. 1369). To enrol, a user types in a PIN and then places their finger on a glass plate to be scanned by a charge-coupled device (CCD) (see an example in exhibit 5.5 on the previous page). The image is then digitised, analysed and compressed into a storable size. In 1994, Miller (p. 26) stated that the mathematical characterisation of the fingerprint did not exceed one kilobyte of storage space, that the enrolment process took about thirty seconds, and that verification took about one second. Today these figures have been significantly reduced.
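The traditional minutiae-based comparison can be illustrated with a toy matcher. This sketch assumes minutiae are reduced to (x, y) coordinates and ignores alignment, rotation and ridge counts, all of which a real system must handle; the tolerance and required match count are invented:

```python
def match_minutiae(stored, live, tol=2.0, required=3):
    """Count minutiae in the live print that fall within `tol` units of a
    stored minutia; declare a match when enough correspondences are found.
    (A real matcher also aligns the prints and compares ridge counts.)"""
    matched = 0
    for (x1, y1) in stored:
        if any(abs(x1 - x2) <= tol and abs(y1 - y2) <= tol for (x2, y2) in live):
            matched += 1
    return matched >= required

template = [(10, 14), (25, 30), (40, 22), (55, 48)]
candidate = [(11, 13), (24, 31), (41, 21), (70, 80)]
print(match_minutiae(template, candidate))  # True
```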
5.4.2.2. Hand Recognition
Hand recognition differs from fingerprint recognition in that a three-dimensional shape is captured, including the “[f]inger length, width, thickness, curvatures and relative location of these features…” (Zunkel 1999, p. 89). The scanner capturing the images is not concerned with fingerprints or other surface details; rather, it compares geometries by gathering data about the shape of the hand from both the top and side perspectives. The measurements taken are then converted into a template for future comparison. A set of matrices helps to identify plausible correlations between different parts of the hand. The hand geometric pattern requires more storage space than the fingerprint, and it takes longer to verify someone’s identity. Quality enrolment is very important in hand recognition systems due to potential errors. Some systems require the enrolee to have their hand scanned three times, so that readings of the resultant vectors are averaged out and users are not rejected accidentally (Ashbourn 1994, p. 5/5).
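The averaging of multiple enrolment scans described above might be sketched as follows; the measurements, units and acceptance distance are invented for illustration only:

```python
import math

def enrol(scans):
    """Average several enrolment scans into one reference vector, as some
    hand geometry systems do so that users are not rejected accidentally."""
    n = len(scans)
    return [sum(vals) / n for vals in zip(*scans)]

def distance(a, b):
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Three scans of finger lengths/widths (arbitrary illustrative units)
scans = [[71.0, 68.2, 80.1], [70.6, 68.5, 79.8], [70.9, 68.0, 80.4]]
reference = enrol(scans)
live = [70.8, 68.3, 80.0]
print("accept" if distance(reference, live) < 1.0 else "reject")  # accept
```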
5.4.2.3. Face Recognition
While fingerprinting and hand recognition require a part of the body to make contact with a scanning device, face recognition does not. In fact, recognising someone by their appearance is quite natural and something humans have done since time began (Sutherland et al. 1992, p. 29). But identifying people by the way they look is not as simple as it might sound (Pentland 2000, pp. 109-111). People change over time, either through the natural aging process or through changes in fashion (including hair cuts, facial hair, make-up, clothing and accessories) or other external conditions (Miller 1994, p. 28). If humans have trouble recognising each other in certain circumstances, one can only begin to imagine how much the problem is magnified for a computer, which possesses very little intelligence. What may seem like an ordinarily simple algorithm is not; to a computer a picture of a human face is an image like any other, one that is later transformed into a map-like feature vector. Such feature vectors are assessed in terms of their discriminating power, variance tolerance, and data-reduction efficiency. Shen and Khanna (1997, p. 1422) describe these variables:
[t]he discriminating power is the degree of dissimilarity of the feature vectors representing a pair of different faces. The variance tolerance is the degree of similarity of the feature vectors representing different images of the same individual’s face. The data-reduction efficiency is the compactness of the representation.
Engineers use one of three approaches to automate face recognition: eigenface, elastic matching, and neural nets (IEEE 1997, p. 1344). Once the face image has been captured, some pre-processing may take place, depending on the environment. The image is first converted to greyscale and then normalised before being stored or tested. Then the major components are identified and matching against a template begins (Bigun et al. 1997, pp. 127f).
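The greyscale conversion and normalisation steps mentioned above can be sketched in a few lines. The luminance weights below are the conventional ITU-R values; the sample pixels are invented:

```python
def to_greyscale(rgb_pixels):
    """Convert RGB pixels to greyscale using the usual luminance weights."""
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in rgb_pixels]

def normalise(values):
    """Zero-mean, unit-variance normalisation applied before matching."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against a constant image
    return [(v - mean) / std for v in values]

face = [(200, 180, 170), (90, 80, 75), (140, 130, 120)]
vector = normalise(to_greyscale(face))
print([round(v, 3) for v in vector])
```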
5.4.2.4. Iris Recognition
The spatial patterns of the iris are highly distinctive. Each iris is unique (like the retina), and some have reckoned automated iris recognition to be second in accuracy only to fingerprints. According to Wildes (1997, p. 1349) these claims can be substantiated by clinical observations and developmental biology. The iris is “a thin diaphragm stretching across the anterior portion of the eye and supported by the lens” (IEEE 1997, p. 1344). The first step in the process of iris identification is to capture the image. Second, the image must be cropped to contain only the localised iris, discarding any excess. Third, the iris pattern must be matched, either against the image stored on the candidate’s card or against the candidate’s image stored in a database. Between the second and third steps, processing occurs to develop an iris feature vector. This feature vector is so rich that it contains more than 400 degrees of freedom, or measurable variables. Most algorithms need to use only half of these variables, and searching an entire database can take mere milliseconds with an incredible degree of accuracy (Williams 1997, p. 23). Matching algorithms are applied to produce scale, shift, rotation and distance measurements to determine exact matches. Since iris recognition systems are non-invasive and non-contact, extra protections have been invented to counter the possibility that a still image is used to fool the system. For this reason, scientists have developed a method to monitor the constant oscillation of the diameter of the pupil, thus confirming that a live specimen is being captured (Wildes 1997, p. 1349).
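Many deployed iris systems (following Daugman's approach, which the sources above do not name explicitly) encode the iris feature vector as a binary code and compare codes by their normalised Hamming distance, declaring a match below a threshold. A toy sketch with invented 10-bit codes (real iris codes run to a couple of thousand bits, and the 0.32 threshold here is only illustrative):

```python
def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

stored = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # code from the enrolled template
live   = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # code from the live capture
# A low normalised Hamming distance indicates the same iris.
print("match" if hamming_distance(stored, live) < 0.32 else "no match")  # match
```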
5.4.2.5. Voice Recognition
The majority of research and development dollars for biometrics has gone into voice recognition systems. Due to its attractive characteristics, telecommunications manufacturers and operators like Nortel and AT&T, along with a number of universities, have allocated large amounts of funds to this cause. One of the best-known voice recognition implementations is Sprint’s Voice FONCARD, which runs on the Texas Instruments voice verification engine. Of all the biometric technologies, consumers consider voice recognition the most user-friendly. The two major types of voice recognition systems are text-dependent and text-independent. Voice recognition works by extracting feature vectors from short intervals of the speech waveform, each typically spanning 10 to 30 ms. The sequence of feature vectors is then compared and pattern-matched against existing speaker models (Campbell 1999, p. 166).
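The extraction of 10-30 ms analysis frames described by Campbell can be sketched as follows; the sample rate, frame length and step size are illustrative choices, and feature extraction from each frame is left out:

```python
def frame_signal(samples, sample_rate=8000, frame_ms=20, step_ms=10):
    """Split a speech waveform into short overlapping analysis frames
    (10-30 ms each), from which feature vectors would be extracted."""
    frame_len = sample_rate * frame_ms // 1000
    step = sample_rate * step_ms // 1000
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, step)]

one_second = list(range(8000))          # stand-in for 1 s of audio samples
frames = frame_signal(one_second)
print(len(frames), len(frames[0]))  # 99 160
```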
5.4.3. Is There Room for Error?
While biometric techniques are considered to be among the most secure and accurate automatic identification methods available today, they are by no means perfect systems. False accept rates (FAR) and false reject rates (FRR) for each type of biometric are measures that can be used to determine the applicability of a particular technique to a given application. Some biometric techniques may by their very nature exclude persons with disabilities, for instance fingerprint and hand recognition in the case of those who do not possess fingers or hands. In the case of face recognition systems, one shortcoming is that humans can disguise themselves and thereby assume a different identity (Jain, A. et al. 1999, p. 34). Other systems may be duped by false images or objects purporting to be the hands or irises of the actual enrolee (Miller 1994, p. 25). In the case of the ultimate unique code, DNA, identical twins cannot be distinguished because they share an identical pattern (Jain, A. et al. 1999, p. 11). Even voice recognition systems are error-prone. Some problems that Campbell (1997, p. 1438) identifies include: “misspoken or misread prompted phrases, extreme emotional states, time varying microphone placement, poor or inconsistent room acoustics, channel mismatch, sickness, aging.” Finally, the environment in which biometric recognition systems work must be controlled to a certain degree to ensure low FAR and FRR. To overcome some of these shortcomings in highly critical applications, multimodal biometric systems have been suggested. Multimodal systems use more than one biometric to increase fault tolerance, reduce uncertainty and reduce noise (Hong & Jain 1999, pp. 327-344). Automated biometric checking systems have dramatically changed the face of automatic identification.
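The FAR and FRR mentioned above fall directly out of the two score distributions once a decision threshold is chosen: raising the threshold lowers the FAR but raises the FRR, and vice versa. A minimal sketch with invented scores:

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """False accept rate: fraction of impostor scores passing the threshold.
    False reject rate: fraction of genuine scores failing it."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

impostors = [0.10, 0.22, 0.35, 0.41, 0.55]   # scores from impostor attempts
genuine   = [0.62, 0.71, 0.48, 0.90, 0.83]   # scores from genuine attempts
far, frr = far_frr(impostors, genuine, threshold=0.50)
print(far, frr)  # 0.2 0.2
```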
5.5. RF/ID Tags and Transponders
5.5.1. Non-contact ID
Radio frequency identification (RF/ID) in the form of tags or transponders is a means of auto-ID that can be used for tracking and monitoring objects, both living and non-living. One of the first applications of RF/ID was in the 1940s within the US defence forces, where transponders were used to differentiate between friendly and enemy aircraft (Ollivier 1995, p. 234). Since that time, transponders continued to be used mainly by the aerospace industry (or in other niche applications) until the late 1980s, when the Dutch government voiced its requirement for a livestock tracking system. The commercial direction of RF/ID changed at this time and the uses for RF/ID grew manifold as manufacturers realised the enormous potential of the technology. Before RF/ID, processes requiring the check-in and distribution of items were mostly done manually. Gerdeman (1995, p. 3) highlights this with the following real-life example: “[e]ighty thousand times a day, a longshoreman takes a dull pencil and writes on a soggy piece of paper the ID of a container to be key entered later… This process is fraught with opportunity for error.” Bar code systems in the 1970s helped to alleviate some of the manual processing, but it was not until RF/ID became more widespread in the late 1990s that even greater increases in productivity were experienced. RF/ID was even more effective than bar code because it did not require the items being checked to be stationary or in a particular set orientation. RF/ID limits the amount of human intervention required to a minimum, and in some cases eliminates it altogether.
The fundamental electromagnetic principles that make RF/ID possible were discovered by Michael Faraday, Nikola Tesla and Heinrich R. Hertz prior to 1900.
From them we know that when a group of electrons or current flows through a conductor, a magnetic field is formed surrounding the conductor. The field strength diminishes as the distance from the wire increases. We also know that when there is a relative motion between a conductor and a magnetic field a current is induced in that conductor. These two basic phenomena are used in all low frequency RF/ID systems on the market today (Ames 1990, p. 3-2).
Ames (1990, p. 3-3) does point out, however, that RF/ID works differently to normal radio transmission: RF/ID uses the near field effect rather than plane wave transmission. This is why distance plays such an important role in RF/ID: the shorter the range between the reader and the RF device, the greater the precision of identification. The two most common RF/ID devices today are tags and transponders, but since 1973 (Ames 1990, p. 5-2) other designs have included contactless smart cards, wedges (plastic housings), disks and coins, glass transponders (that look like tubes), keys and key fobs, tool and gas bottle identification transponders, even clocks (Finkenzeller 2001, pp. 13-20). See exhibit 5.6 below for some example RF/ID devices manufactured by Deister Electronics. RF/ID has taken advantage of numerous existing innovations and further developed them to satisfy specific application needs.
5.5.2. Active versus Passive Tags and Transponders
An RF/ID system has several separate components. It contains a reusable programmable tag which is placed on the object to be tracked, a reader that captures information contained within the tag, an antenna that transmits information, and a computer which interprets or manipulates the information (Gerdeman 1995, pp. 11-25; Schwind 1990, p. 1-27). Gold (1990, p. 1-5) describes RF tags as:
[t]iny computers embedded in a small container sealed against contamination and damage. Some contain batteries to power their transmission; others rely on the signal generated by the receiver for the power necessary to respond to the receiver’s inquiry for information. The receiver is a computer-controlled radio device that captures the tag’s data and forwards it to a host computer.
The RF/ID tag has one major advantage over bar codes, magnetic-stripe cards, contact smart cards and biometrics: the wearer of the tag need only pass by a reading station and a transaction will take place, even if the wearer attempts to hide the badge (Sharp 1990, p. 1-15). Unlike light, low-frequency (or medium-to-high frequency) radio waves can penetrate all solid objects except those made of metal. Thus the wearer does not need to have direct physical contact with a reader.
Transponders, unlike tags, are not worn on the exterior of the body or part. On humans or animals they are injected into the subcutaneous tissue. Depending on their power source, transponders can be classified as active or passive. Whether a system uses an active or passive transponder depends entirely on the application. Geers et al. (1997, p. 20f) suggest that the following be taken into consideration when deciding which type of transponder to use.
When it is sufficient to establish communication between the implant and the external world on a short-range basis, and it is geometrically feasible to bring the external circuitry a very close distance from the implant, the passive device is suitable... On the other hand choosing for an active system is recommended when continuous monitoring, independent transmission or wider transmission ranges are required. In particular for applications where powering is of vital importance (e.g. pacemakers), only the active approach yields a reliable solution.
Active transponders are usually powered by a battery that operates the internal electronics (Finkenzeller 2001, p. 13). Some obvious disadvantages of active transponders include the need to replace batteries after a period of use, the additional weight batteries add to the transponder unit, and their cost. A passive transponder, on the other hand, has no internal power source and is triggered by interrogation from a reading device which emits radio frequency (RF) power. For this reason, passive transponders cost less and can literally last forever. Both active and passive transponders share the same problem when it comes to repair and adjustment: inaccessibility. The transponder requires that adjustments and repairs are “operated remotely and transcutaneously through the intact skin or via automatic feedback systems incorporated into the design” (Goedseels et al. 1990, quoted in Geers 1997, p. xiii).
5.5.2.1. RF/ID Components Working Together
Electronic tags and transponders are remotely activated using a short-range, pulsed echo principle at around 150 kHz. Once a tag or transponder moves within a given distance of the power transmitter coil (antenna), it is usually requested to transmit information by activating the transponder circuit. The transponder may be read only, one-time programmable (OTP) or read/write. Regardless of the type, each contains a binary ID code which, after encoding, modulates the echo so that information is transmitted to a receiver using the power of an antenna (Curtis 1992, p. 2/1). The whole procedure is managed by a central controller in the transmitter. Read only tags contain a unique code between 32 and 64 bits in length. Read/write tags support from a few hundred bits up to typically 1 kbit, although larger memories are possible. The ID field is usually transmitted from a tag with header and check sum fields for validation, in case data is corrupted during transmission.
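The framing of an ID with header and check sum fields can be sketched as follows. The header value and the simple modular check sum are illustrative assumptions; real tags typically use a CRC:

```python
HEADER = 0xA5  # illustrative start-of-frame marker, not from any standard

def checksum(payload):
    """Simple modular check sum over the ID bytes (real tags may use a CRC)."""
    return sum(payload) & 0xFF

def build_frame(tag_id: int) -> list:
    """Frame a 32-bit tag ID as header + 4 ID bytes + check sum byte."""
    id_bytes = [(tag_id >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return [HEADER] + id_bytes + [checksum(id_bytes)]

def validate(frame: list):
    """Return the tag ID if header and check sum are intact, else None."""
    if frame[0] != HEADER or checksum(frame[1:5]) != frame[5]:
        return None  # data corrupted during transmission
    return (frame[1] << 24) | (frame[2] << 16) | (frame[3] << 8) | frame[4]

frame = build_frame(0x00C0FFEE)
assert validate(frame) == 0x00C0FFEE
frame[2] ^= 0x01                      # simulate a corrupted bit in transit
print(validate(frame))  # None
```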
Transmission is also a vital part of any RF/ID system. When information is transmitted by radio waves it must be transformed into an electromagnetic radiation form. According to Geers et al. (1997, p. 8),
[e]lectromagnetic radiation is defined by four parameters: the frequency, the amplitude of the electric field, the direction of the electric field vector (polarisation) and the phase of the wave. Three of these, namely amplitude, frequency and phase, are used to code the transmitted information, which is called modulation.
Two types of modulation are used: analogue and digital. Common encoding techniques for the former include pulse amplitude modulation (PAM) and pulse width modulation (PWM); for the latter, pulse coded modulation (PCM) is common. According to Finkenzeller (2001, pp. 44f) digital data is transferred using bits as modulation patterns in the form of ASK (amplitude shift keying), FSK (frequency shift keying) or PSK (phase shift keying). The achievable bit rate is determined by the bandwidth available and the time taken for transfer. Error detection algorithms like parity or cyclic redundancy checks (CRC) are vital since radio communication is susceptible to interference. It can never be taken for granted that the message transmitted has not been distorted during the transmission process, but with error detection implemented into the design, “accuracy approaches 100 percent” (Gold 1990, p. 1-5).
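As an illustration of the error detection mentioned above, the following is a minimal sketch of a bitwise CRC-16 computation. The CCITT polynomial 0x1021 and the 0xFFFF initial value are common choices in RF air-interface protocols (for instance in the ISO 14443 family), but they are assumptions here rather than a detail drawn from the sources cited:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT: shift each message bit through a 16-bit
    register, XORing in the generator polynomial on overflow."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

payload = b'\x12\x34\x56\x78'
tag_crc = crc16_ccitt(payload)

# the receiver recomputes the CRC; any corrupted bit changes the result
assert crc16_ccitt(payload) == tag_crc
assert crc16_ccitt(b'\x13\x34\x56\x78') != tag_crc
```

A tag appends the CRC to its transmitted frame; the reader recomputes it over the received payload and discards the frame if the two values disagree.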
5.6. Evolution or Revolution?
When auto-ID technologies first made their presence felt in retail and banking they were considered revolutionary innovations. They made sweeping changes to the way people worked, lived, and interacted with each other. Before their inception, both living and nonliving things were identified manually; auto-ID devices automated the identification process, allowing for an increase in the level of accuracy and reliability. Supermarket employees could check out non-perishable items just by swiping a bar code over a scanner, and suppliers could distribute their goods using unique codes. Consumers could withdraw cash without walking into a bank branch and purchase goods at the point-of-sale (POS). And subsequently banks no longer required the same number of staff to serve customers directly. Auto-ID enacted radical change. This cluster of related innovations differed considerably from any others. Though most auto-ID technologies had their foundations in the early 1900s, all of them required other breakthroughs in system components to take place before they could proliferate.
Up until the 1970s, consumers were largely disconnected from computer equipment. About the most sophisticated household item was the television set. While ordinary people knew computers were changing the face of business, their first-hand experience of these technologies was limited. Mainframe computers at the time were large, occupying considerable floor space, and there was a great mystique surrounding the capabilities of these machines. One must remember that the personal computer did not become a common household item until the mid-1980s. Meanwhile, bar codes and scanner equipment were being deployed to supermarket chains and credit card companies were distributing magnetic-stripe cards in mass mail-outs. Consumers were encouraged to visit automatic teller machines (ATMs), and for many this was their first encounter with some form of computer. No matter how elementary it may seem to us today, typing a PIN and selecting the “withdraw”, “amount”, and “enter” buttons was a novel experience for first-time users who had most likely never touched a terminal keypad before. By the time the 1990s had arrived, so had other technologies like the laptop, mobile phone and personal digital assistant (PDA). The range of available auto-ID devices had by now grown in quantity, shape and sophistication, including smart cards that could store more information, biometric techniques that ensured an even greater level of security, and wireless methods such as radio-frequency identification tags and transponders that required little human intervention. By this time, consumers were also more experienced users. Auto-ID had reached ubiquitous proportions in a period of just over thirty years.
The changes brought about by auto-ID were not only widespread but self-propelling in nature. No sooner had one technology become established than another was seeking entry into the market. The technical drawbacks of magnetic-stripe cards, for instance, led to the idea that smart cards may be more suitable for particular applications. A pattern of migration from one technology to the other seemed logical until biometric techniques increased security not only in magnetic-stripe cards but in bar code cards as well. There was also the movement from contact cards to contactless cards, and from bar codes to RF/ID, but by no means did the technologies make one another obsolete; rather, they spurred on even more research and development and an even greater number of new applications and uses (Michael 2003, pp. 135-152). Diagram 5.1 below shows the different types of changes that occurred between auto-ID devices. The three main flows that are depicted in the diagram are migration, integration and convergence.
The recombination of existing auto-ID techniques flourished in the 1990s with integrated cards and combinatory reader technologies. These new product innovations indicated that coexistence of auto-ID devices was not only possible but important for the success of the industry at large. A few techniques even converged, as was the case with contactless smart cards and RF/ID systems (see exhibit 5.7). Auto-ID had proven it maintained a driving force of its own while still piggybacking on breakthroughs in microchip processing speeds, storage capacity, software programs, encryption techniques, networks and other peripheral requirements that are generally considered auto-ID system enablers.
Now, having said that auto-ID belonged to that cluster of IT&T innovations that can be considered revolutionary, the process of innovation was in fact evolutionary. There is no doubt that auto-ID techniques were influenced by manual methods of identification, whether labels that were stuck onto objects, plain or embossed cards, the comparing of signatures, or methods for fingerprint pattern matching. Early breakthroughs in mechanical calculators, infrared, electro-magnetic principles, magnetic tape encoding and integrated circuits also aided the advancement of auto-ID technologies. Allen and Kutler (1996, p. 11) called this the “evolving computing” phenomenon. McCrindle (1990, ch. 2) even discussed the “evolution of the smart card”, tracing the historical route all the way back to French philosopher Blaise Pascal.
In conclusion, the development of auto-ID followed an evolutionary path, yet the technologies themselves were revolutionary when considered as part of that cluster known as information technologies: from devices that one could carry to devices that could be implanted in the body. The advancement of auto-ID technology since its inception has been so remarkable that even the earliest pioneers would have found the changes that have taken place since the 1970s inconceivable. For the first time, service providers could put in place mechanisms to identify their customer base and also to collect data on patterns of customer behaviour and product/services traffic. Mass market applications once affected or ‘infected’ by auto-ID continue to push the bounds of what this technology can or cannot do. Technology has progressed from purely manual techniques to automatic identification techniques. Furthermore, auto-ID continues to grow in sophistication towards foolproof means of identification. The above auto-ID cases show that major development efforts continue both for traditional and newer technologies. Even the humble bar code has been resurrected as a means of secure ID, revamped with the aid of biometric templates stored using a 2D symbology.
In addition, the lessons learned from the widespread introduction of each distinct technique are shaping the trajectory of the whole industry. For instance, the smart card has not neglected to take advantage of other auto-ID techniques such as biometrics and RF/ID. Thus, new combinations of auto-ID technologies are being introduced as a result of a cross-pollination process in the industry at large. These new innovations (which could be classified as either mutations or recombinations) are acting to thrust the whole industry forward. The importance of this chapter is that it has established that auto-ID is more than just bar code and magnetic-stripe card and that coexistence and convergence of auto-ID technologies is occurring (see ch. 7 for the selection environment of auto-ID). And now, having set the historical context and offered a brief description of the evolution of each device, the dynamics of the auto-ID innovation system will be explored using the systems of innovation (SI) conceptual framework.
 For example, other technologies like optical character recognition (OCR), magnetic-ink character recognition (MICR), laser cards, optical cards, infrared tags and microwave tags will not be studied here.
 According to Cohen (1994, p. 55) “...bar code technology is clearly at the forefront of automatic identification systems and is likely to stay there for a long time.” Palmer (1995, p. 9) also writes that “bar code has become the dominant automatic identification technology”.
 This enabled programs and peripheral devices (complementary innovations) to be built to support bar codes for the identification and capture of data. A bar code can only work within a system environment. Bar code labels in themselves are useless.
 See also Palmer (1995, ch. 3), ‘History of Bar Code’.
 Each symbology has benefits and limitations. It is important for the adopter of bar code technology to know which symbologies are suitable to their particular industry. Standards associations and manufacturers can also help with a best-fit recommendation (Grieco et al. 1989, pp. 43-45). Other considerations may include: what character sets are required by the company, what the required level of accuracy of the symbology should be, whether the symbology allows for the creation and printing of a label (in terms of density), and whether the symbology has specifications that make it intolerant to particular circumstances. Sometimes there may also be pressure by industry groups for users to conform to certain symbologies. As Cohen (1994, p. 99f) points out, there are some bodies that have created industrial bar code standards such as: ODETTE (Organisation for Data Exchange by Tele Transmission in Europe) that adopted Code 39; IATA (International Air Transport Authority) that adopted Interleaved 2 of 5; HIBCC (Health Industry Business Communication Council) that adopted Code 39 as well as Code 128; and LOGMARS (Logistic Applications of Automated Marking and Reading Symbols) that has also adopted Code 39.
 For an in depth discussion on symbologies see LaMoreaux (1998, ch. 4), Palmer (1995, ch. 4), Collins and Whipple (1994, ch. 2) and Greico et al. (1989, ch. 2). Palmer especially dedicates whole appendices to the most common specifications and their characteristics.
 Each bar code differs based on the width of the bars. Of particular importance is the width of the narrowest bar, called the ‘X dimension’ (usually measured in millimetres), and the number of bar widths. Essentially, this defines the character width: the number of bars needed to encode data.
 Interleaved 2 of 5 is based on a numeric character set only. Two characters are paired together using bars. The structure of the bar code is made up of a start quiet zone, start pattern, data, stop pattern and trailing quiet zone. According to Palmer (1995, p. 29) it is mainly used in the distribution industry.
 Code 39 is based on a full alphabet, full numeric and special character set. It consists of a series of symbol characters represented by five bars and four spaces. Each character is separated by an intercharacter gap. This symbology was widely used in non-retail applications.
 The bar code is made up of light and dark bars representing 1s and 0s. The structure of the bar code includes three guard bars (start, centre and stop), and left and right data. The bar codes can be read in an omni-directional fashion as well as bi-directionally. Allotted article numbers are only unique identification numbers in a standard format and do not classify goods by product type. Like the Interleaved 2 of 5 symbology, EAN identification is exclusively numerical. The structure of the EAN and U.P.C. includes (i) the prefix number, an organisation number that has been preset by EAN, and (ii) the item identification, a number that is given to the product by the country-specific numbering organisation. The U.P.C., relevant only to the U.S. and Canada, does not use the prefix codes as EAN does but denotes the prefix by 0, 6, or 7.
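 The EAN-13 article number described above also ends in a check digit computed from the first twelve digits, with weights alternating 1 and 3 from the left; this much is standard to the symbology, though the twelve-digit example body below is arbitrary:

```python
def ean13_check_digit(first12: str) -> int:
    """EAN-13 check digit: weight the first twelve digits 1,3,1,3,...
    from the left, then take the amount needed to reach the next
    multiple of ten."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# '590123412345' is an arbitrary example body; appending the check
# digit yields the full 13-digit article number 5901234123457.
assert ean13_check_digit('590123412345') == 7
```

A scanner recomputes this digit after decoding the bars and rejects the read if it disagrees with the thirteenth digit, which is how substitution errors at the check-out are caught.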
 According to Palmer (1995, p. 37), Code 128 has been increasingly adopted because it is a highly-dense alphanumeric symbology that allows for variable length and multiple element widths.
 With the introduction of the Data Matrix symbology even more information could be packed onto a square block. Since the symbology is scalable, thousands of characters can be fitted on a block. Data Matrix was a proprietary technology until it became public in 1994.
 As opposed to the light and dark bars of the EAN symbology, MaxiCode is a matrix code made up of an array of 866 interlocking hexagonal modules. On each 3cm by 3cm square block, about 100 ASCII characters can be held. It was developed by the United Parcel Service for automatic identification of packages.
 Like MaxiCode, PDF417 is a two-dimensional symbology, though it is stacked rather than a matrix. Each symbol character is 17 modules wide, comprising 4 bars and 4 spaces. The structure allows for between 1000 and 2000 characters per symbol.
 Collins and Whipple (1994, p. 41) suggest a maximum of 50 characters when using linear symbologies.
 According to Palmer (1995, p. 31) Codabar was developed in 1972 and is used today in libraries, blood banks and certain parcel express applications. Collins and Whipple (1994, p. 28) do not consider Codabar a sophisticated bar code symbology, though it has served some industry groups well for decades.
 For the “ten commandments of bar coding”, see Meyer’s (2000) feature article in the August edition of Frontline News.
 Certainly bar codes on cards were being used early on but they were far less secure than magnetic stripe cards and therefore not adopted by financial institutions. Magnetic-stripe cards however became synonymous with the withdrawal of cash funds and the use of credit which acted to heighten the importance of the auto-ID technology.
 Russell was a creative thinker who later went on to become the chairman of Visa International in the 1980s.
 The bar code on the same card can be advantageous to the card issuer. For instance, in an application for a school the card can serve a multifunctional purpose: the bar code can be used for a low-risk application such as the borrowing of books, the magnetic stripe for holding student numbers, and the embossing as a back-up if on-line systems fail.
 The advantage of dye sublimation over thermal transfer is the millions of colours that can be created by varying heat intensity. If the operator requires colour on both sides of the card, one side is coloured before the other, but this is expensive.
 The magnetic stripe, typically gamma ferric oxide, “...is made of tiny needle-shaped particles dispersed in a binder on a flexible substrate” (Jose & Oton 1994, p. 16).
 An important concept in understanding how tracks are triggered to change polarity is coercivity (measured in Oersteds, Oe). This can be defined as the amount of magnetic energy required to change the stripe's polarisation, and is broadly classified as low (about 300 Oe) or high (3000-4000 Oe). Most ATM cards are said to have low coercivity (loco) while access control cards have high coercivity (hico) to protect against accidental erasure. This is one reason why embossed account numbers still appear on ATM or credit cards: if the card has been damaged, information can be manually retrieved and identified (from the front of the card) while the replacement card is despatched.
 “The magnetic media is divided into small areas with alternating polarisation; the first area has North/South polarisation, and the next has South/North, etc. In order to record each “0” and “1” bit in this format, a pattern of “flux” (or polarity) changes is created on the stripe. In a 75bpi (bits per inch) format, each bit takes up 1/75th (0.0133) of an inch. For each 0.0133” unit of measure, if there is one flux change, then a zero bit is recorded. If two flux changes occur in the 0.0133” area, then a one bit is recorded.” See http://www.mercury-security.com/howdoesa.htm (1998).
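 The flux-change scheme quoted above amounts to a simple decoding rule: one flux change per bit cell is a 0, two changes a 1 (two-frequency, or ‘F2F’, recording). A sketch of the reader-side logic, taking as input a per-cell count of flux changes (an illustrative abstraction of what the read head electronics would actually deliver), might look like:

```python
def decode_f2f(flux_changes_per_cell):
    """Decode F2F (Aiken biphase) recording: one flux change within a
    bit cell encodes a 0, two flux changes encode a 1."""
    bits = []
    for changes in flux_changes_per_cell:
        if changes == 1:
            bits.append(0)
        elif changes == 2:
            bits.append(1)
        else:
            raise ValueError(f"invalid bit cell: {changes} flux changes")
    return bits

assert decode_f2f([1, 2, 2, 1]) == [0, 1, 1, 0]
```

Because each bit cell contains at least one flux change, the scheme is self-clocking: the reader can recover the bit boundaries from the stripe itself, regardless of the speed at which the card is swiped.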
 The read head has a small surface window (known as the field of view) that comes into direct contact with the magnetic stripe. When a card is passed through or inserted into a reader, the read head generates a series of electrical pulses. These alternating voltages correspond to alternating polarities on the magnetic stripe. For each bit length, the reader counts the changes in polarity, which are then decoded by the reader's electronics to recover the information encoded on the card.
 Jose and Oton (1994, p. 20) explain in detail the primary methods of magnetic-stripe fraud. These include: theft, counterfeit, buffering, and skimming. See also Watson (2002).
 Ferrari et al. (1998) dedicate a whole chapter to the card selection process in their IBM Redbook (ch. 4). Card selection considerations should include the card type, interface method, storage capacity, card operating functions, standards compliance, compatibility issues and reader interoperability, security features, chip manufacturers, card reliability and life expectancy, card material, quantity and cost. It is interesting to note that even within smart cards there are so many options. Taken within the wider context of other auto-ID technologies, the selection process becomes even more complex.
 Other important standards related to smart card include: ISO 7811 parts 1-6, ID Cards; ISO 7816 parts 1-8, contact IC cards; ISO 10536 parts 1-4, close coupling cards; and ISO 14443 parts 1-4, remote coupling cards. For these and other supporting standards for smart cards see Ferrari et al. (1998, p. 3).
 The standard size shared by magnetic-stripe and smart cards gave rise to the possibility of card migration.
 For an in depth discussion on smart cards standards and specifications, see Ferrari et al. 1998 ch. 3.
 It is believed that the first scientific studies investigating fingerprints were conducted some time in the late sixteenth century (Lee & Gaensslen 1994).
 See Withers (2002) and Jain, A. et al (2002) for an overview of biometrics. For emerging biometric techniques see Lockie (2000).
 Such things as a person’s voice, style of handwriting and DNA are just a few other common unique identifiers. Even the Electroencephalogram (EEG) can be used as a biometric as proven by Paranjape et al. (2001, pp. 1363-1366).
 See Greening et al. (1995, pp. 272-278) for the use of handwriting identification for forensic purposes.
 See Ferrari et al. (1998, p. 23) for another comparison of biometrics and also Hawkes (1992, p. 6/4).
 For a thorough technical overview on the topic of biometrics see Bigun et al. (1997).
 See Meenen and Adhami (2001, pp. 33-38) for fingerprint security.
 For a neural network approach to fingerprint subclassification see Drets and Liljenstrom (1999, pp. 113-134), and for the Gabor filter-based method see Hamamoto (1999, pp. 137-151).
 Facial recognition usually refers to “…static, controlled full-frontal portrait recognition” (Hong & Jain 1998, p. 1297).
 See also Weng and Swets (1999, p. 66); Howell (1999, p. 225); and Chellappa et al. (1995, pp. 705-740).
 For a more detailed description of face recognition see Bigun et al. (1997, pp. 125-192), “face-based authentication”. For different types of approaches to face recognition see also Weng and Swets (1999, pp. 69-77), Howell (1999, pp. 227-245) and Jain, L. C. et al. (1999, ch. 8- ch. 13).
 According to Williams (1997, p. 24) the possibility that two irises would be identical by random chance is approximately 1 in 10⁵².
 This can be done using a normal digital camera with a resolution of 512 dpi (dots per inch). The user must be a predetermined distance from the camera (Jain, A. et al. 1999, p. 9).
 See Camus et al. (1998, pp. 254-255) and Daugman (1999, pp. 103-121).
 Since there are literally billions of telephones in operation globally, voice recognition can be used as a means to increase operator revenues and decrease costs. See Miller (1994, p. 30).
 For telecoms applications of voice recognition see Boves and Os (1998, pp. 203-208).
 Markowitz (2001) writes that “[d]espite the dot.com crash, 2001 has been a very good year for [speaker verification] vendors, with the number of pilots and actual deployments increasing”. See also Markowitz (2000).
 See Furui (2001, pp. 631-636) for progress toward ‘flexible’ speech recognition.
 Carter and Nixon (1990, p. 8/4) call this act forgery. Putte (2001) discusses the challenge for a fingerprint scanner to recognise the difference between the epidermis of the finger and dummy material (like silicone rubber). See also http://news.bbc.co.uk/1/hi/sci/tech/1991517.stm (2002).
 Another issue with voice recognition systems is language. Some countries, like Canada, have populations that speak several languages, in this instance English and French.
 As Finkenzeller rightly underlines, “[t]he omnipresent barcode labels that triggered a revolution in identification systems some considerable time ago, are being found to be inadequate in an increasing number of cases. Barcodes may be extremely cheap, but their stumbling block is their low storage capacity and the fact that they cannot be reprogrammed” (Finkenzeller 2001, p. 1). See also Hind (1994, p. 215).
 For a detailed explanation of fundamental RF operating and physical principles see Finkenzeller (2001, ch. 3-4, pp. 25-110). See also Scharfeld (1998, p. 9) for a brief history of RF/ID.
 RF/ID espouses different principles to smart cards but the two are closely related according to Finkenzeller (2001, p. 6). RF/ID systems can take advantage of contactless smart cards transmitting information by the use of radio waves.
 The size and shapes of tags and transponders vary. Some more common shapes include: glass cylinders typically used for animal tracking (the size of a grain of rice), wedges for insertion into cars, circular pills, ISO cards with or without magnetic stripes, polystyrene and epoxy discs, bare tags ready for integration into other packaging (ID Systems 1997, p. 4).
 Herbert Simon predicted in 1965 that by 1985 “machines [would] be capable of doing any work a man [could] do” (Simon 1965, quoted in Kurzweil 1997, p. 272).