6. Five Case Studies Analysing the Dynamics of the Auto-ID Innovation System
The literature reviewed in chapter two determined that the auto-ID industry could be defined as a technological system (TS), and chapter five characterised the development of representative auto-ID techniques. This chapter will explore the dynamics of the auto-ID innovation process: those drivers and inhibitors that set the direction of the whole industry on a particular course. First the innovation process of individual technologies will be examined in isolation, then within the notion of a larger system of innovation defined as auto-ID. The organisational, institutional, economic, regulatory and social determinants of auto-ID innovation will be presented. The analysis will put forward a holistic and interdisciplinary view of how an innovation comes about and of the complex process that takes place through stakeholder interaction and feedback within the technology system. Patterns emerging from these dynamic interactions act as a guidepost for future developments. Understanding these dynamics better can lead to predicting future possibilities more accurately, because history matters in the SI framework.
6.1. Definition of Stakeholders
Before endeavouring to investigate the complex innovation process of auto-ID, the relevant stakeholders of auto-ID systems must be identified. The stakeholders can broadly be categorised into two groups, comprising those involved:
(i) in the invention, innovation and supply of auto-ID technological system components such as manufacturers, universities and government research bodies; and
(ii) in the provision of services that require customers to use auto-ID technological system components such as issuers, merchants and consumers.
While the manufacturers (i.e. the firm) are the focal point of the thesis, the interactions between manufacturers and other stakeholders are paramount in this study. Diagram 6.1 on the following page is divided into three parts. The customer stakeholders include consumers, issuers and merchants; the technology provider stakeholders include manufacturers, system integrators and value-added resellers; and finally the service provider stakeholders, the owners of the operation, act to bring the two former groups together. Both the customers and technology providers have an infrastructure within which to operate. Customers use a physical infrastructure in the way of information technology and telecommunications (IT&T) to carry out transactions, and technology providers use a knowledge infrastructure that includes standards committees, university researchers, regulators and others. Essentially, organisations are those entities that are consciously formed with an explicit purpose, while institutions are those that form spontaneously to regulate interaction between people. The economic relationships that exist between organisations and institutions can be described as physical and knowledge infrastructures. The interplay between all these different stakeholders forms the technology system specific to auto-ID.
Noticeable in diagram 6.1 are the feedback loops inherent in the auto-ID innovation process. Without collaboration a given product innovation will not reach its potential and will probably fade away to find a resting place in the mass of great ideas that were never realised. For example, if standards committees do not work with manufacturers to understand their requirements and learning experiences, then a default standard will most likely not be adhered to. With each new major invention, a system is formed giving it the support and momentum it requires to follow a particular path. For instance, firms did not just happen to invent bar codes and then make commodity suppliers use them. There had to be some degree of interaction between the relevant actors and, more importantly, some mutual agreement on how to go forward. For example, suppliers of the technology had to make attempts to engage merchants, but via their commodity suppliers first. Taking the retail metaphor a little further, the questions along the path of bar code innovation may have included some of those listed in table 6.1. All these questions can be addressed by the SI dimensions, as will be shown throughout the rest of this chapter. The investigation continues to follow a chronological order to draw out preceding developments that may have influenced or impacted (either positively or negatively) on succeeding innovations.
Table 6.1 Bar Code Questions Along the Innovation Path
§ How would the introduction of bar code impact consumers and businesses?
§ Would these entities be willing to change?
§ Would they trust that the technology is accurate and would actually increase productivity?
§ How would this impact on employment opportunities, especially in the retail sector?
§ How much training would be required to re-skill staff to use bar code-related equipment?
§ What about the new skills required to implement, operate and maintain bar code systems?
§ Which bar code symbology should be used? Which configuration is future-proof, if any?
§ How much would the technology cost? And what peripheral equipment would be required?
§ Which bar code numbers should be allocated to products? What about globally?
§ Who would co-ordinate the activity of unique product coding?
§ Could firms produce the amount of bar codes and bar code systems required by enterprise?
§ What incremental innovations could be made to bar codes for different application areas?
§ What results have been published about other auto-ID innovations like bar code?
§ What are the current research activities happening in the field?
§ How would data collected by the bar coding of products related to the check-out process be used by store owners? What are the legal implications if this information is misused?
6.2. Bar Code: The Auto-ID Pioneer Technology
6.2.1. Committees, Subcommittees and Councils
As LaMoreaux (1998, p. 51) points out, “[n]o invention comes in a flash. Each is built on many minds sharing ideas and working towards the same goals.” At first, the auto-ID industry had very few innovators, most of whom were involved in bar code development. As was recognised in chapter five, it was around 1970 that product coding started to be noticed by retail and manufacturing companies, especially in the U.S. Until that time, individual innovators in small firms were attempting to offer solutions to companies in isolation. These solutions were dissimilar because each was proprietary. At the time the retail industry especially feared that bar code might cause more problems than it would solve through incompatible check-out systems and the implementation of a number of different product coding schemes (Brown 1997, p. 39). Firms had valuable ideas regarding the direction of bar code but were not able to share these with each other as there was no common body linking everyone together. This eventually led to the urgent formation of the Ad Hoc Committee in 1970. Trade associations collectively posed five questions to this committee. These included (Brown 1997, pp. 42f):
(1) Is a standard industry product code worthwhile even if it is not feasible to devise a standard symbol? (2) If so, what should that code be? (3) How can widespread acceptance of the industry standard be obtained? (4) How shall the code be managed? (5) Should there be a standard symbol representing the code, and if so what should it be?
As can be seen, these questions were all concerned with the bar code technology itself, not about such things as end-user acceptance. This is characteristic of a technology in its early adoption phase. The technology must work properly and must make sense economically before it can enjoy widespread adoption. In this manner, progress is connected to the technology itself: “Vorsprung durch Technik” (progress through technology).
In 1971 the Symbol Selection Subcommittee was formed, aided by the Ad Hoc Committee. The Committee was made up of young, intense and brilliant individuals who were committed to the cause. Meetings were “electric as idea fed upon idea” (Brown 1997, p. 58). Many skilful people committed large amounts of time to the committee while holding full-time positions during the day. The Symbol Committee enthusiastically sought help from anyone that was willing and so attracted a wider group of players who brought with them a great number of diverse issues, many of which were not technical in nature. The focus was now on how to get bar code successfully to market. Key tasks included:
1. Develop alternate agreements to license and/or put selected symbol in public domain
2. Visit key equipment companies
3. Initiate and coordinate special studies
4. Contact other affected groups, e.g., printer… manufacturers
5. Develop test parameters and formats
6. Develop environment guidelines
7. Interview and decide on special consultants
8. Develop press releases (Brown 1997, pp. 61f).
This was an important point in the history of bar code because the Committee encouraged firm-to-firm and firm-to-agency interaction. For the first time, industry stakeholders could voice their concerns about the proposed standard. Representatives from companies could also share their visions about the technology and potential applications. This kind of information exchange was fruitful in that it encouraged participatory behaviour by stakeholders, giving the Committee the ability to address critical issues in a timely manner.
Determined to complete its mission, the Symbol Committee finished its two-year investigation in 1973, announcing a suitable standard: the UPC (Universal Product Code) was officially born. A spin-off of the Symbol Committee was the formation of the Symbol Technical Advisory Committee (STAC) and later the Universal Product Code Council (UPCC). Seeing the invaluable work done by the UPCC, other standards-setting organisations were subsequently formed, such as EAN (European Article Numbering) and AIM (Automatic Identification Manufacturers). It is through these well-known organisations, councils and committees that international standards are set via the ISO (International Organization for Standardization) today. While bar code enjoyed steady growth after the mid 1970s, it was only when mass merchants like Kmart and Wal-Mart committed to U.P.C. scanning that adoption boomed. This is when bar code started to become noticeable to the general public.
6.2.2. Public Policy
The primary aim of the bar code was to improve the efficiency and productivity of the check-out process: it was oriented towards savings for business. Increased consumer convenience was a by-product but not something that preoccupied the attention of the Ad Hoc or Symbol Committees in the U.S. Very early on in the development of bar code, labour unions and consumer activist groups joined forces to oppose the new technology. First and foremost, any level of automation at the check-out counter equated to job losses. Labour unions were quick to highlight the inevitabilities and journalists were quick to report on them. Second, consumers were very sceptical about the removal of price tags on supermarket store items. Many shoppers had never seen electronic devices at that time, so the scanner at the check-out was treated apprehensively. A lot of doubt initially crept in regarding the accuracy of the new technology. It was difficult for many consumers to understand how a pattern of black and white lines imprinted on products could equate to a price for the good they were purchasing or to a decrease in queuing time.
Political games eventuated from the polemical situation between consumer activists and the Committee. Members of the Public Policy Committee (for bar code) even ended up at state legislatures and finally succumbed to the demands of consumers by putting forward several proposals for itemised pricing as well as the establishment of by-laws. By the late 1970s politicians had grown weary of the debate and abandoned it altogether. The Public Policy Committee ceased to exist in 1977 but served a crucial role in the early stages of bar code development as a mechanism to encourage interaction between various stakeholders. Yet this was not the end of public policy issues related to bar code. By the 1990s, labour unions and other independent bodies were now pointing to serious injuries suffered by employees who had to repetitively scan products for long periods of time with awkward equipment and heavy supermarket store items. Repetitive strain injury (RSI) received a lot of media attention and affected employees sought compensation. The U.P.C. also received attention from religious groups who saw the bar coding of products as a movement towards the fulfilment of prophecies in the Book of Revelation. Namely, members of these groups linked the U.P.C. with the “number of the beast” (666).
6.2.3. Spreading the Word
As more and more distributors, suppliers and retailers implemented bar code solutions, the word spread about the significant gains delivered by the technology. It caused a ripple effect in company supply chains especially. As a result, a greater number of customer inquiries were made to technology providers who were only too willing to answer queries from prospective customers. With each new request for information (RFI), technology providers could understand the needs of customers better and feed this knowledge back into the development process. The future was thus being moulded by the learning gained from each successive customer engagement. The evolution of bar code innovations became an interactive experience. As the awareness grew that bar code could be used not only for product coding but for literally thousands of other applications, bar code suppliers became inundated with requests and the rate of bar code-related patents increased substantially. Auto-ID firms no longer solely relied on their own knowledge production but also on the interaction between the various players in the industry, such as issuers of bar code cards, merchants and consumers, for valuable feedback. Cooperatives and alliances such as AIM began to emerge to support and promote activities for auto-ID product innovation. Among numerous other associations and forums, AIM assisted in catapulting bar code and other auto-ID technologies to the fore.
6.2.4. Clusters of Knowledge and a Growing Infrastructure
Formal knowledge generated and documented by councils, standards bodies, patent offices, universities and R&D programs became of growing importance, especially to new bar code company entrants who relied on existing infrastructure to start their operations. Associations like AIM Global provided support by publishing important documents and specifications for members. In addition, a great deal of explicit knowledge continues to be produced by students and staff doing research at universities on behalf of private enterprise or government (who fund their work). Among these are the Centre for Auto-ID at Ohio University, the Auto-ID Centre at MIT, the Automatic Data Capture Laboratory at the University of Pittsburgh, the NCTU Automatic Information Processing Lab in Taiwan, InsightU.org (an on-line university), the Information Management Institute (IMI), the Automatic Identification and Data Capture Program at Purdue University and the Robert W. Rylander Corporation, which has numerous collaborative projects with universities throughout the U.S. Some universities have even tailored undergraduate programs for the advancement of bar code or related auto-ID solutions. University researchers have the opportunity to exchange information with private enterprise via auto-ID conferences, trade publications and industry associations. Knowledge distribution in this environment has been among the most effective. What all these stakeholders understand is that communication about bar code technology and its future direction is paramount to its continued success. Back in the 1970s, informative communication may have been about letting retailers know that bar codes were useless without scanning equipment to recognise the code (Brown 1997, p. 153), but today the issues are more sophisticated in nature. The stakeholders themselves are generally more educated about bar code technology.
Today questions may arise regarding the type of physical infrastructure required such as Local Area Network (LAN) requirements, the suitability of various peripheral devices and how to exploit the information collected by bar code technology.
6.2.5. Setting Standards
Today each individual bar code application requires numerous standards considerations. Before a bar code can be used a symbology for the product innovation must be chosen along with the rules for information content, the bar code label, where the label is to be placed, the electronic data interchange (EDI) standard and verification standard. “[I]n some industries not only does the bar code label need to meet the required quality in terms of printing standards, but the data conveyed by the bar code also has to conform to a required structure” (Cohen, J. 1994, p. 100). Even the way bar code information is collected using data terminal equipment (DTE), transmitted over a network and stored in a relational database is standardised (Collins & Whipple 1994, ch. 5-7). Bar code standards have also been established by voluntary committees which over time have assisted in convincing other companies in the same industry to follow similar practices. Some standards-setting organisations like UCC/EAN are heavily oriented towards offering specific solutions to retail and have in some respect ignored the needs of non-retail members who are not commodity oriented (Moore 1998, p. 6).
Depending on the bar code aspect to be standardised (see exhibit 6.1), the process can be as simple as an employee presenting their findings to their immediate manager or as complex as multiple presentations to AIM International by technology providers, proceeding to global standardisation through the ISO. According to Bert Moore (1998, p. 3), former director of AIM technical communications, “it already takes an average of one to two years to create a standard” that is pan-national. At the international level it takes at least 50 per cent longer to accomplish anything.
Standards differ in type and importance. LaMoreaux (1998, pp. 213-214) distinguishes between major, mid-level, industry, company and lower level bar code standards. Examples of each can be found in table 6.2. Perhaps the most influential standards in the world today are industry-specific. Two examples of this in the retail industry are the U.P.C. and EAN. The U.P.C., a subset of EAN, is used to identify supermarket goods. First, a manufacturer’s number must be obtained to ensure uniqueness between, say, one can of pet food and another from a different manufacturer. Second, each product is allotted a number. When combined, the manufacturer number and product number uniquely represent a particular product. In the case of EAN-13, the above-mentioned U.P.C. numbers apply, plus an additional first two digits which identify the country in which the manufacturer’s number was allocated. EAN has now been implemented in over 70 countries worldwide. Overall, the aims of bar code standards bodies as outlined by J. Cohen (1994, p. 99) include:
- enable multiple use of a single symbology by a number of different users in the same industry
- reduce the amount of research needed by any single user to implement a bar code system
- encourage the development of standardised data collection systems within any one industry
- meet the majority of needs of all users within any one user group or industry.
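The number composition described above (a country prefix, a manufacturer’s number and a product number) is completed in practice by a final check digit computed from the preceding twelve digits, which is what allows a scanner to detect most misreads. The following is a minimal sketch of the standard EAN-13 check-digit calculation in Python; the function name is illustrative, and the sample number is a commonly published valid EAN-13.

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit from the first twelve digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1 and digits
    in even positions are weighted 3; the check digit is whatever
    brings the weighted sum up to the next multiple of ten.
    """
    if len(first12) != 12 or not first12.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# A U.P.C. can be treated as an EAN-13 with a leading zero, so the
# same calculation serves both retail symbologies.
print(ean13_check_digit("400638133393"))  # -> 1 (full code 4006381333931)
```

Because UPC-A numbers are a subset of EAN-13 (a leading zero), a single verification routine of this kind can serve both retail coding schemes.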
The gradual industry movement has been towards the tracking of products throughout the enterprise (e.g. Enterprise Resource Planning, ERP) and the supply chain (Supply Chain Management, SCM). The eventual goal is to implement true EDI using bar code technology to take advantage of value-added services (VAS) over the company extranet. TRADANET, the UK data network formed in 1982, is based on specific standards and is now able to offer EDI to international companies. However, not all industries want to conform to a single major bar code standard. In a move that could have a major impact on the global bar code market, the UCC and NATO (North Atlantic Treaty Organisation) are believed to have been working together to reach a consensus on shipment identification codes in the form of the SSCC-18 (Serialised Shipping Container Code) standard. If this is true, the ramifications would be startling and a ripple effect would take place throughout NATO's supply chain. From NATO supplier companies to other government agencies, it has been predicted that “every industry segment would, of necessity, adopt UCC/EAN coding and marking” (Moore 1998, p. 6). This would place immense pressure on bar code suppliers specialising in custom symbologies to conform to a potential super-standard.
Table 6.2 Different Levels of Standards
Type of Standard | Standards Body | Description
Major | ISO | Country-specific standards organisations like ANSI in the U.S. ANSI findings are given to ISO. ISO coordinates country-to-country negotiations until one agreed standards version is reached.
Mid-level | UCC | Industry-accepted standards that are optional and depend on local adoption, such as application-specific data in a string.
Industry | UCC/EAN | Bar code standard used in retail products. A can of SPC baked beans can be identified anywhere in the world by its bar code.
Company | K-Mart | Use standards in all their stores patterned after the UPC series.
Lower Level | Factory Plant | A standard that ties in all the various symbologies used by the one factory location.
* This table has been compiled using LaMoreaux (1998, ch. 8).
6.2.6. Legal Aspects
Bar code developers once placed symbologies in the public domain, granting access to whoever needed them, at no cost. As Palmer (1995, p. 243) recollects, early on there was a spirit of openness, even between competitors, who often assisted one another in an effort to get their products to work with new symbologies. Early developers could see the long-term benefit for all concerned of such cooperation. Today, that same spirit of openness does not exist. Bar code is a mature technology and there are a lot more players in the global market than there used to be, all vying for a share of the profits. By patenting bar code inventions manufacturers have realised that as well as protecting their intellectual property (IP) rights, they can also collect money via royalties from licence agreements and other contracts. However, one criticism of recent behaviour has been the incidence of over-patenting, especially by bar code manufacturers. Some inventors are taking advantage of the patent process in some countries and even patenting ideas that are intuitively obvious. According to Palmer (1995, p. 241) these instances have been counter-productive to the real spirit of innovation; ultimately end-users pay the costs, and technical progress in some areas of development is stifled as a result. Patents in the field of bar code are usually related to symbologies, hardware or applications. It is important for all stakeholders to be aware of what is happening in the industry because they do not want to find themselves having to pay large amounts of money to inventors who are more concerned with royalty revenues than with solutions. Formal challenges have been launched against a variety of committees, other manufacturers, and even end-users in the past.
6.3. Magnetic-Stripe Card: the Consolidating Force
6.3.1. Retail and Banking Associations Join Forces
The rise of the magnetic-stripe card, as we know it today, can be attributed to the collaborative efforts between the banking and transport associations, namely the American Bankers Association (ABA) and the International Air Transport Association (IATA). It is commonly stated that an American National Standards Institute (ANSI) publication in 1973, developed jointly by the ABA and IATA for a plastic credit card with a magnetic stripe, laid the foundations for widespread diffusion. By banding together, the two associations were able to present a positive case for standardisation. Banking and transport are two broad application areas that affect the masses, so the influence of the organisations on the direction of the magnetic-stripe card cannot be underestimated. Early on, however, magnetic-stripe technology, like bar code, was hampered by a lack of standards: “[a]s has so often been the case with the commercialisation of new ideas, one of the delaying factors was the absence of recognised international standards during its early existence” (Bright 1988, p. 14). The ISO finally resolved this issue through its Technical Committee for information processing standards (TC 97). International Standards (IS) 7810 and 7811 were published outlining definitions of the physical dimensions of the magnetic-stripe card, embossing, layout and reading requirements. With input from the IATA, the ABA and the Thrift industry, specific tracks on the magnetic stripe were defined for specific uses. Track 2, for instance, reserved for banking applications, contained a field for a primary account number (PAN) of up to 19 digits.
6.3.2. From Exclusivity to Interoperability
Solutions for magnetic-stripe cards based on proprietary schemes were initially used strategically by banks and other companies to secure a loyal customer base. Cash dispensers were not plentiful initially, so banks were able to attract customers by being the first to market. Louderbacker (1980, p. 40) recounts that one of the first cash dispensers was installed by the Chemical Bank in New York City in 1969. By early 1970, other banks began planning for full-service ATM (Automatic Teller Machine) installations. By the late 1970s bank card technology became a mechanism for differentiating financial institutions. If a bank was able to offer the card linked to its existing portfolio of services it was considered technologically advanced. Egner (1991, p. 56) wrote that ATM services were exclusive, and institutions like Citibank were actually able to shift market share by their promotion. The same could be said for Barclays Bank in the UK. There was often friction between the major bank players, who had reaped the rewards for taking the risk with the new technology, and the banking association that wished to exercise authority on behalf of all the other (and in most cases smaller) banks to make it a level playing field. In fact Citibank, so protective of its market share, vehemently challenged magnetic-stripe standardisation. Yet the bank soon realised that if it did not commit to the changes it would be left behind, eventually becoming the minority. In essence, what Citibank and others in a similar position were afraid of was losing their competitive advantage to interoperability. Also known as interchange, interoperability “…[r]elates to a situation whereby a card issued by one organisation, e.g. a bank, can be used in an ATM belonging to another” (Bright 1988, p. 15). Today most major service providers’ cards can be used in each other’s ATMs. And all this is possible because of the PAN that is defined in Track 2 of the magnetic stripe.
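Interchange hinges on every network being able to read the same PAN out of Track 2, regardless of which bank issued the card. As a rough illustration of the standard Track 2 layout (a ';' start sentinel, a PAN of up to 19 digits, an '=' field separator, then expiry and discretionary data, closed by '?'), here is a minimal Python sketch; the function names and the sample track data are hypothetical, and the Luhn (mod-10) check shown is the standard account-number checksum used for PANs generally, not something specific to this thesis.

```python
def parse_track2(raw: str) -> dict:
    """Extract the PAN and expiry date from raw Track 2 data.

    Track 2 layout: ';' start sentinel, PAN of up to 19 digits,
    '=' field separator, YYMM expiry plus discretionary data,
    '?' end sentinel.
    """
    if not (raw.startswith(";") and raw.endswith("?")):
        raise ValueError("missing Track 2 sentinels")
    pan, _, rest = raw[1:-1].partition("=")
    if not pan.isdigit() or len(pan) > 19:
        raise ValueError("invalid PAN field")
    return {"pan": pan, "expiry": rest[:4]}

def luhn_valid(pan: str) -> bool:
    """Standard Luhn (mod-10) check applied to primary account numbers."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(pan)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Hypothetical swipe using a well-known test PAN (4111111111111111):
track2 = ";4111111111111111=25121010000000000?"
card = parse_track2(track2)
print(card["pan"], luhn_valid(card["pan"]))  # -> 4111111111111111 True
```

Because the PAN field and its checksum are identical for every issuer, any ATM in the network can validate a foreign bank’s card before routing the transaction, which is precisely what makes interchange workable.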
6.3.3. The ATM Economic Infrastructure
As ATMs began to sprout up all over North America, the UK, Japan and Scandinavia in the 1980s, a physical infrastructure began to grow to support the banking sector. First and foremost, magnetic-stripe cards without ATMs were almost entirely useless: “[i]mprovements in card technology would not be particularly valuable without reader technology” (Browne & Cronin 1996, p. 102). Second, internal bank equipment needed to be able to communicate with ATMs. A physical network was required for this to become possible, and telecommunication data providers quickly sought these opportunities as they became available, using protocols such as X.25. Here is perhaps one reason why smart cards have not yet replaced magnetic-stripe cards in North America: the physical infrastructure in terms of the installed base of ATMs and POS equipment kept growing and growing throughout the 1990s. In some parts of the world, like the United States, Japan and Hong Kong, large investments in magnetic-stripe equipment have tied card issuing organisations to the technology. Weighing up the total potential losses as a direct result of fraud and other drawbacks of magnetic-stripe cards against the potential multi-million dollar investment of upgrading readers and writers for smart cards worldwide, one is able to understand how physical infrastructure directly affects innovations. Smart cards are also more complicated to produce and need more expertise than magnetic-stripe cards. And the more complicated the production process, the harder it is to produce large quantities.
6.3.3.1. The Global Inter-bank Network
The success of magnetic-stripe card technology can be measured by the increasing need for the interconnection of thousands of banks across every continent in the world. Colton and Kraemer (1980, pp. 22-23) list some of the major centralised network operations:
The Federal Reserve System (FedWire) manages Federal Reserve banks across the US, interconnecting 275 banks; the Clearinghouse Interbank Payment System (CHIPS) has the capability to execute international transactions among 62 financial institutions in New York; interbank switching in Japan is provided by the Zenginkyo and National Cash Service (NCS) network systems; the UK clearing banks have formed a company called Bankers Automated Clearing Services (BACS); and the Society for Worldwide Interbank Financial Telecommunication (SWIFT) links more than 239 banks.
One can only begin to estimate the number of agreements that are in place between so many different entities to allow it all to work properly. This kind of meshed structure cannot be established instantaneously but only after years of formal exchanges. The European Union is another example of inter-bank data transfer standardisation that will require thousands of banks to agree on a particular type of electronic payment system (EPS) that goes beyond even SWIFT (Central Banks 1989, p. 102). Of course, to understand the extent of sharing, not only of data but of physical resources such as ATMs, one must consider the networks of the large credit card and banking associations: Visa, MasterCard, Cirrus, PLUS, GlobalAccess, ATM™ and AutoCash. What is worth noting here is the support structure that has been built around magnetic-stripe functionality, i.e. being able to withdraw, deposit and transfer funds almost anywhere in the world. Not only are networks making communications possible, but business processes have been established to overcome the complexities: across geographies and borders, across economic and political systems, across currencies and across dissimilar institutions.
6.3.4. Calculated Social Change
“Twenty-five years ago, the very idea of going to a machine in order to withdraw money from a bank seemed outlandishly fanciful. Yet, with the rapidity so often associated with technological change, it soon became just another part of everyday life” (Korala & Basham 1999, p. 6-1). The same could be said for Electronic Funds Transfer at Point of Sale (EFTPOS). It is important to note, however, that while change was “rapid”, it still took a considerable amount of time for end-users to come to terms with the fact that they did not have to physically enter a branch to withdraw money. Governments across the globe committed resources to investigating the potential impact of the technical change. As in the case of bar code, labour unions and other groups were again quick to point out that the automation would mean job losses for bank staff. The technology appealed more to the needs of businesses, as they sought ways to operate more efficiently. Many bank branches have been closed as a result of the automation and face-to-face, over-the-counter staff numbers have been significantly reduced, driving consumers to change their habits for the sake of minimising bank fees and charges. Stephen Bennett (1995, p. 10), a senior manager with KPMG, wrote:
[e]lectronic transactions are considerably more cost effective than the counter based equivalent. This led to banks in the U.S. charging fees for branch based transactions and providing “free” transactions via telephone, ATM’s and EFTPOS, a concept that is now being embraced in Australia.
As part of their marketing campaigns in the 1970s credit companies mailed out plastic cards to consumers, and in the early 1980s banks mailed out magnetic-stripe cards to prospective cardholders. For many of the recipients it was unclear what added benefit the card could provide, although this was later realised. In addition, some consumers believed that the new technology would eventually lead to breaches of privacy. The rise of magnetic-stripe cards coincided with numerous Big Brother predictions inspired by Orwell and others. It was also at this stage that many countries across the globe formulated Privacy Acts. Citizen identity cards were also a topical issue in which civil libertarians became involved. The media added fuel to the debate by reporting on cases of social security fraud and stolen identities, which caused some consumer groups to lobby against the idea of a card altogether. Yet what most consumer groups did not realise was that they were really arguing against an identity number and not the card itself. At the same time, professional thieves were taking advantage of the lack of security on the magnetic-stripe card and in some instances siphoning thousands of dollars from legitimate cardholder accounts. All this acted to create some level of mistrust of the technology. There are still people today who will not use plastic cards to make any sort of transaction, though it is becoming more and more difficult for them to continue this practice. The younger generation, who have been brought up surrounded by technologies like the Internet, are less cynical about high technology in general. There is now an established customer base with which to leap into the new-age authentic cashless society (Egner 1991, pp. 105-109).
6.3.5. A Patchwork of Statutes
Current laws worldwide have lagged behind technological innovation. US privacy law, for instance, has developed in a piecemeal, case-by-case fashion. It is no wonder that some types of personal information enabled mostly by auto-ID techniques, such as supermarket transaction records, are still unprotected (Barr et al. 1997, p. 75). As can be seen from table 6.3 on the following page, U.S. privacy-related laws are a patchwork of statutes addressing specific areas and specific types of data. There is, however, no overarching structure or governing authority in place to enforce these statutes. This means that not only can laws vary between states, but in the global arena laws in other countries are also disparate, if they exist at all.
Table 6.3 U.S. Federal Law Statutes
Examples of Federal Law Statutes in the U.S.
§ Fair Credit Reporting Act (Credit records)
§ Internal Revenue Code (Tax return)
§ Electronic Funds Transfer Act (Banking records)
§ Electronic Communications Privacy Act (Information transmitted electronically)
§ National Labour Relations Act (Labour-related records)
§ Computer Security Act (Benefits-related records)
§ Video Privacy Protection Act (Video rental or sale records)
§ Cable Communications Privacy Act (Subscriber records)
§ Family Educational Rights and Privacy Act (Educational records) (see Barr et al. 1997, p. 75)
§ Credit Card Abuse Laws
§ Wire Fraud Act
§ The National Stolen Property Act
§ U.S. Copyright Act
§ Electronic Communications Privacy Act
§ State Computer Crime Laws (see Cavazos & Morin 1995, pp. 109-117)
§ Computer Matching and Privacy Protection Act (see O’Connor 1998, p. 10).
Consider the case where a traveller to a foreign country has his credit card stolen and misused by a perpetrator. Where does the liability lie: with the traveller, with the credit card company, or with the perpetrator? Whatever the perspective, for those unfortunate persons who have found themselves in this predicament (and these are not isolated incidents) the experience can be daunting as they attempt to prove their innocence. In the U.S. there is no single law governing electronic payments; these aspects are covered by provisions in the Civil Code (Central Banks 1989, p. 217).
The Uniform Commercial Code (UCC)... drafted in 1953... is currently being expanded to address the rights and liabilities of parties to large-dollar electronic funds transfers... Small-dollar electronic funds transfers, principally consumer-oriented automated clearing house payments, ATM transactions, and POS transactions are governed by Regulation E. This regulation was issued by the Federal Reserve in response to the Electronic Funds Transfer Act of 1978. It applies to all financial institutions and takes precedence over state law to the extent that it provides greater consumer protection than state law. Regulation E sets forth standards for financial disclosure, card issuance, access, and error resolution procedures. It also addresses the rights and liabilities of both consumers and financial institutions.
Regulation E under the Act of 1978 does not cover cheque guarantee and authorisation services, transmission of data between banks, or any transaction involving the purchase or sale of securities (Scott, M.D. 1994, p. 497). Canada has followed the United States by setting up a voluntary code of practice developed with debit card issuers, retailers, consumer associations, and the federal and provincial regulatory bodies. In 1992 the code for Consumer Debit Card Services was introduced by the Canadian Bankers Association (CBA), and numerous associations have since endorsed it. High-tech innovations like EFTPOS require long-term commitments to improving rules and regulations if they are to continually evolve to meet the needs of the end-user and withstand the test of time.
6.3.6. Incremental Innovations
A number of incremental innovations to the basic magnetic-stripe card have been introduced since its inception (see table 6.4 on the following page for Xico-specific product changes). Developers in magnetic-stripe have primarily aimed to increase basic track capacity and to protect data content with some form of encryption. While some of these improvements are theoretically possible, many hold that their widespread introduction is not economically viable and not worth pursuing. Take the example of the Magnetics and Information Science Centre (MISC), which has discovered a way of protecting the magnetic-stripe card against fraud: “The biggest expense of deploying Magneprint will be replacing or modifying card readers so they can read the magnetic wave patterns” (Stroud 1998, p. 2). Other experts, particularly those in smart card, believe that the costs of delivering projected magnetic-stripe innovations are too high and fall short when compared to smart card solutions that are already proven and on offer now. Svigals (1987, p. 146) predicted that if smart card were to replace magnetic-stripe cards, “...the economic and functional break-even point might be reached within a five-year period.” Svigals did not believe that incremental density changes to the magnetics would come close to challenging the advantages of the smart card. Yet these and other predictions made in the late 1980s and early 1990s have not eventuated, and those who were quick to publicise the demise of the magnetic-stripe card have been left wondering where things went wrong. It is true that smart card is starting to reach economies of scale and is becoming more affordable, but this does not necessarily equate to the total extinction of magnetic-stripe. Even Svigals (1987, p. 175f) himself acknowledged that:
[a]ll evidence suggests that the magnetic-stripe FTC will have a place in the future. A financial institution with a static market, a significant investment in magnetic-stripe work stations, a very low card acceptance rate and/or rapid customer turnover, and little prospect of additional types of electronic services will probably stay with the magnetic-stripe FTC... At the other end of the spectrum is the institution with a large stable of aggressive magnetic-stripe FTC users, a fast-growing range of electronic services, an increasing set of interchange and sharing arrangements, and a growing concern about magnetic-stripe-based losses and frauds. That institution will take an early look at Smart Cards... In between the two extremes are the majority of institutions... In the final analysis, an active effort to accommodate both types of financial transaction cards appears to be the appropriate action path.
The new-found relationship between magnetic-stripe and biometric techniques has also opened up a plethora of new opportunities for the technology. In addition, manufacturers of numerous auto-ID devices have even seen a possible convergence of the bar code, magnetic-stripe and integrated circuit (IC) into a single device.
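To make concrete the kind of data content that these incremental innovations sought to expand and protect, the following sketch parses an ISO 7813 Track 2 string and validates the card number with the Luhn (mod-10) check defined in ISO/IEC 7812. This is an illustrative Python sketch only; the field names and the sample card number (a well-known test number, not a live account) are not drawn from any particular vendor's implementation.

```python
def luhn_valid(pan: str) -> bool:
    """Validate a card number with the Luhn (mod-10) check (ISO/IEC 7812)."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # equivalent to summing the two digits
        total += d
    return total % 10 == 0

def parse_track2(raw: str) -> dict:
    """Split an ISO 7813 Track 2 string into its fields.
    Layout: ';' PAN '=' expiry(YYMM) service-code(3) discretionary-data '?'"""
    if not (raw.startswith(';') and raw.endswith('?')):
        raise ValueError('missing start or end sentinel')
    pan, _, rest = raw[1:-1].partition('=')
    return {
        'pan': pan,
        'expiry': rest[:4],
        'service_code': rest[4:7],
        'discretionary': rest[7:],
    }

fields = parse_track2(';4111111111111111=9912101123400001230?')
print(fields['pan'], fields['expiry'], luhn_valid(fields['pan']))
# 4111111111111111 9912 True
```

The mod-10 check digit illustrates why consumer groups were, as noted above, really arguing against an identity number rather than the card: the stripe merely carries a short, checkable numeric record.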
Table 6.4 Xico “Firsts” in Magnetic Stripe Technology
Year | Innovations & Achievements
1979 | Xico founded to design and manufacture advanced magnetic-stripe equipment.
1980 | First RS-232 controller for 3-track swipe readers and encoders.
1982 | First microcomputer-based manual swipe reader, with PC host-controlled sensor, device, and LED circuits via software command set.
1983 | First microcomputer-based manual insertion reader and reader/encoder, with PC host software control.
1983 | First integrated reader and reader/encoder terminals, both manual swipe and manual insertion, designed as smart peripherals ready for connection to PC.
1984 | First integrated Hi-Co/Lo-Co swipe encoder peripheral to encode three ISO track (#1,#2,#3), at four encoding densities (75, 105, 150, and 210 BPI), with three data formats (ISO, ALPHA, ISO BCD, and Hex), and three coercivities (300, 2750, and 4000 Oersteds).
1986 | First customer programmable card activated multifunction controller for time-and-attendance and access control.
1987 | First magnetic-stripe swipe reader with binary-pair (Wiegand) interface for access control.
1988 | First intelligent peripheral combo magstripe/barcode reader.
1989 | First comprehensive line of motorised reader/encoders for stored-value thin flexible cards.
1990 | First user programmable multi-coercivity encoder for research, quality control and production.
1992 | First user programmable motorised hi-co encoder terminal for driver licenses.
1993 | First fully waterproof swipe reader for access control.
1994 | First fully automated card verifier for investigative authorities and financial institutions.
1994 | First integrated stored-value card payment system using thin flexible cards for the amusement industry.
1995 | First user programmable production encoder for thin flexible cards.
1996 | First 2-channel multi-image reader for the gaming industry.
1997 | First secure stored-value reader for wireless telephony.
Source: Xico, Inc. http://www.xico.com/refs.html (1999).
6.3.7. Collaborative Research
Together with firms and standards-setting organisations, universities are also investing research dollars in developing further magnetic-stripe innovations, although admittedly many of these projects are industry-sponsored. Firms are generally the more protective about their intellectual property (IP), but as Bright (1988, p. 136) rightly points out, the reluctance of potential suppliers to disclose their techniques and progress is understandable given the commercial sensitivity involved. Universities, on the other hand, are able to be less secretive about their research to some degree, taking bigger risks as they attempt to attract industry players and thereby gather more funding for future projects. For instance, at Washington University in St. Louis there is a research institute dedicated to magnetic information technologies. MISC has developed Magneprint to increase the security of magnetic-stripe technology. Security had previously been one of the technology’s technical limitations, making smart card technology more favourable for access control applications.
Researchers at Washington University have invented a method for the positive identification of any piece of magnetic recording medium. The innovation permits a reading device to verify the authenticity of a document bearing magnetically recorded information, and to reject unauthorised copies... The innovation eliminates all types of magnetic fraud.
This innovation can now be implemented by manufacturers of magnetic-stripe cards to increase the attractiveness of magnetic-stripe technology compared to other card technologies. The innovation was first presented at a number of technical forums. Thereafter an article was published in a recognised journal and in 1993 became protected through worldwide patenting. With further trials conducted the university licensed Magneprint to Mag-Tek Incorporated, a firm that makes electronic readers (Stroud 1998, p. 1). It is especially encouraging to observe research efforts continue with magnetic-stripe media- this is yet another sign that the technique is continuing to evolve and will continue to meet the needs of a variety of applications.
6.4. Smart Card: the Next Generation
6.4.1. Social Specialisation of Labour
The fundamental difference between the smart card and magnetic-stripe card is the on-board intelligence. Since the smart card’s invention, the microchip has acted to boost the profile of the device. The ultimate vision for the card has been that of a ‘PC in your pocket’, i.e., a mobile PC. Although the card did not achieve expected diffusion rates in places like North America in the mid 1980s, entrepreneurs did not abandon it (especially in Europe). Throughout the 1990s smart card gave rise to a new breed of start-up companies that were eager to exploit opportunities as they arose. With these new start-up companies came new knowledge and also the delineation of niche areas of expertise. These companies included: integrated circuit (IC) manufacturers, smart card manufacturers, terminal manufacturers, smart card integrators, smart card software specialists (operating systems, applications and access) and numerous other third parties. Smart card product development was unlike that of traditional technologies: with so many individual stakeholders, many of them extremely specialised, designing an end-to-end smart card solution was a complicated task. Fruin (1998, p. 248) summarises developing smart card technology as “[h]ighly problematic, fraught with technical, organisational, managerial, and human resource difficulties”. Apart from the few large smart card manufacturers, the other technology providers were usually small in size and had limited resources. Departments within these companies had to be agile and customer-oriented but also forward-looking in terms of building generic hardware and reusable software. At the same time smart card component suppliers were dependent on one another, particularly because no one vendor could provide a whole solution without relying on contributions from smaller players (Lindley 1998, p. 87).
6.4.2. Firm-to-Firm Collaboration
Firm-to-firm collaboration between smart card companies continued to proliferate, particularly in Europe (Allen & Kutler 1997, p. 20), even though the North American market was still struggling. The establishment of the Smart Card Forum (SCF) in 1993 was an attempt to bring stakeholders even closer together. Citicorp, Bellcore, and the U.S. Treasury Financial Management Services Division were integral to the formation of the Forum, attracting business leaders from the public and private sectors to share a common smart card vision. By the end of 1997, the Forum boasted 230 corporate and government entities from around the world (Allen & Barr 1997, pp. 268-273). The common goals of the SCF included the:
- promotion of the interoperability of cards, devices, and systems to assure an open market capable of rapid growth
- facilitation of information exchange, communications, and relationship development across industries in order to stimulate market trials
- service as a resource to policy makers, regulatory bodies, and consumer groups on issues impacting smart cards, especially in the areas of social responsibility and privacy (Allen & Barr 1997, p. 266).
Working groups and cross-industry committees were subsequently set up to brainstorm issues specific to applications. The results of these studies are routinely published as white papers and standards, and delivered at industry presentations. Similar forums have begun to sprout throughout the globe (see exhibit 6.2). For example, the Asia Pacific Smart Card Forum (APSCF), based in Australia, was established in 1995 and had over fifty members in 2001. The APSCF not only brings firms with common interests together but also promotes the interests of members to key policy makers at both the political and bureaucratic levels of government (APSCF 2000).
6.4.3. Geographic Clustering
A pattern soon began to emerge linking the success of a smart card technology provider to its physical proximity to the customer. Lindley (1998, p. 88) also noted this, stating that globally there was “…a strong correlation between the incidence of local suppliers and smart card application users.” In an effort to increase their revenues, European and Asian suppliers entered the US market, establishing a local presence in the hope that this would result in sales. Some of these smart card suppliers believe that a smart card manufacturing group should be established in Silicon Valley: “[a] group such as this is needed to provide a road map, if you like, and a vision for the industry over the next decade.” Townend, who made this call, believed that the full spectrum of industry should participate in the group (McIntosh 1997, p. 45), both to foster greater collaboration between firms and to provide a central location at which to demonstrate the full potential of smart card to prospective customers.
6.4.3.1. Private Enterprise and University: Forging New Links
As a result of geographic clustering, very useful relationships began to form between private enterprise and local university research institutes. Not only was this a mechanism for performing useful collaborative research and development, it was also a way to attract skilled talent into the industry. Big smart card players like Schlumberger continue to fund and support such initiatives. The University of Michigan’s Centre for Information Technology Integration (CITI) is just one example: in late 1999 it formed a partnership with Schlumberger to develop the world’s smallest web server to run on a smart card. Prior to that, CITI was investigating the future possibilities of the U-M card, the university’s campus smart card, supplied by Schlumberger. Both groups believed that the partnership would be mutually beneficial in the long term. At the University of Malaga the GISUM group is also researching smart card. The work is being supported by the European Union (EU) and the Spanish Ministry of Science. Two projects are of interest here: the eTicket project and the electronic forms framework for citizen-to-government (C2G) Internet-based transactions. Some collaboration between universities and enterprise has resulted in university campus space being dedicated to technology parks and centres. For instance, the Smart Card Design Centre is operated as a business unit housed within the City University of Hong Kong.
6.4.3.2. Consortiums and Alliances
The late 1990s saw a trend towards the formation of consortia and strategic alliances. High-tech consortia typically pool specialist resources from private enterprise, universities and other institutes, usually in anticipation of a new opportunity. As opposed to collaborative research on a specific topic that seeks to satisfy particular outcomes, a consortium’s scope is broader and usually more exploratory, often in response to a government or prospective large-customer initiative. An example of this is the VerifiCard project in Europe, which includes six partners from four different countries, though it is not unusual to find consortia with twenty partners consisting mostly of private companies. Most consortia have at least one or two big players that influence the direction of the rest of the group. It is also not unusual to find fierce competitors coming together in consortia, although in overly competitive scenarios this is typically avoided and separate consortia form instead.
Traditional players in auto-ID applications have especially sought to form alliances with providers of infrastructure, including banks, financial services, and telecommunications companies (Allen & Kutler 1997, p. 16; Keenan et al. 1997, p. 37). Smart card business developers have identified new creative possibilities, piggybacking on the success of existing applications but in many cases the market response from users and merchants has been uncertain. For instance, there is the possibility for telecommunications operators to be earning revenues from public payphones capable of acting as cashless de facto ATMs or consumers being able to add vending machine purchase charges to their mobile phone bill or even CATV companies making use of set-top boxes to give subscribers online services-on-demand. All these ideas sound very useful but in addition to the possibility of very slow take-up rates, deployment can be very tricky as well. Independent software vendors (ISVs) specialising in smart card can build what look to be cutting-edge applications but without the access infrastructure (fixed or wireless), it is impossible to proceed. In the same way telecommunication operators may wish to deploy state-of-the-art applications but how to collect revenues from subscribers (i.e. billing issues) and how to share profits between the players in the value chain may be fuzzy. Alliances also act to curb the threats from non-traditional new entrants that may know little about the smart card business but have the venture capital to invest. Kaplan (1996, p. 22) provides a good example of the Smart Card International experience. The company assembled worldwide licensing rights but it was unable to distribute its product because it had no strategic alliances with other companies to assist with reselling.
6.4.4. Communicating Information
With smart card development innately encouraging so many interactions between stakeholders, it is no surprise that so much literature has been published on the topic. The distribution of information has acted to continually educate all the various stakeholders, including users, about smart cards and their applications. People are generally better informed than they were when the bar code and magnetic-stripe card were introduced. Even the number of journals dedicated to smart cards is indicative of the general growth of the auto-ID industry over the years; an explicit knowledge infrastructure has grown with the industry. Industry associations are also contributing to smart card growth, such as the Smart Card Industry Association (SCIA), established in 1989. SCIA represents smart card technology providers, acts as a resource centre, and is involved in organising conferences and other industry events; its primary purpose is educational in nature. Card Europe is another association that helps promote user confidence in smart cards. Card Europe believes (Kaplan 1996, p. 318):
...that only by achieving consensus across both industry and country borders, will we be able to achieve a true representative set of products and standards leading to full interoperability with a multi-service capability...
6.4.5. The Importance of ISO
It is not difficult to see why standards play such an important role in smart card development; without them there would be no common point of reference for any of the stakeholders to follow. ISO is a worldwide federation of national standards bodies that has worked towards making cards and equipment interoperable. Adherence to ISO standards is not compulsory, but it is advisable. In the case of magnetic-stripe card technology, it was no coincidence that ISO 7810 was composed, “rather [it was] the close cooperation among major providers that established global standards and specifications” (Kaplan 1996, p. 210). Early smart card developers initially adopted existing magnetic-stripe standards in order to allow a smooth migration from magnetic-stripe. Other important ISO standards that influenced the rise of smart cards were ISO 7816, which defines integrated circuit cards (ICCs) with contacts, and ISO 10536, which defines contactless ICCs. Suppliers should be ISO 7816 or ISO 10536 compliant, even though adhering to ISO standards does not of itself ensure that interoperability is achieved between cards and terminal equipment. ISO leaves room for industry-level specifications, but when none exist mismatches can happen (McKenna & Ayer 1997, p. 48).
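The common point of reference that ISO 7816 provides can be illustrated at the byte level: every command sent to a contact ICC is a command APDU with the fixed header CLA, INS, P1, P2, optionally followed by a length-prefixed data field (Lc) and an expected-response length (Le), as defined in ISO 7816-4. The Python sketch below assembles a short-form APDU; the application identifier (AID) shown is a hypothetical value used purely for illustration.

```python
def build_apdu(cla, ins, p1, p2, data=b'', le=None):
    """Assemble a short-form ISO 7816-4 command APDU:
    header CLA INS P1 P2, then optional Lc + data, then optional Le."""
    if len(data) > 255:
        raise ValueError('a short APDU carries at most 255 data bytes')
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data        # Lc field
    if le is not None:
        apdu += bytes([le & 0xFF])               # Le; 0x00 asks for up to 256 bytes
    return apdu

# SELECT by name (CLA=00, INS=A4, P1=04): choose an on-card application.
# The AID below is a made-up value for illustration only.
aid = bytes.fromhex('F000112233')
select = build_apdu(0x00, 0xA4, 0x04, 0x00, data=aid)
print(select.hex())   # 00a4040005f000112233
```

Because every compliant card and terminal agrees on this byte layout, a reader need not know anything vendor-specific to address an application on the card, which is precisely the interoperability the standard is meant to deliver.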
As has already been mentioned, ISO ICC standards are not so constraining that there is no room for industry-specific standards; thus in some cases additional specifications need to be drawn up. In late 1993, Europay, MasterCard and Visa took the initiative to join forces as EMV to formulate ICC Specifications for Payment Services. As Kaplan (1996, p. 214) explains, the EMV cooperation was the pooling of expertise for a common goal. The objective was,
“to eventually permit interoperability among chip-based payment cards for credit and debit applications. Without common technical standards, an array of incompatible systems would proliferate- building serious barriers to both consumer and merchant acceptance” (Allen & Kutler 1997, p. 8).
Dreifus and Monk (1998, p. 42) note that the development of the EMV specification followed a series of evolutionary steps. The EMV specifications were delivered in three parts, each focusing on a different set of issues: EMV-1 described the smart card and its environment, EMV-2 the terminal environment, and EMV-3 how data would be exchanged between the card and the terminal. An excellent lesson learnt from the development of the EMV specification is (Allen & Kutler 1997, p. 12):
that progress will require collective discussion, and action. No one company can optimise smart cards unilaterally, and even industry-wide coordination through, say, a banking or retailing association, will fall short of the mark.
Just like EMV, the European Telecommunications Standards Institute (ETSI) decided in the 1980s to formulate an industry specification for its proposed Global System for Mobile Communications (GSM) network. The specification, known as the SIM (Subscriber Identity Module), is predominantly used in Europe and Asia. The SIM has the functionality to perform authentication and to offer a personalised service to subscribers. GSM offers international compatibility and allows the subscriber to roam in any country where there is GSM coverage. GSM specifications include: security aspects (02.09), SIM (02.17), network functions (03.20) and SIM interface (11.11). When designing smart card solutions, different levels of standards need to be adhered to depending on the application. It should also be noted that standards and specifications can change and/or evolve.
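The SIM's authentication function follows a simple challenge-response protocol: the network sends a random challenge (RAND), the card computes a 4-byte signed response (SRES) from RAND and its secret key Ki using the operator-specific A3 algorithm, and the network compares the result with its own computation. Ki itself never leaves the card. The sketch below illustrates the flow in Python; since A3 is operator-chosen and not published, HMAC-SHA256 stands in for it here, and all key values are illustrative.

```python
import hashlib
import hmac
import os

def a3_response(ki: bytes, rand: bytes) -> bytes:
    """Derive the 4-byte signed response (SRES) from the subscriber key Ki
    and the network challenge RAND. Real GSM uses an operator-chosen A3
    algorithm (e.g. a COMP128 variant); HMAC-SHA256 is a stand-in here."""
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

# Personalisation: Ki is written into the SIM at issuance and also held
# by the operator's authentication centre; it never travels over the air.
ki = bytes(16)                    # illustrative all-zero key

# Authentication run: the network issues a fresh 16-byte RAND ...
rand = os.urandom(16)
# ... the SIM computes SRES on-card and returns only that value ...
sres = a3_response(ki, rand)
# ... and the network accepts the subscriber if its own computation matches.
assert sres == a3_response(ki, rand)
```

The design choice worth noting is that only the short SRES crosses the air interface, so eavesdropping on a session reveals neither Ki nor anything reusable for a later challenge.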
6.4.6. Legal, Regulatory and Policy Issues
In 1987 Svigals (p. xviii) noticed that the national governments of Japan and France were beginning to implement government policies and actions relating to smart cards. Fifteen years later, the rise of smart card schemes in operation has brought the question of regulation into the spotlight. This is not necessarily a bad thing for the industry; some experts see it as an evolutionary step in the life-cycle of smart cards. Barr et al. (1997, p. 69) believe that a technology such as smart card is becoming commercially significant when lawyers and regulators begin to study the legal, regulatory and policy implications, as is happening presently. From about the late 1990s discussion about Regulation E has increased. In the past it was easier to identify smart card applications that involve financial transactions and require appropriate regulation, but the introduction of multiapplication smart cards has blurred this defining line. Financial institutions are no longer just banks, building societies and credit unions; they can be anything from telecommunications companies to airlines, depending on the services being offered. The Federal Reserve Board believes “that if cards are used to access an account” they are subject to Regulation E (Noe 1995, p. 44). Thus the Board has issued proposed changes to Regulation E and how it should be applied to stored value cards (SVCs); the key issues are the definition of an ‘account’ as applied to smart cards, SVC conditions and terms, transaction receipts, and consumer liability. One industry spokesman, James Hayes, president of Cash Station Incorporated, does not think that Regulation E should be imposed on SVCs, comparing them to cash equivalents rather than customer transaction accounts. He believes that smart card
development will be impeded by regulation imposed before the purpose, risks and benefits can be clearly assessed... [he] cautioned that smart card regulation is in its infancy and that it will continue to evolve (Noe 1995, p. 45).
Smart cards also bring with them a whole new array of questions related to privacy. Consumer acceptance of the smart card in some geographic regions is very low, even in cases where adoption of other high technologies such as mobile phones has been high. Tarbox et al. (1997, p. 262) are blunt regarding the amount of information that can potentially be stored on a smart card, arguing that smart card issuers must disclose to application developers and consumers how and by whom information will be accessed, and how it will be distributed. With the rise of multiapplication cards, the problem of ‘who owns the information’ is even more complex to solve; at least a single application card can undergo some sort of assessment with visible limits. Another question mark surrounding worldwide interoperability of smart cards is how they will be regulated when used in different countries. For example, does a regulation applied in the U.S. have any legal bearing in Australia or Japan? Some have suggested the enactment of a number of privacy torts related to smart card; others are encouraging the use of electronic contracts between issuers and consumers, since new laws are not about to appear overnight. The problem of card management is also not straightforward: in the case of multiapplication smart cards, which can hold several applications concurrently, which company is liable for card issuance, faults in applications linked to the card, and other such matters? Many citizens across the globe have vehemently protested the use of smart cards for citizen identification. However, in some countries citizens are powerless to voice their concerns, and governments have already introduced unique lifetime identifiers (ULI) linked to an ‘everything’ card.
6.5. Biometrics: In Search of a Fool-Proof Solution
6.5.1. An Emerging Technology
The biometrics industry is considered “young” and “emerging” (Kroeker 2000, p. 57; Tilton 2000, p. 130). It is made up of about 150 separate hardware and software vendors (Liu & Silverman 2001, p. 30). The companies are usually small in size compared to the rest of the computer industry, and for this reason they are dependent on resellers and systems integrators to get their products to market (Burnell 1998, p. 2). Given the newness of the technology, it can be difficult to find the right integrators in the right place at the right time to implement a particular type of solution. A fair degree of customisability and niche expertise is required in biometric applications- it is not a case of one size fits all. For example, an integrator specialising in fingerprint recognition systems may not have the same level of competency to do a voice recognition implementation. Thus, each new customer contract is not only an opportunity to gain more revenue but also exposure to a different set of problems that will equip all the stakeholders with valuable tacit insights for the longer term.
Over the last five years, integrated solutions for biometrics have seen the formation of a number of alliances that have led to a greater acceptability of the auto-ID technique. In most cases the hardware suppliers are teaming with software companies, while some other companies have enjoyed such synergy within an alliance that they have sought to form completely new companies together (Cummey 1998b, p. 3). Investors have generally been wary of sponsoring technologies like biometrics that have not proved completely roadworthy in certain situations; in these instances “banks [especially] tend to err on the side of caution” (Jacobs 1998, p. 1). In recent times however, the major computing, networking, security and Original Equipment Manufacturer (OEM) companies have begun to play a more visible role in the support and development of biometric technology as they have seen its potential bolstered, particularly through government adoption for mass market applications. As end-to-end solution providers start to surface and the infrastructure to support biometrics is put in place, the technology will inevitably stabilise.
6.5.2. From Proprietary to Open Standards
One problem caused by having so many small players in biometrics is the fragmented and non-standard manner in which vendors develop their products, in isolation from one another. For instance, Vendor A may have developed a robust biometric technology that solves a particular part of an overall solution, and Vendor B may have a supplementary piece of technology, but the two products cannot be integrated for a particular solution without some expensive and arduous programming. This has deterred customers from choosing biometric solutions and, in the opinion of many players, has held back the industry. Like most new technologies, biometrics companies have been slow to embrace a set of standards. But according to Tilton (2000, p. 130) this is exactly what the industry requires. Traditionally biometric technology was used for government and law enforcement applications where a high degree of custom integration was required. Today what is needed is off-the-shelf biometrics for rapid deployment, and this is what is currently evolving.
With so many small companies, and so many different types of biometric techniques and components, one can only imagine the number of proprietary interfaces, algorithms and data structures that were introduced by the biometrics community. As the small industry began to grow, vendors started to offer software development kits (SDKs) with proprietary APIs. While this was a step in the right direction, the interfaces were still proprietary. According to Burnell (1998b, p. 1), 1998 was a defining stage in biometrics history as suppliers began to reach out to the wider computing community. The standards issue gathered momentum as large players like the Microsoft Corporation saw the technology’s potential, and the BioAPI Consortium was born. The Consortium championed the creation of a standard application programming interface (API).
BioAPI is an open-systems standard developed by a consortium of more than 60 vendors and government agencies. Written in C, it consists of a set of function calls to perform basic actions common to all biometric technologies, such as enrol user, verify asserted identity (authentication), and discover identity (Liu & Silverman 2001, p. 30).
BioAPI is based on an architecture model which contains two to four layers, depending on the design. The highest level contains the fundamental biometric functions. The lowest level is where the control of interfaces with devices occurs (Tilton 2000, p. 131). Subsequent to the fine work of the BioAPI Consortium has been that of the Information Technology Laboratory (ITL). After the tragic events of the September 11th attacks, biometric standards activities were accelerated in response to newly formed U.S. security legislation. ITL spearheaded this development in collaboration with Federal Agencies, end-users, biometric vendors and the IT industry at large. The current standards activities are extensive and are gaining a great deal of attention.
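The call pattern Liu and Silverman describe- enrol, verify (a 1-to-1 match against an asserted identity) and identify (a 1-to-N search)- can be illustrated with a toy interface. The sketch below is a Python paraphrase, not the actual BioAPI C signatures: the class and method names are invented for illustration, and byte-equality stands in for a real biometric matching algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricServiceProvider:
    """Toy stand-in for a BioAPI-style service provider.

    Templates are stored per user; 'matching' is a placeholder
    equality test, not a real biometric algorithm.
    """
    templates: dict = field(default_factory=dict)

    def enroll(self, user_id: str, sample: bytes) -> None:
        # Enrolment: capture a sample and store a reference template.
        self.templates[user_id] = sample

    def verify(self, user_id: str, sample: bytes) -> bool:
        # Verification: 1-to-1 match against the asserted identity.
        return self.templates.get(user_id) == sample

    def identify(self, sample: bytes):
        # Identification: 1-to-N search over all enrolled templates.
        for user_id, template in self.templates.items():
            if template == sample:
                return user_id
        return None

bsp = BiometricServiceProvider()
bsp.enroll("alice", b"fingerprint-template-01")
print(bsp.verify("alice", b"fingerprint-template-01"))  # True
print(bsp.identify(b"fingerprint-template-01"))         # alice
```

Whatever their internal algorithms, vendors exposing a common surface of this kind is what allows application writers to swap one biometric technology for another- the layered BioAPI model separates such high-level calls from the device-control layer below.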
6.5.3. Consortiums and Associations
Apart from the BioAPI Consortium, a number of other working groups have formed to support biometric technology. These consortiums differ somewhat from the smart card consortiums. They have been established for the purpose of instilling stakeholder confidence in the technology and to bring together key representatives who have a common interest. Among the list of consortiums and associations are the International Biometric Industry Association (IBIA), the Commercial Biometrics Developer’s Consortium (CBDC), the Biometric Testing Services (BIOTEST), the Association for Biometrics (AfB), the Financial Services Technology Consortium, the International Association for Identification (IAI), and the National Centre for Identification Technology. Perhaps the most influential of them all, however, is the Biometric Consortium (Alyea & Campbell 1996). The Biometric Consortium can be likened to the Smart Card Forum in aim and purpose, except that it works on behalf of the U.S. Government and is represented by officials from six executive government departments and each of the military services. The National Security Agency (NSA) formed the Consortium as part of its Information Systems Security mission, establishing one central body for the research, development, testing and evaluation of biometrics, and invested personnel and funds in its support. The NSA considered biometrics to have excellent potential for DOD (Department of Defence) applications and other Federal agencies and wanted the independent technical capability to make decisions for government needs. The Consortium, however, is also concerned with the exchange of information between the government, private industry and academia. If biometrics is to continue to develop worldwide, these vital forms of interaction must continue.
6.5.4. Government and Industry Links with Academia
Biometrics research centres have sprouted up all over the globe. This is one technology where there is a lot of scope for government and industry linkages with academia in the development of potential biometric applications. In 2001, for instance, DOD became a member of the Centre for Identification Technology Research (CITeR) at West Virginia University (WVU). WVU has one of the world’s leading forensics degree programs. CITeR was developed in collaboration with Marshall University, Michigan State University and San Jose State University to serve as one of the first academic biometric centres. San Jose State University was awarded a 400,000 U.S. dollar contract in 1995 to “study and develop standards for biometric identifiers for use with commercial truck drivers’ licenses” (Woodward 1997, p. 1482). Research on biometrics at San Jose State began in 1994. In 1997 the Biometric Consortium established the National Biometric Test Centre at the university. San Jose State is also the only university to participate as a member in the Biometric Consortium. In Asia, the Hong Kong Polytechnic University has some impressive ties with industry and other academic institutions, including the National Tsing Hua University in Taiwan, University of Sinica and University of South Florida. The Lab in Hong Kong specialises in transferring multiple biometric technologies to industry and is currently exploring integrated biometric solutions. It is continually building up its knowledge base as it sees local opportunities for biometrics arising. Other universities involved in biometric research include MIT Lincoln Labs, Purdue University, Nagoya University (Japan) and Rutgers University. Some of the European universities researching biometrics include the University of Bologna (Biometric Systems Laboratory, Italy), the University of Neuchatel (Pattern Recognition Group- IMT, Switzerland), and the University of Cambridge (Speech Vision and Robotics Group).
6.5.5. Legislation and New Technologies
Laws almost always lag behind new innovations, and the case of biometrics is no different (Walden 2000, pp. 2/1-2/11). Kralingen et al. (1998, p. 2) clearly state that “[w]hen a new technology is introduced, its applicability and the adequacy of existing laws needs to be examined.” Yet opinions are divided over whether present laws are sufficient to handle privacy issues or whether new protections for privacy need to be introduced specifically for biometrics. Right now, biometrics is still new for the courts- there is no law governing biometrics. The best service providers can do is to develop their own Code of Fair Information Practice (CFIP) to gain the confidence of the consumer, even if these are not enforceable by law (Woodward 1997, p. 1484). It follows that there is a growing need for policy makers to understand biometric technology and how unique human features stored digitally can or may be used. Kralingen et al. (1998, p. 1) prefer the proactive approach rather than “simply waiting until problems arise and then think[ing] up an ad hoc legal solution.” Ideally, by the time a new innovation is introduced and adopted by the mass market, some analysis of the legal implications of its applications will already have been conducted. At present the reverse can be said to be taking place, as governments especially, throughout the world, implement mass market biometric applications for voting and social security welfare.
One of the most contentious issues in biometrics today is whether enrolment in particular applications is obligatory as opposed to voluntary. The former has statutory implications (Kralingen et al. 1998, p. 2) because a biometric can be considered a type of personal data, owned by the individual. However, court cases in the U.S. have consistently ruled that certain biometrics do not violate federal protections like the Fourth Amendment. O’Connor (1998, p. 9) determined that the “…real test for constitutionality of biometrics… appears to be based on the degree of physical intrusiveness of the biometric procedure. Those that do not break the skin are probably not searches, while those that do are”. In purely rational terms it is also difficult to argue against a technology that could save governments (and subsequently taxpayers) millions of dollars in areas like Social Security by reducing fraud. The fear, however, is that biometrics gathered for one purpose could be submitted as admissible proof in a court of law for a completely different purpose. The debate over access to biometrics has taken on another perspective since the terrorist attacks on the U.S. World Trade Centre in 2001 and the Bali bombing in 2002. O’Connor (1998, p. 9) prophetically stated years before that “[t]he government may still be able to show compelling state interests in combating terrorism, defending national security, or reducing benefits fraud sufficient to preserve the program’s constitutionality.” In these extreme circumstances (i.e. terrorist attacks) the case for mandatory biometric identification is a lot stronger. Having said that, government applications that use biometrics should be considered carefully. Kralingen et al. (1998, p. 3) stipulate that the government has a role to play in ensuring that an adequate framework is in place for a given context, that special attention be paid to user acceptance, and that the quality of critical social processes is guaranteed.
6.5.6. Privacy: Friend or Foe?
There are two schools of thought when it comes to biometrics: either these devices are privacy safeguards or they are privacy’s foe. The positions can be summarised:
1) biometrics do help to protect an individual’s right to privacy because identification is ensured and access to information is limited;
2) biometrics is “a threat to civil liberties, because it represents the basis for a ubiquitous identification scheme, and such a scheme provides enormous power over the populace” (Clarke 1994).
Those who regard biometrics as privacy’s foe hold numerous fears about the technology and its relatives. First, they do not like the idea that they must give up a biometric identifier which is unique. Second, they believe that an underground market will form around biometric data. Third, biometric data may be used for law-enforcement purposes. Fourth, some biometric data may be linked to centralised databases containing medical history (Woodward 1997, p. 1484). Fifth, data gathered for one purpose will be used for another depending on who has power over it (this is known as function creep). Sixth, biometric technology discriminates against some persons with disabilities. It is to this end that widespread consumer acceptance of the technology has been hampered. The following paragraph is a typical scenario of what civil libertarians claim they are fighting against.
Imagine an America in which every citizen is required to carry a biometrically-encoded identification card as a precondition for conducting business. Imagine having your retina scanned every time you need to prove your identification. Imagine carrying a card containing your entire medical, academic, social, and financial history. Now, imagine that bureaucrats, police officers, and social workers have access under certain circumstances to the information on your card. Finally, imagine an America in which it is illegal to seek any employment without approval from the United States government (Williams 1996, p. 1).
According to Wayman (2000, p. 76), the privacy fear is very much related to how governments could use biometric records in the future to track individuals in real-time. In an interview Davies states: “[w]e can conceivably end up with a multiple purpose national/international system from which people can’t escape” (Roethenbaugh 1998, p. 2). Perhaps the most controversial of all biometrics is DNA and its potential future applications.
6.5.7. End-User Resistance
Biometrics has also differed from any other auto-ID device before it in terms of its level of invasiveness. People were used to remembering PINs and carrying cards, but they were definitely not used to using body parts to grant them access to funds and the like. Biometrics has forced an ideological and cultural shift to take place. The human body almost becomes an extension of the machine for the moment that the physical trait is being verified or authenticated. This is what could be considered intimate human-computer interaction (HCI). Biometrics designers have therefore had to pay attention to consumer requirements when building biometric systems to minimise resistance.
Fears of ‘Big Brother’- combined with intrusive measuring devices such as bright lights and ink pads- have had even technophiles dragging their feet on occasion. As the systems have become less intrusive however, user resistance has dwindled, but the suspicion is still there, vendors said, and agencies should not underestimate the importance of a user feeling comfortable with a technology (Lazar 1997, p. 4).
While designers can respond to making biometric systems more user friendly, they really cannot cater for the needs of those people who hold religious beliefs about how biometric technology may lead to the fulfilment of prophecy in the Book of Revelation (13:16-17).
16 Also it causes all, both small and great, both rich and poor, both free and slave, to be marked on the right hand or the forehead, 17 so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name.
Short of calling this group of people fundamentalists, as Woodward (1997, p. 1488) refers to one prominent leader, Davies is more circumspect:
“I think they’re legitimate [claims]. People have always rejected certain information practices for a variety of reasons: personal, cultural, ethical, religious and legal. And I think it has to be said that if a person feels bad for whatever reason, about the use of a body part then that’s entirely legitimate and has to be respected” (Roethenbaugh 1998, p. 3).
Opponents of the DSS Connecticut fingerprint imaging scheme, for instance, mostly argued that fingerprinting was invasive and dehumanising. These opponents cannot be considered fundamentalists merely because they disagree with the State. The naive response of the DSS was to “narrow [public] perception” by making the state’s chief executive the first to be fingerprinted (Connecticut Dept. 1998, p. 2). Of course, if it were that easy to change public perception, it would be equally easy to change people with all sorts of cultural, religious and philosophical objections to biometrics. This kind of intolerance of diverse attitudes is dangerous.
One of the least discussed topics in biometrics related to privacy is ethics. Davies stated in 1998 that “[t]he biometrics industry need[ed] to develop an ethical backbone” (Roethenbaugh 1998, p. 3). This was with specific reference to the targeted use of biometric technology on minority groups such as prisoners, uniformed personnel and the military. Davies is quoted as saying: “I’ve heard it said that captive groups are a good target market and that the biometrics industry can work outwards from there… The idea of target captive populations is offensive and sneaky” (Roethenbaugh 1998, p. 3). By the same token, multimodal biometrics present further ethical dilemmas. The use of one or two biometrics for a variety of applications may be warranted, but the use of numerous biometrics could be considered somewhat intrusive and dangerous. However, multimodal biometrics vendors pronounce that several modalities “…achieves much greater accuracy than single-feature systems” (Frischholz & Dieckmann 2000, p. 64). In the final analysis, “[d]espite 20 years of predictions that biometrics devices will become the next big thing, proliferation has been slow because of technical, economic, human-factor, legal, ethical, and sociological considerations” (San Jose 2002, p. 1). Until these matters are brought to the fore, biometrics innovation will be stifled.
6.6. RF/ID Tags and Transponders: The New Arrival
6.6.1. A Time to Grow, a Time to Nurture
Due to the relatively small number of manufacturers in RF/ID coupled with the lack of standardised equipment, service providers have had a limited range of systems to choose from. According to Kitsz (1990, p. 3-41) the issue of interoperability has hardly been addressed. Users cannot pick and choose different equipment from several vendors based on price or capability (or any other differentiating factor) with the assurance that everything will work together. In fact, the likelihood at present is that equipment will not work together seamlessly. For instance, tags purchased from one vendor may not be read by a device from another vendor.
The goal of standardisation is to create a generic tag and reader that ideally could be purchased from several vendors, resulting in lower costs and multiple ready sources of supply. While standardisation makes specifying easier, standards pose a problem in that the tag-to-reader communication is typically proprietary to each manufacturer. The problem is compounded by the fact that tags come in many differing forms and information capacities, and are used in different environments.
This has surely deterred some users from choosing RF/ID over other auto-ID technologies. Consider the service provider who needs to make a large investment in RF/ID and only has a choice between vendors and not between equipment components such as tags, transponders, readers, software, etc. In this instance, a proprietary solution from one vendor alone has major implications. To offset this predicament, advocates of RF/ID point to the ever-increasing investment in new start-up companies focussed on RF/ID technology and applications. These new companies are vital to the technology’s accelerated growth. As users, present and potential, see more and more players entering the market they become more comfortable with the technology and are more likely to purchase RF/ID systems for long-term solutions. Ames (1990, p. 6-10) uses the words “legitimacy” and “credibility” to describe the effects that new companies have on users and the industry worldwide.
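The interoperability gap described above can be pictured in code. The sketch below assumes a hypothetical vendor-neutral reader interface- none of the class names correspond to real RF/ID products- and shows how application code written once against a common interface could work with either vendor's equipment, which is exactly what proprietary tag-to-reader protocols prevent today.

```python
from abc import ABC, abstractmethod

class TagReader(ABC):
    """Hypothetical vendor-neutral interface of the kind a
    tag-to-reader communication standard would pin down."""

    @abstractmethod
    def read_tag_id(self, raw_frame: bytes) -> str:
        ...

class VendorAReader(TagReader):
    # Vendor A (invented) frames the tag ID big-endian.
    def read_tag_id(self, raw_frame: bytes) -> str:
        return raw_frame.hex()

class VendorBReader(TagReader):
    # Vendor B (invented) uses a proprietary little-endian framing.
    def read_tag_id(self, raw_frame: bytes) -> str:
        return raw_frame[::-1].hex()

def log_tag(reader: TagReader, frame: bytes) -> str:
    # Application code depends only on the interface, so equipment
    # from either vendor can be mixed and matched.
    return reader.read_tag_id(frame)
```

Without an agreed interface, the application code itself must be rewritten for each vendor's framing- the per-vendor lock-in the service provider above faces.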
6.6.2. Standardisation: Opposing Forces at Hand
The fact that some RF/ID manufacturers see standardisation as a threat to their survival (Ames 1990, p. 5-6) does not comfort potential users at all. Some manufacturers believe their core business is based on remaining a closed system supplier, so they are not concerned about contributing to a global standards process. The reality is that conforming to a set of open standards will inevitably lead to a reduction in competition based on proprietary interfaces and protocols; other differentiating factors will subsequently become the basis for competitive advantage. As RF/ID begins to find applicability in open systems, vendors have a lot to lose if they are not willing to conform to a set of standards. The potential for the technology is incredible, but as long as “[n]obody’s system is ever compatible with anybody else’s” the technology will fall short of its mark (Kitsz 1990, p. 3-41). Ames’ (1990, p. 5-8) prediction that interoperability would become a critical issue after the year 2000, particularly for applications with a global purpose, has proved true. In the example of herd management, tags are still utilised in proprietary environments. Worldwide, governments have started to impose regulations which will affect farmers, particularly in Europe and the U.S. As traceability of individual animals, literally from the farm onto the kitchen table, becomes a directive rather than a proposal, “[i]nteroperability is essential, and any animal identification system that is not compatible with the larger system will lose its value” (Look 1998, p. 3). Technology providers will be forced to weigh up the benefits and costs of standardisation, the latter of which are likely to be short-term.
6.6.2.1. From Industry-Specific to Global Standards
“According to industry experts, the growth of RFID, despite its potential, has been stymied by the inability of RFID systems to communicate with each other” (Tuttle 1997, p. 7). The problem is very much related to the manner in which RF/ID technology was applied historically. As new applications for RF/ID were conceived, lead manufacturers with the greatest expertise in that area funnelled their resources towards getting that application to market. Over time, standards were developed sporadically, and in almost every case prior to 2000 those standards (if any) were industry-specific, for instance for trucking, rail, etc. To solve this problem some manufacturers are now working towards a global open standard for RF/ID communications. Several industry-specific and global organisations are working to address RF/ID standardisation, in the hope of bringing some commonality to systems. Some of these can be found in table 6.5. More recently, new RF/ID ventures like the Electronic Product Code (EPC) initiative of the Auto-ID Centre fully appreciate the importance of standards. Part of the vision of EPC is to create a “Smart World” in which an intelligent infrastructure links objects, information and people through a computer network. This infrastructure would be based on “…open standards, protocols and languages to facilitate worldwide adoption of this network” (Brock 2001, p. 5). Before launching any type of commercial product the Auto-ID Centre is settling on a standardised architecture model.
Table 6.5 RF/ID Standards and Committees
US Military X3T6 Committee, American Railroad Association, Automatic Identification Manufacturers Small Animal Task Force (AIM), Automotive Industry Action Group (AIAG), International Airline Transport Association Baggage Identification (IATA), ISO Standards- RF/ID of Animals, ISO 11784:1996 Code Structure, ISO 11785:1996 Technical Concept.
ISO FDX B; FECAVA- European companion animals; CALTRAN (State of California Code of Regulation) Title 21- USA toll roads and traffic monitoring; ISO 10374- standard for American Association of Railroads; European CEN standard for railroads.
ISO 10536 and ISO 14443 for contactless smart cards, ISO 69873 for data carriers for tools and clamping devices, ISO 10374 for container identification, VDI 4470 for Anti-theft Systems for Goods (Finkenzeller 2001, ch. 9).
6.6.2.2. Organisations Supporting Change
The most influential standards-support group in RF/ID has been AIM. “AIM brings together products with one common capability... [and] has been liberal in including products in the definition of automatic identification” (Ames 1990, p. 5-19). AIM differs from other organisations in that its purpose is industry-wide. With such a massive potential in auto-ID there was a need “for a specialist non-commercial association to coordinate national and international education. AIM has now firmly established itself as such an association” (Smith 1990, p. 49), and with global coverage. It offers a host of services including a library of technical literature, an online web site (www.aimglobal.org), educational videos, and comprehensive exhibitions and conferences on auto-ID (e.g. SCAN-TECH); it also publishes Auto ID Today and is a cosponsor of the Auto-ID User Association, among other things (Smith 1990, p. 50). ISO has also realised the importance of RF/ID standards and together with the International Electrotechnical Commission (IEC) has sponsored a Joint Technical Committee (JTC) to accomplish some milestones. Two committees addressing the critical issues of standardisation are Sub-Committee 31 (SC31) Automatic ID and Data Capture and Sub-Committee 17 (SC17) Contactless Card Working Group. There are ways to bypass particular steps in the ISO process, but one should be aware that there are potential pitfalls to fast-tracking (Halliday 1999, p. 1).
6.6.3. Abiding by Regulations
6.6.3.1. Frequency Ranges and Radio Licensing Regulations
Manufacturers may voluntarily respect standards but they must abide by regulations. RF/ID requires the use of radio spectrum “[b]ecause RFID systems generate and radiate electromagnetic waves” (Finkenzeller 2001, p. 111). It is important that radio services of any kind do not impact one another negatively. To this end, RF/ID systems are allotted a special frequency range within which they may suitably operate. RF/ID systems designers need to comply with these regulations. It should also be noted that the spectrum available for RF/ID is a limited national resource which is managed independently by each country. For example, in Japan there is no spectrum available for RF/ID as it has been taken up by other radio services. Ames (1990, p. 6-10) and Marsh see this as a serious impediment to RF/ID which will however be rectified in the long-term. Marsh writes:
[i]n order to bring a measure of uniformity the world has recently been divided into three regulatory areas with a view to trying to get some uniformity within the areas. Uniformity will however only be achieved towards the year 2010 as it requires each country to implement the plans for that region. The regions are: (1) Europe and Africa, (2) North and South America, (3) Far East and Australasia.
Related to the issue of regulation, Geers et al. (1997, p. 4) see the major problems of RF/ID as being “the availability of sufficient radiofrequencies with adequate bandwidths, the complexity of governmental regulations and, extremely important, the interference of other users. Another aspect regarding implant applications is the potential damage of the high-frequency waves to the living tissue.” Particular applications are allotted particular frequency bands, according to the bandwidth required for an application to be successful. For example, injectable transponders require a frequency band of less than 125 kHz, whereas EAS (Electronic Article Surveillance) transponder systems in retail stores require between 1.95 MHz and 8.2 MHz. Thus RF/ID regulation can be broken down into four levels- international, national, local and application-specific, the last of which is discussed below.
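The application-to-band pairing can be captured in a simple lookup. The two bands below are the figures cited in the text; the dictionary structure and function names are illustrative assumptions, not drawn from any regulatory document.

```python
# Frequency bands in hertz, as cited in the text.
RFID_BANDS = {
    "injectable_transponder": (0, 125_000),    # below 125 kHz
    "eas_retail": (1_950_000, 8_200_000),      # 1.95-8.2 MHz
}

def band_for(application: str):
    """Return the (low, high) band in Hz allotted to an application."""
    return RFID_BANDS[application]

def in_band(application: str, freq_hz: int) -> bool:
    """Check whether a proposed operating frequency falls inside
    the band allotted to the application."""
    low, high = band_for(application)
    return low <= freq_hz <= high
```

A national regulator's allotment table plays the role of `RFID_BANDS` here: a system designer's chosen frequency must satisfy the `in_band` check for the country and application at hand.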
6.6.3.2. Application-specific Regulations
The tracking of farm animals is beginning to be stringently regulated in some countries. Among the most regulated markets for the identification and recording of animals is the European Union. Council Directive 92/102/EC of 27 November 1992 made it mandatory for certain types of livestock to be marked. In the U.S., AIM and the National Livestock Trust are playing a coordinating role with regard to farm animals. “However, there is no consensus on whether or not one system has to be used for all species, and whether or not there should be only one central database” (Geers et al. 1997, p. 29). In the U.K. farmers ear-tag their animals and record them in a central database within 36 hours of birth. Farm animals in the Netherlands have been uniquely identified since 1975 for animal health and breeding support. Farmers have the choice of plastic or electronic ear tags or injectable transponders. In the future the animal’s DNA code may be used as a unique identifier. Farmers in the Netherlands use the ISO protocol (ISO/DIS 11788-1) to exchange information with central registers. In Belgium a system called SANITEL is in operation, developed by the Ministry of Agriculture for disease surveillance and premium control (Geers et al. 1997, pp. 29-32). Ever since major outbreaks of bovine spongiform encephalopathy (BSE) in Europe, the most recent of which was in 2001, an even greater number of regulations has been introduced by government bodies.
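The ISO 11784 code structure listed in table 6.5 is commonly rendered as a 15-digit decimal identifier: a 3-digit country (or manufacturer) code drawn from the ISO 3166 numeric codes, followed by a 12-digit national code. A minimal sketch of that split, assuming this common 15-digit rendering (the standard itself defines the fields at the bit level within a 64-bit telegram):

```python
def split_animal_id(id15: str):
    """Split a 15-digit ISO 11784-style animal identifier into its
    3-digit country/manufacturer code and 12-digit national code."""
    if len(id15) != 15 or not id15.isdigit():
        raise ValueError("expected 15 decimal digits")
    return id15[:3], id15[3:]

# "528" is the ISO 3166 numeric code for the Netherlands, where
# unique animal identification has been practised since 1975.
country, national = split_animal_id("528002154987654")
```

It is precisely this shared code structure that lets a tag applied in one country be resolved against a central register in another- the interoperability that proprietary numbering schemes cannot offer.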
6.6.4. The Importance of Collaboration
6.6.4.1. Collaboration within the Firm
RF/ID systems are nowhere near as straightforward as bar code systems. With developing standards, enforced regulations and technical rules to follow, open internal collaboration within RF/ID companies themselves is paramount. Working for a new high-tech company is a challenge for entrepreneur and employee alike. Resources are limited and employees are most likely to be juggling more than one job role. When RF/ID companies were initially established, interaction between firms was still immature, with few competitors willing to share any part of their intellectual property. Thus entrepreneurs of new start-ups have to be focussed- on employing the right people with the necessary skills and experience, on motivating them to achieve company goals, on attracting investors, on securing sufficient capital to continue the development of products, and on being able to pay for on-going expenses (Ames 1990, p. 6-12). Without products, customers cannot buy any equipment from a company, and without frequent incoming sales revenue a business will eventually cease operating. This is another reason why companies start small and build up over time. RF/ID companies have traditionally begun with 5-10 employees and reached levels of 80-100 persons as customer demand increased. The initial team usually comprises experts who are technical and have general application knowledge. The employer is usually the one who builds up the reputation of the firm and makes the initial customer contacts, as well as keeping abreast of what everybody else in the industry is doing. As the start-up company becomes involved in the bidding process and wins contracts, a new interactive process begins between the firm and the customer. Ames (1990, p. 5-21) describes this creative process of product innovation in RF/ID:
The products either come into existence primarily in two ways. First, a company goes outside the ‘business or industry they are in’ to combine existing technologies in a new way, or second potential customers describe problems facing them and the attributes of various products that are needed to solve the problem and this description becomes the blue print for a totally new product. In either case, confusion about what exists, stimulate creative thought and results in a new product- as in the hypothetical example- or a new way to apply existing ones, resulting in a higher quality solution for users.
6.6.4.1. Private Enterprise and University Collaboration
As firms grow in confidence and stature, new relationships begin to take shape outside the company. The least threatening relations a technology provider can form are those with public institutions such as universities. Not only is this a positive public relations (PR) strategy, but the research conducted can bear fruit. For example, Symbol Technologies established an affiliation with Nankai University of Tianjin in China to support technology-based research. Symbol strategically chose a China-based university as a way to show local business partners in commerce and government that it is committed to solutions for the Chinese market. In addition, “Symbol Technologies has always placed a great emphasis on training and education. Some of the most important technological breakthroughs that Symbol has developed has been achieved by working closely with universities” (Picker 1999, p. 1). A number of university-based research projects involving RF/ID have also been funded by the Defense Advanced Research Projects Agency (DARPA), including investigations into the miniaturisation of RF/ID tags (i.e. the PENI tag) and landmine detection by equipping bees with RF/ID “backpacks”.
Perhaps the most proactive university-based RF/ID initiative was the establishment of the Auto-ID Centre. “The Auto-ID Centre is an industry sponsored research centre charged with investigating automated identification technologies and their use with disparate technologies such as the Internet” (Engels et al. 2001, p. 76). Already the Centre has the support of bar code associations like the Uniform Code Council and EAN International. It is also funded by major companies like Procter and Gamble, Gillette, International Paper, Sun Microsystems and Invensys, who are all keen to profit from EPC, either within their own respective supply chains or by on-selling complementary EPC technologies. Still other organisations are making contributions by participating in field trials and supporting staff learning through secondments. The Auto-ID Centre Research Labs are located within the Massachusetts Institute of Technology (MIT), the University of Cambridge, and the University of Adelaide (Australia). The labs undertake research in three domains: infrastructure, application and synthesis. Each laboratory is complementary to the others, drawing on individual established strengths. Cambridge, for instance, has a plethora of experience within its Institute for Manufacturing. Initially the research programme will be linked to the Automation and Control Group, although eventually it is hoped that there will be multidisciplinary participation from groups across the University. The Auto-ID Centre was also the host of the 15th Automatic Identification and Data Capture Institute in 2001. This Institute brings educators together from all over the world to share material on various topics, enabling the suitable realignment of undergraduate and postgraduate auto-ID programs offered by universities worldwide.
6.6.5. Patent Explosion
Patents are generally a good measure of the activity within an industry. The greater the number of RF/ID-related patents filed each month, the greater the likelihood that the technology is growing in importance. Patents also become a source of formal knowledge for firms. By keeping abreast of official patents (using publicly available databases), firms can learn about the latest developments of other companies and their core business focus well in advance of a product launch. According to RF/ID inventor Mike Marsh, who has about 200 international patent applications and is editor of Transponder News:
[t]he time to publication seems typically to be three years, therefore the patents effectively document the state of technology to within three years of the leading edge inventions. This is generally much shorter than one will find in either technical books or even commercial products on the shelves.
A visit to the Transponder News web site (http://rapidttp.co.za/transponder/) is extremely informative for manufacturers, customers, regulators, academics and other organisations. A list (with descriptive details) of recently granted RF/ID tag and transponder system patents for commercial and scientific applications can be found on the web site. A few interesting patents for human-centric applications that caught the author’s attention can be found in table 6.6.
Table 6.6 Recently Granted Patents in the U.S.A.
Patent Title | Inventors/Company | Patent Description
Implantable Biosensing Transponder | Kovacs, G., Knapp, T. (LipoMatrix)
A biosensing transponder for implantation in an organism including a human comprises a biosensor for sensing one or more physical properties related to the organism after the device has been implanted. Methods for using an implantable biosensing transponder include the steps of associating the device with an implant, including temporary implants, prostheses, and living tissue implants, physically attaching the device to a flexible catheter, sensing parameter values in an organism, and transmitting data corresponding to the sensed parameter values to a remote reader.
Method of Assembly of Implantable Transponder | Campbell, N., Urbas, D. (Bio Medic Data Systems)
An improved identification marker and method of assembling the marker is provided, which includes the steps of providing a glass vial and filling the glass vial with a quick curing liquid to a predetermined volume... The IC circuit hybrid and antenna are placed in the vial so as to be entirely enveloped by the liquid... Preferably, the cap is an anti-migration cap so that when the transponder is implanted in an animal, it prevents the transponder from sliding out.
Electronic Monitoring Device and Monitoring System | Reisman, Y., Greitser, G., Gemer, G., Pilli, T. (Elmo-Tech)
An electronic monitoring device to be attached to a subject for monitoring, at a remote location, tracking movement and/or other activities of the subject, includes a closure member incorporating an identification tag having a unique identification number stored therein. The monitoring device further includes an electronic data processor programmed to read and store the identification number of the electronic tag when the closure member is applied, and periodically thereafter, to make a determination of whether a closure member is currently attached to the subject having an identification number matching the stored identification number, and to transmit to the remote location.
Source: Adapted from Transponder News, http://rapidttp.co.za/transponder/cpat98se.html (1998)
6.6.6. Necessary Product Improvements
In 1990 Ames (p. 5-4) believed there was room for across-the-board improvement in RF/ID systems, particularly in capacity and cost. He also believed that the lack of LAN connectivity on the factory floor and the limited availability of application software were stifling RF/ID growth. By 1997 Geers et al. (p. 4) described all the major problems of RF/ID as being related to regulations. In the seven years between these observations, many incremental improvements were made to RF/ID. The problem-focus shifted as the technology started to show signs of wider applicability, yet the design goals remained relatively unchanged throughout the same period. In 1990 Ames (p. 6-9) stated that power efficiency had to be improved, while at the same time reducing the feature size of the tag (as soon as was practical) and incorporating the use of superconductive on-chip interconnection. In 1997 Geers et al. (p. 15) wrote that the main design goals were to develop optimum performance systems, to manufacture items cheaply in large quantities by putting a micro-electronic or integrated circuit (IC) in the transponder, and to produce as small a transponder as possible.
Product improvements specific to transponders that are injected into animals are presently a topic receiving attention. The new transponder itself is highly miniaturised- about the size of a grain of rice. At the same time the implant must have the ability to transmit information on the ID and body temperature suitable for both animals and humans (Geers et al. 1997, p. 106). Along with miniaturisation, low power consumption is seen as a continual mandatory improvement to the transponder. Additionally, chip movement and migration within the body of the animal or human must be eliminated. The Destron Fearing Corporation developed Bio-Bond (a porous polypropylene polymer sheath) which fits snugly on transponders, so that implants stay at the original implant location. An additional improvement (which is more of a safeguard than a technical advancement) is the ability for the transponder to resist high temperatures within the body of the animal or human. Passive radio-frequency tags should be used in this case, but if active transponders are needed, safeguards must be implemented so that the batteries do not explode or lose power when exposed to high temperatures (Geers et al. 1997, p. 62). Surgical implantation also needs to be improved. “Surgery... has been shown to create some degree of stress, and 4-7 days may be required for the animal to return to equilibrium” (Geers et al. 1997, p. 77). While this may be acceptable for animals, it is not for humans. Some of these improvements have come through major technical breakthroughs discovered by university research.
6.6.6.1. Consumer Fears
The implanting of a foreign object into an animal brings with it some health issues. First, what type of object is being implanted and does it have the capacity to cause harm to the animal? Second, if the animal is being raised for human consumption, is the final produce free from contamination? Both these issues may appear hyper-sensitive, but they have their basis in regulations. For example, a transponder’s signal must comply with Postal and Telecommunications Service (PTT) regulations. The operating frequency is presently limited to 150 kHz, but more recently the European Committee for Electrotechnical Standardisation (CENELEC) has proposed higher signal strengths. Tests are being conducted to see how animals react to this higher signal strength. Active transponders that contain batteries may also pose health risks, particularly if there is breakage. Similarly, larger devices housed in glass may be more prone to the risk of breakage. As Geers et al. comment (1997, p. 68):
[i]ntroducing foreign material into animals intended for human consumption inevitably leads to questions about the toxicity hazard for the animal itself, and the risk of contaminating the food chain. The choice of a suitable material encapsulating the electronic circuit is crucial, since it determines the level of biocompatibility as well as other mechanical and physical aspects (e.g. breakage resistance, radiowave transparency).
Even products such as readers must comply with government agency requirements. For instance, Destron’s readers are tested for compliance with the Federal Communications Commission (FCC) Part 15 Regulation for Electromagnetic Emissions.
6.6.7. Once Labelled Conspiracy Theories
While consumers recognise other auto-ID devices like bar codes and magnetic-stripe cards, RF/ID technologies are more discreet and have traditionally been used for industrial supply automation. Communications about the technology have been mostly between technology and service providers, with the average consumer still lacking an elementary understanding of RF/ID capabilities and its potential uses. One area that has, however, caught the attention of some members of the community is prospective human-centric applications for transponder implants (Witt 1999, p. 89). Conspiracy theorists believe that the ultimate security device, to be enforced by government, will be microchip implants that contain a Universal Lifetime Identifier (ULI). The ethical and legal implications of such an application have not yet been discussed widely enough, at least not in targeted forums. Once labelled conspiracy theories, such scenarios have since been shown by scientists and private enterprise to be not only possible but commercially viable innovations. Nowadays it is a little rash to label “techno-observers” as conspiracy theorists, or even worse “fundamentalists” of one kind or another. Applied Digital Solutions is just one company pioneering efforts focused on providing human chip implant services; it markets its VeriChip solution to people who would like to use it in emergency situations. With RF/ID devices or company names like Biomark, BioWare, BRANDERS, MARC, Soul Catcher, Digital Angel and Therion Corporation, it is not surprising that some religious groups and civil libertarians, among others, are very concerned.
In conclusion, this chapter has presented the dynamics of auto-ID innovation by analysing five case studies. It has shown the interplay between the various stakeholders, tracking individual technologies from their inception to their maturation. This is the first study of its kind in the field of auto-ID innovation. The results can be summarised in six phases of development, as can be seen in diagram 6.2.
This diagram should not be confused with a product lifecycle curve; it is more concerned with capturing those factors/dimensions of innovation which either inhibit or drive the momentum of auto-ID technologies specifically. The curve shown should be interpreted as depicting the innovation path, and as pointing to a common industry trajectory for auto-ID. The following chapter will focus on auto-ID product innovations, and chapter eight will explore the last phase of change, the auto-ID trajectory, in depth.
 See also Braco (1997, pp. 116-119) who discusses card issuers, financial consumers, merchants, device providers and intermediaries as stakeholders.
 After conducting four mini case studies Elliot and Loebbecke (1998) present the major roles of players in smart card implementations. These roles include: card owner, card issuer, acquirer, merchant, cardholder and manufacturer.
 The committee was made up of ten chief executive officers: five from grocery manufacturers and another five from distributor associations (Brown 1997, p. 40).
 “Progress through technology” was a 1986 advertising slogan for Audi motor vehicles.
 Some companies, like Jewel, voiced their concerns through formal letters to the Committee. In one such letter to the Symbol Committee the company president listed seven main concerns about the work, including, whether the standard defined in 1971 would soon become obsolete, that the ten-digit code would not stand the test of time and that the lack of compatibility with other codes would be a major problem. Jewel believed that technological innovation was inevitably a continual process and that it was up to the Ad Hoc Committee to make decisions on key issues (Brown 1997, p. 84).
 Apart from its technical contributions to refining the symbol specification, STAC was given the task to evangelise and encourage the adoption of the code and symbol.
 Dutchman Albert Heijn transferred the American lessons of the bar code experience to Europe. He began the European counterpart of the Ad Hoc Committee which finally led to the birth of EAN (Brown 1997, p. 195).
 See consumer reactions to bar codes in Lamoreaux (1998, pp. 17-19) who stated that the “…fears of barcodes, today, are more psychological and economic. People are afraid they will be cheated… or that they will be used for spying. Trade unions still fight barcoding if they perceive that it will negatively affect members’ jobs” (p. 17).
 Traditionally, consumers were used to purchasing goods with a price tag on the item itself. At the check-out counter, a sales assistant would then key in the price of the item and the consumer would pay the amount. The introduction of bar codes changed the way people shopped.
 The light emitted from the scanner, and the beeps heard when an item was entered, contributed to some of this customer apprehension.
 Brown (1997, p. 128) described the deep mistrust consumers held of business: “[f]rom their perspective, of course industry wanted to remove prices from items: using computer technology would enable prices to be manipulated without fear of detection”.
 While the bar code did act to increase productivity levels, some consumers could argue that they still queue at large supermarkets for the same amount of time, as fewer staff are hired, offsetting the productivity gains. Also, the need for a single item, like a packet of chewing gum, to be scanned is debatable; it would be faster to pay for the item and leave.
 Accuracy issues related to bar code were finally put to rest in 1996 when the Federal Trade Commission (FTC) published its findings on the impacts of bar codes on pricing. “Checkout scanners result in fewer errors than manual entry of prices at the checkout” (Reeves 1996, p. 41). The FTC report revealed that on average most supermarkets will undercharge rather than overcharge when an error has occurred in the price.
 There are still groups, especially some monastic communities who refuse to purchase goods that are marked with the bar code. This would surely limit their ability to survive on anything, save subsistence farming practices.
 A plethora of web sites have noted the uncanny coincidence between the number of the beast “666” (Revelation 13:18) and the left (101), centre (01010) and right (101) border codes of the U.P.C equating to 6, 6, 6. Some of the more prominent religious web sites that discuss the UPC include: http://www.666soon.com (2003), http://www.light1998.com (2003), http://www.greaterthings.com (2003), http://www.countdown.com.org (2003), http://www.raidersnewsupdate.com (2003), http://www.av1611.org (1996). At first the sites focused on bar code technology, now they have grown to encompass a plethora of auto-ID technologies, especially biometrics and looming chip implants. See also Watkins (1996).
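 The visual coincidence these sites point to can be checked against the published UPC-A module patterns: the guard patterns are 101 (left), 01010 (centre) and 101 (right), while the right-hand encoding of the digit 6 is 1010000, so each guard begins with the same bar-space-bar sequence as the digit 6. A minimal sketch (digit patterns reproduced from the standard UPC-A symbology):

```python
# UPC-A right-hand digit patterns (1 = bar module, 0 = space module),
# as defined in the UPC-A symbology specification.
RIGHT_HAND = {
    "0": "1110010", "1": "1100110", "2": "1101100", "3": "1000010",
    "4": "1011100", "5": "1001110", "6": "1010000", "7": "1000100",
    "8": "1001000", "9": "1110100",
}
# Guard (border) patterns separating the digit fields.
GUARDS = {"left": "101", "centre": "01010", "right": "101"}

# The right-hand pattern for the digit 6 begins with the same
# bar-space-bar sequence as the left and right guards - hence
# the visual likeness the web sites seize upon.
print(RIGHT_HAND["6"].startswith(GUARDS["left"]))  # True
```

Note, however, that the guards are only three or five modules wide, whereas every digit occupies seven modules; the "666" reading rests on a partial visual match, not on the guards actually encoding the digit 6.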
 Kevin Sharp (1998, p. 2) emphasises that potential bar code technology users should look at “[t]he reputation and experience of suppliers... Check out the manufacturer and the reseller. How long have they been in the bar code business?... Look for local support, or at least support you can access when you need it, in the language that works best for you... To a large degree you get what you pay for, so I feel it’s imperative to look at features and reputation first, and price last”.
 For a representative list of relevant patents in the U.S. beginning in 1995, see Palmer (1995, pp. 361-369). Kaplan (1996, p. 228) explains that in the U.S. a patent is “a government grant giving its holder the right to exclude others from making, using, or selling a particular invention for the non-renewable period of 17 years.” Usually patents need to be filed in different countries, if the inventor wishes to enjoy the same privileges elsewhere. An applicant must file for a patent, submitting in written form a description of the invention and its potential uses.
 In July 2002, TEKLYNX donated fifteen thousand dollars worth of software (CODESOFT) to the University of Ohio and another fourteen universities for education research purposes across North and South America.
 “The Ohio University Centre for Automatic Identification is the nation’s first university-based research centre devoted solely to the study of automatic identification and data capture (AIDC). The Centre was established in 1988… Industry sponsored research projects conducted at the Centre include two very comprehensive bar code symbology tests… The Centre can perform R&D work, standards comparisons as well as independent verification of other customized research results.” See http://www.ent.ohiou.edu/autoid/whatisit.htm (2002).
 Marlin Mickle, the Nickolas A. DeCecco Professor in Pitt’s School of Engineering, is working on the PENI Tag project, which he expects to be a successor to the bar code.
 See http://debut.cis.nctu.edu.tw/Epages/Research/e_barcode.htm (2002) for recent developments on the 2D bar code.
 It should be noted that many of these universities specialise in a variety of auto-ID technologies. Less than five years ago, most focused on bar code, magnetic-stripe and smart card technology; now many have started to focus on biometrics and RF/ID devices. They do not discount the value of bar code technology, but they now research multiple auto-ID technologies.
 See Purdue University, University of Ohio (IT 454 - Automatic Identification), Temple University and Michigan State University.
 Both manufacturers and VARs are able to exhibit their product innovations and attract interested customers to view a range of possible solutions. Valuable feedback is often gained from such events. The proceedings of these conferences are usually published.
 Universities are also excellent locations to store archival information as they have public libraries and other specialised facilities. At Stony Brook State University in New York an automatic identification and data capture industry archive was launched in October 2002. The AIDC 100 Archive at Stony Brook University includes “documents, financial reports, conference proceedings, market studies, periodicals, books and prototype hardware… AIDC 100 is an organisation founded in 1997 by industry leaders… the vision of the leaders was to create an intellectual gathering place for those business professionals who have made significant contributions… AIDC archive is constantly growing” (Media Relations 2002). See also “authentication” in the University of Leicester Elite Project Library, http://www.le.ac.uk/li/distance/eliteproject/elib/authentication.html (2002).
 See AIM Global (2000, pp. 1-9) for bar code badge guidelines, a technical paper.
 See also Palmer (1995, pp. 159-174).
 For an explanation on the UCC/EAN symbology see Adams (1996, pp. 1-3).
 Bert Moore was the former director of Technical Communications for AIM USA and was executive director of the Federation of Automated Coding Technologies (FACT), a major user group.
 Although they seem to have struck a reasonable alliance, “[t]he growing use of UCC-EAN standards across industries and borders continues to test the relationship between the two organisations” (Brown 1997, p. 201).
 “Joining forces are the Article Numbering Association (ANA), the standards authority for bar coding and electronic data interchange (EDI) and the Electronic Commerce Association (ECA), which offers guidance and solutions to businesses seeking to take up paperless trading” (Jones ed. 1998, p. 13).
 While EDI has matured within the UCC there are quite a few historical issues which have caused friction between EDI leaders and UPC pioneers. Brown (1997, p. 173) believes that “time… will bring new understanding and cooperation” between the two groups.
 In 2000, Hutchison reported that PSC and Symbol Technologies were embroiled in yet another patent-infringement suit over a portable bar code scanner named the Grocer e-Scan. The reporter noted that the two companies had a history of litigation.
 Some of the more prominent bar code-related legal battles include Walter Kaslow’s coupon validation system (1976), Ilhan Bilgutay’s challenge to the UPC symbol (1985) and IAMPO’s UPC definition (1992). See ‘Formal Challenges’ in Brown (1997, ch. 12).
 Magnetic-stripe can boast a 30 year stockpile of documentation. ISO and ANSI have published a plethora of information on the topic, together with IATA and ABA.
 Another field for additional data such as the expiration date (4 digits) of the card, restriction or type (3 digits), offset or PVV (5 digits) or discretionary data is available, as well as control characters for the start and end sentinel, field separator and redundancy check character.
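 To make this field layout concrete, the sketch below parses a track-2 style character string. It assumes the ISO 7813 conventions (‘;’ start sentinel, ‘=’ field separator, ‘?’ end sentinel); the field widths follow this note, and the sample data is invented for illustration only:

```python
def parse_track2(track: str) -> dict:
    """Parse a track-2 style string ';PAN=YYMM<code><discretionary>?'.
    Field widths follow the note: expiration date (4 digits),
    restriction or type code (3 digits), then discretionary data
    such as a 5-digit PVV or offset."""
    if not (track.startswith(";") and track.endswith("?")):
        raise ValueError("missing start or end sentinel")
    pan, rest = track[1:-1].split("=")  # '=' is the field separator
    return {
        "pan": pan,
        "expiry": rest[0:4],        # e.g. YYMM
        "restriction": rest[4:7],   # restriction/type code
        "discretionary": rest[7:],  # e.g. PVV or offset
    }

# Invented sample card data, for illustration:
fields = parse_track2(";4123456789012345=991210154321?")
# fields["expiry"] -> "9912", fields["restriction"] -> "101"
```

The longitudinal redundancy check character mentioned in the note is appended by the encoding hardware and is not usually part of the character string itself, so it is omitted here.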
 It is important to note, that not all applications require a standardised magnetic-stripe card format, especially for ‘closed’ systems like amusement parks. In fact there are some instances when a non-ISO design would be more appropriate, acting to increase security by non-conformity. This usually makes counterfeiting or fraudulent alterations to the card difficult (Mullen & Sheppard 1998, p. 1).
 Although the majority of this section of the chapter focuses on bank card standardisation there is an equal amount of literature dedicated to the telecommunications sector. For a historical review of credit card standards for telecommunications see Lind (1992).
 According to Essinger (1999, p. 172-173), “[t]he UK’s first cash dispensers, branded ‘Barclaycash’, were installed by Barclay’s bank in 1967. They were not strictly speaking ATMs, as their function was restricted to providing cash. They were only open for limited periods in the day and were off-line (i.e. not connected to the central computer in real time)… The first implementation in the UK of a machine which was recognisably an… ATM rather than simply a cash dispenser is regarded as having taken place on 30 June 1975”.
 “[T]he US Citibank, which pioneered the ‘magic middle card’ based on infrared technology for ATM transactions, against the mainstream of magnetic stripes... failed to gain support from other banks… and as a result Citibank today uses those same magnetic stripes that it fought so hard against” (Cohen 1994, p. 17).
 In Australia customers were only able to access funds from the ATMs of different banks in 1992. The National Bank’s corporate affairs manager was quoted as saying: “[t]he attitude of the 1980s has certainly changed for the better and it’s only a matter of time before a uniform system comes into being” (Daily Telegraph 1992). See also the notion of international ATM sharing (Essinger 1999, p. 160).
 All PANs contain an industry code for the issuer (1 digit), an issuer identification (5 digits), a customer identification (12 digits) and a check digit. It was this very field structure that enabled different banks to accept magnetic-stripe cards at ATMs, regardless of the operator. The PAN can identify the card issuer and cardholder, thus making interoperability possible via advanced card readers.
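 The PAN fields and their check digit can be sketched as follows. The field widths are those given in this note (1 + 5 + 12 + 1 = 19 digits); the check digit computation shown is the Luhn algorithm, which is the check prescribed for ISO/IEC 7812 card numbers, although the note itself does not name it:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the PAN's trailing check digit satisfies the
    Luhn algorithm: double every second digit from the right,
    subtract 9 from any doubled value over 9, and require the
    total to be a multiple of 10."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def split_pan(pan: str) -> dict:
    """Split a 19-digit PAN into the fields named in the note."""
    return {
        "industry_code": pan[0],   # 1 digit
        "issuer_id": pan[1:6],     # 5 digits
        "customer_id": pan[6:18],  # 12 digits
        "check_digit": pan[18],    # 1 digit
    }
```

Because any reader can recompute the check and extract the issuer identification locally, a card issued by one bank could be recognised (and routed) by another bank’s ATM, which is the interoperability point the note makes.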
 It should be noted however that this infrastructure was very expensive and it took about 16 years for the first one hundred thousand ATMs to be installed.
 In the 1970s and 1980s ATM volumes boomed but in the 1990s manufacturers turned their attention to adding POS functionality (Mitchell 1996, p. 57).
 For instance, in 1997, NCR installed three thousand units (ATMs) in just 150 days for Banc One (Korala & Basham 1999, p. 6-7). China seems to be the next big market for mass ATM deployment.
 Apart from the initial investment it should also be considered that ATMs also incur ongoing rental space costs (Godin 1995, p. 178).
 Murphy (1996, p. 82) outlines the intricate process by which one can only assume that the person in charge must have acquired some first hand experience previously. “Converting to smart card production is no easy task. Not only does a company need state-of-the-art printing presses, it must upgrade its plastics to a thickness that can accommodate the computer chip that makes a smart card ‘smart,’ as well as ensure the cards are temperature resilient; it needs special machines to drill holes for the chips, and another set of machines to place computer chips in those holes…” Economies of scale are necessary here.
 See also Central Bank’s Payment Systems in Eleven Developed Countries (1989) and Kirkman (1987, pp. 224-227).
 SWIFT stands for the Society for World-wide Interbank Financial Telecommunications. It was established in 1973, and by 1984 it encompassed 1,104 banks in 49 countries (Dean 1984). Dean’s article on the cashless society raises ethical issues about the power of an organisation like SWIFT.
 Without this infrastructure in place, the magnetic-stripe card would not have become as prolific as it has. Brands like Visa and Mastercard would not have had in excess of twenty million members each. See also Moore (1996, p. 41) who described the launch of the Integrion Financial Network in North America. “The network offers bank customers another avenue to pay bills, transfer funds between accounts and monitor account balances, using either the private IBM Global Network or the Internet… the 15 member banks claim as customers more than half the households in North America.” Members of the network include Banc One, Keycorp and Royal Bank of Canada, among others.
 A landmark report was written by the Council of Europe (1983) based on the notion of international recognition of national identity cards. Magnetic-stripe was one of the first technologies to enable these theoretical concepts to become a possibility.
 Numerous business people were convinced during the mid 1980s that EFTPOS would be an unsuccessful application and yet it is increasingly being used today (Essinger 1999, p. 9). For an explanation on EFTPOS and how it works see Read (1989).
 Essinger regards ATMs to be the “[m]ost visible, and perhaps most revolutionary, element of the virtual banking revolution… That it has changed our lives is incontestable. Every day, millions of people around the world in thousands of walks of life rely on the speed and convenience of cash machines to get access to the money they need” (Essinger 1999, p. 159).
 In Australia a Technological Change Committee investigated the possible changes EFT would initiate (ASTEC 1986). One of the earliest EFT trials in Australia was conducted in 1982 between the Whyalla Credit Union and the G.J. Coles company (S.A. Council of Technology 1983, pp. 21-25). The government had a role to play in regulating EFT transactions but before doing so it had to ensure that it had adequately researched the implications of the new technology. Worldwide studies were also conducted on EFT by the OECD in particular (OECD 1989; Revell 1983, pp. 108-110).
 Both banks and retailers saw the advantages that had to be gained by using financial transaction cards. Speed and security were among the most important attributes. Retailers also saw a reduction in the amount of cash-on-hand they required to handle.
 Learning about consumer spending habits through transaction history records was also important.
 This has particularly affected rural communities many of which have started their own independent local banks.
 Essinger (1999, p. 8) wrote: “…it is likely that the availability of the new technology, and the fact that someone had decided to create it, is what is determining the application, rather than the customer need for it. In effect, after the invention has been put on the market, the customer demand is created for it.” He continues by pointing out that “…the cash machine was not an instant success; people needed to get used to the idea. However, once they had, the cash machine rapidly became an essential part of the customer service armoury of any bank…” (p. 68).
 For a comprehensive overview of issues related to the invasion of privacy see Rothfeder (1995, pp. 152-162). Colton and Kraemer (1980, pp. 28-30) also provide a good summary of ‘controversial issues’ surrounding EFT technology. This study on public policy was years ahead of its time. I have deliberately avoided a detailed discussion on privacy due to the scope of this chapter. For a detailed discussion of issues on privacy see (Campbell et al. 1994; Wacks 1993; Tucker 1992; Young 1978; Federal Department of Communications and Justice in Canada 1974; Madgwick & Smythe 1974; Cowen 1972). Watts (1997) highlights that breaches in privacy have more to do with government outsourcing contracts than auto-ID itself.
 See Will’s The Big Brother Society (1983). Compare this with ‘Big Brotherdom has benefits’ (MIS 1994, p. 80): “[i]t is a mistake to believe that the information supplied to such public and private organisations, or to the tax commissioner or to your employer, is your property…” Some other publications that reference the term as related to auto-ID include: Thompson (1997), Andersen (1995), Conolly (1995), Martin (1995), Privacy Committee of NSW (1995), Smith (1995a), Vincent (1995), Crosby (1994), Stix (1994), Davies (1992; 1996), Hogarth (1987), Donelly (1986).
 See also Essinger’s (1999, pp. 184-185) discussion on 1984 vs Nineteen Eighty-Four.
For community attitudes to privacy in Australia see the privacy commissioner’s publications at http://www.austlii.edu.au/au/other/hreoc/privacy (1997). It is commonly held that the Australia Card proposition in 1986-1987 was primarily linked to the requirement for Australia to establish a Privacy Act; the ID number itself was secondary (Jones 1987b).
 See the Privacy International web site for frequently asked questions regarding identity cards http://www.privacy.org/pi/activities/idcard/idcard_faq.html (Davies 1996) and the opposition campaign to ID cards http://www.privacy.org/pi/activities/idcard/campaigns.html (Davies 1993). For an interesting case study on the Australia Card defeat see Smith (1989). This book is dedicated to “all those who fought against the Australia Card”. The Privacy Committee (1986) also wrote a report on issues to do with a national identification scheme.
See Jones (1987a) on the secret Australian government plan to push ID on citizens; Walker (1987), a feature article on the Australia Card debate; Evans (1987), who highlights just how invasive the Card would have become, interlinking all facets of life; Collier & Hill (1987), who present the power of the government to introduce the ID card; Walsh (1987) on the demise of the card; Perkins (1987); Kosmos (1987); Fewster (1986) on the potential ID card non-compliance penalties; Cumming (1986) on the declining support for Australia’s proposed national ID card; Glynn (1987); Dawes’ (1986) letter to the editor discussing how one number would reveal all; Ransom’s (1986) letter to the editor alleging that the ID card is a fraud on the people; and Hurry’s (1987) advertisement, published in the interest of the community, about the hidden clauses of the proposed Australia Card. Apart from all the media press, the government also published a number of reports on the topic of an Australia Card, for instance, Commonwealth Department of Health (1987) and Joint Select Committee (1986). See also a relevant research paper by Graham (1990) on bureaucratic politics and the Australia Card as well as a NSW Combined Community Legal Centres Group (1988) submission to the Senate on a national identification system for Australia. For a short summary of the bureaucratic issues with the Australia Card see Martin et al. (1997, pp. 27-30). For an American perspective on the national ID card debate see Eaton (1986).
 While an Australian citizen card did not make an appearance, a tax file number (TFN) eventually did in its place. See Hogarth (1997, p. 4); Parliament of the Commonwealth of Australia (1988) on the feasibility of a national ID scheme (i.e. the TFN); Davies (1992, ch. 3) on the government versus the people; and Clarke (1993) on why people are scared of the public sector.
For a discussion on PINs, see Essinger (1999, pp. 162f). Essinger is correct in highlighting that cardholders also need to adhere to the bank’s instructions never to write a PIN down. However, recent attacks against magnetic-stripe cards in Australia have focused on using hidden cameras or other equipment to steal cardholder PINs as they are entered on the ATM keypad (Smith 2002, p. 3). See also Watson (2002, p. 2).
 In 1994, fraud on Visa was about 0.4 per cent of total credit card transactions (Harris 1994). See also Newton (1995, pp. 198-201) for a report on how to reduce plastic counterfeiting and how to fight organised crime.
 For the notion of “trust” within the context of electronic commerce see Kini and Choobineh (1998). While the content of the article relates to the Internet, it is the closest discussion one will find regarding electronic commerce.
Internet banking (Yan et al. 1997, pp. 275-284) has been adopted by a technology-savvy population that appreciates the convenience of banking anywhere, anytime.
 Some countries like Singapore disclosed their agenda to abandon cash by the year 2000, thus preparing all consumers for the change, even though this did not exactly eventuate. “In France, an agreement has been signed that forms the basis of a nationwide, electronic replacement for cash” (O’Sullivan 1997, p. 57). See also Fisher (1996) and Pope (1990).
 A similar problem is faced in Australia. Harris (1994) reported that “[t]he Australian Federal Police Association (AFPA) [was] calling for national legislation to curb credit card fraud… officials find themselves virtually powerless…” In 1994, counterfeit cards accounted for $US260 million of credit card fraud worldwide, i.e. one quarter of the world’s credit card fraud. Cornford (1995) reported that Australian “[f]ederal police fear that our laws are inadequate to deal with this type of crime. The Indonesian criminal caught with the card encoder was set free on a legal technicality. Two Americans who used counterfeit cards to steal $250,000 and then sent it back to the US could be charged only with illegal transfer… A Malaysian is awaiting trial after being arrested with 77 counterfeit Visa cards. A Hong Kong criminal was jailed for nine months after using three counterfeit credit cards to get $40,000 in Sydney… The Chinese Public Security Bureau raided factories in Beijing and Shantau, which together made more than 110,000 counterfeit Visa and MasterCard holograms.” See also European fraud (Freeze 2000).
See Polding (1996, pp. 23-25) who reported on the initiative of the Association of Payment and Clearing Services (APACS) to investigate the use of smart card technology as an evolution of the discussions between Visa, MasterCard and Europay in 1994.
 As consumers would know, “[i]n most national jurisdictions, once the customer has notified the bank of the loss or theft, the customer is then no longer liable for any withdrawals made by a third party, although sometimes the liability remains if the customer has disclosed the PIN to somebody else” (Essinger 1999, p. 27).
 However, it would be essential to remember, “[t]he code applies only to services which use debit cards and personal identification numbers (PINs) to access automated banking machines and point of sale terminals in Canada. It does not apply to cross-border transactions. The code establishes a code of practice for the issuance, use, and security of PINs. It sets the general requirements for cardholder agreements, transaction records, and transaction security, and is intended to set a minimum standard which participating organisations meet or exceed. It does not preclude protection given by other laws and standards. The code deals with the theft, fraud, technical malfunction, and other losses, and requires card issuers to establish fair and timely procedures for resolving disputes” (Campbell 1994, p. 44).
 The most prominent members include: the Canadian Payments Association, the Trust Companies Association of Canada, Credit Union Central of Canada, Retail Council of Canada, Canadian Federation of Independent Business and Consumers’ Association Canada.
The Commonwealth of Australia wrote a detailed report on the rights and obligations of users and providers of EFT systems in 1986; however, much of what was documented took the form of voluntary codes of practice, as in the case of Canada and the United States.
 See Smith et al. (1996) for possible algorithmic solutions to increasing the storage capacity on a magnetic-stripe card.
 For a novel magnetic card protection system see Chu (1995, pp. 207-211).
 “Washington University has been active in magnetic information technologies since 1986… with funding from the National Science Foundation, in collaboration with Hewlett-Packard Laboratories in Palo Alto, research was begun. In 1988 Professor Ronald Indeck joined Muller after a year long fellowship in Japan. This created a strong program in both experimental and theoretical research… MISC was established in 1992 as an interdisciplinary theoretical and experimental centre focusing on fundamental recording physics and information science.” http://www.misc.ee.wustl.edu/misc_intro.html (2002).
 This is not to discount the efforts of MISC or other commercial manufacturers. There is evidence to suggest that companies are still investing R&D dollars into magnetic-stripe. For example see the new developments listed by International Plastic Cards (IPC) at http://www.ipccards.com/developments/developments_main.htm (2001).
The University of Kent began to conduct research on encoding facial images in blocks of data small enough to fit on a magnetic-stripe (Middleton 1998). See also de Bruyne (1990).
 IDTECH’s MagBar solution http://www.idt-net.com/products/mag_stripe/magbar.cfm (2000) is essentially a chipset that decodes bar codes and magnetic-stripes.
 Yet while the smart card is a far more sophisticated technology it does not mean it should be considered superior per se. See Chadwick (1999, pp. 142-143) for a discussion on why smart cards are not always the smart choice. For advantages of the chip technology see Kaplan (1996, p. 8) and Allen and Kutler (1997, pp. 5-7). See also Zoreda and Oton (1994, ch. 1).
 Shogase (1988) coined the term ‘plastic pocket bank’. He worked for the Toshiba corporation while they were developing the VISA SuperSmart Card.
For a discussion on the barriers to smart card success see Kaplan (1996, pp. 22-24) and Hill (1996, p. 1). A Gartner study in 1998 also reported that smart cards were a push technology and that, until new developments established their business value, the technology would continue to fall short of wild expectations (Essick 1998, p. 1). See Dataquest’s worldwide chip market forecast for 1997-2002 at http://www.smartcardcentral.com/research/ (1999). In 1997, M. Johnston (pp. 62-63) reported on how smart cards were poised to take off in the U.S.- it is debatable whether this has in fact happened. For a discussion on technology adoption relevant to smart card, see Schiffer (2000) who discusses why the electric automobile lost market share and the way that social behaviour stifled that development process.
 In the late 1980s, Bright (1988, ch. 8) wrote that France and Japan were leading the way followed by the U.S. Today, this geographic concentration still exists but other markets are starting to make an impact on the smart card industry, such as Hong Kong, Taiwan, Singapore, Belgium, Denmark, Spain, U.K. and Australia. See also, worldwide developments and player motivations (McKenna & Ayer 1997, ch. 3).
The excitement even attracted some traditional magnetic-stripe card manufacturers. This was especially true of the system integration specialists who now had the job of building systems that could “talk” to each other (Ferrari 1998, ch. 13). Not all auto-ID system integration companies were up to the task, however; acquiring smart card knowledge required employee retooling and training (Keenan et al. 1997, p. 35f).
 After 1996, emerging companies have primarily focused on the innovation of “security and encryption applications, operating systems and graphics for collectible cards” (Allen & Kutler 1997, p. 19).
 Hendry (1997, p. 250) suggests a T-shaped knowledge base in a smart card organisation where there are many people who have a top-level understanding of the technology while a few people will develop detailed knowledge.
 Different vendors have different operating systems (OS) at present. For instance, IBM introduced the MultiFunction Card (MFC) operating system in 1990, Bull introduced Odyssey I for the JavaCard and Gemplus has PCOS among others. For a sample list of smart card OS see Ferrari et al. (1998, pp. 2f). For an applied case study of the Java Card see Fünfrocken (1999).
 In the closing chapter of his book Hendry (1997, ch. 19) considers forward-looking strategies for several types of smart card stakeholders including manufacturers (semiconductors, masks and cards), system designers and managers as well as scheme operators. He offers some very important insights.
Part of the difficulty with smart card, besides the fact that it is a relatively new high technology, is that project requirements are most often ill-defined and keep shifting throughout the lifetime of the project. Timeframes for each phase of development are difficult to estimate, along with costs and exactly what resources are required and when. Coordinating efforts between various suppliers can also be problematic. In addition, smart cards are subject to high rates of technical change and higher levels of uncertainty than other technologies. For a deeper discussion see Fruin (1998, pp. 241-249).
 Some of the more complex issues are: “[h]ow can smart cards include multiple brand logos without confusing the consumer? Who is liable for lost and/or stolen cards and how are they replaced? Who provides customer service and how is it made seamless to the consumer? How are applications developed, certified, installed, and upgraded? How are privacy, accuracy, and security insured? How are revenues shared?” (Allen & Kutler 1997, p. 12f).
Fruin (1998, p. 246) provides a project-specific organisation chart for Toshiba and Yanagicho’s development of the VISA SuperSmart card. The team comprised 172 members at its prime, 47 of which were corporate-level personnel. Large projects such as these require decision-making power at the highest levels.
 Dreifus and Monk (1998, pp. 305-314) describe the typical job roles within various smart card organisations and departments.
 It is not always easy to mobilise resources in companies whose core products are applicable to more than just one high technology. For instance, in the case of integrated circuit suppliers, smart cards are only one technology among many that they are supplying. It is the same in the case of ISVs (Independent Software Vendors) who may be developing software for not only smart card players but also Internet-centric applications etc. It can be a dangerous proposition to freeze resources on a product-by-product basis but a fine balance needs to be struck between the two possible extremes.
 For a thorough list of smart card and system components see Hendry (1997, ch. 8-9). In the VISA SuperSmart card development, Fruin (1998, p. 243) observed that “[n]either Toshiba nor Yanagicho boasted the complex and precise component-design, system-development, and product/process capabilities required for the project. A need for these forced Yanagicho to forge alliances with other Toshiba units and outside vendors.” In addition, one company may have the capabilities to do a particular part of the design process but the sheer magnitude of the project may not afford the time to complete tasks in-house, or there are other firms that have certain core competencies that would do that particular phase more economically.
 For more recent predictions about the smart card market in the U.S., see Cagliostro (1999) who offered promising statistics from IDC and Forrester Research. Cortese (1997) also reported how the smart card market was poised to grow in the U.S.
 See also the JavaCard Forum, among others.
The period at the turn of the second millennium saw a very competitive IT&T labour market in which skilled resources were in short supply. While this has changed due to the current global economic circumstances, at the turn of the century private enterprise attempted to secure skills in the longer term by investing in universities, whether through scholarships or research funding. See also Dreifus and Monk (1998, pp. 218f) regarding the types of skill sets that are in demand.
 See http://www.smartcard.com.hk/layout.htm (2002). The Smart Card Design Centre is funded by the Innovation and Technology Commission and the Hong Kong Government. “It provides technical support to Hong Kong industry for design of smart card modules, crypto-engines, readers and chip operating systems.”
Some consortia stay in operation for over ten years, especially if their very existence is linked to supporting industry-wide stakeholders, though typically most consortia disband within a year or two as newer opportunities present themselves. Knowledge gained from one initiative is subsequently reused.
 The list of partners includes: the University of Nijmegen (Netherlands), INRIA (France), Technical University of Munich (Germany), University of Kaiserslautern (Germany), Swedish Institute of Computer Science (Sweden) and SchlumbergerSema (France). See http://www.cs.kun.nl/VerifiCard/files/partners.html (2002).
For a discussion on how to protect intellectual property related to smart cards see Kaplan (1996, ch. 9) and Bright (1988, pp. 21-23). In 1995, Douglas Taylor received notification from the patent office that his patent application had been accepted. Taylor patented the idea behind the multiapplication smart card; although it is debatable whether he really was the first to consider the idea, he was the first to patent it in the U.S.
 Cooper, J. et al. (1996) put forward a sociotechnical approach to smart card design that incorporates the user as an aid to defining requirements. See Zoreda and Oton (1994, ch. 7) on designing smart card applications and Lokan (1989).
Mitchell (1995) believes that one of the reasons smart cards have not reached their potential in the U.S. is that merchants do not accept the card. Merchant indifference towards smart card means that consumers cannot offer the payment method to purchase goods and services because the likelihood of there being an available device to read the card is very low.
 Hendry (1997, p. 250) makes the important observation that while individual applications can be built in a very short time frame (especially for closed systems), it can take two to three years for a national infrastructure to support the application to emerge and even five to ten years for a global one. Hendry’s analysis is precise: “[g]etting the infrastructure right, and making it easy to upgrade and add applications, should… be a top priority for any scheme.”
With the rise of the dot-coms, non-traditional players entered the banking and telecommunications sectors hoping to make a lot of money from online applications. Many of these companies were attracted by the inflated revenue forecasts being predicted by analysts, and their whole business was built on shaky foundations from the outset. There was little in the way of user surveys granting valuable feedback, and unfortunately millions of dollars were wasted during this time on ‘get rich quick’ schemes. For an overview of successes and failures in the smart card industry pre-1996 see Kaplan (1996, ch. 4). See also Marron (2000b), ‘incubators nurture e-com ventures’.
 For example, the Global Chipcard Alliance.
Today users are far more technically astute than they used to be. The PC, cable television, games consoles, the Internet and mobile phone, and more recently the personal digital assistant (PDA) have all contributed to a more technology-savvy society. In some ways the permeation of so much information may have been one reason why some users have resisted the change. For a discussion on social resistance as it pertains to smart card see Lindley (1998, pp. 144-145). See also Keenan et al. (1997, pp. 26-34) for what consumers think about smart cards.
 The Internet has played a large role in granting people access to information that was otherwise in hard-copy form in limited locations, such as public libraries. Today there are daily reports on worldwide smart card activities. See Smart Card Central online at http://www.smartcardnews.com (1999).
 Some of the more prominent journals include: Card Technology Today (now CTT), Report on Smart Cards, Smart Card and Systems Weekly, Smart Card Monthly, Smart Card News and Smart Cards and Comments.
 The association was a co-founder of the very successful CardTech/SecurTech conferences. See also http://www.scia.org (2001) and http://cardtech.faulknergray.com/scia.htm (1999). In 2001 SCIA had over 70 members in total contributing to varying capacities. See also EuroSmart, the European smart card industry association at http://www.eurosmart.com/ (1999).
 See also the SmartCard Developers Association, the International Card Manufacturers Association (ICMA) and the Smart Card Club.
 According to ISO, “[s]tandards are documented agreements containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics, to ensure that materials, products, processes and services are fit for their purpose” (Dreifus & Monk 1998, p. 29). See also chapter three of the same text and appendix B.
 Unlike magnetic-stripe cards where proprietary schemes could possibly increase the security of applications in particular scenarios, smart cards have in-built security features and standardisation is almost always desirable.
 Today all three technologies can be utilised on the same card- “the information... can be accessed by reading the chip, swiping the magnetic stripe, or making an imprint from the embossing” (Dreifus & Monk 1998, p. 31).
 ISO 7816 contains seven parts stipulating guides to physical characteristics, dimensions and locations of the contacts, electrical signals and transmission protocols, inter-industry commands, application identifiers and data elements for interchange.
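The inter-industry command structure from part 4 of the standard can be sketched briefly. The following fragment is illustrative only (the AID value is hypothetical); it assembles a short-form command APDU of the kind exchanged between reader and card:

```python
# Sketch of the ISO 7816-4 command APDU layout (short form):
# a four-byte header CLA INS P1 P2, then optional Lc/data and Le fields.
def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Assemble a short-form command APDU as raw bytes."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc field, then the command data
    if le is not None:
        apdu += bytes([le])                 # Le field: expected response length
    return apdu

# Example: SELECT by name (AID) - CLA=0x00, INS=0xA4, P1=0x04, P2=0x00
aid = bytes.fromhex("A000000003")           # illustrative application identifier
apdu = build_apdu(0x00, 0xA4, 0x04, 0x00, aid)
assert apdu.hex().upper() == "00A4040005A000000003"
```

The header bytes identify the command class and instruction; the optional body carries the command payload and states how many response bytes are expected.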
 For a complete list and description of ISO standards related to smart cards see Hendry’s (1997, pp. 253-258) appendix, “Standards”, Hegenbarth (1990) and Devargas (1992, ch. 3). See also Ferrari et al. (1998, ch. 3) which includes a discussion on standards and specifications, especially ISO 7816, CEN726 (the ETSI version), GSM, EMV (MULTOS), PC/SC, the OpenCard framework (Macaire 2000), IATA Resolution 791, SEIS (Secured Electronic Information in Society), Cryptoki, CDSA (Common Data Security Architecture), PC/SC Workgroup, and MASSC a generic architecture for multiapplication smart cards (Tual 1999, p. 52). Rankl and Effing (1996) cover the technical details of standards.
EMVCo was established by the EMV alliance in 1999 to administer EMV standards for debit/credit cards. The newly published CEC (Chip Electronic Commerce) and the existing SET (Secure Electronic Transaction) are combined in the new EMV specifications (D. Jones 2000b). See the EMVCo web site http://www.emvco.com/ and SET http://www.setco.org/ (2000). It should also be noted that new e-purse standards have emerged (not in competition with EMV but at another layer of detail) called CEPS (Common Electronic Purse Specifications) and TAPA (Terminal Architecture for PSAM Applications), PSAM standing for Purchase Secure Application Modules. “The PSAM is a device that performs security functions during an electronic purse purchase transaction. TAPA provides a structure for terminals that can process single or multiple applications” (D. Jones 2000b).
 These levels may pertain to the physical card itself, the contact pads, the card reader, the interface, the Application Programming Interface (API), the application itself, even card management.
 According to Dreifus and Monk (1998, p. 46) changes in standards are “…a result of the natural evolution and the maturation of the technology”.
Argy and Bollen (1999) argue that “[t]he general commercial law operating in Australia is sufficient to govern e-commerce, but it is sometimes difficult for lawyers and judges who are not familiar with emerging technology to apply traditional legal concepts” (p. 56). In 1999 an Electronic Transactions Bill was introduced into Parliament to encourage businesses to take advantage of electronic commerce capabilities.
 “Regulation E was promulgated by the Federal Reserve Board as the implementing regulation for the Electronic Fund Transfer Act of 1978. It is designed to protect consumers and defines the right and obligations of consumers and ‘financial institutions’ with respect to electronic transaction affecting consumer accounts” (Barr et al. 1997, p. 70).
 For an explanation on multiapplication cards see Hendry (1997, ch. 16), Allen and Kutler (1997, pp. 12-13), and J. Elliot (1999), Schaumüller-Bichl (1987) and Piller (1987).
 According to Barr et al. (1997, p. 78), the following issues need to be considered: “is the issuer of a SVC going to be treated as a bank for federal or state purposes; will there be export control restrictions because of the encryption used in the smart cards; and how will general commercial law principles which have evolved in connection with old-style payment systems apply to smart card.”
 For a discussion on the institutional and economic implications of stored value see Crowley (1996). See also Browne and Cronin (1995, pp. 101-116) on the impact of electronic money.
For a comprehensive discussion on regulations, legal and privacy issues as they relate to credit cards, debit cards and SVCs, see Owens and Onyshko (1996). This 37-page report has over 160 references and is among the most up-to-date publications on the topic by a legal firm. The report can also be downloaded from the web at the following address, http://www.smythlyons.ca/it/credit/index.htm (1999).
For privacy issues as they pertain to smart card see Lindley (1998, pp. 132-142). See also Barr et al. (1997, pp. 73-78) and Vincent (1995). A case study on the Ontario Smart Card Project can be found on the Information Policy Research Program (IPRP) web site http://www.fis.utoronto.ca/research/iprp/sc/ (2002). The site contains a number of useful press clippings and articles on public policy and smart card that offer another perspective. Included in this site are links to Roger Clarke’s articles on public policy issues related to identification. For Clarke’s main publications see http://www.anu.edu.au/people/Roger.Clarke/DV/RogersDVBibl.html (1997). The Privacy Committee of NSW (1995) also contributed a report on smart cards and named it ‘Big Brothers little helpers’. Following this report the Human Rights and Equal Opportunity Commission of Australia published another report on smart card implications for privacy, compiled by the Privacy Commissioner (1995).
 For a brief introduction into consumer acceptance issues see Bright (1988, pp. 145-149). See also Card World (1990, pp. 42-45) for a specific case study on early cultural resistance to plastic cards in Italy in the 1990s and Radigan (1995), ‘consumers are lukewarm on smart cards’. Svigals (1987, ch. 16) is one of the first authors to discuss the potential societal impacts of smart card as is C. P. Smith (1990, ch. 9). For key strategies and considerations for user acceptance of smart cards, see Lindley (1994).
 See Branscomb (1994) Who Owns Information?, for a thorough discussion on privacy versus public access to information. On the topic of smart cards (p. 70) she provocatively questions: “[b]ut are we willing to have so much medical information about ourselves contained in so little electronic space, with possible access not only to us and the doctors treating us, but as well to our insurance companies, our employers, and the FBI, not to mention that bizarre world of computers voyeurs?” Additional texts that should be referred to include: Cuddy (1994), Brin (1998), Davies (1996, ch. 7; 1992, ch. 4).
 See Larson (1992). His book titled, The Naked Consumer, adds an interesting perspective to how private information has become a public commodity. See also Flaherty (1979) for a discussion on privacy and government data banks. For community attitudes towards privacy see O’Connor (1995). Clarke (1993) sheds some light on the topic of why people are generally afraid of the public sector.
 Chaum (1992) discusses how to achieve electronic privacy in a way that will see control of personal information return to the individual. Together with his colleagues at the Dutch nationally funded Centre for Mathematics and Computer Science in Amsterdam, Chaum puts forward new cryptographic solutions.
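One of Chaum’s best-known constructions is the blind signature, which allows a bank to sign a value without ever seeing it. A minimal sketch over a toy RSA key conveys the idea (the parameters below are illustrative only and far too small to be secure):

```python
# Chaum-style RSA blind signature sketch (toy key, not secure parameters).
import math

# Toy RSA key: modulus n = p*q, public exponent e, private exponent d
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # modular inverse of e mod phi

def blind(m, r):
    """User blinds message m with a random factor r coprime to n."""
    assert math.gcd(r, n) == 1
    return (m * pow(r, e, n)) % n

def sign(blinded):
    """Signer signs the blinded value without learning m."""
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    """User removes the blinding factor, recovering a valid signature on m."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(m, sig):
    return pow(sig, e, n) == m % n

m, r = 42, 99                  # message (as an integer < n) and blinding factor
sig = unblind(sign(blind(m, r)), r)
assert verify(m, sig)          # the signature verifies on the original message
```

The signer only ever sees the blinded value, yet the unblinded signature verifies against the original message- the property that lets control of personal information return to the individual.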
 For a discussion on card management see Ferrari et al. (1998, ch. 12). See also multiple application cards, including a discussion on branding and ownership issues in Barr et al. (1997, pp. 64-68).
For a closer look at how some religious groups view the ULI see Wilshire (1992; 1993), Relfe (1982; 1981), Smith (1980; 1985). Common beliefs can be summarised as the expected formation of a one-world government and the dawn of a new monetary system that will give birth to a new world order. For a broader perspective of how the use of technology is understood within a religious context see Noble (1999), Hensley (1998), Lucas (1996), Fisher (1990), Klinken (1977), Jeeves (1972). See Stahl (1996) especially, God and the Chip: Religion and the Culture of Technology.
 Refer to Drudge (1998), http://www.warroom.com/natid.html. It is not the technology itself that most people fear but what it represents and how the capability of unique identification can be used by anyone who has access to the information, particularly potential totalitarian governments or regimes.
 While there are many advantages gained by the use of multiapplication smart cards for government and non-government applications, more research needs to go into what these advantages mean in real terms. The notion of many ‘little brothers’ versus one Big Brother has been put forward in opposition to multiapplication cards. While the intent of the issuer may be noble, i.e. to offer a better service to its customers, no one can guarantee that the information will not be used ‘against’ an individual. These are not conspiracy theories but lessons from history. One of the most infamous uses of dossiers against a people was that of the Nazis against the Jews (Black 2001). See also Evan (1987) who writes with reference to the proposition of an ID card in Australia: “I can understand why many people- particularly those who have lived under totalitarian regimes or fled from Nazism- oppose the Australia card”.
 As Burnell has accurately stated, “[f]our years ago, if you talked about a biometric, it was new to just about everybody… That’s just not the case anymore. Resellers are seeing the benefits of biometrics for certain applications” (Burnell 1998, p. 2).
 Estimates in 1990 (Parks, p. 98) indicated that there were over one hundred firms, institutions and government agencies that had substantial activity in the area of Automatic Personal Identification (API).
 For an extensive list of biometrics companies see http://www.findbiometrics.com/Pages/ (2002) and the Google web directory at http://directory.google.com/Top/Computers/Security/Biometrics/Companies/ (2002).
 While integrators and support technology providers play an important role in biometric implementation, the actual service provider is equally responsible for the longer-term operational success of the application. Realising this, the Department of Social Services in Connecticut made extensive use of cross divisional workgroup teams to ensure a buy-in of the new process by DSS staff first. The work group teams focused primarily on process integration (Connecticut Dept. 1998, p. 1).
 For instance, in 1999, biometrics provider Sensar had seven high profile partners including: Citibank, OKI, Siemens Nixdorf, Fujitsu, NCR, LG Electronics and WANG Global. See http://www.sensar.com/partners/partners.stm (1999).
Even government departments are said to stay away from bleeding-edge technologies that are not on the evaluated products list (EPL). Products need to undergo thorough testing before they are adopted (Withers 2002, p. 78).
An example of an OEM agreement in smart card is that between the Australian company Intellect and NCR. Some of Intellect’s smart card system components are NCR-badged (Bell 1997, p. 37). The NCR brand name is better known than that of Intellect, and NCR likes to promote a uniform brand image to its customers so that it appears able to provide an end-to-end smart card solution.
 As has often been stated, “[t]his makes it difficult to link biometric technologies from different vendors, freely substitute biometric technologies, or use a single technology across multiple applications…” (Lawton 1998, p. 18).
 Historically, algorithms were hardcoded into custom biometric applications (Tilton 2000, p. 130).
 “The existence of a single industry standard will settle the confusion caused by competing specifications and hasten the adoption of biometric technology for a wide range of commercial applications” (Tilton 2000, p. 132). Standards play a strategic role in deregulating the industry and making it a more competitive field, granting customers a greater variety of choice.
Lazar (1997, p. 3) believes that biometric technology is no different from any other new technology. Initially, there are few standards and most systems are proprietary, contributing to a lack of standard infrastructure for storing and transferring the data captured.
 The important features organisations seeking to adopt biometric technology should look for are outlined by Liu and Silverman (2001, p. 32). These include: “the biometric’s stability, including maturity of the technology, degree of standardisation, level of vendor and government support, market share, and other support factors. Mature and standardised technologies usually have stronger stability.”
Manual standards, for instance, have existed since the 1920s, when the FBI (Federal Bureau of Investigation) in the U.S. started processing fingerprint cards. These standards ensured completeness, quality and permanency. In the 1980s another standard, the Minimum Image Quality Requirements (MIQR), was devised to usher in the new live-scan fingerprint devices. Eventually the FBI allowed virtual fingerprint cards to be submitted electronically, and a new set of standards had to be introduced, including “comprehensive guidelines on the required message formats and image quality standards” (Higgins 1995, p. 2). Finally the FBI transitioned to the Integrated Automated Fingerprint Identification System (IAFIS). Higgins observed that many of the existing standards had corollaries in the electronic world: they did not just disappear, but were carried over. For example, ANSI/NIST-CSL 1-1993 describes the record types associated with digital fingerprint transmission.
 The speech recognition community has already developed Speaker Verification API (SVAPI) and the National Security Agency (NSA) sponsored the development of Human Authentication API (HA-API) in 1997. See http://www.bioapi.org (2002).
 Several specifications were published by ANSI, the International Computer Security Association (ICSA) certified biometrics products for the first time, and AIM USA began undertaking biometrics efforts along with the formation of the International Biometrics Industry Association (IBIA).
Ironically, Microsoft later dropped out of the race to pursue its own super-interface standard.
 See the importance of the BioAPI standard in Dunstone (2001, pp. 351-354).
 An example of a draft level standard is the Biometric Exchange File Format which defines how to store and exchange data from a variety of biometric devices (Liu & Silverman 2001, p. 30).
Some of these standards activities include the INCITS M1-Biometrics Technical Committee, Common Biometric Exchange File Format, ANSI INCITS 358-2002 Information Technology- BioAPI Specification (Version 1.1), Human Recognition Services Module (HRS) of the Open Group’s Common Data Security Architecture, ANSI X9.84-2000 Biometrics Management and Security for the Financial Services Industry, ANSI/NIST-ITL 1-2000 Fingerprint Standard Revision, AAMVA Fingerprint Minutiae Format/National Standards for the Driver License/Identification Card DL/ID-2000, Part 11 of the ISO/IEC 7816 standards, and the NIST Biometric Interoperability Performance and Assurance Working Group. For an explanation of each of these see http://www.itl.nist.gov/div895/biometrics/legislation.html (2002) and http://www.ncits.org/tc_home/m1.htm (2002).
 “The IBIA focuses on educating lawmakers and regulators about how biometrics can deter identity theft and increase personal security” (Kroeker 2000, p. 57). The IBIA has established a strong code of ethics for members to follow.
 BIOTEST is a European project aimed at developing standard metrics for measuring/comparing the performance of biometric devices.
 See http://www.afb.org.uk (2001). The AfB want to be considered an international authority on biometrics. “Whereas other industry organisations are mainly designed for biometric industry companies, the AfB’s membership will continue to be a broad church comprising biometric suppliers, end users, government agencies, academics and consultants” (Lockie 2001).
This list was obtained from http://www.biometrics.org/html/sites.html (1998). See also http://directory.google.com/Top/Computers/Security/Biometrics/Organizations/ (2002) for a more complete directory of associations. Some informative sites that are not that well known include the Southern California Association of Fingerprint Officers, dedicated to scientific investigation and identification since 1937, http://www.scafo.org/ (2002), the Association for Biometrics in the UK http://www.afb.org.uk/ (2002), the TeleTrusT Biometric Group in Germany http://www.teletrust.de/default.asp (2002) and the Biometric Institute in Australia http://www.biometricsinstitute.org/ (2002).
Lawton (1998, p. 18) makes an interesting observation about biometric technologies, stating that “[s]ecurity technologies start with the government, and work their way down to industrial and then finally to personal applications”. This is true of most auto-ID techniques.
The Consortium was established in 1992 (its charter formally approved in 1995) and meets to promote biometrics, create standards and relevant protocols, provide a forum for information exchange between stakeholders, encourage government and commercial interaction, run workshops linking academia and private industry, and address ethical issues surrounding the technology, among other things (Alyea & Campbell 1996, p. 2). It has quite a broad agenda.
The U.S. government became especially interested in biometrics in the 1970s, when it commissioned Sandia Labs to compare various biometric identifiers. The report concluded that fingerprinting was more accurate than the other techniques. So influential were the findings of the government-commissioned report that “[t]he impact of the study was to shift focus on fingerprint technology. Because of this early emphasis on fingerprint technology, the years since 1970 have produced a large body of research and development in fingerprint identification algorithms and integrated systems” (Ruggles 1996, p. 8). Thus it is not surprising that the U.S. government, more than twenty years later, invested time and money in the establishment of the Biometrics Consortium.
 See http://www.csee.wvu.edu/citer (2002) and http://www.wvu.edu/~forensic/ (2002). “The goal of CITeR is to further the development of biometrics through new technologies research, interdisciplinary training of scientists and engineers, and facilitation of the transfer of this technology to the private and government sectors” (Dobbs 2001, p. 2). According to the CITeR web site, it is the first National Science Foundation Industry/University Cooperative Research Centre focusing on biometrics.
 See the Pattern Recognition and Image Processing Lab web site which is maintained by Arun Ross and Anil Jain, http://www.cse.msu.edu/rgroups/prip/ (2002). For a list of biometrics publications, including patents, interviews, books and conference proceedings that have come from MSU see http://biometrics.cse.msu.edu/publications.html (2001).
 See http://www.engr.sjsu.edu/biometrics/ (2002). “The resources of the Test Centre have been brought to bear on substantial questions that have been impeding the evolution of the industry… It is important to recognise that the Test Centre works with people in industry and in government to optimise resources and assist in the development and enhancement of the industry as a whole.”
Laws at different levels should be considered, including at the constitutional, federal and state levels.
 Woodward (1997, p. 1487) argues that “[w]e do not need a new “Law of Biometrics” paradigm; the old bottles will hold the new wine of biometrics quite well.” See Miller (1971), especially the chapter on the federal government’s handling of information.
In 1998 Mexico and Brazil followed several other countries when their national parliaments officially decided to use biometric technology to secure the voting process (Bunney 1998b, pp. 2f).
 This is not to say that governments are ignoring legislative impacts of the technologies they are using to facilitate citizen services. Rather, it seems that government choices in technology are driving legislation in some states to enable the deployment of more of the same. Wayman (2000, p. 76) supports this argument: “[e]ncouraged or mandated by federal legislation, governmental agencies at all levels have turned to technology in an attempt to meet… requirements.”
See Wayman’s (2000, pp. 76-80) important study on federal biometric technology legislation covering driver licensing, immigration, employment eligibility, welfare and airport security.
 Perhaps the fundamental question is whether or not a government requirement to record a particular biometric is in breach of one’s legitimate right to privacy (O’Connor 1998, p. 8).
See relevant federal court cases Katz v. United States, Schmerber v. California, Rochin v. California, Davis v. Mississippi, United States v. Dionisio, United States v. Sechrist, Perkey v. Department of Motor Vehicles (O’Connor 1998, pp. 8-9).
Incidentally, O’Connor’s finding, which looks at the issue purely from a legal perspective, is not in contradiction with the pure definition of the “mark” of the beast (Revelation 13:17). In the Greek (New Testament), the “mark” is described as a “charagma”, which is not usually considered a surface feature but an incision into the skin. For a complete definition see Michael (1998, p. 278, ft. 3).
For example, in the U.S. changes to Regulation E in 1994 granted citizens limited liability for EBT (Electronic Benefits Transfer) at the federal, state and local government level. “The Government Office of Accounting (GAO) projected fraud losses as a result of the Regulation E amendment, in the vicinity of 164 million and 986 million dollars” (Fuller et al. 1995, p. 8). In another example, in the U.K. the National Audit Office (NAO) reported that one in ten welfare claims is fraudulent. In 1995 the NAO estimated that 561,000 people made fraudulent Social Security claims at a cost to the government of 1.4 billion U.K. pounds (SJB ed. 1996b, p. 1).
Among the most versatile biometrics used to show criminal activity are fingerprints and DNA. See http://www.biology.washington.edu/fingerprint/dnaintro.html (Brinton & Lieberman 1994). O’Connor (1998) has suggested that guidelines be set up for biometric records, such as in the case where an arrest does not lead to a conviction. See also the national DNA database established by the FBI (Herald Tribune 1998, p. 7). The database is similar to that launched in the U.K. in 1995, which has matched 28,000 people to crime scenes and made 6,000 links between crime scenes.
 As a result of the September 11th attacks, the U.S. moved quickly to create several Public Laws. Relevant to biometrics are Public Law 107-56 and Public Law 107-71. The former describes the appropriate tools required to intercept and obstruct terrorism and the latter focuses on introducing emerging technologies like biometrics for airport security (including passengers and airport personnel). See http://www.itl.nist.gov/div895/biometrics/legislation.html (2002). See also Snyderwine and Murray (1999).
 When comparing the mandatory recording of a biometric feature against the innocent loss of lives in a terrorist attack, biometrics as a ‘human rights violation’ diminishes in importance. However, “[w]hile some people have revised their opinions about the invasiveness of various biometric techniques in light of September’s tragedies, the privacy debate continues throughout the US. If this hurdle is to be overcome, accurate information and education will still be required” (Watson 2001).
The legislative process to get a bill through parliament can take a long time. In the case of the Connecticut DSS (Department of Social Services) it took three years for welfare recipients (those on general assistance (GA) and Aid to Families with Dependent Children (AFDC)) to be digitally fingerprinted. Jeanne Garvey, who worked on the legislation, said the process was unexpectedly difficult. She is quoted as saying “I didn’t know the process or the key people, but I know one thing- if you want to get something done you go to the top” (Storms 1998, p. 2). The article by Storms on Garvey shows the complexity of human relationships in these types of projects. One is left to ponder whether Garvey’s endeavour to reduce DSS fraud turned out to be a self-seeking journey to topple her opponents. Garvey says: “[i]f you want something badly enough, you have to be in people’s faces a little bit harder”. Perhaps however, it is not about wanting something badly enough, it is about doing the right thing by citizens, since as a senator you are acting on their behalf. Garvey continues: “I had to baby-sit this thing like a hawk… the thing I learned through this whole experience was never, never, never give up… these are once-in-a-lifetime type things” (Storms 1998, pp. 3-4).
For a thorough explanation of the notion of biometrics as privacy’s foe within the biometric literature see Woodward (1997, pp. 1485-1487). He also discusses the notion of biometrics as privacy’s friend (pp. 1488-1489). See also http://www.dss.state.ct.us/digital/privacy.htm (1998). Dunstone (2001) describes the opposing camps in another way: those users who believe that using biometric technology carries no downside for privacy, and those who would only use biometrics in extremely limited circumstances (if at all). He writes: “[b]oth sides have salient points to back up their views. However there is significant middle ground which deals with the responsible and pragmatic use of biometrics”.
 See Computing (1999), ‘Why the fear of biometrics?’ and Moskowitz (1999, p. 85). For the risks associated with biometrics see McMurchie (1999, p. 11).
 See http://dlis.gseis.ucla.edu/people/pagre/bar-code.html (Agre 2001). Agre argues: “[f]ace recognition systems in public places… are a matter for serious concern. The issue recently came to broad public attention when it emerged that fans attending the Super Bowl had unknowingly been matched against a database of alleged criminals…” (p. 1). See also Lockie (2001b) and Scholtz & Johnson (2002, p. 564). Agre provides the most extensive list of web resources for both sides of the debate. His concerns about facial recognition are similarly voiced by Rosenweig (2000). In Hong Kong, Mathewson (1998) reports how hair testing helps detect drugs in school students. In this case, if a sample of hair was retained for DNA records it would be unethical.
 Davies is adamant, “[w]e would go for outright prohibition on the transfer of biometric data for anybody, for any purpose. If I give my biometric data for a specific purpose then it is locked-in, for all time, for that purpose. I cannot give my consent for its transfer and no one can force, or request for access to that information” (Roethenbaugh 1998, p. 2).
The U.S. social security number (SSN) introduced in 1936 is an excellent example of function creep (Hibbert 1996, p. 686). It ended up being used by the banking sector, among numerous other uses. “The risks to privacy therefore do not lie in data by themselves, but in the way in which they are concatenated- or, more generally, ‘processed’ or ‘handled’- for some specific purpose” (Sieghart 1982, p. 103).
 Jim Wayman, head of the National Biometrics Test Centre at San Jose State University, says that biometric systems are not perfect. He notes that 2% to 3% of the population cannot use them at a given time: “[e]ither they don’t have the (body) part or the part doesn’t look and work like everyone else’s, or something is just off” (Weise 1998, p. 2).
 Service providers are aware of people’s privacy concerns and are conducting trials before implementing fully operational biometric systems to gauge the amount of end-user resistance. For example, when Nationwide considered using iris identification, a spokesman said: “[i]t’s a very unknown area, and we want to see what the reaction is like and whether or not it is commercially viable” (Craig 1997, p. 3). What trials have discovered is that in general, “[t]he less intrusive the biometric, the more readily it is accepted” (Liu & Silverman 2001, p. 32). However there are certain groups such as religious and civil-liberties groups that have rejected the implementation of any biometric technology altogether.
 See Woodward (1997, pp. 1489-1490) and the idea of biometric centralisation versus balkanisation.
 For a discussion on the S. 269 bill that was put forward in 1996 to Congress see Williams (1996, pp. 2-3).
Wayman states that those people who propose, design and implement biometric solutions for government applications are sympathetic to citizen concerns about potential breaches of privacy. This is likely to be true, but as vigilant as the technology providers may be, there are defined limits to the number of hours and resources any one company can dedicate to a project. In a perfect world a perfect biometric solution could operate without any qualms, but the world we live in is not perfect, and no one can categorically state that a system is foolproof even if the teams working on the solutions do their very best. See Dale (2001), who writes that privacy concerns are an issue for biometrics used for law enforcement. The challenge is in the sharing of sensitive data between the relevant agencies.
It is interesting that Davies notes, in response to Roethenbaugh’s comment on the movie, that “[w]e are always thinking about The Terminator at some point in the future”. This can be related back to the movies referenced in the Literature Review as possible insights into the future.
 According to the Privacy Committee of Canada (1992), current and potential uses of genetic testing (i.e. acquiring a DNA sample) include: workplace testing, screening associated with human reproduction, screening as part of basic medical care, genetic screening to determine the right of access to services or benefits, forensic DNA analysis in criminal investigations and testing for research (pp. 16-25). For example: “[e]mployers (both public and private sector) may wish to identify “defective” (less productive) or potentially defective employees or applicants through genetic screening” (p. 16). “Governments may one day wish to test persons to see if they are genetically suited to have access to certain services (advanced schooling, immigration or adoption)… or benefits (disability payments)” (p. 20). “Forensic analysis identifies victims and connects suspects to crimes. In about one-third of the cases in which it is used in the United States, it exonerates suspects by showing that their genetic samples do not match samples taken from a crime scene” (p. 21). While the Privacy Committee of Canada offer a number of recommendations, one can only begin to ponder on the potential privacy issues linked with such widespread use of DNA. An incorrect record entry could affect an individual’s life indefinitely.
According to the Sandia report, retinal scanning had the most negative client reaction when compared to other biometric techniques. The “users have… concerns about retina identification, which involves shining an infrared beam through the pupil of the eye” (Ruggles 1996, p. 7). See also Gunnerson (1999), ‘Are you ready for biometrics?’
 Davies (1996, pp. 236-239) describes something similar to this in his section entitled the Future of Fusion.
 For example, the stigma that biometrics is for law enforcement has some users opposed to being fingerprinted even for physical access control applications (Lazar 1997, p. 2). When biometrics for social security services was first proposed in the state of Connecticut to say it was controversial “…would be an understatement… Public perception and the association of fingerprinting with the criminal element was pervasive” (Connecticut Dept. 1998, p. 1). But this in itself did not stop its implementation.
 See some religious publications (in Greek) that are against some applications of auto-ID technology including: Hristodoulou (1994), Kontogiannis (1994), Witness (1993), Moulatsioti (1991), Greek Herald (1988) and Athonite Monks (1986). Moulatsioti, now an Abbot of an Orthodox monastery even produced an album with a hit song titled ‘say no to the chip’. He also dedicated a whole issue of his Orthodox Witness periodical, 18(109), in opposition to financial applications of auto-ID. See especially April-June (2000, pp. 2-9, 46-68). Articles are often published against auto-ID, especially smart card.
 Dunstone (2001), the executive director at the Biometrics Institute also adds “[p]ublic concerns over biometric use should be taken seriously. It is particularly important that these issues are openly recognised as valid, both by the biometric vendors and by system implementers, if they are to reduce the risk of adverse public sentiment, particularly for those systems that are intended for wide scale deployment.”
 See Jain, A. et al. (1999, p. 35): “[a]ny biometrics-based technology is traditionally perceived as dehumanising and as a threat to an individual’s privacy rights”. Sims (1994) discusses the decriminalisation of the fingerprint.
 Another strategic plan to change public perception was the publication of the Digital Imaging Fact Sheet with answers to frequently asked questions (FAQ). In addition to this an Internet webpage was also set up.
 The Federal Privacy Commissioner and the president of the Australian Council of Liberties have expressed concerns over privacy implications for an Australian passport based on face recognition. The response has been “whether we like it or not, it’s going to happen” (Withers 2002, p. 79).
 “Sandia envisage multiple biometrics being used for ultra-secure physical access control applications in the future. They are working on a system that simultaneously applies facial, voice and hand geometry checks” (SJB ed. 1996, p. 1).
Multimodal biometrics may be convenient, but there still seem to be a number of privacy issues that have not been considered. It is regularly expressed that “[c]ivil libertarians worry that we’re moving toward a world where our privacy is the price of convenience” (Weise 1998, p. 1).
 In 1997 Geers et al. (p. 90) identified only “ten manufacturers of passive electronic identification transponders for animals (subcutaneously injectable, bolus, eartag).” Some of the companies on this list included AVID, DataMars, Destron/ID and Euro-ID/Trovan. Within a space of one to two years, this figure more than tripled to include companies that specialised in other applications apart from transponders for animals. Some of these companies include: Amtech Corporation, Checkpoint Systems, Cochlear, Electronic Identification Devices, Elmo-Tech, HID Corporations, Identichip, LipoMatrix, Tagmaster, and Trolley Scan. More recently, the potential of RF/ID has drawn many new companies to the technology, especially for supply chain automation and the tracking of humans and animals.
 “At the end of 1988 there were approaching 500 companies which either manufacture or supply auto ID technologies and which were members of an AIM association somewhere in the world” (Smith 1990, p. 49). This figure of 500 includes companies involved not just in RF/ID tags and transponders but other auto-ID devices as well, i.e., the whole auto-ID industry. This figure should not be misinterpreted or quoted out of context.
 More recently however, some efforts to standardise on certain criteria have begun.
 As Gerdeman has precisely captured, “[s]tandards have been a cornerstone to the computer revolution and the identification community. Without standards the user community would have significant troubles in communicating with their constituents, gaining significant productivity from common capabilities, or having a point of comparison reflecting the views of the experts” (Gerdeman 1995, p. 45).
 “[S]ystems of one vendor must be compatible with those of another, and must additionally operate under both foreign and domestic regulations. Efforts to develop standards for RFID and various applications are continuing” (Scharfeld 2001, p. 9).
This conflict in RF/ID equipment is prevalent in the microchipping of domesticated animals. One politician in Taipei called the microchipping of animals the “joke of the century”. Shu-ling (2001, pp. 1f) explains that “… electronically tagged dogs haven’t been reunited with their owners because of the poor quality of some ID chips or conflicting scanner and tag systems… competing tag and scanner systems available on the market make it difficult to facilitate reunions, as public shelters are unlikely to be equipped with a collection of different scanners that could decode every chip in existence.” See also an American Pet Association (APA) press release discussing the shortcomings of animal chip implants at http://www.apapets.com/pro1.htm (2001). These shortcomings could be overcome with standard equipment. Compare these references with Simpson (2002, p. 5), ‘Microchip saves trauma for Benson’: “Mrs Stewart said she feared for the worst when her dog went missing. But council rangers were able to identify Benson because he had been microchipped and obtained a lifetime registration.”
 See http://www.autoidnews.com/technologies/concepts/need.htm (Auto I.D. News 1998, pp. 2f).
 For instance, will the vendor support products sold for the lifetime of the business? Will the vendor maintain the system for a substantial period of time? Will future product changes mean that the user will have to make future mandatory investments? Will future expansion cost too much to implement?
More recently however, users are becoming more critical of new start-up companies in most areas of IT&T. Citizens are even more cautious today about buying shares in any company that has not proven itself over time. As one industry analyst put it, the technology needs to be ‘cooked, not eaten raw, and today’s businesses have products that haven’t even thawed’. Compare this remark with the pre-dot.com crash article written by Ferguson (2000, p. C1) about how “to think big, start smart and scale fast” in the new competitive economy.
 RF/ID veteran, Gerdeman, (1995, p. 45) states that “[g]enerally, politics surround the formation of a standard. There is also a significant amount of technical engineering support” that is required. See also http://www.rfidnews.com/returns.html (2002).
 “All major RFID vendors offer proprietary systems, with the result that various applications and industries have standardised on different vendors’ competing frequencies and protocols. The current state of RFID standards is severe Balkanisation… This lack of open systems’ interchangeability has hindered RFID industry growth as a whole, and has resulted in slower technology price reductions that often come with broad-based interindustry use” (AIM Global 1999, p. 2). Tuttle (1997, p. 7) agrees that “[s]ingle source supplying creates monopoly, which drives prices up- and deters customers.”
 For detailed industry standards on ISO Freight Containers and the American Trucking Association (ATA) see chapters 6 and 7 of Gerdeman (1995).
 Common items of concern listed by Gerdeman (1995, p. 46) that should be considered as part of the standardisation process include: reliability, accuracy, tag life, speed, temperature, frequency, tag position, data content and distance.
 See http://www.autoidnews.com/technologies/concepts/need.htm, (Auto I.D. News 1998, pp. 2f).
 For instance, “[a]s a leading worldwide organisation providing data and Automatic Data Capture standards, EAN International and the UCC take a proactive role in… the lack of open standards regarding the use of RFID” (Franciosi 2001, p. 5).
 Byfield (2002, p. 1) concurs with Ames that the term “Auto ID” is an umbrella word to represent all technologies which automatically identify coded items. However, more recently, the term seems to have been “hijacked to mean only miniscule RFID tags.”
 First set up in the United States as the Automatic Identification Manufacturers (AIM) association, similar associations are now in operation in New Zealand, Australia, Japan, Korea, Europe in general and France, Britain, Denmark, Finland and Spain on a national basis. “These associations have been licensed to operate as AIM affiliates by AIM International, the overall governing body…” (Smith 1990, p. 49). AIM member companies are located in all these countries and they are mostly technology providers, inventors, developers and suppliers of auto-ID technologies. For a list of AIM contacts and locations see http://www.aimglobal.org/techinfo/rfid/aimrfidbasics.html (AIM Global 1998, p. 13).
 As Smith (1990, p. 49) well observes, “[t]he Association is in the unique position of helping users and potential users understand the benefits, develop standards, apply the technologies and solve the technical problems that can arise…”
 See http://www.autoidnews.com/technologies/concepts/need.htm, (Auto I.D. News 1998, pp. 2f). The working group, ISO/IEC JTCI/SC31/WG4, is aiming for standardisation which will allow interoperability (Franciosi 2001, p. 7).
 See also Finkenzeller (2001, ch. 5).
 As well as international regulations there are also national licensing regulations. For example, “[i]n the U.S.A. RFID systems must be licensed in accordance with licensing regulation FCC Part 15. This regulation covers the frequency range from 9 kHz to above 64 GHz…” (Finkenzeller 2001, p. 123).
“A new CEPT harmonisation document has been available since October 1997 that serves as the basis for new regulations. The old regulations for Short Range Devices (SRDs) are thus being successively withdrawn. This document also refers to the ETSI standards EN 300330, EN 300220 and EN 300440 that are relevant to RFID systems” (Finkenzeller 2001, p. 119).
 For relevant RF/ID standards and regulations see appendix 15.1, (Finkenzeller 2001, pp. 293-294).
In the City of Toronto, as in many cities and municipalities of the world, there are microchip by-laws for pets like dogs and cats. License fees vary depending on the length of the license (annual versus lifetime). In some cities penalties apply for non-compliance (e.g. Indianapolis, Ind., Albuquerque, N.M., and Dade County, Florida). See http://www.petnet.ca/files/municipal.htm (1999). See also the Ventura County Animal Regulation on microchip implants at http://www.ventura.org/animreg/infopet.html (2001) and the Australian Companion Animals Act. A local Australian council pamphlet for the municipality of Kiama, NSW, stated: “[a]fter 1 July 1999 we must permanently identify and register any puppy or new dog. We have three years to transfer older dogs from annual registration to the new lifetime system… Also from 1 July 1999 all cat owners must identify their cat either by collar and tag or by microchip” (Local Government 1998, p. 3).
 In the case of the mad cow disease the European Union implemented new rules as of January 2001 “…requiring all cattle over 30 months old to be tested for the disease. The EU has set aside about $1 billion for the tests, which cost about $100 per animal… The European Commission estimates the cost of incinerating slaughtered animals at $3.3 billion”. See http://www.pbs.org/newshour/bb/health/mad_cow.html (2001). With such losses, countries are looking to safeguard themselves from future disasters by using RF/ID tags and transponders. See also Associated Press (2001).
 See how human resources play a key part in the success of any e-business (The Globe 2000, p. C3).
 activeRF “succeeded in completing its first round of funding. Having completed the experience Beart learnt that investors bring much more than money and the importance of checking the skills of his direct and indirect team. He also learnt that, in raising finance, it is important to act in good time and keep a buffer. It is interesting to note that in the several months since the investment, about one third of the initial investment has been spent on professional fees, including lawyers for investment and commercial contracts, and patent agents.” See http://www.cec.cam.ac.uk/teaching/vln/fhg1/rc/cases/activeRF.htm (1999).
See also Dreifus and Monk (1998, ch. 10) for an overview of smart card development skills, methods and tools.
Some larger companies that manufacture contactless smart cards have very large global staff counts; however, due to the recent downturn in the economy, they have begun to lay off thousands of employees. For news on staff reductions at Gemplus and Schlumberger see RFID Journal headlines on 12 December 2002, ‘Smart card companies slash jobs’, http://www.rfidjournal.com/news/dec02/jobcuts121202.html. These two companies have announced a collective reduction in their workforce of 4,300 employees.
 The most valuable employee in the formative years of an RF/ID company is one that can deliver solutions to meet the customer’s requirements. The employee will typically have good communication skills to complement their sound technical know-how. Biomark’s “Our People” description on its web site stated that the company employed people with a wide range of expertise and experience. Drawing these individual resources together to work as a team is paramount. “The team concept used in developing a system ensures the customer of a well thought out, tried and tested solution…” The “Company Philosophy” description supports this: “[d]evelopment and innovation emerge from Biomark’s strongest resource- its employees. Employees are actively encouraged to pursue new theories and ideas in an environment created to foster intellectual growth and development. A team philosophy is utilised in creating new systems for clients; a solution is built upon a solid platform of unified individual strengths” (Biomark 1999, pp. 1-2).
 In a great number of countries, particularly in Asia, entrepreneurs realise that it is not solely about ‘who has the best product at the least cost’ but about developing business relationships.
Elektrobit is one company that enters into partnerships with universities because it feels it is its social responsibility to support students, PhD candidates and institutes. “We cooperate with different university teams in Europe especially in Switzerland, including the Swiss Federal Institute of Technology Zurich (ETHZ), HSR University of Applied Research Rapperswil and Aalborg University Denmark, Centre for PersonKommunikation” (Elektrobit AG. 2002, p. 1). The company also highlights that it benefits from the research being conducted by the universities.
 Professor Mickle of the University of Pittsburgh may have contributed to a monumental RF/ID technical improvement but he candidly states: “I’m not a good judge of what makes a good product… [noting he has no ambitions to dive into the business himself]. I leave that to somebody else” (Spice 2002, p. 2). See also http://www.pitt.edu/utimes/issues34/020307/19.html (2001).
 See Maloof (1999, pp. 1-2). The project is being coordinated by the University of Montana in Missoula in collaboration with the Department of Energy’s Pacific Northwest National Laboratory.
See http://www.pitt.edu/utimes/issues34/020307/19.html (2001) and Little (2002, pp. 2-3). See also Dr Derrek Dunn at North Carolina State University, who is conducting a project on wireless indoor position location systems for NASA. The work links RF/ID with GPS equipment (Dunn 2002, p. 1).
 For example the “[e]stablishment of the world’s first professional chair in radio frequency identification systems (RF/ID) at the University of Adelaide has been hailed as a positive model for Australian electronics research and development... There are huge opportunities for all sorts of commercial development. Billions of dollars will flow from this... Successes like this will just help to cement the process by which industry and academia work together” (Denby 2001, p. 1). See also Kerin (1996) regarding Adelaide’s progress to become a “smart city”.
Case in point, the new RF/ID chair at the University of Adelaide is backed by Gemplus Tag Australia, a company that was originally Integrated Silicon Design (ISD), formed to commercialise technology developed by the university in the 1980s. The company has more than 15 years’ experience in its specialisation (Denby 2001, p. 1).
One of the University of Cambridge’s entrepreneurship case studies is activeRF; it can be found at http://www.cec.cam.ac.uk/teaching/vln/fhg1/rc/cases/activeRF.htm (1999).
 “The institute brings professors from all over the world to learn about automatic identification and data capture technologies such as bar coding, voice recognition and biometric identification. More than 400 professors representing over 18 countries have attended the institute during the last 15 years… The goal of the institute is to further professors’ knowledge of automatic identification and data capture so they may, in turn, introduce or expand automatic identification into their own course material. To date, professors report that after attending the institute they have instructed more than 11,000 students collectively about automatic identification” (Smith, J. 2001, p. 1).
 It is not without significance that “[t]here have also been reports of the chip moving or migrating from its initial injection location over the shoulder. This is very rare in the cat, and slightly more common in dogs with very loose skin... New designs, including the use of special coatings now used in human implants, will make migration less likely” (Vetinfonet 1998, p. 2).
 The degree of animal discomfort in the microchip implant procedure has often been misrepresented. Canada’s national pet registry, PETNET, publicised in 1999 that the implant procedure was “quick, safe and painless” (Anitech 1999, p.1). This is in direct contrast to Geers (1997).
 We are informed by Strauth, that “[r]ecent breakthroughs in this technology were first developed at the University of Pittsburgh and are now being further perfected and tested in collaboration with engineers at Oregon State University… The University of Pittsburgh has several patents or patent applications under way on this technology, and is working closely with OSU researchers for further testing and product development. One of the immediate goals… will be to develop standards for this technology that are approved by… ANSI. Work then needs to be done to better refine the product, test its performance and see what products can evolve from it” (Strauth 2002, p. 1).
It should be noted that the question of chip implants for humans brings with it an even greater number of issues, and vastly more complex ones. For example see Rahmoeller (1988, p. 1). These will be explored further, especially in chapter eight.
 See Black (2002, pp. 1-6) and the Illuminati Conspiracy web site at http://www.conspiracyarchive.com/NWO/chip_implant.htm (2002). See also www.freepublic.com especially http://www.freepublic.com/forum/a596342.htm (2001) and McConnell (2003).
DARPA recently awarded Eagle Eye Technologies “…a contract to build a bracelet-sized mobile terminal designed for compatibility with existing satellite communication systems. The contract is overseen by the U.S. Army Space and Strategic Defence Command at Huntsville, Alabama. Suggested uses, according to Eagle Eye, include “tracking Alzheimer’s patients, children, executives, probationers and parolees, and military personnel- a market that could conceivably encompass the world’s entire populace in just a few decades” (Lange 1997a, p. 2). For an example of “electronic jails” see Goldsmith (1996, p. 32). Compare the “electronic jail” idea with that of “future smart homes” and how they will be advantageous to the elderly and young children (OOMO 2002, pp. 2-5) at http://senrs.com/future_homes.htm. See also the Vivago, http://www.istsec.fi/eng/Etuotteet.htm (2003).
According to conspiracy theorists, these implants will be linked to databases that store personal information for each individual from birth. They will be capable of releasing signals into the body that stimulate certain behaviour. Ultimately, GPS and RF/ID technologies will be used together to track citizens.
Shortly after ADS announced the Digital Angel product, Gossett (2002) reported that the Verichip manufacturer was plagued by multiple lawsuits. See http://www.worldnetdaily.com/news/printer-friendly.asp?ARTICLE_ID=27917 (2002). The controversy surrounding the Verichip was manifold. First, the FDA launched an investigation into whether the product had been misrepresented, and four class-action lawsuits were filed on behalf of shareholders. Second, the company ran into difficulties with its auditors, and the NASDAQ threatened to de-list the Florida-based company. ADS also announced certain technical solutions prematurely instead of reporting on the real news; following the premature announcement, shares of Digital Angel and ADS rose by 10 percent. Yet the company continues to operate and attract attention.
 Before microchip implants for humans became commercially viable, wristbands were introduced that contained RF/ID tags. Among the first companies to launch these wristbands for human monitoring purposes was Sensormatic. They launched a child safety marketing service called SafeKids™, targeting childcare centres especially. “The anti-theft tags are embedded in wristbands placed on children upon entering the childcare centre. Security cameras also beam images to monitors located throughout the store” (Sensormatic 1999, p. 1). At about the same time that Sensormatic released its product Olivetti marketed the “tot tracker”. Olivetti’s technology was a device placed in the child’s backpack instead of a wristband device. See http://whyfiles.news.wisc.edu/056spy/other.html (1998). Other niche companies getting on board include ParentNet and Simplex Knowledge Company (Time Digital 1997, p. 5). Many observers tracking the evolution of microchip applications believe that the wristbands were really de facto trials for the chip implants which were launched at the turn of the century. Comparing Olivetti’s Active Badge product solution for health (Puchner 1994, p. 26) with the “tot tracker” gives an indication of the progression in direction.
According to Mechanic (1996), Israeli-born Daniel Man, a practising plastic surgeon, first patented a homing device implant designed for humans in 1987. Predictions of human implant trials in the 1990s were not that far-fetched after all.
As RF/ID companies jostle for market share, strategic mergers and acquisitions between key players in complementary technologies continue to take place. For example, Applied Digital Solutions (ADS) acquired the Destron Fearing company in 2000 for 130 million US dollars. Applied Digital Solutions’ main product is the Digital Angel. By acquiring Destron Fearing, ADS now owns patents on implanted transmitter technology, given that Destron Fearing specialised in implanted animal tracking systems (Cochrane, N. 2001, pp. 1-4). While ADS originally denied it was going to use similar technology on humans, within two years of acquiring Destron Fearing it launched a human-centric RF/ID system. See http://www.politechbot.com/p-02154.html (2001) in contrast to the statements made on the following press release http://www.adsx.com/news/press_releases/1999/12-15-99.stm (1999). For the present, other companies like Trovan have dealer agreements that “…prohibit placing a chip under human skin” (Lange 1997a, p.1).
At face value, the idea seems harmless enough: an implant the size of the point of a ballpoint pen is inserted into the subdermal layer of the skin and used only for identification purposes. A remote database that stores more specific information about the individual is then queried once identification has been established. The invention has the potential to be a life-saving device and could be used as a complementary component in any location-based system. Yet far more discussion is required before the application becomes widely adopted. Interestingly, PETNET in Canada promoted the idea of the “microchip as a guardian angel” (Anitech 1999).
 See BRANDERS point of sale technology developed by the University of New South Wales in Australia.
The Multi Technology Automated Reader Card, known as the MARC card and used by the Department of Defense to store soldier information, was introduced during the Clinton administration after the President’s plan for a universal U.S. health care ID was aborted.
 See British Telecom (BT) Lab Research or see Cochrane (1999).
 See http://www.adsx.com/news/press_releases/1999/12-15-99.stm (1999). See also http://www.digitalangel.net/consumer.asp (2002).
 A company that specialises in DNA profiles for animals.
 One of the most well-known religious books on the topic is Cook’s (1999), The New World Order. See also http://www.warroom.com/america.html (1998) excerpted from the companion video. Other interesting religious links related to the Mark of the Beast and RF/ID chip implants include: http://qualia-net.com/religion/chambio.html (Chambers 1998, pp. 1-8), http://infoweb.magi.com/~rah/beast1.html (Hallman 1998, pp. 1-4), http://www.best.com/~ray673/search/database/is41.3.htm (Pearson 1998, pp. 1-5) and http://www.otherside.net/beast.html (Howerter 1997, pp. 1-3).
 See Freedom Chronicles at http://www.allfreewithfreedom.com/fc-ultimate1.htm (2001). This group believes that globalisation using technology and other means will lead to ultimate control of humanity. They have a plethora of links on their web site in support of their argument.