Chapter I: Introduction
This study is concerned with the automatic identification (auto-ID) industry which first came to prominence in the early 1970s. Auto-ID belongs to that larger sector known as information technology (IT). As opposed to manual identification, auto-ID is the act of identifying a living or nonliving thing without direct human intervention. Of course, the process of auto-ID data capture and collection requires some degree of human intervention, but the very act of authenticating or verifying an entity can now be done automatically. An entity can possess a unique code indicating personal identification or a group code indicating conformity to a common set of characteristics. Some of the most prominent examples of auto-ID techniques that will be explored in this book include bar code, magnetic-stripe, integrated circuit (IC), biometric and radio-frequency identification (RFID). The devices in which these techniques are packaged include a variety of form factors such as labels and tags, card technologies, human feature recognition, and implants. Generally the devices are small, no larger than a standard credit card. There has been a visible trend towards miniaturization through the development of mini-cards and tiny RFID tags (some even as minute as pinheads).
Traditionally auto-ID has been synonymous with bar code labels on supermarket store items, financial transaction cards (FTCs) used to withdraw money from automatic teller machines (ATMs), and subscriber identity module (SIM) cards in mobile phones. Today auto-ID devices are being applied in ways very different from those originally intended. For instance, frequent air travelers can bypass immigration queues using their biometric traits, prisoners can serve their sentences from home by wearing electronic tags, and animals can be identified by implanted transponders. While the nature of auto-ID is innately suited to mass-market diffusion, it also accommodates niche applications well, where, for instance, security is paramount and access is limited to only a few authorized persons. Auto-ID has also become an integral part of electronic commerce (EC) applications, particularly those related to e-government. Increasingly we are seeing bar code scanners and RFID readers integrated onto mobile devices.
The Significance of Auto-ID
Prior to the 1970s, who could have envisaged that every packaged item sold on a supermarket shelf would be equipped with a bar code label? That by the early 1990s the majority of the population in more developed countries (MDCs) would be carrying a magnetic-stripe or smart card to conduct financial transactions, without having to visit a bank branch? And furthermore, that by the turn of the twenty-first century it would be enforceable by law to implant domesticated animals with a microchip? These examples not only indicate the pervasiveness of auto-ID but also how reliant the world, including public and private enterprise, has become upon the technology. The impact of auto-ID is irreversible; it is an essential part of life. It is interwoven in a highly structured manner with the way we live and work and is a seamless part of our day-to-day routine activities. The technology is so widespread and diffused that it seems to possess an almost omnipresent quality.
Auto-ID technologies are complex artifacts. In their natural state they are simply inventions seeking an economically significant purpose. Only when the devices are applied to a given context as part of an information system (IS), and they achieve a desired result, can they be considered product innovations. For example, a plastic card with a magnetic-stripe is quite useless unless it grants the cardholder the ability to make an EFTPOS (electronic funds transfer point-of-sale) transaction at a restaurant to pay for a meal. Furthermore, one need only consider just how complex an auto-ID system is: first cards need to be produced by a manufacturer based on a common set of standards; second the cards need to be acquired by a financial institution and set up with the appropriate parameters; third an end-user with that financial institution must adopt the card and be inclined to make an EFTPOS transaction; and fourth the merchant must accept EFTPOS payments and have predefined agreements with the appropriate financial institutions to enact a valid transaction. The auto-ID innovation process requires that there be dynamic interaction among numerous stakeholders including technology providers, service providers and customers. All too often studies will only focus on the first of these, neglecting to understand that the other stakeholders are equally important to the innovation process. It is the premise of this book that the citizen perspective not be ignored; citizens must be active participants in the design of systems related to e-government (Kumar & Vragov, 2009).
LOCATION-BASED SERVICES AND TECHNOLOGIES
Location-Based Services (LBS) is a branch of m-Commerce that has revolutionized the way people communicate with others or gather timely information based on a given geographic location. Everything living and nonliving has a location on the earth’s surface, a longitude and latitude coordinate that can be used to provide a subscriber with a wide range of value added services (VAS). Subscribers can use their mobile phone, personal digital assistant (PDA) or laptop to find information relating to their current location. Commercial applications that utilize positioning technologies are diverse and range from child monitoring devices used to ensure safety to care-related devices for Alzheimer’s sufferers who may lose their way. Typical LBS consumer applications include roadside assistance, “who is nearest”, “where is”, and personal navigation. Humans are not the only living recipients of positioning technologies; animals too are now implanted or tagged to prevent the extinction of species, to encourage better agricultural practices and even to track food down the chain to the point of consumption. Objects are also being equipped with GPS receivers and RFID tags. It is now possible to get directions from in-car GIS applications, to monitor objects on the move, and to track stolen vehicles. LBS business applications differ in their focus and many are linked to core business challenges such as optimizing supply chain management (SCM) and enhancing customer relationship management (CRM). There exist niche LBS companies that specialize in offering fleet management services incorporating vehicle navigation and property asset tracking via air, ship and road.
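At its core, a “who is nearest” query reduces to ranking candidate locations by great-circle distance from the subscriber's coordinate. The following is a minimal sketch in Python, assuming latitude/longitude in decimal degrees on a spherical earth model; the depot names and coordinates are purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    R = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def who_is_nearest(subscriber, candidates):
    """Rank candidate points of interest by distance from the subscriber.

    subscriber is a (lat, lon) pair; candidates are (name, lat, lon) tuples.
    """
    return sorted(
        candidates,
        key=lambda c: haversine_km(subscriber[0], subscriber[1], c[1], c[2]),
    )

# Illustrative data: a subscriber and three roadside-assistance depots.
me = (-33.87, 151.21)
depots = [
    ("Depot A", -33.85, 151.20),
    ("Depot B", -34.00, 151.10),
    ("Depot C", -33.95, 151.25),
]
nearest = who_is_nearest(me, depots)[0][0]
```

A production LBS would of course query a spatial index rather than sort an in-memory list, but the ranking principle is the same.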
Positioning technologies differ in their capacity to locate. Some technologies work well outdoors while others are tailor-made for the in-building environment. Independent of the positioning technology application, location information is sought to support processes as ostensibly simple as the “am I on the right track” or “where am I” queries. In some instances the value returned to the end-user is a latitude and longitude coordinate; in other instances it is the nearest base transmission station (BTS), nearest building or a specific location within an area. Spatial data plays an important role in visualizing location information, whether in hardcopy or on digital maps. Knowing where things are, where one has been, and where one is going can be vital. Defense has long realized this potential and was preoccupied with positioning techniques even before digital technologies became available. Mobile handsets can even be tracked, either by the use of a built-in GPS chipset receiver or by the current zonal information acquired by nearby base transmission stations. Initially, the general method of network triangulation could only identify a mobile device as being inside a particular BTS coverage area; however, over time, algorithms became increasingly sophisticated and could denote whether the phone was right next to a BTS or over 30 kilometers away.
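The network positioning idea can be illustrated with planar trilateration: given range estimates from three base stations, the three circle equations can be linearized and solved for the handset's position. This is a simplified sketch on a flat local grid with exact ranges, not an operator's actual algorithm, which must cope with noisy measurements (typically via least squares):

```python
def trilaterate(towers, ranges):
    """Estimate a 2-D position from three tower positions and range estimates.

    towers is a list of three (x, y) coordinates; ranges holds the three
    estimated distances. Subtracting the circle equations pairwise yields
    a 2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = ranges
    # Subtract circle 2 and circle 3 from circle 1 to linearize.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("towers are collinear; position is ambiguous")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Three towers on a flat local grid (units of km); true position is (3, 4).
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
est = trilaterate(towers, [5.0, 65 ** 0.5, 45 ** 0.5])
```

With only cell-ID information the best answer is “somewhere in this coverage area”; range estimates from multiple stations are what allow the sharper fixes described above.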
The emergency services sector in the United States (US) was responsible for driving the first pin-point location service, demonstrating to the world the potentially life-saving functionality of the technology. In 2003 the Federal Communications Commission (FCC) in the U.S. asked that wireless operators provide Automatic Location Identification (ALI) for persons making emergency services calls. Location information is now delivered to the Public Safety Answering Point (PSAP), allowing wireless operators to identify the location of an individual to within 50-150 meters (Gabber & Wool, 1998, p. 145). ALI standards require that for more than two-thirds of emergency calls received, the location of the individual be accurate to within 50 meters, and for 95 per cent of calls, to within 150 meters. The technology is available for potential mass market deployment; how feasible that is, however, is a separate issue altogether.
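The two-tier accuracy requirement lends itself to a simple compliance check over a sample of measured location errors. The sketch below is purely illustrative: the thresholds follow the figures quoted in the text, while the official measurement methodology is considerably more involved:

```python
def meets_ali_accuracy(errors_m, t67=50.0, t95=150.0):
    """Check measured location errors (in metres) against two-tier
    ALI-style accuracy thresholds: at least 67% of fixes must fall
    within t67 metres and at least 95% within t95 metres."""
    if not errors_m:
        raise ValueError("need at least one measurement")
    n = len(errors_m)
    frac_within = lambda t: sum(e <= t for e in errors_m) / n
    return frac_within(t67) >= 0.67 and frac_within(t95) >= 0.95

# Illustrative sample: 70% of fixes within 50 m, 97% within 150 m -> passes.
sample = [10.0] * 70 + [100.0] * 27 + [500.0] * 3
```

Note that both tiers must hold simultaneously: a system that is very accurate two-thirds of the time but wildly wrong otherwise would still fail the 95 per cent tier.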
Nokia has estimated that, by 2010, fifty per cent of all its mobile phones will contain GPS chipset receivers able to locate individuals to within 15 meters on average (Choney, 2008). In some places in Europe and Asia, penetration of GPS chipsets in mobile phones is much higher, at around eighty per cent. And in some countries, like Japan, RFID readers have been integrated into mobile phones with astonishing adoption levels of over 50 million users in just a little more than two years (Das, 2008, p. 11). LBS have been a catalyst towards the integration and convergence of various technologies including automatic identification techniques, wireless telecommunications, sensor networks, location positioning technologies and spatial content. The next generation of advanced location-based services (A-LBS) will be those linked to social networking applications (Labrador, Michael & Kuepper, 2008).
Auto-ID + LBS = ALI (Automatic Location Identification)
Most books and papers published on the topic of auto-ID are either wholly focused on presenting technical aspects of a particular device or architecture or show how it is being applied commercially. Experts continue to publish new material on niche topics related to auto-ID but few offer a holistic approach to understanding the industry. Contributions are primarily aimed at making the wider community, including potential customers of auto-ID, aware of what technology options are available to them. The vast majority of refereed publications focus on only one technology. However, more recently a few contributions have appeared making reference to multiple auto-ID technologies. For instance, in 2003, the development that received most attention surrounded the storage of a biometric pattern onto a bar code or smart card. And today, the integration of GPS chipsets with RFID readers is in an early concept stage. This indicates that auto-ID and LBS organizations, specializing in a given technique, are at least beginning to consider themselves as members of a larger value chain system, that being the auto-ID industry. In 2003, the notion of an auto-ID technology system (TS) within which organizations and institutions could innovate together dynamically was lacking. Today, we are witnessing the emergence of such a system, and a willingness of various stakeholders to at least be on speaking terms. No doubt the vital question of the interoperability of their respective systems has been a prime motivator.
At the same time one might be led to consider whether or not a location-based services technology system exists. LBS are complex; the value chain for LBS is even more meshed and detailed than that of auto-ID. So why explore both these technologies in a single research project? What do auto-ID and LBS actually have in common? How are they complementary to one another? How may they be exploited together to provide powerful functionality for advanced services? What do they tell us together that they do not reveal if used in a stand-alone fashion? Fundamentally, automatic identification identifies a living or nonliving thing, while location-based services provide the physical position of a living or nonliving thing. Consider a large group of employees who are all located in the confines of a single multi-story building. Their approximate positions may be very similar in terms of coordinates, but their personal identification is unique. On the other hand, I may have personal information designating that a particular individual made a given transaction, but without location services I do not know categorically where that transaction actually took place. Auto-ID together with LBS is powerful because it can tell us more about a given situation or context, i.e. we have a more complete picture than if the technologies were used on their own. Few works to date (including those written by the authors of this book) actually address the use of auto-ID techniques and location-based services in the same space.
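The “more complete picture” argument can be made concrete with a minimal data structure: an event record that fuses an identity (who) with a position and time (where and when). The field names and identifiers below are purely illustrative, not drawn from any real system:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class IdentifiedFix:
    """One event fusing auto-ID (who) with LBS (where and when)."""
    entity_id: str    # e.g. an RFID tag code or SIM identifier
    latitude: float   # decimal degrees
    longitude: float  # decimal degrees
    observed_at: datetime

# Identity alone says who; position alone says where; fused, the record
# situates a specific entity at a specific place and moment.
fix = IdentifiedFix(
    "TAG-00042", -33.8675, 151.2070,
    datetime(2009, 5, 1, 12, 0, tzinfo=timezone.utc),
)
```

A stream of such records is precisely what distinguishes combined auto-ID/LBS applications from either technology used alone.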
LANDMARK STUDIES ON AUTO-ID AND INNOVATION
Until 2003, only one author had written an extensive work directly on the topic of innovation as related to one auto-ID technology, i.e. smart cards. Robyn A. Lindley used socio-technical theory and a case study method to arrive at her overall conclusions in her book Smart Card Innovation (1997). It was an exploratory study that thoroughly examined the interaction between smart card users, the technology and the organization. Lindley was not the only researcher who believed that there was a growing need to develop our understanding of new and complex technologies within the scope of the field of innovation. However, she was the first to put forward a concise volume on innovation and any type of auto-ID technology. This investigation takes the next step forward in exploring that cluster of innovations known as auto-ID.
The need for this study has been steadily increasing over the last decade as many researchers have begun to not only compare one auto-ID technology to another, but to consider new combinations of existing innovations. This can be seen within the context of magnetic stripe cards and smart cards. Lindley (1997, p. 18) writes: “There is also now little doubt among leaders in the banking industry that smart card will take over from magnetic-stripe card technology because of its ability to reduce fraud. The main advantage of smart card compared to other technologies is that it does provide a large range of design and service options with a high degree of security which is required when monetary or secret information exchanges are to occur. The old card technologies are rapidly being made obsolescent as the rate and level of sophistication of fraudulent use are rapidly approaching unacceptable levels. It is therefore now seen by many as only a matter of when, and how the services will be differentiated.”
Thus, examining auto-ID innovations is beneficial in understanding the industry trends. Hewkin (1989) and Swartz (1999) realized this need earlier than most and were compelled to write about it. Hewkin saw the industry-wide need for an understanding of the auto-ID innovation process but presented scattered thoughts and did not follow up with other complementary publications in the field. He also used a neo-classical model of interpretation based on the price mechanism. He did however allude to the future evolution of new auto-ID technologies. Swartz in particular provided some helpful concise insights (1999, p. 21): “Today, many of us see Auto ID technologies as “complementary,” with each filling a space in the market defined by the fit between its strengths and weaknesses, and the requirements of target applications. And looking forward, I believe we’ll evolve from a “coexistence” model to one that leverages the many converging opportunities around the intersections and in the gaps between those technologies.”
Four works that should be consulted regarding the notion of “convergence” in technology studies include: ‘A definition of convergence in the area of information and telecommunication technologies’ (Radinger & Goeschka, 2002), Digital Convergence (Covell, 2000, ch. 7), Competing in the Age of Digital Convergence (Yoffie, 1997, ch. 5), Convergence: Integrating media, information and communication (Baldwin et al., 1996, ch. 5). The term “convergence” means different things to different people and can be used at different levels. In some instances, the term has been used too loosely in high-tech with reference to digital or technological convergence, e.g. the “combination of computing, communications, and digital media technologies” (Covell, 2000, p.161). It is not that Covell’s definition is wrong but it is perhaps somewhat too all-encompassing for this study of auto-ID innovation. Convergence in the context of this study is not anything and everything coming together. Radinger and Goeschka (2002, p. 88) define convergence for the area of technology as “…the multidisciplinary integration of inhomogeneous methods, systems, views, knowledge areas and other disciplines of technology with the aim to reach an added value.”
A more sophisticated definition for convergence is that given by Greenstein and Khanna (1997, pp. 203-205). They suggest there are two kinds, convergence in substitutes and convergence in complements. “Two products converge in substitutes when users consider either product interchangeable with the other. Convergence in substitutes occurs when different firms develop products with features that become increasingly similar to the features of certain other products… Two products converge in complements when the products work better together than separately or when they work better together now than they worked together formerly. Convergence in complements occurs when different firms develop products or subsystems within a standard bundle that can increasingly work together to form a larger system… [D]epending on the level at which a computing system or communications system is analyzed, a particular instance of convergence may be construed as being of either kind. It may be interpreted as a convergence in substitutes at one level of analysis and, equally appropriately, as a convergence in complements at a different level.”
The Emergence of the Auto-ID Paradigm
It is surprising to note that of the thousands of articles reviewed for this research project, until 2003 the term automatic identification had appeared in the titles of only a limited number of publications, including: Moran (n.d.), Berge (1987), K. R. Sharp (1987), Schwind (1987), Gold (1988), Hewkin (1989), I. G. Smith (1990), Adams (1990), J. Cohen (1994), LaMoreaux (1998), O’Gorman and Pavlidis (1999), and Swartz (1999). And it should be noted that most of these works were republished collectively in 1990 in a book edited by Ron Ames titled, Perspectives on Radio Frequency Identification: what is it, where is it going, should I be involved? This does not mean that the term automatic identification is not popular, for it is continually used in the main body of papers, irrespective of the type of technique being discussed. Rather, what it may indicate is that the term auto-ID carries a loaded meaning when used in a paradigmatic fashion: perhaps as a concept with industry-wide applicability, acknowledging the reality that numerous auto-ID solutions co-exist and that there are common experiences that can be shared between stakeholders in the innovation process.
Four works must be especially highlighted here in support of the emerging auto-ID paradigm described above. While these works point to the emergence of an auto-ID paradigm, it is not to be assumed that this was the conscious intent of each of the authors. The first is Automatic Identification and Data Collection Systems, by Jonathan Cohen (1994). Its contribution to the field is its attempt to give a thorough industry-wide perspective, though it falls short of this aim owing to its unbalanced focus on bar code technology. It also does not compare auto-ID technologies and dedicates little space to predictions about the future of the industry. The second work is by Hewkin (1989), ‘Future Automatic Identification Technologies’; and the third by Swartz (1999), ‘The Growing “MAGIC” of Automatic Identification’. These works are both short articles centered on the need to understand auto-ID innovation.
One will note a ten-year interval between these publications. Neither goes into great depth but both offer insights worthy of research efforts yet to be entirely realized. There is an apparent need for research in auto-ID innovation and the characterization and prediction of the auto-ID industry. It is in response to this pressing need that this book is making one of its principal contributions to knowledge. Hewkin understood the auto-ID market well and emphasized the need for industry-wide communication flows between the different auto-ID players, independent of their major auto-ID product focus. Swartz, on the other hand, who had been able to witness the changes in the industry over the decade between 1989 and 1999, analysed the most prominent auto-ID technologies and offered a brief but useful description of the emerging auto-ID paradigm. His insights are very important in that they assist and garner support for some of the findings of this investigation. Finally, I.G. Smith (1990, pp. 49, 52) presents the AIM (automatic identification manufacturers) activity group in a brief article, stipulating that their focus is broader than just bar code: “[s]o the automatic identification industry has an almost unique global communication network… The members of AIM collectively cover all the established technologies as well as most of the emerging ones”. What is apparent is that AIM is promoting the idea of one auto-ID industry sharing common resources. This is further reflected on their web site, which is all-inclusive of auto-ID technologies (Beauchamp, 2007).
The Gap in the Literature
At the macro level there is a requirement for a well-researched, up-to-date work that traces the evolution of the auto-ID industry; a summation of the last forty years of change. This book offers an intricately interwoven discussion of the history, background, development and likely future directions of auto-ID. Currently, researchers normally offer fragmented perspectives on the auto-ID selection environment by focusing on a given technology and mostly neglecting the rest, or at best mentioning them in passing. At the micro level the key issues that have affected auto-ID innovation and its ancillary extensions need to be explored. Demystifying the complex auto-ID innovation process is important as well and has not been adequately explored. Another gap in the literature is predicting the trajectory of auto-ID. This is perhaps where the least work has been done in the field. The outcomes of a study such as this one have far-reaching implications, both to practitioners and end-users, of a technical and social nature. For example, how does one understand competition in the auto-ID industry? Are new electronic commerce application requirements driving the path of auto-ID? How will auto-ID technology be used in the future? What are some of the long-term impacts of the widespread introduction of auto-ID devices? And how are new technologies such as location positioning systems going to affect auto-ID?
The Auto-ID Trajectory
One of the fundamental questions this study seeks to answer is: what is the auto-ID trajectory? The question requires an interdisciplinary approach and is intended to allow for the characterization of devices from their inception into the market to the present day, in the hope of predicting future trends in the industry. In other words, what is the destiny of auto-ID and just how intertwined will it become with applications that everyone relies on? How far can the human-computer metaphor be taken, now that the prospects of chip implants for auto-ID have been confirmed? And what risks or benefits may this pose to humans and the general economy? How much further can engineers develop individual auto-ID technologies and how will these advancements be affected by other breakthroughs in the IT sector, such as location-based services? The nature of these questions implies a holistic methodology for understanding the auto-ID technology system, a novel approach seeking to discover new facts. Location-based services are integral to the trajectory of auto-ID as we will later discover.
The other purpose of this study is to establish the auto-ID paradigm. It is to convey to stakeholders that the dynamics within the technology system (TS) are paramount to the success of individual auto-ID devices. It is also important to determine how one auto-ID device should be considered within the wider auto-ID selection environment. In addition, forecasting the auto-ID trajectory is not only meant to assist technology and service providers but also to prepare end-users for potential change. The book is designed to also bring to the fore thought-provoking and challenging philosophical questions that are often neglected at the expense of other topics centered almost exclusively on technical breakthroughs. How will the auto-ID trajectory affect our everyday lives? What could the impacts be on our culture, society, and even on our metaphysics? How can the impact of the trajectory be tempered and safeguards established to ensure the proper use of these technologies as opposed to their potential misuse? What are we to make of all this technological development in the context of our intellectual development? Today most high-tech complex technology innovation is perceived as ‘useful’, ‘progressive’, ‘awe-inspiring’, a step toward a better future. This book challenges the notion that all developments are necessarily useful (Mills, 1997). The authors are mindful that some technologies can be used toward both positive and negative ends, ethical and unethical means, for productive and counter-productive objectives. There is a strong socio-ethical component running throughout the whole study in an attempt to harmonize opposing forces. For foundation principles on ethics in information technology and social informatics see: Tavani, 2007; Quinn, 2006; Morris & Zuluaga, 2006; Reynolds, 2006; Edgar, 2003; Hester & Ford, 2001; Ermann, Williams, & Shauf, 1997; Kling, 1996.
THE SYSTEMS OF INNOVATION CONCEPTUAL FRAMEWORK
Traditionally studies in innovation have followed one of two theories, the neo-classical or the more recent evolutionary. Compare for instance the works of Schumpeter (1934) with those of Nelson and Winter (1982). Neo-classical economic theory focuses on the production function as the major indicator of product/process innovation. On the other hand, the evolutionary theory of innovation is characterized by the concepts of reproduction, variety and selection (Andersen, 1997, p. 175). Many consider that the former theory has diminished in usefulness as a tool for investigating modern product and process innovations. Among its primary limitations is that technological change is treated as an exogenous factor (Edquist, 1997, p. 16). The more recent evolutionary theory of innovation has become increasingly accepted in that it is an interdisciplinary approach with the ability to bring within a “...single framework the institutional/organizational as well as cognitive/cultural aspects of social and economic change” (Carlsson, 1995, p. 23). It is this conceptual framework which will be used to set the system bounds of this present research.
Founded on the principles of evolutionary theory is the systems of innovation (SI) approach. SI is a conceptual framework rather than an established theory, yet it is the framework that most empirical innovation investigations conducted in the 1990s have followed. Researchers in Europe, Asia and North America have used this approach to study technology, especially advanced technologies. Since the early 1990s, national, regional, sectoral and technological systems investigations in innovation have shifted from a product-focused view to a view that incorporates the whole process of innovation, including the institution, organization and market orientation. It is in this light that the research will be conducted, deviating from the norm only in that a micro-level investigation focusing on the auto-ID industry will be conducted. While other schools of thought are presently emerging, particularly in the field of information technology methodologies and socio-technical theory, none offer such a complete interdisciplinary understanding of technological change. “The systems of innovation approach also allows for the inclusion not only of economic factors influencing innovation but also of institutional, organizational, social and political factors” (Edquist, 1997, p. 17). This will allow for the investigation of previously neglected material important to understanding auto-ID innovation.
The available SI literature helps the researcher to seek out specific references and sources that are relevant to the dimensions of innovation, for instance, what is meant by the factor “organizational” as opposed to “institutional” and “economic”. For example, the key words searched for three of the most important SI dimensions considered in the investigation included:
§ “Organizational: public organizations, policy, political bodies, regulatory agencies, organizations for higher education, technology support entities (e.g. training), patent offices, standards setting organizations, consulting agencies, knowledge production, universities, organizations with formal structures, explicit purpose, players or actors, other firms
§ Institutional: norms, habits, practices, routines, laws, interaction, often no specific purpose, form spontaneously, relations between groups, research and development links, consumer reactions, conflicts and cooperation, reduction in uncertainty, technical standards, rules of the game, framework conditions
§ Economic: infrastructure, physical infrastructure, knowledge infrastructure, standards, formal knowledge, tacit knowledge, explicit knowledge, research councils, standard setting organizations, libraries, databases, skilled/technical personnel, routine, industry associations, conferences, training centers, trade publications, research laboratories, public agencies.”
Throughout chapters five to nine entire sections are dedicated to discussing SI dimensions. Headings such as “Committees, Subcommittees and Councils”, “Public Policy”, “Clusters of Knowledge and a Growing Infrastructure”, “Setting Standards”, “A Patchwork of Statutes” and so forth, can be found to reflect direct SI concepts. In concluding chapters, key terms from evolutionary theory are used to discuss patterns and trends in the auto-ID industry.
Qualitative research expert John Creswell (1998, p. 15) emphasizes the requirement for exploratory research to possess this “holistic picture”. He believes this whole view “…takes the reader into the multiple dimensions of a problem or issue and displays it in all of its complexity” (Creswell, 1998, p. 15). In fact, what attracts researchers to SI is that it is indeed a “holistic and interdisciplinary” approach which “encompasses all or most determinants of innovation” (Edquist, 1998a, p. 20). Creswell (1998, p. 13) himself believes that it is the application of appropriate frameworks that “hold qualitative research together”. He uses the following metaphor to convey the importance of frameworks: “I think metaphorically of qualitative research as an intricate fabric composed of minute threads, many colors, different textures, and various blends of material. The fabric is not explained easily or simply. Like the loom on which the fabric is woven, general frameworks hold qualitative research together.”
Case Studies and Usability Context Analyses
Due to the exploratory nature of this research, the most appropriate methodology to use is that of multiple embedded case studies. The five auto-ID technology case studies form the main unit of analysis: bar codes, magnetic stripe cards, smart cards, biometrics, radio-frequency identification (RFID) tags and transponders. The order in which the cases are presented is chronological in terms of the way one technology has impacted on the innovation and diffusion of subsequent technologies. This historical perspective helped to draw out the pattern of technical change that occurred in auto-ID since its commercial introduction. Chapter three especially is dedicated to tracing the historical perspective of identification, from manual to automatic. In fact, the whole study has a historical element attached to it. Similarly, in the research conducted by Edquist et al. (1998, p. 17) “…case studies within the sub-project employed a historical approach, and many covered processes of technological development spanning several decades.”
Industry dynamics unfold over time, thus history is very important. Coincidentally, the chronological manner in which the devices have been presented also corresponds to a growing level of technological invasiveness: from bar codes attached to non-living things, to magnetic-stripe cards and smart cards carried by humans, to the biometrics of humans, to RFID tags and transponders implanted in animals and people. The researchers describe this development as the “human evolution”. The term is derived from numerous sources; however, the two phrases ‘Human Metaphor’ and ‘New Age Systems’ from Tren (1995) have influenced the researchers the most. Auto-ID was initially developed to identify packaged goods at the checkout counter; now it is being used increasingly to monitor and track animals and humans.
The sub-unit of analysis is the usability context, which can be considered a context within which a given technology can be applied. For each of the technologies, space is dedicated to two main usability contexts. For example, in chapter 7 on smart card innovation, the telecommunications usability context is studied as is health care. It is well-known in the auto-ID industry that these contexts as applied to smart cards have led to widespread diffusion of the technology. Traditionally, retail has also been synonymous with bar code technology and financial services with magnetic-stripe card technology. As there were five auto-ID technologies chosen for the research, a maximum of ten electronic commerce (EC) applications were potentially feasible within the scope of this study. The ten applications cover a wide variety of vertical sectors specifically to address the growing pervasiveness of auto-ID technology. Within each application area (e.g. telecommunications) there can be literally thousands of product innovations, e.g., pre-paid telephone cards, subscriber identity module (SIM) cards, virtual private network (VPN) cards, cable television (CATV) cards. Several innovations are documented within each sub-unit. The number of different product innovations discussed varies depending on the available literature and space constraints. It should also be pointed out that the innovation system studied is ‘supranational’ (i.e. global), concentrating on the technological system rather than the geographical dimension. Location-based services and respective applications, while not strictly automatic identification innovations, were added to the study after the bulk of the research was completed in 2003.
Data Collection and Data Analysis
This study adopts a qualitative research strategy (Flick, 2002) that uses some elements of descriptive research to enhance the central usability context analyses. These analyses are similar to case studies in that they investigate “a contemporary phenomenon within its real life context when the boundaries between phenomenon and context are not clearly evident” (Yin, 1998, p. 123). They similarly use multiple sources of evidence; however, they are differentiated on the basis of the unit of analysis. In a usability context analysis methodology, units are not individuals, groups or organizations but applications or application areas for a product, where ‘product’ is defined as “any interactive system or device designed to support the performance of users' tasks” (Thomas & Bevan, 1996). The results of multiple analyses are more convincing than a singular study, and the broad themes identified cover the major fields of current auto-ID development. Further, the usability context analyses in this study are supplemented by a discussion of surrounding social, legal and ethical ambiguities. By this means, the addition of a narrative analysis to the methodology ensures a thorough investigation of usage and context (Masters & Michael, 2007).
Traditionally, books in the field of auto-ID contain a brief historical introduction to one or more technologies and give examples of applications without going into too much depth. Auto-ID books show static representations of technologies at a given point in time; however, they are useful in that they make the researcher aware of the incremental changes that have occurred over the years. They also familiarize the researcher with the more important auto-ID definitions and concepts and raise some very important issues. General computing or engineering journal articles and reports serve much the same function as books but with the advantage that they are more frequent and up-to-date pieces of research. Auto-ID industry articles are also able to focus on particular aspects of the technology and are usually written by experts who have had professional experience in the field. There were also the specific auto-ID journals and magazines which provided industry insights that could not be found elsewhere. These were excellent and reliable sources of evidence that included relevant industry contact names and telephone numbers for further investigation.
Conference proceedings are also particularly useful in a study such as this that is exploratory and requires empirical evidence to justify its findings. A researcher can expect to find in conference proceedings information about the newest auto-ID innovations. Leading-edge case studies and surveys are usually compiled by consultants who are at the forefront of the industry and have had real-life experience implementing auto-ID solutions. Press releases are also crucial; though brief, they are a good way of tracking developments in specific product innovations throughout the year. One criticism of press releases is that they are sometimes written by product marketers or communications employees who have the interests of the corporation at heart. Nevertheless they do indicate changes that are occurring in auto-ID.
Newspaper articles about auto-ID are usually not technical in nature and are often written by journalists who do not have experience in the field. However, they do act to raise issues that are not dealt with in mainstream journals and magazines. They have predominantly reported on the social implications of the technology with a view to capturing the interests and imaginations of readers. Yet while one must be careful to separate sensational material from scientific fact, it does not mean that popular material cannot be used in an investigation such as this. Often these articles are surprisingly up-to-date and offer different perspectives than would otherwise be found in scientific journals. The popular media (newspapers, radio, television) have long been used as an open forum to gauge political and social responses to technological developments before they are actually introduced. A classic example of this is the Australia Card debate of 1987. All forms of media were heavily involved in the debate; from front page headlines, to letters to the editor, to polls taken during current affairs shows, to talk-back radio comments. See Smith (1989), who gave a first-hand account of the Australia Card and the story of its defeat, especially chapter 8 titled ‘[t]he day of the media’ and chapter 9 titled ‘[t]he role of the press’. In the latter, Smith (p. 150) writes: “[i]f one accepts the Australia Card as a matter of great importance, the bringing about of its demise… must rank as foremost in the achievements of ordinary people. And the events of September would not have occurred but for the part played by the media, and in particular the press… in my opinion the role of the press was paramount… The newspapers responded to the groundswell of public opinion.”
The archival records used in this study are in the form of electronic information sources, publicly available on the Internet. Web site information is a newfound source of evidence that many researchers have simply ignored in the past. At each web site it is commonplace to find a myriad of press releases dating back to the mid-90s, a historical overview of the technology, product-specific technical information, case studies of the latest applications, as well as customer testimonials. The information on these web sites has been largely sidelined by researchers in the field but should be given attention on its individual merit. It is here that many references were gathered with respect to the auto-ID trajectory. In this study auto-ID web sites were chosen after being identified in the documentation collected or by querying a variety of search engines. These web sites include: auto-ID companies such as manufacturers, value-added resellers (VARs) and system integrators; customers such as private organizations; auto-ID research centers such as universities; service providers; and auto-ID-related associations. By visiting web sites, further electronic links to other sites containing valuable information were also found.
Company web sites are very informative, containing a huge amount of information that cannot be found elsewhere. Private companies, organizations, even government agencies are now placing internal articles, product technical specifications, marketing brochures, whitepapers, press releases and other auto-ID information on the Internet for wider readership and greater accessibility to employees and customers. It is estimated that around 2500 web sites were visited by the researchers over a period of five years between 1998 and 2003. At least 200 of these web sites were exclusive auto-ID technology providers. In the field of technological innovation especially, the Internet is an invaluable data gathering tool. Patent databases can be searched, government policies reviewed, standards bodies referenced and academic research laboratories consulted for future directions, among many other capabilities. Wherever applicable, the content analysis tool Leximancer was used in analyzing large volumes of data in digital format.
The researchers used a variety of means to identify relevant Internet sites. Initially, generic searches were conducted based on the key words defined by major categories, such as “bar code”, “magnetic-stripe card” etc. A variety of popular search engines were used, such as Yahoo! and Google. The hits returned were then examined for relevance and accuracy. The researchers periodically performed these searches and downloaded files in text, HTML and, most often, Microsoft Word, PowerPoint and PDF formats, storing them in digital folders with meaningful names. With each periodic search performed, categories were refined and new subcategories defined. The key words used to search became more targeted and granular with each iteration, and instead of thousands of web links being returned, a few hundred would result (Michael, Fusco & Michael, 2008, p. 1192).
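The search-and-categorize procedure described above can be expressed as a simple loop. The following sketch is purely illustrative: the category names, keyword lists and file-type filter are invented for the example and do not reproduce the researchers' actual instrument.

```python
# Hypothetical sketch of iterative search categorization; all category
# names, keywords and the file-type filter below are illustrative.
TARGET_FORMATS = {".txt", ".html", ".doc", ".ppt", ".pdf"}

def categorize_hits(hits, categories):
    """Bucket search-engine hits into named category folders.

    hits: list of (url, title) tuples returned by a search.
    categories: dict mapping folder name -> list of key words.
    Returns a dict mapping folder name -> list of matching URLs.
    """
    folders = {name: [] for name in categories}
    for url, title in hits:
        ext = "." + url.rsplit(".", 1)[-1].lower()
        if ext not in TARGET_FORMATS:
            continue  # skip file formats that were not archived
        text = title.lower()
        for name, keywords in categories.items():
            if any(kw in text for kw in keywords):
                folders[name].append(url)
    return folders

hits = [
    ("http://example.com/barcode-history.pdf", "A history of bar code symbologies"),
    ("http://example.com/magstripe.html", "Magnetic-stripe card standards"),
    ("http://example.com/cat-video.avi", "Funny cats"),
]
categories = {
    "bar_code": ["bar code", "barcode"],
    "magnetic_stripe": ["magnetic-stripe", "magstripe"],
}
result = categorize_hits(hits, categories)
# result["bar_code"] -> ["http://example.com/barcode-history.pdf"]
```

Refining the keyword lists between iterations, as the researchers did, simply means re-running such a loop with more granular `categories`.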
The guidelines for which web pages were included in the data collection were not inflexible; however, there were several overriding controls the researchers used. First, there had to be an author of the web page or site, whether this was an individual, group or company. Second, the content of the web page had to be accurate. Accuracy was established by cross-referencing similar web pages or documentation on the exact same topic(s). Third, a date on the material viewed also added weight to what was being conveyed, as did a date for the period the web site was last updated (Yule & Hewson, 2003, pp. 11f). As much as was possible the researchers attempted to verify the authority of documents where content or sourcing was potentially dubious.
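The three overriding controls (identifiable author, cross-referenced accuracy, visible date) could be written down as a screening rule. The sketch below is a hypothetical illustration only; the field names and the strictness of the date test are assumptions, since the original guidelines were expressly "not inflexible".

```python
def passes_screening(page, corpus):
    """Apply the three inclusion controls to a candidate web page.

    page: dict with 'author', 'topic', 'claims' (a set of statements)
          and optional 'date' keys -- hypothetical field names.
    corpus: list of already-collected pages used for cross-referencing.
    """
    # Control 1: the page or site must have an identifiable author.
    if not page.get("author"):
        return False
    # Control 2: accuracy, established by cross-referencing pages on the
    # exact same topic for at least one corroborating claim.
    corroborated = any(
        other["topic"] == page["topic"] and page["claims"] & other["claims"]
        for other in corpus
    )
    if not corroborated:
        return False
    # Control 3: a date adds weight; this sketch treats it as required.
    return bool(page.get("date"))

corpus = [{"author": "AIM Global", "topic": "rfid",
           "claims": {"passive tags need no battery"}, "date": "2001"}]
candidate = {"author": "Acme Corp", "topic": "rfid",
             "claims": {"passive tags need no battery", "read range ~1m"},
             "date": "2002"}
accepted = passes_screening(candidate, corpus)    # all three controls met
anonymous = passes_screening(
    {"topic": "rfid", "claims": {"x"}, "date": "2002"}, corpus)  # no author
```

In practice the researchers applied these controls with judgment rather than mechanically, but the rule captures the order in which the evidence was weighed.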
Apart from the exhaustive searches that were performed on the major categories, auto-ID company marketing lists were publicly obtained from magazines such as the Automatic ID News, individual company web sites were searched, and further recommended links on these sites were followed through. It soon became apparent to the researchers which sites were considered significant to the industry (e.g. AIM Global), as numerous companies referenced the same link on their respective web sites. In addition, the experienced Internet researcher who has spent thousands of hours searching for relevant online material is able to quickly discern between web content that is of value to the investigation and that which should be set aside.
Validating the Outcomes with Interviews
Semi-structured interviews were chosen to validate the outcomes of the secondary data collection completed in 2003. It is important to note that the interviews were conducted by Katina Michael and M.G. Michael almost four years after the initial data collection of documents, archival data, and e-research. They took place between 12 October 2006 and 11 December 2007. In total, seven interviews were conducted to shed light on the current state of development in the auto-ID industry and emerging location-based services. The interviews could have been used as individual case studies, with the interviewee being the main unit of analysis, but in the context of this study the interviews were used to validate the authors’ projections of the auto-ID trajectory.
The choice of academic interviewees was based on several factors, including: (i) the main field of research of the individual researcher, (ii) the track record of the individual researcher, and (iii) the need for the study to explore interdisciplinary perspectives. Telephone and in-person interviews in Australia, the United Kingdom, and the United States were conducted with Professor Allan Brimicombe of the University of East London, Professor Christofer Toumazou of Imperial College, Professor Ian Angell of the London School of Economics and Political Science and Professor Kevin Warwick of the University of Reading. The interviews varied in length from thirty-five minutes to seventy-five minutes.
As the study sought to emphasize the role of citizens in the auto-ID and LBS innovation process, three interviews were also conducted with members of the general public. These interviewees were also handpicked and could be considered key informants in the debate over the social implications of technology. Mrs Judy Nachum, Mr Amal Graafstra and Mr Kenneth Lea represent three very diverse perspectives: historical, techno-ethical, and life-world. These interviews varied in length between sixty and one hundred and twenty minutes. Mr Graafstra commented that it was the most comprehensive interview he had ever given to any academic or journalist. He wrote in his blog: “[w]e covered implants, automatic identification, location based services, and biometrics… with a bit of security and privacy issues sprinkled in for flavor. Basically, this phone interview is the current state of my thoughts on these subjects all laid out in conversation. She thanked me for my time, but frankly I was glad to get it all out… maybe now when someone wants to interview me for a magazine or newspaper or something, I can point them to the recording and if they have any more questions to shoot me an email” (Graafstra, 2007). In comparing the sentiments of the researchers with those of the general public, great synergy can be found, not only regarding the future projections made by the authors on the auto-ID trajectory but on those specific questions we need to critically address at the very least during the course of the next decade.
Each of the interviewees was asked a different set of questions related to their specific field or area of concern, though there were some ‘typical’ questions through which common threads of discernment could be traced. The level of technical response was commensurate with the background of the interviewee. A brief biography of the interviewee is therefore included before each interview transcript is presented. Each interview serves a unique role in the study, complementing specific chapters in the book. A key words definition section is also included to help readers understand specialist concepts discussed in each interview. Nevertheless, every interview is predominantly preoccupied with questions related to the trajectory of auto-ID and LBS technology. Of emphasis in most of the interviews was the question of microchip implants for humans, the ability to locate humans, and the socio-ethical implications of the technology. It was particularly reassuring when Professor Toumazou and Professor Warwick supported the concept of Electrophorus put forward to them by the authors.
Professor Allan Brimicombe is the Head of Geo-Information Studies, specializing in spatial data analysis, data mining and location-based services. Professor Kevin Warwick is a professor of cybernetics renowned for his Cyborg 1.0 and Cyborg 2.0 chip implant experiments. Professor Christofer Toumazou is the director of the New Institute of Biomedical Engineering at Imperial College and founder of the company Toumaz, specializing in health monitoring devices. Professor Ian Angell, department convener of the Information Systems and Innovation group at LSE, is a well-known expert in the global consequences of information and communication technologies (ICT), specializing in national IT policies. Mrs Judy Nachum is a WWII Holocaust survivor. Mr Amal Graafstra is a do-it-yourself hobbyist and entrepreneur who has two RFID tag implants, one in each hand. Mr Kenneth Lea is a carer for a sufferer of Alzheimer’s disease. The interviews have not been explicitly analyzed; they stand in their original complete transcription form, serving as primary evidence in their own right.
A Note on Forecasting
Since one of the aims of this study is to “characterize” as well as to “predict” the path of auto-ID, some space must be given to that body of literature encompassing the prediction of future events. Braun (1995, p. 133) writes that “[f]orecasts do not state what the future will be... they attempt to glean what it might be.” In this study to ‘predict’ means to look at ‘past’ and ‘present’ trends and to use these to provide a road map of future possibilities. Such a projection can take the form of an extensive technology assessment (TA) or technology forecast. TA as defined by Braun (1995, p. 129) “...is the activity of describing, analyzing and forecasting the likely effects of a technological change on all spheres of society, be it social, economic, environmental or any other.” TAs are usually armed with their own methodologies and are conducted over a short period involving a group of experts, government policy officials, and other interested parties from the field. Although the scope of this research did not allow for a genuine TA to be conducted, it did allow for auto-ID technology forecasting. “Here the emphasis is on predicting the development of the technology and assessing its potential for adoption, including an analysis of the technology’s market” (Westrum, 1991, p. 328). It was not the intention of the researchers to predict extraordinary things; it was to predict with the use of reliable evidence that was accessible. Kaku (1998, p. 14) advises, “[i]n making predictions about the future, it is crucial to understand the time frame being discussed, for, obviously, different technologies will mature at different times… These are not absolute time frames; they represent only the general period in which certain technologies and sciences will reach fruition”.
Auto-ID forecasts may not eventuate but are still more likely to happen than ‘predictions in their pure form’, and for this reason they are more valuable. Even the founder of Microsoft, Bill Gates (1995, p. 274), accepted that his predictions may not come true. But his insights in The Road Ahead are to be commended, even though they are broad. “The information highway will lead to many destinations. I’ve enjoyed speculating about some of these. Doubtless I’ve made some foolish predictions, but I hope not too many.” In the quest to prove or disprove forecasts or predictions, “[s]cientific understanding can lead to practical uses. With the first such application, the quest for further understanding intensifies, leading to even more advanced applications” (Queisser, 1985, pp. viif). Allaby (1996, p. 206) writes “[f]orecasts deal in possibilities, not inevitabilities, and this allows forecasters to explore opportunities.” In speculating about the next 500 years Berry (1996, p. 1) writes, “[p]rovided the events being predicted are not physically impossible, then the longer the time scale being considered, the more likely they are to come true… if one waits long enough everything that can happen will happen.”
Thus the term “forecaster”, rather than such loaded terms as “futurist”, “visionary” or “secular prophet”, is to be preferred. Someone who predicts, at any rate in the classical sense of the foreknowledge of future events, is being prophetic. In the traditional meaning, the Seer of Patmos, the author of the Book of Revelation (c. CE 95), can be considered a prophet, a predictor of events, but a modern-day forecaster like Nicholas Negroponte, for example, is not being prophetic. It can be said that there are many forecasters and very few prophets. History is proof enough, vaticinium ex eventu notwithstanding! Forecasters, as will be seen in the brief review of works below, usually use trends or patterns or present-day findings to make projections. Forecasters are more likely to make predictions about new innovations rather than new inventions. For the greater part, they raise challenging, thought-provoking issues about how existing inventions or innovations will impact society. They give scenarios for the technology’s projected pervasiveness, how it may affect other technologies, what potential benefits or drawbacks it may introduce, how it will affect the economy, etc. And it is here that a robust framework like the systems of innovation approach can assist a researcher in making predictions, as it looks at the whole system, not just a single fragment.
Michio Kaku (1998, p. 5) argued “that predictions about the future made by professional scientists tend to be based much more substantially on the realities of scientific knowledge than those made by social critics, or even those by scientists of the past whose predictions were made before the fundamental scientific laws were completely known”. He believes that among the scientific body today there is a growing concern regarding predictions that for the greater part come from consumers of technology (writers, sociologists etc.) rather than those who shape and create it. Kaku is of course correct insofar as scientists should be consulted as well, since they are the ones actually making things possible after discoveries have occurred. But to these researchers a balanced view, encompassing the various perspectives of different disciplines, is extremely important. In the 1950s for instance, when technical experts forecasted improvements in computer technology they envisaged even larger machines, but science fiction writers predicted microminiaturization. They “[p]redicted marvels such as wrist radios and pocket-sized computers, not because they foresaw the invention of the transistor, but because they instinctively felt that some kind of improvement would come along to shrink the bulky computers and radios of that day” (Bova, 1988 quoted in Berry, 1996, p. 18). The methodologies used as vehicles to predict in each discipline should be respected. The question of who is more correct in terms of predicting the future is perhaps the wrong question. For example, some of Kaku’s own predictions in Visions can be found in science fiction movies dating back to the 1960s.
Prominent Technology Forecasters
Forecasters have diverse backgrounds. The contemporaries include a long list of scientists, engineers, physicists, biologists, mathematicians, entrepreneurs, lawyers, economists, geographers, sociologists, historians, philosophers, religious thinkers, science fiction writers, culture critics, ethicists and others. While all of them cannot be mentioned here, some of the more prominent and representative ‘technology-focused’ forecasters (i.e. automated machinery, computers, networks, digital media technologies, artificial intelligence, etc.) and their important works include: Ellul (1964), McLuhan (1962; 1964), A. C. Clarke (1968), Toffler (1970; 1981), Minsky (1985), Moravec (1988, 1999), Kelly (1994), Gates (1995), Negroponte (1995), Kaku (1998), Cochrane (1999), Mann (2001), Warwick (2002), Crichton (2006), and Dvorsky (2009). In making predictions these forecasters are required not only to draw on their own expertise, but also to utilize sources outside their own disciplines in order to effect a more complete picture of their projections (and to avoid potentially embarrassing errors). Of topical relevance here, in the bringing together of seemingly disparate fields, is William A. Stahl’s (1999) God and the Chip: Religion and the Culture of Technology. In providing evidence for the likelihood of their future predictions, forecasters often use the work of other forecasters to support their own positions. These works also track the changes that have occurred over time, setting their findings in the context of larger events in history, and then making predictions. See especially the brilliant timeline compiled and presented by Ray Kurzweil (1999, pp. 261-280) in The Age of Spiritual Machines and his further considerations of our destiny in The Singularity is Near (2005).
Often forecasters need to use an interdisciplinary approach to successfully bring together related projections. For instance, the founding members of the Media Lab were made up “of a filmmaker, a graphic designer, a composer, a physicist, two mathematicians, and a group of research staff who, among other things, had invented multimedia in preceding years. We came together… [t]he common bond was not a discipline, but a belief that computers would dramatically alter and affect the quality of life through their ubiquity, not just in science, but in every aspect of living” (Negroponte, 1995, p. 225). Professor Steve Mann (2009) is possibly our greatest living example of this diverse talent: inventor, engineer, developer, philosopher, filmmaker, artist, instrumentalist, author, photographer, actor, and much more. This resonates with Toffler’s (1970, p. 463) assertion that the “…world’s biggest and most tough-minded corporations… today hire intuitive futurists, science fiction writers and visionaries as consultants. A gigantic European chemical company employs a futurist who combines a scientific background with training as a theologian. An American communications empire engages a future-minded social critic…” Seeing the same problem from different perspectives is crucial, and pondering the future armed with skills in completely different disciplines offers unique insights.
A spate of publications predicting future technical breakthroughs appeared prior to the onset of the new millennium. Most of these touched upon topics to do with advancements in computer technology, cybernetics, economic change, cloning, and space exploration. The following works are quite challenging in terms of the predictions they present: Knoke (1996), Paul and Cox (1996), Berry (1996), Stork (1997), Robertson (1998), Cetron and Davies (1998), Gershenfeld (1999), Johnscher (1999), Canton (1999), Rantala and Milgram (1999), Kurzweil (1999, 2005).
From “Electronic Banks” to “Digital Money”
When Jacques Ellul predicted the use of “electronic banks” in his ground-breaking book, The Technological Society (1964, p. 432), he was not referring to the computerization of financial institutions, ATMs or electronic commerce (EC). Rather it was in the context of the possibility of the dawn of a new entity: “the coupling of man and machine”. Ellul (1964, pp. 395, 414, 430) was predicting that one day knowledge would be accumulated in electronic banks and “transmitted directly to the human nervous system by means of coded electronic messages… What is needed will pass directly from the machine to the brain without going through consciousness…” As unbelievable as this “man-machine” complex may have sounded at the time, over forty years later forecasters are still predicting that such scenarios will be possible by the turn of the 22nd century. Today, of course, they have a better understanding of the issues at hand and write with a clearer road map of how to get there. Kaku (1998, p. 112) observes that: “[s]cientists are proceeding to explore this possibility with remarkable speed. The first step in attempting to exploit the human brain is to show that individual neurons can grow and thrive on silicon chips. Then the next step would be to connect silicon chips directly to a living neuron inside an animal, such as a worm. One then has to show that human neurons can be connected to a silicon chip. Last… in order to interface directly with the brain, scientists would have to decode millions of neurons which make up our spinal cord”. The main obstacle at present is the complexity of the brain. “The brain’s wiring is so complex and delicate that a bionic connection with a computer or neural net is something that is, at present, seemingly impossible without causing permanent damage…. Nonetheless, this has not prevented some individuals from making certain conjectures about mind/machine links, which properly belong in the far future” (Kaku, 1998, p. 115).
One can trace the predictions of these forecasters over time and see that the predictions themselves are evolving as new discoveries are made to defend or attack a given forecast (see table 1). In like manner this is precisely how the present writers wish to make predictions about auto-ID; by using existing findings as a ‘launch-pad’ for building likely future scenarios (Perusco & Michael, 2007). The main point of this example is to demonstrate that some very credible persons have made what many may believe (or used to believe) to be some very incredible predictions about the future. But they can do this with authority because their predictions are supported by work that is being conducted in universities and commercial research laboratories around the world. Berry (1996, p. 5) is quite right when he comments that “[e]vents only seem extraordinary at the time when they are predicted, never after they have happened.”
In terms of auto-ID, several forecasters have made predictions about technologies and applications. Gates, Negroponte and Kaku all agree that auto-ID technologies, especially smart cards and biometrics, will have a great impact on society in the next twenty years. Gates places much emphasis on the wallet PC, Negroponte on wearable devices and Kaku on ubiquitous computing (see table 2). Simply by analyzing these three positions one can trace an evolution of ideas. From devices that one carries, to those that one wears, to those that are everywhere. Additionally in terms of auto-ID-related electronic commerce applications, all three forecasters are in agreement that these are going to become increasingly interconnected over the Internet. Thus, there is an underlying synergy between the predictions, especially in the context of trend curves and growth curves, which puts what the forecasters are saying inside the margins of the probable.
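Trend and growth curves of the kind alluded to here are commonly modeled as logistic (S-shaped) diffusion curves. The sketch below, with invented parameters (saturation level, midpoint year and growth rate are all assumptions for illustration), shows how such a curve is evaluated and extrapolated; it is not a model fitted by any of the forecasters cited.

```python
import math

def logistic_diffusion(t, saturation=100.0, midpoint=10.0, rate=0.5):
    """Cumulative adoption (% of market) at year t on an S-curve.

    saturation: ceiling of the market; midpoint: year of fastest growth;
    rate: steepness. All three parameters are invented for illustration.
    """
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Reading the curve at different points in time: adoption sits at half
# the ceiling at the midpoint year and approaches saturation long after.
early = logistic_diffusion(0)    # slow early uptake
mid = logistic_diffusion(10)     # inflection point: 50% of saturation
late = logistic_diffusion(30)    # near saturation
```

Extrapolating along such a curve is the quantitative face of using past and present trends as a launch-pad: early observations fix the parameters, and the curve is then read off for future years, keeping a forecast inside the margins of the probable.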
It is the intention of the authors, however, to offer more detailed evidence for the predictions regarding the auto-ID trajectory. For instance, Gates (1995, p. 77) stated that “[t]he smart card of the future will identify its owner and store digital money, tickets, and medical information…” Kaku (1998, p. 37f) agreed that “[t]here will be enormous economic pressure for people to convert to smart cards and digital money… In the future, smart cards will replace ATM cards, telephone cards, train and transit passes, credit cards, as well as cards for parking meters, petty cash transactions, and vending machines. They will also store your medical history, insurance records, passport information, and your entire family photo album. They will even connect to the Internet.” But the evidence they offer in their books for these particular outcomes is quite scarce. Of course the overall scope of their books does not allow for such inquiry, but this leaves a clear gap in the literature to be filled by this study, offering a unique contribution to knowledge.
Innovative Auto-ID and Location Based Services: From Bar Codes to Chip Implants takes the reader on a grand tour of more than just the future trajectory of automatic identification technologies; it also addresses the big-picture question of the social impact of technology. What began as a study within the confines of an information technology dissertation on innovation has grown into a detailed inquiry reaching out to multiple disciplines, including philosophy, ethics, culture, religion, sociology, political science, law and economics. The authors do not claim profound expertise in all of these fields, but their research is informed by decades of study in tertiary institutions beginning in the early 1980s, and further supported by decades of relevant practical work experience in information and communication technology, theology, policing and defense, and vocational instruction. There is much more to say on this topic, but alas one has to stop somewhere: Ars longa, vita brevis.
Adams, R. (1990). Sourcebook of Automatic Identification and Data Collection. New York: Van Nostrand Reinhold.
Allaby, M. (1996). Facing The Future: the case for science. London: Bloomsbury.
Andersen, E. S. (1997). Innovation systems: evolutionary perspectives. In C. Edquist (Ed.), Systems of Innovation: technologies, institutions and organizations (pp. 174-179). London: Pinter.
Baldwin, T. F., McVoy, D. S., & Steinfield, C. W. (1996). Convergence: integrating media, information & communications. London: Sage Publications.
Banbury, C. M. (1997). Surviving Technological Innovation in the Pacemaker Industry 1959-1990. New York: Garland Publishing.
Beauchamp, M. (2007). Association for Automatic Identification and Mobile Data Capture. AIM UK. Retrieved 31 January 2009, from http://www.aimuk.org/
Berge, P. (1987). IATA and Automatic Identification Standards for the Airlines. London: AIM.
Berry, A. (1996). The Next 500 Years: life in the coming millennium. New York: Gramercy Books.
Braun, E. (1995). Futile Progress: technology’s empty promise. London: Earthscan Publications Ltd.
Brodsky, I. (1995). Wireless: the revolution in personal telecommunications. London: Artech House.
Canton, J. (1999). Technofutures. New York: Hay House.
Carlsson, B. (Ed.). (1995). Technological Systems and Economic Performance: the case of factory automation. Dordrecht: Kluwer.
Cetron, M., & Davies, O. (1998). Cheating Death: the promise and the impact of trying to live forever. New York: St Martin’s Press.
Choney, S. (2008). Cell phone want list: Finding some GPS. msnbc.com. Retrieved 31 January 2009, from http://www.msnbc.msn.com/id/24728056/
Clarke, A. C. (1968). 2001: a space odyssey. London: Orbit.
Cochrane, P. (1999). Tips For Time Travelers: visionary insights into new technology, life, and the future on the edge of technology. New York: McGraw-Hill.
Cohen, J. (1994). Automatic Identification and Data Collection Systems. London: McGraw-Hill Book Company.
Covell, A. (2000). Digital Convergence: how the merging of computers, communications, and multimedia is transforming our lives. Rhode Island: Aegis Publishing.
Creswell, J. W. (1998). Qualitative Inquiry and Research Design: choosing among five traditions. London: Sage Publications.
Crichton, M. (2006). Next. USA: HarperCollins.
Das, R. (2008). NFC-enabled phones and contactless smart cards 2008–2018. Card Technology Today, 20(7/8), 11-13.
Dvorsky, G. (2009). Transhumanist perspectives on science, philosophy, ethics and the future of intelligent life. Sentient Developments. Retrieved 30 January 2009, from http://www.sentientdevelopments.com/
Edgar, S. L. (2003). Morality and Machines (2nd ed.). Toronto: Jones and Bartlett Computer Science.
Edquist, C. (1998). Findings and conclusions of ISE case studies on public technology procurement. Innovation Systems and European Integration (ISE)(April), 1-39.
Edquist, C. (Ed.). (1997). Systems of Innovation: Technologies, Institutions and Organizations. London: Pinter.
Ellul, J. (1964). The Technological Society. New York: Vintage Books.
Ermann, M. D., Williams, M. B., & Shauf, M. S. (1997). Computers and Ethics, and Society (2nd ed.). Oxford: Oxford University Press.
Flick, U. (2002). An Introduction to Qualitative Research. London: Sage.
Gabber, E., & Wool, A. (1998). How to prove where you are: tracking the location of customer equipment. Paper presented at the Proceedings of the 5th ACM Conference on Computer and Communications Security.
Gates, B. (1995). The Road Ahead. New York: The Penguin Group.
Gellersen, H.-W. (1999). Handheld and Ubiquitous Computing. Paper presented at the First International Symposium HUC ‘99 September 1999 Proceedings, Karlsruhe, Germany.
Gershenfeld, N. (1999). When Things Start to Think. New York: An Owl Book.
Gold, A. E. (1988). An overview of automatic identification technologies. In R. Ames (Ed.), Perspectives on Radio Frequency Identification: what is it, where is it going, should I be involved? (pp. 1-2 – 1-8). New York: Van Nostrand Reinhold.
Graafstra, A. (2007). Interview with Katina Michael. Amal Graafstra: An object at rest cannot be stopped! Retrieved 24 May 2007, from http://blog.amal.net/?p=36
Greenstein, S., & Khanna, T. (1997). What does industry convergence mean? In D. B. Yoffie (Ed.), Competing in the Age of Digital Convergence (pp. 201-226). Massachusetts: Harvard Business School.
Harmon, C. K., & Adams, R. (1989). Reading Between the Lines: An Introduction to Bar Code Technology. New Hampshire: Helmers Publishing Inc.
Hester, M., & Ford, P. J. (2001). Computers and Ethics in the Cyberage. New York: Prentice Hall.
Hewkin, P. F. (1989). Future automatic identification technologies. Colloquium on the Use of Electronic Transponders in the Automation, 6/1-6/10.
Jonscher, C. (1999). The Evolution of Wired Life. UK: John Wiley & Sons.
Kaku, M. (1998). Visions: how science will revolutionize the 21st century and beyond. Oxford: Oxford University Press.
Keenan, W. (1997). Components of the business proposition: the consumer demand proposition. In C. A. Allen & W. J. Barr (Eds.), Smart Cards: Seizing Strategic Business Opportunities (pp. 21-43). New York: McGraw-Hill.
Kelly, K. (1994). Out of Control: the new biology of machines, social systems and the economic world. Massachusetts: Perseus Books.
Kling, R. (1996). Computerization and Controversy: value conflicts and social choices. New York: Academic Press.
Knoke, W. (1996). Brave New World: the essential roadmap for the twenty-first century. New York: Kodansha International.
Kumar, N., & Vragov, R. (2009). Active citizen participation using ICT tools. Communications of the ACM, 52(1), 118-121.
Kurzweil, R. (1999). The Age of Spiritual Machines. New York: Penguin Books.
Kurzweil, R. (2005). The Singularity is Near. New York: Penguin.
Labrador, M., Michael, K., & Kuepper, A. (2008). Advanced Location-Based Services. Computer Communications, 31(6), 1053-1054.
LaMoreaux, R. D. (1998). Barcodes and Other Automatic Identification Systems. New York: Pira International.
Lindley, R. A. (1997). Smart Card Innovation. Australia: Saim.
Mann, S. (2009). WearComp.org, WearCam.org... Retrieved 31 January 2009, from http://genesis.eecg.toronto.edu/
Mann, S., & Niedzviecki, H. (2001). Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer. Toronto: Random House.
Marriot, M. (1987). International Bar Code Standards: Bar Code Symbologies, Standards and Technology Updates. London: AIM.
Masters, A., & Michael, K. (2007). Lend Me Your Arms: the Use and Implications of Humancentric RFID. Electronic Commerce Research and Applications, 6(1), 29-39.
McLuhan, M. (1962). The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press.
McLuhan, M. (1964). Understanding Media: the extensions of man. Cambridge: MIT Press.
Michael, K., & Masters, A. (2004, 18-21 July). Applications of human transponder implants in mobile commerce. Paper presented at the The 8th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, Florida.
Michael, M. G., Fusco, S. J., & Michael, K. (2008). A Research Note on Ethics in the Emerging Age of Uberveillance (Überveillance). Computer Communications, 31(6), 1192-1199.
Mills, S. (Ed.). (1997). Turning Away From Technology: a new vision for the 21st century. San Francisco: Sierra Club Books.
Minsky, M. (1985). Society of Mind. New York: A Touchstone Book.
Moran, R. (n.d.). Automatic Identification Systems: growth markets- a major enabling technology for the 90s. New York: Business Communications Co.
Moravec, H. (1988). Mind Children: the future of robot and human intelligence. Cambridge: Harvard University Press.
Moravec, H. (1999). Robot: mere machine to transcendent mind. Oxford: Oxford University Press.
Morris, E., & Zuluaga, C. (2006). Information Technology Issues: Ethical and Legal. Sydney: Pearson Education.
Moutray, R. E., & Ponsford, A. M. (2003). Integrated maritime surveillance: protecting national sovereignty. Radar, 385-388.
Negroponte, N. (1995). Being Digital. Australia: Hodder and Stoughton.
Nelson, R. R., & Winter, S. G. (1982). An Evolutionary Theory of Economic Change. Cambridge: Harvard University Press.
O'Gorman, L., & Pavlidis, T. (1999). Auto ID technology: From barcodes to biometrics. IEEE Robotics & Automation Magazine, 6(1), 4-6.
Paul, G. S., & Cox, E. D. (1996). Beyond Humanity: cyberevolution and future minds. Massachusetts: Charles River Media.
Pavlidis, T. (1996). Challenges in document recognition bottom up and top down processes. Paper presented at the IEEE Proceedings of ICPR ‘96.
Perusco, L., & Michael, K. (2007). Control, Trust, Privacy, and Security: Evaluating Location-Based Services. IEEE Technology and Society, 26(1), 4-16.
Perusco, L., Michael, K., & Michael, M. G. (2006). Location-based Services and the Privacy-Security Dichotomy. Paper presented at The Third International Conference on Mobile Computing and Ubiquitous Networking (ICMU 2006), London, UK.
Queisser, H. (1985). The Conquest of the Microchip: science and business in the silicon age. Cambridge: Harvard University Press.
Quinn, M. J. (2006). Ethics for the Information Age (2nd ed.). Sydney: Pearson International.
Radinger, W., & Goeschka, K. M. (2002). A definition of convergence in the area of information and telecommunication technologies. Paper presented at the Conference on Object Oriented Programming Systems Languages and Applications, Seattle, Washington.
Rantala, M. L., & Milgram, A. J. (Eds.). (1999). Cloning: for and against. Chicago: Open Court.
Reynolds, G. (2006). Ethics in Information Technology (2nd ed.). New York: Thomson Course Technology.
Robertson, D. S. (1998). The New Renaissance: computers and the next level of civilisation. New York: Oxford University Press.
Saxenian, A. (1994). Regional Advantage: culture and competition in Silicon Valley and Route 128. Cambridge: Harvard University Press.
Schumpeter, J. A. (1934). The Theory of Economic Development. Cambridge: Harvard University Press.
Schwind, G. (1990). Electronic codes for automatic identification. In R. Ames (Ed.), Perspectives on Radio Frequency Identification: What is it, Where is it going, Should I be Involved? (pp. 1-19 – 11-27). New York: Van Nostrand Reinhold.
Sharp, K. R. (1987). Automatic identification improves the bottom line. In R. Ames (Ed.), Perspectives on Radio Frequency Identification: What is it, Where is it going, Should I be Involved? (pp. 1-9 – 1-18). New York: Van Nostrand Reinhold.
Smith, E. (1989). The Australia Card: the story of its defeat. Australia: Macmillan.
Smith, I. G. (1990). AIM- an industry activity group for automatic identification. Computing & Control Engineering Journal, 11, 49-52.
Stahl, W. A. (1999). God and the Chip: religion and the culture of technology. Canada: Canadian Corporation for Studies in Religion.
Stork, D. G. (Ed.). (1997). Hal’s Legacy. Massachusetts: MIT Press.
Swartz, J. (1999). The growing ‘MAGIC’ of automatic identification. IEEE Robotics & Automation Magazine, 20-22, 56.
Tavani, H. T. (2007). Ethics & Technology: Ethical Issues in An Age of Information and Communication Technology. New York: John Wiley & Sons.
Thomas, C., & Bevan, N. (1996). Usability Context Analysis: A Practical Guide. Middlesex: National Physical Laboratory DITC.
Toffler, A. (1970). Future Shock. London: Pan Books.
Toffler, A. (1981). The Third Wave. New York: Bantam Books.
Tren, R. (1995). Trends in the Cards Industry. Andersen Consulting, 1-99.
Warwick, K. (2002). I, Cyborg. London: Century.
Westrum, R. (1991). Technologies and Society: the shaping of people and things. California: Wadsworth Publishing Company.
Yin, R. (1998). The Case Study Method As A Tool For Doing Evaluation. Current Sociology, 40(1), 123.
Yoffie, D. B. (Ed.). (1997). Competing in the Age of Digital Convergence. Massachusetts: Harvard Business School.
Yule, P., & Hewson, C. (2003). Internet Research Methods: a practical guide for the social and behavioral sciences. London: Sage Publications.