
Chapter XII: The Auto-ID Trajectory



This chapter considers the automatic identification (auto-ID) trajectory within the context of converging disciplines to predict the realm of likely possibilities in the short-term future of the technology. The chapter relies heavily on presenting a cross-section of research conducted primarily up until 2003 when the first commercial chip implant occurred, as a window to forecasting what kinds of technologies may become widely diffused by 2020. After showing the evolutionary development from first generation to third generation wearable computing, medical breakthroughs using implantable devices are documented. The findings of the chapter suggest that before too long, implantable devices will become commonplace for control, convenience and care-related applications. The paradigm shift is exemplified in the use of auto-ID, from its original purpose in identifying humans and objects to its ultimate trajectory with multifunctional capabilities buried within the body.



According to Siewiorek (1999, p. 82), the first wearable device was prototyped in 1961 at MIT (Massachusetts Institute of Technology) by Edward Thorp and Claude Shannon. The idea for the device came in 1955, in an attempt to predict roulette. However, the term “wearable computer” was first used by a research group at Carnegie Mellon University in 1991, coinciding with the rise of the laptop computer (early models of which were known as “luggables”). Wearable computing can be defined as: “anything that can be put on and adds to the user’s awareness of his or her environment… mostly this means wearing electronics which have some computational power” (Sydänheimo et al., 1999, p. 2012). While the term “wearables” is generally used to describe wearable displays and custom computers in the form of necklaces, tie-pins and eyeglasses, it is the opinion of the researchers that the definition should be broadened to incorporate PDAs (personal digital assistants), e-wallets, and other mobile accessories such as cellular phones and smart cards that require the use of belt buckles or satchels attached to conventional clothing.

Before the widespread diffusion of personal computers (PCs) and laptops it was auto-ID devices in the form of bar code cards, magnetic-stripe cards and smart cards that were ‘luggable’ and to some degree wearable with the aid of an external clip or fastener. In the case of contactless smart cards they could even be carried in a wallet or purse or in a trouser or shirt pocket. While they did not have the same processing power as PCs or laptops, auto-ID devices did point to a practical ideal, in terms of their size. IBM and other computer manufacturers were quick to catch on to the notion of wearable computing; their vision of a portable computer that could be worn instead of carried has been well documented. According to Phil Hester of IBM’s Personal Systems Group, the wearable PC, a hybrid device, would allow a user to freely walk around a building connected to a wireless network and perform all the day-to-day functions like sending emails, but with the added option of voice navigation/recognition (Wilcox, 1999, p. 1).

Wearable computing is about to reinvent the way we work and go about our day-to-day business, just like auto-ID devices did in the 1970s and 1980s. It is predicted that highly mobile professionals will soon take advantage of smart devices that will be built into their clothing so that they will be able to “…check messages, finish a presentation, or browse the Web while sitting on the subway or waiting in line at a bank” (Schiele et al., 2001, p. 44). And not just professionals but society at large is taking advantage of the latest gadgetry. MIT’s “Group-Media” are creating socially intelligent wearables for the following projects: The Jerk-O-Meter, MoodPhones / VibePhones, Elevator Rater, Human Interest-Meter, Speed Dating v2, Negotiations, and Movie Audience Reactions (Pentland, 2009).



Early prototypes of wearable computers throughout the 1980s and 1990s could have been described as outlandish, bizarre, abnormal-looking or even weird. For the greater part, wearable computing efforts have focused on head-mounted displays (a visual approach) that unnaturally interfered with human vision and made proximity to others cumbersome (Sawhney & Schmandt, 1997, p. 171). But the long-term aim of research groups is to make wearable computing inconspicuous as soon as technical improvements allow for it (figures 1 and 2). The end user should look as ‘normal’ as possible (Mann, 1997, p. 177). One need only consider the size of the first mobile phones in the early 1990s; they were the size of a small brick, were expensive, and very few people thought that widespread diffusion would be achieved. Yet today, numerous countries have reached in excess of 80 per cent penetration, which equates to a mobile phone for almost every adult in that country. As Cochrane (1999, p. 1) observed, “[t]oday, mobiles are smaller than a chocolate bar and cost nothing, and we can all afford them. And they are not bolted into vehicles as was originally conceived, but kept in pockets and hung on trouser belts.”

Today it is commonplace to find professionals and younger technology-savvy students not only carrying mobile phones but 3G-enabled notebooks, GPS-enabled PDAs and even miniature secondary storage units that can hold hundreds of gigabytes worth of data. To this list Starner (2001a, p. 46) adds a pager, electronic translator and calculator wristwatch. Starner even made the observation that “[s]ome people wear too many computers.” He noted that these separate computers have similar components, such as a microprocessor and memory. In other words, there is a fair amount of redundancy across the separate devices. Wearable computers of the future will integrate all these functions into the one unit. The hope of wearable device developers is that the capabilities will converge to such an extent that the user will not consider the mobile phone as separate from a PDA, or a PDA separate from a notebook. Nokia’s 6260 classic series is an example of this integration: it has a 5 megapixel camera with Carl Zeiss optics and flash, both HSUPA and HSDPA support, WiFi, aGPS, Bluetooth, a web browser, a music player and more. Global positioning system devices especially are becoming increasingly powerful. The Trackstick Pro (2009) is another example of powerful computing one can carry, used to geotag photos or create logs of routes taken. Trackstick Pro is now being marketed to a whole range of segments including: fleets, contractors, emergency services, commercial equipment, personal and business, sporting activities, government and national security. At a little over 350 Australian dollars, it is technology that can now be used for covert surveillance.


Case 1: Industrial Application

Wearable computers should not be considered solely as personal electronics; they are suitable for industrial purposes as well. Several companies like Symbol Technologies (now Motorola), Honeywell and Xerox have researched industrial wearable devices for over a decade, along with names completely devoted to this cause, including Xybernaut and ViA (figures 3-5). Perhaps one of the most well-known industrial uses of wearable computing is the United Parcel Service (UPS) case study. In 1995, UPS challenged Symbol Technologies “…to create a Wearable Data Collection device for their package loaders” (Stein et al., 1998, p. 18). Symbol’s goal “was to create a wearable system that increased efficiency and productivity through mobility and hands-free computing and scanning” (Stein et al., 1998, p. 19). After considerable feedback between users at UPS and Symbol, and evaluations of possible disease transmission given that the wearable computer would be in contact with the skin, the Wrist Computer was released in 1996. At one point Symbol was shipping about seventeen thousand units per month to UPS, such was the success of the product. What is interesting to note is that Stein et al. (1998, p. 24) report that “[t]he initial response from users who had been using hand-held computers was to not want to give up the wearable once they tried it.” Perhaps the same can be said for other wearable devices. How many individuals can do without their mobile phones today, or PDAs, or camera phones or recorders?



As wearable computing devices get smaller and smaller there has been a conscious effort to create an electronic wallet that combines the traditional wallet, the computer and communication technology. For some time many believed that the Mondex smart card system would revolutionize the way people exchanged money. AT&T was so convinced that it invested in developing an electronic wallet. The “Mondex Wallet allows users to perform on-line transactions and view balance and transaction information stored on their card” (Cooper, 1999, p. 87). The Mondex Wallet has not reached its potential diffusion rates, but this has more to do with market maturity than anything else. While the Wallet is not the sophisticated type of wearable device that Mann and others envision, it was an incremental step towards that vision. Swatch had also introduced an electronic wallet in the form of a wristwatch, known as Swatch Access. The wristwatch featured a “miniature antenna and a computer chip, similar to those used in conventional smart card payment systems. This allowed users to perform transactions using money stored on the chip” (Cooper, 1999, p. 87). Trials of the watch took place in Finland’s transport system. Another more sophisticated wristwatch solution, known as Digital Angel, “offered a unique combination of GPS, wireless Internet and sensor technologies” (ADS, 2002b). The all-in-one unit, which looks like a conventional watch, can monitor temperature, contains a boundary alert function and has a panic button feature. The versatility of the technology is seen in its wide range of formats and configurations, such as a pager-like device, necklace, pendant, bracelet, and even belt buckle (ADS, 2002b). In 2008, Williams reported that NTT DoCoMo was very close to prototyping a bio-sensing cell phone which carried a DNA chip. One main obstacle remained: how to get molecules from the user’s body to the cell phone. The idea of molecular communication was raised and preliminary testing took place at the University of Tokyo.


Case 2: Medical Application

Wearables have also found a niche market in medical applications. Hinkers et al. (1995, p. 470) describe a small wearable device that continuously monitors glucose levels so that the right amount of insulin can be calculated for the individual, reducing the incidence of hypoglycaemic episodes. Hinkers also predicted the use of automated insulin delivery systems, which are currently under development. Medical wearables even have the capability to check and monitor 26 different products in one’s blood (Ferrero, 1998, p. 88). Today medical wearable device applications include: “…monitoring of myocardial ischemia, epileptic seizure detection, drowsiness detection… physical therapy feedback, such as for stroke victim rehabilitation, sleep apnea monitoring, long-term monitoring for circadian rhythm analysis of heart rate variability (HRV)” (Martin et al., 2000, p. 44). Some of the current shortcomings of medical wearables are similar to those of conventional wearables, namely that the devices are still too large and too heavy. In addition, wearing the devices for long periods of time can be irritating due to the number of sensors that may need to be worn for monitoring. The gel applied for contact resistance between the electrode and the skin can also dry up, causing a nuisance. Other obstacles to the widespread diffusion of medical wearables include government regulations and the manufacturers’ requirement for limited liability in the event that an incorrect diagnosis is made by their equipment (Martin et al., 2000, p. 44). More recently the issue of privacy has been raised, especially for medical wearable devices applied within shared hospital facilities where access to results could be abused (Kargl, Lawrence, Fischer, & Lim, 2008).

It is worth noting how far we have come in just under ten years. Wearable devices are usually no longer clunky. In fact, consider Toumaz Technology’s Digital Plaster prototype or a current product, the Sensium Life Pebble TZ203002 (Toumaz, 2009). The Life Pebble enables continuous, auditable acquisition of physiological data without interfering with the patient’s activities. The device can continuously monitor ECG, heart rate, physical activity and skin temperature.



There are two things we carry with us everywhere we go: our clothes (such as undergarments, shirts, pants and accessories) and our actual bodies (composed of skin, muscles, nerves and water). Wearable computing experts have always sought a seamless and transparent way to introduce their high-tech devices. Many wearable computing developers believe the answer lies in distributing the equipment evenly throughout the body so that it does not feel excessively heavy for the end-user or look cumbersome. Known as “smart clothes” or “underwearables”, they will do more than keep you warm: “[w]ith the help of computers and special high-tech fabrics, smart clothes could send and receive information and adjust to give you what you need at any moment” (Kastor, 2000, p. 1). A research group in Belgium had been developing the “i-Wear” range (i.e. Intelligent Wear). Siddle (2000, p. 1) reported that the clothes: “will perform many of the current functions of mobile phones, computers and even hospital monitoring equipment… The company [i-Wear] says the range of tasks that the clothes will be able to perform is vast, from taking phone calls to keeping a check on the health of the wearer.”

While mass-scale commercial production of such clothes is probably two decades away, shirts with simple memory functions have been developed and tested. Sensors will play a big part in the functionality of smartware, helping to determine the environmental context, and undergarments closest to the body will be used to measure body functions such as temperature, blood pressure, heart and pulse rates. For now, however, the aim is to develop ergonomically-astute wearable computing that is actually useful to the end-user. Head-mounted displays attached to the head with a headband may have acted to prototype the capabilities of wearable computing, but they were not practical and definitely not attractive. Displays of the next generation will be mounted or concealed within eyeglasses themselves (Spitzer et al., 1997, p. 48). See here especially the incredibly imaginative work of Professor Steve Mann of the University of Toronto (Mann, 2009). Accessories like ear-rings, cuff-links, tie-pins and pendants are also considered wearables if they contain intelligence. The Gesture Pendant, for instance, can be used in an Aware Home, granting occupants the ability to be recognized and their activities interpreted to improve the quality of their life. The wearer has the ability to control different house elements like the lights, television, radio and telephone via simple hand gestures that are detected and interpreted by the smart pendant. The target audience for the Gesture Pendant is the elderly or disabled who suffer from particular ailments but who would still want to maintain their independence by living in their own homes. The device could also be used for medical monitoring over time. Georgia Tech’s Aware Home Research Initiative provides an excellent introduction, with scenarios for what is possible today (GVU, 2009).
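The pendant's interpretation step can be pictured as a simple lookup from a recognized gesture and target device to a house command. The gesture vocabulary and device names in the sketch below are illustrative assumptions, not Georgia Tech's actual command set:

```python
# Hypothetical gesture-to-command dispatch for an Aware Home controller.
# A recognized (gesture, target device) pair maps to a house command;
# anything unrecognized is safely ignored.
COMMANDS = {
    ("circle", "lamp"): "toggle lights",
    ("swipe_up", "tv"): "volume up",
    ("swipe_down", "tv"): "volume down",
    ("wave", "phone"): "answer call",
}

def interpret(gesture: str, target: str) -> str:
    """Map a recognized gesture aimed at a device to a house command."""
    return COMMANDS.get((gesture, target), "ignore")
```

In a real system the gesture recognizer would sit in front of this table, and the "ignore" default matters: an elderly or disabled wearer should never trigger an unintended action from a misread gesture.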


Case 3: Military Application

The military is paying particular attention to wearable computing developments. Combatants of the future may look like something out of a film like “Universal Soldier”. This should not be surprising, since as far back as the 1960s there were attempts to make a “Man Amplifier”: to grant a soldier the added help of an exoskeleton, a sort of first line of defense in protection of the mortal flesh. While the Man Amplifier was unsuccessful due to obvious technological limitations of the time, today systems like FREFLEX (Force Reflecting Exoskeleton) are being trialed to augment human strength characteristics (Repperger et al., 1996, pp. 28-31). The US Army, for instance, has been involved in trying to build a military uniform that utilizes wearable computing components. It is seeking a uniform that can make “…soldiers nearly invisible, grant superhuman strength and provide instant medical care… All this would be achieved by developing particle-sized materials and devices- called “nanotechnology”- nestled into the uniform’s fabric… Supercharged shoes could release energy when soldiers jump… Microreactors could detect bleeding and apply pressure… Light-deflecting material could make the suit blend in with surroundings” (LoBaido, 2001, p. 1).

This may sound highly exaggerated or Hollywood-esque, but it is not. A British company calling itself the Electronic Shoe Company has developed a pair of walking boots that can be used to power electrical equipment such as a mobile phone. Footwear could also be used to help orientate the soldier, leading them to specific targets through the safest possible route, with the capability of even detecting landmines. In the event of injury to a soldier it is hoped that smart shirts like the Sensate Liner (in which optical fiber is woven) can even help localize life-threatening wounds to the upper torso (Gorlick, 1999, p. 121). According to Kellan (2000, p. 1) each soldier would be equipped with a wearable computer, GPS locator and wireless connections to the military network. This would grant individuals the ability to send signals back to base camp in times of trouble, or for base camp to send new instructions to the soldier based on more up-to-date intelligence reports. It is not inconceivable for whole divisions to be redirected to areas of safety, minimizing the loss of life (for one side of the combatants at least). It is no surprise that the U.S. Army has major interests in areas like nanotechnology (Samuel, 2002). It is well-publicized that nanotechnology will play a major role in defense (Carstairs, 2008).



A new line of “wearables” is now emerging that does not quite fit the definition of the traditional wearable, which assumes a device is worn on the outside of the human body. Implantable devices such as RFID transponders cannot correctly be referred to as “wearables” because the component is not worn; rather, it is ingrained, embedded, entrenched in the human body. The implant device is more than an extension; it becomes one with the body, a seamless fusion between flesh and foreign object. Years ago, automated biometric recognition techniques were heralded as a coming together of humans and machines, but today we have something beyond a meeting point: we have the potential for a union of biological proportions on an evolutionary scale. The term “cyborg” seems to have been hijacked by science fiction novels and movies to mean “part machine, part human”. Saffo, director of the Institute for the Future, does not doubt that people may become a race of cyborgs, “part man and part machine”: “We put all sorts of implants in [our bodies] today,” says Saffo. “If we have metal hips, it only makes sense to have chips in, too” (Eng, 2002). But the definition of cyborg would be more relevant to bionics than to implantable devices.

The human who has been implanted with a microchip is an Electrophorus, a bearer of electric/electromagnetic technology. One who “bears” (i.e. a phorus) is in some way intrinsically or spiritually connected to that which they are bearing, in the same way an expecting mother is to the child in her womb. The root “electro” comes from the Greek word meaning “amber”, and “phorus” means “to wear, to put on, to get into”. To electronize something is “to furnish it with electronic equipment”, and electrotechnology is “the science that deals with practical applications of electricity”. The Macquarie Dictionary definition of electrophorus is “an instrument for generating static electricity by means of induction”, and the term “electrophoresis” has been borrowed here to describe the act that an electrophorus is involved in. McLuhan and Zingrone (1995, p. 94) believed that “…electricity is in effect an extension of the nervous system as a kind of global membrane.” The term electrophorus seems more suitable today than any other term, including “cyborg”.

So why the requirement for implantable devices when the same devices could apparently be worn? Two opposing arguments have come from the same institution. Chief futurologist Ian Pearson of British Telecom (BT) is not convinced that implants will take the place of wearable components, whereas ex-BT researcher Peter Cochrane is convinced otherwise. Pearson’s argument is that “[t]here is nothing you can do with embedded chips that you can’t do with wearable ones” (LoBaido, 2001, part 1, pp. 2f). Pearson does, however, believe in the pervasive nature of the chips, predicting that wearable identity chips would be implemented by 2006. And indeed they were, fully two years earlier than he had originally predicted. Only one year prior to this interview, Peter Cochrane told McGinity (2000, p. 17) that there “…will come a day when chips are not just worn around the neck, but are actually implanted under a human’s skin.” McGinity adds: “When I scoffed at such an idea as merely science fiction, Cochrane offered up that he himself would be testing out such a human chip and looked forward to the opportunity.”

And who could ever doubt such a possibility after Professor Kevin Warwick’s 1998 Cyborg 1.0 trial? After the microchip implant, Warwick was able to walk around his rigged-up building in the Cybernetics department at the University of Reading and be recognized as being “Kevin Warwick.” As he walked through the doorways, the radio signal energized the coil in the chip, produced current, and gave the chip the ability to send out an identifying signal (Witt, 1999, p. 2). Warwick and Cochrane are not alone in their efforts. Mieszkowski (2000, part 1, p. 2) writes: “[m]any theorists see people carrying embedded technology as mobile computing’s next “killer application”... Instead of just implanting machines into humans to reconstruct joints or regulate heartbeats, they imagine the addition of sensors and chips in bodies which will make people better, stronger and faster.” Those like Mr Amal Graafstra, however, may not be so interested in the “better, stronger and faster” scenario, but just enjoy tinkering on their computer, seeing how much further they can go each time they dabble with “fun-stuff,” and making life that little bit easier. What of the divide between the motivations of the three types of stakeholders: researcher, hobbyist and commercial organization?


The Role of Auto-ID

Shortly after the excitement of the Warwick implant (1998) wore off and Cochrane launched his Tips for Time Travelers (1999), Applied Digital Solutions (ADSX) was founded. The company first announced its VeriChip solution on December 19, 2001. RFID, traditionally used in contactless smart cards, tags and keys, and transponders interwoven into clothing, was now being marketed as a suitable identity verification chip for a variety of security, financial, emergency service and healthcare applications for humans. In a press release the company announced that the VeriChip would be “…available in several formats, some of which [could] be inserted under the skin” (ADS, 2002a, p. 2). The Chief Technology Officer (CTO) of ADSX told Scheeres (2002, p. 1) that “[t]he chip… is injected into the subject’s forearm or shoulder under local anesthesia during an outpatient procedure and leaves no mark.” Furthermore, the VeriChip was expected to sell for a low two hundred US dollars, with the Digital Angel service packaged at US$29.95 per month on a one-year minimum contract (Associated Press, 2002; Farrell, 2002). Scanners that could identify the VeriChip, very similar to those used to identify pet implants, would cost between one thousand and three thousand US dollars. More recently ADSX has begun to aggressively market its products, attracting a lot of publicity as both young and old have opted for the chip implant. The “Get Chipped™” promotion and the ChipMobile™ that roams the US have increased the awareness level of the general public. ADSX had scheduled visits to “recreation and stadium events, health clinics, nursing homes” among other locations (ADS, 2002c).


The Impact of Mobility

The added function of networking to wearable computing components and implantable devices has acted to create an extremely powerful platform for monitoring and tracking humans “anywhere, anytime”. Starner (2001b, p. 54) identified three network communication levels: (i) off the body to a fixed network (e.g. wireless-enabled wristwatch); (ii) between different wearable devices on the body (e.g. between intelligent eyeglasses and belt buckle); and (iii) near the body between the user and objects (e.g. between a gesture pendant and a television set). Location has always been an important attribute in people-centric applications but it is only now that the capability exists to query this data in real-time (Michael, 2004). Krikelis (1999, p. 13) calls this “context information” and this is exactly what is set to revolutionize daily consumer and business activities. Future fourth generation (4G) mobile services base their core value proposition around being able to retrieve this type of data.

A typical 4G service example could be as follows (Michael, 2002, p. 293): “An employee who works for a multinational company is traveling from Sydney to China and making a stopover in Singapore. While on his way to Sydney airport, the employee encounters a major traffic accident on the Harbour Bridge. Traffic stops to a standstill, while police and ambulance treat people at the scene. A camera on the bridge tracks all delays, alerting the roads and traffic authority (RTA). The RTA estimates that the delay will be in excess of two hours and sends this information to the central information bureau. The employee is alerted by the wireless service provider that they will most likely miss his flight and will have to stay at Sydney’s Airport Hilton overnight waiting for the next available flight which is scheduled to depart in the morning. The employee replies to the message and updates are made to his itinerary as detailed on his reply message. The panic of having to reorganize everything is removed from the traveler. Though he will end up missing the first meeting in Singapore, he is relieved with the almost instantaneous knowledge that he will be leaving Sydney in time for subsequent meetings.”

Throughout this scenario a number of smart devices are used to execute operations seamlessly. These may include an RFID device in the car of the employee traveling to the airport, a wireless mobile phone carried by the individual to send and receive information (either by voice or data), and a smart wristwatch which contains itinerary information about flights, hotels and forthcoming meetings. Somewhere amidst all this would be a GPS-enabled trigger that lets the respective service providers know where the individual is located and grants them the ability to calculate estimated times of arrival (Figure 6). This kind of service however would require the cooperation of numerous stakeholders, via web services that rely on workflow business process management standards and tools.
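The scenario above is essentially an event-driven workflow: a delay event from the roads authority is evaluated against the traveler's itinerary, and a rebooking is triggered only when the delay makes the flight unreachable. The sketch below is a minimal illustration under assumed names (DelayEvent, Itinerary, a 45-minute check-in buffer), not any provider's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DelayEvent:
    location: str
    estimated_delay_min: int

@dataclass
class Itinerary:
    flight: str
    departs_in_min: int
    hotel: Optional[str] = None

def handle_delay(event: DelayEvent, itinerary: Itinerary,
                 check_in_buffer_min: int = 45) -> Itinerary:
    """Rebook only when the road delay makes check-in impossible."""
    if event.estimated_delay_min + check_in_buffer_min > itinerary.departs_in_min:
        # Hypothetical rebooking: next morning's flight plus an airport hotel.
        return Itinerary(flight=itinerary.flight + " (next morning)",
                         departs_in_min=12 * 60, hotel="Airport Hilton")
    return itinerary

# The Harbour Bridge scenario: a two-hour delay against a flight
# departing in 90 minutes forces a rebooking and an overnight stay.
updated = handle_delay(DelayEvent("Harbour Bridge", 120),
                       Itinerary("SYD-SIN", departs_in_min=90))
```

The real coordination burden lies not in this decision rule but in the stakeholder integration the text notes: the roads authority, wireless carrier, airline and hotel must all exchange events through agreed web-service interfaces.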


Global Positioning System (GPS) Tracking

Having established the importance of network communications to wearable and implantable devices, let us consider the role of the Global Positioning System (GPS). Ferrero (1998, p. 87) ponders: “[i]magine GPS in your wallet, cell phone, or watch to tell you where you are.” Well, one does not have to imagine that any longer; there are services being offered right now, beyond smart car navigation systems. Companies like Wherify, ChildLocate, StarMax, SnapTrack, Gen-Etics, Pro-Tech, Sky-Eye and Digital Angel/ADSX are taking advantage of what GPS and other 2G wireless technologies have to offer and using them to track living and non-living things. In terms of people tracking, this is done for a variety of reasons including: child safety, reducing the incidence of kidnapping of high-profile persons, for those suffering from Alzheimer’s disease who may become disorientated, for those suffering from mental illness, for parolees, for prison inmates, for military personnel, for emergency services, or just for peace of mind (K. Michael & Masters, 2004). Wherify’s (2003) “GPS Locator for Children”, for instance, states that: “[c]hildren have a natural urge to explore. Parents have a natural desire to know their children are safe. That’s why Wherify created the world’s first Personal Locator to help you determine your child’s location in minutes. Wherify’s GPS Locator technology helps keep loved ones safe by combining Wherify’s patented technology with the U.S. Department of Defense’s multi-billion dollar Global Positioning System (GPS) satellite plus the largest 100% digital, nationwide PCS wireless network. So relax. Now you can have peace of mind 24 hours a day while your child is the high tech envy of the neighborhood!”

The watch worn by Wherify users contains a built-in pager, an atomic synchronized clock, an emergency 911 button, a lock button and an on-board GPS. One pitfall of the Wherify technology is that it can be seen, thus alerting a perpetrator to the possibility that their location will be found out. The evolutionary vision therefore is a technology that is fully implantable and cannot be seen by an attacker, or anyone else for that matter. “Enter the Digital Angel: according to CEO Richard Sullivan, the solution combines GPS wireless communications with biosensors, powered by body heat in the form of a dime-sized chip, which can be embedded in a watch, bracelet or medallion, even under your flesh” (Mieszkowski, 2000, part 2, p. 2). Now, given that GPS only works outdoors, other wireless systems must cater for in-building/in-container solutions, most of which take advantage of RFID tags and transponders, video cameras, sensors or infrared signals.
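The "boundary alert" function mentioned earlier reduces to a geofence check: compute the distance between the wearer's reported position and the centre of a permitted zone, and raise an alert when it exceeds the zone's radius. A minimal sketch using the haversine great-circle formula (the coordinates and the 5 km radius are illustrative assumptions):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def boundary_alert(position, centre, radius_km):
    """True when the tracked person has strayed outside the permitted zone."""
    return haversine_km(*position, *centre) > radius_km

home = (-33.8688, 151.2093)  # illustrative zone centre (Sydney CBD)
inside = boundary_alert((-33.8700, 151.2100), home, 5.0)   # a few hundred metres away
outside = boundary_alert((-34.4278, 150.8931), home, 5.0)  # tens of kilometres away
```

In a deployed tracker this check would run server-side against periodic position reports, with the indoor gap the text notes filled by RFID or infrared readings rather than GPS fixes.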


Towards a Unique ID for Universal Personal Telecommunications

There are two developments enabling this capability: the first is the blurring between what is wireline and what is wireless; the second is that the new IP-based networks have turned the traditional notion of voice and data upside down. These two changes are setting the stage for a global platform that requires that an individual have a unique lifetime identifier (from the time of birth), either in the form of a number or an email address. Some of the early protocols such as SIP (Session Initiation Protocol) and H.323 rely on such an identifier. In 2000, Bell Canada launched the “Find-Me Number Service” which allowed phone calls to follow an individual (Bell, 2000). With time this service will extend to incorporate emails and other forms of messaging. We seem to be moving towards a course dictated by the notion of a “universal” Personal Communications Service (PCS) model that is hybrid in approach, utilizing the best of both worlds (both GPS and PCS) when required. The critical question is how interlinked this scheme will be with a potential unique lifetime identifier chip implant: a unique identifier for communications could be just as suitable for personal user authentication. Farrell (2001) predicts, “[o]ther uses could be to replace keys and ATM cards with implanted chips, making it possible for a single implant to unlock your house, start your car and give you money from a cash point.”
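A "find-me" service of the kind Bell Canada trialled is, at its core, a lookup from a lifetime identifier to the user's currently registered endpoints, in the spirit of SIP registration. The sketch below is a hypothetical illustration of that mapping (the identifiers and SIP addresses are invented), not Bell's or any carrier's implementation:

```python
# Hypothetical registry: one lifetime identifier maps to the user's
# reachable endpoints, in order of preference.
registry = {
    "alice@example.org": ["sip:alice@mobile.example.net",
                          "sip:alice@desk.example.net"],
}

def register(identifier, endpoint):
    """Register a new endpoint at highest preference for this identifier."""
    registry.setdefault(identifier, []).insert(0, endpoint)

def find_me(identifier):
    """Return the preferred current endpoint, or None if unregistered."""
    endpoints = registry.get(identifier)
    return endpoints[0] if endpoints else None

# A newly registered wristwatch becomes the preferred contact point.
register("alice@example.org", "sip:alice@watch.example.net")
```

The design point is that the lifetime identifier never changes while the endpoints behind it come and go, which is exactly what makes such an identifier attractive for authentication as well as for call routing.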


Case 4: Medical-Related Implants

Auto-ID devices, in particular implantable devices like smart microchips and passive and active RFID transponders, have found themselves being utilized in medical applications for purposes completely different from those for which they were originally invented. Apart from serving their designated purpose as automatic identification devices, they have become integral components of life-saving devices, in some instances contributing to the prevention or cure of particular disabilities, ailments or diseases. The devices have of course been further developed and adapted to match the requirements of the specific medical application. The evolutionary path indicates that, due to developments in auto-ID and microcircuitry in general, today’s medical implantable devices have far surpassed the “humble” pacemaker. In Banbury’s (1997) historical account of technological innovation in the pacemaker industry (1959-1990), there is a chapter dedicated to the incremental changes that have taken place in pacemaker technology since 1963. Her findings are useful in considering the future use of RFID for health-related applications (p. 54).

Many technological innovations that occurred in the pacemaker industry during the incremental era changed some aspect of the product and how it was used. These changes resulted from innovations in pacing technology or from innovations in input technologies, where the research and development could have been conducted by pacemaker firms or by firms and research institutions external to the pacemaker industry. Innovations in semiconductor technology and in the surgical procedures used to implant pacers are examples of external innovations that were later adapted to pacing technology. Innovations in electrode and lead technologies are examples of innovations developed by firms both inside and outside of the industry. However, developments in the pacing mode, the core technology, were introduced by pacing firms. The following innovation cases point to a future path for auto-ID as something more than an identification technology, a new electrophorus paradigm (with all of the ensuing social, religious, and political rejoinders) that is set to revolutionize the way humans consider technology: no longer as a separate entity but as a life-enhancing artifact carried within the body.


Biochips for Diagnosis and Smart Pills for Drug Delivery

It is not unlikely that biochips will be implanted at birth in the not-too-distant future. “They will be able to diagnose disease with precision, pre-determine a patient’s response to treatment and make individual patients aware of any pre-disposition to susceptibility” (Wales, 2001). With respect to treatment of illness, drug delivery will not require patients to swallow pills or take routine injections; instead, chemicals will be stored on a microprocessor and released as prescribed. The idea, known as “pharmacy-on-a-chip”, originated with scientists at the Massachusetts Institute of Technology (MIT) in 1999 (LoBaido, 2001, part 2, p. 2). The following extract is from The Lab (1999): “Doctors prescribing complicated courses of drugs may soon be able to implant microchips into patients to deliver timed drug doses directly into their bodies.”

Microchips being developed at Ohio State University (OSU) can be coated with a chemical substance such as pain medication, insulin, different treatments for heart disease, or gene therapies, allowing physicians to work at a more detailed level (Swissler, 2000, p. 1). The breakthroughs have major implications especially for diabetics, who require insulin at regular intervals throughout the day. Researchers at the University of Delaware are working on “smart” implantable insulin pumps that may relieve people with Type 1 diabetes (Bailey, 1999, p. 1). The delivery would be based on a mathematical model stored on a microchip, working in connection with glucose sensors that would instruct the chip when to release the insulin. The goal is for the model to simulate the activity of the pancreas so that the right dosage is delivered at the right time. The implantable chips are also being considered as a possible solution for clinical depression and/or for patients who are incapable of adhering to a prescribed regime. Gasson (1998, p. 1), with well-meaning intent, first named this capability “cyberdrugs”/“cybernarcotics”. Professor John Santini of MIT, knowing the possible implications of such an innovation, has nonetheless repeatedly outlined that the focus is strictly “therapeutic”, a better way to treat diseases (LoBaido, 2001, part 2, p. 2). University scientists are not the only ones researching biochips and smart pills; according to Wales (2001), production is quickly becoming big business as genomic-based medicine becomes the next buzzword. Some of the better-known players include Affymetrix, Motorola Life Sciences Codelink Division, Packard BioScience, Agilent, and Hitachi.
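The closed loop described above, glucose sensors instructing a chip when to release insulin, can be caricatured as a simple proportional controller. The sketch below is purely illustrative of the control idea; the target level, gain and safety cap are invented numbers, not medical parameters, and a real pancreas model would be far more sophisticated:

```python
# Hypothetical parameters for illustration only.
TARGET_MG_DL = 100.0   # assumed target glucose level (mg/dL)
GAIN = 0.01            # assumed insulin units per mg/dL above target
MAX_DOSE = 2.0         # assumed safety cap per sensing interval

def insulin_dose(sensed_glucose: float) -> float:
    """Dose proportional to how far glucose is above target; never negative,
    never above the safety cap."""
    error = sensed_glucose - TARGET_MG_DL
    return min(max(GAIN * error, 0.0), MAX_DOSE)

# Simulate a sequence of sensor readings over a day.
readings = [95, 120, 180, 250, 140, 100]
doses = [insulin_dose(g) for g in readings]
```

Even this toy version shows why the model matters: the chip only acts when the sensed value departs from the target, so the quality of the stored model directly determines dosing accuracy.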


Cochlear Implants: Helping the Deaf to Hear

In 2000, more than thirty-two thousand people worldwide already had cochlear implants (Manning, 2000, p. 7D). By 2006, that number had grown to about 77,500 for the Nucleus implant alone, which held about 70 per cent of the market share (Patrick, Busby, & Gibson, 2006) (figure). Cochlear implants can restore hearing to people who have severe hearing loss, a form of diagnosed deafness. Unlike a standard hearing aid, which works like an amplifier, the cochlear implant works like a microphone, changing sound into electronic signals. Signals are sent to the microchip implant via RF, stimulating nerve fibers in the inner ear. The brain then interprets the signals transmitted via the nerves as sound. For a closer look at the cochlear implant, see the Clarion and Nucleus product innovations. Another company, Canadian-based Epic Biosonics, teamed up with Professor Christofer Toumazou of Imperial College in 1999. Toumazou had made significant inroads into cutting the costs of cochlear implants and making them more comfortable for the individual. Most cochlear implants today require battery-powered packs worn on belts, with connecting wires, that generally are not aesthetically pleasing; Toumazou has made major leaps in changing this impracticality (Imperial College, 1999, p. 2).
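The sound-to-signal conversion described above can be pictured as a filterbank: each frequency band of the incoming sound drives one electrode along the cochlea. The toy sketch below (the band edges, sample rate and everything else are assumptions for illustration, not a real stimulation strategy) computes a per-band energy from an audio buffer, one value per notional electrode:

```python
import math

# Assumed frequency bands (Hz); each band would drive one electrode.
ELECTRODE_BANDS = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]

def band_energies(samples, sample_rate, bands=ELECTRODE_BANDS):
    """Crude DFT-based energy per band; each value would set the
    stimulation level of one electrode."""
    n = len(samples)
    energies = []
    for lo, hi in bands:
        total = 0.0
        for k in range(1, n // 2):
            freq = k * sample_rate / n
            if lo <= freq < hi:
                re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                total += (re * re + im * im) / n
        energies.append(total)
    return energies

# A 750 Hz tone should excite mainly the second band's electrode.
rate = 8000
tone = [math.sin(2 * math.pi * 750 * t / rate) for t in range(256)]
levels = band_energies(tone, rate)
```

The place-coding principle this mimics, different pitches exciting different positions along the cochlea, is what lets an electrode array stand in for damaged hair cells.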

For now, cochlear implants are being used to overcome deafness; tomorrow, however, they may be open to the wider public as a performance-enhancing technique (Cooper, 2008, pp. 10-11). Audiologist Steve Otto of the Auditory Brainstem Implant Project at the House Ear Institute in Los Angeles predicts that some day “implantable devices [will] interface microscopically with parts of the normal system that are still physiologically functional” (Stewart, 2000, p. 2). He is quoted as saying that this may equate to “ESP for everyone.” Otto’s prediction that implants will one day be used by persons who do not require them for remedial purposes has been supported by numerous other high-profile scientists. The major question is whether this is the ultimate trajectory of these technologies.


Retina Implants: On a Mission to Help the Blind to See

The hope is that retina implants will be as successful as cochlear implants in the future. Just as cochlear implants cannot be used for persons suffering from complete deafness, retina implants are not a solution for totally blind persons but rather for those suffering from age-related macular degeneration (AMD) and retinitis pigmentosa (RP). Retina implants have brought together medical researchers, electronics specialists and software designers to develop a system that can be implanted inside the eye (Ahlstrom, 2000, p. 1). A typical retina implant procedure is as follows:

“[s]urgeons make a pinpoint opening in the retina to inject fluid in order to lift a portion of the retina from the back of the eye, creating a pocket to accommodate the chip. The retina is resealed over the chip, and doctors inject air into the middle of the eye to force the retina back over the device and close the incisions” (Datamaster, 2001, p. 1).

Brothers Alan Chow and Vincent Chow, one an engineer, the other an ophthalmologist, developed the artificial silicon retina (ASR) and founded the company Optobionics Corp in 1990. This was a marriage between biology and engineering, first conceived over a Thanksgiving dinner. “In landmark surgeries at the University of Illinois at Chicago Medical Centre on June 28, the first artificial retinas made from silicon chips were implanted in the eyes of two blind patients who have lost almost all of their vision because of retinal disease.” In 1993 Branwyn (p. 3) reported that a team at the National Institutes of Health (NIH), led by Dr. Hambrecht, implanted a 38-electrode array into a blind female’s brain. It was reported that she saw simple light patterns and was able to make out crude letters. The following year the same procedure was conducted by another group on a blind male, resulting in the man seeing a black dot with a yellow ring around it. Joseph Rizzo of Harvard Medical School’s Massachusetts Eye and Ear Infirmary has cautioned that it is better to talk down the possibilities of the retina implant so as not to give false hopes. The professor himself has expressed that they are dealing with “science fiction stuff” and that there are no long-term guarantees that the technology will ever fully restore sight, although significant progress is being made by a number of research institutes (Wells, 1998, p. 5). Among these pioneers are researchers at The Johns Hopkins University Medical Centre in Baltimore, Maryland. Brooks (2001, pp. 4f) describes how the retina chip developed by the medical centre will work: “…a kind of miniature digital camera… is placed on the surface of the retina. The camera relays information about the light that hits it to a microchip implanted nearby. This chip then delivers a signal that is fed back to the retina, giving it a big kick that stimulates it into action. Then, as normal, a signal goes down the optic nerve and sight is at least partially restored.”


Tapping into the Heart and Brain

If it was possible, as far back as 1958, to successfully implant a two-transistor device the size of an ice hockey puck in the heart of a 43-year-old man (Nairne, 2000, p. 1), then what will become possible by 2058 is constrained by the imagination alone. Heart pacemakers are still being further developed today, but for the greater part, researchers are turning their attention to the possibilities of brain pacemakers. In the foreseeable future brain implants may help sufferers of Parkinson’s disease, paralysis, nervous system problems and speech impairment, and even cancer patients. While the research is still in its formative years and the obstacles are great because of the complexity of the brain, scientists are hopeful of major breakthroughs in the next twenty to fifty years.

The brain pacemaker endeavors are bringing together even more people from different disciplines, headed mainly by neurosurgeons. Using brain implants, electrical pulses can be sent directly to nerves via electrodes. The signals can be used to interrupt incoherent messages to nerves that cause uncontrollable movements or tremors; by tapping into the right nerves in the brain, particular reactions can be achieved. Using a technique that was first discovered, almost accidentally, in France in 1987, the following extract describes the procedure of “tapping into” the brain: “Rezai and a team of functional neurosurgeons, neurologists and nurses at the Cleveland Clinic Foundation in Ohio had spent the next few hours electronically eavesdropping on single cells in Joan’s brain attempting to pinpoint the precise trouble spot that caused a persistent, uncontrollable tremor in her right hand. Once confident they had found the spot, the doctors had guided the electrode itself deep into her brain, into a small duchy of nerve cells within the thalamus. The hope was that when [they] sent an electrical current to the electrode, in a technique known as deep-brain stimulation, her tremor would diminish, and perhaps disappear altogether” (Hall, 2001, p. 2). Companies such as Medtronic Incorporated (Minneapolis, Minnesota) have formed that specialize in brain pacemakers (Med, 2009). Medtronic’s Activa implant has been designed specifically for sufferers of Parkinson’s disease (Wells, 1998, p. 3).


Attempting to Overcome Paralysis

In more speculative research, surgeons believe that brain implants may be a solution for persons suffering from paralysis, such as spinal cord damage. In these instances the nerves in the legs are still theoretically “working”; it is just that they cannot make contact with the brain, which controls their movement. If signals could somehow be sent to the brain, bypassing the lesion point, paralyzed persons could conceivably regain at least part of their capability to move (Dobson, 2001, p. 2). In 2000, Reuters (pp. 1f) reported that a paralyzed Frenchman [Marc Merger] “took his first steps in 10 years after a revolutionary operation to restore nerve functions using a microchip implant… Merger walks by pressing buttons on a walking frame which acts as a remote control for the chip, sending impulses through fine wires to stimulate legs muscles…” It should be noted, however, that the system only works for paraplegics whose muscles remain alive despite damage to the nerves. Yet there are promising devices like the Bion that may one day be able to control muscle movement using RF commands (Smith, 2002, p. 2). Brooks (2001, p. 3) reports that researchers at the University of Illinois in Chicago have: “…invented a microcomputer system that sends pulses to a patient’s legs, causing the muscles to contract. Using a walker for balance, people paralyzed from the waist down can stand up from a sitting position and walk short distances… Another team, based in Europe… enabled a paraplegic to walk using a chip connected to fine wires in his legs.” These techniques are known as functional neuromuscular stimulation systems (Case Western Reserve University, 2007).


Granting a Voice to the Speech-impaired

Speech-impairment microchip implants work differently from cochlear and retina implants. Whereas in the latter two hearing and sight are restored, in implants for speech impairment the voice is not restored; instead, an outlet for communication is created, possibly with the aid of a voice synthesizer. At Emory University, neurosurgeon Roy E. Bakay and neuroscientist Phillip R. Kennedy were responsible for critical breakthroughs early in the research. In 1998, Versweyveld (p. 1) reported two successful implants of a neurotrophic electrode into the brains of a woman and a man who were suffering from amyotrophic lateral sclerosis (ALS) and brainstem stroke, respectively. In a remarkable process, Bakay and Kennedy managed to capture the patient’s thoughts on a computer screen through the movement of a cursor. “The computer chip is directly connected with the cortical nerve cells… The neural signals are transmitted to a receiver and connected to the computer in order to drive the cursor” (Versweyveld, 1998, p. 1). This procedure has major implications for brain-computer interaction (BCI), especially bionics. Bakay predicts that by 2010 prosthetic devices will grant immobile patients the ability to turn on the TV just by thinking about it, and by 2030 will grant severely disabled persons the ability to walk independently (Dominguez, 2000, p. 2; Adee, 2009, pp. 37-40). Despite the early signs that these procedures may offer long-term solutions for hundreds of thousands of people, some research scientists believe that tapping into the human brain is a long shot. The brain is commonly understood to be “wetware”, and plugging hardware into this “wetware” would seem to be a type mismatch, according to Steve Potter, a senior research fellow in biology working at the California Institute of Technology’s Biological Imaging Centre in Pasadena. Instead, Potter is pursuing the cranial route as a “digital gateway to the brain” (Stewart, 2000, p. 1). Others believe that it is impossible to figure out exactly what all the millions of neurons in the brain actually do, but it should be remembered that this is the same argument that was presented in the initial discussions about the Human Genome Project.
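The cursor-driving idea reported by Versweyveld can be caricatured as a decoder that maps a neural firing rate to cursor movement. The sketch below is hypothetical: the baseline rate, gain and the one-dimensional cursor are invented for illustration, not drawn from Bakay and Kennedy’s actual system:

```python
# Hypothetical decoder parameters, for illustration only.
BASELINE_HZ = 10.0   # assumed resting firing rate of the recorded cells
GAIN = 0.5           # assumed cursor pixels moved per Hz above baseline

def cursor_step(firing_rate_hz: float, position: float) -> float:
    """Move the cursor right when firing exceeds baseline, else hold still."""
    drive = max(firing_rate_hz - BASELINE_HZ, 0.0)
    return position + GAIN * drive

# A stream of decoded firing rates arriving from the implanted electrode.
pos = 0.0
for rate in [8.0, 12.0, 20.0, 30.0]:
    pos = cursor_step(rate, pos)
```

The patient learns to modulate the firing rate; the decoder does the rest, which is why even a one-dimensional mapping like this is enough to select letters on a screen.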

Up until now this chapter has focused on implants that are attempts at “orthopaedic replacements”: corrective in nature, required to repair a function that is either lying dormant or has failed altogether. Implants of the future, however, will attempt to add new “functionality” to native human capabilities, either through extensions or additions. Globally acclaimed scientists have pondered the ultimate trajectory of microchip implants. The literature is admittedly mixed in its viewpoints of what will and will not be possible in the future, but one of the lessons history has taught us is that if an idea has been conceived, the probability that it will come to fruition is high; perhaps not tomorrow, but eventually. Warwick’s Cyborg 2.0 project, for instance, intended to prove that two persons with respective implants could communicate sensation and movement by thoughts alone (Dobson, 2001, p. 1). The prediction is that terminals like telephones would eventually become obsolete if thought-to-thought communication became possible. Warwick describes this as “putting a plug into the nervous system” (Dobson, 2001, p. 1), allowing thoughts to be transferred not only to another person but to the Internet and other media. While Warwick’s Cyborg 2.0 may not have achieved its desired outcomes, it did show that a form of primitive Morse-code-style nervous-system-to-nervous-system communication is realizable (Green, 2002, p. 3). Warwick is bound to keep trying to achieve his project goals given his philosophical perspective. And if Warwick does not succeed, he will at least have left behind a legacy and enough stimuli for someone else to succeed in his place, even if, as Berry (1996) says, the prediction comes true 500 years from now.



It has been shown that auto-ID devices have a trajectory that is, in part, radically different from the intent of their inventors. Initially attached to non-living things and later adopted to be carried by humans, it now seems inevitable that the devices will become one with humans. Converging disciplines are making the realm of the “impossible” potentially “possible”. For the first time, the attribute of mobility is being linked to automatic identification and wearable computing components, and being applied to completely non-traditional areas of electronic commerce. Of course some resistance will be experienced initially, but as society continues to change, becoming more and more techno-centric, it will decide what auto-ID will be used for, even if that has little to do with what the technology was originally designed for (Branwyn, 1993, p. 6). Society continues to be increasingly dependent on the promise of technology, and it is difficult to see who, and how many, will resist the ultimate hope of “living for ever”. It is important to note here that the accomplishment is not in the rise of the computer/information age; it is, as Grier (2000, p. 83) puts it, in “the vision, it has maintained”.

When the ENIAC was publicly announced in 1946, no one could predict its ultimate impact. The founder of IBM famously forecast a worldwide market of five computers (Coughlin, 2000, p. 1)! The same could be said for brain implants today, but we should at least pay some respect to the instructive lessons of history. Perhaps what we really need to do is start afresh, considering the implications that such developments may have without discounting them outright as improbable science fiction or, at the other extreme, as a possible universal remedy. One reason this chapter depended so heavily on quoting research at the turn of the century was to dispel the myth that any type of dialogue is premature. It obviously is not. 2010 awaits; and much of what took place in the laboratories of universities and private enterprise almost a decade ago is well on its way to being commercialized. The force of the momentum is such that continual attempts will be made to go beyond that which has been achieved. It is not enough to begin discussing possible implications when the technology reaches the early adoption stage; by then the technology would have taken root, as it seems to have done already to some degree. Ultimately humanity will have a choice, and as Warwick has openly stated, hopefully it will be an individual choice: for those who would like to remain merely human and those who would like to continue to evolve.



Adee, S. (2009). The Revolution will be Prosthetized: DARPA's Prosthetic Arm Gives Amputees New Hope. IEEE Spectrum, 46(1), 37-40.

ADS. (2002a, 3 October). Digital Angel Corporation is awarded United States Patent for Next-generation, Enhanced-performance Implantable Microchip. Applied Digital Solutions, from

ADS. (2002b, 15 August). Responding to Growing Customer Inquiries and Media Interest, Applied Digital Solutions Highlights Anti-kidnapping Potential of Its “Personal Safeguard” Technologies. Applied Digital Solutions, from

ADS. (2002c, 7 October). VeriChip Corporation will Benefit from New United States Patent (#6,400,338) Awarded to Digital Angel Corporation- Manufacturer of VeriChip. Applied Digital Solutions, from

ADSX. (2003). Implantable Personal Verification Systems. Applied Digital Solutions   Retrieved 15 April 2004, from

Ahlstrom, D. (2000, 20 November). Microchip implant could offer new kind of vision. The Irish Times, from

Associated Press. (2002). Company gets okay to sell ID-only computer chip implant. The Detroit News   Retrieved 5 April, from

AT&T. (2003). Feature and Services User Guide. AT&T Wireless   Retrieved 15 April 2004, from

Bailey, R. (1999). Implantable insulin pumps. Biology, from

Banbury, C. M. (1997). Surviving Technological Innovation in the Pacemaker Industry 1959-1990. New York: Garland Publishing.

Bell. (2000). Bell Canada consumer services: Find-Me Number Service. Bell Canada.

Berry, A. (1996). The Next 500 Years: life in the coming millennium. New York: Gramercy Books.

Branwyn, G. (1993). The desire to be wired. Wired, September/October, 65.

Brooks, M. (2001, 14 November). The Cyborg cometh. Worldlink: The Magazine of the World Economic Forum, from

Burak, A., & Sharon, T. (2003, 5-10 April). Analysing Usage of Location Based Services. Paper presented at the CHI 2003: New Horizons, Florida, USA.

Case Western Reserve University. (2007). Study of an Implantable Functional Neuromuscular Stimulation System for Patients With Spinal Cord Injuries. ClinicalTrials.gov   Retrieved 4 February 2009, from

Cochrane, P. (1999). Tips For Time Travellers: visionary insights into new technology, life, and the future on the edge of technology. New York: McGraw-Hill.

Cooper, L. (1999). A run on Sterling- personal finance on the move. Paper presented at the The Third International Symposium on Wearable Computers.

Cooper, R. A. (2008). Quality of Life Technology: A Human-Centered and Holistic Design. IEEE Engineering in Medicine and Biology, 27(2), 10-11.

Coughlin, K. (2000, 4 January). The melding of man and machine. The Star-Ledger: The Newspaper for New Jersey, from

Datamaster. (2001). More tests of eye implants planned. BrainLand: The Neuroscience Information Centre, 1-2.

Dobson, R. (2001, 5th June). Professor to try to ‘control’ wife via chip implant., from

Dominguez, A. (2000). The brain as a remote control. CBS News, from

Eng, P. (2002, 25 February). I, Chip? Technology to meld chips into humans draws closer., from

Farrell, N. (2002, 30 July). Kids to be served up with chips. IT Week, from

Ferrero, J. L. (1998). Wearable computing: one man’s mission. IEEE Micro, 18(5), 87-88.

Gasson, M. (1998). Implants and bioengineering. Research- Implant, from

Gorlick, M. M. (1999). Electric suspenders: a fabric power bus and data network for wearable digital devices. The Third International Symposium on Wearable Computers, 114-121.

Green, D. (2002, 2 August 2002). Why I am not impressed with Professor Cyborg. BBC News, from

Grier, D. A. (2000). Anecdotes. IEEE Annals of the History of Computing, 82-85.

GVU. (2009). Aware Home - Homepage. A Residential Laboratory at Georgia Institute of Technology   Retrieved 4 February 2009, from

Hall, S. S. (2001, September 2001). Brain pacemakers. An MIT Enterprise Technology Review, from

Hinkers, X. (1995, June 25-29). Microdialysis system for continuous glucose monitoring. Paper presented at the The 8th International Conference on Solid-State Sensors and Actuators, and Eurosensors IX, Stockholm, Sweden.

Imperial College. (1999). Micro-electronics behind novel bionic ear. Imperial College News, from

Kargl, F., Lawrence, E., Fischer, M., & Lim, Y. Y. (2008). Security, Privacy and Legal Issues in Pervasive eHealth Monitoring Systems. Paper presented at the 7th International Conference on Mobile Business, Barcelona, Spain.

Kastor, E. (2000, 28 December). Smarty-Pants pants… and shirts. Washington Post, from

Kellan, A. (2000, 19 October). Wearable gadgets offer modern look for military., from

Krikelis, A. (1999). Location-dependent multimedia computing. IEEE Concurrency(April-June), 13-15.

LoBaido, A. C. (2001). Soldiers with microchips: British troops experiment with implanted, electronic dog tag., from

Mann, S. (1997). Eudaemonic computing (‘underwearables’). IEEE First International Symposium on Wearable Computers, 177-178.

Mann, S. (2009). Prof. Steve Mann. University of Toronto   Retrieved 30 January 2009, from

Manning, A. (2000, 2 May). Implants sounding better: smaller, faster units overcome ‘nerve deafness'. USA Today, p. 7D.

Martin, T. (2000). Issues in wearable computing for medical monitoring applications: a case study of a wearable ECG monitoring device. IEEE The Fourth International Symposium on Wearable Computers, 43-49.

McDonough, B. (17 April 2002). AT&T Wireless Pushes mLife with mMode. CIO Today   Retrieved 6 April 2004, from

McGinity, M. (2000). Body of the technology: It’s just a matter of time before a chip gets under your skin. Communications of the ACM, 43(9), 17-19.

McLuhan, E., & Zingrone, F. (1995). Essential McLuhan. USA: BasicBooks.

Med. (2009). Deep Brain Stimulation. Medtronic   Retrieved 4 February 2009, from

Michael, K. (2002). The rise of the wireless Internet. In E. Lawrence (Ed.), Internet Commerce: digital models for business (pp. 291-294, 296). Queensland: John Wiley and Sons.

Michael, K. (2004). Location-based Services - a Vehicle for IT&T Convergence. Paper presented at the Advances in E-Engineering and Digital Enterprises Technology - 1: Proceedings of the Fourth International Conference on e-Engineering and Digital Enterprise Technology (e-ENGDET), Leeds, United Kingdom.

Michael, K., & Masters, A. (2004, 18-21 July). Applications of human transponder implants in mobile commerce. Paper presented at the The 8th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, Florida.

Mieszkowski, K. (2000). Put that silicon where the sun don’t shine. Salon, from

Nairne, D. (2000). Building better people with chips and sensors., from

Patrick, J. F., Busby, P. A., & Gibson, P. J. (2006). The Development of the Nucleus® Freedom™ Cochlear Implant System. Trends in Amplification, 10(4), 175-200.

Pentland, S. (2009). Wearable Computing. MIT Media Lab   Retrieved 4 February 2009, from

Rao, B., & Minakakis, L. (2003). EVOLUTION of Mobile Location-Based Services. Communications of the ACM, 46(12), 61-65.

Repperger, D. W. (1996). Human tracking studies involving an actively powered, augmented exoskeleton. Paper presented at the IEEE Proceedings of the 1996 Fifteenth Southern Biomedical Engineering Conference.

Sawhney, N., & Schmandt, C. (1997). Nomadic Radio: A spatialized audio environment for wearable computing. Paper presented at the IEEE First International Symposium on Wearable Computers.

Scheeres, J. (2002, 15 February). Politician wants to ‘get chipped'. Wired News, from

Schiele, B. (2001). Sensory-augmented computing: wearing the museum’s guide. IEEE Micro, 44-52.

Siddle, J. (2000, 30 December). Clothes that do the thinking. from

Siewiorek, D. P. (1999). Wearable computing comes of age. IEEE Computer, 82-83.

Smith, D. (2002, 16 February). Chip implant signals a new kind of man. The Age, from

Spitzer, M. B. (1997). Eyeglass-based systems for wearable computing. IEEE First International Symposium on Wearable Computers, 48-51.

Starner, T. (2001a). The challenges of wearable computing: part 1. IEEE Micro(July-August), 44-52.

Starner, T. (2001b). The challenges of wearable computing: part 2. IEEE Micro(July-August), 54-67.

Stein, R. (1998). Development of a commercially successful wearable data collection system. IEEE Second International Symposium on Wearable Computers, 18-24.

Stewart, S. (2000). Neuromaster. Wired 8.02  February. from

Swissler, M. A. (2000, 8 September). Microchips to monitor meds. Wired, from

Sydänheimo, L. (1999). Wearable and ubiquitous computer aided service, maintenance and overhaul. Paper presented at the IEEE International Conference on Communications.

The Lab. (1999, 28 January). Microchip implants for drug delivery. ABC: News in Science, from

Toumaz. (2009). Sensium Life Pebble. Toumaz Technology   Retrieved 4 February 2009, from

Trackstick. (2009). Get a Trackstick.   Retrieved 4 February 2009, from

Versweyveld, L. (1998). Chip implants allow paralysed patients to communicate via the computer. Virtual Medical Worlds Monthly   Retrieved 13 October, from

Wales, E. (2001, 20 November). It’s a living chip. The Australian, p. 4.

Warwick, K. (1998). Professor Kevin Warwick. from

Wells, W. (1998). The chips are coming. Biotech Applied, from

Wherify. (2003, 5 January 2003). GPS Locator for children: Peace of mind for parents, cool for kids. Wherify Wireless Location Services, from

Wherify. (2004). Frequently Asked Questions: Wherify Wireless.   Retrieved 15 April 2004, from

Wilcox, J. (1999, 20 December). More details emerge on IBM’s wearable PC., from

Williams, M. (29 March 2008). NTT DoCoMo Close to Bio-Sensing Cell Phones. PC World   Retrieved 4 February 2009, from

Witt, S. (1999, 14 January). Is human chip implant wave of the future?, from

Zeimpekis, V., Giaglis, G., & Lekakos, G. (2003). A Taxonomy of Indoor and Outdoor Positioning Techniques for Mobile Location Services. Journal of ACM SIGecom Exchanges, 3(4), 19-27.