Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”


This chapter presents a set of scenarios involving the GoPro wearable Point of View (PoV) camera. The scenarios are meant to stimulate discussion about acceptable usage contexts, with a focus on security and privacy. The chapter provides a wide array of examples of how overt wearable technologies are perceived and how they might, or might not, be welcomed into society. While the scenario is based at the University of Wollongong campus in Australia, the main implications derived from the fictitious events are useful in drawing out the predicted pros and cons of the technology. The scenarios are interpreted and the main thematic issues are drawn out and discussed. An in-depth analysis is conducted of the social implications, the moral and ethical problems associated with such technology, and possible future developments with respect to wearable devices.


This chapter presents the existing, as well as the potential future, implications of wearable computing. Essentially, the chapter builds on the scenarios presented in an IEEE Consumer Electronics Magazine article entitled “Trialability on the Run” (Gokyer & Michael, 2015). In this chapter the scenarios are interpreted qualitatively using thick description, and the implications arising from these are discussed using thematic analysis. The scenario analysis is conducted through deconstruction, in order to extract the main themes and to grant the reader a deeper understanding of the possible future implications of the widespread use of wearable technology. First, each of the scenarios is analyzed to draw out the positive and the negative aspects of wearable cameras. Second, the possible future implications stemming from each scenario context are discussed under the following thematic areas: privacy, security, society, anonymity, vulnerability, trust and liberty. Third, direct evidence is provided using the insights of other research studies to support the conclusions reached and to identify plausible future implications of wearable technologies in particular use contexts in society at large.

The setting for the scenario is a closed-campus environment (a large Australian university). Specific contexts, such as a lecture theatre, restroom, café, bank, and library, are chosen to provide a breadth of use cases within which to analyze the respective social implications. The legal, regulatory, and policy-specific bounds of the study are taken from current laws, guidelines and normative behavior, and are used as signposts for what should, or should not, be acceptable practice. The outcomes illustrate that the use cases are not so easily interpretable, given the newness of the emerging technology of wearable computing, especially overt head-mounted cameras, which draw a great deal of attention from bystanders. Quite often the use of a head-mounted camera is opposed without qualified reasoning. “Are you recording me? Stop that please!” is a common response by individuals to audio-visual body-worn recording technology in the public space (Michael & Michael, 2013). Yet companies such as Google have been able to use fleets of cars to gather imagery of homes and streets with relatively little problem.

There are, indeed, laws that pertain to the misuse of surveillance devices without a warrant, to the unauthorized recording of someone else, whether in a public or private space, and to voyeuristic crimes such as upskirting. While there are laws, such as the Workplace Surveillance Act 2005 (NSW), asserting a set of rules for surveillance (watching from above), the law regarding sousveillance (watching from below) is less clear (Clarke, 2012). We found that, while public spaces like libraries and lecture theatres have clear policy guidelines to follow, the actual published policies, and the position taken by security staff, do not in fact negate the potential to indirectly record another. Several times, through informal questioning, we found the strong line “you cannot do that because we have a policy that says you are not allowed to record someone” to be unsubstantiated by real, enforceable, university-wide policies. Such shortcomings are now discussed in more detail against scenarios showing various sub-contexts of wearable technology in a closed-campus setting.


The term sousveillance has been defined by Steve Mann (2002) to denote a recording done from a portable device such as a head-mounted display (HMD) unit in which the wearer is a participant in the activity. In contrast to wall-mounted fixed cameras typically used for surveillance, portable devices allow inverse surveillance: recordings from the point of view of those being watched. More generally, point of view (POV) has its foundations in film, and usually depicts a scene through the eyes of a character. Body-worn video-recording technologies now mean that a wearer can shoot film from a first-person perspective of another subject or object in his or her immediate field of view (FOV), with or without a particular agenda.

During the initial rollout of Google Glass, explorers realized that recording other people with an optical HMD unit was not perceived as an acceptable practice, despite the fact that the recording was taking place in a public space. Google’s apparent blunder was to assume that the device, worn by 8,000 individuals, would go unnoticed, like shopping mall closed-circuit television (CCTV). Instead, what transpired was a mixed reaction by the public—some nonusers were curious and even thrilled at the possibilities claimed by the wearers of Google Glass, while some wearers were refused entry to premises, fined, verbally abused, or even physically assaulted by others in the FOV (see Levy, 2014).

Some citizens and consumers have claimed that law enforcement (if approved through the use of a warrant process) and shop owners have every right to surveil a given locale, dependent on the context of the situation. Surveilling a suspect who may have committed a violent crime or using CCTV as an antitheft mechanism is now commonly perceived as acceptable, but having a camera in your line of sight record you—even incidentally—as you mind your own business can be disturbing for even the most tolerant of people.

Wearers of these prototypes, or even of fully fledged commercial products like the Autographer, claim that they record everything around them as part of a need to lifelog or quantify themselves for reflection. Technology such as the Narrative Clip may not capture audio or video, but even still shots are enough to reveal someone else’s whereabouts, especially if they are innocently posted on Flickr, Instagram, or other social media. Many of these photographs also have embedded location and time-stamp metadata stored by default.
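The location metadata in such photographs is typically stored as EXIF GPS rationals: degrees, minutes, and seconds, each as a numerator/denominator pair, plus a hemisphere reference. As a minimal sketch of why even a single still shot can reveal a bystander’s whereabouts, the helper below (a hypothetical illustration, not taken from the chapter) converts those rationals into the decimal coordinates that mapping services consume:

```python
from fractions import Fraction

def exif_gps_to_decimal(dms, ref):
    """Convert EXIF-style GPS rationals to decimal degrees.

    dms: three (numerator, denominator) pairs for degrees, minutes, seconds.
    ref: hemisphere reference, one of 'N', 'S', 'E', 'W'.
    """
    degrees, minutes, seconds = (Fraction(n, d) for n, d in dms)
    value = float(degrees + minutes / 60 + seconds / 3600)
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ('S', 'W') else value

# Hypothetical EXIF values for a photo taken somewhere in the Wollongong area.
lat = exif_gps_to_decimal(((34, 1), (24, 1), (1800, 100)), 'S')
lon = exif_gps_to_decimal(((150, 1), (52, 1), (4500, 100)), 'E')
print(round(lat, 5), round(lon, 5))
```

Combined with the default time stamp, a handful of such photographs is enough to reconstruct a person’s movements, which is why stripping metadata before posting images publicly is commonly advised.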

A tourist might not have malicious intent by showing off in front of a landmark, but innocent bystanders captured in the photo could find themselves in a predicament given that the context may be entirely misleading.

Wearable and embedded cameras worn by any citizen carry significant and deep personal and societal implications. A photoborg is one who mounts a camera onto any aspect of the body to record the space around himself or herself (Michael & Michael, 2012). Photoborgs may feel entirely free, masters of their own destiny, even safe in the knowledge that their point of view is being recorded for prospective reuse. Indeed, the power that photoborgs have is clear when they wear the camera. It can be even more authoritative than the unrestricted overhead gaze of traditional CCTV, given that sousveillance usually happens at ground level. Although photoborgs may be recording for their own lifelog, they will inevitably capture other people in their field of view, and unless these fellow citizens also become photoborgs themselves, there is a power differential. Sousveillance carries with it huge socioethical, environmental, economic, political, and spiritual overtones. The narrative that informs sousveillance is more relevant than ever before due to the proliferation of new media.

Sousveillance grants citizens the ability to combat the powerful using their own evidentiary mechanism, but it also grants other citizens the ability to put on the guise of the powerful. The evidence emanating from cameras is endowed with obvious limitations, such as the potential for the impairment of the data through loss, manipulation, or misrepresentation (Michael, 2013). The pervasiveness of the camera that sees and hears everything can only be reconciled if we know the lifeworld of the wearer, the context of the event being captured, and how the data will be used by the stakeholder in command.

Sousveillance happens through the gaze of the one wearing the camera, just like a first-person shooter in a video game. In 2003, WIRED published an article (Shachtman, 2003) on the potential to lifelog everything about everyone. Shachtman wrote:

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable… The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read… All of this—and more—would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health… This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life.”

This goes to show how any discovery can be tailored toward any end. Lifelogging is meant to sustain the power of the individual through reflection and learning, to enable growth, maturity and development. Here, instead, it has been hijacked by the very same stakeholder against whom it was created to gain protection.

Sousveillance also drags into the equation innocent bystanders going about their everyday business who just wish to be left alone. When we asked wearable 2.0 pioneer Steve Mann in 2009 what one should do if bystanders in a recording in a public space questioned why they were being recorded without their explicit permission, he pointed us to his “request for deletion” (RFD) web page (Mann, n.d.). This is admittedly only a very small part of the solution and, for the most part, untenable. One just needs to view a few minutes of the Surveillance Camera Man Channel to understand that people generally do not wish to be filmed in someone else’s field of view. Some key questions include:

1. In what context has the footage been taken?

2. How will it be used?

3. To whom will the footage belong?

4. How will the footage taken be validated and stored?

Trialability on the Run

In this section, plausible scenarios of the use of wearable cameras in a closed-campus setting are presented and analyzed in the story “Trialability on the Run”. Although the scenarios are not based directly on primary sources of evidence, they do provide conflicting perspectives on the pros and cons of wearables. As companies engage in ever-shorter market trialing of their products, the scenarios demonstrate what can go wrong with an approach that effectively says: “Let’s unleash the product now and worry about repercussions later; they’ll iron themselves out eventually—our job is solely to worry about engineering.” The pitfalls of such an approach are the unexpected and asymmetric consequences that ensue. For instance, someone wearing a camera breaches my privacy and, although the recorded evidence has affected no one else, my life is affected adversely. Laws, and organizational policies especially, need to respond quickly as advances in technologies emerge.

“Trialability on the Run” is a “day in the life” scenario that contains nine parts, set on a closed campus in southern New South Wales. The main characters are Anthony, the owner and wearer of the head-mounted GoPro (an overt audio-visual recording device), and his girlfriend Sophie. The narrator follows and observes the pair as they work their way around the campus in various sub-contexts, coming into contact with academic staff, strangers, acquaintances, cashiers, banking personnel, librarians, fellow university students and finally security personnel. Anthony takes the perspective that his head-mounted GoPro is no different from the mounted security surveillance cameras on lampposts and building walls, from the in-lecture-theatre recordings captured by the Echo360 (Echo, 2016), or even from handheld smartphone cameras. He is bewildered that he draws so much attention to himself as the photoborg camera wearer, since he perceives he is performing exactly the same function as the other cameras on campus and has only the intent of capturing his own lifelog. Although he is not doing anything wrong, Anthony looks different and stands out as a result (Surveillance Camera Man, 2015). His girlfriend, Sophie, is not convinced by Anthony’s blasé attitude and tries to press a counter-argument that Anthony’s practice is unacceptable in society.

Scenario 1: The Lecture


In this scenario, the main character, Anthony, arrived at the lecture theatre, in which the lesson had already begun, intending to record the lecture instead of taking notes. Being slightly late, he decided to sit in the very front row. All the students, and eventually the lecturer, saw the head-mounted camera he was wearing. The lecturer continued his lecture without showing any emotion. Some students giggled at the spectacle and others were very surprised by what they observed, as it was probably the first time they had seen someone wearing a camera to record a lecture. The students were generally not bothered by the head-mounted recording device in full view, as it was focused on the lecture material and the lecturer, so proceedings continued as they otherwise would have, had the body-worn camera not been present. Students are very used to surveillance cameras on campus; this was just another camera as far as they were concerned, and besides, no one objected: they were too busy taking notes and listening to instruction about the structure and form of the final examination in their engineering coursework.

Wearable User Rights and Intellectual Property

In some of the lecture theatres on university campuses, there are motion-sensor-based video cameras that make full audio-visual recordings of the lectures (Echo, 2016). Lecturers choose to record their lectures in this manner to provide a record of the educational content covered for students, especially for those who were unable to attend the lecture, those for whom English is a second language, or those who like to listen to lecture content as a form of revision. In this regard, there are no policies in place to keep students from making audio-visual recordings of the lecture in the lecture theatres.

Lecture theatres are considered public spaces, and many universities allow students to attend lectures whether or not they are enrolled in that particular course or subject. Anyone from the public could walk into lectures and listen, as there is no keycard access. Similar to the centrally organized Echo360 audio-visual recordings, Anthony is taping the lecture himself, and he does not see any problem with distributing the recording to classmates if someone asks for it to study for the final examination. After all, everyone owns a smartphone, and anyone can record the lecture with the camera on their smartphone or tablet device.

This scenario raises a number of questions that need to be addressed first, such as: “What is the difference between making a recording with a smartphone and with a head-mounted camera?” or, “Does it only start being a problem when the recording device is overt and can be seen?” If one juxtaposes a surveillance camera covertly integrated into a light fixture with an overt head-mounted camera, why should the two devices elicit such different responses from bystanders?

These questions do not, however, address the fact that an open discussion is required on whether or not we are ready to see a great number of these sousveillers in our everyday life, and, if we are not, what we are prepared to do about it. Mann (2005) predicted that the use of sousveillance would grow greatly once sousveillance devices acquired non-traditional uses such as making phone calls, taking pictures, and accessing the Internet. This emergence produces a grey area, generating the requirement for laws, regulations and policies to be amended or created to address specific uses of sousveillance devices in different environments and contexts. Clarke (2014) identifies a range of relevant (Australian) laws to inform policy discussion and notes the inadequacy of current regulation in the face of rapidly emerging technology.

Scenario 2: The Restroom


In the restroom scenario, Anthony walked into a public restroom after his lecture, forgetting that his head-mounted camera was still on and recording. While unintentionally recording, Anthony received different reactions from the people present in the restroom, all of whom saw the camera and suspected some foul play. The first person, who was leaving as Anthony was entering the restroom, did not seem to care; another tried to ignore Anthony and left as soon as he was finished. The last person became disturbed by the fact that he was being recorded in what he obviously deemed to be a private place. Later that day, when Anthony searched the tape for the lecture recordings, he got a sense of wrongdoing after realizing that, in the restroom, he had accidentally left the camera on in record mode. He was surprised, in hindsight, that he did not get any major reactions, such as an individual openly expressing discontent, nor any specific questions or pronouncements of discomfort. If it were not for the facial expressions to which Anthony was privy, he would not have been able to tell that anybody was upset, as there was no verbal cue or physical retaliation. Of course, the innocent bystanders, going about their business, would not have been able to assume that the camera was indeed rolling.

Citizen Privacy, Voyeurism, and a Process of Desensitization

Restrooms, change rooms, and shower blocks on campus are open to the public, but they are also considered private spaces given that people are engaged in private activities (e.g. showering), and are, at times, not fully clothed. The natural corollary, then, would lead to the expectation that some degree of privacy should be granted. Can anyone overtly walk into a public toilet sporting a camera and record you while you are trying to, for modesty’s sake, do what should only be done in the restroom? Is the body-worn technology becoming so ubiquitous that no one even says a word about something that they can clearly see is ethically or morally wrong? Steve Mann has argued that surveillance cameras in the restroom are an invasion of privacy more abhorrent than body-worn cameras owned by everyday people. The direct approachability of the photoborg differs from an impersonal CCTV.

There is a long discussion to be had on personal security. For instance, will we all, one day, be carrying such devices as we seek to lifelog our entire histories, or to acquire an alibi for our whereabouts should we be accused of a given crime, as portrayed on screen in the drama “The Entire History of You” (Armstrong and Welsh, 2011)? It is very common to find signs prohibiting the use of mobile phones in leisure centers, swimming pools and the like. There remains, however, much to be argued around safety-versus-privacy trade-offs, and whether it is acceptable practice to rely on CCTV in public spaces.

University campuses are bound by a number of laws, at federal or state level, including (in this case) the Privacy Act 1988 (Cth), the Surveillance Devices Act 2007 (NSW), and the Workplace Surveillance Act 2005 (NSW). This scenario points out that, even though there cannot possibly be surveillance cameras in restrooms or change rooms, the Surveillance Devices Act 2007 (NSW) does not specify provisions about sousveillance in those public/private spaces. In Clarke’s (2014) assessment of the Crimes Act 1900 (NSW), voyeurism offence provisions exist relating to photographs. They pertain to photographs that are of a sexual and voyeuristic nature, usually showing somebody’s private parts. These photographs are also taken without the consent of the individual and/or taken in places where a person would reasonably expect to be afforded privacy (toilets, showers, change rooms, etc.). When a person claims to have had his or her privacy breached, however, exceptions to this rule apply if s/he is a willing participant in the activity, or if circumstances indicate that the persons involved did not really care if they were seen by onlookers (Clarke, 2014). It is even less likely to be illegal if the act was conducted in a private place, but with doors open in full view (Clarke, 2014). Thus, the law represents controls over a narrow range of abuses (Clarke, 2014), and, unless they find themselves in a predicament and seek further advice, the general populace is unaware that the law does not protect them entirely and depends on the context.

Scenario 3: The Corridor


This scenario depicts a conversation with Sophie (Anthony’s girlfriend and fellow undergraduate coursework student). In the corridor, Anthony bumps into their mutual friend, Oxford, as they vacate the lecture theatre. Throughout the conversation, Anthony demonstrated confidence in his appearance. He believed that wearing a head-mounted camera was not a problem and, consequently, he did not think he was doing anything wrong. Sophie, on the other hand, questioned whether body-worn cameras should be used without notifying the people in their vicinity. Oxford, an international student, became concerned about the possible future uses of the recording that featured him. His main concern was that he did not want the footage to be made publicly available, given how he looked and the clothing he was wearing. Although Oxford had no objection to Anthony keeping the footage for his personal archives, he did not wish for it to be splattered all over social media.

Trust, Disproportionality, and Requests for Deletion

The two student perspectives of “recording” a lifelog are juxtaposed. Anthony is indifferent, as he feels he is taping “his” life as it happens around him through time. Oxford, on the other hand, believes he has a right to his own image, and that includes video (Branscomb, 1994). Here we see a power-and-control dynamic occurring. The power and control lie with the photoborg, who has the ability to record, store and share the information gathered. The bystander, on the other hand, is powerless and at the mercy of the photoborg, unless he/she voices otherwise explicitly. In addition, bystanders may not be so concerned with an actual live recording for personal archives, but certainly are concerned about public viewing. Often lifelogs are streamed in real-time or near real-time, which does not grant the bystander confidence with respect to acceptable use cases.

In the scenario, Sophie poses a question to those who are being incidentally recorded by Anthony’s GoPro to see whether there is an expectation among her peers to get individual consent prior to a recording taking place. Oxford, the mutual acquaintance of the protagonists, believes that consent is paramount in this process. This raises a pertinent question: what about the practice of lifelogging? Lifeloggers could not possibly have the consent of every single person they encounter in a daily journey. Is lifelogging acceptable insofar as lifeloggers choose not to share recordings online or anywhere public? Mann (2005) argues that a person wishing to do lifelong sousveillance deserves certain legal protections against others who might attempt to disrupt continuity of evidence, say for example, while going through immigration. On the other hand, Harfield (2014) extends the physical conception of a private space in considering the extent to which an individual can expect to be private in a public space, defining audio-visual recording of a subject without their consent in public spaces as a moral wrong and seeing the act of sousveillance as a moral intrusion against personal privacy.

In the scenario, Sophie pointed out that if someone wanted to record another individual around them, they could easily do so covertly using everyday objects with embedded covert cameras, such as a pen, a key fob, a handbag or even their mobile phone. Sophie was able to put into perspective the various gazes from security cameras when compared to sousveillance. The very thought of the mass surveillance she was under every moment provided a sobering counterbalance, allowing her to experience tolerance for the practice of sousveillance. Yet for Oxford, the security cameras mounted on the walls and ceilings of the Communications Building provided a level of safety for international students. Oxford clearly justified “security cameras for security reasons”, but could not justify additional “in your face” cameras. Oxford did not wish to come across a sousveiller, because the recordings could be made publicly available on the Internet without his knowledge. Further, a clear motive for the recordings had not been conveyed by the camera holder (Michael et al., 2014).

Between 1994 and 1996, Steve Mann conducted a Wearable Wireless Webcam experiment to visually record and continuously stream live video from his wearable computer to the World Wide Web. Operating 24 hours a day (on and off), this had the effective purpose of capturing and archiving day-to-day living from the person’s own perspective (Mann, 2005). Mann has argued that in the future, devices that captured lifelong memories and shared them in real-time would be commonplace and worn continuously (Mann, 2013).

It is true that almost everywhere we go in our daily lives someone, somewhere, is watching. But in the workplace especially, where there is intent to watch an employee, the law states that individuals must be notified that they are being watched (Australasian Legal Information Institute, 2015). When it comes to sousveillance, will this be the case as well? In Australia, the use, recording, communication or publication of recorded information from a surveillance device under a warrant is protected data and cannot be openly shared, according to the Surveillance Devices Act 2004 (Cth). In the near future, when we are making a recording with an overt device, a prospective sousveillance law might posit: “You can see that I am recording you, but this is for personal use only, and as long as I do not share this video with someone you cannot do or say anything to stop me.” Mann (2005) claims that sousveillance, unlike surveillance, will require, and receive, strong legal support through dedicated frameworks for its protection, as well as for its limitation (Mann & Wassell, 2013).

A person can listen to, or visually monitor, a private activity if s/he is a participant in the activity (Australasian Legal Information Institute, 2014). However, the Surveillance Devices Act 1998 (WA) forbids a person to install, use or maintain an optical surveillance device or a listening device to record a private activity, whether the person is a party to the activity or not. The penalties do not apply to the use of an optical surveillance device or listening device that results in the unintentional recording or observation of a private activity (Surveillance Devices Act 1998 (WA)). Clarke (2014) combines optical surveillance device regulation with the regulation for listening devices and concludes that a person can listen to conversations if they are a participant in the activity, but cannot make audio or visual recordings. The applications of the law cover only a limited range of situations, and conditions may apply for prosecutions.

Scenario 4: Ordering at the Café


Anthony and Sophie approached the counter of a café to place their orders, and Anthony soon found himself engaged in a conversation with the attendants at the serving area about the camera he was wearing. He asked the attendants how they felt about being filmed. The male attendant said he did not like it very much, and the female barista said she would not mind being filmed. The manager did not comment on any aspect of the head-mounted GoPro recording taking place, but he did make some derogatory comments about Anthony’s behavior to Sophie. The male attendant became disturbed by the idea of someone recording him while he was at work, and he tried to direct Anthony to the manager, knowing that the manager would not like it either, and that it would disturb him even more. Conversely, the female barista was far from upset about the impromptu recording, acting as if she were on a reality TV show, and taken by the fact that someone seemed to show some interest in her, breaking the normal daily routine.

Exhibitionism, Hesitation, and Unease

People tend to care a great deal about being watched or scrutinized, and this is reflected in their social behaviors and choices, which are altered as a result without them even realizing (Nettle et al., 2012). Thus, some people who generally do not like being recorded (like the male attendant) might be subconsciously rejecting the idea of having to change their behaviors. Others, like the manager, simply ignore the existence of the device, and others still, like the female attendant, feel entirely comfortable in front of a camera, even playing up and portraying themselves as someone “they want to be seen as”.

Anthony did not understand why people found the camera on his head disturbing, nor their additional concerns about being recorded. In certain cases where people seemed to show particular interest, Anthony decided to engage others about how they felt about being filmed and tried to understand their reactions to constant ground-level surveillance. Anthony himself had not been educated with respect to campus policy or the laws pertaining to audio-visual recording in a public space. Anthony was unaware that, in Australia, surveillance device legislation differs greatly between states but that, broadly, audio and/or visual recording of a private activity is likely to be illegal whatever the context (Clarke, 2014). An activity is, however, only considered to be “private” when it is taking place inside a building, and in the state of New South Wales this includes vehicles. People, however, are generally unaware that prohibitions may not apply if the activity is happening outside a building, regardless of context (Clarke, 2014).

If people were to see someone wearing a head-mounted camera as they were going about their daily routine, it would doubtless gain their attention, as it is presently an unusual occurrence. When we leave our homes, we do not expect pedestrians to be wearing head-mounted cameras, nor (although increasingly we know we are under surveillance in taxis, buses, trains, and other forms of public transport) do we expect bus drivers, our teachers, fellow students, family or friends to be wearing body-worn recording devices. Having said that, policing has had a substantial impact on raising citizen awareness of body-worn audio-visual recording devices. We now have mobile cameras on cars, on highway patrol police officers’ helmets, and even on the lapels of particular police officers on foot. While this has helped to decrease the number of unfounded citizen complaints against law enforcement personnel on duty, it is also seen as a retaliatory strategy against everyday citizens, who now have a smartphone video recorder at hand 24/7.

Although the average citizen does not always feel empowered to question another's authority to record, everyone has the right to question the intended purpose of the video being taken of him or her, and how or where it will be shared. In this scenario, does Anthony have the right to record others as he pleases without their knowledge, either of him making the recording, or of the places where that recording might end up? Would Anthony get the same reaction if he were making the recordings with his smartphone? Owners of smartphones would be hard-pressed to say that they have never taken visual recordings of an activity with bystanders in the background whom they do not know and from whom they have not gained consent. Such examples include children's sporting events, wedding receptions, school events, attractions and points of interest, and much more. Most photoborgs use the line of argument that says: “How is recording with a GoPro instead of a smartphone any different?” Of course, individuals who object to being subjected to point of view surveillance (PoVS) have potential avenues of protection (including trespass against property, trespass against the person, stalking, harassment, etc.), but these protections are limited in their application (Clarke, 2014). Even so, the person using PoVS technology has access to far more protection than the person they are monitoring, even if they are doing so in an unreasonable manner (Clarke, 2014).

Scenario 5: Finding a Table at the Café


In this scenario, patrons at an on-campus café vacated their chairs almost immediately after Anthony and Sophie sat down at the large table. Anthony and Sophie both realized the camera was driving people away from them. Sophie insisted at that point that Anthony at least stop recording if he was unwilling to take off the device itself. After Cygneta and Klara (Sophie's acquaintances) had joined them at the table, Anthony, interested in individual reactions and trying to prove a point to Sophie, asked Klara how she felt about being filmed. He received the responses that he had expected. Klara did not like being filmed one bit by something worn on someone's head. Moreover, despite being a marketing student, she had not even heard of Google Glass when Anthony tried to share his perspective on the issue by bringing up the technology in conversation. This fell on deaf ears, he thought, despite Cygneta's view that visual data might well be the future of marketing strategies. Anthony tried to make the argument that if a technology like Google Glass were to become prevalent on campus in a couple of years, they would not have any say about being recorded by a stranger. Sophie supported Anthony from a factual standpoint, reinforcing that there were no laws in Australia prohibiting video recordings in public. That is, across the States and Territories of Australia, visual surveillance in public places is not subject to general prohibitions, except when the person(s) would reasonably expect their actions to be private because they were engaging in a private act (NSW), or if the person(s) being recorded had a strong case for expecting they would not be recorded (Victoria, WA, NT); in SA, Tasmania and the ACT, legislation permits the recording of other people subject to various provisos (Clarke, 2014).

The reactions of Klara and Cygneta got Sophie thinking about gender, and whether men were more likely than women to be enthralled by technological devices. She could see this happening with drones and wearable technologies like smartwatches, and came to the realization that the GoPro was no different. Some male surfers (including Anthony) and skateboard riders had well and truly begun to use their GoPros to film themselves doing stunts, then sharing these on Instagram. She reflected on whether or not people, in general, would begin to live a life of “virtual replays” as opposed to living in the moment. When reality becomes hard to handle, people tend to escape to a virtual world where they create avatars and act “freely”, postponing the hardships of real life; some may even become addicted to this as a more exciting lifestyle. These issues are further explored in the following popular articles: Ghorayshi (2014), Kotler (2014) and Lagorio (2006).

Novelty and Market Potential

The patrons at the first table appeared to find the situation awkward, and they rectified the problem by removing themselves from the vicinity of Anthony and his camera. Klara did not possess adequate knowledge about emerging wearable technology, and she claimed she would not use it even if it were readily available. But once wearable computers like Google Glass permeated the consumer market, Cygneta, who seemed to ‘keep up with the Joneses’, said she would likely start using one at some point, despite Klara's apparent resistance. While smartphones were a new technology in the 1990s, currently close to one third of the world's population uses them regularly, with 70% projected by 2020 (Ericsson, 2015). One reason this number is not bigger is that low-income countries have widespread rural populations and vast terrains; the numbers are expected to rise massively in emerging markets. By comparison, wearable computers are essentially advanced versions of existing technology, and thus their uptake will likely be seamless and even quicker. As with smartphone adoption, as long as they are affordable, wearable computers such as digital glass and smartwatches can be expected to be used as much as, or even more than, smartphones, given they are always attached to the body or within arm's reach.

Scenario 6: A Visit to the Bank


When Sophie and Anthony visited the bank, Anthony sat down as Sophie asked for assistance from one of the attendants. Even though Anthony was not the one who needed help, he thought the people working at the bank seemed friendlier than usual towards him. He was asked, in fact, if he wanted assistance with anything, and when he confirmed he did not, no further questioning by the bank manager was conducted. He thought it strange that everyone was so casual about his camera, when everyone else that day had made him feel like a criminal. Again, he was acutely aware that he was in full view of the bank's surveillance camera, but questioned whether anyone was really watching anyway. The couple later queued up at the ATM, where Anthony mentioned that, had he had some disingenuous intentions, he could so easily have been filming people and acquiring their PINs. No one had even attempted to cover up their PIN entry, even though there were now signs near the keypad to “cover up”. This entire situation made Sophie feel very uncomfortable and slightly irritated by Anthony. It was, after all, a criminal act to shoulder surf someone's PIN, but to have it on film to replay later was outrageous. It seemed to her that, no matter how much advice people get about protecting their identity or credit from fraud, they just don't seem to pay attention. To Anthony's credit, he too understood the severity of the situation and admittedly felt uncomfortable about the position in which, with no malicious intent, he had accidentally found himself.


This scenario illustrates that people in the workplace who are under surveillance are more likely to help clients. Anthony's camera got immediate attention and a forthcoming request: “Can I help you?” When individuals become publicly self-aware that they are being filmed, their propensity to help others generally increases. The feeling of public self-awareness created by the presence of a camera triggers a change in behavior, in accordance with a pattern that signifies concern over any damage that could be done to one's reputation (Van Bommel et al., 2012).

Anthony also could not keep himself from questioning the security measures that the bank should be applying, given the increased incidence of cheap embedded cameras in both stationary devices and mobile phones. When queuing in front of the ATM for Sophie's cash withdrawal, Anthony noticed that he was unintentionally recording something that could easily be used for criminal activities, and he started to see the possible security breaches that would come with emerging wearables. For example, video evidence can be zoomed in to reveal private data. While some believe that personal body-worn recording devices protect the security of the individual wearer from mass surveillance, rectifying some of the power imbalances, in this instance the recording devices diminished security by their very presence. It is a paradox, and while it all comes down to the individual ethics of the photoborg, it will not take long for fraudsters to employ such measures.

Scenario 7: In the Library


After the ATM incident, Anthony began to consider more deeply the implications of filming others in a variety of contexts. It was the very first time he had begun to place himself in other people's shoes and see things from their perspective. In doing so, he became more cautious in the library setting. He avoided looking at the computer screens of other users around him, as he could otherwise record what activities they were engaged in online, what they were searching for in their Internet browser, and more. He attracted the attention of certain people he came across in the library, because obviously he looked different, even weird. For the first time that day, he felt like he was going to get into serious trouble when the librarian questioned him about his practice. The librarian claimed that Anthony had absolutely no right to record other people without their permission, as it was against campus policies. Anthony took this seriously, but he was pretty sure there was no policy against using a GoPro on campus. When Anthony asked the librarian to refer him to the exact policy and the university web link, she could not provide one, despite having clearly stated that his actions were a breach of university rules. She did say, however, that she would be calling library management to convey her suspicion that someone in the library was in breach of university policy. While this conversation was happening, things not only became less clear for Anthony, but he could sense that matters were escalating in seriousness and that he was about to get into significant trouble.

Campus Policies, Guidelines, and Normative Expectations

The questions raised in this scenario are not only about privacy, but also about the University's willingness to accept certain things as permitted behavior on campus property. Inappropriate filming of other individuals was at the time a hot news item, as many young women had been victims of voyeuristic behavior, such as upskirting with mobile phone cameras, and more. Yet many universities simply rely on their “Student Conduct Rules” for support outside criminal intent. For example, a typical student conduct notice states that students have a responsibility to conduct themselves in accordance with:

1. Campus Access and Order Rules,

2. IT Acceptable Use Policy, and

3. Library Code of Conduct.

However, none of these policies typically provide clear guidelines on audiovisual recordings by students.

Campus policies here are approved by the University Council, and the various policies address only general surveillance considerations about audio-visual recordings. The Campus Access and Order Rules specify that University grounds are private property (University of Wollongong, 2014), and under the common law regarding real property, the lawful occupiers of land have the general right to prevent others from being on, or doing acts on, their land, even if an area on the land is freely accessible to the public (Clarke, 2014). It is Clarke's latter emphasis which summarises exactly the context of a typical university setting: a closed campus that is nonetheless open to the public.

The pace of technological change poses challenges for the law, and deficiencies in regulatory frameworks for Point of View Surveillance exist in many jurisdictions in Australia (Clarke, 2014). Australian universities as organizations are also bound (in this case) by the Workplace Surveillance Act 2005 (NSW) and the Privacy and Personal Information Protection Act 1998 (NSW) (Australasian Legal Information Institute, 2016), which again do nothing to specify what is permitted in terms of rules or policies about sousveillance in actions committed by a student on campus grounds.

Scenario 8: Security on Campus


Security arrived at the scene of the incident and escorted Anthony to the security office. By this stage Anthony believed that this might well become a police matter. Security did not wish to ask Anthony questions about his filming on campus but ostensibly wanted to check whether or not Anthony's GoPro had been stolen. There had been a spate of car park thefts, and it was for this that Anthony was being investigated. Anthony then thought it appropriate to ask them several questions about the recordings he had made, to which security mentioned the Surveillance Devices Act 2007 (NSW) and how they had to put up signage to warn people about the cameras and the fact that activity was being recorded. Additionally, Anthony was told that CCTV footage could be shared only with the police, and that cameras on campus were never directed at people but toward the roadways and footpaths. When Anthony reminded security about Google Glass and asked if they had a plan for when Glass would be used on campus, the security manager replied that everything would be thought about when the time arrived. Anthony left to attend a lecture for which he was once again late.

Security Breaches, the Law, and Enforcement

Anthony was not satisfied with the response of the security manager about campus rules pertaining to the filming of others. While Anthony felt very uncomfortable about the footage he had on his camera, he still did not feel that the university's security office provided adequate guidance on acceptable use. The security manager had tended to skirt around providing a direct response to Anthony, probably because he did not have any concrete answers. First, the manager brought up the Video Piracy Policy and then the University's IT Acceptable Use Policy. Anthony felt that those policies had nothing to do with him. First, he was sure he was not conducting video piracy in any way, and second, he was not using the university's IT services to share his films with others or to exceed his Internet quota. The manager attempted to connect the policies to Anthony's situation by saying that the recordings might contain copyrighted material, and that they should never be transferred through the university's IT infrastructure (e.g. network). He also shared a newspaper article with Anthony that was somehow supposed to act as a warning, but it just didn't make sense to Anthony how any of it was connected to the issue at hand.

Scenario 9: Sophie’s Lecture


Arriving at the lecture theater after the lecture had already begun, Anthony and Sophie opened the door and the lecturer noticed the camera mounted on Anthony's head. The lecturer immediately became infuriated, asking Anthony to remove the camera and to leave his classroom. Even after Anthony left the class, the lecturer still thought he might be being recorded through the lecture theatre's part-glass door, and so he asked Anthony to leave the corridor as well. The entire time, the GoPro had not been recording at all. The incident became heated, despite Anthony fully accepting the academic's perspective. It was the very last thing that Anthony had expected by that point in the day, and it was absolutely devastating to him.

Ethics, Power, Inequality, and Non-Participation

Every student at an Australian university has academic freedom and is welcome to attend lectures whether or not they are enrolled in the subject. However, it is within the academic instructor's right to privacy to prevent a student from recording his or her class. A lecturer's classes are considered “teaching material”, and the lecturer owns the intellectual property in his or her teaching material (University of Wollongong, 2014). In keeping with the aforementioned statements, any recording of lectures should be carried out only after consulting the instructor. Some lecturers do not even like the idea of Echo 360, as it can be used for much more than simply recording a lecture for reuse by students. Lecture recordings could be used to audit staff: to check whether they are doing their job properly, whether they display innovative teaching techniques, whether their knowledge of the content is poor or good, and whether they stick to time or take early marks. Some faculty members also consider the classroom to be a sacred meeting place between them and their students and would never wish for a camera to invade this intimate gathering. Cameras and recordings would indeed stifle a faculty member's or a student's right to freedom of speech if the video were ever to go public. It would also mean that some students would simply not contribute anything in the classroom if they knew they were being taped, or that someone might scrutinize their perspectives and opinions on controversial matters.

Possible Future Implications Drawn from Scenarios

In the scenarios, in almost every instance, the overt nature of Anthony’s wearable recording device, given it was head-mounted, elicited an instantaneous response. Others displayed a variety of responses and attitudes including that:

1. They liked it,

2. They did not mind it,

3. They were indifferent about it,

4. They did not like it and finally,

5. They were disturbed by it.

Regardless of which category they belonged to, however, they did not explicitly voice their feelings to Anthony, although body language and facial expressions spoke volumes. In this closed campus scenario, the majority of people who came into contact with Anthony fell under the first two categories. It also seems clear that some contexts were especially delicate, for instance, taking the camera (while still recording) into the restroom, an obviously private amenity. It is likely that individuals in the restroom would have had no problem with the GoPro filming outside the restroom setting.

Research into future technologies and their respective social implications is urgent, since many emerging technologies are here right now. Whatever the human mind can conjure is liable to be designed, developed, and implemented. The main concern is how we choose to deal with it. In this final section, issues drawn from the scenarios are speculatively extended to project future implications when wearable computing has become more ubiquitous in society.

Privacy, Security, and Trust

Privacy experts claim that while we might once have been concerned about, or felt uncomfortable with, CCTV being as pervasive as it is today, we are shifting from a limited number of big brothers to ubiquitous little brothers (Shilton, 2009). The fallacy of security is that more cameras do not necessarily mean a safer society, and statistics, depending on how they are presented, may be misleading about reductions in crime in given hotspots. Criminals do not simply stop committing crime (e.g. selling drugs) because a local council installs a group of multi-directional cameras on a busy public route. On the contrary, crime has been shown to be redistributed or relocated to another proximate geographic location. In a study for the United Kingdom's Home Office (Gill & Spriggs, 2005), only one of the 14 areas studied saw a drop in the number of incidents that could be attributed to CCTV.

Questions of trust seem to be the biggest factor militating against wearable devices that film other people who have not granted their consent to be recorded. Many people may not like to be photographed for reasons we don't quite understand, but it remains their right to say, “No, leave me alone.” Others have no trouble being recorded by someone they know, so long as they know they are being recorded before the record button is pushed. Still others show utter indifference, claiming that there is no longer anything personal out in the open. Often, the argument is posed that anyone can watch anyone else walk down a street. This argument fails, however: watching someone cross the road is not the same as recording them cross the road, whether by design or by sheer coincidence. Handing out requests for deletion every time someone asks whether they have been captured on camera is not good enough. Allowing people to opt out “after the fact” is not consent-based and violates fundamental human rights, including the control individuals might have over their own image and the freedom to go about their life as they please (Bronitt & Michael, 2012).

Laws, Regulations, and Policies

At the present time, laws and regulations pertaining to surveillance and listening devices, privacy, telecommunications, crimes, and even workplace relations require amendments to keep pace with advancements in wearable and even implantable sensors. The police need to be seen to enforce the laws that they are there to uphold, not to don the very devices they claim are illegal. Policies in campus settings, such as universities, also need to address the seeming imbalance between what is, and is not, possible. The commoditization of such devices will only lead to even greater public interest issues coming to the fore. The laws are clearly outdated, and there is controversy over how to overcome the legal implications of emerging technologies.

Creating new laws for each new device will lead to an endless drafting of legislation, which is not practicable; yet claiming that existing laws can respond to new problems is also unrealistic, as users will seek to get around the law via loopholes in a patchwork of statutes. Cameras create a power imbalance. Initially, only a few people had mobile phones with cameras: now they are everywhere. Then, only some people carried body-worn video recorders for extreme sports: now, increasingly, many are using a GoPro, Looxcie, or Taser Axon glasses. These devices, while still nascent, have been met with some acceptance in various contexts, including some business-centric applications. Photoborgs might feel they are “hitting back” at all the cameras on the walls that are recording 24×7, but this does not cancel out the fact that the photoborgs themselves are doing exactly what they claim a fixed, wall-mounted camera is doing to them.

Future Implications

All of the risks mentioned above are interrelated. If we lack privacy, we lose trust; if we lack security, we feel vulnerable; if we lose our anonymity, we lose a considerable portion of our liberty and when people lose their trust and their liberty, then they feel vulnerable. This kind of scenario is deeply problematic, and portends a higher incidence of depression, as people would not feel they had the freedom to act and be themselves, sharing their true feelings. Implications of this interrelatedness are presented in Figure 1.

Since 100% security does not exist in any technological system, privacy will always be a prominent issue. When security is lacking, privacy becomes an issue, individuals become more vulnerable, and the anonymity of the individual comes into question. A loss of anonymity limits people's liberty to act and speak as they want, and eventually people start losing their trust in each other and in authorities. When people are not free to express their true selves, they become withdrawn and, despite living in a high-tech community, may enter a state of despondency. The real question concerns the future, when it is not people sporting these body-worn devices but automated data collection machines like Knightscope's K5 (Knightscope, 2016). These will indeed be mobile camera surveillance units, converging sousveillance and surveillance in one clean sweep (Perakslis et al., 2014).

Figure 1. Major implications of wearables: the utopian and dystopian views

Future Society

Mann (2013) argues that wearable sousveillance devices that are used in everyday life to store, access, transfer and share information will become commonplace, worn continuously and perhaps even permanently implanted. Michael and Michael (2012, p. 195), in their perception of the age of überveillance, state:

There will be a segment of the consumer and business markets who will adopt the technology for no clear reason and without too much thought, save for the fact that the technology is new and seems to be the way advanced societies are heading. This segment will probably not be overly concerned with any discernible abridgement of their human rights nor the small print ‘terms and conditions agreement’ they have signed, but will take an implant on the promise that they will have greater connectivity to the Internet, and to online services and bonus loyalty schemes more generally.

Every feature added on a wearable device adds another layer of risk to the pre-existing risks. Currently, we may only have capabilities to store, access, transfer and manipulate the gathered data but as the development of technology continues, context-aware software will be able to interpret vast amounts of data into meaningful information that can be used by unauthorized third parties. It is almost certain that the laws will not be able to keep up with the pace of the technology. Accordingly, individuals will have to be alert and aware, and private and public organizations will need to set rules and guidelines to protect their employees’ privacy, as well as their own.

Society’s ability to cope with the ethical and societal problems that technology raises has long been falling behind the development of such technology and the same can be said for laws and regulations. With no legal protection and social safe zone, members of society are threatened with losing their privacy through wearable technology. When the technology becomes widespread, privacy at work, in schools, in supermarkets, at the ATM, on the Internet, even when walking, sitting in a public space, and so on, is subject to perishability.

The future is already here and, since the development of technology is seemingly unstoppable, there is more to come; but whatever futures may come, there needs to be a healthy human factor. “For every expert there's an equal and opposite expert” (Sowell, 1995, p. 102; also sometimes attributed to Arthur C. Clarke). So even as we are enthusiastic about how data collected through wearable technology will enhance the quality of our daily lives, we also have to be cautious and think about our security and privacy in an era of ubiquitous wearable technology. In this sense, creating digital footprints of our social and personal lives, with the possibility of their being exposed publicly, does not seem to coincide with the idea of a “healthy society”.

One has to ponder: where next? Might we be arguing that we are nearing the point of total surveillance, as everyone begins to record everything around them for “just in case” reasons such as insurance protection, establishing liability, and complaint handling (much like the in-car black box recorder unit can clear you of wrongdoing in an accident)? How gullible might we become to think that images and video footage do not lie, even though a new breed of hackers might manipulate and tamper with digital reality to their own ends? The überveillance trajectory refers to the ultimate potentiality for embedded surveillance devices, like swallowable pills with onboard sensors, tags, and transponder IDs placed in the subdermal layer of the skin (Michael & Michael, 2013). Will the new frontier be surveillance of the heart and mind?

Discussion Points

• Does sound recording by wearable devices present any ethical dilemmas?

• Are wearable still cameras more acceptable than wearable video cameras?

• What should one do if bystanders of a recording in a public space question why they are being recorded?

• What themes are evident in the videos and the comments on Surveillance Camera Man Channel at

• What is the difference between making a recording with a smartphone and with a head mounted camera?

• If one juxtaposes a surveillance camera covertly integrated into a light fixture, with an overt head-mounted camera, then why should the two devices elicit a different response from bystanders?

• In what ways is a CCTV in a restroom any different from a photoborg in a restroom?

• Are there gender differences in enthusiasm for certain wearables? Who are the innovators of these technologies?

• What dangers exist around Internet addiction, escapism, and living in a virtual world?

• Are we nearing the point of total information surveillance? Is this a good thing? Will it decrease criminal activity or are we nearing a Minority Report style future?

• Will the new frontier be surveillance of the heart and mind beyond anything Orwell could have envisioned?

• How can the law keep pace with technological change?

• Can federal and state laws be in contradiction over the rights of a photoborg? How?

• Watch the movie The Final Cut. Watch the drama The Entire History of You. What are the similarities and differences? What does such a future mean for personal security and national security?

• Consider in small groups other scenarios where wearables would be welcome as opposed to unwelcome.

• In which locations should body-worn video cameras never be worn?


• What is meant by surveillance, sousveillance and überveillance?

• What is a photoborg? And what is “point of view” within a filming context?

• Research the related terms surveillance, dataveillance, and überveillance.

• What does Steve Mann’s “Request for Deletion” webpage say? Why is it largely untenable?

• Why did Google decide to focus on industry applications of Glass finally, and not the total market?

• Are we ready to see many (overt or covert) sousveillers in our everyday life?

• Will we all be photoborgs one day, or live in a society where we need to be?

• Do existing provisions concerning voyeurism cover all possible sousveillance situations?

• If lifelogs are streamed in real-time and near real-time, what can the bystanders shown do about the distribution of their images (if they ever find out)?

• Is lifelong lifelogging feasible? Desirable? Should it be suspended in confidential business meetings, when going through airport security and customs, or in other areas? Which areas?

• Should citizens film their encounters with police, given police are likely to be filming it too?

• Should the person using PoVS technology have more legal protection than persons they are monitoring?

• Are wearables likely to be rapidly adopted and even outpace smartphone use?


Armstrong, J., & Welsh, B. (2011). The Entire History of You. In B. Reisz (Ed.), Black Mirror. London, UK: Zeppetron.

Australasian Legal Information Institute. (2014). Workplace Surveillance Act, 2005 (NSW). Retrieved June 6, 2016, from consol_act/wsa2005245/

Australasian Legal Information Institute. (2015). Surveillance Devices Act, 1998 (WA). Retrieved June 6, 2016, from nsf/main_mrtitle_946_currencies.html

Australasian Legal Information Institute. (2016). Privacy and Personal Information Protection Act 1998. Retrieved June 6, 2016, from legis/nsw/consol_act/papipa1998464/

Branscomb, A. W. (1994). Who Owns Information? From Privacy to Public Access. New York, NY: BasicBooks.

Bronitt, S., & Michael, K. (2012). Human rights, regulation, and national security (introduction). IEEE Technology and Society Magazine, 31(1), 15–16. doi:10.1109/MTS.2012.2188704

Clarke, R. (2012). Point-of-View Surveillance. Retrieved from

Clarke, R. (2014). Surveillance by the Australian media, and its regulation. Surveillance & Society, 12(1), 89–107.

Echo. (2016). Lecture capture: Video is the new textbook.

Ericsson. (2015). Ericsson Mobility Report. Retrieved June 6, 2016.

Fernandez Arguedas, V., Izquierdo, E., & Chandramouli, K. (2013). Surveillance ontology for legal, ethical and privacy protection based on SKOS. In IEEE 18th International Conference on Digital Signal Processing (DSP).

Ghorayshi, A. (2014). Google Glass user treated for internet addiction caused by the device. Retrieved June 6, 2016.

Gill, M., & Spriggs, A. (2005). Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate.

Gokyer, D., & Michael, K. (2015). Digital wearability scenarios: Trialability on the run. IEEE Consumer Electronics Magazine, 4(2), 82–91. doi:10.1109/MCE.2015.2393005

Harfield, C. (2014). Body-worn POV technology: Moral harm. IEEE Technology and Society Magazine, 33(2), 64–72. doi:10.1109/MTS.2014.2319976

Knightscope. (2016). Advanced physical security technology. Knightscope: K5.

Kotler, S. (2014). Legal heroin: Is virtual reality our next hard drug? Retrieved June 6, 2016.

Lagorio, C. (2006). Is virtual life better than reality? Retrieved June 6, 2016.

Levy, K. (2014). A surprising number of places have banned Google Glass in San Francisco. Business Insider, 3. Retrieved from google-glass-ban-san-francisco-2014-3

Mann, S. (2002). Sousveillance.

Mann, S. (2005). Sousveillance and cyborglogs: A 30-year empirical voyage through ethical, legal, and policy issues. Presence (Cambridge, Mass.), 14(6), 625–646. doi:10.1162/105474605775196571

Mann, S. (2013). Veillance and reciprocal transparency: Surveillance versus sousveillance, AR glass, lifeglogging, and wearable computing. In IEEE International Symposium on Technology and Society (ISTAS). Toronto: IEEE.

Mann, S. (n.d.). The request for deletion (RFD). Retrieved from http://wearcam.org/rfd.htm

Mann, S., & Wassell, P. (2013). Proposed law on sousveillance.

Michael, K. (2013). Keynote: The final cut—Tampering with direct evidence from wearable computers. In Proc. 5th Int. Conf. Multimedia Information Networking and Security (MINES).

Michael, K., & Michael, M. G. (2012). Commentary on: Mann, S. (2012): Wearable computing. In M. Soegaard & R. Dam (Eds.), Encyclopedia of human-computer interaction. The Foundation.

Michael, K., & Michael, M. G. (2013a). Computing ethics: No limits to watching? Communications of the ACM, 56(11), 26–28. doi:10.1145/2527187

Michael, K., Michael, M. G., & Perakslis, C. (2014). Be vigilant: There are limits to veillance. In J. Pitt (Ed.), The computer after me. London: Imperial College London Press. doi:10.1142/9781783264186_0013

Michael, M. G., & Michael, K. (Eds.). (2013b). Überveillance and the social implications of microchip implants: Emerging technologies (Advances in human and social aspects of technology). Hershey, PA: IGI Global.

Nettle, D., Nott, K., & Bateson, M. (2012). ‘Cycle thieves, we are watching you’: Impact of a simple signage intervention against bicycle theft. PLoS ONE, 7(12), e51738. doi:10.1371/journal.pone.0051738 PMID:23251615

Perakslis, C., Pitt, J., & Michael, K. (2014). Drones humanus. IEEE Technology and Society Magazine, 33(2), 38–39.

Ruvio, A., Gavish, Y., & Shoham, A. (2013). Consumer’s doppelganger: A role model perspective on intentional consumer mimicry. Journal of Consumer Behaviour, 12(1), 60–69. doi:10.1002/cb.1415

Shactman, N. (2003). A spy machine of DARPA’s dreams. Wired.

Shilton, K. (2009). Four billion little brothers?: Privacy, mobile phones, and ubiquitous data collection. Communications of the ACM, 52(11), 48–53. doi:10.1145/1592761.1592778

Sowell, T. (1995). The Vision of the Anointed: Self-congratulation as a Basis for Social Policy. New York, NY: Basic Books.

Surveillance Camera Man. (2015). Surveillance Camera Man.

Tanner, R. J., Ferraro, R., Chartrand, T. L., Bettman, J., & van Baaren, R. (2008). Of chameleons and consumption: The impact of mimicry on choice and preferences. The Journal of Consumer Research, 34(6), 754–766. doi:10.1086/522322

University of Wollongong. (2014a). Campus Access and Order Rules.

University of Wollongong. (2014b). Ownership of Intellectual Property.

University of Wollongong. (2014c). Student Conduct Rules.

Van Bommel, M., van Prooijen, J., Elffers, H., & Van Lange, P. (2012). Be aware to care: Public self-awareness leads to a reversal of the bystander effect. Journal of Experimental Social Psychology, 48(4), 926–930. doi:10.1016/j.jesp.2012.02.011

Key Terms and Definitions

Body-Worn Video (BWV): Cameras embedded in devices that can be worn on the body to record video, typically by law enforcement officers.

Closed-Campus: Refers to any organization or institution that contains dedicated building(s) on a bounded land parcel offering a range of online and offline services, such as banking, retail, and sporting services. Examples of closed campuses include schools and universities.

Closed-Circuit Television (CCTV): Also referred to as video surveillance. CCTV is the use of video cameras to transmit a signal to a specific place. CCTV cameras can be overt (obvious) or covert (hidden).

Digital Glass: Otherwise referred to as wearable eyeglasses which house multiple sensors on board. An example of digital glass is Google Glass. The future of digital glass may well be computer-based contact lenses.

Lifelogging: When a user decides to log his/her life using wearable computing or other devices that have audio-visual capability. It is usually a continuous 24/7 stream of recording.

Personal Security Devices: These are devices that allegedly deter perpetrators from attacking others because they are always on, gathering evidence, and ready to record. PSDs may have an on-board alarm that alerts central care services for further assistance.

Policy: An enforceable set of organizational rules and principles used to aid decision-making that have penalties for non-compliance, such as the termination of an employee’s contract with an employer.

Private Space: A geographic space in which one naturally has an expectation of privacy. Some examples include: the home, the backyard, and the restroom.

Public Space: A geographic space in which there is no expectation of privacy, save for when someone holds a private conversation in a private context.

Sousveillance: The opposite of surveillance from above, which includes inverse surveillance, also sometimes described as person-to-person surveillance. Citizens can use sousveillance as a mechanism to keep law enforcement officers accountable for their actions.

Surveillance: “Watching from above,” such as CCTV mounted on business buildings. The monitoring of behaviors, activities, or other changing information under the watchful eye of authority, usually for the purpose of influencing, managing, directing, or protecting the masses.

Citation: Michael, K., Gokyer, D., & Abbas, S. (2017). Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”. In A. Marrington, D. Kerr, & J. Gammack (Eds.), Managing Security Issues and the Hidden Dangers of Wearable Technologies (pp. 238-266). Hershey, PA: IGI Global. doi:10.4018/978-1-5225-1016-1.ch010

Location-Based Privacy, Protection, Safety, and Security


This chapter will discuss the interrelated concepts of privacy and security with reference to location-based services, with a specific focus on the notion of location privacy protection. The latter can be defined as the extent and level of control an individual possesses over the gathering, use, and dissemination of personal information relevant to their location, whilst managing multiple interests. Location privacy in the context of wireless technologies is a significant and complex concept given the dual and opposing uses of a single LBS solution. That is, an application designed or intended for constructive uses can simultaneously be employed in contexts that violate the (location) privacy of an individual. For example, a child or employee monitoring LBS solution may offer safety and productivity gains (respectively) in one scenario, but when employed in secondary contexts may be regarded as a privacy-invasive solution. Regardless of the situation, it is valuable to initially define and examine the significance of “privacy” and “privacy protection,” prior to exploring the complexities involved.

16.1 Introduction

Privacy is often expressed as the most complex issue facing location-based services (LBS) adoption and usage [44, p. 82; 61, p. 5; 66, pp. 250–254; 69, pp. 414–415]. This is due to numerous factors such as the significance of the term in relation to human rights [65, p. 9]. According to a report by the Australian Law Reform Commission (ALRC), “privacy protection generally should take precedence over a range of other countervailing interests, such as cost and convenience” [3, p. 104]. The intricate nature of privacy is also a result of the challenges associated with accurately defining the term [13, p. 4; 74, p. 68]. That is, privacy is a difficult concept to articulate [65, p. 13], as the term is liberally and subjectively applied, and the boundaries constituting privacy protection are unclear. Additionally, privacy literature is dense, and contains varying interpretations, theories and discrepancies as to what constitutes privacy. However, as maintained by [65, p. 67], “[o]ne point on which there seems to be near-unanimous agreement is that privacy is a messy and complex subject.” Nonetheless, as asserted by [89, p. 196], privacy is fundamental to the individual due to various factors:

The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual.

The Oxford English Dictionary definition of security is the “state of being free from danger or threat.” A designation of security applicable to this research is “a condition in which harm does not arise, despite the occurrence of threatening events; and as a set of safeguards designed to achieve that condition” [92, pp. 390–391]. Security and privacy are often confused in LBS scholarship. Elliot and Phillips [40, p. 463] warn that “[p]rivacy is not the same as security,” although the two themes are related [70, p. 14]. Similarly, Clarke [21] states that the term privacy is often used by information and communication technology professionals to describe data and data transmission security. The importance of security is substantiated by the fact that it is considered “a precondition for privacy and anonymity” [93, p. 2], and as such the two themes are intimately connected. In developing this chapter and surveying security literature relevant to LBS, it became apparent that existing scholarship is varied, but nonetheless entails exploration of three key areas. These include: (1) security of data or information, (2) personal safety and physical security, and (3) security of a nation or homeland/national security, interrelated categories adapted from [70, p. 12].

This chapter will discuss the interrelated concepts of privacy and security with reference to LBS, with a specific focus on the notion of location privacy protection. The latter can be defined as the extent and level of control an individual possesses over the gathering, use, and dissemination of personal information relevant to their location [38, p. 1; 39, p. 2; 53, p. 233], whilst managing multiple interests (as described in Sect. 16.1.1). Location privacy in the context of wireless technologies and LBS is a significant and complex concept given the dual and opposing uses of a single LBS solution. That is, an application designed or intended for constructive uses can simultaneously be employed in contexts that violate the (location) privacy of an individual. For example, a child or employee monitoring LBS solution may offer safety and productivity gains (respectively) in one scenario, but when employed in secondary contexts may be regarded as a privacy-invasive solution. Regardless of the situation, it is valuable to initially define and examine the significance of “privacy” and “privacy protection,” prior to exploring the complexities involved.

16.1.1 Privacy: A Right or an Interest?

According to Clarke [26, pp. 123–129], the notions of privacy and privacy protection have been important social issues since the 1960s. An enduring definition of privacy is the “right to be let alone” [89, p. 193]. This definition requires further consideration as it is quite simplistic in nature and does not encompass diverse dimensions of privacy. For further reading on the development of privacy and the varying concepts including that of Warren and Brandeis, see [76]. Numerous scholars have attempted to provide a more workable definition of privacy than that offered by Warren and Brandeis.

For instance, [21] maintains that perceiving privacy simply as a right is problematic and narrow, and that privacy should rather be viewed as an interest or collection of interests, which encompasses a number of facets or categories. As such, privacy is defined as “the interest that individuals have in sustaining a ‘personal space’, free from interference by other people and organisations” [21, 26]. In viewing privacy as an interest, the challenge is in balancing multiple interests in the name of privacy protection. This, as Clarke [21] maintains, includes opposing interests in the form of one’s own interests, the interests of other people, and/or the interests of organizations or society. As such, Clarke refers to privacy protection as “a process of finding appropriate balances between privacy and multiple competing interests.”

16.1.2 Alternative Perspectives on Privacy

Solove’s [80] taxonomy of privacy offers a unique, legal perspective on privacy by grouping privacy challenges under the categories of information collection, information processing, information dissemination, and invasion. Refer to [80, pp. 483–558] for an in depth overview of the taxonomy which includes subcategories of the privacy challenges. Nissenbaum [65, pp. 1–2], on the other hand, maintains that existing scholarship generally expresses privacy in view of restricting access to, and maintaining control over, personal information. For example, Quinn [73, p. 213] insists that the central theme in privacy debates is that of access, including physical access to an individual, in addition to information access. With respect to LBS and location privacy, Küpper and Treu [53, pp. 233–234] agree with the latter, distinguishing three categories of access: (1) third-party access by intruders and law enforcement personnel/authorities, (2) unauthorized access by providers within the supply chain for malicious purposes, and (3) access by other LBS users. Nissenbaum [65, pp. 1–2] disputes the interpretation focused on access and control, noting that individuals are not interested in “simply restricting the flow of information but ensuring that it flows appropriately.” As such, Nissenbaum offers the framework of contextual integrity, as a means of determining when certain systems and practices violate privacy, and transform existing information flows inappropriately [65, p. 150]. The framework serves as a possible tool that can assist in justifying the need for LBS regulation.

A primary contribution from Nissenbaum is her emphasis on the importance of context in determining the privacy-violating nature of a specific technology-based system or practice. In addition to an appreciation of context, Nissenbaum recognizes the value of perceiving technology with respect to social, economic, and political factors and interdependencies. That is, devices and systems should be considered as socio-technical units [65, pp. 5–6].

In relation to privacy, and given the importance of socio-technical systems, the complexities embedded within privacy may, therefore, arise from the fact that the term can be examined from a number of perspectives. For instance, it can be understood in terms of its philosophical, psychological, sociological, economical, and political significance [21, 26]. Alternatively, privacy theory can provide varying means of interpretation, given that available approaches draw on inspiration from multiple disciplines such as computer science and engineering, amongst others [65, p. 67]. It is also common to explore privacy through its complex dimensions.

According to Privacy International, for instance, the term comprises the aspects of information privacy, bodily privacy, privacy of communications, and territorial privacy [72]. Similarly, in providing a contemporary definition of privacy, Clarke [26] uses Maslow’s hierarchy of needs to define the various categories of privacy; that is, “privacy of the person,” “privacy of personal behavior,” “privacy of personal communications,” and “privacy of personal data.” Clarke argues that since the late 1960s the term has been confined, in a legal sense, to the last two categories. That is, privacy laws have been restricted in their focus in that they are predominantly based on the OECD fair information principles, and lack coverage of other significant categories of privacy. Therefore, the label of information privacy, typically interchangeable with data privacy, is utilized in reference to the combination of communications and data privacy [21], and is cited by [58, pp. 5–7] as a significant challenge in the information age.

16.2 Background

16.2.1 Defining Information Privacy

In Alan Westin’s prominent book Privacy and Freedom, information privacy is defined as “the right of individuals, groups and institutions to determine for themselves, when, how and to what extent information about them is communicated to others” [90, p. 7]. Information in this instance is personal information that can be linked to or identify a particular individual [33, p. 326]. For a summary of information privacy literature and theoretical frameworks, presented in tabular form, refer to [8, pp. 15–17].

16.2.2 Information Privacy Through the Privacy Calculus Perspective

For the purpose of this chapter, it is noteworthy that information privacy can be studied through differing lenses, one of which is the privacy calculus theoretical perspective. Xu et al. [95, p. 138] explain that “the calculus perspective of information privacy interprets the individual’s privacy interests as an exchange where individuals disclose their personal information in return for certain benefits.” It can be regarded as a form of “cost–benefit analysis” conducted by the individual, where privacy is likely to be (somewhat) relinquished if there is a perceived net benefit resulting from information disclosure [33, p. 327]. This perspective acknowledges the claim that privacy-related issues and concerns are not constant, but rather depend on perceptions, motivations, and conditions that are context or situation dependent [78, p. 353]. A related notion is the personalization–privacy paradox, which is based on the interplay between an individual’s willingness to reap the benefits of personalized services at the expense of divulging personal information, which may potentially threaten or invade their privacy. An article by Awad and Krishnan [8] examines this paradox, with specific reference to online customer profiling to deliver personalized services. The authors recommend that organizations work on increasing the perceived benefit and value of personalized services to ensure “the potential benefit of the service outweighs the potential risk of a privacy invasion” [8, p. 26].
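The cost–benefit logic of the privacy calculus lends itself to a toy illustration. The sketch below is not drawn from any of the cited studies; the class, function, and weighting scheme are hypothetical, chosen only to show disclosure modeled as a net-benefit decision in which trust discounts perceived risk:

```python
from dataclasses import dataclass

@dataclass
class DisclosureContext:
    """Hypothetical inputs to an individual's privacy calculus."""
    perceived_benefit: float   # e.g. value of a personalized service (0..10)
    perceived_risk: float      # e.g. expected harm from disclosure (0..10)
    trust_in_provider: float   # 0..1; higher trust discounts perceived risk

def will_disclose(ctx: DisclosureContext) -> bool:
    """Disclose personal information only when the perceived net
    benefit is positive, per the calculus perspective."""
    effective_risk = ctx.perceived_risk * (1.0 - ctx.trust_in_provider)
    return ctx.perceived_benefit > effective_risk

# The same benefit and risk, but trust tips the calculus either way:
print(will_disclose(DisclosureContext(4.0, 6.0, trust_in_provider=0.8)))  # True
print(will_disclose(DisclosureContext(4.0, 6.0, trust_in_provider=0.1)))  # False
```

The point of the sketch is that the "cost–benefit analysis" is context dependent: nothing about the individual's underlying preferences changes between the two calls, only situational trust.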

In the LBS context, more specifically, Xu et al. [94] build on the privacy calculus framework to investigate the personalization–privacy paradox as it pertains to overt and covert personalization in location-aware marketing. The results of the study suggest that the personalization approaches (overt and covert) impact on the perceived privacy risks and values. A complete overview of results can be found in [94, pp. 49–50]. For further information regarding the privacy calculus and the personalization–privacy paradox in the context of ubiquitous commerce applications including LBS, refer to [78]. These privacy-related frameworks and the concepts presented in this section are intended to be introductory in nature, enabling an appreciation of the varied perspectives on privacy and information privacy, in addition to the importance of context, rather than providing thoroughness in the treatment of privacy and information privacy. Such notions are particularly pertinent when reflecting on privacy and the role of emerging information and communication technologies (ICTs) in greater detail.

16.2.3 Emerging Technologies, m-Commerce and the Related Privacy Challenges

It has been suggested that privacy concerns have been amplified (but not driven) by the emergence and increased use of ICTs, with the driving force being the manner in which these technologies are implemented by organizations [21, 26]. In the m-commerce domain, mobile technologies are believed to boost the threat to consumer privacy. That is, the intensity of marketing activities can potentially be increased with the availability of timely location details and, more significantly, tracking information; thus enabling the influencing of consumer behaviors to a greater extent [25]. The threat, however, is not solely derived from usage by organizations. Specifically, the technologies originally introduced for use by government and organizational entities are presently available for consumer adoption by members of the community. For further elaboration, refer to Abbas et al. [1] and chapter 8 of Andrejevic [4]. Thus, location (information) privacy protection emerges as a substantial challenge for the government, business, and consumer sectors.

16.2.4 Defining Location (Information) Privacy

Location privacy, regarded as a subset of information privacy, has been defined and presented in various ways. Duckham [38, p. 1] believes that location privacy is “the right of individuals to control the collection, use, and communication of personal information about their location.” Küpper and Treu [53, p. 233] define location privacy as “the capability of the target person to exercise control about who may access her location information in which situation and in which level of detail.” Both definitions focus on the aspect of control, cited as a focal matter regarding location privacy [39, p. 2]. With specific reference to LBS, location privacy and related challenges are considered to be of utmost importance. For example, Perusco and Michael [70, pp. 414–415], in providing an overview of studies relating to the social implications of LBS, claim that the principal challenge is privacy.

In [61, p. 5] Michael et al. also state, with respect to GPS tracking, that privacy is the “greatest concern,” resulting in the authors proposing a number of questions relating to the type of location information that should be revealed to other parties, the acceptability of child tracking and employee monitoring, and the requirement for a warrant in the tracking of criminals and terrorists. Similarly, Bennett and Crowe [12, pp. 9–32] reveal the privacy threats to various individuals, for instance those in emergency situations, mobile employees/workers, vulnerable groups (e.g., elderly), family members (notably children and teenagers), telematics application users, rental car clients, recreational users, prisoners, and offenders. In several of these circumstances, location privacy must often be weighed against other conflicting interests, an example of which is the emergency management situation. For instance, Aloudat [2, p. 54] refers to the potential “deadlock” between privacy and security in the emergency context, noting public concerns associated with the move towards a “total surveillance society.”

16.2.5 Data or Information Security

It has been suggested that data or information security in the LBS domain involves prohibiting unauthorized access to location-based information, which is considered a prerequisite for privacy [88, p. 121]. This form of security is concerned with “implementing security measures to ensure that collected data is only accessed for the agreed-upon purpose” [46, p. 1]. It is not, however, limited to access but is also related to “unwanted tracking” and the protection of data and information from manipulation and distortion [10, p. 185]. The techniques and approaches available to prevent unauthorized access and minimize chances of manipulation include the use of “spatially aware access control systems” [34, p. 28] and security- and privacy-preserving functionality [9, p. 568]. The intricacies of these techniques are beyond the scope of this investigation. Rather, this section is restricted to coverage of the broad data and information security challenges and the resultant impact on LBS usage and adoption.
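As a purely illustrative aside (not a description of the systems cited in [34] or [9]), a "spatially aware" access control check can be thought of as an ordinary role check combined with a geometric containment test: access to location data is granted only when the requester's role is permitted to query the region containing the point. All role names and coordinates below are hypothetical:

```python
# Illustrative sketch of a spatially aware access-control check.
# Zones are axis-aligned bounding boxes: (min_lat, min_lon, max_lat, max_lon).
PERMITTED_ZONES = {
    "fleet_manager": (-34.45, 150.85, -34.40, 150.90),   # hypothetical depot area
    "emergency_service": (-90.0, -180.0, 90.0, 180.0),   # unrestricted
}

def may_access(role: str, lat: float, lon: float) -> bool:
    """Grant access only if the role's permitted zone contains the queried point."""
    zone = PERMITTED_ZONES.get(role)
    if zone is None:
        return False  # unknown roles are denied by default
    min_lat, min_lon, max_lat, max_lon = zone
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

print(may_access("fleet_manager", -34.42, 150.88))  # True: inside the depot zone
print(may_access("fleet_manager", -33.87, 151.21))  # False: outside the zone
print(may_access("visitor", -34.42, 150.88))        # False: no zone defined
```

Real systems layer purpose restrictions, auditing, and temporal constraints on top of such a check; the sketch shows only the spatial dimension that distinguishes this class of access control.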

16.2.6 Impact of Data or Information Security on LBS Market Adoption

It has been suggested that data and information security is a fundamental concern influencing LBS market adoption. From a legal standpoint, security is an imperative concept, particularly in cases where location information is linked to an individual [41, p. 22]. In such situations, safeguarding location data or information has often been described as a decisive aspect impacting on user acceptance. These claims are supported in [85, p. 1], noting that user acceptance of location and context-aware m-business applications is closely linked to security challenges. Hence, from the perspective of organizations wishing to be “socially-responsive,” Chen et al. [19, p. 7] advise that security breaches must be avoided in the interest of economic stability:

Firms must reassure customers about how location data are used…A security lapse, with accompanying publicity in the media and possible ‘negligence’ lawsuits, may prove harmful to both sales and the financial stability of the firm.

Achieving satisfactory levels of security in location- and context-aware services, however, is a tricky task given the general issues associated with the development of security solutions; inevitable conflicts between protection and functionality; mobile-specific security challenges; inadequacy of standards to account for complex security features; and privacy and control-related issues [85, pp. 1–2]. Furthermore, developing secure LBS involves consideration of multiple factors; specifically those related to data or information accuracy, loss, abuse, unauthorized access, modification, storage, and transfer [83, p. 10]. There is the additional need to consider security issues from multiple stakeholder perspectives, in order to identify shared challenges and accurately assess their implications and the manner in which suitable security features can be integrated into LBS solutions. Numerous m-business security challenges relevant to LBS from various perspectives are listed in [85]. Data security challenges relevant to LBS are also discussed in [57, pp. 44–46].

16.3 Privacy and Security Issues

16.3.1 Access to Location Information Versus Privacy Protection

The issue of privacy in emergency situations, in particular, is delicate. For instance, Quinn [73, p. 225] remarks on the benefits of LBS in safety-related situations, with particular reference to the enhanced 911 Directive in the US, which stipulates that the location of mobile phones be provided in emergency situations, aiding in emergency response efforts. The author continues to identify “loss of privacy” as a consequence of this service, specifically in cases where location details are provided to third parties [73, p. 226]. Such claims imply that there may be conflicting aims in developing and utilizing LBS. Duckham [38, p. 1] explains this point, stating that the major challenge in the LBS realm is managing the competing aims of enabling improved access to location information versus allowing individuals to maintain a sufficient amount of control over such information. The latter is achieved through the deployment of techniques for location privacy protection.

16.3.2 Location Privacy Protection

It is valid at this point to discuss approaches to location privacy protection. Bennett and Grant [13, p. 7] claim that general approaches to privacy protection in the digital age may come in varied forms, including, but not limited to, privacy-enhancing technologies, self-regulation approaches, and advocacy. In terms of LBS, substantial literature is available proposing techniques for location privacy protection, at both the theoretical and practical levels. A number of these techniques are best summarized in [39, p. 13] as “regulation, privacy policies, anonymity, and obfuscation.” A review of complementary research on the topic of privacy and LBS indicates that location privacy has predominantly been examined in terms of the social challenges and trade-offs from theoretical and practical perspectives; the technological solutions available to maintain location privacy; and the need for other regulatory response(s) to address location privacy concerns. The respective streams of literature are now inspected further in this chapter.
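Of the techniques summarized above, obfuscation is the simplest to sketch: the reported position is deliberately coarsened before release, so the service receives only an approximate location. The grid-snapping example below is illustrative only (one of several possible obfuscation strategies, not a method proposed in the cited literature):

```python
import math

def obfuscate(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to the centre of a grid cell of side `cell_deg`
    degrees (~1.1 km at the equator for 0.01 degrees), hiding the exact
    position while preserving coarse location for the service."""
    def snap(v: float) -> float:
        return round((math.floor(v / cell_deg) + 0.5) * cell_deg, 6)
    return (snap(lat), snap(lon))

# Two nearby readings in the same cell obfuscate to the same point,
# so the service cannot distinguish them:
print(obfuscate(-34.40623, 150.87841))  # (-34.405, 150.875)
print(obfuscate(-34.40101, 150.87105))  # (-34.405, 150.875)
```

The cell size is the privacy knob: a larger `cell_deg` yields stronger privacy but degrades service quality, which is precisely the trade-off discussed in the next section.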

16.3.3 Social Challenges and Trade-Offs

In reviewing existing literature, the social implications of LBS with respect to privacy tend to be centered on the concepts of invasion, trade-off, and interrelatedness and complexity. The first refers primarily to the perceived and actual intrusion or invasion of privacy resulting from LBS development, deployment, usage, and other aspects. Alternatively, the trade-off notion signifies the weighing of privacy interest against other competing factors, notably privacy versus convenience (including personalization) and privacy versus national security. On the other hand, the factors of interrelatedness and complexity refer to the complicated relationship between privacy and other ethical dilemmas or themes such as control, trust, and security.

With respect to the invasion concept, Westin notes that concerns regarding invasion of privacy were amplified during the 1990s in both the social and political spheres [91, p. 444]. Concentrating specifically on LBS, [62, p. 6] provides a summary of the manner in which LBS can be perceived as privacy-invasive, claiming that GPS tracking activities can threaten or invade the privacy of the individual. According to the authors, such privacy concerns can be attributed to a number of issues regarding the process of GPS tracking. These include: (1) questionable levels of accuracy and reliability of GPS data, (2) potential to falsify the data post-collection, (3) capacity for behavioral profiling, (4) ability to reveal spatial information at varying levels of detail depending on the GIS software used, and (5) potential for tracking efforts to become futile upon extended use as an individual may become nonchalant about the exercise [62, pp. 4–5]. Other scholars examine the invasion concept in various contexts. Varied examples include [55] in relation to mobile advertising, [51] in view of monitoring employee locations, and [79] regarding privacy invasion and legislation in the United States concerning personal location information.

Current studies declare that privacy interests must often be weighed against other, possibly competing, factors, notably the need for convenience and national security. That is, various strands of LBS literature are fixed on addressing the trade-off between convenience and privacy protection. For instance, in a field study of mobile guide services, Kaasinen [50, p. 49] supports the need for resolving such a trade-off, arguing that “effortless use” often results in lower levels of user control and, therefore, privacy. Other scholars reflect on the trade-off between privacy and national security. In an examination of the legal, ethical, social, and technological issues associated with the widespread use of LBS, Perusco et al. [71] propose the LBS privacy–security dichotomy. The dichotomy is a means of representing the relationship between the privacy of the individual and national security concerns at the broader social level [71, pp. 91–97]. The authors claim that a balance must be achieved between both factors. They also identify the elements contributing to privacy risk and security risk, expressing the privacy risks associated with LBS to be omniscience, exposure, and corruption, claiming that the degree of danger is reduced with the removal of a specific risk [71, pp. 95–96]. The lingering question proposed by the authors is “how much privacy are we willing to trade in order to increase security?” [71, p. 96]. Whether in the interest of convenience or national security, existing studies focus on the theoretical notion of the privacy calculus. This refers to a situation in which an individual attempts to balance perceived value or benefits arising from personalized services against loss of privacy in determining whether to disclose information (refer to [8, 33, 78, 94, 95]).

The relationship between privacy and other themes is a common topic of discussion in existing literature. That is, privacy, control, security, and trust are key and interrelated themes concerning the social implications of LBS [71, pp. 97–98]. It is, therefore, suggested that privacy and the remaining social considerations be studied in light of these associations rather than as independent themes or silos of information. In particular, the privacy and control literatures are closely correlated, and as such the fields of surveillance and dataveillance must be flagged as crucial in discussions surrounding privacy. Additionally, there are studies which suggest that privacy issues are closely linked to notions of trust and perceived risk in the minds of users [44, 48, 49], thereby affecting a user’s decision to engage with LBS providers and technologies. It is commonly acknowledged in the LBS privacy literature that resolutions will seek consensus between issues of privacy, security, control, risk, and trust—all of which must be technologically supported.

16.3.4 Personal Safety and Physical Security

LBS applications are often justified as valid means of maintaining personal safety, ensuring physical security and generally avoiding dangerous circumstances, through solutions that can be utilized for managing emergencies, tracking children, monitoring individuals suffering from illness or disability, and preserving security in employment situations. Researchers have noted that safety and security efforts may be enhanced merely through knowledge of an individual’s whereabouts [71, p. 94], offering care applications with notable advantages [61, p. 4].

16.3.5 Applications in the Marketplace

Devices and solutions that capitalize on these facilities have thus been developed, and are now commercially available for public use. They include GPS-enabled wristwatches, bracelets, and other wearable items [59, pp. 425–426], in addition to supportive applications that enable remote viewing or monitoring of location (and other) information. Assistive applications are one example, including technologies and solutions suited to the navigation requirements of vision-impaired or blind individuals [75, pp. 104–105].

Alternative applications deliver tracking capabilities as their primary function; an example is the Australian-owned Fleetfinder PT2 Personal Tracker, which is advertised as a device capable of safeguarding children, teenagers, and the elderly [64]. These devices and applications promise “live on-demand” tracking and “a solid sense of reassurance” [15], which may be appealing for parents, carers, and individuals interested in protecting others. Advertisements and product descriptions are often emotionally charged, taking advantage of an individual’s (parent or carer) desire to maintain the safety and security of loved ones:

Your child going missing is every parent’s worst nightmare. Even if they’ve just wandered off to another part of the park the fear and panic is instant… [It] will help give you peace of mind and act as an extra set of eyes to look out for your child. It will also give them a little more freedom to play and explore safely [56].

16.3.6 Risks Versus Benefits of LBS Security and Safety Solutions

Despite such promotion and endorsement, numerous studies point to the dangers of LBS safety and security applications. Since their inception, individuals and users have voiced privacy concerns, which have been largely disregarded by proponents of the technology, chiefly vendors, given the (seemingly) voluntary nature of technology and device usage [6, p. 7]. The argument that technology adoption is optional, thereby placing the onus on the user, is weak and flawed, particularly in situations where an individual is incapable of making an informed decision regarding monitoring activities, and given covert deployment options that may render monitoring obligatory. The consequences arising from covert monitoring are explored in [59] (refer to pp. 430–432 for the implications of covert versus overt tracking of family members) and [1]. Covert and/or mandatory overt monitoring of minors and individuals suffering from illness is particularly problematic, raising questions about the necessity of consent processes, the suitability of tracking, and what constitutes appropriate use.

In [59, p. 426] Mayer claims that there is a fine line between using tracking technologies, such as GPS, for safety purposes within the family context and improper use. Child tracking, for instance, has been described as a controversial area centered on the safety versus trust and privacy debate [77, p. 7]. However, the argument is not limited to issues of trust and privacy. Patel discusses the dynamics of the parent–child relationship and conveys a number of critical points in relation to wearable and embedded tracking technologies. In particular, Patel provides the legal perspective on child (teenager) monitoring [68, pp. 430–435] and other emergent issues or risks (notably linked to embedded monitoring solutions) related to medical complications, psychological repercussions, and unintended or secondary use [68, pp. 444–455]. Patel offsets these issues by explaining how parental fears regarding child safety, some of which are unfounded, together with the media’s publicizing of such cases, fuel parents’ perceived need to monitor teenagers, while arguing that the decision to be monitored, particularly using embedded devices, should ultimately lie with the teenager [68, pp. 437–442].

16.3.7 Safety of “Vulnerable” Individuals

Similarly, monitoring individuals with an illness or intellectual disability, such as a person with dementia who wanders, raises a unique set of challenges in addition to the aforementioned concerns associated with consent, psychological issues, and misuse in the child or teenager tracking scenario. For instance, while dementia-wandering and other similar applications are designed to facilitate the protection and security of individuals, they can concurrently be unethical in situations where reliability and responsiveness, amongst other factors, are in question [61, p. 7]. A recent qualitative focus group study seeking the attitudes of varied stakeholders on the use of GPS for individuals with cognitive disabilities [54, p. 360] made clear that this is an area fraught with indecision as to the suitability of assistive technologies [54, p. 358]. The recommendations emerging from [54, pp. 361–364] indicate the need to “balance” safety with independence and privacy, to ensure that the individual suffering from dementia is involved in the decision to utilize tracking technologies, and to put a consent process in place, among other technical and device-related suggestions.

While much can be written about LBS applications in the personal safety and physical security categories, including their advantages and disadvantages, this discussion is limited to introductory material. Relevant to this chapter is the portrayal of the tensions arising from the use of solutions originally intended for protection and the resultant consequences, some of which are indeed inadvertent. That is, while the benefits of LBS are evident in their ability to maintain safety and security, they can indeed result in risks, such as the use of LBS for cyber stalking others. In establishing the need for LBS regulation, it is, therefore, necessary to appreciate that there will always be a struggle between benefits and risks relating to LBS implementation and adoption.

16.3.8 National Security

Safety and security debates are not restricted to family situations but may also incorporate, as [59, p. 437] indicates, public safety initiatives and considerations, amongst others, that can contribute to the decline in privacy. These schemes include national security, which has been regarded as a priority area by various governments for over a decade. The Australian government affirms that the nation’s security can be compromised or threatened through various acts of “espionage, foreign interference, terrorism, politically motivated violence, border violations, cyber attack, organised crime, natural disasters and biosecurity events” [7]. Accordingly, technological approaches and solutions have been proposed and implemented to support national security efforts in Australia, and globally. Positioning technologies, specifically, have been adopted as part of government defense and security strategies, a detailed examination of which can be found in [60], thus facilitating increased surveillance. Surveillance schemes have, therefore, emerged as a result of the perceived and real threats to national security promoted by governments [92, p. 389], and according to [63, p. 2] have been legitimized as a means of ensuring national security, thereby granting governments “extraordinary powers that never could have been justified previously” [71, p. 94]. In [20, p. 216], Cho maintains that the fundamental question is “which is the greater sin—to invade privacy or to maintain surveillance for security purposes?”

16.3.9 Proportionality: National Security Versus Individual Privacy

The central theme surfacing in relevant LBS scholarship is that of proportionality; that is, measuring the prospective security benefits against the impending privacy- and freedom-related concerns. For example, [71, pp. 95–96] proposes the privacy–security dichotomy as a means of illustrating the need for balance between an individual’s privacy and a nation’s security, where the privacy and security elements within the model contain subcomponents that collectively amplify risk in a given context. A key point to note in view of this discussion is that while the implementation of LBS may enhance security levels, this will inevitably come at the cost of privacy [71, pp. 95–96] and freedom [61, p. 9].

Furthermore, forsaking privacy corresponds to relinquishing personal freedom, a consequential cost of heightened security in threatening situations. Such circumstances weaken the perceived effects of invasive techniques and increase, to some degree, individuals’ tolerance of them [41, p. 12]. In particular, they “tilt the balance in favor of sacrificing personal freedom for the sake of public safety and security” [36, p. 50]. For example, Davis and Silver [35] report that the trade-off between civil liberties and privacy is often correlated with an individual’s sense of threat. Reporting on a survey of Americans after the events of September 11, 2001, the authors conclude that civil liberties are often relinquished in favor of security in high-threat circumstances [35, p. 35], in that citizens are “willing to tolerate greater limits on civil liberties” [35, p. 74]. Similarly, in a dissertation centered on the social implications of auto-ID and LBS technologies, Tootell [86] presents the Privacy, Security, and Liberty Trichotomy as a means of understanding the interaction between the three values [86, chap. 6]. Tootell concludes that a dominant value will always exist that is unique to each individual [86, pp. 162–163].

Moreover, researchers such as Gould [45, p. 75] have found that while people generally approve of enhanced surveillance, they simultaneously harbor uncertainties regarding government monitoring. From a government standpoint, there is a commonly held but weak view that if an individual has nothing to hide, then privacy is insignificant, an argument particularly popular in relation to state-based surveillance [81, p. 746]. This perspective has inherent flaws, however, as the right to privacy should not be narrowly perceived in terms of the concealment of what would be considered unfavorable activities, a point discussed further by [81, pp. 764–772]. Indeed, the “civil liberties vs. security trade-off has mainly been framed as one of protecting individual rights or civil liberties from the government as the government seeks to defend the country against a largely external enemy” [35, p. 29].

Wigan and Clarke state, in relation to national security, that “surveillance systems are being developed without any guiding philosophy that balances human rights against security concerns, and without standards or guidance in relation to social impact assessment, and privacy design features” [92, p. 400]. Solove [82, p. 362] agrees that a balance can be achieved between security and liberty, through oversight and control processes that restrict prospective uses of personal data. In the current climate, given the absence of such techniques, fears of an Orwellian society dominated by intense and excessive forms of surveillance materialize. However, Clarke [27, p. 39] proposes a set of “counterveillance” principles in response to extreme forms of surveillance introduced in the name of national security, which include:

independent evaluation of technology; a moratorium on technology deployments; open information flows; justification of proposed measures; consultation and participation; evaluation; design principles; balance; independent controls; nymity and multiple identity; and rollback.

The absence of such principles creates a situation in which extremism reigns, producing a flow-on effect with potentially dire consequences in view of privacy, but also trust and control.

16.4 Solutions

16.4.1 Technological Solutions

In discussing technology and privacy in general, Krumm [52, p. 391] notes that computation-based mechanisms can be employed both to safeguard and to invade privacy. It is, therefore, valuable to distinguish between privacy-invasive technologies (PITs) and privacy-enhancing technologies (PETs). Clarke [23] examines the conflict between PITs and PETs, which are tools that can be employed to invade and protect privacy interests respectively. Technologies can invade privacy either deliberately as part of their primary purpose, or alternatively their invasive nature may emerge in secondary uses [23; 24, p. 209]. The aspects contributing to the privacy-invasive nature of location and tracking technologies or transactions include the awareness level of the individual, whether an individual has a choice, and the capability of performing an anonymous transaction, amongst others [22]. In relation to LBS, [23] cites person-location and person-tracking systems as potential PITs that require the implementation of countermeasures, which to date have come in the form of PETs or “counter-PITs.”

Existing studies suggest that the technological solutions (i.e., counter-PITs) available to address the LBS privacy challenge are chiefly concerned with degrading the ability to pinpoint location, or alternatively masking the identity of the user. For example, [62, p. 7] suggests that “[l]evels of privacy can be controlled by incorporating intelligent systems and customizing the amount of detail in a given geographic information system”, thus enabling the ethical use of GPS tracking systems. Similarly, other authors present models that anonymize user identity through the use of pseudonyms [14], architectures and algorithms that decrease location resolution [46], and systems that introduce degrees of obfuscation [37]. Notably, scholars such as Duckham [37, p. 7] consider location privacy protection as involving multiple strategies, citing regulatory techniques and privacy policies as supplementary strategies to techniques that are more technological in nature, such as obfuscation.
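As a rough illustration of the counter-PIT techniques mentioned above (degrading location resolution and masking user identity), the following sketch snaps a GPS fix to a coarse grid and derives a rotating pseudonym. The grid size, hashing scheme, and function names are assumptions for illustration, not any specific system from the cited literature:

```python
import hashlib

# Sketch of two common counter-PIT techniques discussed in the literature:
# (1) degrading location resolution by snapping coordinates to a grid cell,
# (2) masking identity with a per-epoch pseudonym so that location fixes
#     cannot easily be linked over time. All parameters are illustrative.

def obfuscate(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a GPS fix to a grid cell (0.01 degrees is roughly 1.1 km)."""
    snap = lambda v: round(round(v / cell_deg) * cell_deg, 6)
    return (snap(lat), snap(lon))

def pseudonym(user_id: str, epoch: int) -> str:
    """Derive a short per-epoch pseudonym from a user identifier."""
    return hashlib.sha256(f"{user_id}:{epoch}".encode()).hexdigest()[:12]

print(obfuscate(-34.4054, 150.8784))  # (-34.41, 150.88)
```

A real deployment would combine such measures with the regulatory and policy strategies noted by Duckham [37]; obfuscation alone trades service quality for privacy, which is the degradation limitation noted later in Table 16.1.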

16.4.2 Need for Additional Regulatory Responses

Clarke and Wigan [31] examine the threats posed by location and tracking technologies, particularly those relating to privacy, stating that “[t]hose technologies are now well-established, yet they lack a regulatory framework.” A suitable regulatory framework for LBS (one that addresses privacy amongst other social and ethical challenges) may be built on numerous approaches, including the technical approaches described in Sect. 16.4.1. Other approaches are explored by Xu et al. [95] in their quasi-experimental survey of privacy challenges relevant to push versus pull LBS. The approaches include compensation (incentives), industry self-regulation, and government regulation strategies [95, p. 143]. According to Xu et al., these “intervention strategies” may have an impact on the privacy calculus in LBS [95, pp. 136–137]. Notably, their survey of 528 participants found that self-regulation has a considerable bearing on perceived risk for both push and pull services, whereas the effects of compensation and government regulation vary depending on the type of service. That is, compensation increases perceived benefit in the push but not the pull model and, similarly, government regulation reduces perceived privacy risk in the push-based model [95, p. 158].

It should be acknowledged that a preliminary step in seeking a solution to the privacy dilemma, addressing the identified social concerns, and proposing appropriate regulatory responses is to clearly identify and assess the privacy-invasive elements of LBS in a given context; Australia is used as the example in this instance. Possible techniques for identifying risks and implications, and consequently possible mitigation strategies, are a Privacy Impact Assessment (PIA) and novel models such as the framework of contextual integrity.

16.4.3 Privacy Impact Assessment (PIA)

A PIA can be defined as “a systematic process that identifies and evaluates, from the perspectives of all stakeholders, the potential effects on privacy of a project, initiative or proposed system or scheme, and includes a search for ways to avoid or mitigate negative privacy impacts” [29, 30]. The PIA tool, originally linked to technology and impact assessments [28, p. 125], is effectively a “risk management” technique that involves addressing both positive and negative impacts of a project or proposal, but with a greater focus on the latter [67, pp. 4–5].

PIAs were established and developed from 1995 to 2005, and possess a number of distinct qualities: a PIA is focused on a particular initiative, takes a forward-looking and preventative as opposed to retrospective approach, broadly considers the various aspects of privacy (i.e., privacy of person, personal behavior, personal communication, and personal data), and is inclusive in that it accounts for the interests of relevant entities [28, pp. 124–125]. In the Australian context, the development of PIAs can be traced in the work of Clarke [30], who provides an account of PIA maturity pre-2000, post-2000, and as at 2010.

16.4.4 Framework of Contextual Integrity

The framework of contextual integrity, introduced by [65], is an alternative approach that can be employed to assess whether LBS, as a socio-technical system, violates privacy and thus contextual integrity. An overview of the framework is provided in [65, p. 14]:

The central claim is that contextual integrity captures the meaning of privacy in relation to personal information; predicts people’s reactions to new technologies because it captures what we care about when we question, protest, and resist them; and finally, offers a way to carefully evaluate these disruptive technologies. In addition, the framework yields practical, step-by-step guidelines for evaluating systems in question, which it calls the CI Decision Heuristic and the Augmented CI Decision Heuristic.

According to Nissenbaum [65], the primary phases within the framework are: (1) explanation, which entails assessing a new system or practice in view of “context-relative informational norms” [65, p. 190], (2) evaluation, which involves “comparing altered flows in relation to those that were previously entrenched” [65, p. 190], and (3) prescription, a process based on evaluation, whereby if a system or practice is deemed “morally or politically problematic,” it has grounds for resistance, redesign, or being discarded [65, p. 191]. Within these phases are distinct stages: establish the prevailing context, determine key actors, ascertain what attributes are affected, establish changes in principles of transmission, and red flag if there are modifications in actors, attributes, or principles of transmission [65, pp. 149–150].
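The stages above can be sketched as a comparison between an entrenched information flow and a novel one, red-flagging any change in actors, attributes, or transmission principles. The field names and example flows below are illustrative assumptions, a loose rendering of the CI Decision Heuristic rather than a faithful implementation of Nissenbaum's framework:

```python
from dataclasses import dataclass

# Minimal sketch of the contextual integrity "red flag" stage: a novel
# information flow is compared against the entrenched norm of the
# prevailing context, and any changed element is flagged for evaluation.

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    attribute: str               # type of information, e.g. "location"
    transmission_principle: str  # e.g. "with consent", "sold"

def red_flags(entrenched: Flow, novel: Flow) -> list:
    """Return which elements of the novel flow deviate from the norm."""
    flags = []
    for field in ("sender", "recipient", "attribute",
                  "transmission_principle"):
        if getattr(entrenched, field) != getattr(novel, field):
            flags.append(field)
    return flags

# Hypothetical dementia-care example: location shared with a carer under
# consent versus the same attribute sold to an advertiser.
norm = Flow("patient", "carer", "location", "with consent")
new = Flow("patient", "advertiser", "location", "sold")
print(red_flags(norm, new))  # ['recipient', 'transmission_principle']
```

Any flagged element would then proceed to the evaluation and prescription phases; the sketch only mechanizes the detection of altered flows, not their moral assessment.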

The framework of contextual integrity and, similarly, PIAs are relevant to this study, and may be considered as valid tools for assessing the privacy-invasive or violating nature of LBS and justifying the need for some form of regulation. This is particularly pertinent as LBS present unique privacy challenges, given their reliance on knowing the location of the target. That is, the difficulty in maintaining location privacy is amplified due to the fact that m-commerce services and mobility in general, by nature, imply knowledge of the user’s location and preferences [40, p. 463]. Therefore, it is likely that there will always be a trade-off ranging in severity. Namely, one end of the privacy continuum will demand that stringent privacy mechanisms be implemented, while the opposing end will support and justify increased surveillance practices.

16.5 Challenges

16.5.1 Relationship Between Privacy, Security, Control and Trust

A common thread in discussions relating to the privacy and security implications of LBS throughout this chapter has been the interrelatedness of themes; notably, the manner in which a particular consideration is often at odds with other concerns. The trade-off between privacy/freedom and safety/security is a particularly prevalent exchange that must be considered in the use of many ICTs [36, p. 47]. In the case of LBS, it has been observed that the need for safety and security conflicts with privacy concerns, potentially resulting in contradictory outcomes depending on the nature of implementation. For example, while LBS facilitate security and timely assistance in emergency situations, they simultaneously have the potential to threaten privacy given the ability for LBS to be employed in tracking and profiling situations [18, p. 105]. According to Casal [18, p. 109], the conflict between privacy and security, and the lack of adequate regulatory frameworks, has a flow-on effect in that trust in ICTs is diminished. Trust is also affected in the family context, where tracking or monitoring activities result in a lack of privacy between family members [59, p. 436]. The underlying question, according to Mayer [59, p. 435], concerns the power struggle between those seeking privacy and those seeking information:

What will be the impact within families as new technologies shift the balance of power between those looking for privacy and those seeking surveillance and information?

Mayer’s [59] question alludes to the relevance of the theme of control, in that surveillance can be perceived as a form of control and influence. Therefore, it can be observed that inextricable linkages exist between several themes presented or alluded to throughout this chapter; notably privacy and security, but also the themes of control and trust. In summary, privacy protection requires security to be maintained, which in turn results in enhanced levels of control, leading to decreased levels of trust, which is a supplement to privacy [70, pp. 13–14]. The interrelatedness of themes is illustrated in Fig. 16.1.

Fig. 16.1: Relationship between control, trust, privacy, and security, after [70, p. 14]

It is thus evident that the idea of balance resurfaces, with the requirement to weigh multiple and competing themes and interests. This notion is not new with respect to location monitoring and tracking. For instance, Mayer [59, p. 437] notes, in the child tracking context, that there is the requirement to resolve numerous questions and challenges in a legal or regulatory sense, noting that “[t]he key is balancing one person’s need for privacy with another person’s need to know, but who will define this balancing point?” Issues of age, consent, and reciprocal monitoring are also significant. Existing studies on location disclosure amongst social relations afford the foundations for exploring the social and ethical challenges for LBS, whilst simultaneously appreciating technical considerations or factors. Refer to [5, 16, 32, 42, 43, 47, 62, 84, 87].

16.6 Conclusion

This chapter has provided an examination of privacy and security with respect to location-based services. There is a pressing need to ensure LBS privacy threats are not dismissed from a regulatory perspective. Dismissing them would introduce genuine dangers, such as psychological, social, cultural, scientific, economic, political, and democratic harm; dangers associated with profiling; increased visibility; publicly damaging revelations; and oppression [31]. Additionally, the privacy considerations unique to the “locational or mobile dimension” require educating the general public regarding disclosure, and increased transparency on the part of providers in relation to the collection and use of location information [11, p. 15]. Thus, in response to the privacy challenges associated with LBS, and based on current scholarship, this research recognizes the need for technological solutions, in addition to commitment and adequate assessment at the social and regulatory levels. Specifically, the privacy debate involves contemplation of privacy policies and regulatory frameworks, in addition to technical approaches such as obfuscation and maintaining anonymity [37, p. 7]. That is, privacy-related technical solutions must be allied with supportive public policy and socially acceptable regulatory structures.

For additional readings relevant to LBS and privacy, each of which includes a useful list of general references for further investigation, refer to: [17], on the privacy challenges of privacy-invasive geo-mashups, the inadequacy of information privacy laws, and potential solutions in the form of technological measures, social standards, and legal frameworks; [12], a report submitted to the Office of the Privacy Commissioner of Canada focused on mobile surveillance, its privacy dangers, and the legal consequences; and [57], a report to the Canadian Privacy Commissioner dealing with complementary issues associated with mobility, location technologies, and privacy.

Based on the literature presented throughout this chapter, a valid starting point in determining the privacy-invasive nature of specific LBS applications is to review and employ the available solution(s). These solutions or techniques are summarized in Table 16.1, in terms of the merits and benefits of each approach and the extent to which they offer means of overcoming or mitigating privacy-related risks. The selection of a particular technique is dependent on the context or situation in question. Once the risks are identified it is then possible to develop and select an appropriate mitigation strategy to reduce or prevent the negative implications of utilizing certain LBS applications. This chapter is intended to provide a review of scholarship in relation to LBS privacy and security, and should be used as the basis for future research into the LBS privacy dilemma, and related regulatory debate.

Table 16.1 Summary of solutions and techniques

Solution/Technique: Technological mechanisms

Merits:

• Provide location obfuscation and anonymity in required situations

• Myriad of solutions available depending on the level of privacy required

• In-built mechanisms requiring limited user involvement

• Unlike regulatory solutions, technological solutions encourage industry development

Limitations:

• Result in degradation of location quality/resolution

Solution/Technique: Regulatory mechanisms

Merits:

• Variety of techniques available, such as industry self-regulation and government legislation

• Can offer legal protection to individuals in defined situations/scenarios

Limitations:

• Can be limiting in terms of advancement of the LBS industry

Solution/Technique: Impact assessments, contextual frameworks, and internal policies

Merits:

• Provide a proactive approach to identifying privacy (and related) risks

• Used to develop suitable mitigation strategies

• Preventative and inclusive in nature

Limitations:

• Tend to be skewed in focus, concentrating primarily on negative implications

• Can be limiting in terms of advancement of the LBS industry


1. Abbas R, Michael K, Michael MG, Aloudat A (2011) Emerging forms of covert surveillance using GPS-enabled devices. J Cases Inf Technol (JCIT) 13(2):19–33

2. Aloudat A (2012) Privacy vs security in national emergencies. IEEE Technol Soc Mag Spring 2012:50–55

3. ALRC (2008) For your information: Australian privacy law and practice (ALRC Report 108). Accessed 12 Jan 2012

4. Andrejevic M (2007) iSpy: Surveillance and Power in the Interactive Era. University Press of Kansas, Lawrence

5. Anthony D, Kotz D, Henderson T (2007) Privacy in location-aware computing environments. Pervas Comput 6(4):64–72

6. Applewhite A (2002) What knows where you are? Personal safety in the early days of wireless. Pervas Comput 3(12):4–8

7. Attorney General’s Department (2012) Telecommunications interception and surveillance. Accessed 20 Jan 2012

8. Awad NF, Krishnan MS (2006) The personalization privacy paradox: an empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Q 30(1):13–28

9. Ayres G, Mehmood R (2010) Locpris: a security and privacy preserving location based services development framework. In: Setchi R, Jordanov I, Howlett R, Jain L (eds) Knowledge-based and intelligent information and engineering systems, vol 6279, pp 566–575

10. Bauer HH, Barnes SJ, Reichardt T, Neumann MM (2005) Driving the consumer acceptance of mobile marketing: a theoretical framework and empirical study. J Electron Commer Res 6(3):181–192

11. Bennett CJ (2006) The mobility of surveillance: challenges for the theory and practice of privacy protection. In: Paper prepared for the 2006 Meeting of the international communications association, Dresden Germany, June 2006, pp 1–20.

12. Bennett CJ, Crowe L (2005) Location-based services and the surveillance of mobility: an analysis of privacy risks in Canada. A report to the Office of the Privacy Commissioner of Canada, under the 2004–05 Contributions Program, June 2005.

13. Bennett CJ, Grant R (1999) Introduction. In: Bennett CJ, Grant R (eds) Visions of privacy: policy choices for the digital age. University of Toronto Press, Toronto, pp 3–16.

14. Beresford AR, Stajano F (2004) Mix zones: user privacy in location-aware services. In: Proceedings of the Second IEEE Annual conference on pervasive computing and communications workshops (PERCOMW’04) pp 127–131.

15. Brickhouse Security (2012) Lok8u GPS Child Locator. Accessed 9 Feb 2012

16. Brown B, Taylor AS, Izadi S, Sellen A, Kaye J, Eardley R (2007) Locating family values: a field trial of the whereabouts clock. In: UbiComp ‘07 Proceedings of the 9th international conference on Ubiquitous computing, pp 354–371.

17. Burdon M (2010) Privacy invasive geo-mashups: Privacy 2.0 and the limits of first generation information privacy laws. Univ Illinois J Law Technol Policy (1):1–50.

18. Casal CR (2004) Impact of location-aware services on the privacy/security balance. Info: J Policy Regul Strategy Telecommun Inf Media 6(2):105–111

19. Chen JV, Ross W, Huang SF (2008) Privacy, trust, and justice considerations for location-based mobile telecommunication services. Info 10(4):30–45

20. Cho G (2005) Geographic information science: mastering the legal issues. Wiley, Hoboken.

21. Clarke R (1997) Introduction to dataveillance and information privacy, and definitions of terms.

22. Clarke R (1999) Relevant characteristics of person-location and person-tracking technologies.

23. Clarke R (2001a) Introducing PITs and PETs: technologies affecting privacy.

24. Clarke R (2001) Person location and person tracking—technologies, risks and policy implications. Inf Technol People 14(2):206–231

25. Clarke R (2003b) Privacy on the move: the impacts of mobile technologies on consumers and citizens.

26. Clarke R (2006) What’s ‘Privacy’?

27. Clarke R (2007a) Chapter 3. What ‘Uberveillance’ is and what to do about it. In: Michael K, Michael MG (eds) The Second workshop on the social implications of national security (from Dataveillance to Uberveillance and the Realpolitik of the Transparent Society). University of Wollongong, IP Location-Based Services Research Program (Faculty of Informatics) and Centre for Transnational Crime Prevention (Faculty of Law), Wollongong, Australia, pp 27–46

28. Clarke R (2009) Privacy impact assessment: its origins and development. Comput Law Secur Rev 25(2):123–135

29. Clarke R (2010a) An evaluation of privacy impact assessment guidance documents.

30. Clarke R (2010b) Pias in Australia—A work-in-progress report.

31. Clarke R, Wigan M (2011) You are where you’ve been: the privacy implications of location and tracking technologies.

32. Consolvo S, Smith IE, Matthews T, LaMarca A, Tabert J, Powledge P (2005) Location disclosure to social relations: why, when, & what people want to share. In: CHI 2005(April), pp 2–7, Portland, Oregon, USA, pp. 81–90

33. Culnan MJ, Bies RJ (2003) Consumer privacy: balancing economic and justice considerations. J Soc Issues 59(2):323–342

34. Damiani ML, Bertino E, Perlasca P (2007) Data security in location-aware applications: an approach based on Rbac. Int. J. Inf Comput Secur 1(1/2):5–38

35. Davis DW, Silver BD (2004) Civil Liberties Vs. Security: public opinion in the context of the terrorist attacks on America. Am J Polit Sci 48(1):28–46

36. Dobson JE, Fisher PF (2003) Geoslavery. IEEE Technol Soc Mag 22(1):47–52

37. Duckham M (2008) Location privacy protection through spatial information hiding.$file/pvn_07_08_duckham.pdf

38. Duckham M (2010) Moving forward: location privacy and location awareness. In: SPRINGL’10 November 2, 2010, San Jose, CA, USA, pp 1–3

39. Duckham M, Kulik L (2006) Chapter 3. location privacy and location-aware computing. In: Drummond J, Billen R, Forrest D, Joao E (eds) Dynamic and Mobile Gis: investigating change in space and time. CRC Press, Boca Raton, pp 120.

40. Elliot G, Phillips N (2004) Mobile commerce and wireless computing systems. Pearson Education Limited, Great Britain 532 pp

41. FIDIS 2007, D11.5: The legal framework for location-based services in Europe.

42. Fusco SJ, Michael K, Aloudat A, Abbas R (2011) Monitoring people using location-based social networking and its negative impact on trust: an exploratory contextual analysis of five types of “Friend” Relationships. In: IEEE symposium on technology and society (ISTAS11), Illinois, Chicago, IEEE 2011

43. Fusco SJ, Michael K, Michael MG, Abbas R (2010) Exploring the social implications of location based social networking: an inquiry into the perceived positive and negative impacts of using LBSN between friends. In: 9th international conference on mobile business (ICMB2010), Athens, Greece, IEEE, pp 230–237

44. Giaglis GM, Kourouthanassis P, Tsamakos A (2003) Chapter IV. Towards a classification framework for mobile location-based services. In: Mennecke BE, Strader TJ (eds) Mobile commerce: technology, theory and applications. Idea Group Publishing, Hershey, US, pp 67–85

45. Gould JB (2002) Playing with fire: the civil liberties implications of September 11th. In: Public Administration Review, 62 (Special Issue: Democratic Governance in the Aftermath of September 11, 2001), pp 74–79

46. Gruteser M, Grunwald D (2003) Anonymous usage of location-based services through spatial and temporal cloaking. In: ACM/USENIX international conference on mobile systems, applications and services (MobiSys), pp 31–42

47. Iqbal MU, Lim S (2007) Chapter 16. Privacy implications of automated GPS tracking and profiling. In: Michael K, Michael MG (eds) From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (Workshop on the Social Implications of National Security, 2007) University of Wollongong, IP Location-Based Services Research Program (Faculty of Informatics) and Centre for Transnational Crime Prevention (Faculty of Law), Wollongong, pp 225–240

48. Jorns O, Quirchmayr G (2010) Trust and privacy in location-based services. Elektrotechnik & Informationstechnik 127(5):151–155

49. Junglas I, Spitzmüller C (2005) A research model for studying privacy concerns pertaining to location-based services. In: Proceedings of the 38th Hawaii international conference on system sciences, pp 1–10

50. Kaasinen E (2003) User acceptance of location-aware mobile guides based on seven field studies. Behav Inf Technol 24(1):37–49

51. Kaupins G, Minch R (2005) Legal and ethical implications of employee location monitoring. In: Proceedings of the 38th Hawaii international conference on system sciences, pp 1–10

52. Krumm J (2008) A survey of computational location privacy. Pers Ubiquit Comput 13(6):391–399

53. Küpper A, Treu G (2010) Next generation location-based services: merging positioning and web 2.0. In: Yang LT, Waluyo AB, Ma J, Tan L, Srinivasan B (eds) Mobile intelligence. Wiley Inc, Hoboken, pp 213–236

54. Landau R, Werner S (2012) Ethical aspects of using GPS for tracking people with dementia: recommendations for practice. Int Psychogeriatr 24(3):358–366

55. Leppäniemi M, Karjaluoto H (2005) Factors influencing consumers’ willingness to accept mobile advertising: a conceptual model. Int. J Mobile Commun 3(3):197–213

56. Loc8tor Ltd. 2011 (2012), Loc8tor Plus. Accessed 9 Feb 2012.

57. Lyon D, Marmura S, Peroff P (2005) Location technologies: mobility, surveillance and privacy (a Report to the Office of the Privacy Commissioner of Canada under the Contributions Program). The Surveillance Project, Queens Univeristy, Canada.

58. Mason RO (1986) Four ethcial challenges in the information age. MIS Q 10(1):4–12

59. Mayer RN (2003) Technology, families, and privacy: can we know too much about our loved ones? J Consum Policy 26:419–439

60. Michael K, Masters A (2006) The advancement of positioning technologies in defense intelligence. In: Abbass H, Essam D (eds) Applications of information systems to homeland security and defense. Idea Publishing Group, United States, pp 196–220

61. Michael K, McNamee A, Michael MG (2006) The emerging ethics of humancentric GPS tracking and monitoring. International conference on mobile business. IEEE Computer Society, Copenhagen, Denmark, pp 1–10

62. Michael K, McNamee A, Michael MG, Tootell H (2006) Location-based intelligence—modeling behavior in humans using GPS. IEEE international symposium on technology and society. IEEE, New York, United States, pp 1–8

63. Michael K, Clarke R (2012) Location privacy under dire threat as Uberveillance stalks the streets. In: Precedent (Focus on Privacy/FOI), vol 108, pp 1–8 (online version) & 24–29 (original article).

64. Neltronics 2012 (2012) Fleetfinder Pt2 Personal Tracker. Accessed 9 Feb 2012

65. Nissenbaum H (2010) Privacy in context: technology, policy, and the integrity of social life. Stanford Law Books, Stanford 288 pp

66. O’Connor PJ, Godar SH (2003) Chapter XIII. We know where you are: the ethics of LBS advertising. In: Mennecke BE, Strader TJ (eds) Mobile commerce: technology, theory and applications. Idea Group Publishing, Hershey, pp 245–261

67. Office of the Victorian Privacy Commissioner 2009 (2010) Privacy impact assessments: a single guide for the victorian public sector. Accessed 3 March 2010

68. Patel DP (2004) Should teenagers get Lojackedt against their will? An argument for the ratification of the United Nations convention on the rights of the child. Howard L J 47(2):429–470

69. Perusco L, Michael K (2005) Humancentric applications of precise location based services. IEEE international conference on e-business engineering. IEEE Computer Society, Beijing, China, pp 409–418

70. Perusco L, Michael K (2007) Control, trust, privacy, and security: evaluating location-based services. IEEE Technol Soc Mag 26(1):4–16

71. Perusco L, Michael K, Michael MG (2006) Location-based services and the privacy-security dichotomy. In: Proceedings of the 3rd international conference on mobile computing and ubiquitous networking, London, UK. Information Processing Society of Japan, pp. 91–98

72. Privacy International 2007, Overview of Privacy.[347]=x-347-559062. Accessed 3 Dec 2009

73. Quinn MJ (2006) Ethics for the information age, 2nd edn. Pearson/Addison-Wesley, Boston 484 pp

74. Raab CD (1999) Chapter 3. From balancing to steering: new directions for data protection. In: Bennett CJ, Grant R (eds) Visions of privacy: policy choices for the digital age. University of Toronto Press, Toronto, pp 68–93

75. Raper J, Gartner G, Karimi HA, Rizos C (2007) Applications of location-based services: a selected review. J Locat Based Serv 1(2):89–111

76. Richards NM, Solove DJ (2007) Privacy’s other path: recovering the law of confidentiality. Georgetown Law J 96:123–182

77. Schreiner K (2007) Where We At? Mobile phones bring GPS to the masses. IEEE Comput Graph Appl 2007:6–11

78. Sheng H, Fui-Hoon Nah F, Siau K (2008) An experimental study on ubiquitous commerce adoption: impact of personalization and privacy concerns. J Assoc Inf Syst 9(6):344–376

79. Smith GD (2006) Private eyes are watching you: with the implementation of the E-911 Mandate, Who will watch every move you make? Federal Commun Law J 58:705–726

80. Solove DJ (2006) A taxonomy of privacy. Univ Pennsylvania Law Rev 154(3):477–557

81. Solove DJ (2007) I’ve Got Nothing to Hide’ and other misunderstandings of privacy. San Diego Law Rev 44:745–772

82. Solove DJ (2008) Data mining and the security-liberty debate. Univ Chicago Law Rev 74:343–362

83. Steinfield C (2004) The development of location based services in mobile commerce. In: Priessl B, Bouwman H, Steinfield C (eds) Elife after the Dot.Com Bust., pp 1–15

84. Tang KO, Lin J, Hong J, Siewiorek DP, Sadeh N (2010) Rethinking location sharing: exploring the implications of social-driven vs. purpose-driven location sharing. In: UbiComp 2010, Sep 26–Sep 29, Copenhagen, Denmark, pp 1–10

85. Tatli EI, Stegemann D, Lucks S (2005) Security challenges of location-aware mobile business. In: The Second IEEE international workshop on mobile commerce and services, 2005. WMCS ‘05, pp 1–10

86. Tootell H (2007) The social impact of using automatic identification technologies and location-based services in national security. PhD Thesis, School of Information Systems and Technology, Informatics, University of Wollongong

87. Tsai JY, Kelley PG, Drielsma PH, Cranor LF, Hong J, Sadeh N (2009) Who’s Viewed You? the impact of feedback in a mobile location-sharing application. In: CHI 2009, April 3–9, 2009, Boston, Massachusetts, USA, pp 1–10

88. Wang S, Min J, Yi BK (2008) Location based services for mobiles: technologies and standards (Presentation). In: IEEE ICC 2008, Beijing, pp 1–123

89. Warren S, Brandeis L (1890) The right to privacy. Harvard Law Rev 4:193–220

90. Westin AF (1967) Privacy and freedom. Atheneum, New York 487 pp

91. Westin AF (2003) Social and political dimensions of privacy. J Social Issues 59(2):431–453

92. Wigan M, Clarke R (2006) Social impacts of transport surveillance. Prometheus 24(4):389–403

93. Wright T (2004) ‘Security. Privacy and Anonymity’, crossroads 11:1–8

94. Xu H, Luo X, Carroll JM, Rosson MB (2011) The personalization privacy paradox: an exploratory study of decision making process for location-aware marketing. Decis Support Syst 51(2011):42–52

95. Xu H, Teo HH, Tan BYC, Agarwal R (2009) The role of push-pull technology in privacy calculus: the case of location-based services. J Manage Inf Syst 26(3):135–173

Citation: Abbas R., Michael K., Michael M.G. (2015) "Location-Based Privacy, Protection, Safety, and Security." In: Zeadally S., Badra M. (eds) Privacy in a Digital, Networked World. Computer Communications and Networks. Springer, Cham, DOI:

Uberveillance and the Social Implications of Microchip Implants: Preface

Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies

In addition to common forms of spatial data such as satellite imagery and street views, researchers in emerging automatic identification technologies are exploring the use of microchip implants to further track an individual’s personal data, identity, location, and condition in real time.

Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies presents case studies, literature reviews, ethnographies, and frameworks supporting the emerging technologies of RFID implants, while also highlighting the current and predicted social implications of human-centric technologies. This book is essential for professionals and researchers engaged in the development of these technologies, and it offers insight and support to everyday readers inquiring about embedded micro technologies.


Katina Michael, University of Wollongong, Australia

M.G. Michael, University of Wollongong, Australia


Uberveillance can be defined as an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices into the human body. These embedded technologies can take the form of traditional pacemakers, radio-frequency identification (RFID) tag and transponder implants, smart swallowable pills, nanotechnology patches, multi-electrode array brain implants, and even smart dust to mention but a few form factors. To an extent, head-up displays like electronic contact lenses that interface with the inner body (i.e. the eye which sits within a socket) can also be said to be embedded and contributing to the uberveillance trajectory, despite their default categorisation as body wearables.

Uberveillance has to do with the fundamental who (ID), where (location), and when (time) questions in an attempt to derive why (motivation), what (result), and even how (method/plan/thought). Uberveillance can be a predictive mechanism for a person’s expected behaviour, traits, likes, or dislikes based on historical fact; or it can be about real-time measurement and observation; or it can be something in between. The inherent problem with uberveillance is that facts do not always add up to truth, and predictions or interpretations based on uberveillance are not always correct, even if there is direct visual evidence available (Shih, 2013). Uberveillance is more than closed circuit television feeds, or cross-agency databases linked to national identity cards, or biometrics and ePassports used for international travel. Uberveillance is the sum total of all these types of surveillance and the deliberate integration of an individual’s personal data for the continuous tracking and monitoring of identity, location, condition, and point of view in real-time (Michael & Michael, 2010b).

In its ultimate form, uberveillance has to do with more than automatic identification and location-based technologies that we carry with us. It has to do with under-the-skin technology that is embedded in the body, such as microchip implants. Think of it as Big Brother on the inside looking out. It is like a black box embedded in the body which records and gathers evidence, and in this instance transmits specific measures wirelessly back to base. This implant is virtually meaningless without the hybrid network architecture that supports its functionality: making the person a walking online node. We are referring here to the lowest common denominator, the smallest unit of tracking – presently a tiny chip inside the body of a human being. But it should be stated that electronic tattoos and nano-patches that are worn on the body can also certainly be considered mechanisms for data collection in the future. Whether wearable or bearable, it is the intent and objective which remains important: the notion of “people as sensors.” The gradual emergence of the so-called human cloud, that cloud computing platform which allows for the internetworking of human “points of view” using wearable recording technology (Nolan, 2013), will also be a major factor in the proactive profiling of individuals (Michael & Michael, 2011).


This volume aims to equip the general public with much needed educational information about the technological trajectory of RFID implants through exclusive primary interviews, case studies, literature reviews, ethnographies, surveys, and frameworks supporting emerging technologies. It was in 1997 that bioartist Eduardo Kac (Figure 1) implanted an RFID chip in his leg in a live performance titled Time Capsule in Brazil (Michael & Michael, 2009). The following year, in an unrelated experiment, Kevin Warwick had an implant injected into his left arm (Warwick, 2002; K. Michael, 2003). By 2004, the VeriChip Corporation had its VeriChip product approved by the Food and Drug Administration (FDA) (Michael, Michael & Ip, 2008). Since that point, there has been a great deal of misinformation and confusion surrounding the microchip implant, but also a lot of build-up on the part of the proponents of implantables.

Figure 1. 

Eduardo Kac implanting an RFID chip in his left leg using an animal injector kit on 11 November 1997. Courtesy Eduardo Kac.

Radio-frequency identification (RFID) is not an inherently secure technology; in fact, it can be argued that it is just the opposite (Reynolds, 2004). Why someone would wish to implant something beneath the skin for non-medical reasons is therefore quite surprising, despite the touted advantages. One of the biggest issues, not commonly discussed in public forums, is the increasing number of people who suffer from paranoid or delusional thoughts with respect to enforced implantation or implantation by stealth. We have already encountered significant problems in the health domain, where, for example, a clinical psychologist can no longer readily discount the claims of patients who identify with having been implanted, or tracked and monitored using inconspicuous forms of ID. This will be especially true in the era of smart dust, almost invisible to the naked eye, which has yet to fully arrive. Civil libertarians, religious advocates, and so-named conspiracy theorists will not be the only groups to discuss the real potential of microchipping people; for this reason, the discussion will move into the public policy forum, inclusive of all stakeholders in the value chain.

Significantly, this book will also provide researchers and professionals who are engaged in the development or implementation of emerging services with an awareness of the social implications of human-centric technologies. These implications cannot be ignored by operational stakeholders, such as engineers and the scientific elite, if we hope to enact long-term beneficial change with new technologies that will have a positive impact on humanity. We cannot adopt the attitude that says: let us see how far we can go with technology and worry about the repercussions later. To do so would be short-sighted and would ignore the importance of socio-technical sustainability. Ethics can appear irrelevant to the engineer innovating in a market-driven and research-funded environment. To be sure, there are notable exceptions where a middle-of-the-road approach is pursued, notably in medical and educational contexts. Engineering ethics do, of course, exist, though they are unfortunately often denigrated and misinterpreted as discourses on “goodness” or appeals to the categorical imperative. Nevertheless, industry as a whole has a social responsibility to consumers at large: to ensure that it has considered what the misuse of its innovations might mean in varied settings and scenarios, to ensure that there are limited, if any, health effects from the adoption of particular technologies, and to ensure that adverse event reports are maintained by a centralised administrative office with recognised oversight (e.g. an independent ombudsman).

Equally, government agencies must respond with adequate legislative and regulatory controls to ensure that there are consequences for the misuse of new technologies. It is not enough, for example, for a company like Google to come out and openly “bar” applications for its Glass product, such as biometric recognition and pornography, especially when it is well aware that these are two application areas for which its device will be exploited. Google is trying to maintain its brand by stating clearly that it is not affiliated with negative uses of its product, knowing full well that such a proclamation is largely meaningless and by no means legally binding. And herein lies one of the great quandaries: few would deny that Google’s search and page-rank algorithms have also made us beneficiaries of some extraordinary inventiveness.

According to a survey by CAST, one in five people reported that they want to see a Google Glass ban (Nolan, 2013). Even so, the marketing and design approach nowadays, broadly evident across the universal corporate spectrum, seems to be:

We will develop products and make money from them, no matter how detrimental they may be to society. We will push the legislative/regulatory envelope as much as we can, until someone says: Stop, you’ve gone too far! The best we can do as a developer is place a warning on the packaging, just like the notices on cigarette packets; if people choose to do the wrong thing, our liability as a company is removed completely, because we provided a prior warning and only ever envisaged beneficial uses. If our product is used for bad ends, that is not our problem; the criminal justice system can deal with that. And if non-users of our technology are entangled in a given controversy, then our best advice to them is to realign the asymmetry by adopting our product.


This edited volume came together over a three-year period. We formed our editorial board and sent out the call for book chapters soon after the IEEE conference we hosted at the University of Wollongong, the International Symposium on Technology and Society (ISTAS), on 7-10 June 2010, sponsored by IEEE’s Society on the Social Implications of Technology (SSIT). The symposium was dedicated to emerging technologies, and a great many papers were presented from a wide range of views in the debate over the microchipping of people. It was a highlight to see this sober conversation happening between experts coming at the debate from different perspectives, different cultural contexts, and different lifeworlds. A great deal of the spirit of that conversation has taken root in this book. The audio-visual proceedings aired on the Australian Broadcasting Corporation’s much respected 7.30 Report and received wide coverage in major media outlets. The significance lies not in the press coverage but in the fact that the topic is now relevant to the everyday person. Citizens will need to make a personal decision: do I receive an implant or not? Do I carry an identifier on the surface of my skin or not? Do I succumb to 24x7 monitoring by being fully “connected” to the grid or not?

Individuals who were present at ISTAS10 and were also key contributors to this volume include keynote speakers Professor Rafael Capurro, Professor Roger Clarke, Professor Kevin Warwick, Dr Katherine Albrecht, Dr Mark Gasson, Mr Amal Graafstra, and attendees Professor Marcus Wigan, Associate Professor Darren Palmer, Dr Ian Warren, Dr Mark Burdon, and Mr William A. Herbert. Each of these presenters has been an instrumental voice in the discussion on Embedded Surveillance Devices (ESDs) in living things (animals and humans), and on tracking and monitoring technologies. They have dedicated a portion of their professional lives to investigating the possibilities and the effects of a world filled with microchips, beyond those in desktop computers and high-tech gadgetry. They have also been able to connect the practice of an Internet of Things (IoT) not only from machine-to-machine but to nested forms of machine-to-people-to-machine interactions, and to consider the implications. When one is surrounded by such passionate voices, it is difficult not to be inspired onward to such an extensive work.

A further backdrop to the book is the annual workshops we began in 2006 on the Social Implications of National Security, which have had ongoing sponsorship from the Australian Research Council’s Research Network for a Secure Australia (RNSA). Following ISTAS10, we held a workshop on the “Social Implications of Location-Based Services” at the University of Wollongong’s Innovation Campus and were fortunate to have Professor Rafael Capurro, Professor Andrew Goldsmith, Professor Peter Eklund, and Associate Professor Ulrike Gretzel present their work. Worthy of note, the workshop proceedings, which are available online, have been recognised as major milestones for the Research Network in official government documentation. For example, the Department of the Prime Minister and Cabinet (PM&C), among other high-profile agencies in Australia and abroad, have requested copies of the works for their libraries.

In 2012, the topic of our annual RNSA workshop was “Sousveillance and the Social Implications of Point of View Technologies in Law Enforcement,” held at the University of Sydney. Professor Kevin Haggerty keynoted that event, speaking on the theme “Monitoring within and beyond the Police Organisation”; he later graciously contributed the foreword to this book, as well as presenting on biomimetics at the University of Wollongong. The workshop again brought exceptional voices together to discuss audio-visual body-worn recording technologies, including Professor Roger Clarke, Professor David Lyon, Associate Professor Nick O’Brien, Associate Professor Darren Palmer, Dr Saskia Hufnagel, Dr Jann Karp, Mr Richard Kay, Mr Mark Lyell, and Mr Alexander Hayes.

In 2013, the theme of the National Security workshop was “Unmanned Aerial Vehicles: Pros and Cons in Policing, Security and Everyday Life,” held at Ryerson University in Canada, with presentations from Professor Andrew Clement, Associate Professor Avner Levin, Mr Ian Hannah, and Mr Matthew Schroyer. It was the first time in the workshop’s eight-year history that it was held outside Australia. While drones are not discussed at length in this volume, they demonstrate one scenario of the fulfilment of uberveillance. Case in point: the drone killing machine signifies the importance of a remote-controlled macro-to-micro view. First, something needs to scan the skies to look down on the ground; then, once the target has been identified and tracked, it can be extinguished with ease. One need only look at the Israel Defence Force’s pinpoint strike on Ahmed Jabari, the head of the Hamas military wing, to note the intrinsic link between the macro and micro levels of detail (K. Michael, 2012). How much “easier” could this kind of strike have been if the GPS chipset in the mobile phone carried by an individual communicated with a chip implant embedded in the body? RFID can be a tracking mechanism, despite the claims of some researchers that it has only a 10 cm read range. That may well be the case for a typical wall-mounted reader, but a mobile phone can act as a continuous reader if in range, as can a set of traffic lights, lampposts, or even wifi access nodes, depending on the on-board technology and the power of the reader equipment being used. A telltale example of the potential risks can be seen in the rollout of REAL ID driver’s licenses in the USA since the enactment of the REAL ID Act of 2005.

In 2013, it was also special to meet some of our book contributors for the first time at ISTAS13, held at the University of Toronto on the theme of “Wearable Computers and Augmediated Reality in Everyday Life,” among them Professor Steve Mann, Associate Professor Christine Perakslis, and Dr Ellen McGee. As so often happens when a thematic interest area brings people together from multiple disciplines, an organic group of interdisciplinary voices has begun to form. The holistic nature of this group is especially stimulating in sharing its diverse perspectives. Building upon these initial conversations, and ensuring they continue as the social shaping of technology occurs in the real world, is paramount.

As we brought together this edited volume, we struck a very fruitful collaboration with Dr Jeremy Pitt, Reader at Imperial College London, contributing a large chapter to his disturbingly wonderful edited volume This Pervasive Day: The Potential and Perils of Pervasive Computing (2012). Jeremy’s book is a considered forecast of the social impact of new technologies, inspired by Ira Levin’s This Perfect Day (1970). Worthy of particular note is our participation in the session entitled “Heaven and Hell: Visions for Pervasive Adaptation” at the European Future Technologies Conference and Exhibition (Paechter, 2011). What is important to draw from this is that pervasive computing will indeed have a divisive impact on its users: for some it will offer incredible benefits, while for others it will be debilitating in its everyday effect. We hope, similarly, to have remained objective in this edited volume, offering viewpoints from diverse positions on the topic of humancentric RFID. This remained one of our principal aims and fundamental goals.

Questioning technology’s trajectory is extremely important, especially when technology no longer has a medical, corrective, or prosthetic application but one based on entertainment and convenience services. What happens to us when we embed a device that we cannot remove of our own accord? Is this fundamentally different to wearing or lugging something around? Without a doubt, it is! And what of those technologies presently being developed in laboratories across the world for microscopic forms of ID and pinhole video capture? What will be their impact on our society with respect to covert surveillance? Indeed, the line between overt and covert surveillance is blurring: it becomes indistinguishable when we are surrounded by surveillance and are inside the thick fog itself. What also becomes completely misconstrued is the notion that there is logic in an equation that trades off privacy against convenience. There is no trade-off. The two variables cannot be discussed on equal footing: you cannot give a little of your privacy away for convenience and hope to have it still intact thereafter. No amount of monetary or value-based recompense will correct this asymmetry. We would be hoodwinking ourselves if we were to be “bought out” by such a business model. There is no consolation for privacy loss. We cannot be made to feel better after giving away a part of ourselves. It is not like scraping one’s knee against the concrete with the expectation that the scab will heal after a few days. Privacy loss is to be perpetually bleeding, perpetually exposed.

Additionally, while writing this book we also guest-edited a number of journal special issues in 2010 and 2011, all of which informed the direction of the edited volume as a whole. These included special issues on “RFID – A Unique Radio Innovation for the 21st Century” in the Proceedings of the IEEE (together with Rajit Gadh, George Roussos, George Q. Huang, Shiv Prabhu, and Peter Chu); “The Social Implications of Emerging Technologies” in Case Studies in Information Technology with IGI (together with Dr Roba Abbas); “The Social and Behavioral Implications of Location-Based Services” in the Journal of Location Based Services with Routledge; and “Surveillance and Uberveillance” in IEEE Technology and Society Magazine. In 2013, Katina also guest-edited a volume of IEEE Computer on “Big Data: Discovery, Productivity and Policy” with Keith W. Miller. If there are any doubts about the holistic work supporting uberveillance, we hope that these internationally recognised journals, among others associated with our guest editorship, indicate the thoroughness and robustness of our approach, and the recognition that others have generously extended to us for the incremental work we have completed.

It should also not go without notice that since 2006 the term uberveillance has been internationally embedded into dozens of graduate and undergraduate technical and non-technical courses across the globe. From the University of New South Wales and Deakin University to the University of Salford, and from the University of Malta right through to the University of Texas at El Paso and Western Illinois University, we are extremely encouraged by correspondence from academics and researchers noting the term’s insertion into outlines, chosen textbooks, lecture schedules, major assessable items, recommended readings, and research training. These citations have acted to inform and to interrogate the subjects that connect us. That our research conclusions resonate with you, without necessarily implying that you have always agreed with us, is indeed substantial.


Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies follows on from a 2009 IGI Premier Reference source book titled Automatic Identification and Location-Based Services: from Bar Codes to Chip Implants. This volume consists of 6 sections, and 18 chapters with 7 exclusive addendum primary interviews and panels. The strength of the volume is in its 41 author contributions. Contributors have come from diverse professional and research backgrounds in the field of emerging technologies, law and social policy, including information and communication sciences, administrative sciences and management, criminology, sociology, law and regulation, philosophy, ethics and policy, government, and political science, among others. Moreover, the book will provide insights and support to everyday citizens who may be questioning the trajectory of micro and miniature technologies or the potential for humans to be embedded with electro-magnetic devices. Body wearable technologies are also directly relevant, as they will act as complementary if not supplementary innovations to various forms of implants.

Section 1 is titled “The Veillances” with a specific background context of uberveillance. This section inspects the antecedents of surveillance, Roger Clarke’s dataveillance thirty years on, Steve Mann’s sousveillance, and MG Michael’s uberveillance. These three neologisms are inspected under the umbrella of the “veillances” (from the French veiller, stemming from the Latin vigilare, meaning to “keep watch”) (Oxford Dictionary, 2012).

In 2009, Katina Michael and MG Michael presented a plenary paper titled “Teaching Ethics in Wearable Computing: the Social Implications of the New ‘Veillance’” (K. Michael & Michael, 2009d). It was the first time that surveillance, dataveillance, sousveillance, and uberveillance were considered together at a public gathering. As a specialist term, it should be noted that “veillance” was first used in an important blog post exploring equiveillance by Ian Kerr and Steve Mann (2006), in which “the valences of veillance” were briefly described. In contrast to Kerr and Mann (2006), Michael and Michael (2006) were pondering the intensification of a state of uberveillance through increasingly pervasive technologies that can provide details from the big picture view right down to the minuscule personal details.

Alexander Hayes (2010) pictorialized this representation using the triquetra, also known as the trinity knot and Celtic triangle (Figure 2), and describes its application to uberveillance in the educational context in chapter 3. Hayes uses mini cases to illustrate the importance of understanding the impact of body-worn video across sectors. He concludes by warning that commercial entities should not engage in “techno-evangelism” when selling to the education sector, but should rather maintain the purposeful intent of the use of point of view and body worn video recorders within the specific educational context. Hayes also emphasises the urgent need for serious discussion on the socio-ethical implications of wearable computers.

Figure 2. 

Uberveillance triquetra (Hayes, 2010). See also Michael and Michael (2007).


By 2013, K. Michael had published proceedings from the International Symposium on Technology and Society (ISTAS13) using the veillance concept as a theme, with numerous papers submitted to the conference exploring veillance perspectives (Ali & Mann, 2013; Hayes, et al., 2013; K. Michael, 2013; Minsky, et al., 2012; Paterson, 2013). Two other crucial references to veillance include “in press” papers by Michael and Michael (2013) and Michael, Michael, and Perakslis (2014). But what does veillance mean? And how is it understood in different contexts? What does it mean to be watched by a CCTV camera; to have one’s personal details deeply scrutinized; to watch another; or to watch oneself?

Dataveillance (see Interview 1.1), conceived by Roger Clarke of the Australian National University (ANU) in 1988, “is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons” (Clarke, 1988a). According to the Oxford Dictionary, dataveillance is summarized as “the practice of monitoring the online activity of a person or group” (Oxford Dictionary, 2013). It is hard to believe that this term was introduced a quarter of a century ago, in response to government agency data matching initiatives linking taxation records and social security benefits, and to commercial data mining practices. At the time it was a powerful statement in response to the Australia Card proposal in 1987 (Clarke, 1988b), which was never implemented by the Hawke Government, despite the Howard Government’s attempt to introduce an Access Card almost two decades later in 2005 (Australian Privacy Foundation, 2005). The same issues ensue today, only on a more momentous magnitude, with far more consequences and advanced capabilities in analytics, data storage, and converging systems.

Sousveillance (see chapter 2), conceived by Steve Mann of the University of Toronto in 2002 but practiced since at least 1995, is the “recording of an activity from the perspective of a participant in the activity” (Wordnik, 2013). However, its initial introduction into the literature came in the inaugural publication of the Surveillance and Society journal in 2003, with a meaning of “inverse surveillance” as a counter to organizational surveillance (Mann, Nolan, & Wellman, 2003). Mann prefers to interpret sousveillance as under-sight, which maintains integrity, contra to surveillance as over-sight, which equates to hypocrisy (Mann, 2004).

Whereas dataveillance is the systematic use of personal data systems in the monitoring of people, sousveillance is the inverse of monitoring people; it is the continuous capture of personal experience. For example, dataveillance might include the linking of someone’s tax file number with their bank account details and communications data. Sousveillance, on the other hand, is a voluntary act of logging what one might see around them as they move through the world. Surveillance is thus considered watching from above, whereas sousveillance is considered watching from below. In contrast, dataveillance is the monitoring of a person’s online activities, which presents the individual with numerous social dangers (Clarke, 1988a).

Uberveillance (see chapter 1), conceived by MG Michael of the University of Wollongong (UOW) in 2006, is commonly defined as: “ubiquitous or pervasive electronic surveillance that is not only ‘always on’ but ‘always with you,’ ultimately in the form of bodily invasive surveillance” (ALD, 2010). The term entered the Macquarie Dictionary of Australia officially in 2008 as “an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body” (Macquarie, 2009, p. 1094). The concern over uberveillance is directly related to the misinformation, misinterpretation, and information manipulation of citizens’ data. We can strive for omnipresence through real-time remote sharing and monitoring, but we will never achieve simple omniscience (Michael & Michael, 2009).

Uberveillance is a compound word, conjoining the German über meaning over or above with the French veillance. The concept is very much linked to Friedrich Nietzsche’s vision of the Übermensch, who is a man with powers beyond those of an ordinary human being, like a super-man with amplified abilities (Honderich, 1995; M. G. Michael & Michael, 2010b). Uberveillance is analogous to embedded devices that quantify the self and measure indiscriminately. For example, heart, pulse, and temperature sensor readings emanating from the body in binary bits wirelessly, or even through amplified eyes such as inserted contact lens “glass” that might provide visual display and access to the Internet or social networking applications.

Uberveillance brings together all forms of watching from above and from below, from machines that move to those that stand still, from animals and from people, acquired involuntarily or voluntarily using obtrusive or unobtrusive devices (Figure 3) (K. Michael, et al., 2010). The network infrastructure underlies the ability to collect data direct from the sensor devices worn by the individual, and big data analytics ensures an interpretation of the unique behavioral traits of the individual implying more than just predicted movement, but intent and thought (K. Michael & Miller, 2013).

Figure 3. From surveillance to uberveillance (K. Michael, et al., 2009b)

It has been said that uberveillance is that part of the veillance puzzle that brings together the sur, data, and sous to an intersecting point (Stephan, et al., 2012). In uberveillance, there is the “watching” from above component (sur), there is the “collecting” of personal data and public data for mining (data), and there is the watching from below (sous) which can draw together social networks and strangers, all coming together via wearable and implantable devices on/in the human body. Uberveillance can be used for good, but we contend that independent of its application for non-medical purposes, it will always have an underlying control factor of power and authority (Masters & Michael, 2005; Gagnon, et al., 2013).

Section 2 is dedicated to applications of humancentric implantables in both the medical and non-medical space. Chapter 4 is written by professor of cybernetics, Kevin Warwick at the University of Reading and his senior research fellow, Dr Mark Gasson. In 1998, Warwick was responsible for Cyborg 1.0, and later Cyborg 2.0 in 2002. In chapter 4, Warwick and Gasson describe implants, tracking and monitoring functionality, Deep Brain Stimulation (DBS), and magnetic implants. They are pioneers in the implantables arena but after initially investigating ID and location interactivity in a closed campus environment using humancentric RFID approaches, Warwick has begun to focus his efforts on medical solutions that can aid the disabled, teaming up with Professor Tipu Aziz, a neurosurgeon from the University of Oxford. He has also explored person-to-person interfaces using the implantable devices for bi-directional functionality.

Following on from the Warwick and Gasson chapter are two interviews and a modified presentation transcript demonstrating three different kinds of RFID implant applications. Interview 4.1 is with Mr Serafin Vilaplana, the former IT Manager at the Baja Beach Club, who implemented the RFID implants for club patronage in Barcelona, Spain. The RFID implants were used to attract VIP patrons, perform basic access control, and make electronic payments. Katina Michael had the opportunity to interview Serafin after being invited to attend a Women in Engineering (WIE) Conference in Spain in mid-2009 organised by the Georgia Institute of Technology. It was on this connected journey that Katina Michael also met with Mark Gasson for the very first time, during a one day conference at the London School of Economics, and they discussed a variety of incremental innovations in RFID.

In late May 2009, Mr Gary Retherford, a Six Sigma black belt specialising in Security, contacted Katina to be formally interviewed after coming across the Michaels’ work on the Internet. Retherford was responsible for instituting the employee access control program using the VeriChip implantable device in 2006. Interview 4.2 presents a candid discussion between Retherford and K. Michael on the risk versus reward debate with respect to RFID implantables. While Retherford can see the potential for ID tokens being embedded in the body, Michael raises some very important matters with respect to security questions inherent in RFID. Plainly, Michael argues that if we invite technology into the body, then we are inviting a whole host of computer “connectedness” issues (e.g. viruses, denial-of-service-attacks, server outages, susceptibility to hacking) into the human body as well. Retherford believes that these are matters that can be overcome with the right technology, and predicts a time that RFID implant maintenance may well be as straightforward as visiting a Local Service Provider (LSP).

Presentation 4.3 was delivered at IEEE ISTAS10 by Mr Amal Graafstra. This chapter presents the Do-It-Yourselfer perspective, as opposed to getting an implant that someone else uses in their operations or commercial applications. Quite possibly, the DIY culture may have an even greater influence on the diffusion of RFID implantables than the commercial arena. DIYers are usually circumspect about commercial RFID implant offerings which they cannot customise, or for which they need an implant injected into a pre-defined bodily space which they cannot physically control. Graafstra’s published interview in 2009, as well as his full-length paper on the RFID subculture with K. Michael and M.G. Michael (2010), still stand as the most informative dialogue on the motivations of DIYers. Recently, in 2012, Graafstra began his own company touting the benefits of RFID implantables within the DIY/hacking community. Notably, a footer disclaimer statement reads: “Certain things sold at the Dangerous Things Web shop are dangerous. You are purchasing, receiving, and using the items you acquired here at your own peril. You're a big boy/girl now, you can make your own decisions about how you want to use the items you purchase. If this makes you uncomfortable, or you are unable to take personal responsibility for your actions, don't order!”

Chapter 5 closes section 2, and is written by Maria Burke and Chris Speed on applications of technology with an emphasis on memory, knowledge browsing, knowledge recovery, and knowledge sharing. This chapter reports on outcomes from research in the Tales of Things Electronic Memory (TOTeM) large grant in the United Kingdom. Burke and Speed take a fresh perspective on how technology is influencing societal and organisational change by focusing on Knowledge Management (KM). The chapter does not explicitly address RFID; rather, it explores technologies already widely diffused under the broad category of tagging systems, such as quick response codes, essentially 2D barcodes. The authors also do not fail to acknowledge that tagging systems rely on underlying infrastructure, such as wireless networks and the Internet more broadly, through devices we carry such as smartphones. In the context of this book, one might also look at this chapter with a view of how memory aids might be used to support an ageing population, or those suffering with Alzheimer’s disease, for example.

Section 3 is about the adoption of RFID tags and transponders by various demographics. Christine Perakslis examines the willingness to adopt RFID implants in chapter 6. She looks specifically at how personality factors play a role in the acceptance of uberveillance. She reports on a preliminary study, as well as comparing outcomes from two separate studies in 2005 and 2010. In her important findings, she discusses RFID implants as lifesaving devices, their use for trackability in case of an emergency, their potential to increase safety and security, and their ability to speed up airport checkpoints. Yet the purpose of the Perakslis study is not to identify implantable applications as such, but to investigate differences between and among personality dimensions and levels of willingness toward implanting an RFID chip in the human body. Specifically, Perakslis examines the levels of willingness toward the uberveillance trajectory using the Myers-Briggs Type Indicator (MBTI).

In Interview 6.1, Katina Michael converses with a 16-year-old male from Campbelltown, NSW, about tattoos, implants, and amplification. The interview is telling with respect to the prevalence of the “coolness” factor and group dynamics in youth. Though tattoos have traditionally been used to identify with an affinity group, we learn that implants would only resonate with youth if they were functional in an advanced manner, beyond mere identification purposes. This interview demonstrates the intrinsic connection between technology and the youth sub-culture, which will more than likely be among the early adopters of implantable devices, yet at the same time remain highly susceptible to peer group pressure and brand driven advertising.

In chapter 7, Randy Basham considers the potential for RFID chip technology use in the elderly for surveillance purposes. The chapter not only focuses on adoption of technology but emphasises the value conflicts that RFID poses to the elderly demographic. Among these conflicts are resistance to change, technophobia, matters of informed consent, the risk of physical harm, Western religious opposition, concerns over privacy and GPS tracking, and transhumanism. Basham, who sits on the Human Services Information Technology Applications (HUSITA) board of directors, provides major insights into resistance to change with respect to humancentric RFID. It is valuable to read Basham’s article alongside the earlier interview transcript of Gary Retherford, to consider how new technologies like RFID implantables may be diffused widely into society. Minors and the elderly are particularly dependent demographics in this space and require special attention. It is pertinent to note that the protests by CASPIAN, led by Katherine Albrecht, blocked the chipping of elderly patients suffering with Alzheimer’s Disease in 2007 (Lewan, 2007; ABC, 2007). If one contemplates the trajectory for technology crossover in the surveillance atmosphere, one might envisage an implantable solution with a Unique Lifetime Identifier (ULI) which follows people from cradle to grave and becomes the fundamental componentry that powers human interactions.

Section 4 draws on laws, directives, regulations and standards with respect to challenges arising from the practice of uberveillance. Chapter 8 investigates how the collection of DNA profiles and samples in the United Kingdom is fast becoming uncontrolled. The National DNA Database (NDNAD) of the UK has more than 8% of the population registered, with much higher proportions for minority groups, such as the Black Ethnic Minority (BEM). Author Katina Michael argues that such practices drive further adoption of what one could term national security technologies. However, developments and innovations in this space are fraught with ethical challenges. The risks associated with familial searching, as overlaid with medical research, further compound the possibility that people may carry a microchip implant with some form of DNA identifier linked to a Personal Health Record (PHR). This is particularly pertinent when considering the European Union (EU) decision to step up cross-border police and judicial cooperation in EU countries in criminal matters, allowing for the exchange of DNA profiles between the authorities responsible for the prevention and investigation of criminal offences (see Prüm Treaty).

Chapter 9 presents outcomes from a large Australian Research Council-funded project on the night time economy in Australia. In this chapter, ID scanners and uberveillance are considered in light of trade-offs between privacy and crime prevention. Does instituting ID scanners prevent or minimise crime in particular hot spots, or do they simply cause a chilling effect and trigger the redistribution of crime to new areas? Darren Palmer and his co-authors demonstrate how ID scanners are becoming a normalized precondition of entry into one Australian nighttime economy. They demonstrate that the implications of technological determinism amongst policy makers, police and crime prevention theories need to be critically assessed, and that the value of ID scanners needs to be reconsidered in context. In chapter 10, Jann Karp writes on global tracking systems in Australian interstate trucking. She investigates driver perspectives and attitudes on the modern practice of fleet management, and on the practice of tracking vehicles and what that means to truck drivers. Whereas chapter 9 investigates the impact of emerging technology on consumers, chapter 10 gives an employee perspective. While Palmer et al. question the effectiveness of ID scanners in pubs and clubs, Karp poses the challenging question: is locational surveillance of drivers in the trucking industry helpful, or is it a hindrance?

Chapter 11, written by Mark Burdon et al., surveys legislative developments in tracking in relation to “Do Not Track” initiatives. The chapter focuses on online behavioral profiling, in contrast to chapter 8 which focuses on DNA profiling and sampling. US legislative developments are compared with those in the European Union, New Zealand, Canada and Australia. Burdon et al. provide an excellent analysis of the problems. Recommendations for ways forward are presented in a bid for members of our communities to be able to provide meaningful and educated consent, but also for the appropriate regulation of transborder information flows. This is a substantial piece of work, and one of the most informative chapters on Do Not Track initiatives available in the literature.

Chapter 12 by Kyle Powys Whyte and his nine co-authors from Michigan State University completes section 4 with a paper on emerging standards in the livestock industry. The chapter looks at the benefits of nanobiosensors in livestock traceability systems but does not neglect to raise the social and ethical dimensions related to standardising this industry. Whyte et al. argue that future development of nanobiosensors should include processes that engage diverse actors in ways that elicit productive dialogue on the social and ethical contexts. A number of practical recommendations are presented at the conclusion of the chapter, such as the role of “anticipatory governance” as linked to Science and Technology Studies (STS). One need only consider the findings of this priming chapter, and how these results may be applied in light of the relationship between non-humancentric RFID and humancentric RFID chipping. Indeed, the opening sentence of the chapter points to the potential: “uberveillance of humans will emerge through embedding chips within nonhumans in order to monitor humans.”

Section 5 contains the critical chapter dedicated to the health implications of microchipping living things. In chapter 13, Katherine Albrecht (2010) uncovers significant problems related to microchip-induced cancer in mice and rats. Eleven clinical studies published in oncology and toxicology journals between 1996 and 2006 are examined in detail in this chapter. Albrecht goes beyond the prospective social implications of microchipping humans when she presents the physical adverse reactions to implants in animals. Albrecht concludes her chapter with solid recommendations for policy-makers, veterinarians, pet owners, and oncology researchers, among others. When the original report was first launched, Todd Lewan (2007) of the Associated Press published an article in the Washington Post titled “Chip Implants Linked to Animal Tumors.” Albrecht is to be commended for this pioneering study, choosing to focus on health related matters which will increasingly become relevant in the adoption of invasive and pervasive technologies.

The sixth and final section addresses the emerging socio-ethical implications of RFID tags and transponders in humans. Chapter 14 addresses some of the underlying philosophical aspects of privacy within pervasive surveillance. Alan Rubel chooses to investigate the commercial arena, penal supervision, and child surveillance in this book chapter. He asks: what is the potential for privacy loss? The intriguing and difficult question that Rubel attempts to answer is whether privacy losses (and gains) are morally salient. Rubel posits that determining whether privacy loss is morally weighty, or of sufficient moral weight to give rise to a right to privacy, requires an examination of reasons why privacy might be valuable. He describes both instrumental value and intrinsic value and presents a brief discussion on surveillance and privacy value.

Panel 14.1 is a slightly modified transcription of the debate over microchipping people recorded at IEEE ISTAS10. This distinguished panel is chaired by lawyer William Herbert. Panel members included Rafael Capurro, who was a member of the European Group on Ethics in Science and New Technologies (EGE), and who co-authored the landmark Opinion piece published in 2005, “On the ethical aspects of ICT implants in the human body.” Capurro, who is the director of the International Center for Information Ethics, was able to provide a highly specialist ethical contribution to the panel. Mark Gasson and Amal Graafstra, both of whom are RFID implantees, introduced their respective expert testimonies. Chair of the Australian Privacy Foundation Roger Clarke and CASPIAN director Katherine Albrecht represented the privacy and civil liberties positions in the debate. The transcript demonstrates the complexity and multi-layered dimensions surrounding humancentric RFID, and the divisive nature of the issues at hand: whether to microchip people, or not.

In chapter 15 we are introduced to the development of brain computer interfaces, brain machine interfaces and neuromotor prostheses. Here Ellen McGee examines sophisticated technologies that are used for more than just identification purposes. She writes of brain implants that are surgically implanted and affixed, as opposed to simple implantable devices that are injected in the arm with a small injector kit. These advanced technologies will allow for radical enhancement and augmentation. It is clear from McGee’s fascinating work that these kinds of leaps in human function and capability will cause major ethical, safety, and justice dilemmas. McGee clearly articulates the need for discourse and regulation in the broad field of neuroprosthetics. She especially emphasises the importance of privacy and autonomy. McGee concludes that there is an urgent need for debate on these issues, and questions whether or not it is wise to pursue such irreversible developments.

Ronnie Lipschutz and Rebecca Hester complement the work of McGee, going beyond the possibilities to the actual assumption that the human will assimilate into the cellular society. They proclaim “We are the Borg!”, and in doing so point to a future scenario where not only bodies are read, but minds as well. They describe “re(b)organization” as that new phenomenon occurring in our society today. Chapter 16 is strikingly challenging for this reason, and makes one speculate about what or who are the driving forces behind this cyborgization process. This chapter will also prove of special interest for those who are conversant with Cartesian theory. Lipschutz and Hester conclude by outlining the very real need for a legal framework to deal with hackers who penetrate biodata systems and alter individuals’ minds and bodies, or who may even kill a person by tampering with or reprogramming their medical device remotely.

Interview 16.1 directly alludes to this cellular society. Videographer Jordan Brown interviews Katina Michael on the notion of the “screen bubble.” What is the screen culture doing to us? Rather than looking up as we walk around, we divert our attention to the screen in the form of a smart phone, iPad, or even a digital wearable glass device. We look down increasingly, and not at each other. We peer into lifeless windows of data, rather than peer into one another’s eyes. What could this mean and what are some of the social implications of this altering of our natural gaze? The discussion between Brown and K. Michael is applicable to not just the implantables space, but to the wearables phenomenon as well.

The question of faith in a data driven and information-saturated society is adeptly addressed by Marcus Wigan in the Epilogue. Wigan calls for a new moral imperative. He asks the very important question in the context of “who are the vulnerable now?” What is the role of information ethics, and where should targeted efforts be made to address these overarching issues which affect all members of society: from children to the elderly, from the employed to the unemployed, from those in positions of power to the powerless? It is the emblematic conclusion to a book on uberveillance.


ABC. (2007). Alzheimer's patients lining up for microchip. ABCNews. Retrieved from

Albrecht, K. (2010). Microchip-induced tumors in laboratory rodents and dogs: A review of the literature 1990–2006. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS10). Wollongong, Australia: IEEE.

Ali, A., & Mann, S. (2013). The inevitability of the transition from a surveillance-society to a veillance-society: Moral and economic grounding for sousveillance. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Australian Privacy Foundation. (2005). Human services card. Australian Privacy Foundation. Retrieved 6 June 2013, from

Clarke, R. (1988a). Information technology and dataveillance. Communications of the ACM, 31(5), 498–512. 10.1145/42411.42413

Clarke, R. (1988b). Just another piece of plastic in your wallet: The ‘Australian card’ scheme. ACM SIGCAS Computers and Society, 18(1), 7–21. 10.1145/47649.47650

Gagnon, M., Jacob, J. D., & Guta, A. (2013). Treatment adherence redefined: A critical analysis of technotherapeutics. Nursing Inquiry, 20(1), 60–70. 10.1111/j.1440-1800.2012.00595.x

Graafstra, A. (2009). Interview 14.2: The RFID do-it-yourselfer. In Michael, K., & Michael, M. G. (Eds.), Innovative automatic identification and location based services: From bar codes to chip implants (pp. 427–449). Hershey, PA: IGI Global.

Graafstra, A., Michael, K., & Michael, M. G. (2010). Social-technical issues facing the humancentric RFID implantee sub-culture through the eyes of Amal Graafstra. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS10). Wollongong, Australia: IEEE.

Hayes, A. (2010). Uberveillance (triquetra). Retrieved 6 May 2013, from

Hayes, A., Mann, S., Aryani, A., Sabbine, S., Blackall, L., Waugh, P., & Ridgway, S. (2013). Identity awareness of research data in veillance and social computing. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Kerr, I., & Mann, S. (2006). Exploring equiveillance. ID TRAIL MIX. Retrieved 26 September 2013 from

Levin I. (1970). This perfect day: A novel. New York: Pegasus.

Lewan, T. (2007, September 8). Chip implants linked to animal tumors. Washington Post. Retrieved from

Macquarie. (2009). Uberveillance. In S. Butler (Ed.), Macquarie dictionary (5th ed.). Sydney, Australia: Sydney University.

Mann, S. (2004). Sousveillance: Inverse surveillance in multimedia imaging. In Proceedings of the 12th Annual ACM International Conference on Multimedia. New York, NY: ACM.

Mann S. Nolan J. Wellman B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments.Surveillance & Society, 1(3), 331–355.

Masters, A., & Michael, K. (2005). Humancentric applications of RFID implants: The usability contexts of control, convenience and care. In Proceedings of the Second IEEE International Workshop on Mobile Commerce and Services. Munich, Germany: IEEE Computer Society.

Michael K. (2003). The automatic identification trajectory. In LawrenceE.LawrenceJ.NewtonS.DannS.CorbittB.ThanasankitT. (Eds.), Internet commerce: Digital models for business. Sydney, Australia: John Wiley & Sons.

Michael, K. (2012). Israel, Palestine and the benefits of waging war through Twitter. The Conversation. Retrieved 22 November 2012, from

Michael K. (2013a). High-tech lust.IEEE Technology and Society Magazine, 32(2), 4–5. 10.1109/MTS.2013.2259652

Michael, K. (Ed.). (2013b). Social implications of wearable computing and augmediated reality in every day life. In Proceedings of IEEE Symposium on Technology and Society. Toronto, Canada: IEEE.

Michael, K., McNamee, A., & Michael, M. G. (2006). The emerging ethics of humancentric GPS tracking and monitoring. In Proceedings of International Conference on Mobile Business. Copenhagen, Denmark: IEEE Computer Society.

Michael K. Michael M. G. (Eds.). (2007). From dataveillance to überveillance and the realpolitik of the transparent society. Wollongong, Australia: Academic Press.

Michael K. Michael M. G. (2009a). Innovative automatic identification and location-based services: From bar codes to chip implants. Hershey, PA: IGI Global. 10.4018/978-1-59904-795-9

Michael, K., & Michael, M. G. (2009c). Predicting the socioethical implications of implanting people with microchips. PerAda Magazine. Retrieved from

Michael, K., & Michael, M. G. (2009d). Teaching ethics in wearable computing: The social implications of the new ‘veillance’. EduPOV.Retrieved June 18, from

Michael K. Michael M. G. (2010). Implementing namebers using implantable technologies: The future prospects of person ID. In PittJ. (Ed.), This pervasive day: The potential and perils of pervasive computing (pp. 163–206). London: Imperial College London.

Michael K. Michael M. G. (2011). The social and behavioral implications of location-based services.Journal of Location-Based Services, 5(3-4), 121–137. 10.1080/17489725.2011.642820

Michael K. Michael M. G. (2013). No limits to watching?Communications of the ACM, 56(11), 26-28.10.1145/2527187

Michael K. Michael M. G. Abbas R. (2009b). From surveillance to uberveillance (Australian Research Council Discovery Grant Application). Wollongong, Australia: University of Wollongong.

Michael, K., Michael, M. G., & Ip, R. (2008). Microchip implants for humans as unique identifiers: A case study on VeriChip. In Proceedings of Conference on Ethics, Technology, and Identity. Delft, The Netherlands: Delft University of Technology.

Michael K. Michael M. G. Perakslis C. (2014). Be vigilant: There are limits to veillance. In PittJ. (Ed.), The computer after me. London: Imperial College Press.

Michael K. Miller K. W. (2013). Big data: New opportunities and new challenges.IEEE Computer, 46(6), 22–24. 10.1109/MC.2013.196

Michael K. Roussos G. Huang G. Q. Gadh R. Chattopadhyay A. Prabhu S. (2010). Planetary-scale RFID Services in an age of uberveillance.Proceedings of the IEEE, 98(9), 1663–1671. 10.1109/JPROC.2010.2050850

Michael M. G. (2000). For it is the number of a man.Bulletin of Biblical Studies, 19, 79–89.

Michael M. G. Michael K. (2009). Uberveillance: Microchipping people and the assault on privacy.Quadrant, 53(3), 85–89.

Michael M. G. Michael K. (2010). Towards a state of uberveillance.IEEE Technology and Society Magazine, 29(2), 9–16. 10.1109/MTS.2010.937024

Minsky, M. (2013). The society of intelligent veillance. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Nolan, D. (2013, June 7). The human cloud. Monolith. Retrieved from

Oxford Dictionary. (2012). Dataveillance. Retrieved 6 May 2013, from

Paechter B. Pitt J. Serbedzijac N. Michael K. Willies J. Helgason I. (2011). Heaven and hell: Visions for pervasive adaptation. In Fet11 essence. Budapest, Hungary: Elsevier. 10.1016/j.procs.2011.12.025

Paterson, N. (2013). Veillances: Protocols & network surveillance. In Proceedings of IEEE International Symposium on Technology and Society(ISTAS13). Toronto, Canada: IEEE.

Pitt J. (Ed.). (2012). This pervasive day: The potential and perils of pervasive computing. London: Imperial College London.

Pitt J. (2014). The computer after me. London: Imperial College Press.

Reynolds, M. (2004). Despite the hype, microchip implants won't deliver security. Gartner. Retrieved 6 May 2013, from

Rodotà, S., & Capurro, R. (2005). Ethical aspects of ICT implants in the human body. Opinion of the European Group on Ethics in Science and New Technologies to the European Commission, 20.

Shih, T. K. (2013). Video forgery and motion editing. In Proceedings of International Conference on Advances in ICT for Emerging Regions. ICT.

Stephan K. D. Michael K. Michael M. G. Jacob L. Anesta E. (2012). Social implications of technology: Past, present, and future.Proceedings of the IEEE, 100(13), 1752–1781. 10.1109/JPROC.2012.2189919

(1995). Superman. InHonderichT. (Ed.), Oxford companion to philosophy. Oxford, UK: Oxford University Press.

(2010). Uberveillance. InALD (Ed.), Australian law dictionary. Oxford, UK: Oxford University Press.

Warwick K. (2002). I, cyborg. London: Century.

Wordnik. (2013). Sousveillance. Retrieved 6 June 2013, from

Social Implications of Technology: The Past, the Present, and the Future


The social implications of a wide variety of technologies are the subject matter of the IEEE Society on Social Implications of Technology (SSIT). This paper reviews the SSIT's contributions since the Society's founding in 1982, and surveys the outlook for certain key technologies that may have significant social impacts in the future. Military and security technologies, always of significant interest to SSIT, may become more autonomous with less human intervention, and this may have both good and bad consequences. We examine some current trends such as mobile, wearable, and pervasive computing, and find both dangers and opportunities in these trends. We foresee major social implications in the increasing variety and sophistication of implant technologies, leading to cyborgs and human-machine hybrids. The possibility that the human mind may be simulated in and transferred to hardware may lead to a transhumanist future in which humanity redesigns itself: technology would become society.

SECTION I. Introduction

“Scientists think; engineers make.” Engineering is fundamentally an activity, as opposed to an intellectual discipline. The goal of science and philosophy is to know; the goal of engineering is to do something good or useful. But even in that bare-bones description of engineering, the words “good” and “useful” have philosophical implications.

Because modern science itself has existed for only 400 years or so, the discipline of engineering in the sense of applying scientific knowledge and principles to the satisfaction of human needs and desires is only about two centuries old. But for such a historically young activity, engineering has probably done more than any other single human development to change the face of the material world.

It took until the mid-20th century for engineers to develop the kind of self-awareness that leads to thinking about engineering and technology as they relate to society. Until about 1900, most engineers felt comfortable in a “chain-of-command” structure in which the boss—whether it be a military commander, a corporation, or a wealthy individual—issued orders that were to be carried out to the best of the engineer's technical ability. Fulfillment of duty was all that was expected. But as the range and depth of technological achievements grew, engineers, philosophers, and the public began to realize that we had all better take some time and effort to think about the social implications of technology. That is the purpose of the IEEE Society on Social Implications of Technology (SSIT): to provide a forum for discussion of the deeper questions about the history, connections, and future trends of engineering, technology, and society.

This paper is not focused on the history or future of any particular technology as such, though we will address several technological issues in depth. Instead, we will review the significant contributions of SSIT to the ongoing worldwide discussion of technology and society, and how technological developments have given rise to ethical, political, and social issues of critical importance to the future. SSIT is the one society in IEEE where engineers and allied professionals are encouraged to be introspective—to think about what they are doing, why they are doing it, and what effects their actions will have. We believe the unique perspective of SSIT enables us to make a valuable contribution to the panoply of ideas presented in this Centennial Special Issue of the Proceedings of the IEEE.



SECTION II. The Past

A. Brief History of SSIT

SSIT as a technical society in IEEE was founded in 1982, after a decade as the Committee on Social Responsibility in Engineering (CSRE). In 1991, SSIT held its first International Symposium on Technology and Society (ISTAS) in Toronto, ON, Canada. Since 1996, the Symposium has been held annually, with venues intentionally located outside the continental United States every few years in order to increase international participation.

SSIT total membership was 1705 as of December 2011. Possibly because SSIT does not focus exclusively on a particular technical discipline, it is rare that SSIT membership is a member's primary connection to IEEE. As SSIT's parent organization seeks ways to increase its usefulness and relevance to the rapidly changing engineering world of the 21st century, SSIT will both chronicle and participate in the changes taking place in engineering and in society as a whole. For a more detailed history of the first 25 years of SSIT, see [1].

B. Approaches to the Social Implications of Technology

In the historical article referred to above [1], former SSIT president Clint Andrews remarked that there are two distinct intellectual approaches that one can take with regard to questions involving technology and society. The CSRE and the early SSIT followed what he calls the “critical science” approach, which “tends to focus on the adverse effects of science and technical change.” Most IEEE societies are organized around a particular set of technologies. The underlying assumption of many in these societies is that these particular technologies are beneficial, and that the central issues to be addressed are technical, e.g., having to do with making the technologies better, faster, and cheaper. Andrews viewed this second trend, “technological optimism,” as somewhat neglected by SSIT in the past, and expressed the hope that a more balanced approach might attract a larger audience to the organization's publications and activities. It is important to note, however, that from the very beginning, SSIT has called for a greater emphasis on the development of beneficial technology such as environmentally benign energy sources and more efficient electrical devices.

In considering technology in its wider context, issues that go unquestioned in a purely technical forum may become open to question. Technique A may be more efficient and a fraction of the cost of technique B in storing data with similar security provisions, but what if a managed offshore shared storage solution is not the best thing to do under a given set of circumstances? The question of whether A or B is better technologically (and economically) is thus subsumed in the larger question of whether and why the entire technological project is going to benefit anyone, whom it may benefit, and whom it may harm. The fact that opening up a discussion to wider questions sometimes leads to answers that cast doubt on the previously unquestioned goodness of a given enterprise is probably behind Andrews' perception that, on balance, the issues joined by SSIT have predominantly fallen into the critical-science camp. Just as no one expects the dictates of conscience to be in complete agreement with one's instinctive desires, a person seeking unalloyed technological optimism in the pages or discussions hosted by SSIT will probably be disappointed. But the larger aim is to reach conclusions about technology and society that most of us will be thankful for some day, if not today. Another aim is to ensure that we bring issues to light and propose ways forward to safeguard against negative effects of technologies on society.

C. Major Topic Areas of SSIT

In this section, we will review some (but by no means all) topics that have become recurring themes over the years in SSIT's quarterly peer-reviewed publication, the IEEE Technology & Society Magazine. The articles cited are representative only in the sense that they fall into categories that have been dealt with in depth, and are not intended to be a “best of” list. These themes fall into four broad categories: 1) war, military technology (including nuclear weapons), and security issues, broadly defined; 2) energy technologies, policies and related issues: the environment, sustainable development, green technology, climate change, etc.; 3) computers and society, information and communications technologies (ICT), cybersystems, cyborgs, and information-driven technologies; and 4) groups of people who have historically been underprivileged, unempowered, or otherwise disadvantaged: Blacks, women, residents of developing nations, the handicapped, and so on. Education and healthcare also fit in the last category because the young and the ill are in a position of dependence on those in power.

1. Military and Security Issues

Concern about the Vietnam War was a strong motivation for most of the early members of the Committee on Social Responsibility in Engineering, the predecessor organization of SSIT. The problem of how and even whether engineers should be involved in the development or deployment of military technology has continued to appear in some form throughout the years, although the end of the Cold War changed the context of the discussion. This category goes beyond formal armed combat if one includes technologies that tend to exert state control or monitoring on the public, such as surveillance technologies and the violation of privacy by various technical means. In the first volume of the IEEE Technology & Society Magazine published in 1982, luminaries such as Adm. Bobby R. Inman (ret.) voiced their opinions about Cold War technology [2], and the future trend toward terrorism as a major player in international relations was foreshadowed by articles such as “Technology and terrorism: privatizing public violence,” published in 1991 [3]. Opinions voiced in the Magazine on nuclear technology ranged from Shanebrook's 1999 endorsement of a total global ban on nuclear weapons [4] to Andrews' thorough review of national responses to energy vulnerability, in which he pointed out that France has developed an apparently safe, productive, and economical nuclear-powered energy sector [5]. In 2009, a special section of five articles appeared on the topic of lethal robots and their implications for ethical use in war and peacekeeping operations [6]. And in 2010, the use of information and communication technologies (ICT) in espionage and surveillance was addressed in a special issue on “Überveillance,” defined by authors M. G. Michael and K. Michael as the use of electronic means to track and gather information on an individual, together with the “deliberate integration of an individual's personal data for the continuous tracking and monitoring of identity and location in real time” [7].

2. Energy and Related Technologies and Issues

From the earliest years of the Society, articles on energy topics such as alternative fuels appeared in the pages of the IEEE Technology & Society Magazine. A 1983 article on Brazil's then-novel effort to supplement imported oil with alcohol from sugarcane [8] presaged today's controversial U.S. federal mandate for the ethanol content in motor fuels. The Spring 1984 issue hosted a debate on nuclear power generation between H. M. Gueron, director of New York's Con Edison Nuclear Coal and Fuel Supply division at the time [9], and J. J. MacKenzie, a senior staff scientist with the Union of Concerned Scientists [10]. Long before greenhouse gases became a household phrase bandied about in debates between Presidential candidates, the Magazine published an article examining the need to increase the U.S.'s peak electrical generating capacity because the increase in average temperature due to increasing atmospheric carbon dioxide would increase the demand for air conditioning [11]. The larger implications of global warming apparently escaped the attention of the authors, focused as they were on the power-generating needs of the state of Minnesota. By 1990, the greenhouse effect was of sufficient concern to show up on the legislative agendas of a number of nations, and although Cruver attributed this to the “explosion of doomsday publicity,” he assessed the implications of such legislation for future energy and policy planning [12]. Several authors in a special issue on the social implications of systems concepts viewed the Earth's total environment in terms of a complex system in 2000 [13]. The theme of ISTAS 2009 was the social implications of sustainable development, and this theme was addressed in six articles in the resulting special issue of the IEEE Technology & Society Magazine for Fall 2010.
The record of speculation, debate, forecasting, and analysis sampled here shows that not only has SSIT carried out its charter by examining the social implications of energy technology and related issues, but also it has shown itself a leader and forerunner in trends that later became large-scale public debates.

3. Computing, Telecommunications, and Cyberspace

Fig. 1. BRLESC-II computer built by U.S. Army personnel for use at the Ballistics Research Lab, Aberdeen Proving Grounds between about 1967 and 1978, A. V. Kurian at console. Courtesy of U.S. Army Photos.

In the early years of SSIT, computers were primarily huge mainframes operated by large institutions (Fig. 1). But with the personal computer revolution and especially the explosion of the Internet, SSIT has done its part to chronicle and examine the history, present state, and future trends of the hardware, software, human habits and interactions, and the complex of computer and communications technologies that are typically subsumed under the acronym of ICT.

As we now know, the question of intellectual property has been vastly complicated by the ready availability of peer-to-peer software, high-speed network connections, and legislation passed to protect such rights. In a paper published in 1998, Davis addressed the question of protection of intellectual property in cyberspace [14]. As the Internet grew, so did the volume of papers on all sorts of issues it raised, from the implications of electronic profiling [15] to the threats and promises of facial recognition technology [16]. One of the more forward-looking themes addressed in the pages of the Magazine came in 2005 with a special issue on sustainable pervasive computing [17]. This issue provides an example of how both the critical science and the technological optimism themes cited by Andrews above can be brought together in a single topic. And to show that futuristic themes are not shirked by the IEEE Technology and Society Magazine authors, in 2011 Clarke speculated in an article entitled “Cyborg rights” on the limits and problems that may come as people physically merge with increasingly advanced hardware (implanted chips, sensory enhancements, and so on) [18].

4. Underprivileged Groups

Last but certainly not least, the pages of the IEEE Technology & Society Magazine have hosted articles inspired by the plight of underprivileged peoples, broadly defined. This includes demographic groups such as women and ethnic minorities and those disadvantaged by economic issues, such as residents of developing countries. While the young and the ill are not often formally recognized as underprivileged in the conventional sense, in common with other underprivileged groups they need society's help in order to survive and thrive, in the form of education and healthcare, respectively. An important subset of education is the theme of engineering ethics, a subject of vital interest to many SSIT members and officials since the organization's founding.

In its first year, the Magazine carried an article on ethical issues in decision making [19]. A special 1998 issue on computers and the Internet as used in the K-12 classroom explored these matters in eight focused articles [20]. The roles of ethics and professionalism in the personal enjoyment of engineering were explored by Florman (author of the book The Introspective Engineer) in an interview with the Magazine's managing editor Terri Bookman in 2000 [21]. An entire special issue was devoted to engineering ethics in education the following year, after changes in the U.S. Accreditation Board for Engineering and Technology's policies made it appear that ethics might receive more attention in college engineering curricula [22].

The IEEE Technology & Society Magazine has hosted many articles on the status of women, both as a demographic group and as a minority in the engineering profession. Articles and special issues on themes involving women have on occasion been the source of considerable controversy, even threatening the organization's autonomy at one point [1, p. 9]. In 1999, ISTAS was held for the first time in conjunction with two other IEEE entities: the IEEE Women in Engineering Committee and the IEEE History Center. The resulting special issue that came out in 2000 carried articles as diverse as the history of women in the telegraph industry [23], the challenges of being both a woman and an engineering student [24], and two articles on technology and the sex industry [25], [26].

Engineering education in a global context was the theme of a Fall 2005 special issue of the IEEE Technology and Society Magazine, and education has been the focus of several special issues and ISTAS meetings over the years [27]–[29]. The recent development termed “humanitarian engineering” was explored in a special issue only two years ago, in 2010 [30]. Exemplified by the U.S.-based Engineers without Borders organization, these engineers pursue projects, and sometimes careers, based not only on profit and market share, but also on the degree to which they can help people who might not otherwise benefit from their engineering talents.

SECTION III. The Present

Fig. 2. Cow bearing an Australian National Livestock Identification System (NLIS) RFID tag on its ear. The cow's identity is automatically detected as it goes through the drafting gates and the appropriate feed is provided for the cow based on historical data on its milk yields. Courtesy of Adam Trevarthen.

Emerging technologies that will act to shape the next few years are complex in their makeup with highly meshed value chains that resemble more a process or service than an individual product [31]. At the heart of this development is convergence: convergence in devices, convergence in applications, convergence in content, and convergence in infrastructure. The current environment is typified by the move toward cloud computing solutions and Web 2.0 social media platforms with ubiquitous access via a myriad of mobile or fixed devices, some of which will be wearable on people and animals (Fig. 2) or embedded in systems (e.g., vehicles and household appliances).

Simultaneous with these changes is the emergence of web services that may or may not require a human operator for decision making in a given business process, reliance upon data streams from automatic identification devices [e.g., radio-frequency identification (RFID) tags], the accuracy and reliability of location-based services [e.g., using Global Positioning Systems (GPS)], and condition monitoring techniques (e.g., using sensors to measure temperature or other physiological data). Most of this new technology will be invisibly located in miniaturized semiconductors, which are set to reach such economies of scale that technology evangelists commonly note that every single living and nonliving thing will come equipped with a chip “on board.”

Fig. 3. Business woman checking in for an interstate trip using an electronic ticket sent to her mobile phone. Her phone also acts as a mobile payment mechanism and has built-in location services features. Courtesy of NXP Semiconductors 2009.

The ultimate vision of a Web of Things and People (WoTaP)—smart homes using smart meters, smart cars using smart roads, smart cities using smart grids—is one where pervasive and embedded systems will play an active role toward sustainability and renewable energy efficiency. The internetworked environment will need to be facilitated by a fourth-generation mobility capability which will enable even higher amounts of bandwidth to the end user as well as seamless communication and coordination by intelligence built into the cloud. Every smart mobile transaction will be validated by a precise location and linked back to a subject (Fig. 3).
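As a purely illustrative sketch, the kind of record such a location-validated smart transaction might generate can be expressed in a few lines; the schema, field names, and values below are assumptions for discussion, not any deployed standard:

```python
# Hypothetical record linking a smart transaction back to a subject,
# a precise location, and a time. Schema and values are invented.

from dataclasses import dataclass

@dataclass
class SmartTransaction:
    subject_id: str   # the person or device the event is linked back to
    service: str      # e.g., smart meter read, road toll, mobile payment
    lat: float        # validating latitude
    lon: float        # validating longitude
    timestamp: str    # ISO 8601 time of the transaction

tx = SmartTransaction("user-4711", "mobile payment", -34.4054, 150.8784,
                      "2012-05-01T09:30:00")
print(tx)
```

The point of the sketch is that identity, place, and time travel together in every such record, which is precisely what makes the aggregate so revealing.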

In the short term, some of the prominent technologies that will impact society will be autonomous computing systems with built-in ambient intelligence which will amalgamate the power of web services and artificial intelligence (AI) through multiagent systems, robotics, and video surveillance technologies (e.g., even the use of drones) (Fig. 4). These technologies will provide advanced business and security intelligence. While these systems will lead to impressive uses in green initiatives and in making direct connections between people and dwellings, people and artifacts, and even people and animals, they will require end users to give up personal information related to identity, place, and condition to be drawn transparently from smart devices.

Fig. 4. A facial recognition system developed by Argus Solutions in Australia. Increasingly facial recognition systems are being used in surveillance and usually based on video technology. Digital images captured from video or still photographs are compared with other precaptured images. Courtesy of Argus Solutions 2009.

The price of all of this will be that very little remains private any longer. While the opportunities that present themselves with emerging technologies are enormous with a great number of positive implications for society—for instance, a decrease in the number of traffic accidents and fatalities, a reduction in the carbon emission footprint by each household, greater social interconnectedness, etc.—ultimately these gains too will be susceptible to limitations. Who the designated controller is and what they will do with the acquired data is something we can only speculate about. We return then, to the perennial question of “who will guard the guards themselves”: Quis custodiet ipsos custodes? [32]

A. Mobile and Pervasive Computing

In our modern world, data collection from many of our most common activities begins from the moment we step out our front door in the morning until we go to sleep at night. In addition to near-continual data collection, we have become a society of people that voluntarily broadcasts to the world a great deal of personal information. Vacation photos, major life events, and trivialities such as where we are having dinner to our most mundane thoughts, all form part of the stream of data through which we electronically share our inner lives. This combination of the data that is collected about us and the data that is freely shared by us could form a breathtakingly detailed picture of an individual's life, if it could ever all be collected in one place. Most of us would consider ourselves fortunate that most of this data was historically never correlated and is usually highly anonymized. However, in general, it is becoming easier to correlate and deanonymize data sets.
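How easily "anonymized" data sets can be re-linked is worth making concrete. The following minimal sketch of a linkage attack joins two releases on shared quasi-identifiers (ZIP code, date of birth, sex); every record, field name, and value is invented for illustration:

```python
# Minimal linkage-attack sketch: match quasi-identifier tuples across
# two data sets to re-identify "anonymized" records. All data invented.

def link(records_a, records_b, quasi_ids=("zip", "dob", "sex")):
    """Pair up records whose quasi-identifier tuples match across sets."""
    index = {}
    for rec in records_a:
        key = tuple(rec[q] for q in quasi_ids)
        index.setdefault(key, []).append(rec)
    matches = []
    for rec in records_b:
        key = tuple(rec[q] for q in quasi_ids)
        for candidate in index.get(key, []):
            matches.append((candidate, rec))
    return matches

# A named public record and a de-identified sensitive record re-link:
voter_roll = [{"name": "J. Doe", "zip": "2500", "dob": "1970-01-01", "sex": "F"}]
hospital   = [{"diagnosis": "flu", "zip": "2500", "dob": "1970-01-01", "sex": "F"}]
print(link(voter_roll, hospital))  # the diagnosis is tied back to a name
```

The attack needs no cryptography and no special access, only two data sets that share a handful of seemingly innocuous attributes.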

1. Following Jane Doe's Digital Data Trail

Let us consider a hypothetical “highly tracked” individual [33]. Our Jane Doe leaves for work in the morning, and gets in her Chevrolet Impala, which has OnStar service to monitor her car. OnStar will contact emergency services if Jane has an accident, but will also report to the manufacturer any accident or mechanical failure the car's computer is aware of [34]. Jane commutes along a toll road equipped with electronic toll collection (ETC). The electronic toll system tracks where and at what time Jane enters and leaves the toll road (Fig. 5).

Fig. 5. Singapore's Electronic Road Pricing (ERP) system. The ERP uses a dedicated short-range radio communication system to deduct ERP charges from CashCards. These are inserted in the in-vehicle units of vehicles before each journey. Each time vehicles pass through a gantry when the system is in operation, the ERP charges are automatically deducted. Courtesy of Katina Michael 2003.

When she gets to work, she uses a transponder ID card to enter the building she works in (Fig. 6), which logs the time she enters and by what door. She also uses her card to log into the company's network for the morning. Her company's Internet firewall software monitors any websites she visits. At lunch, she eats with colleagues at a local restaurant. When she gets there, she “checks in” using a geolocation application on her phone—for doing so, the restaurant rewards her with a free appetizer [35].


Fig. 6. Employee using a contactless smart card to gain entry to her office premises. The card is additionally used to access elevators in the building, rest rooms, and secure store areas, and is the only means of logging into the company intranet. Courtesy of NXP Semiconductors 2009.

She then returns to work for the afternoon, again using her transponder ID badge to enter. After logging back into the network, she posts a review of the restaurant on a restaurant review site, or maybe a social networking site. At the end of the work day, Jane logs out and returns home along the same toll road, stopping to buy groceries at her local supermarket on the way. When she checks out at the supermarket, she uses her customer loyalty card to automatically use the store's coupons on her purchases. The supermarket tracks Jane's purchases so it can alert her when things she buys regularly are on sale.

During Jane's day, her movements were tracked by several different systems. During almost all of the time she spent out of the house, her movements were being followed. But Jane “opted in” to almost all of that tracking; it was her choice as the benefits she received outweighed her perceived costs. The toll collection transponder in her car allows her to spend less time in traffic [36]. She is happy to share her buying habits with various merchants because those merchants reward her for doing so [37]. In this world it is all about building up bonus points and getting rewarded. Sharing her opinions on review and social networking sites lets Jane keep in touch with her friends and lets them know what she is doing.
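Jane's day also illustrates the mechanics of dataveillance: once timestamped logs from independent systems reach one place, a chronological movement profile falls out of a simple sort. The events below are invented to mirror the scenario above:

```python
# Hedged sketch: merging per-system logs (toll road, building access,
# geolocation check-ins, loyalty card) into one movement profile.
# All events are fictitious, matching the hypothetical Jane Doe.

toll_log  = [("07:42", "toll road", "entered at gate 3"),
             ("17:10", "toll road", "exited at gate 7")]
badge_log = [("08:05", "badge", "entered office, north door"),
             ("13:01", "badge", "re-entered office")]
geo_app   = [("12:10", "geo-app", "checked in at restaurant")]
loyalty   = [("17:34", "loyalty card", "grocery purchase, store #12")]

def daily_profile(*logs):
    """Merge per-system logs into one time-ordered trail."""
    return sorted(event for log in logs for event in log)

for time, system, detail in daily_profile(toll_log, badge_log, geo_app, loyalty):
    print(f"{time}  [{system}] {detail}")
```

No single system sees the whole day; the detailed picture only emerges at the point of aggregation, which is exactly where the privacy question arises.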

While many of us might choose to allow ourselves to be monitored for the individual benefits that accrue to us personally, the data being gathered about collective behaviors are much more valuable to business and government agencies. In the 1980s, Clarke developed the notion of dataveillance to name the "systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons" [38]. Electronic toll collection (ETC) is used by millions of people in many countries. The more people who use it, as opposed to paying tolls at tollbooths, the faster traffic can flow for everyone. Everyone also benefits when ETC allows engineers to better monitor traffic flows and plan highway construction to avoid the busiest times of traffic. Geolocation applications let businesses reward first-time and frequent customers, and they can follow traffic to their business and see what customers do and do not like. Businesses such as grocery stores or drug stores that use customer loyalty cards are able to monitor buying trends to see what is popular and when. Increasingly, shoppers are being introduced to the near-field communication (NFC) capability on their third-generation (3G) smartphones (Fig. 7).

Fig. 7. Purchasing grocery items effortlessly by using the near-field communication (NFC) capability on your 3G smartphone. Courtesy of NXP Semiconductors 2009.

Some of these constant monitoring tools are truly personal and are controlled by and report back only to the user [39]. For example, there are now several adaptive home thermostat systems that learn a user's temperature preferences over time and allow users to track their energy usage and change settings online. For the health conscious, "sleep monitoring" systems allow users to track not only the hours of sleep they get per night, but also the percentage of time spent in light sleep versus rapid eye movement (REM) sleep, and their overall "sleep quality" [40].
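The "learning" such adaptive thermostats perform can be as simple as nudging a running estimate toward each manual adjustment the user makes. The sketch below is a minimal illustration of that idea; the learning rate, starting point, and function name are assumptions for illustration, not any vendor's actual algorithm.

```python
# A minimal sketch of an adaptive thermostat "learning" a preference:
# each manual adjustment nudges a running estimate toward the chosen
# setpoint (an exponential moving average). All values are illustrative.

def learn_preference(adjustments, alpha=0.3, initial=20.0):
    """Exponentially weighted estimate of the preferred temperature (deg C)."""
    estimate = initial
    for setpoint in adjustments:
        estimate += alpha * (setpoint - estimate)
    return estimate

# After several settings around 22.5 deg C, the estimate drifts toward it.
preference = learn_preference([22.0, 23.0, 22.5, 22.5, 22.5])
assert 21.5 < preference < 23.0
```

The same running estimate can then drive the schedule the user sees online, which is what lets the system report energy usage against a learned baseline.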

Fig. 8. Barcodes printed on individual packaged items on pallets. Order information is shown on the forklift's on-board laptop and the driver scans items that are being prepared for shipping using a handheld gun to update inventory records wirelessly. Courtesy AirData Pty Ltd, Motorola Premier Business Partner, 2009.

Businesses offer and customers use various mobile and customer tracking services because the offer is valued by both parties (Fig. 8). However, serious privacy and legal issues continue to arise [41]. ETC records have been subpoenaed in both criminal and civil cases [42]. Businesses in liquidation have sold their customer databases, violating the privacy agreements they gave to their customers when they were still in business. Geolocation services and social media that show a user's location or allow them to share where they have been or where they are going can be used in court cases to confirm or refute alibis [43].


Near-constant monitoring and reporting of our lives will only grow as our society becomes increasingly comfortable sharing more and more personal details (Fig. 9). In addition to the basic human desire to tell others about ourselves, information about our behavior as a group is hugely valuable to both governments and businesses. The benefits to individuals and to society as a whole are great, but the risks to privacy are also significant [44]. More information about group behaviors can let us allocate resources more efficiently, plan better for future growth, and generate less waste. More information about our individual patterns can allow us to do the same thing on a smaller scale—to waste less fuel heating our homes when there is no one present, or to better understand our patterns of human activity.


Fig. 9. A five step overview of how the Wherify location-based service works. The information retrieved by this service included a breadcrumb of each location (in table and map form), a list of time and date stamps, latitude and longitude coordinates, nearest street address, and location type. Courtesy of Wherify Wireless Location Services, 2009.


B. Social Computing

When we think of human evolution, we often think of biological adaptations to better survive disease or digest foods. But our social behaviors are also a product of evolution. Being able to read facial expressions and other nonverbal cues is an evolved trait and an essential part of human communication. In essence, we have evolved as a species to communicate face to face. Our ability to understand verbal and nonverbal cues has been essential to our ability to function in groups and therefore our survival [45].

The emoticon came very early in the life of electronic communication. This is not surprising, given how necessary facial expressions are for giving context to written words in the casual and humor-filled atmosphere of the Internet's precursors. Many other attempts to add context to the quick, casual writing style of the Internet have been made, mostly with less success. Indeed, the problem of communication devolving from normal conversation into meaningless shouting matches has been around almost as long as electronic communication itself. More recently, the "anonymous problem"—the problem of people anonymously harassing others without fear of response or retribution—has come under discussion in online forums and communities. And of course, we have seen the recent tragic consequences of cyberbullying [46]. In general, people will be much crueler to other people online than they would ever be in person; many of our evolved social mechanisms depend on seeing and hearing who we are communicating with.

The question we are faced with is this: Given that we now exist and interact in a world that our social instincts were not evolved to handle, how will we adapt to the technology, or more likely, how will the technology we use to communicate with adapt to us? We are already seeing the beginning of that adaptation: more and more social media sites require a “real” identity tied to a valid e-mail address. And everywhere on the Internet, “reputation” is becoming more and more important [177].

Reference sites, such as Wikipedia, control access based on reputation: users gain more privileges on the site to do things such as editing controversial topics or banning other users based on their contributions to the community—writing and editing articles or contributing to community discussions. On social media and review sites, users that are not anonymous have more credibility, and again reputation is gained with time and contribution to the community.
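The reputation-gated privileges described above amount to a simple threshold scheme: contributions accumulate points, and each ability unlocks at some level of standing. The sketch below illustrates the pattern; the class, privilege names, and point thresholds are illustrative assumptions, not Wikipedia's actual mechanics.

```python
# A minimal sketch of reputation-gated privileges, loosely modeled on how
# community sites unlock abilities as users contribute. The privilege
# names and point thresholds are illustrative assumptions.

PRIVILEGE_THRESHOLDS = {
    "comment": 0,
    "edit_article": 10,
    "edit_controversial_topic": 500,
    "ban_user": 2000,
}

class User:
    def __init__(self, name, reputation=0):
        self.name = name
        self.reputation = reputation

    def contribute(self, points):
        """Writing and editing articles earns reputation over time."""
        self.reputation += points

    def can(self, privilege):
        """A privilege unlocks once reputation reaches its threshold."""
        return self.reputation >= PRIVILEGE_THRESHOLDS[privilege]

newcomer = User("newcomer")
veteran = User("veteran", reputation=2500)
assert newcomer.can("comment") and not newcomer.can("ban_user")
assert veteran.can("edit_controversial_topic")
```

The design choice worth noting is that identity matters only insofar as it anchors the accumulated score, which is exactly why reputation systems push against anonymity.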

It is now becoming standard practice for social media of all forms to allow users to control who can contact them and make it very easy to block unwanted contact. In the future, these trends will be extended. Any social media site with a significant amount of traffic will have a way for users to build and maintain a reputation and to control access accordingly. The shift away from anonymity is set to continue and this is also evident in the way search engine giants, like Google, are updating their privacy statements—from numerous policies down to one. Google states: “When you sign up for a Google Account, we ask you for personal information. We may combine the information you submit under your account with information from other Google services or third parties in order to provide you with a better experience and to improve the quality of our services” [47].

Fig. 10. Wearable high-definition video calling and recording attire. Courtesy of Xybernaut 2002.

When people use technology to socialize, they are often doing it on mobile platforms. Therefore, the futures of social and mobile computing are inevitably intertwined. The biggest change that is coming to the shared mobile/social computing space is the final spread of WiFi and high-density mobile phone networks. There are still huge geographical areas where there is no way of wirelessly connecting to the Internet or where the connection is so slow as to be unusable. As high-speed mobile Internet spreads, extra bandwidth could help the problems inherent in communicating without being able to see the other person. High-definition (HD) video calling on mobile phones will make person-to-person communications easier and more context rich (Fig. 10). HD video calling and conferencing will make everything from business meetings to long-distance relationships easier by allowing the participants to pick up on unspoken cues.


As more and more of our social interactions go online, the online world will be forced to adapt to our evolved human social behaviors. It will become much more like offline communication, with reputation and community standing being deeply important. True anonymity will become harder and harder to come by, as the vast majority of social media will require some proof of identity. For example, this practice is already occurring in countries like South Korea [48].

While we cannot predict all the ways in which our online interactions will become more immersive, we can say for certain that they will. The beauty of these changes is that it will become as easy to maintain or grow a personal relationship on the other side of the world as across town. As countries and regions currently without high-speed data networks come online, they will integrate into a new global community, allowing us all to know one another, with a diverse array of as-yet-unknown consequences.

C. Wearable Computing

Fig. 11. The prototype GPS Locator for Children with a built-in pager, a request for 911, GPS technology, and a key fob to manually lock and unlock the locator. This specific device is no longer being marketed, despite the apparent need in some contexts. Courtesy of Wherify Wireless Location Services, 2003.

According to Siewiorek [49, p. 82], the first wearable device was prototyped in 1961 but it was not until 1991 that the term “wearable computer” was first used by a research group at Carnegie Mellon University (Pittsburgh, PA). This coincided with the rise of the laptop computer, early models of which were known as “luggables.” Wearable computing can be defined as “anything that can be put on and adds to the user's awareness of his or her environment …mostly this means wearing electronics which have some computational power” [50, p. 2012]. While the term “wearables” is generally used to describe wearable displays and custom computers in the form of necklaces, tiepins, and eyeglasses, the definition has been broadened to incorporate iPads, iPods, personal digital assistants (PDAs), e-wallets, GPS watches (Fig. 11), and other mobile accessories such as smartphones, smart cards, and electronic passports that require the use of belt buckles or clip-on satchels attached to conventional clothing [51, p. 330]. The iPlant (Internet implant) is probably not far off either [52].


Wearable computing has reinvented the way we work and go about our day-to-day business and is set to make even greater changes in the foreseeable future [53]. In 2001, it was predicted that highly mobile professionals would be taking advantage of smart devices to “check messages, finish a presentation, or browse the Web while sitting on the subway or waiting in line at a bank” [54, p. 44]. This vision has indeed been realized but devices like netbooks are still being lugged around instead of worn in the true sense.

The next phase of wearables will be integrated into our very clothing and accessories, with some even pointing to the body itself being used as an input mechanism. Harrison of Carnegie Mellon's Human–Computer Interaction Institute (HCII), together with Microsoft researchers, produced Skinput, which turns the body that travels everywhere with us into one giant touchpad [55]. These are all exciting innovations, and few would deny the positives that will come from the application of this cutting-edge research. The challenge will be how to avoid rushing this technology into the marketplace without commensurate testing of prototypes and due consideration of function creep. Function or scope creep occurs when a device or application is used for something other than what it was originally intended for.

Early prototypes of wearable computers throughout the 1980s and 1990s could have been described as outlandish, bizarre, or even weird. For the greater part, wearable computing efforts have focused on head-mounted displays (a visual approach) that unnaturally interfered with human vision and made proximity to others cumbersome [56, p. 171]. But the long-term aim of researchers is to make wearable computing inconspicuous as soon as technical improvements allow for it (Fig. 12). The end user should look as "normal" as possible [57, p. 177].


Fig. 12. Self-portraits of Mann with wearable computing kit from the 1980s to the 1990s. Prof. Mann started working on his WearComp invention as far back as his high school days in the 1970s. Courtesy of Steve Mann.

New technologies like the "Looxcie" [58] wearable recorders have come a long way since the clunky point-of-view head-mounted recording devices of the 1980s, allowing people to effortlessly record and share their life as they experience it in different contexts. Mann has aptly coined the term sousveillance, from the French sous (below) and veiller (to watch): a type of inverse panopticon. A whole body of literature has emerged around the notion of sousveillance, which refers to the recording of an activity by a participant in the activity, typically by way of small wearable or portable personal technologies. Online platforms demonstrate the great power of sousveillance. But there are still serious challenges, such as privacy concerns, that need to be overcome if wearable computing is to become commonplace [59]. Just as Google has created StreetView, can the individual participate in a kind of PersonView without his neighbors' or strangers' consent [7], despite the public versus private space debate? Connected to privacy is also the critical issue of autonomy (and, if we were to agree with Kant, human dignity), that is, our right to make informed and uncoerced decisions.

While mass-scale commercial production of wearable clothing is still some time away, with some even calling it an unfulfilled pledge [60], shirts with simple memory functions have been developed and tested. Sensors will play a big part in the functionality of smartware, helping to determine the environmental context, and undergarments closest to the body will be used to measure body functions such as temperature, blood pressure, and heart and pulse rates. For now, however, the aim is to develop ergonomically astute wearable computing that is actually useful to the end user. Head-mounted displays attached to the head with a headband may be practical for miners carrying out occupational health and safety (OH&S) duties but are unattractive for everyday consumer users. Displays of the next generation will be mounted or concealed within eyeglasses themselves [61, p. 48].

Mann [57, p. 31] predicts that wearable computing will become so common one day, interwoven into every day clothing-based computing, that “we will no doubt feel naked, confused, and lost without a computer screen hovering in front of our eyes to guide us,” just like we would feel our nakedness without the conventional clothing of today.

1. Wearables in the Medical Domain

Unsurprisingly, wearables have also found a niche market in the medical domain. In the mid-1990s, researchers began to describe a small wearable device that continuously monitored glucose levels so that the right amount of insulin could be calculated for the individual, reducing the incidence of hypoglycemic episodes [62]. The Glucoday [63] and GlucoChip [64] are just two products demonstrating the potential to go beyond wearables toward in vivo techniques in medical monitoring.

Medical wearables even have the capability to check and monitor products in one's blood [65, p. 88]. Today medical wearable device applications include: “monitoring of myocardial ischemia, epileptic seizure detection, drowsiness detection …physical therapy feedback, such as for stroke victim rehabilitation, sleep apnea monitoring, long-term monitoring for circadian rhythm analysis of heart rate variability (HRV)” [66, p. 44].

Some of the current shortcomings of medical wearables are similar to those of conventional wearables, namely the size and the weight of the device which can be too large and too heavy. In addition, wearing the devices for long periods of time can be irritating due to the number of sensors that may be required to be worn for monitoring. The gel applied for contact resistance between the electrode and the skin can also dry up, which is a nuisance. Other obstacles to the widespread diffusion of medical wearables include government regulations and the manufacturers' requirement for limited liability in the event that an incorrect diagnosis is made by the equipment.

But wearable products have improved greatly over the past ten years. Thanks to commensurate breakthroughs in the miniaturization of computing components, wearable devices are now usually quite small. Consider Toumaz Technology's Digital Plaster invention, known as the Sensium Life Pebble TZ203002 (Fig. 13). The Digital Plaster contains a Sensium silicon chip, powered by a tiny battery, which sends data via a cell phone or a PDA to a central computer database. The Life Pebble has the ability to enable continuous, auditable acquisition of physiological data without interfering with the patient's activities. The device can continuously monitor electrocardiogram (ECG), heart rate, physical activity, and skin temperature. In an interview with M. G. Michael in 2006, Toumazou noted how the Digital Plaster had been applied in epilepsy control and depression. He said that by monitoring the electrical and chemical responses they could predict the onset of either a depressive episode or an epileptic fit; and then, once predicted, the nerve could be stimulated to counter the seizure [67]. He added that this truly signified "personal healthcare."
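At its simplest, the continuous monitoring a device like the Life Pebble performs amounts to checking a stream of physiological samples against preset bounds and flagging excursions. The sketch below illustrates that loop only; the ranges and readings are illustrative, not clinical thresholds, and real devices add prediction and trend analysis on top.

```python
# A toy version of a continuous-monitoring loop: each incoming sample of
# vital signs is checked against preset bounds, and excursions raise an
# alert. The ranges and readings are illustrative, not clinical values.

NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "skin_temp_c": (35.0, 38.0),
}

def check_sample(sample):
    """Return (metric, value) pairs that fall outside their normal range."""
    alerts = []
    for metric, value in sample.items():
        low, high = NORMAL_RANGES[metric]
        if not low <= value <= high:
            alerts.append((metric, value))
    return alerts

stream = [
    {"heart_rate_bpm": 72, "skin_temp_c": 36.6},   # normal sample
    {"heart_rate_bpm": 131, "skin_temp_c": 36.9},  # elevated heart rate
]
for sample in stream:
    for metric, value in check_sample(sample):
        print(f"ALERT: {metric} = {value}")  # prints: ALERT: heart_rate_bpm = 131
```

The auditability the text mentions follows naturally from this design: every sample and every alert can be timestamped and logged to the central database.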

Fig. 13. Prof. Christofer Toumazou with a patient wearing the “digital plaster”; a tiny electronic device meant to be embedded in ordinary medical plaster that includes sensors for monitoring health-related metadata such as blood pressure, temperature, and glucose levels. Courtesy of Toumaz Technology 2008.


D. Robots and Unmanned Aerial Systems and Vehicles

Fig. 14. Predator Drone aircraft: this plane comes in the armed and reconnaissance versions and the models are known as RQ-1 and MQ-1.

Autonomous systems are those which are self-governed. In practice, there are many degrees of autonomy ranging from the highly constrained and supervised to unconstrained and intelligent. Some systems are referred to as “semiautonomous” in order to suggest that the machines are tasked or supervised by a human operator. An unmanned vehicle may be a remotely piloted “dumb” vehicle or an autonomous vehicle (Fig. 14). Robots may be designed to perform repetitive tasks in a highly constrained environment or with intelligence and a high level of autonomy to make judgments in a dynamic and unpredictable environment. As technology advancements allow for a high level of autonomy and expansion from industrial applications to caregiving and warfighting, society is coming to grips with the present and the future of increasingly autonomous systems in our homes, workplaces, and battlefields.


Robot ethics, particularly with respect to autonomous weapons systems, has received increasing attention in the last few years [68]. While some call for an outright stop to the development of such technology [69], others seek to shape the technology with ethical and moral implications in mind [6], [70]–[73]. Driving robotics weapons development underground or refusing to engage in dialog over the ethical issues will not give ethicists an opportunity to participate in shaping the design and use of such weapons. Arkin [6] and Operto [74], among others, argue that engineers must not shy away from these ethical challenges. Furthermore, the technological cat is out of the bag: "Autonomy is subtle in its development—it is occurring in a step-by-step process, rather than through the creation of a disruptive invention. It is far less likely that we will have a sudden development of a 'positronic brain' or its equivalent, but rather a continual and gradual relinquishment of authority to machines through the constant progress of science, as we have already seen in automated trains, elevators, and numerous other examples, that have vanished into the background noise of civilization. Autonomy is already here by some definitions" [70].

The evolution of the development and deployment of unmanned aerial vehicles and other autonomous or semiautonomous systems has outpaced the analysis of social implications and ethics of their design and use [70], [75]. Sullivan argues that the evolution of unmanned vehicles for military deployment should not be confused with the more general trend of increasing autonomy in military applications [75]. Use of robots often provides a tactical advantage due to sensors, data processing, and physical characteristics that outperform humans. Robots can act without emotion, bias, or self-preservation influencing judgment, which may be a liability or an advantage. Risks of robot deployment in the military, the healthcare industry, and elsewhere include trust in autonomous systems (too little, or too much) and the diffusion of blame or moral buffering [6], [72].

For such critical applications in the healthcare domain, and lethal applications in weapons, the emotional and physical distance of operating a remote system (e.g., drone strikes via video-game style interface) may negatively influence the moral decision making of the human operator or supervisor, while also providing some benefit of emotional protection against post-traumatic stress disorder [71], [72]. Human–computer interfaces can promote ethical choices in the human operator through thoughtful or model-based design as suggested by Cummings [71] and Asaro [72].

For the ethical behavior of the autonomous system itself, Arkin proposes that robot soldiers could be more humane than humans, if technologically constrained to the laws of war and rules of engagement, which they could follow without the distortions of emotion, bias, or a sense of self-preservation [6], [70]. Asaro argues that such laws are not, in fact, objective and static but rather meant for human interpretation for each case, and therefore could not be implemented in an automated system [72]. More broadly, Operto [74] agrees that a robot (in any application) can only act within the ethics incorporated into its laws, but that a learning robot, in particular, may not behave as its designers anticipate.

Fig. 15. Kotaro, a humanoid robot created at the University of Tokyo (Tokyo, Japan), presented at the University of Arts and Industrial Design Linz (Linz, Austria) during the Ars Electronica Festival 2008. Courtesy of Manfred Werner-Tsui.

Robot ethics is just one part of the landscape of social implications for autonomous systems. The field of human–robot interaction explores how robot interfaces and socially adaptive robots influence the social acceptance, usability, and safety of robots [76] (Fig. 15). For example, robots used for social assistance and care, such as for the elderly and small children, introduce a host of new questions about social implications. Risks of developing an unhealthy attachment or loss of human social contact are among the concerns raised by Sharkey and Sharkey [77]. Interface design can influence these and other risks of socially assistive robots, such as a dangerous misperception of the robot's capabilities or a compromise of privacy [78].


Autonomous and unmanned systems have related social implication challenges. Clear accountability and enforcing morality are two common themes in the ethical design and deployment of such systems. These themes are not unique to autonomous and unmanned systems, but perhaps the science fiction view of robots run amok raises the question “how can we engineer a future where we can benefit from these technologies while maintaining our humanity?”


SECTION IV. The Future

Great strides are being taken in the field of biomedical engineering: the application of engineering principles and techniques to the medical field [79]. New technologies such as prospective applications of nanotechnology, microcircuitry (e.g., implantables), and bionics will heal and give hope to many who are suffering from life-debilitating and life-threatening diseases [80]. The lame will walk again. The blind will see just as the deaf have heard. The dumb will sing. Even bionic tongues are on the drawing board. Hearts and kidneys and other organs will be built anew. The fundamental point is that society at large should be able to distinguish between positive and negative applications of technological advancements before we diffuse and integrate such innovations into our day-to-day existence.

The Bionics Institute [81], for instance, is future-focused on the possibilities of bionic hearing, bionic vision, and neurobionics, stating: “Medical bionics is not just a new frontier of medical science, it is revolutionizing what is and isn't possible. Where once there was deafness, there is now the bionic ear. And where there was blindness, there may be a bionic eye.” The Institute reaffirms its commitment to continuing innovative research and leading the way on the proposed “world-changing revolution.”

A. Cochlear Implants—Helping the Deaf to Hear

Fig. 16. Cochlear's Nucleus Freedom implant with Contour Advance electrode which is impervious to magnetic fields up to 1.5 Tesla. Courtesy of Cochlear Australia.

In 2000, more than 32 000 people worldwide already had cochlear implants [82], thanks to the global efforts of people such as Australian Professor Graeme Clark, the founder of Cochlear, Inc. [83]. Clark performed his first transplant in Rod Saunders's left ear at the Royal Eye and Ear Hospital in Melbourne, Australia, on August 1, 1978, when "he placed a box of electronics under Saunders's skin and a bundle of electrodes in his inner ear" [84]. In 2006, that number had grown to about 77 500 for the Nucleus implant (Fig. 16) alone, which had about 70% of the market share [85]. Today, there are over 110 000 cochlear implant recipients, with about 30 000 added annually, and their personal stories are testament enough to the ways in which new technologies can change lives dramatically for the better [86]. Cochlear implants can restore hearing to people who have severe hearing loss, a form of diagnosed deafness. Unlike a standard hearing aid that works like an amplifier, the cochlear implant acts like a microphone to change sound into electronic signals. Signals are sent to the microchip implant via radio frequency (RF), stimulating nerve fibers in the inner ear. The brain then interprets the signals that are transmitted via the nerves to be sound.
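The signal path just described (sound captured, converted into per-band electrical signals, delivered to electrodes along the cochlea) can be illustrated with a toy channel analysis. Real speech processors use filter banks and envelope extraction across many electrodes; this sketch simply measures spectral energy per band over one short frame, and the band edges and frame length are illustrative assumptions.

```python
import cmath
import math

# A toy illustration of the cochlear-implant signal path: a short frame of
# sound is analyzed into frequency bands, and the energy in each band
# becomes the stimulation level for one electrode. Real processors use
# filter banks and envelope extraction; band edges here are illustrative.

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum (first half of the bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def electrode_levels(frame, sample_rate, band_edges_hz):
    """Sum per-bin spectral energy into one stimulation level per electrode."""
    mags = dft_magnitudes(frame)
    hz_per_bin = sample_rate / len(frame)
    return [sum(m for k, m in enumerate(mags) if lo <= k * hz_per_bin < hi)
            for lo, hi in band_edges_hz]

# A pure 1 kHz tone should light up the middle electrode of three bands.
sample_rate = 8000
frame = [math.sin(2 * math.pi * 1000 * t / sample_rate) for t in range(64)]
levels = electrode_levels(frame, sample_rate,
                          [(0, 500), (500, 1500), (1500, 4000)])
assert levels.index(max(levels)) == 1
```

The place-coding idea is the point: which electrode fires (and how strongly) encodes which frequencies are present, which is what the auditory nerve then passes on as "sound."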


Today, cochlear implants (which are also commonly known as bionic ears) are being used to overcome deafness; tomorrow, they may be open to the wider public as a performance-enhancing technique [87, pp. 10–11]. Audiologist Steve Otto of the Auditory Brainstem Implant Project at the House Ear Institute (Los Angeles, CA) predicts that one day “implantable devices [will] interface microscopically with parts of the normal system that are still physiologically functional” [88]. He is quoted as saying that this may equate to “ESP for everyone.” Otto's prediction that implants will one day be used by persons who do not require them for remedial purposes has been supported by numerous other high profile scientists. A major question is whether this is the ultimate trajectory of these technologies.

For Christofer Toumazou, however, Executive Director of the Institute of Biomedical Engineering, Imperial College London (London, U.K.), there is a clear distinction between repairing human functions and creating a "Superman." He said, "trying to give someone that can hear super hearing is not fine." For Toumazou, the basic ethical paradigm should be that we hope to repair the human and not recreate the human [67].

B. Retina Implants—On a Mission to Help the Blind to See

Fig. 17. Visual cortical implant designed by Prof. Mohamad Sawan, a researcher at Polystim Neurotechnologies Laboratory at the Ecole Polytechnique de Montreal (Montreal, QC, Canada). The basic principle of Prof. Sawan's technology consists of stimulating the visual cortex by implanting a silicon microchip on a network of electrodes, made of biocompatible materials, wherein each electrode injects a stimulating electrical current in order to provoke a series of luminous points to appear (an array of pixels) in the field of vision of the blind person. This system is composed of two distinct parts: the implant and an external controller. Courtesy of Mohamad Sawan 2009, made available under Creative Commons License.

The hope is that retina implants will be as successful as cochlear implants in the future [89]. Just as cochlear implants cannot be used for persons suffering from complete deafness, retina implants are not a solution for totally blind persons but rather those suffering from aged macular degeneration (AMD) and retinitis pigmentosa (RP). Retina implants have brought together medical researchers, electronic specialists, and software designers to develop a system that can be implanted inside the eye [90]. A typical retina implant procedure is as follows: “[s]urgeons make a pinpoint opening in the retina to inject fluid in order to lift a portion of the retina from the back of the eye, creating a pocket to accommodate the chip. The retina is resealed over the chip, and doctors inject air into the middle of the eye to force the retina back over the device and close the incisions” [91] (Fig. 17).


Brothers Alan and Vincent Chow, one an engineer, the other an ophthalmologist, developed the artificial silicon retina (ASR) and began the company Optobionics Corporation in 1990. This was a marriage between biology and engineering: "In landmark surgeries at the University of Illinois at Chicago Medical Center …the first artificial retinas made from silicon chips were implanted in the eyes of two blind patients who have lost almost all of their vision because of retinal disease." In 1993, Branwyn [92, p. 3] reported that a team at the National Institutes of Health (NIH) led by Dr. Hambrecht implanted a 38-electrode array into a blind female's brain. It was reported that she saw simple light patterns and was able to make out crude letters. The following year the same procedure was conducted by another group on a blind male, resulting in the man seeing a black dot with a yellow ring around it. Rizzo of Harvard Medical School's Massachusetts Eye and Ear Infirmary (Boston, MA) has cautioned that it is better to talk down the possibilities of the retina implant so as not to give false hopes. The professor himself has expressed that they are dealing with "science fiction stuff" and that there are no long-term guarantees that the technology will ever fully restore sight, although significant progress is being made by a number of research institutes [93, p. 5].

Among these pioneers are researchers at The Johns Hopkins University Medical Center (Baltimore, MD). Brooks [94, p. 4] describes how the retina chip developed by the medical center will work: “a kind of miniature digital camera…is placed on the surface of the retina. The camera relays information about the light that hits it to a microchip implanted nearby. This chip then delivers a signal that is fed back to the retina, giving it a big kick that stimulates it into action. Then, as normal, a signal goes down the optic nerve and sight is at least partially restored.” In 2009, at the age of 56, Barbara Campbell had an array of electrodes implanted in each eye [95] and while her sight is nowhere near fully restored, she is able to make out shapes and see shades of light and dark. Experts believe that this approach is still more realistic in restoring sight to those suffering from particular types of blindness, even more than stem cell therapy, gene therapy, or eye transplants [96] where the risks still outweigh the advantages.
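Functionally, the camera-to-chip designs described in this section reduce a camera image to a coarse grid of stimulation values, one per electrode, each provoking a luminous point (a "phosphene") in the patient's visual field. The sketch below shows only that downsampling step; the grid size and test image are illustrative assumptions, not any device's actual resolution.

```python
# A toy sketch of the camera-to-electrode mapping used by retina-chip
# designs: each block of camera pixels is averaged into one electrode's
# stimulation value. Grid size and the test image are illustrative.

def to_phosphenes(image, grid=4):
    """Average an image (list of pixel rows) down to a grid x grid map."""
    block_h = len(image) // grid
    block_w = len(image[0]) // grid
    return [[sum(image[r][c]
                 for r in range(gy * block_h, (gy + 1) * block_h)
                 for c in range(gx * block_w, (gx + 1) * block_w))
             / (block_h * block_w)
             for gx in range(grid)]
            for gy in range(grid)]

# An 8x8 image, bright on the left half, dark on the right: the electrode
# map preserves the coarse shape, which is why patients like Barbara
# Campbell can make out shapes and shades of light and dark.
image = [[255] * 4 + [0] * 4 for _ in range(8)]
assert to_phosphenes(image, grid=2) == [[255.0, 0.0], [255.0, 0.0]]
```

The severe loss of resolution in this step is also why researchers such as Rizzo caution against promising fully restored sight.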

C. Tapping Into the Heart and Brain

Fig. 18. An artificial pacemaker from St. Jude Medical (St. Paul, MN), with electrode, 2007. Courtesy of Steven Fruitsmaak.

If it was possible as far back as 1958 to successfully implant a two-transistor pacemaker the size of an ice hockey puck in the heart of a 43-year-old man [97], then the things that will become possible by 2020 are constrained as much by the imagination as by technological limitations. Heart pacemakers (Fig. 18) are still being further developed today, but for the greater part researchers are turning their attention to the possibilities of brain pacemakers. In the foreseeable future, brain implants may help sufferers of Parkinson's disease, paralysis, nervous system problems, and speech impairment, and even cancer patients. The research is still in its formative years and the obstacles are great because of the complexity of the brain, but scientists are hopeful of major breakthroughs in the next 20 years.


The brain pacemaker endeavors are bringing together people from a variety of disciplines, headed mainly by neurosurgeons. By using brain implants, electrical pulses can be sent directly to nerves via electrodes. The signals can be used to interrupt incoherent messages to nerves that cause uncontrollable movements or tremors. By tapping into the right nerves in the brain, particular reactions can be achieved. Using a technique that was discovered almost accidentally in France in 1987, the following extract describes the procedure of “tapping into” the brain: “Rezai and a team of functional neurosurgeons, neurologists and nurses at the Cleveland Clinic Foundation in Ohio had spent the next few hours electronically eavesdropping on single cells in Joan's brain attempting to pinpoint the precise trouble spot that caused a persistent, uncontrollable tremor in her right hand. Once confident they had found the spot, the doctors had guided the electrode itself deep into her brain, into a small duchy of nerve cells within the thalamus. The hope was that when they sent an electrical current to the electrode, in a technique known as deep-brain stimulation, her tremor would diminish, and perhaps disappear altogether” [98]. Companies such as Medtronic Incorporated of Minnesota (Minneapolis, MN) now specialize in brain pacemakers [98]. Medtronic's Activa implant has been designed specifically for sufferers of Parkinson's disease [93].

More recently, there has been some success with ameliorating epileptic attacks through closed-loop technology, also known as smart stimulation. The implant devices can detect an onset of epileptiform activity through a demand-driven process. This means that the battery power in the active implant lasts longer because of increased efficiency, i.e., it is not always stimulating in anticipation of an attack, and the adverse effects of having to remove and install new implants more frequently are forgone [99]. Similarly, it has been said that technology such as deep brain stimulation, in which physicians implant electrodes in the brain and an electrical pacemaker in the patient's clavicle for Parkinson's disease, may well be used to overcome problems with severely depressed persons [100].
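The demand-driven logic described above can be sketched as a simple threshold detector: the implant spends battery on stimulation only for windows of signal in which onset-like activity is detected. All function names, thresholds, and signal values below are illustrative assumptions, not the detection algorithm of any actual device.

```python
# Sketch of demand-driven ("closed-loop") neurostimulation, assuming a
# simple amplitude-threshold detector. Real devices use far more
# sophisticated onset-detection algorithms; this only illustrates the
# battery-saving idea of stimulating on demand rather than continuously.

def detect_onset(window, threshold=3.0):
    """Flag epileptiform-like activity when the mean absolute amplitude
    of the sampled window exceeds a calibrated threshold."""
    mean_abs = sum(abs(x) for x in window) / len(window)
    return mean_abs > threshold

def closed_loop_controller(signal_windows, threshold=3.0):
    """Return one stimulation decision (True/False) per window; battery
    is spent only on windows where an onset is detected."""
    return [detect_onset(w, threshold) for w in signal_windows]

# Illustrative traces: quiet baseline activity vs. a high-amplitude burst.
baseline = [0.2, -0.1, 0.3, -0.2]
burst = [5.0, -4.5, 6.1, -5.2]
decisions = closed_loop_controller([baseline, burst, baseline])
# Only the middle window triggers stimulation.
```

The design choice mirrors the passage: a continuously stimulating device would return True for every window, whereas the demand-driven controller stimulates only when the detector fires, which is why battery life improves.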

Currently, the technology is being used to treat thousands of people who are severely depressed or suffering from obsessive compulsive disorder (OCD) and who have not responded to other forms of treatment such as cognitive behavioral therapy (CBT) [101]. It is estimated that 10% of people suffering from depression do not respond to conventional methods. Although hard figures are difficult to obtain, several thousand depressed persons worldwide have had brain pacemakers installed whose software can be updated wirelessly and remotely. The trials have been based on decades of research by Prof. Helen Mayberg of Emory University School of Medicine (Atlanta, GA), who first began studying the use of subcallosal cingulate gyrus deep brain stimulation (SCG DBS) for depression in 1990.

In her research, Mayberg has used a device no larger than a matchbox, with a battery-powered generator that sits in the chest and produces electric currents. The currents are sent to an area deep in the brain via tiny wires channeled under the skin on either side of the neck. Surprisingly, the procedure to install this type of implant requires only local anesthetic and is performed on an outpatient basis. In 2005, Mayberg told a meeting at the Science Media Centre in London: “This is a very new way to think about the nature of depression …We are not just exciting the brain, we are using electricity to retune and remodulate…We can interrupt or switch off an abnormally functioning circuit” [102].

Ongoing trials today continue to show promising results. The outcome of a 20-patient clinical trial of persons with depression treated with SCG DBS, published in 2011, showed that: “At 1 year, 11 (55%) responded to surgery with a greater than 50% reduction in 17-item Hamilton Depression Scale scores. Seven patients (35%) achieved or were within 1 point of achieving remission (scores < 8). Of note, patients who responded to surgery had a significant improvement in mood, anxiety, sleep, and somatic complaints related to the disease. Also important was the safety of the procedure, with no serious permanent adverse effects or changes in neuropsychological profile recorded” [103].

Despite the early signs that these procedures may offer long-term solutions for hundreds of thousands of people, some research scientists believe that tapping into the human brain is a long shot. The brain is commonly understood to be “wetware,” and plugging hardware into this “wetware” would seem to be a type mismatch, at least according to Steve Potter, a senior research fellow in biology working at the California Institute of Technology's Biological Imaging Center (Pasadena, CA). Instead, Potter is pursuing the cranial route as a “digital gateway to the brain” [88]. Others believe that it is impossible to figure out exactly what all the millions of neurons in the brain actually do. Whether or not we eventually succeed in “reverse-engineering” the human brain, the topic of implants for both therapeutic and enhancement purposes has aroused significant controversy in the past, and promises to do so even more in the future.

D. Attempting to Overcome Paralysis

In more speculative research, surgeons believe that brain implants may be a solution for persons who are suffering from paralysis, such as spinal cord damage. In these instances, the nerves in the legs are still theoretically “working”; it is just that they cannot make contact with the brain, which controls their movement. If somehow signals could be sent to the brain, bypassing the lesion point, it could conceivably mean that paralyzed persons regain at least part of their capability to move [104]. In 2000, Reuters [105] reported that a paralyzed Frenchman (Marc Merger) “took his first steps in 10 years after a revolutionary operation to restore nerve functions using a microchip implant…Merger walks by pressing buttons on a walking frame which acts as a remote control for the chip, sending impulses through fine wires to stimulate leg muscles…” It should be noted, however, that the system only works for paraplegics whose muscles remain alive despite damage to the nerves. Yet there are promising devices like the Bion that may one day be able to control muscle movement using RF commands [106]. Brooks [94] reports that researchers at the University of Illinois at Chicago (Chicago, IL) have “invented a microcomputer system that sends pulses to a patient's legs, causing the muscles to contract. Using a walker for balance, people paralyzed from the waist down can stand up from a sitting position and walk short distances…Another team, based in Europe…enabled a paraplegic to walk using a chip connected to fine wires in his legs.” These techniques are known as functional neuromuscular stimulation systems [107]. In the case of American Rob Summers, who became a paraplegic after an accident, doctors implanted an epidural stimulator and electrodes into his spinal cord. “The currents mimic those normally sent by the brain to initiate movement” [108].

Others working to help paraplegics to walk again have invested time in military technology like exoskeletons [109] meant to aid soldiers in lifting greater weights, and also to protect them during battle. Ekso Bionics (Berkeley, CA), formerly Berkeley Bionics, has been conducting trials of an electronic suit in the United States since 2010. The current Ekso model will be fully independent and powered by artificial intelligence in 2012. The Ekso “provides nearly four hours of battery power to its electronic legs, which replicate walking by bending the user's knees and lifting their legs with what the company claims is the most natural gait available today” [110]. This is yet another example of how military technology has been commercialized toward a health solution [111].

E. Granting a Voice to the Speech Impaired

Speech-impairment microchip implants work differently than cochlear and retina implants. Whereas in the latter two hearing and sight are restored, in implants for speech impairment the voice is not restored; rather, an outlet for communication is created, possibly with the aid of a voice synthesizer. At Emory University, neurosurgeon Roy E. Bakay and neuroscientist Phillip R. Kennedy were responsible for critical breakthroughs early in the research. In 1998, Versweyveld [112] reported two successful implants of a neurotrophic electrode into the brains of a woman and a man who were suffering from amyotrophic lateral sclerosis (ALS) and brainstem stroke, respectively. In an incredible process, Bakay and Kennedy's device uses the patient's brain processes—thoughts, if you will—to move a cursor on a computer screen. “The computer chip is directly connected with the cortical nerve cells…The neural signals are transmitted to a receiver and connected to the computer in order to drive the cursor” [112]. This procedure has major implications for brain–computer interfaces (BCIs), especially bionics. Bakay predicted that by 2010 prosthetic devices would grant patients who are immobile the ability to turn on the TV just by thinking about it, and by 2030 would grant severely disabled persons the ability to walk independently [112], [113].
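The cursor-driving idea can be illustrated with a toy decoder. This is not Bakay and Kennedy's actual signal-processing chain; the baseline rate, gain, and function names below are illustrative assumptions sketching how a decoded neural firing rate might be mapped to cursor movement.

```python
# Toy sketch of a brain-computer interface decoder: a firing rate above a
# learned baseline is translated into cursor displacement, so sustained
# "intent" moves the cursor across the screen. Purely illustrative.

def decode_step(firing_rate_hz, baseline_hz=10.0, gain=0.5):
    """Map a neural firing rate to a 1-D cursor displacement (pixels).
    Rates at or below the baseline produce no movement."""
    return max(0.0, firing_rate_hz - baseline_hz) * gain

def drive_cursor(firing_rates, start_x=0.0):
    """Accumulate cursor position over a sequence of decoded rates."""
    x = start_x
    for rate in firing_rates:
        x += decode_step(rate)
    return x

# Baseline activity, then progressively stronger intent.
final_x = drive_cursor([10.0, 14.0, 20.0])
```

Even this crude mapping captures the essential loop the passage describes: cortical signals are picked up, transmitted to a receiver, and translated by software into on-screen action.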

F. Biochips for Diagnosis and Smart Pills for Drug Delivery

It is not unlikely that biochips will be implanted in people at birth in the not too distant future. “They will make individual patients aware of any pre-disposition to susceptibility” [114]. That is, biochips will be used for point-of-care diagnostics and also for the identification of needed drugs, even to detect pandemic viruses and biothreats for national security purposes [115]. The way that biosensors work is that they “represent the technological counterpart of our sense organs, coupling the recognition by a biological recognition element with a chemical or physical transducer, transferring the signal to the electrical domain” [116]. Types of biosensors include enzymes, antibodies, receptors, nucleic acids, cells (using a biochip configuration), biomimetic sequences of RNA (ribonucleic acid) or DNA (deoxyribonucleic acid), and molecularly imprinted polymers (MIPs). Biochips, on the other hand, “automate highly repetitive laboratory tasks by replacing cumbersome equipment with miniaturized, microfluidic assay chemistries combined with ultrasensitive detection methodologies. They achieve this at significantly lower costs per assay than traditional methods—and in a significantly smaller amount of space. At present, applications are primarily focused on the analysis of genetic material for defects or sequence variations” [117].

With respect to the treatment of illness, drug delivery will not require patients to swallow pills or take routine injections; instead, chemicals will be stored on a microprocessor and released as prescribed. The idea is known as “pharmacy-on-a-chip” and originated with scientists at the Massachusetts Institute of Technology (MIT, Cambridge, MA) in 1999 [118]. The following extract is from The Lab [119]: “Doctors prescribing complicated courses of drugs may soon be able to implant microchips into patients to deliver timed drug doses directly into their bodies.”

Microchips being developed at Ohio State University (OSU, Columbus, OH) can be swathed with chemical substances such as pain medication, insulin, different treatments for heart disease, or gene therapies, allowing physicians to work at a more detailed level [119]. The breakthroughs have major implications for diabetics, especially those who require insulin at regular intervals throughout the day. Researchers at the University of Delaware (Newark, DE) are working on “smart” implantable insulin pumps that may bring relief to people with Type 1 diabetes [120]. The delivery would be based on a mathematical model stored on a microchip, working in connection with glucose sensors that would instruct the chip when to release the insulin. The goal is for the model to be able to simulate the activity of the pancreas so that the right dosage is delivered at the right time.

Fig. 19. The VeriChip microchip, the first microchip implant to be cleared by the U.S. Food and Drug Administration (FDA) for humans, is a passive microchip that contains a 16-digit number, which can be used to retrieve critical medical information on a patient from a secure online database. The company that owns the VeriChip technology is developing a microscopic glucose sensor to put on the end of the chip to eliminate a diabetic's need to draw blood to get a blood glucose reading. Courtesy of PositiveID Corporation.

Beyond insulin pumps, we are now nearing a time when automated closed-loop glucose sensing and insulin delivery (Fig. 19) will become a tangible treatment option and may serve as a temporary cure for Type 1 diabetes until stem cell therapy becomes available. “Closed-loop insulin delivery may revolutionize not only the way diabetes is managed but also patients' perceptions of living with diabetes, by reducing the burden on patients and caregivers, and their fears of complications related to diabetes, including those associated with low and high glucose levels” [121]. It is only a matter of time before these lab-centric results are replicated in real-life conditions in sufferers of Type 1 diabetes.
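The closed-loop idea just described can be sketched as a feedback controller: a glucose reading drives the insulin dose, standing in for the pancreas-simulating model stored on the implant's microchip. The setpoint, gain, and dose cap below are illustrative assumptions, not clinical values.

```python
# Toy proportional controller sketching closed-loop insulin delivery:
# dose insulin in proportion to how far the sensed glucose level sits
# above a target setpoint. Real artificial-pancreas algorithms are far
# more sophisticated (predictive models, insulin-on-board tracking).

def insulin_dose(glucose_mg_dl, setpoint=100.0, gain=0.02, max_dose=5.0):
    """Return an insulin dose (units) from a glucose reading (mg/dL).
    No insulin is delivered at or below the setpoint; doses are capped
    for safety."""
    error = glucose_mg_dl - setpoint
    if error <= 0:
        return 0.0  # at or below target: deliver nothing
    return min(max_dose, gain * error)

# One dose decision per sensor reading: low, moderately high, very high.
doses = [insulin_dose(g) for g in (90.0, 150.0, 400.0)]
```

The safety cap reflects the feedback-loop concern in the quoted passage: the controller must never respond to a single high reading with an unbounded dose, since overdelivery causes the dangerous low-glucose events patients fear most.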



G. To Implant or Not to Implant, That Is the Question

There are potentially 500 000 hearing-impaired persons who could benefit from cochlear implants [122], but not every deaf person wants one [123]. “Some deaf activists…are critical of parents who subject children to such surgery [cochlear implants] because, as one charged, the prosthesis imparts ‘the non-healthy self-concept of having had something wrong with one's body’ rather than the ‘healthy self-concept of [being] a proud Deaf’” [124]. Assistant Professor of Audiology Scott Bally of Gallaudet University (Washington, DC) has said, “Many deaf people feel as though deafness is not a handicap. They are culturally deaf individuals who have successfully adapted themselves to being deaf and feel as though things like cochlear implants would take them out of their deaf culture, a culture which provides a significant degree of support” [92]. Putting this delicate debate aside, it is here that some delineation can be made between implants that are used to treat an ailment or disability (e.g., giving sight to the blind and hearing to the deaf) and implants that may be used to enhance human function (e.g., memory). There are some citizens, like Amal Graafstra of the United States [125], who are getting chip implants for convenience-oriented social living solutions that would usher in a world of keyless entry everywhere (Fig. 20). And there are other citizens who are concerned about the direction of the human species, as credible scientists predict fully functional neural implants. “[Q]uestions are raised as to how society as a whole will relate to people walking around with plugs and wires sprouting out of their heads. And who will decide which segments of the society become the wire-heads” [92]?


Fig. 20. Amal Graafstra demonstrating an RFID-operated door latch application he developed. Over the RFID tag site on his left hand is a single steristrip that remained after implantation for a few days. His right hand is holding the door latch.


SECTION V. Überveillance and Function Creep

Section IV focused on implants that were attempts at “orthopedic replacements”: corrective in nature, required to repair a function that is either lying dormant or has failed altogether. Implants of the future, however, will attempt to add new “functionality” to native human capabilities, either through extensions or additions. Globally acclaimed scientists have pondered on the ultimate trajectory of microchip implants [126]. The literature is admittedly mixed in its viewpoints of what will and will not be possible in the future [127].

For those of us working in the domain of implantables for medical and nonmedical applications, the message is loud and clear: implantables will be the next big thing. At first, it will be “hip to get a chip.” The extreme novelty of the microchip implant will mean that early adopters will race to see how far they can push the limits of the new technology. Convenience solutions will abound [128]. Implantees will not be able to get enough of the new product, and the benefits of the technology will be touted to consumers in a myriad of ways, although these perceived benefits will not always be realized. The technology will probably first be tested where there will be the least effective resistance from the community at large, that is, on prison inmates [129], and then on those suffering from dementia. These incremental steps in pilot trials and deployment are fraught with moral consequences. Prisoners cannot opt out when jails adopt tracking technology, and those suffering from cognitive disorders have not provided, and could not provide, their consent. From there it will conceivably not take long for the technology to be used on the elderly, on children, and on those suffering from clinical depression.

The functionality of the implants will range from passive ID-only to active multiapplication, and most invasive will be the medical devices that can upon request or algorithmic reasoning release drugs or electrically stimulate the body for mental and physical stability. There will also be a segment of the consumer and business markets who will adopt the technology for no clear reason and without too much thought, save for the fact that the technology is new and seems to be the way advanced societies are heading. This segment will probably not be overly concerned with any discernible abridgement of their human rights or the fine-print “terms and conditions” agreement they have signed, but will take an implant on the promise that they will have greater connectivity to the Internet, for example. These consumers will thrive on ambient intelligence, context-aware pervasive applications, and an augmented reality—ubiquity in every sense.

But it is certain that the new technology will also have consequences far greater than what we can presently envision. Questions about the neutrality of technology are immaterial in this new “plugged-in” order of existence. For Brin [130, p. 334], the question ultimately has to do with the choice between privacy and freedom. In his words, “[t]his is one of the most vile dichotomies of all. And yet, in struggling to maintain some beloved fantasies about the former, we might willingly, even eagerly, cast the latter away.” And thus there are two possibilities, just as Brin [130] writes in his amazingly insightful book The Transparent Society, of “the tale of two cities.” Either implants embedded in humans, together with their required infrastructure, will create a utopia where there is built-in intelligence for everything and everyone in every place, or they will create a dystopia that is destructive and diminishes one's freedom of choice, individuality, and finally that indefinable essence which is at the core of making one feel “human.” A third possibility, a middle way between these two alternatives, would seem unlikely, except for the “off the grid” dissenter.

In Section V-A, we portray some of the attractions people may feel that will draw them into the future world of implanted technologies. In Section V-B, we portray some of the problems associated with implanting technology under the skin that would drive people away from opting in to such a future.

A. The Positive Possibilities

Bearing an implant will make individuals feel special because they carry a unique ID. Each person will have one implant, which will coordinate hundreds of smaller nanodevices, though each nanodevice will have the capacity to act on its own accord. The philosophy espoused behind taking an implant will be one of protection: “I bear an implant and I have nothing to hide.” It will feel safe to have an implant because emergency services, for example, will be able to respond rapidly to your calls for help or to any unforeseen events that automatically log problems to do with your health.

Fewer errors are also likely to happen if you have an implant, especially with financial systems. Businesses will experience a rise in productivity as they will understand how precisely their business operates to the nearest minute, and companies will be able to introduce significant efficiencies. Losses in back-end operations, such as the effects of product shrinkage, will diminish as goods will be followed down the supply chain from their source to their destination customer, through the distribution center and retailer.

It will take some years for the infrastructure supporting implants to grow and thrive with a substantial consumer base. The function creep will not become apparent until well after the early majority have adopted implants and downloaded and used a number of core applications to do with health, banking, and transport which will all be interlinked. New innovations will allow for a hybrid device and supplementary infrastructure to grow so powerful that living without automated tracking, location finding, and condition monitoring will be almost impossible.

B. The Existential Risks

It will take some years for the negative fallout from microchip implants to be exposed. At first only the victims of the fallout will speak out through formal exception reports on government agency websites. The technical problems associated with implants will pertain to maintenance, updates, viruses, cloning, hacking, radiation shielding, and onboard battery problems. But the greater problems will be the impact on the physiology and mental health of the individual: new manifestations of paranoia and severe depression will lead to people continually wanting reassurance about their implant's functionality. Issues about implant security, virus detection, and a personal database which is error free will be among the biggest issues facing implantees. Despite this, those who believe in the implant singularity (the piece of embedded technology that will give each person ubiquitous access to the Internet) will continue to stack up points and rewards and add to their social network, choosing rather to ignore the warnings of the ultimate technological trajectory of mind control and geoslavery [131]. It will have little to do with survival of the fittest at this point, although most people will buy into the notion of an evolutionary path toward the Homo Electricus [132]: a transhumanist vision [133] that we can do away with the body and become one with the Machine, one with the Cosmos—a “nuts and bolts” Nirvana where one's manufactured individual consciousness connects with the advanced consciousness evolving from the system as a whole. In this instance, it will be the ecstatic experience of being drawn ever deeper into the electric field of the “Network.”

Some of the more advanced implants will be able to capture and validate location-based data, alongside recordings (visual and audio capture). The ability to conduct überveillance via the implant will be linked to a type of blackbox recorder, as in an airplane's cockpit. Only in this case the cockpit will be the body, and the recorder will be embedded just beneath the translucent layer of the skin, to be used for memory recollection and dispute resolution. Outwardly, it would ensure that people tell the full story at all times; there would be no lies or claims of poor memory. Überveillance is an above-and-beyond, exaggerated, omnipresent 24/7 electronic surveillance (Fig. 21). It is a surveillance that is not only “always on” but “always with you.” It is ubiquitous because the technology that facilitates it, in its ultimate implementation, is embedded within the human body. The problem with this kind of bodily invasive surveillance is that omnipresence in the “material” world will not always equate with omniscience, hence the real concern for misinformation, misinterpretation, and information manipulation [7]. While it might seem like the perfect technology to aid in real-time forensic profiling and criminalization, it will be open to abuse, just like any other technique, and more so because of the preconception that it is infallible.


Fig. 21. The überveillance triquetra as the intersection of surveillance, dataveillance, and sousveillance. Courtesy of Alexander Hayes.


SECTION VI. Technology Roadmapping

According to Andrews cited in [1], a second intellectual current within the IEEE SSIT has begun to emerge which is more closely aligned with most of the IEEE technical societies, as well as economics and business. The proponents of this mode participate in “technology foresight” and “roadmapping” activities, and view technology more optimistically, looking to foster innovation without being too concerned about its possible negative effects [1, p. 14]. Braun [134, p. 133] writes that “[f]orecasts do not state what the future will be…they attempt to glean what it might be.” Thus, one with technology foresight can be trusted insofar as their knowledge and judgment go—they may possess foresight through their grasp of current knowledge, through past experiences which inform their forecasts, and through raw intuition.

Various MIT Labs, such as the Media Lab, have been engaged in visionary research since before 1990, giving society a good glimpse of where technology might be headed some 20–30 years ahead of time. It is from such elite groups that visionaries typically emerge whose main purpose is to envision the technologies that will better our wellbeing and generally make life more productive and convenient in the future. Consider the current activities of the MIT Media Lab's Affective Computing Research Group directed by Prof. Rosalind W. Picard that is working hard on technology aids encapsulating “affect sensing” in response to the growing problem of autism [135]. The Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology. The work of Picard's group was made possible by the foundations laid by the Media Lab's predecessor researchers.

On the global technological roadmap we can now point to the following systems which are already under development but have not yet been widely diffused into the market:

  • alternative fuels heralding innovations like self-driving electric cars and ocean-powered energy, as well as the rise of biofuels;

  • the potential for 3-D printing which will revolutionize prototyping and manufacturing practices and possibly reconstruct human tissue;

  • hologram projections for videoconferencing and televisions that respond to gestures as well as pen-sized computing which will do away with keyboards and screens;

  • quantum computing and cryptography;

  • next-generation prosthetics (Fig. 22);

  • cognitive machines such as robot humanoids;

  • carbon nanotubes and nanotech computing which will make our current silicon chips look gargantuan;

  • genetic engineering breakthroughs and regenerative health treatment such as stem cell treatment;

  • electronic banking that will not use physical cash for transactions but the singularity chip (e.g., implant);

  • ubiquitous high-speed wireless networks;

  • crowdsourced surveillance toward real-time forensic profiling and criminalization;

  • autogenerated visual life logs and location chronicles;

  • enhanced batteries that last longer;

  • body power to charge digital equipment [136];

  • brainwave-based technologies in health/gaming;

  • brain-reading technology for interrogation [137].


Fig. 22. Army Reserve Staff Sgt. Alfredo De Los Santos displays what the X2 microprocessor knee prosthetic can do by walking up a flight of stairs at the Military Advanced Training Center at Walter Reed Army Medical Center (Washington, DC), December 8, 2009. Patients at Walter Reed are testing next-generation prosthetics. Courtesy of the U.S. Army.

It is important to note that while these new inventions have the ability to make things faster and better for most living in more developed countries, they can act to increase the ever-widening gap between the rich and the poor. New technologies will not necessarily aid in eradicating the poverty cycle in parts of Africa and South America. In fact, new technologies can have the opposite effect—they can create an ever greater chasm in equity and access to knowledge.

Technology foresight is commonly exercised by those engaged in the act of prediction. Predictive studies are more often than not based on past and present trends, and they use this knowledge to provide a roadmap of future possibilities. There is some degree of imagination in prediction, and certainly the creative element is prevalent. Predictions are not meant to be wild, but calculated wisely, with evidence showing that a given course or path is likely in the future. However, this does not mean that all predictions come true. Predictive studies can be about new inventions and new form factors, the recombination of existing innovations in new ways (hybrid architectures, for example), or the mutation of an existing innovation. Some predictive studies have heavy quantitative forecasting components that use complex models to predict the introduction of new innovations, some even based on historical data inputs.

Before an invention has been diffused into the market, scenario planning is conducted to understand how the technology might be used, who might take it up, and what percentage of society will be willing to adopt the product over time (i.e., consumption analysis). “Here the emphasis is on predicting the development of the technology and assessing its potential for adoption, including an analysis of the technology's market” [138, p. 328].

Even Microsoft founder Bill Gates [139, p. 274] accepted that his predictions might not come true. But his insights in The Road Ahead are to be commended, even though they were understandably broad. Gates wrote, “[t]he information highway will lead to many destinations. I've enjoyed speculating about some of these. Doubtless I've made some foolish predictions, but I hope not too many.” Allaby [140, p. 206] writes, “[f]orecasts deal in possibilities, not inevitabilities, and this allows forecasters to explore opportunities.”

For the greater part, forecasters raise challenging, thought-provoking issues about how existing inventions or innovations will impact society. They give scenarios for a technology's projected pervasiveness, how it may affect other technologies, what potential benefits or drawbacks it may introduce, how it will affect the economy, and much more.

Kaku [141, p. 5] has argued “that predictions about the future made by professional scientists tend to be based much more substantially on the realities of scientific knowledge than those made by social critics, or even those by scientists of the past whose predictions were made before the fundamental scientific laws were completely known.” He believes that among the scientific body today there is a growing concern regarding predictions that, for the greater part, come from consumers of technology rather than those who shape and create it. Kaku is, of course, correct insofar as scientists should be consulted, since they are the ones actually making things possible after discoveries have occurred. But a balanced view, encompassing the perspectives of different disciplines, is necessary and extremely important.

In the 1950s, for instance, when technical experts forecasted improvements in computer technology, they envisaged even larger machines—but science fiction writers predicted microminiaturization. They “[p]redicted marvels such as wrist radios and pocket-sized computers, not because they foresaw the invention of the transistor, but because they instinctively felt that some kind of improvement would come along to shrink the bulky computers and radios of that day” (Bova, 1988, quoted in [142, p. 18]). The methodologies used as vehicles to predict in each discipline should be respected. The question of who is more correct in predicting the future is perhaps the wrong question. For example, some of Kaku's own predictions in Visions can be found in science fiction movies dating back to the 1960s.

In speculating about the next 500 years, Berry [142, p. 1] writes, “[p]rovided the events being predicted are not physically impossible, then the longer the time scale being considered, the more likely they are to come true…if one waits long enough everything that can happen will happen.”



When Ellul [143, p. 432] in 1964 predicted the use of “electronic banks” in his book The Technological Society, he was not referring to the computerization of financial institutions or the use of automatic teller machines (ATMs). Rather, it was in the context of the possibility of the dawn of a new entity: the conjoining of man with machine. Ellul was predicting that one day knowledge would be accumulated in electronic banks and “transmitted directly to the human nervous system by means of coded electronic messages…[w]hat is needed will pass directly from the machine to the brain without going through consciousness…” As unbelievable as this man–machine complex may have sounded at the time, 45 years later visionaries are still predicting that such scenarios will be possible by the turn of the 22nd century. A large proportion of these visionaries are cyberneticists. Cybernetics is the study of nervous system controls in the brain as a basis for developing communications and controls in sociotechnical systems. Parenthetically, in some places writers continue to confuse cybernetics with robotics; they may overlap in some instances, but they are not the same thing.

Kaku [141, pp. 112–116] observes that scientists are working steadily toward a brain–computer interface (Fig. 23). The first step is to show that individual neurons can grow on silicon and then to connect the chip directly to a neuron in an animal. The next step is to mimic this connectivity in a human, and the last is to decode millions of neurons which constitute the spinal cord in order to interface directly with the brain. Cyberpunk science fiction writers like William Gibson [144] refer to this notion as “jacking-in” with the wetware: plugging in a computer cable directly with the central nervous system (i.e., with neurons in the brain analogous to software and hardware) [139, p. 133].


Fig. 23. Brain–computer interface schema. (1) Pedestal. (2) Sensor. (3) Electrode. Courtesy of Balougador under Creative Commons license.

In terms of the current state of development we can point to the innovation of miniature wearable media, orthopedic replacements (including pacemakers), bionic prosthetic limbs, humanoid robots (i.e., a robot that looks like a human in appearance and is autonomous), and RFID implants. Traditionally, the term cyborg has been used to describe humans who have some mechanical parts or extensions. Today, however, we are on the brink of building a new sentient being, a bearer of electricity, a modern man belonging to a new race, beyond that which can be considered merely part man part machine. We refer here to the absolute fusion of man and machine, where the subject itself becomes the object; where the toolmaker becomes one with his tools [145]. The question at this point of coalescence is how human will the new species be [146], and what are the related ethical, metaphysical, and ontological concerns? Does the evolution of the human race as recorded in history come to an end when technology can be connected to the body in a wired or wireless form?

A. From Prosthetics to Amplification

Fig. 24. Cyborg 2.0 Project. Kevin Warwick with wife Irena during the Cyborg 2.0 project. Courtesy of Kevin Warwick.

While orthopedic replacements, corrective in nature, have been around since the 1950s [147] and are required to repair a function that is either lying dormant or has failed altogether, implants of the future will attempt to add new functionality to native human capabilities, either through extensions or additions. Warwick's Cyborg 2.0 project [148], for instance, intended to prove that two persons with respective implants could communicate sensation and movement by thoughts alone. In 2002, the BBC reported that a tiny silicon square with 100 electrodes was connected to the professor's median nerve and linked to a transmitter/receiver in his forearm. Although “Warwick believe[d] that when he move[d] his own fingers, his brain [would] also be able to move Irena's” [104, p. 1], the outcome of the experiment was described, at best, as sending “Morse-code” messages (Fig. 24). Warwick [148] still believes that a person's brain could be directly linked to a computer network [149]. Commercial players are also intent on keeping ahead, continually funding projects in this area of research.


If Warwick is right, then terminals like telephones would eventually become obsolete if thought-to-thought communication became possible. Warwick describes this as “putting a plug into the nervous system” [104] to be able to allow thoughts to be transferred not only to another person but to the Internet and other media. While Warwick's Cyborg 2.0 may not have achieved its desired outcomes, it did show that a form of primitive Morse-code-style nervous-system-to-nervous-system communication is realizable [150]. Warwick is bound to keep trying to achieve his project goals given his philosophical perspective. And if Warwick does not succeed, he will have at least left behind a legacy and enough stimuli for someone else to succeed in his place.


B. The Soul Catcher Chip

The Soul Catcher chip was conceived by former Head of British Telecom Research, Peter Cochrane. Cochrane [151, p. 2] believes that the human body is merely a carcass that serves as a transport mechanism just like a vehicle, and that the most important part of our body is our brain (i.e., mind). Similarly, Miriam English has said: “I like my body, but it's going to die, and it's not a choice really I have. If I want to continue, and I want desperately to see what happens in another 100 years, and another 1000 years…I need to duplicate my brain in order to do that” [152]. Soul Catcher is all about the preservation of a human, way beyond the point of physical debilitation. The Soul Catcher chip would be implanted in the brain, and act as an access point to the external world [153]. Consider being able to download the mind onto computer hardware and then creating a global nervous system via wireless Internet [154] (Fig. 25). Cochrane has predicted that by 2050 downloading thoughts and emotions will be commonplace. Billinghurst and Starner [155, p. 64] predict that this kind of arrangement will free up the human intellect to focus on creative rather than computational functions.


Fig. 25. Ray Kurzweil predicts that by 2013 supercomputer power will be sufficient for human brain functional simulation and by 2025 for human brain neural simulation for uploading. Courtesy of Ray Kurzweil and Kurzweil Technologies 2005.

Cochrane's beliefs are shared by many others engaged in the transhumanist movement (especially Extropians like Alexander Chislenko). Transhumanism (sometimes known by the abbreviations “>H” or “H+”) is an international cultural movement that consists of intellectuals who look at ways to extend life through the application of emerging sciences and technologies. Minsky [156] believes that this will be the next stage in human evolution—a way to achieve true immortality “replacing flesh with steel and silicon” [141, p. 94]. Chris Winter of British Telecom has claimed that Soul Catcher will mean “the end of death.” Winter predicts that by 2030, “[i]t would be possible to imbue a newborn baby with a lifetime's experiences by giving him or her the Soul Catcher chip of a dead person” [157]. The philosophical implications behind such movements are gigantic; they reach deep into every branch of traditional philosophy, especially metaphysics with its special concerns over cosmology and ontology.


SECTION VIII. The Next 100 Years: Homo Electricus

A. The Rise of the Electrophorus

Fig. 26. Drawing showing the operation of an Electrophorus, a simple manual electrostatic generator invented in 1762 by Swedish Professor Johan Carl Wilcke. Image by Amédée Guillemin (died 1893).

Microchip implants are integrated circuit devices encased in RFID transponders that can be active or passive and are implantable into animals or humans usually in the subcutaneous layer of the skin. The human who has been implanted with a microchip that can send or receive data is an Electrophorus, a bearer of “electric” technology [158]. The Macquarie Dictionary definition of “electrophorus” is “an instrument for generating static electricity by means of induction,” and refers to an instrument used in the early years of electrostatics (Fig. 26).


We have repurposed the term electrophorus to apply to humans implanted with microchips. One who “bears” is in some way intrinsically or spiritually connected to that which they are bearing, in the same way an expecting mother is to the child in her womb. The root electro comes from the Greek word meaning “amber,” and phorus means to “wear, to put on, to get into” [159, p. 635]. When an Electrophorus passes through an electromagnetic zone, he/she is detected and data can be passed from an implanted microchip (or in the future directly from the brain) to a computer device.

To electronize something is “to furnish it with electronic equipment” and electrotechnology is “the science that deals with practical applications of electricity.” The term “electrophoresis” has been borrowed here, to describe the “electronic” operations that an electrophorus is involved in. McLuhan and Zingrone [160, p. 94] believed that “electricity is in effect an extension of the nervous system as a kind of global membrane.” They argued that “physiologically, man in the normal use of technology (or his variously extended body) is perpetually modified by it and in turn finds ever new ways of modifying his technology” [161, p. 117].

The term “electrophorus” seems much more suitable today for expressing the human–electronic combination than the term “cyborg.” “Electrophorus” distinguishes strictly electrical implants from mechanical devices such as artificial hips. It is not surprising, then, that these crucial matters of definition raise philosophical and sociological questions of consciousness and identity, which science fiction writers have been addressing creatively. The Electrophorus belongs to the emerging species of Homo Electricus. In its current state, the Electrophorus relies on a device being triggered wirelessly when it enters an electromagnetic field. In the future, the Electrophorus will act like a network element or node, allowing information to pass through him or her, to be stored locally or remotely, and to send and receive messages simultaneously, with some processed actively and others as background tasks.

At the point of becoming an Electrophorus (i.e., a bearer of electricity), Brown [162] makes the observation that “[y]ou are not just a human linked with technology; you are something different and your values and judgment will change.” Some suspect that it will even become possible to alter the behavior of people carrying brain implants, whether the individual wills it or not. Maybury [163] believes that “[t]he advent of machine intelligence raises social and ethical issues that may ultimately challenge human existence on earth.”

B. The Prospects of Transhumanism

Fig. 27. The transhumanism symbol. Courtesy of Antonu under Creative Commons license.

Thought-to-thought communications may seem outlandish today, but this is only one of many futuristic hopes of the movement termed transhumanism. Probably the most representative organization for this movement is the World Transhumanist Association (WTA), which recently adopted the doing-business-as name of “Humanity+” (Fig. 27). The WTA's website [164] carries the following succinct statement of what transhumanism is, penned originally by Max More in 1990: “Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.” Whether or not transhumanism yet qualifies as a philosophy, it cannot be denied that it has produced its share of both proponents and critics.


Proponents of transhumanism claim that the things they want are the things everyone wants: freedom from pain, freedom from suffering, freedom from all the limitations of the human body (including mental as well as physical limitations), and ultimately, freedom from death. One of the leading authors in the transhumanist movement is Ray Kurzweil, whose 652-page book The Singularity Is Near [165] prophesies a time in the not-too-distant future when evolution will accelerate exponentially and bring to pass all of the above freedoms as “the matter and energy in our vicinity will become infused with the intelligence, knowledge, creativity, beauty, and emotional intelligence (the ability to love, for example) of our human-machine civilization. Our civilization will then expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent—transcendent—matter and energy” [165, p. 389].

Despite the almost theological tone of the preceding quote, Kurzweil has established a sound track record as a technological forecaster, at least when it comes to Moore's-Law-type predictions of the progress of computing power. But the ambitions of Kurzweil [178] and his allies go far beyond next year's semiconductor roadmap to encompass the future of all humanity. If the fullness of the transhumanist vision is realized, the following achievements will come to pass:

  • human bodies will cease to be the physical instantiation of human minds, replaced by as-yet-unknown hardware with far greater computational powers than the present human brain;

  • human minds will experience, at their option, an essentially eternal existence in a world free from the present restrictions of material embodiment in biological form;

  • limitations on will, intelligence, and communication will all be overcome, so that to desire a thing or experience will be to possess it.

The Transhumanist Declaration, last modified in 2009 [166], recognizes that these plans have potential downsides, and calls for reasoned debate to avoid the risks while realizing the opportunities. The sixth item in the Declaration, for example, declares that “[p]olicy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe.” The key phrase in this item is “moral vision.” While many self-declared transhumanists may agree on the moral vision which should guide their endeavors, the movement has also inspired some of the most vigorous and categorically critical invective to be found in the technical and public-policy literature.

Possibly the best known of the vocal critics of transhumanism is Francis Fukuyama, a political scientist who nominated transhumanism as his choice for the world's most dangerous idea [167]. As with most utopian notions, the main problem Fukuyama sees with transhumanism is the transition between our present state and the transhumanists' future vision of completely realized eternal technological bliss (Fig. 28). Will some people be uploaded to become immortal, almost omniscient transhumans while others are left behind in their feeble, mortal, disease-ridden human bodies? Are the human goods that transhumanists say are basically the same for everyone really so? Or are they more complex and subtle than typical transhumanist pronouncements acknowledge? As Fukuyama points out in his Foreign Policy essay [167], “Our good characteristics are intimately connected to our bad ones… if we never felt jealousy, we would also never feel love. Even our mortality plays a critical function in allowing our species as a whole to survive and adapt (and transhumanists are about the last group I would like to see live forever).”


Fig. 28. Brain in a vat with the thought: “I'm walking outside in the sun” being transmitted to the computer. Image reproduced under the Creative Commons license.

Transhumanists themselves admit that their movement performs some of the functions of a religion when it “offers a sense of direction and purpose.” But in contrast to most religions, transhumanists explicitly hope to “make their dreams come true in this world” [168]. Nearly all transhumanist programs and proposals arise from a materialist–reductionist view of the world which assumes that the human mind is at most an epiphenomenon of the brain, all of the human brain's functions will eventually be simulated by hardware (on computers of the future), and that the experience known as consciousness can be realized in artificial hardware in essentially the same form as it is presently realized in the human body. Some of the assumptions of transhumanism are based less on facts and more on faith. Just as Christians take on faith that God revealed Himself in Jesus Christ, transhumanists take on faith that machines will inevitably become conscious.

Fig. 29. The Shadow Dextrous Hand shakes the human hand. How technology might become society—a future agreement. Courtesy of Shadow Robot Company 2008.

In keeping with the transhumanists' call for responsible moral vision, the IEEE SSIT has been, and will continue to be, a forum where the implications for society of all sorts of technological developments can be debated and evaluated. In a sense, the transhumanist program is the ultimate technological project: to redesign humanity itself to a set of specifications, determined by us. If the transhumanists succeed, technology will become society, and the question of the social implications of technology will be moot (Fig. 29). Perhaps the best attitude to take toward transhumanism is to pay attention to their prophecies, but, as the Old Testament God advised the Hebrews, “if the thing follow not, nor come to pass…the prophet hath spoken it presumptuously…” [169].



SECTION IX. Ways forward

In sum, identifying and predicting what the social implications of past, present and future technologies might be can lead us to act in one of four ways, which are not mutually exclusive.

First, we can take the “do nothing” approach and meekly accept the risks associated with new techniques. We stop being obsessed by both confirmed and speculative consequences and, instead, try to see how far the new technologies might take us and what we might become or transform into as a result. While humans might not always like change, we are by nature, if we might hijack Heraclitus, in a continual state of flux. We might reach new potentials as a populace, become extremely efficient at doing business with each other, and make a positive impact on our natural environment by doing so. The downside to this approach is that it appears to be an all-or-nothing approach with no built-in decision points. For as Jacques Ellul [170] forewarned: “what is at issue here is evaluating the danger of what might happen to our humanity in the present half-century, and distinguishing between what we want to keep and what we are ready to lose, between what we can welcome as legitimate human development and what we should reject with our last ounce of strength as dehumanization.”

The second option is that we let case law determine for us what is legal or illegal based on existing laws, or new or amended laws we might introduce as a result of the new technologies. We can take the stance that the courts are in the best position to decide on what we should and should not do with new technologies. If we break the law in a civil or criminal capacity, then there is a penalty and we have civil and criminal codes concerning workplace surveillance, telecommunications interception and access, surveillance devices, data protection and privacy, cybercrime, and so on. There is also the continual review of existing legislation by law-reform commissions and the like. New legislation can also be introduced to curb against other dangers or harms that might eventuate as a result of the new techniques.

The third option is that we can introduce industry regulations that stipulate how advanced applications should be developed (e.g., ensuring privacy impact assessments are done before commercial applications are launched), and that technical expectations on accuracy, reliability, and storage of data are met. It is also important that the right balance be found between regulations and freedom so as not to stifle the high-tech industry at large.

Finally, the fourth option would be to adopt the “Amish method”: complete abandonment of technology that has progressed beyond a certain point of development. This is in some respect “living off the grid” [171].

Although obvious, it is important to underline that none of these options is foolproof, and they are not mutually exclusive. The appropriate response may well be at times to introduce industry regulations or codes, at other times to do nothing, and in other cases to rely on legislative amendments, despite the length of time it takes to develop these. In still other cases, the safeguards may need to be built into the technology itself.


SECTION X. Conclusion

If we put our trust in Kurzweil's [172] Law of Accelerating Returns, we are likely headed into a great period of discovery unprecedented in any era of history. This being the case, the time for inclusive dialog is now, not after widespread diffusion of such innovations as “always on” cameras, microchip implants, unmanned drones and the like. We stand at a critical moment of decision, as the mythological Pandora did as she was about to open her box. There are many lessons to be learned from history, especially from such radical developments as the atomic bomb and the resulting arms race. Joy [173] has raised serious fears about continuing unfettered research into “spiritual machines.” Will humans have the foresight to say “no” or “stop” to new innovations that could potentially be a means to a socially destructive scenario? Implants that may prolong life expectancy by hundreds if not thousands of years may appeal at first glance, but they could well create unforeseen devastation in the form of technological viruses, plagues, or a grim escalation in the levels of crime and violence.
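Kurzweil's Law of Accelerating Returns is, at bottom, a claim of exponential growth in technological capability. A toy sketch makes the scale of that claim concrete; the two-year doubling period below is an illustrative, Moore's-Law-like assumption, not a figure taken from Kurzweil or from this paper:

```python
def capability(years, initial=1.0, doubling_period=2.0):
    """Toy model of exponentially 'accelerating returns':
    capability(t) = initial * 2 ** (t / doubling_period).

    The 2-year doubling period is an illustrative assumption.
    """
    return initial * 2 ** (years / doubling_period)

if __name__ == "__main__":
    # With doubling every 2 years, 40 years yields a million-fold gain.
    for y in (10, 20, 40):
        print(f"after {y} years: {capability(y):,.0f}x the initial capability")
```

The point of the sketch is only that under sustained doubling, a few decades compound into gains of many orders of magnitude, which is why the paper argues the time for inclusive dialog is now rather than after widespread diffusion.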

To many scientists of the positivist tradition, anchored solely to an empirical world view, the notion of whether something is right or wrong is in a way irrelevant. For these researchers, a moral stance has little or nothing to do with technological advancement but is really an ideological position. The extreme of this view is exemplified by an attitude of “let's see how far we can go,” not “is what we are doing the best thing for humanity?” and certainly not by the thought of “what are the long-term implications of what we are doing here?” As an example, one need only consider the mad race to clone the first animal; many have long suspected that an “underground” scientific race to clone the first human continues.

In the current climate of innovation, precisely since the proliferation of the desktop computer and birth of new digital knowledge systems, some observers believe that engineers, and professionals more broadly, lack accountability for the tangible and intangible costs of their actions [174, p. 288]. Because science-enabled engineering has proved so profitable for multinational corporations, they have gone to great lengths to persuade the world that science should not be stopped, for the simple reason that it will always make things better. This ignores the possibility that even seemingly small advancements into the realm of the Electrophorus for any purpose other than medical prostheses will have dire consequences for humanity [175]. According to Kuhns, “Once man has given technique its entry into society, there can be no curbing of its gathering influence, no possible way of forcing it to relinquish its power. Man can only witness and serve as the ironic beneficiary-victim of its power” [176, p. 94].

Clearly, none of the authors of this paper desires to stop technological advance in its tracks. But we believe that considering the social implications of past, present, and future technologies is more than an academic exercise. As custodians of the technical means by which modern society exists and develops, engineers have a unique responsibility to act with forethought and insight. The time when following the orders of a superior was all that an engineer had to do is long past. With great power comes great responsibility. Our hope is that the IEEE SSIT will help and encourage engineers worldwide to consider the consequences of their actions throughout the next century.


1. K. D. Stephan, "Notes for a history of the IEEE society on social implications of technology", IEEE Technol. Soc. Mag., vol. 25, no. 4, pp. 5-14, 2006.

2. B. R. Inman, "One view of national security and technical information", IEEE Technol. Soc. Mag., vol. 1, no. 3, pp. 19-21, Sep. 1982.

3. S. Sloan, "Technology and terrorism: Privatizing public violence", IEEE Technol. Soc. Mag., vol. 10, no. 2, pp. 8-14, 1991.

4. J. R. Shanebrook, "Prohibiting nuclear weapons: Initiatives toward global nuclear disarmament", IEEE Technol. Soc. Mag., vol. 18, no. 2, pp. 25-31, 1999.

5. C. J. Andrews, "National responses to energy vulnerability", IEEE Technol. Soc. Mag., vol. 25, no. 3, pp. 16-25, 2006.

6. R. C. Arkin, "Ethical robots in warfare", IEEE Technol. Soc. Mag., vol. 28, no. 1, pp. 30-33, 2009.

7. M. G. Michael, K. Michael, "Toward a state of überveillance", IEEE Technol. Soc. Mag., vol. 29, no. 2, pp. 9-16, 2010.

8. V. Baranauskas, "Large-scale fuel farming in Brazil", IEEE Technol. Soc. Mag., vol. 2, no. 1, pp. 12-13, Mar. 1983.

9. H. M. Gueron, "Nuclear power: A time for common sense", IEEE Technol. Soc. Mag., vol. 3, no. 1, pp. 3-9, Mar. 1984.

10. J. J. Mackenzie, "Nuclear power: A skeptic's view", IEEE Technol. Soc. Mag., vol. 3, no. 1, pp. 9-15, Mar. 1984.

11. E. Larson, D. Abrahamson, P. Ciborowski, "Effects of atmospheric carbon dioxide on U. S. peak electrical generating capacity", IEEE Technol. Soc. Mag., vol. 3, no. 4, pp. 3-8, Dec. 1984.

12. P. C. Cruver, "Greenhouse effect prods global legislative initiatives", IEEE Technol. Soc. Mag., vol. 9, no. 1, pp. 10-16, Mar./Apr. 1990.

13. B. Allenby, "Earth systems engineering and management", IEEE Technol. Soc. Mag., vol. 19, no. 4, pp. 10-24, Winter 2000.

14. J. C. Davis, "Protecting intellectual property in cyberspace", IEEE Technol. Soc. Mag., vol. 17, no. 2, pp. 12-25, 1998.

15. R. Brody, "Consequences of electronic profiling", IEEE Technol. Soc. Mag., vol. 18, no. 1, pp. 20-27, 1999.

16. K. W. Bowyer, "Face-recognition technology: Security versus privacy", IEEE Technol. Soc. Mag., vol. 23, no. 1, pp. 9-20, 2004.

17. D. Bütschi, M. Courant, L. M. Hilty, "Towards sustainable pervasive computing", IEEE Technol. Soc. Mag., vol. 24, no. 1, pp. 7-8, 2005.

18. R. Clarke, "Cyborg rights", IEEE Technol. Soc. Mag., vol. 30, no. 3, pp. 49-57, 2011.

19. E. Levy, D. Copp, "Risk and responsibility: Ethical issues in decision-making", IEEE Technol. Soc. Mag., vol. 1, no. 4, pp. 3-8, Dec. 1982.

20. K. R. Foster, R. B. Ginsberg, "Guest editorial: The wired classroom", IEEE Technol. Soc. Mag., vol. 17, no. 4, pp. 3, 1998.

21. T. Bookman, "Ethics professionalism and the pleasures of engineering: T&S interview with Samuel Florman", IEEE Technol. Soc. Mag., vol. 19, no. 3, pp. 8-18, 2000.

22. K. D. Stephan, "Is engineering ethics optional", IEEE Technol. Soc. Mag., vol. 20, no. 4, pp. 6-12, 2001.

23. T. C. Jepsen, "Reclaiming history: Women in the telegraph industry", IEEE Technol. Soc. Mag., vol. 19, no. 1, pp. 15-19, 2000.

24. A. S. Bix, "‘Engineeresses’ invade campus", IEEE Technol. Soc. Mag., vol. 19, no. 1, pp. 20-26, 2000.

25. J. Coopersmith, "Pornography videotape and the internet", IEEE Technol. Soc. Mag., vol. 19, no. 1, pp. 27-34, 2000.

26. D. M. Hughes, "The internet and sex industries: Partners in global sexual exploitation", IEEE Technol. Soc. Mag., vol. 19, no. 1, pp. 35-41, 2000.

27. V. Cimagalli, M. Balsi, "Guest editorial: University technology and society", IEEE Technol. Soc. Mag., vol. 20, no. 2, pp. 3, 2001.

28. G. L. Engel, B. M. O'Connell, "Guest editorial: Ethical and social issues criteria in academic accreditation", IEEE Technol. Soc. Mag., vol. 21, no. 3, pp. 7, 2002.

29. J. C. Lucena, G. Downey, H. A. Amery, "From region to countries: Engineering education in Bahrain Egypt and Turkey", IEEE Technol. Soc. Mag., vol. 25, no. 2, pp. 4-11, 2006.

30. C. Didier, J. R. Herkert, "Volunteerism and humanitarian engineering—Part II", IEEE Technol. Soc. Mag., vol. 29, no. 1, pp. 9-11, 2010.

31. K. Michael, G. Roussos, G. Q. Huang, R. Gadh, A. Chattopadhyay, S. Prabhu, P. Chu, "Planetary-scale RFID services in an age of uberveillance", Proc. IEEE, vol. 98, no. 9, pp. 1663-1671, Sep. 2010.

32. M. G. Michael, K. Michael, "The fall-out from emerging technologies: On matters of surveillance social networks and suicide", IEEE Technol. Soc. Mag., vol. 30, no. 3, pp. 15-18, 2011.

33. M. U. Iqbal, S. Lim, "Privacy implications of automated GPS tracking and profiling", IEEE Technol. Soc. Mag., vol. 29, no. 2, pp. 39-46, 2010.

34. D. Kravets, "OnStar tracks your car even when you cancel service", Wired, Sep. 2011.

35. L. Evans, "Location-based services: Transformation of the experience of space", J. Location Based Services, vol. 5, no. 3-4, pp. 242-260, 2011.

36. M. Wigan, R. Clarke, "Social impacts of transport surveillance", Prometheus, vol. 24, no. 4, pp. 389-403, 2006.

37. B. D. Renegar, K. Michael, "The privacy-value-control harmonization for RFID adoption in retail", IBM Syst. J., vol. 48, no. 1, pp. 8:1-8:14, 2009.

38. R. Clarke, "Information technology and dataveillance", Commun. ACM, vol. 31, no. 5, pp. 498-512, 1988.

39. H. Ketabdar, J. Qureshi, P. Hui, "Motion and audio analysis in mobile devices for remote monitoring of physical activities and user authentication", J. Location Based Services, vol. 5, no. 3-4, pp. 182-200, 2011.

40. E. Singer, "Device tracks how you're sleeping", Technol. Rev. Authority Future Technol., Jul. 2009.

41. L. Perusco, K. Michael, "Control trust privacy and security: Evaluating location-based services", IEEE Technol. Soc. Mag., vol. 26, no. 1, pp. 4-16, 2007.

42. K. Michael, A. McNamee, M. G. Michael, "The emerging ethics of humancentric GPS tracking and monitoring", ICMB M-Business-From Speculation to Reality, 2006.

43. S. J. Fusco, K. Michael, M. G. Michael, R. Abbas, "Exploring the social implications of location based social networking: An inquiry into the perceived positive and negative impacts of using LBSN between friends", 9th Int. Conf. Mobile Business/9th Global Mobility Roundtable (ICMB-GMR), 2010.

44. M. Burdon, "Commercializing public sector information privacy and security concerns", IEEE Technol. Soc. Mag., vol. 28, no. 1, pp. 34-40, 2009.

45. R. W. Picard, "Future affective technology for autism and emotion communication", Philosoph. Trans. Roy. Soc. London B Biol. Sci., vol. 364, no. 1535, pp. 3575-3584, 2009.

46. R. M. Kowalski, S. P. Limber, P. W. Agatston, Cyber Bullying: The New Moral Frontier, U.K., London: Wiley-Blackwell, 2007.

47. Google: Policies and Principles, Oct. 2011.

48. K.-S. Lee, "Surveillant institutional eyes in South Korea: From discipline to a digital grid of control", Inf. Soc., vol. 23, no. 2, pp. 119-124, 2007.

49. D. P. Siewiorek, "Wearable computing comes of age", IEEE Computer, vol. 32, no. 5, pp. 82-83, May 1999.

50. L. Sydnheimo, M. Salmimaa, J. Vanhala, M. Kivikoski, "Wearable and ubiquitous computer aided service maintenance and overhaul", IEEE Int. Conf. Commun., vol. 3, pp. 2012-2017, 1999.

51. K. Michael, M. G. Michael, Innovative Automatic Identification and Location-Based Services, New York: Information Science Reference, 2009.

52. K. Michael, M. G. Michael, "Implementing Namebers using microchip implants: The black box beneath the skin" in This Pervasive Day: The Potential and Perils of Pervasive Computing, U.K., London: Imperial College Press, pp. 101-142, 2011.

53. S. Mann, "Wearable computing: Toward humanistic intelligence", IEEE Intell. Syst., vol. 16, no. 3, pp. 10-15, May/Jun. 2001.

54. B. Schiele, T. Jebara, N. Oliver, "Sensory-augmented computing: Wearing the museum's guide", IEEE Micro, vol. 21, no. 3, pp. 44-52, May/Jun. 2001.

55. C. Harrison, D. Tan, D. Morris, "Skinput: Appropriating the skin as an interactive canvas", Commun. ACM, vol. 54, no. 8, pp. 111-118, 2011.

56. N. Sawhney, C. Schmandt, "Nomadic radio: A spatialized audio environment for wearable computing", Proc. IEEE 1st Int. Symp. Wearable Comput., pp. 171-172, 1997.

57. S. Mann, "Eudaemonic computing (‘underwearables’)", Proc. IEEE 1st Int. Symp. Wearable Comput., pp. 177-178, 1997.

58. Looxie Overview, Jan. 2012.

59. T. Starner, "The challenges of wearable computing: Part 1", IEEE Micro, vol. 21, no. 4, pp. 44-52, Jul./Aug. 2001.

60. G. Tröster, "Smart clothes—The unfulfilled pledge", IEEE Perv. Comput., vol. 10, no. 2, pp. 87-89, Feb. 2011.

61. M. B. Spitzer, "Eyeglass-based systems for wearable computing", Proc. IEEE 1st Int. Symp. Wearable Comput., pp. 48-51, 1997.

62. R. Steinkuhl, C. Sundermeier, H. Hinkers, C. Dumschat, K. Cammann, M. Knoll, "Microdialysis system for continuous glucose monitoring", Sens. Actuators B Chem., vol. 33, no. 1-3, pp. 19-24, 1996.

63. J. C. Pickup, F. Hussain, N. D. Evans, N. Sachedina, "In vivo glucose monitoring: The clinical reality and the promise", Biosens. Bioelectron., vol. 20, no. 10, pp. 1897-1902, 2005.

64. C. Thomas, R. Carlson, Development of the Sensing System for an Implantable Glucose Sensor, Jan. 2012.

65. J. L. Ferrero, "Wearable computing: One man's mission", IEEE Micro, vol. 18, no. 5, pp. 87-88, Sep.-Oct. 1998.

66. T. Martin, "Issues in wearable computing for medical monitoring applications: A case study of a wearable ECG monitoring device", Proc. IEEE 4th Int. Symp. Wearable Comput., pp. 43-49, 2000.

67. M. G. Michael, "The biomedical pioneer: An interview with C. Toumazou" in Innovative Automatic Identification and Location-Based Services, New York: Information Science Reference, pp. 352-363, 2009.

68. R. Capurro, M. Nagenborg, Ethics and Robotics, Germany, Heidelberg: Akademische Verlagsgesellschaft, 2009.

69. R. Sparrow, "Predators or plowshares? Arms control of robotic weapons", IEEE Technol. Soc. Mag., vol. 28, no. 1, pp. 25-29, 2009.

70. R. C. Arkin, "Governing lethal behavior in robots [T&S Interview]", IEEE Technol. Soc. Mag., vol. 30, no. 4, pp. 7-11, 2011.

71. M. L. Cummings, "Creating moral buffers in weapon control interface design", IEEE Technol. Soc. Mag., vol. 23, no. 3, pp. 28-33, 41, 2004.

72. P. Asaro, "Modeling the moral user", IEEE Technol. Soc. Mag., vol. 28, no. 1, pp. 20-24, 2009.

73. J. Canning, "You've just been disarmed. Have a nice day!", IEEE Technol. Soc. Mag., vol. 28, no. 1, pp. 13-15, 2009.

74. F. Operto, "Ethics in advanced robotics", IEEE Robot. Autom. Mag., vol. 18, no. 1, pp. 72-78, Mar. 2011.

75. J. M. Sullivan, "Evolution or revolution? The rise of UAVs", IEEE Technol. Soc. Mag., vol. 25, no. 3, pp. 43-49, 2006.

76. P. Salvini, M. Nicolescu, H. Ishiguro, "Benefits of human-robot interaction", IEEE Robot. Autom. Mag., vol. 18, no. 4, pp. 98-99, Dec. 2011.

77. A. Sharkey, N. Sharkey, "Children the elderly and interactive robots", IEEE Robot. Autom. Mag., vol. 18, no. 1, pp. 32-38, Mar. 2011.

78. D. Feil-Seifer, M. J. Mataric, "Socially assistive robotics", IEEE Robot. Autom. Mag., vol. 18, no. 1, pp. 24-31, Mar. 2011.

79. J. D. Bronzino, The Biomedical Engineering Handbook: Medical Devices and Systems, FL, Boca Raton: CRC Press, 2006.

80. C. Hassler, T. Boretius, T. Stieglitz, "Polymers for neural implants", J. Polymer Sci. B Polymer Phys., vol. 49, no. 1, pp. 18-33, 2011.

81. Bionic Hearing Bionic Vision Neurobionics, Jan. 2012.

82. A. Manning, "Implants sounding better: Smaller faster units overcome ‘nerve deafness’", USA Today, pp. 7D, 2000.

83. G. M. Clark, Sounds From Silence, Australia, Melbourne: Allen & Unwin, 2003.

84. G. Carman, Eureka Moment From First One to Hear With Bionic Ear, Feb. 2008.

85. J. F. Patrick, P. A. Busby, P. J. Gibson, "The development of the nucleus FreedomTM cochlear implant system", Sage Publications, vol. 10, no. 4, pp. 175-200, 2006.

86. "Personal stories", Cochlear, Jan. 2012.

87. R. A. Cooper, "Quality of life technology: A human-centered and holistic design", IEEE Eng. Med. Biol., vol. 27, no. 2, pp. 10-11, Mar./Apr. 2008.

88. S. Stewart, "Neuromaster", Wired 8.02.

89. J. Dowling, "Current and future prospects for optoelectronic retinal prostheses", Eye, vol. 23, pp. 1999-2005, 2009.

90. D. Ahlstrom, "Microchip implant could offer new kind of vision", The Irish Times.

91. More Tests of Eye Implants Planned, pp. 1-2, 2001.

92. G. Branwyn, "The desire to be wired", Wired 1.4.

93. W. Wells, The Chips Are Coming.

94. M. Brooks, "The cyborg cometh", Worldlink: The Magazine of the World Economic Forum.

95. E. Strickland, "Birth of the bionic eye", IEEE Spectrum, Jan. 2012.

96. S. Adee, "Researchers hope to mime 1000 neurons with high-res artificial retina", IEEE Spectrum, Jan. 2012.

97. D. Nairne, Building Better People With Chips and Sensors.

98. S. S. Hall, "Brain pacemakers", MIT Enterprise Technol. Rev..

99. E. A. C. Pereira, A. L. Green, R. J. Stacey, T. Z. Aziz, "Refractory epilepsy and deep brain stimulation", J. Clin. Neurosci., vol. 19, no. 1, pp. 27-33, 2012.

100. "Brain pacemaker could help cure depression research suggests", Biomed. Instrum. Technol., vol. 45, no. 2, pp. 94, 2011.

101. H. S. Mayberg, A. M. Lozano, V. Voon, H. E. McNeely, D. Seminowicz, C. Hamani, J. M. Schwalb, S. H. Kennedy, "Deep brain stimulation for treatment-resistant depression", Neuron, vol. 45, no. 5, pp. 651-660, 2005.

102. B. Staff, "Brain pacemaker lifts depression", BBC News, Jun. 2005.

103. C. Hamani, H. Mayberg, S. Stone, A. Laxton, S. Haber, A. M. Lozano, "The subcallosal cingulate gyrus in the context of major depression", Biol. Psychiatry, vol. 69, no. 4, pp. 301-308, 2011.

104. R. Dobson, Professor to Try to Control Wife via Chip Implant.

105. "Chip helps paraplegic walk", Wired News.

106. D. Smith, "Chip implant signals a new kind of man", The Age.

107. "Study of an implantable functional neuromuscular stimulation system for patients with spinal cord injuries", Clinical, Feb. 2009.

108. R. Barrett, "Electrodes help paraplegic walk" in Lateline Australian Broadcasting Corporation, Australia, Sydney: ABC, May 2011.

109. M. Ingebretsen, "Intelligent exoskeleton helps paraplegics walk", IEEE Intell. Syst., vol. 26, no. 1, pp. 21, 2011.

110. S. Harris, "US researchers create suit that can enable paraplegics to walk", The Engineer, Oct. 2011.

111. D. Ratner, M. A. Ratner, Nanotechnology and Homeland Security: New Weapons for New Wars, NJ, Upper Saddle River: Pearson Education, 2004.

112. L. Versweyveld, "Chip implants allow paralysed patients to communicate via the computer", Virtual Medical Worlds Monthly.

113. S. Adee, "The revolution will be prosthetized: DARPA's prosthetic arm gives amputees new hope", IEEE Spectrum, vol. 46, no. 1, pp. 37-40, 2009.

114. E. Wales, "It's a living chip", The Australian, pp. 4, 2001.

115. Our Products: MBAMultiplex Bio Threat Assay, Jan. 2012.

116. F. W. Scheller, "From biosensor to biochip", FEBS J., vol. 274, no. 21, pp. 5451, 2007.

117. A. Persidis, "Biochips", Nature Biotechnol., vol. 16, pp. 981-983, 1998.

118. A. C. LoBaido, "Soldiers with microchips: British troops experiment with implanted electronic dog tag".

119. "Microchip implants for drug delivery", ABC: News in Science.

120. R. Bailey, "Implantable insulin pumps", Biology.

121. D. Elleri, D. B. Dunger, R. Hovorka, "Closed-loop insulin delivery for treatment of type 1 diabetes", BMC Med., vol. 9, no. 120, 2011.

122. D. L. Sorkin, J. McClanahan, "Cochlear implant reimbursement cause for concern", HealthyHearing, May 2004.

123. J. Berke, "Parental rights and cochlear implants: Who decides about the implant?", Deafness, May 2009.

124. D. O. Weber, "Me myself my implants my micro-processors and I", Softw. Develop. Mag., Jan. 2012.

125. A. Graafstra, K. Michael, M. G. Michael, "Social-technical issues facing the humancentric RFID implantee sub-culture through the eyes of Amal Graafstra", Proc. IEEE Int. Symp. Technol. Soc., pp. 498-516, 2010.

126. E. M. McGee, G. Q. Maguire, "Becoming borg to become immortal: Regulating brain implant technologies", Cambridge Quarterly Healthcare Ethics, vol. 16, pp. 291-302, 2007.

127. P. Moore, Enhancing Me: The Hope and the Hype of Human Enhancement, U.K., London: Wiley, 2008.

128. A. Masters, K. Michael, "Humancentric applications of RFID implants: The usability contexts of control convenience and care", Proc. 2nd IEEE Int. Workshop Mobile Commerce Services, pp. 32-41, 2005.

129. J. Best, "44000 prison inmates to be RFID-chipped", Nov. 2010.

130. D. Brin, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, MA, Boston: Perseus Books, 1998.

131. J. E. Dobson, P. F. Fischer, "Geoslavery", IEEE Technol. Soc. Mag., vol. 22, no. 1, pp. 47-52, 2003.

132. K. Michael, M. G. Michael, "Homo Electricus and the Continued Speciation of Humans" in The Encyclopedia of Information Ethics and Security, PA, Hershey: IGI, pp. 312-318, 2007.

133. S. Young, Designer Evolution: A Transhumanist Manifesto, New York: Prometheus Books, 2006.

134. E. Braun, Wayward Technology, U.K., London: Frances Pinter, 1984.

135. R. el Kaliouby, R. Picard, S. Baron-Cohen, "Affective computing and autism", Ann. New York Acad. Sci., vol. 1093, no. 1, pp. 228-248, 2006.

136. D. Bhatia, S. Bairagi, S. Goel, M. Jangra, "Pacemakers charging using body energy", J. Pharmacy Bioallied Sci., vol. 2, no. 1, pp. 51-54, 2010.

137. V. Arstila, F. Scott, "Brain reading and mental privacy", J. Humanities Social Sci., vol. 15, no. 2, pp. 204-212, 2011.

138. R. Westrum, Technologies and Society: The Shaping of People and Things, CA, Belmont: Wadsworth, 1991.

139. B. Gates, The Road Ahead, New York: Penguin, 1995.

140. M. Allaby, Facing The Future: The Case for Science, U.K., London: Bloomsbury, 1996.

141. M. Kaku, Visions: How Science Will Revolutionise the 21st Century and Beyond, U.K., Oxford: Oxford Univ. Press, 1998.

142. A. Berry, The Next 500 Years: Life in the Coming Millennium, New York: Gramercy Books, 1996.

143. J. Ellul, The Technological Society, New York: Vintage Books, 1964.

144. W. Gibson, Neuromancer, New York: Ace Books, 1984.

145. M. McLuhan, Understanding Media: The Extensions of Man, MA, Cambridge: MIT Press, 1964.

146. A. Toffler, Future Shock, New York: Bantam Books, 1981.

147. C. M. Banbury, Surviving Technological Innovation in the Pacemaker Industry 1959-1990, New York: Garland, 1997.

148. K. Warwick, I Cyborg, U.K., London: Century, 2002.

149. M. G. Michael, K. Warwick, "The professor who has touched the future" in Innovative Automatic Identification and Location-Based Services, New York: Information Science Reference, pp. 406-422, 2009.

150. D. Green, "Why I am not impressed with Professor Cyborg", BBC News.

151. P. Cochrane, Tips For Time Travellers: Visionary Insights Into New Technology Life and the Future on the Edge of Technology, New York: McGraw-Hill, 1999.

152. I. Walker, "Cyborg dreams: Beyond Human: Background Briefing", ABC Radio National, Jan. 2012.

153. W. Grossman, "Peter Cochrane will microprocess your soul", Wired 6.11.

154. R. Fixmer, "The melding of mind with machine may be the next phase of evolution", The New York Times.

155. M. Billinghurst, T. Starner, "Wearable devices: New ways to manage information", IEEE Computer, vol. 32, no. 1, pp. 57-64, Jan. 1999.

156. M. Minsky, Society of Mind, New York: Touchstone, 1985.

157. R. Uhlig, "The end of death: ‘Soul Catcher’ computer chip due", The Electronic Telegraph.

158. K. Michael, M. G. Michael, "Microchipping people: The rise of the electrophorus", Quadrant, vol. 414, no. 3, pp. 22-33, 2005.

159. K. Michael, M. G. Michael, "Towards chipification: The multifunctional body art of the net generation", Cultural Attitudes Towards Technol. Commun., 2006.

160. E. McLuhan, F. Zingrone, Essential McLuhan, NY, New York: BasicBooks, 1995.

161. M. Dery, Escape Velocity: Cyberculture at the End of the Century, U.K., London: Hodder and Stoughton, 1996.

162. J. Brown, "Professor Cyborg", Jan. 2012.

163. M. T. Maybury, "The mind matters: Artificial intelligence and its societal implications", IEEE Technol. Soc. Mag., vol. 9, no. 2, pp. 7-15, Jun./Jul. 1990.

164. Philosophy, Jan. 2012.

165. R. Kurzweil, The Singularity Is Near, New York: Viking, 2005.

166. Transhumanist Declaration, Jan. 2010.

167. F. Fukuyama, "Transhumanism", Foreign Policy, no. 144, pp. 42-43, 2004.

168. How Does Transhumanism Relate to Religion? in Transhumanist FAQ, Jan. 2012.


170. J. Ellul, What I Believe, MI, Grand Rapids: Eerdmans, 1989.

171. J. M. Wetmore, "Amish Technology: Reinforcing values and building community", IEEE Technol. Soc. Mag., vol. 26, no. 2, pp. 10-21, 2007.

172. R. Kurzweil, The Age of Spiritual Machines, New York: Penguin Books, 1999.

173. B. Joy, "Why the future doesn't need us", Wired 8.04.

174. K. J. O'Connell, "Uses and abuses of technology", Inst. Electr. Eng. Proc. Phys. Sci. Meas. Instrum. Manage. Educ. Rev., vol. 135, no. 5, pp. 286-290, 1988.

175. D. F. Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention, New York: Penguin Books, 1999.

176. W. Kuhns, The Post-Industrial Prophets: Interpretations of Technology, New York: Harper Colophon Books, 1971.

177. D. J. Solove, The Future of Reputation, CT, New Haven: Yale Univ. Press, 2007.

178. J. Rennie, "Ray Kurzweil's slippery futurism", IEEE Spectrum, Dec. 2010.


Technology forecasting, Social implications of technology, History, Social factors, Human factors, social aspects of automation, human-robot interaction, mobile computing, pervasive computing, IEEE society, SSIT, society founding, social impacts, military technologies, security technologies, cyborgs, human-machine hybrids, human mind, transhumanist future, humanity redesigns, wearable computing, Überveillance, Corporate activities, engineering education, ethics, future of technology, sociotechnical systems

Citation: Karl D. Stephan, Katina Michael, M. G. Michael, Laura Jacob, Emily P. Anesta, "Social Implications of Technology: The Past, the Present, and the Future", Proceedings of the IEEE, vol. 100, Special Centennial Issue, pp. 1752-1781, May 2012, doi: 10.1109/JPROC.2012.2189919.

The Social Implications of Location Based Social Networking



Location based social networking (LBSN) applications are part of a new suite of emerging social networking tools that run on the Web 2.0 platform. LBSN is the convergence of location based services (LBS) and online social networking (OSN). LBSN applications offer users the ability to look up the location of another "friend" remotely using a smart phone, desktop or other device, anytime and anywhere. Users invite their friends to participate in LBSN, and a process of consent follows. Friends have the ability to alter their privacy settings to allow their location to be monitored by another at differing levels of accuracy (e.g. suburb level, pinpoint accuracy at the street address level, or manual location entry). This paper explores the impact of LBSN upon society, especially upon trust between friends. The study used focus groups to collect data and a qualitative approach to analysis. The paper concludes that while there are a great many positive uses of LBSN, there are some significant problems with current applications, and that better design is required to ensure that these technologies are not exploited against a user to commit harm.

Section I. Introduction

Location Based Social Networking (LBSN) applications such as Google Latitude, Loopt and BrightKite enhance our ability to perform social surveillance. These applications enable users to view and share real-time location information with their "friends". With the emergence of this technology it is crucial to consider that "technology alone, even good technology alone is not sufficient to create social or economic value" [1]. Beyond not contributing "sufficient" economic or social value, technologies can also have negative impacts on society, as Kling and other scholars have identified [2].

As location based social networking technologies are used between "friends", they have the potential to impact friendships, which are integral not only to the operation of society but also to the individual's well-being [3]. By enabling real-time location tracking of "friends", LBSN puts LBS technologies in the hands of "friends" while also enhancing the experience of online social networking (OSN). In essence it meshes together the positives and negatives of OSN and LBS, creating a unique domain of enquiry and forcing researchers to ask new questions. The purpose of this paper is to explore the implications of location based social networking for "friendships", with a particular focus on the impact upon trust.

Section II. Social Informatics

Social informatics aims to "explore, explain and theorize about the social technical contexts of information communication technologies" [4] with a view to developing "reliable knowledge about information technology and social change based on systematic empirical research, in order to inform both public policy issues and professional practice" [5]. In this way social informatics looks at the broader picture of the implementation of information communication technologies (ICT), to understand their operation, use and implications. By undertaking research on location based services from a social informatics perspective, the credible threats of the technology, the circumstances in which they arise, and their severity can be identified. One of the key concepts underlying the approach of social informatics is that "information technology are not designed or used in social or technological isolation. From this standpoint, the social context of IT influences their development, uses and consequences" [6]. Social informatics takes a nuanced approach to investigating technologies and explores the bidirectional shaping between context and ICT design, implementation and use [4], as depicted in Figure 1.


Figure 1. Bidirectional Shaping between Context and ICT Design

This approach, which combines the social aspects and the technical aspects of technology, has been found to be useful for understanding the social shaping and consequences of information communication technologies [7]. Examples of social informatics research include the vitality of electronic journals [8], the adoption and use of Lotus Notes within organizations [9], public access to information via the internet [10], and many other studies which employ a nuanced perspective of technology in order to understand the social shaping and consequences of ICT. Social informatics research also investigates new social phenomena that materialize when people use technology, for example, the unintended effects of behavioral control in virtual teams [11]. Social informatics is not described as a theory, but as a "large and growing federation of scholars focused on common problems", with no single theory or theoretical notion being pursued [4]. What social informatics does provide is a framework for conducting research: one that is problem-orientated, empirical, and interdisciplinary, with a focus on informatics.

Social informatics research in the areas of LBS and OSN has highlighted the implications of using these technologies, including the concepts of trust, control, privacy and security. In addition, OSN studies have exposed the ability of these technologies to alter and impact upon social relations. These studies provide a guide to the concepts of interest in studying the emergent technology of LBSN. Studies on LBSN, however, have not investigated the implications of the use of sophisticated LBSN applications such as those currently available. This research aims to address this gap by engaging in a social informatics based investigation of the implications of LBSN.

The problem addressed by this research is: under what conditions do location based social networking technologies enhance or reduce trust between "friends"? This research is concerned with the formulation of the socio-technical landscape within which location based social networking applications exist, the purpose being to understand the bidirectional relationship between society and technology and to discover the circumstances in which trust will be negatively affected by the use of the technology. The nature of social informatics warns against a simplistic cause-and-effect approach to technology [12]. As such, this research topic does not contain simple propositions that A causes B; rather, it is developed upon a set of questions that reflect the interrelated social and technical aspects of the research.

  • Who are the users of the technology?

  • What is the technology used/misused for?

  • What relationships will it be utilized within?

  • How is trust categorized in these relationships?

  • What circumstance(s)/context will it be used for?

  • What are the technological capabilities?

Section III. Focus Groups

A focus group is a "research technique that collects data through group interaction on a topic determined by the researcher" [13]. A key characteristic of focus groups is the insight and data produced by the interaction of the participants [14]. Focus groups are primarily used within the preliminary or exploratory stages of a study [15]. This study uses focus groups to explore and discuss the use and implications of LBSN, with the aim of generating a nuanced understanding of the socio-technical framework within which LBSN operate. The unit of analysis for the study was at both the individual and the group level [16]. Focus groups enable individuals to express their "attitudes, beliefs and feelings", and the interaction between participants enables these views to be explored at the group level.

A. Design

Five focus groups were conducted for this study. This is justified on the basis that data becomes “saturated” with very little new content emerging after the first few sessions are conducted. The focus groups were conducted with students enrolled in a third year core subject covering professional practice and ethics, in the information technology and computer science curriculum at the University of Wollongong in the first week of May 2009. The background of these students means that it can be assumed that they are technology literate and able to grasp and understand (if not already using) emerging technologies. The focus groups were run in the tenth week of session, when it could be assumed that students were equipped with refined analytical skills to identify ethical and social aspects of technology. A further benefit in utilizing tutorial classes for the study is that the groups were pre-existing and therefore group members were able to easily relate, and also comment upon incidents which they shared in their daily lives [17].

Large focus groups can consist of between 15 and 20 participants and are appropriate for topics that are not emotionally charged. Larger groups are recognized for eliciting "a wide range of potential responses on topics where each participant has a low level of involvement" [13]. It should be noted that each focus group in this study had on average 15 active participants. The majority of participants were aged between 18 and 22 years old, with several mature-age students aged between 30 and 45 years old in each class. There was an approximate 60/40 mix of domestic and international students in each of the focus groups. The majority of international students came from China and Singapore.

B. Questions and Stimulus Material

Two moderators were used to conduct the focus groups. In order to maintain consistency between moderators and encourage a neutral approach to the focus group discussion, a question and stimulus pack was created. The moderators played an active but neutral role, facilitating discussion and probing the participants in order to engage a deeper discussion of the issues. The purpose of developing the focus group questions and stimulus material was threefold: first, to ensure conformity and standardization across all focus groups; second, to provide direction and stimulus for the discussion; and third, to provide participants with knowledge relevant to the focus group discussion. Furthermore, the questions and stimulus material enabled the focus group to be structured into three sections of enquiry, as demonstrated in Figure 2.


Figure 2. Focus Group Sections

The purpose of the focus group questions was to obtain an understanding of the socio-technical framework of LBSN. In order to develop the questions, the researcher reviewed the literature on LBS, LBSN, OSN and trust, along with general media, including blogs and web articles on LBSN and Google Latitude. The questions developed focused upon:

  • Whether participants would use LBSN

  • Why participants would (or would not) use LBSN

  • Who they would allow to see their location

  • Who they would like to know the location of

  • What issues surround the use of LBSN

  • The use of LBSN in relationships generally

  • The use of LBSN in relationships focusing on trust

In order to facilitate discussion, open-ended questions were used.

C. Data Analysis

The first stage of the data analysis was the transcription of the focus groups. The data was then analyzed by drawing "together and comparing discussions of similar themes … [to] examine how these relate[d] to the variables within the sample population" [17]. The method of analysis was manual qualitative content analysis. Qualitative methods are constructivist in approach [18]. They take an "interpretive, naturalistic approach to [their] subject matter" and explore things in "their natural setting attempting to make sense of, or interpret phenomena in terms of meaning people bring to them" [19]. In most cases, qualitative research results in the discovery of themes and relationships. Qualitative content analysis is concerned with capturing the richness and describing the unique complexities of data, and as such provides understanding. This method allows the researcher to position, relate and ultimately understand content abstractly inferred from higher-level processing of text and interaction.

Section IV. Results

A. Propensity to Adopt LBSN

There were three categories of response to the question of whether respondents would use LBSN: adoption, non-adoption, and those who had already adopted. Within each of these categories there was a spectrum of responses, with participants identifying conditions of adoption or non-adoption to qualify their position. Overall, most participants were in favor of non-adoption. Each of these categories of response is explored below.

1) Participants who had Adopted LBSN

Two participants had already adopted a LBSN application. In both cases the LBSN chosen was Google Latitude. One of the participants had ceased using Latitude while the other still had it installed. The participant who no longer used Latitude stated: “I got it and got rid of it because it was just weird”. When the participant was asked why it was “weird” they responded: “because it was like running in the background and you could either sign in and then it kept logging in all the time and I didn't want my brother knowing where I was all the time.” The only person who this respondent had listed as a “friend” on Latitude was his brother as at the time, Latitude was fairly new and the respondent did not think that many people used it.

The second participant who had adopted LBSN, and was still using it, was doing so without any "friends". This participant noted that Latitude: "really wears the battery down fast. I'll exit Google Latitude and it will ask- ‘would you like to continue sharing your location’ and I'll do that but then I'll have no battery left. So it is kind of useless. I still have it. Every now and then I'll log in and update my location. There is not a lot of point." This second participant observed that without updating your location automatically there is "not a lot of point" to the application. The barrier to allowing automatic updates, in the second participant's view, was not the "weird" feelings it generated but the battery power requirement. However, this user had "no friends" registered to share their location with.

2) Participants who would Adopt LBSN in the Future

Of the participants who responded that they would adopt a LBSN like Google Latitude, most set out conditions of use to qualify their position while others identified availability of technology to support Latitude as a barrier to adoption. Some focus group participants were indifferent while others identified that they were open to adopting the technology without imposing any specific conditions. The conditions of use that participants specified were the accuracy of the device/application, the level of control over the visibility of their location and when the application would be used.

The condition of adoption based upon the accuracy of the device was expressed in terms of both high and low accuracy. One participant was put off by high accuracy: “Participant: Depends how accurate. [Moderator: Accurate down to street level.] Participant: I think that would be kind of weird, I wouldn't like that.” This participant perceived street-level accuracy as “weird”, and stated they would not adopt LBSN if it had such a high degree of accuracy. Conversely, another participant required high precision and reliability, saying that they would use a LBSN but “it would have to have a high quality network.” This participant had used LBSN before in China but experienced problems with it, and after a “one day test … I didn't go ahead because the feasibility and reliability was not good, it had nothing to do with the privacy problems.”

Several participants would use LBSN upon the condition that they would be able to control the visibility of their location. Visibility was expressed in terms of controlling the level of location information (no information or street, suburb and state level) displayed, as well as control over who had access to the location information. In terms of visibility one participant commented that they would use it if they could specify: “[d]ifferent levels of visibility. Gaming friends at the state level; family — no problem because you trust them; girlfriend — no problem. Obviously the level of relationship trust would be the determining factor in how much access each person would be able to have.” This participant identified that the level of location information disclosed correlated to the different level of trust in each relationship. Other participants simply desired the ability to “easily block your location at all times” or “deactivate” the device.
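The tiered visibility participants described (state-level detail for gaming friends, full detail for trusted family) amounts to an access-control mapping from relationship trust to location granularity. The sketch below illustrates that idea only; the level names, fields, and values are hypothetical assumptions, not features of Google Latitude or any real LBSN.

```python
# Illustrative sketch of tiered location visibility, as participants
# described it: each relationship trust level maps to a coarser or finer
# view of the same location record. All names and levels are hypothetical.

FULL_LOCATION = {"street": "Crown St", "suburb": "Wollongong", "state": "NSW"}

# Trust level -> which fields of the location the viewer may see.
VISIBILITY = {
    "family": ("street", "suburb", "state"),   # full detail
    "close_friend": ("suburb", "state"),       # suburb-level only
    "gaming_friend": ("state",),               # state-level only
    "blocked": (),                             # no information at all
}

def visible_location(location: dict, trust_level: str) -> dict:
    """Return only the location fields the given trust level may view."""
    allowed = VISIBILITY.get(trust_level, ())
    return {field: location[field] for field in allowed if field in location}

print(visible_location(FULL_LOCATION, "gaming_friend"))  # {'state': 'NSW'}
print(visible_location(FULL_LOCATION, "blocked"))        # {}
```

Unknown trust levels fall through to the empty tuple, so the default is to disclose nothing — the conservative choice participants' comments about “easily block[ing] your location” suggest.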

In relation to who has access to location information one participant indicated that they would use it: “only on family. … Or if children are alone [and] I want to know where they are. But not with friends because if friends know where I am maybe they wonder why I am there and they ask and I have to answer like small, small details…” This identifies that some people do not want to disclose information about themselves to friends, as it would open up a Pandora's box of questions about where they were, what they were doing, who they were with, and so on. Another participant stated they would use LBSN but “confine it to a restricted group like … close family”, while another would use it if they had kids: “[i]f I have kids I will put it on their phone”.

Participants identified that they would only use LBSN in certain situations. For example, one participant said they would only use it if they were traveling: “[t]he only use I see in it is if I was traveling. I went on a holiday in Tasmania and my mum was worried about where I was because I wouldn't contact her and stuff. And with this she would be able to know where I am constantly, and if I am lost somewhere they would know the last place I was at.” Another participant identified that: “[t]his thing comes in really handy in unforeseen situations, maybe you are in a car and you cannot call a person to come along. So those are a few situations where it can be helpful but for security and privacy. If I can find myself in the database and I can only be seen by my close family that will be really good.” This demonstrated that there were situations in which the utility of LBSN would motivate individuals to adopt the technology, although some participants still held concerns about security and privacy.

Finally, there were three responses which did not identify conditions upon adoption. The first response was by a participant who would adopt LBSN but did not have the requisite device. They reflected: “the technology that I have will not let me [conduct LBSN] because I have an older phone. I tried using it but it wouldn't work.” The second participant would use it without conditions and said it posed no privacy concerns: “[I'd] use it but I'd stop using from boredom more than anything else, it wouldn't be because of privacy. There doesn't seem like there is a point to it. It is not a privacy thing.” The final response to mention in this section is by a participant who was open to the adoption of LBSN: “I reserve judgment until I see it in action. The general idea is pretty useful I guess. I am open to it. If you have someone's email address you can find out where they live and you can find out anything you want about them… I'm not too worried about it at this point because I think it is probably too late to start worrying about how much protection … you know… your identity and your location, it's all out there.” This participant drew upon the idea that identity and location information is already available on the Internet, in caller detail records, or in direct marketing material, concluding that it is therefore “probably too late to start worrying about how much protection” we place on further exposure of location information.

3) Participants who would Never Adopt LBSN

The majority of participants indicated that they would not adopt LBSN, giving the following reasons: it is unnecessary or a hassle, it raises ethical concerns, it segregates people from human contact, or they did not want to disclose their location. The participants who identified that it was unnecessary or a hassle included the following responses: “I don't have time”; “Would be a hassle I don't use stuff like that”; “Unnecessary, I don't care exactly where my friends are. I wouldn't use it to find them whether or not they would use it to find me”; “If you are a close enough friend then would you not just call them?”; “There are other ways of getting in contact, so do we need this location based networking to get in contact. Phone calls are easy enough to make. I am saying you can have it, it is just social networking, whatever, if you just want to keep in contact with friends and that but you can also do that in other ways as well.” All these responses indicate the view of some participants that LBSN is not a necessity, and that existing technologies can or should be used instead: “would you not just call them?” A side note to observe from the latter three responses is that these participants believed the existing technologies, which do not allow for unobtrusive observation of location, should be used in preference to LBSN.

Participants identified a range of ethical concerns, from using LBSN to prank people “because they trust it”, to LBSN being used by “serial killers” or for the purpose of “stalking”. More detailed ethical concerns were discussed in responses to the question of why participants would or would not use LBSN. In addition to the ethical concerns, one participant commented that LBSN would change the dynamics of communication with the effect of segregating users from human contact: “It segregates people from human contact. Instead of calling them up and asking them what they are doing, you will just search them and see what they are doing without them knowing. It is like stalking.”

A large proportion of the participants who would not use LBSN explained their view on the basis that they did not want to share their location information. Some of the remarks included that LBSN was “[a]nother layer of what people already know about you”; “I don't like people knowing where I am half the time”; “I wouldn't use it. I just don't want everyone knowing where I am 24/7. Even if like you have the option to turn it off or whatever, I would still feel like even when it is off it is kind of … I don't know I'd still feel unsure about it”; “like you may forget to turn it off and not want people to know where you are like, if you are cheating on your girlfriend. And if she goes on and sees that you are at another girl's place”; “If you have it on 24/7 and then there are brief stints where it is off then people are like “he is up to something” or “what is he doing now”. Even if they don't know what you are doing, they might think that you are doing something suspect because this is the time that it is off”; “People like to do that — they like to think ‘Oh he could be doing something suspect, lets find out what it is’.”

Two key ideas emerge from these responses. Firstly, some people are concerned about revealing too much information about themselves: “I don't want everyone knowing where I am 24/7”. Secondly, revealing location can be dangerous, not in and of itself, but because of what people do with that information. As the latter two responses illustrated, people's curiosity and desire for gossip can lead them to use location information for the wrong purposes and infer “suspect” scenarios.

B. Reasons Why Participants Would/Would Not Use LBSN

The second discussion question asked why participants would or would not use LBSN. Some participants provided reasons for their position in response to the first question; this second question, however, required respondents to expand upon that discussion and identify specific purposes for using or not using LBSN, regardless of their response to the first question. The participants’ responses are summarized in Table I, with a discussion of the responses in the two following sections.

1) Reasons Why Participants Would Use LBSN

The reasons that participants stated they would use LBSN included the ability to keep track of or monitor children, employees or friends, store a travel journal for themselves and others to view, to provide parents or carers with peace of mind while they were traveling or for fun. Following are excerpts of some of the responses provided by the participants.

TABLE I. Reasons to use/not use LBSN

Reasons to use LBSN

• Monitoring or tracking of children, employees, friends

• Travel journal

• Parents peace of mind while traveling


Reasons not to use LBSN

• Intrusion into peoples’ lives

• Impact upon trust

• Drain the batteries in device

• Privacy

• No one uses it

In relation to monitoring or tracking, participants expressed: “[t]he only reason that I would use it is if I wanted to know where someone was and they weren't telling me where they were”; “Well if you were one of those people who always had to know where someone was then it would be useful because then you wouldn't be always calling them [saying] ‘where are you, where are you?’”; “If I had a business I would use it on my employees, especially if they had their own vehicles, so I would know where the employees are going.”

Participants also expressed that they would use LBSN if they were traveling: “[t]he only use I see in it is if I was traveling”; “Used for traveling, when you want your friends back at home to keep track of where you are”; “If you are traveling from location to location so you can see where you are and also for people who want to see where you are and who want to know what time to expect you. So they can see how long it will take before you arrive.”

And finally one participant noted that “maybe I would use this just for fun. Like, ‘where are you?’ for fun. If I don't want to use it, I'll just turn it off”.

2) Reasons Why Participants Would Not Use LBSN

Participants gave several reasons why not to use LBSN including that it would present an intrusion into peoples’ lives, impact upon trust, drain the batteries in the device, present privacy concerns and because no one else uses it. Following are some excerpts to clarify and expand upon these reasons.

Participants who identified that LBSN presents an intrusion into peoples’ lives made the following comments: “[c]omes across more as a tool for surveillance rather than a social networking tool”; “Parents putting it on their children's phone — negative use for it. Good for the parents but I don't think the child will like it”; “It is just an intrusion into your kid's life, that really shouldn't be there — too much of an intrusion and not enough freedom for when you are getting older and everything, and deserve more freedom”; “Coming home from work and going to the bar but saying to your wife that you are stuck in traffic: ‘oh really but it says you're at the bar, honey’… That kind of problem would come up because people have a tendency to be doing things that they are not supposed to be doing.” These comments illustrate how LBSN can stand in the way of the human desire for freedom and autonomy, including the ability to stray from plans.

Participants merely stated that privacy, trust and battery life were reasons for non-use, without elaboration. They did, however, elaborate upon the reason that no one else uses LBSN: “I probably would not use it because no one else uses it so why would I have it. Like it might not be popular now so that is a reason for now, but in the future when everyone else has it, it might not be a reason. So its popularity might affect whether or not I would use it.” In response to this remark another participant commented: “But when things become more popular, like MS Windows, then people decide to hack MS Windows because it is the same thing that everyone uses. So if everyone started using this, someone out there might find a way to hack it and take advantage of it.”

C. Viewing and Disclosing Location

Participants were asked “Who would you allow to see your location?” and “Who do you want to view the location of?” More responses were elicited by the first question, demonstrating that participants are more concerned with who is able to see their location than with whose location they can see. Table II below summarizes the participants’ responses.

TABLE II. Viewing and Disclosing Location Information

People who can View

• No one

• Family/close friends/trusted people

• Friends

• Anyone

• Everyone

People to View

• Everyone

• Friends

• Prime Minister Kevin Rudd

• Parents (depending on the circumstances)

The majority of participants would allow their “family” or “close friends” to view their location, or specified people that they considered to be “really really trusted”. Many participants would allow “family” or “close friends” but not both categories. One participant specified that they “would not request [to use LBSN with] any family member [but] … I might accept it if they add me but I would never actually ask this from my family”. Another participant would add a sibling but not parents, and when asked why not stated: “I tell them a lot but I just don't want them to know absolutely everything. There is this thing where you want to be your own person, have your own space, you don't want to be like trapped. Because you act differently because you think ‘oh shit my parents are always going to be watching what I am doing and where I am’ and that is not good, I don't like that.”

Some participants would add their friends, however they specified that it would not be just an acquaintance or “some mate you just bumped into on the road”. Other participants would add everyone or anyone: “Everyone — who would really want to know where I am? … unless I win the lotto”; “I'd let anyone. But I would turn it off if I was doing something that I didn't want people to know about”; “If you were doing something and you wanted privacy you would turn it off. But otherwise if people want to enjoy laughing at where I am then I don't really mind.” Although these participants did identify that they would allow anyone or everyone, they did impose some conditions upon their answer. The participants were not as specific about whose location they would view. Many suggested that they would want to track everyone, even Prime Minister Kevin Rudd, or just their friends.

Section V. Issues surrounding the use of LBSN

The focus group participants were asked what they thought were the potential issues with the use of LBSN. Figure 3 represents the broad categories of responses provided by the participants. The shade of color provides an indication of the number of times each issue came up within the focus groups; the darker the shade, the greater the frequency the issue arose. Security was the premier concern, followed by privacy and trust. Social relations, control, and technological issues were also important to participants.

Figure 3. Issues Surrounding LBSN

A. Security

The focus groups drew out three main issues in relation to security: security of self, security of information and security of others. In relation to security of self, participants commented that LBSN could be: “used as a bullying thing … if you see someone in an area and there is no one else really around that area then bullies could go and use it to get that person”. Another participant identified that “I can watch you on Google Latitude — if you update it every three or four hours and know where you are and build a profile”. Other participants mentioned that it could be used for “stalking” or “pedophile tracking.” One participant commented that it could be used for covert tracking: “I think that if the location is set to continuous tracking there won't be any notification sent from Google Latitude. So if anyone gets a hold of your mobile and sets it to continuous tracking they can follow you around.” The scenario depicted by this participant, however, is not entirely accurate, as Latitude does provide notification that it is running in the background; this notification is only given once a month for the first few months and then once every three months. Therefore covert tracking with Latitude would be possible for up to a month between notifications initially, and for longer stretches thereafter. There are some other LBSN applications now entering the market, however, that provide no notification whatsoever.

Participants questioned the security of information retained by the service provider, asking whether Google would “share our information”, or whether third party hackers would “hack into the system [and then] would be able to find whoever, whenever”. In relation to security of others, one participant noted that “[my friend's] location and activities are secured to me, as long as I have my cell phone. If I lose it, and another person finds it … they can easily see the location of my friends”. Therefore having the ability to access a friend's location information can pose a potential threat to the other person's security if the device is lost, stolen, or given to a third person not authorized to view the location information.

B. Privacy

Participants identified privacy as an issue, as LBSN applications primarily involve sharing personal information. The main issue that emerged was the intrusion into personal life caused by LBSN. Example remarks included: “[s]omeone can track you and see whether you have gone to a medical centre, so if you wanted to be tested on something and you didn't want anyone to know about it because you would be rejected by society”; “random things like being at the doctor's surgery and having the phone in your pocket and you don't want everyone prying into your life”; “if you were doing anything — not necessarily a crime — but something you wanted to keep secret.” An additional issue questioned the privacy policy of Google Latitude (and therefore Google) and whether it would “override” the legislation of some jurisdictions, allowing law enforcement authorities with a warrant to obtain detailed records of one's location.

C. Trust

Participants identified three ways that LBSN could affect trust. Firstly, LBSN users could use the application to “lie” or “hide things”, taking advantage of the trust other users place in the device and creating situations of false trust. Secondly, LBSN could cause people to “start losing trust — losing trust between everyone, between your closest friends, your boyfriends, girlfriends”, and would make people “start questioning everything and everyone and get bitter and old and grey and home alone”. Therefore LBSN would discourage trust and create distrust between individuals. Finally, participants identified that LBSN would provide people with the ability to look “too deep, watching who is where and who is near, and infer little schemes or soap operas”, and contribute to “random social problems when someone looks up their boyfriend and there is some other person at their house”. The latter two comments both present scenarios where the user places greater trust in the device than in the individual being monitored, and this shift in trust is the cause of the social problem.

D. Control

Participants commented that “lovers” or “parents” could use LBSN as a method of exerting control. In both proposed scenarios, the control was seen as a pre-existing element of the relationship, with LBSN serving as a tool for exercising that control. Representative control-related comments included: “control by a crazy lover”; “it is not about the children it is about having access to the children. About control.” One participant, as noted earlier, spoke about control with respect to owning one's space, and therefore owning one's personhood. This participant noted that parental control in this context was a form of indirect control: parents might not be telling you what to do, but they are keeping tabs on you.

E. Social Relations

Participants also commented on the effect of LBSN upon social relations: “It takes away from the social part of social networking; we are not communicating with each other we are… just viewing it and it is more of a pervasive thing or voyeuristic thing than a social thing”; “People might use it to avoid certain people as well.” It was noted by another participant, however, that at the same time LBSN could also be used to generate discussion.

F. Technological

Technological issues identified were related to perceived battery consumption, and whether the location tracking/monitoring technology would work indoors. Reliability and accuracy were also important factors discussed, as was whether all new mobile devices now had the feature built in and whether data charges applied to usage.
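Battery drain from continuous tracking, raised by participants here and in Section IV, is commonly mitigated by duty-cycling location updates: publish only when enough time has elapsed or the user has moved far enough. The sketch below is a minimal illustration of that general technique; the thresholds and class names are illustrative assumptions, not values from Google Latitude or any real client.

```python
import math

# Minimal sketch of duty-cycled location publishing, one common answer to
# the battery-drain complaint: publish a fix only when enough time has
# passed OR the user has moved far enough. Thresholds are assumptions.

MIN_INTERVAL_S = 15 * 60   # publish at most every 15 minutes ...
MIN_DISTANCE_M = 500       # ... unless the user has moved >= 500 m

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class DutyCycledPublisher:
    """Decides whether a new GPS fix is worth publishing to the LBSN."""

    def __init__(self):
        self.last_time = None
        self.last_fix = None

    def should_publish(self, lat, lon, now):
        if self.last_time is None:
            decision = True  # always publish the first fix
        else:
            moved = haversine_m(self.last_fix[0], self.last_fix[1], lat, lon)
            decision = (now - self.last_time >= MIN_INTERVAL_S
                        or moved >= MIN_DISTANCE_M)
        if decision:
            self.last_time, self.last_fix = now, (lat, lon)
        return decision
```

On the first fix the publisher always reports; a fix roughly 55 m away one minute later is suppressed, which is precisely the trade-off the second adopter in Section IV wanted: location sharing without the radio being woken for every small movement.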

G. No Issues

Some participants commented that there were no issues with LBSN: “[t]he Google Latitude application is great, if you don't like the system you can deactivate it,” and “[n]o issues, if your friends location is secured to you, so long as you have the phone.”

Section VI. Discussion

People and relationships form the backbone of society. Pahl [20] describes friendship as a “social glue” that provides the fulfillment of the “need for belonging and ‘reliable alliance’ — that is, for a bond that can be trusted to be there for you when you need it” [3]. Research on social networking applications shows that new technologies can have potential negative implications for social relationships [21] and privacy [22]. Additionally, location based services (LBS) have social and ethical implications [23]. Social networking applications have the potential to become an ingrained and integral part of social interactions, causing those who do not have the technology to be either excluded or to succumb to adopting it [22]. A bad experience with a LBSN may impact not only an individual, but also one's relationships, and more broadly one's ability to trust in others and in society more generally. One might suppose that knowing where someone is at all times should in fact enhance trust, since there is a certain predictability to where a loved one is physically located, or where they say they are located. However, technology is not perfect: it is not always accurate, it does not always work as it should, and there is no such thing as a perfect “location” system. Humans also require their autonomy, their freedom, and the ability to make everyday mistakes without prying eyes [24].

A. Theoretical Importance

This research provided an investigation of the sociotechnical context of location based social networking technologies and applications in terms of “trust” and “friendship”. Such an investigation has several theoretical contributions. Firstly, it provides an understanding of the concepts of trust, friends and friendship within the context of information communication technologies, and social networking in particular. Secondly, it adds to the scholarship in the area of social informatics, providing an example of how social informatics as a theoretical framework can be employed to arrive at a holistic contextualized understanding of the operation of ICTs. Thirdly, it contributes to the limited scholarship on location based social networking with the view to continue the scholarly dialogue on the design, use and implementation as well as implications of the technology and ICTs in general.

B. Practical Importance

Trust and friendship are important aspects of society, and as such the implications of the use of technology upon these concepts are important from a practical as well as a theoretical perspective. The outcomes of this research can be utilized to inform the creation of policy, guidelines or legislation designed to curb the negative implications of the technology upon society. A recent paper by Grimmelmann [25] argued that although “policy makers cannot make Facebook completely safe… they can help people use it safely”, similarly this applies to the emergent technology of LBSN. The outcomes can also be used to educate individuals, and provide stimulus for a dialogue within the broader community about the implications and benefits of social networking and location-based services. Additionally, the designers of the technology can utilize this research by incorporating concerns or user requirements in new or existing applications.

Section VII. Conclusion

LBSN applications provide users with the ability to conduct real-time social surveillance of their friends, including the acts of real-time tracking and monitoring. This study, through the conduct of a social informatics investigation into LBSN, has identified the potential implications of the use of LBSN upon relationships, including its critical effect upon trust. The potential implications can be summarized as security, privacy, trust, control, and an impact on social relationships. The results from the focus groups provided a broad view of the use, design, implementation and context of LBSN, and insight into the possible implications of use. The conclusion to be drawn from this study is a nuanced understanding of the operation of LBSN and its implications, including the circumstances within which it will have a negative impact upon trust. In addition, this research identified that LBSN presents a credible threat to trust between “friends”, and that LBSN applications need to be more robustly designed and implemented to reduce the evident potential for one user to suffer harm at the hands of another.


1. R. Kling, "What is social informatics and why does it matter?", The Information Society, vol. 23, no. 4, pp. 205-220, 2007.

2. R. Kraut, S. Kiesler, "Internet paradox revisited", Journal of Social Issues, vol. 58, no. 1, pp. 49-74, 2002.

3. B. Misztal, Trust in Modern Societies-The Search for the Bases of Social Order, Cambridge:Blackwell, 1998.

4. S. Sawyer, K. Eschenfelder, "Social informatics: perspectives examples and trends", Annual Review of Information Science and Technology, vol. 36, no. 1, pp. 427-465, 2002.

5. R. Kling, "Learning about information technologies and social change: the contribution of social informatics", The Information Society, vol. 16, no. 3, pp. 217-232, 2000.

6. R. Kling, "Social informatics", Encyclopaedia of Library and Information Science, pp. 2656-2661, 2003.

7. R. Kling, "Social informatics: a new perspective on social research about information and communication technologies", Prometheus, vol. 18, no. 3, pp. 245-264, 2000.

8. R. Kling, L. Covi, "Electronic journals and legitimate media in the systems of scholarly communication", The Information Society, vol. 11, no. 4, pp. 261-71, 1995.

9. W. Orlikowski, "Learning from notes: organizational issues in GroupWare implementation", The Information Society, vol. 9, no. 3, pp. 237-50, 1993.

10. B. Kahin, J. Keller, Public Access to the Internet, Cambridge, MA:MIT Press, 1995.

11. G. Piccoli, B. Ives, "Trust and the unintended effects of behavior control in virtual teams", MIS Quarterly, vol. 27, no. 3, pp. 365-395, 2003.

12. D. Mackenzie, "Introductory essay" in The Social Shaping of Technology, Philadelphia:Open University Press, pp. 2-27, 1999.

13. D. Morgan, Focus Groups as Qualitative Research, California:Sage Publications, 1996.

14. A. Gibbs, "Focus group research", Social Research Update, vol. 19, pp. 1-4, 1997.

15. R. Krueger, M. Casey, Focus Groups: A Practical Guide for Applied Research, California:Sage Publications, 2000.

16. P.S. Kidd, M. B. Parshall, "Getting the focus and the group: enhancing analytical rigor in focus group research", Qualitative Health Research, vol. 10, no. 3, pp. 293-308, 2000.

17. J. Kitzinger, "Qualitative research: introducing focus groups", British Medical Journal, vol. 311, no. 7000, pp. 299-302, 1995.

18. D. Druckman, Doing Research: Methods of Inquiry for Conflict Analysis, California:Sage Publications, 2005.

19. M.D. Gall, W.R. Borg, J.P. Gall, Educational Research: An Introduction, New York, 1996.

20. R.E. Pahl, On Friendship, Wiley-Blackwell, 2000.

21. R. Gross, A. Acquisti, "Information revelation and privacy in online social networks", Workshop on Privacy in Electronic Society, 2005.

22. D. Boyd, N. Ellison, "Social network sites: definition history and scholarship", Journal of Computer-Mediated Communication, vol. 13, no. 1, pp. 210-230, 2008.

23. M.G. Michael, S.J. Fusco, K. Michael, "A research note on ethics in the emerging age of überveillance", Computer Communications, vol. 31, no. 6, pp. 1192-1199, 2008.

24. M.G. Michael, K. Michael, "Uberveillance: microchipping people and the assault on privacy", Quadrant, vol. 53, no. 3, pp. 85-89, 2009.

25. J. Grimmelmann, "Saving Facebook: privacy on social network sites", Iowa Law Review, vol. 94, no. 4, pp. 1137-1170, 2009.


Keywords: social informatics, social network services, privacy, accuracy, context, Google, batteries, online social networking, Internet, mobile computing, social aspects of automation, qualitative approach, social implications, location based social networking, perceived positive impacts, perceived negative impacts, Web 2.0 platform, location based services, focus groups, trust, friendship

Citation: Sarah Jean Fusco,  Katina Michael, M.G. Michael, Roba Abbas, "Exploring the Social Implications of Location Based Social Networking: An Inquiry into the Perceived Positive and Negative Impacts of Using LBSN between Friends",  2010 Ninth International Conference on Mobile Business and 2010 Ninth Global Mobility Roundtable (ICMB-GMR), 13-15 June 2010, Athens, Greece, DOI: 10.1109/ICMB-GMR.2010.35

Advanced location-based services

This special issue of Computer Communications presents state-of-the-art research and applications in the area of location-based services (LBS). Initial location-based services entered the market around the turn of the millennium and for the greater part appeared in the form of restaurant finders and tourist guides, which never gained widespread user acceptance. The reasons for this were numerous and ranged from inaccurate localization mechanisms like Cell-ID, little creativity in the design and functions of such services, to a generally low acceptance of data services. However, in recent years, there has been an increasing market penetration of GPS-capable mobile phones and devices, which not only support high-accuracy positioning, but also allow for the execution of sophisticated location-based applications due to fast mobile data services, remarkable computational power and high-resolution color displays. Furthermore, the popularity of these devices is accompanied by the emergence of new players in the LBS market, which offer real-time mapping, points-of-interest content, navigation support, and supplementary services. LBS have also received a significant boost by federal government agency mandates in emergency services, such as in the United States of America. All these advancements are making LBS one of the most exciting areas of research and development with the potential to become one of the most pervasive and convenient services in the near future.

These developments have led to new and sophisticated LBSs, which are referred to as “Advanced LBSs” in this special issue. Examples include, but are not limited to, proactive services, which automatically inform their users when they enter or leave the bounds of pre-defined points of interest; community services, where members of a community mutually exchange their locations either on request or in a proactive fashion; and mobile gaming, where the geographic locations of the players become an integral part of the game. However, the realization of such Advanced LBSs is also associated with some challenges and problems, which have yet to be resolved. For example, there is a strong need for powerful middleware frameworks, architectures and protocols that support the acquisition of location data, their distribution, and processing. In the area of localization mechanisms, accuracy, reliability, and coverage of available technologies must be improved, for example, by combining several methods and enabling a seamless positioning handover between outdoor and indoor technologies. And, finally, because LBSs will significantly change the way people interact and communicate with each other, similar to the impact that mobile phones had a decade ago, solutions must be developed that allow an LBS user to safeguard their privacy with respect to real-time location reckoning and historical location profiles.
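At their core, the proactive services described above come down to a geofencing check: each incoming position fix is compared against the bounds of a pre-defined point of interest, and an event fires on the enter or exit transition. The sketch below is purely illustrative (it is not drawn from any paper in this issue) and assumes a circular point of interest and WGS-84 latitude/longitude fixes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_events(fixes, poi_lat, poi_lon, radius_m):
    """Yield ('enter' | 'exit', fix) transitions for a circular geofence."""
    inside = False  # assume the user starts outside the fence
    for lat, lon in fixes:
        now_inside = haversine_m(lat, lon, poi_lat, poi_lon) <= radius_m
        if now_inside and not inside:
            yield ("enter", (lat, lon))
        elif inside and not now_inside:
            yield ("exit", (lat, lon))
        inside = now_inside
```

A production service would additionally debounce fixes near the boundary, since positioning error can make a stationary user appear to cross it repeatedly; this is exactly where the localization accuracy improvements discussed above matter.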

In this special issue, we have addressed the challenges of Advanced LBSs. We received many high-quality submissions from all over the world and finally selected 13 articles. Papers were carefully reviewed and selected on the basis of their scholarship and to provide broad appeal across a range of research topics. We received several papers with advanced and very interesting applications, of which we selected the most relevant and novel. Five papers are devoted to middleware and architectures, which are meant to make the infrastructure transparent to application developers and therefore speed up the development process. We received many submissions related to localization schemes and algorithms, showing the importance of this aspect for location-based services and the maturity of this research topic. Three localization-related papers are included in the issue. Finally, although security, privacy and ethical issues are well-known concerns in the field of LBS, too few articles were submitted on these topics, indicating that this area requires much-needed exploration. However, three interesting papers are included for your perusal. It therefore follows that location-based services can be considered ‘advanced’ in the totality of a given end-to-end offering, or in a given aspect: complex network architecture, novel application, or multi-mode end-user IP device. A summary of the accepted papers follows.

Two papers are related to LBS applications. The first paper, “Location-Based Services for Elderly and Disabled People” by Alvaro Marco et al., describes a robust, low-cost, highly accurate and scalable ZigBee- and ultrasound-based positioning system that provides alarm, monitoring, navigation and leisure services to elderly and disabled people in a residence located in Zaragoza, Spain. The paper “BlueBot: Asset Tracking via Robotic Location Crawling” by Abhishek Patil et al. presents a robot-based system that combines RFID and Wi-Fi positioning technology to automatically survey assets in a facility. The proposed system, which uses off-the-shelf components, promises to automate the tedious inventory process taking place in libraries, manufacturers, distributors, and retailers of consumer goods.

Five of the selected papers deal with software middleware, architectures and APIs for advanced LBSs. The first paper, “The PoSIM Middleware for Translucent and Context-aware Integrated Management of Heterogeneous Positioning Systems” by Paolo Bellavista et al., presents middleware that integrates and hides different positioning systems from the application developer while providing different levels of information depending on context, LBS requirements, user preferences, device characteristics, and overall system state. PoSIM provides application developers with both a high-level API that offers simplified access to positioning systems and a low-level API that provides detailed information from a specific positioning system. Sean Barbeau et al. present an update of the under-development JSR293 Java Location API for J2ME. The article describes the main features of the current API as well as the significant enhancements and new services included in the standardization effort of the expert group so far. Next, the paper “The Internet Location Services Model” by Martin Dawson presents the architecture and services being standardized by the IETF to provide location information to devices independently of any remote service provider. Hasari Celebi and Hüseyin Arslan in “Enabling Location and Environment Awareness in Cognitive Radios” propose a cognitive radio-based architecture that utilizes not only location but also environment information to support advanced LBS. Finally, Christo Laoudias et al. present “Part One: The Statistical Terminal Assisted Mobile Positioning Methodology and Architecture”. The paper describes the architecture of the STAMP system, which is meant to improve the accuracy of existing positioning systems by exploiting measurements collected at the mobile terminal side.

In the area of localization, three papers are included for your perusal. The first paper, by Yannis Markoulidakis et al., presents “Part Two: Kalman Filtering Options for Error Minimization in Statistical Terminal Assisted Mobile Positioning”, a Kalman filter-based solution to minimize the terminal position error for the STAMP system. Then, Marian Mohr et al. present “A Study of LBS Accuracy in the UK and a Novel Approach to Inferring the Positioning Technology Employed”, an empirical study of the accuracy of positioning information in the UK and a novel technique to infer the positioning technology used by the cellular operators. Finally, in “MLDS: A Flexible Location Directory Service for Tiered Sensor Networks”, Sangeeta Bhattacharya et al. present a multi-resolution location directory service that allows the realization of LBSs with wireless sensor networks. The system successfully tracks mobile agents across single and multiple sensor networks while considering accuracy and communication costs.
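The intuition behind Kalman filtering for position error minimization can be conveyed with a minimal scalar example. The code below is a generic textbook sketch, not the STAMP authors' method: it smooths one noisy coordinate, and the noise variances r (measurement) and q (process) are hypothetical values an implementer would tune:

```python
def kalman_1d(measurements, r=25.0, q=1.0):
    """Scalar Kalman filter smoothing a stream of noisy position fixes.

    r: measurement noise variance (spread of the positioning error)
    q: process noise variance (how far the true position may drift per step)
    Returns one filtered estimate per measurement.
    """
    x = measurements[0]  # state estimate, initialised from the first fix
    p = r                # variance of the estimate
    estimates = [x]
    for z in measurements[1:]:
        p += q                 # predict: uncertainty grows between fixes
        k = p / (p + r)        # Kalman gain: how much to trust the new fix
        x += k * (z - x)       # update the estimate toward the measurement
        p *= (1.0 - k)         # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

Because the gain k stays below one, each estimate is a weighted blend of the previous estimate and the new fix, so jitter in the raw fixes is damped while genuine movement still pulls the estimate along; STAMP's actual formulation, operating on terminal-assisted measurements, is considerably richer.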

The final three articles are devoted to security, privacy and ethical issues, again, very important topics in the realization of advanced LBSs. In “Location Constraints in Digital Rights Management”, Adam Muhlbauer et al. describe the design and implementation of a system for creating and enforcing licences containing location constraints, which can be used to confine access to sensitive documents to a defined area. The following paper, “A TTP-Free Protocol for Location Privacy in Location-Based Services” by Agusti Solanas and Antoni Martínez-Ballesté, presents a distributed technique to progressively increase the privacy of the users when they exchange location information among untrusted parties. Finally, the paper “A Research Note on Ethics in the Emerging Age of Überveillance” by M.G. Michael et al. defines, describes and interprets the socio-ethical implications that tracking and monitoring services bring to humans because of the ability of the government and service providers to collect targeted data and conduct general surveillance on individuals. The study calls for further research to create legislation, policies and social awareness in the age of Überveillance, an emerging concept used to describe exaggerated, omnipresent electronic surveillance.

This issue of Computer Communications offers a ground-breaking view into current and future developments in Advanced Location-Based Services. The global nature of submissions indicates that location-based services are a world-wide application focus with universal appeal both in terms of research and commercialization. This issue offers both academic and industry appeal: the former as a basis for future research directions, and the latter toward viable commercial LBS implementations. Advanced location-based services in the longer term will be characterized by their criticalness in consumer, business and government applications in the areas of banking, health, supply chain management, emergency services, and national security.

We thank Editor-in-Chief Jeremy Thompson and Co-Editor-in-Chief Mohammed Atiquzzaman for hosting this special issue. Thanks also to Lorraine McMorrow and Sandra Korver for their support overseeing the paper review and publishing processes. We also thank all the authors and anonymous reviewers for their hard and timely work.

We hope you enjoy this issue as much as we did!

Citation: Miguel A. Labrador, Katina Michael, Axel Küpper, “Advanced location-based services”, Computer Communications, Vol. 31, No. 6, 18 April 2008, pp. 1053-1054. DOI:

The Social, Cultural, Religious and Ethical Implications of Automatic Identification

Katina Michael, School of Information Technology & Computer Science, University of Wollongong, NSW, Australia 2500,

M.G. Michael, American Academy of Religion, PO Box U184, University of Wollongong, NSW, Australia 2500,

Full Citation: Katina Michael, M.G. Michael, 2004, The Social, Cultural, Religious and Ethical Implications of Automatic Identification, Seventh International Conference on Electronic Commerce Research (ICER-7), University of Texas, Dallas, Texas, USA, June 10-13. Sponsored by ATSMA, IFIP Working Group 7.3, INFORMS Information Society.


The number of automatic identification (auto-ID) technologies being utilized in eBusiness applications is growing rapidly. With an increasing trend toward miniaturization and wireless capabilities, auto-ID technologies are becoming more and more pervasive. The pace at which new product innovations are being introduced far outweighs the ability of citizens to absorb what these changes actually mean, and what their likely impact will be upon future generations. This paper attempts to cover a broad spectrum of issues spanning the social, cultural, religious and ethical implications of auto-ID, with an emphasis on human transponder implants. Previous work is brought together and presented in a way that offers a holistic view of the current state of proceedings, providing an up-to-date bibliography on the topic. The concluding point of this paper is that the long-term side effects of new auto-ID technologies should be considered at the outset, and not after they have enjoyed widespread diffusion.

1.  Introduction

Automatic identification is the process of identifying a living or nonliving object without direct human intervention. Before auto-ID only manual identification techniques existed, such as tattoos [[i]] and fingerprints, which did not allow for the automatic capture of data (see exhibit 1.1). Auto-ID becomes an e-business application enabler when authorization or verification is required before a transaction can take place. Many researchers credit the vision of a cashless society to the capabilities of auto-ID. Since the 1960s automatic identification has proliferated especially for mass-market applications such as electronic banking and citizen ID. Together with increases in computer processing power, storage equipment and networking capabilities, miniaturization and mobility have heightened the significance of auto-ID to e-business, especially mobile commerce. Citizens are now carrying multiple devices with multiple IDs, including ATM cards, credit cards, private and public health insurance cards, retail loyalty cards, school student cards, library cards, gym cards, licenses to drive automobiles, passports to travel by air and ship, voting cards etc. More sophisticated auto-ID devices like smart card and radio-frequency identification (RFID) tags and transponders that house unique lifetime identifiers (ULI) or biometric templates are increasingly being considered for business-to-consumer (B2C) and government-to-citizen (G2C) transactions. For example, the United States (US) is enforcing the use of biometrics on passports due to the increasing threats of terrorism, and Britain has openly announced plans to begin implanting illegal immigrants with RFID transponders. Internationally, countries are also taking measures to decrease the multi-million dollar costs of fraudulent claims made to social security by updating their citizen identification systems.

Exhibit 1.1     Manual versus Automatic Identification Techniques

2.  Literature Review

The relative ease of performing electronic transactions by using auto-ID has raised a number of social, cultural, religious and ethical issues. Among others, civil libertarians, religious advocates and conspiracy theorists have long cast doubts on the technology and the ultimate use of the information gathered by it. Claims that auto-ID technology impinges on human rights, the right to privacy, and that eventually it will lead to totalitarian control of the populace have been put forward since the 1970s. This paper aims to explore these themes with a particular emphasis on emerging human transponder implant technology. At present, several US companies are marketing e-business services that allow for the tracking and monitoring of individuals using RFID implants in the subcutaneous layer of the skin or Global Positioning System (GPS) wristwatches worn by enrollees. To date previous literature has not consistently addressed philosophical issues related to chip implants for humans in the context of e-business. In fact, popular online news sources like CNN [[ii]] and the BBC [[iii]] are among the few mainline publishers discussing the topic seriously, albeit in a fragmented manner. The credible articles on implanting humans are mostly interviews conducted with proponents of the technology, such as Applied Digital Solutions (ADSX) [[iv]] representatives who are makers of the VeriChip system solution [[v]]; Professor Kevin Warwick of the University of Reading who is known for his Cyborg 1.0 and 2.0 projects [[vi]]; and implantees like the Jacobs family in the US who bear RF/ID transponder implants [[vii]]. Block passages from these interviews are quoted throughout this paper to bring some of the major issues to the fore using a holistic approach.

More recently academic papers on human transponder implants covering various perspectives have surfaced on the following topics: legal and privacy [[viii], [ix]], ethics and culture [[x]], technological problems and health concerns [[xi]], technological progress [[xii]], trajectories [[xiii], [xiv]]. While there is a considerable amount of other popular material available especially on the Internet related to human chip implants, much of it is subjective and not properly sourced. One major criticism of these reports is that the reader is left pondering as to the authenticity of the accounts provided with little evidence to support respective claims and conclusions. Authorship of this literature is another problem. Often these articles are contributed anonymously, and when they do cite an author’s name, the level of technical understanding portrayed by the individual is severely lacking to the detriment of what he/she is trying to convey, even if there is a case to be argued. Thus, the gap this paper seeks to fill is to provide a sober presentation of cross-disciplinary perspectives on topical auto-ID issues with an emphasis on human transponder implants, and second to document some of the more thought-provoking discussion which has already taken place on the topic, complemented by a complete introductory bibliography.

3.  Method

Articles on auto-ID in general have failed to address the major philosophical issues using a holistic approach. For instance, Woodward [[xv]] is one of the few authors to have written anything substantial about religious issues with respect to biometric technology in a recognized journal. Previously the focus has largely been on privacy concerns and Big Brother fears. While such themes are discussed in this paper as well, the goal is to cover a broader list of issues than the commonplace. This is the very reason why two researchers with two very different backgrounds, one in science and the other in humanities, have collaborated to write this paper. A qualitative strategy is employed in this investigation to explore the major themes identified in the literature review. It should be noted, however, that legal, regulatory, economic and related policy issues, such as standards, have been omitted because the aim of this paper is not to inform a purely technical audience or an audience which is strictly concerned with policy. It is aimed rather at the potential end-user of auto-ID devices and at technology companies who are continually involved in the process of auto-ID innovation.

Original material is quoted extensively to ensure that the facts are presented “as is.” There is nothing lost in simplified translations and the full weight of argument is retained, raw and uncut. The authors therefore cannot be accused of bias or misrepresentation. The sheer breadth of literature used for this investigation ensures reliability and validity in the findings. The narrative reporting style helps to guide readers through the paper, allowing individuals to form their own opinions and interpretations of what is being presented. Evidence for the issues discussed has been gathered from a wide variety of sources including offline and online documentation. High level content analysis has been performed to aid in the grouping of categories of discussion including social, cultural, religious and ethical issues that form the skeleton of the main body of the article as a way to identify emerging trends and patterns. Subcategories are also useful in identifying the second tier themes covered, helping to reduce complexity in analysis. The subcategories also allow for links to be made between themes. A highly intricate thread runs through the whole paper telling the story of not just auto-ID but the impacts of the information technology and telecommunications (IT&T) revolution [[xvi]]. There is therefore a predictive element to the paper as well which is meant to confront the reader with some present and future scenarios. The ‘what if’ questions are important as it is hoped they will generate public debate on the major social, cultural, religious and ethical implications of RFID implants in humans.

4. Towards Ubiquitous Computing

Section 4 is wholly dedicated to providing a background in which to understand auto-ID innovation; it will also grant some perspective to the tremendous pace of change in IT&T; and note some of the more grounded predictions about the future of computing. The focus is on wearable and ubiquitous computing within which auto-ID will play a crucial role. This section will help the reader place the evidence presented in the main body of the article into an appropriate context. The reader will thus be able to interpret the findings more precisely once the basic setting has been established, allowing each individual to form their own opinions about the issues being presented.

From personal computers (PCs) to laptops to personal digital assistants (PDAs) and from landline phones to cellular phones to wireless wristwatches, miniaturization and mobility have acted to shift the way in which computing is perceived by humans. Lemonick [[xvii]] captures this pace of change well in the following excerpt:

[i]t took humanity more than 2 million years to invent wheels but only about 5,000 years more to drive those wheels with a steam engine. The first computers filled entire rooms, and it took 35 years to make the machines fit on a desk- but the leap from desktop to laptop took less than a decade… What will the next decade bring, as we move into a new millennium? That’s getting harder and harder to predict.

Once a stationary medium, computers are now portable, they go wherever humans go [[xviii]]. This can be described as technology becoming more human-centric, “where products are designed to work for us, and not us for them” [[xix]]. Thus, the paradigm shift is from desktop computing to wearable computing [[xx]]. Quite remarkably in the pursuit of miniaturization, little has been lost in terms of processing power. “The enormous progress in electronic miniaturization make it possible to fit many components and complex interconnection structures into an extremely small area using high-density printed circuit and multichip substrates” [[xxi]]. We now have so-named Matchbox PCs that are no larger than a box of matches with the ability to house fully functional operating systems [[xxii]]. “The development of wearable computer systems has been rapid. Salonen [[xxiii]], among others [[xxiv]] are of the belief that “quite soon we will see a wide range of unobtrusive wearable and ubiquitous computing equipment integrated into our everyday wear”. The next ten years will see wearable computing devices become an integral part of our daily lives, especially as the price for devices keeps falling. Whether noticeable or not by users, the change has already begun. Technology is increasingly becoming an extension of the human body, whether it is by carrying smart cards or electronic tags [[xxv]] or even PDAs and mobile phones. Furui [[xxvi]] predicts that “[p]eople will actually walk through their day-to-day lives wearing several computers at a time.” Cochrane described this phenomenon as technology being an omnipresent part of our lives. Not only will devices become small and compact but they will be embedded in our bodies, invisible to anyone else [[xxvii]]. 
For the time being, however, we are witnessing a transition period in which auto-ID devices especially are being trialled on those who either i) desperately require their use for medical purposes or ii) cannot challenge their application, as in the case of armed forces or prison inmates. Eventually, the new technology will be opened to the wider market on a voluntary basis but will most likely become a de facto compulsory standard (as with the mobile phone today), and inevitably mandatory as it is linked to some kind of requirement for survival. Upon reflection, this is the pattern that most successful high-tech innovations throughout history have followed.

Mark Weiser first coined the term “ubiquitous computing” to describe all those small information systems (IS) devices, including calculators, electronic calendars and communicators, that users would carry with them every day [[xxviii]]. It is important to make the distinction between ubiquitous and wearable computing. They “have been posed as polar opposites even though they are often applied in very similar applications” [[xxix]]. Kaku [[xxx]] stated that ubiquitous computing is the time “when computers are all connected to each other and the ratio of computers to people flips the other way, with as many as one hundred computers for every person.” This latter definition implies a ubiquitous environment that allows the user to seamlessly interface with computer systems around them. Environments of the future are predicted to be context-aware so that users are not disturbed in every context, save for when it is suitable [[xxxi]]. Kortuem [[xxxii]] stated that “[s]uch environments might be found at the home, at the office, at factory floors, or even vehicles.” There is some debate, however, about where to place sensors in these environments: should they be located around the room, or on the individual? Locating sensors around the room enforces certain conditions on an individual, while locating sensors on an individual means that that person is actually in control of their context. The latter case also requires less localized infrastructure and affords a greater degree of freedom. Rhodes et al. [29] argue that by “properly combining wearable computing and ubiquitous computing, a system can have the advantages of both.”

5.  Social Issues

5.1 Privacy Concerns and Big Brother Fears

Starner [[xxxiii]] makes the distinction between privacy and security concerns. “Security involves the protection of information from unauthorized users; privacy is the individual’s right to control the collection and use of personal information.” Mills [[xxxiv]] is of the opinion that some technology, like communications, is not neutral but totalitarian in nature and that it can make citizens passive. “These glamorous technologies extend and integrate cradle-to-grave surveillance, annihilating all concept of a right to personal privacy, and help consolidate the power of the national security state… every technology, being a form of power, has implicit values and politics…” Over the years terms like Big Brother [[xxxv], [xxxvi]] and function creep [[xxxvii]] have proliferated to correspond to the all-seeing eyes of government and to the misuse and abuse of data. In most western countries data matching programs were constructed, linked to a unique citizen ID, to cross-check details provided by citizens, claims made, and benefits distributed [[xxxviii], [xxxix]]. More recently however, the trend has tended towards information centralization between government agencies based around the auspices of a national ID to reduce fraud [[xl]] and to combat terrorism [[xli]]. Currently computers allow for the storage and searching of data gathered like never before [[xlii]]. The range of automated data collection devices continues to increase to include systems such as bar codes (with RF capabilities), magnetic-stripe cards, smart cards and a variety of biometric techniques, increasing the rapidity and ease at which information is gathered. RFID transponders especially have added a greater granularity of precision in in-building and campus-wide solutions, given the wireless edge, allowing information to be gathered within a closed environment, anywhere/anytime, transparent to the individual carrying the RFID badge or tag.

Now, while auto-ID itself is supposed to ensure privacy, it is the ease with which data can be collected that has some advocates concerned about the ultimate use of personal information. While the devices are secure, breaches in privacy can happen at any level, especially at the database level where information is ultimately stored after it is collected [[xliii]]. How this information is used, how it is matched with other data, and who has access to it, is what has caused many citizens to be cautious about auto-ID in general [[xliv]]. Data mining has also altered how data is filtered, sifted and utilized, all in the name of customer relationship management (CRM). It is not difficult to obtain telemarketing lists, census information aggregated to a granular level, and mapping tools to represent market segments visually. Rothfeder [[xlv]] states:

[m]edical files, financial and personnel records, Social Security numbers, and telephone call histories- as well as information about our lifestyle preferences, where we shop, and even what car we drive- are available quickly and cheaply.

Looking forward, the potential for privacy issues linked to chip implants is something that has been considered, though mostly by the media. Privacy advocates warn that such a chip would impact civil liberties in a disastrous way [[xlvi]]. Even Warwick himself is aware that chip implants do not promote an air of assurance:

Looking back, Warwick admits that the whole experiment [Cyborg 1.0] “smacked of Big Brother.” He insists, however, that it’s important to raise awareness of what’s already technically possible so that we can remain in the driver’s seat. “I have a sneaking suspicion,” he says, “that as long as we’re gaining things, we’ll yell ‘Let’s have Big Brother now!’ It’s when we’re locked in and the lights start going off- then Big Brother is a problem.” [[xlvii]]

In this instance, Warwick has made an important observation. So long as individuals are “gaining” they generally will voluntarily part with a little more information. It is when they stop gaining and blatantly start being taken advantage of that the idea of Big Brother is raised. On that point, chip implants promise the convenience of not having to carry a multitude of auto-ID devices, perhaps not even a wallet or purse.

According to McGinity [18] “[e]xperts say it [the chip] could carry all your personal information, medical background, insurance, banking information, passport information, address, phone number, social security number, birth certificate, marriage license.” This kind of data collection is considered by civil libertarians to be “crypto-fascism or high-tech slavery” [[xlviii]]. The potential for abuse cannot be overstated [[xlix]]. Salkowski agrees, pointing to the ADSX VeriChip system, stating that police, parents and ADSX employees could abuse their power. “It might even be possible for estranged spouses, employers and anyone else with a grudge to get their hands on tracking data through a civil subpoena” [[l]]. Hackers, too, could try their hand at collecting data without the knowledge of the individual, given that wireless transmission is susceptible to interception. At the same time, the chip implant may become a prerequisite to health insurance and other services. “You could have a scenario where insurance companies refuse to insure you unless you agree to have a chip implant to monitor the level of physical activity you do” says Pearson of British Telecom [[li]]. This should not be surprising given that insurance companies already ask individuals for a medical history of illnesses upon joining a new plan. Proponents say the chip would just contain this information more accurately [7]. Furthermore, “[c]ost-conscious insurance companies are sure to be impressed, because the portability of biomems [i.e., a type of medical chip implant] would allow even a seriously ill patient to be monitored after surgery or treatment on an outpatient basis” [[lii]]. Now a chip storing personal information is quite different to one used to monitor health 24x7x365 and then to relay diagnoses to relevant stakeholders.
As Chris Hoofnagle, an attorney for the Electronic Privacy Information Centre in Washington, D.C., pointed out, “[y]ou always have to think about what the device will be used for tomorrow” [[liii]]. In its essential aspect, this is exactly the void this paper has tried to fill.

5.2 Mandatory Proof of Identification

In the US in 2001 several bills were passed in Congress to allow for the creation of three new Acts related to biometric identification of citizens and aliens, including the Patriot Act, Aviation and Transport Security Act, and the Enhanced Border Security and Visa Entry Reform Act. If terrorist attacks continue to increase in frequency, there is a growing prospect of the use of chip implants for identification purposes and GPS for tracking and monitoring. It is not an impossible scenario to consider that one day these devices may be incorporated into national identification schemes. During the SARS (severe acute respiratory syndrome) outbreak, Singapore [[liv]] and Taiwan [[lv]] considered going as far as tagging their whole population with RFID devices to automatically monitor the spread of the virus. Yet, independent of such random and sporadic events, governments worldwide are already moving toward the introduction of a single unique ID to cater for a diversity of citizen applications. Opinions on the possibility of widespread chip implants in humans range from “it would be a good idea,” to “it would be a good idea, but only for commercial applications not government applications,” to “this should never be allowed to happen”. Leslie Jacobs, who was one of the first to receive a VeriChip, told Scheeres [[lvi]], “[t]he world would be a safer place if authorities had a tamper-proof way of identifying people… I have nothing to hide, so I wouldn’t mind having the chip for verification… I already have an ID card, so why not have a chip?” It should be noted that some tracking and monitoring systems can be turned off and on by the wearer, making monitoring theoretically voluntary [[lvii]]. Sullivan, a spokesperson for ADSX, said: “[i]t will not intrude on personal privacy except in applications applied to the tracking of criminals” [49].
ADSX have claimed on a number of occasions that they have received more than two thousand emails from teenagers volunteering to be the next to be “chipped” [[lviii]]. There are others, like McClimans [[lix]], who believe that everyone should get chipped. Cunha Lima, a Brazilian politician who also has a chip implant, is not ignorant of the potential for invasion of privacy but believes the benefits outweigh the costs, and that so long as the new technology is voluntary and not mandatory there is nothing to worry about. He has said, “[i]f one chooses to ‘be chipped,’ then one has considered the consequences of that action” [[lx]]. Lima argues that he feels more secure with an implant given the number of kidnappings of high-profile people in South America each year- at least this way his location is always known.

Professor Brad Myers of the Computer Science Department at Carnegie Mellon University believes that chip implant technology has a place but should not be used by governments. Yet the overriding sentiment is that chip implants will be used by governments before too long. Salkowski [50] has said, “[i]f you doubt there are governments that would force at least some of their citizens to carry tracking implants, you need to start reading the news a little more often.” Black [53] echoes these sentiments: “Strictly voluntary? So far so good. But now imagine that same chip being used by a totalitarian government to keep track of or round up political activists or others who are considered enemies of the state. In the wrong hands, the VeriChip could empower the wrong people.” In a report written by Ramesh [[lxi]] for the Franklin Pierce Law Centre the prediction is made that:

[a] national identification system via microchip implants could be achieved in two stages: Upon introduction as a voluntary system, the microchip implantation will appear to be palatable. After there is a familiarity with the procedure and a knowledge of its benefits, implantation would be mandatory.

Bob Gellman, a Washington privacy consultant, likens this to “a sort of modern version of tattooing people, something that for obvious reasons- the Nazis tattooed numbers on people- no one proposes” [49, [lxii], [lxiii]]. The real issue at hand, as Gellman sees it, is “who will be able to demand that a chip be implanted in another person.” Mieszkowski supports Gray by observing how quickly a new technological “option” can become a requirement. Resistance after the voluntary adoption stage can be rather futile if momentum is leading the device towards a mandatory role.

McMurchie [[lxiv]] reveals the subtle progression toward embedded devices:

[a]s we look at wearable computers, it’s not a big jump to say, ‘OK, you have a wearable, why not just embed the device?’… And no one can rule out the possibility that employees might one day be asked to sport embedded chips for ultimate access control and security…

Professor Chris Hables Gray uses the example of prospective military chip implant applications. How can a marine, for instance, resist implantation? Timothy McVeigh, the convicted Oklahoma City bomber, claimed that during the Gulf War he was implanted with a microchip against his will. The claims have been denied by the U.S. military [[lxv]]; however, the British Army is supposedly considering projects such as APRIL (Army Personnel Rationalization Individual Listings) [51]. Some cyberpunks have attempted to counteract the possibility of enforced implantation. One punk, known by the name of “Z.L.”, is an avid reader of MIT specialist publications like open|DOOR, the MIT magazine on bioengineering and beyond. Z.L.’s research has indicated that:

[i]t is only a matter of time… before technology is integrated within the body. Anticipating the revolution, he has already taught himself how to do surgical implants and other operations. “The state uses technology to strengthen its control over us,” he says. “By opposing this control, I remain a punk. When the first electronic tags are implanted in the bodies of criminals, maybe in the next five years, I’ll know how to remove them, deactivate them and spread viruses to roll over Big Brother” [25].

5.3 Health Risks

Public concern about electromagnetic fields from cellular phones was a contentious issue in the late 1990s. Now it seems that the majority of people in More Developed Countries (MDCs) have become so dependent on mobile phones that they disregard the potential health risks associated with the technology [[lxvi]]. Though very little has been proven concretely, most terminal manufacturers do include a warning with their packaging, encouraging users not to touch the antenna of the phone during transmission [[lxvii]]. Covacio [11] is among the few authors to discuss the potential technological problems associated with microchips for human ID from a health perspective. In his paper he provides evidence as to why implants may impact humans adversely, categorizing the effects into thermal (i.e. whole or partial rise in body heating), stimulation (i.e. excitation of nerves and muscles) and other effects, most of which are currently unknown. He states that research into RFID and mobile telephone technology [11]:

...has revealed a growing concern with the effects of radio frequency and non-ionizing radiation on organic matter. It has been revealed a number of low-level, and possible high-level risks are associated with the use of radio-frequency technology. Effects of X-rays and gamma rays have been well documented in medical and electronic journals…

In considering future wearable devices, Salonen [[lxviii]] puts forward the idea of directing the antenna away from the head, where “there may be either a thermal insult produced by power deposition in tissue (acute effects) or other (long-term) effects”, to midway between the shoulder and elbow, where radiation can be pushed outward from the body. Yet chip implants may also pose problems, particularly active implants that contain batteries and are prone to leakage if transponders are accidentally broken. Geers et al. [[lxix]] write the following regarding animal implants:

Another important aspect is the potential toxic effect of the battery when using active transponders. Although it should be clear that pieces of glass or copper from passive tags are not allowed to enter the food chain. When using electronic monitoring with the current available technology, a battery is necessary to guarantee correct functioning of sensors when the transponder is outside the antenna field. If the transponder should break in the animal’s body, battery fluid may escape, and the question of toxological effects has to be answered.

In fact, we need only consider the very real problems that women with failed silicone breast implants have had to suffer. Will individuals with chip implants, twenty years down the track, be tied up in similar court battles and with severe medical problems? Surgical implantation, it must also be stated, causes some degree of stress in an animal, and it takes between four and seven days for the animal to return to equilibrium [69]. Most certainly some discomfort must be felt by humans as well. In the Cyborg 1.0 project, Warwick was advised to leave the implant under his skin for only ten days. According to Trull [[lxx]], Warwick was taking antibiotics to fight the possibility of infection. Warwick also reportedly told his son while playing squash during Cyborg 1.0: “Whatever you do, don’t hit my arm. The implant could just shatter, and you’ll have ruined your father’s arm for life” [[lxxi]]. It is also worthwhile noting Warwick’s appearance after the Cyborg 2.0 experiment. He looked pale and weary in press release photographs, like someone who had undergone a major operation. Covacio [11] believes ultimately that the widespread implantation of microchips in humans will lead to detrimental effects on people and the environment at large. Satellite technology (i.e. the use of GPS to locate individuals), microwave RF and related technological gadgetry will ultimately “increase health problems and consequentially increase pressure on health services already under economic duress.”

6. Cultural Issues

6.1 The Net Generation

When the ENIAC was first made known to the public in February 1946, reporters used “anthropomorphic” and “awesome characterizations” to describe the computer. The news was received with skepticism by citizens who feared the unknown. In an article titled “The Myth of the Awesome Thinking Machine”, Martin [[lxxii]] stated that the ENIAC was referred to in headlines as “a child, a Frankenstein, a whiz kid, a predictor and controller of weather, and a wizard”. Photographs of the ENIAC used in publications usually depicted the computer as completely filling a small room, from wall to wall and floor to ceiling. People are usually shown interacting with the machine, feeding it with instructions, waiting for results and monitoring its behavior. One could almost imagine that the persons in the photographs are ‘inside the body’ of the ENIAC [[lxxiii]]. Sweeping changes have taken place since that time, particularly since the mid 1980s. Consumers now own personal computers (PCs) in their homes- these are increasingly being networked- and they carry laptop computers, mobile phones and chip cards, and closely interact with public automated kiosks. Relatively speaking, it has not taken long for people to adapt to the changes that this new technology has heralded. Today we speak of a Net Generation (N-Geners) who never knew a world without computers or the Internet [[lxxiv]]; for them the digital world is as ubiquitous as the air that they breathe. What is important to N-Geners is not how they got to where they are today but what digital prospects the future holds.

“[O]ur increasing cultural acceptance of high-tech gadgetry has led to a new way of thinking: robotic implants could be so advantageous that people might actually want to become cybernetic organisms, by choice. The popularization of the cyberpunk genre has demonstrated that it can be hip to have a chip in your head” [70].

6.2 Science Fiction Genre

The predictions of science fiction writers have often been promoted through the use of print, sound and visual mediums. Below is a list of sci-fi novels, films and television series that undoubtedly have influenced and are still influencing the trajectory of auto-ID. Chris Hables Gray tells his students “…that a lot of the best cyborgology has been done in the mass media and in fiction by science fiction writers, and science fiction movie producers, because they’re thinking through these things” [[lxxv]]. The popular 1970s series The Six Million Dollar Man, for instance, began as follows: “We can rebuild him. We have the technology. We have the capability to make the world’s first Bionic man.” Today bionic limbs are a reality and no longer science fiction [[lxxvi]]. More recently AT&T’s Wireless mMode magazine alluded to Star Trek [[lxxvii]]:

They also talked about their expectations- one media executive summed it up best, saying, “Remember that little box that Mr. Spock had on Star Trek? The one that did everything? That’s what I’d like my phone to be…”

Beyond auto-ID we find a continuing legacy in the sci-fi genre toward the electrification of humans- from Frankenstein to Davros in Dr Who, and from Total Recall to Johnny Mnemonic (see exhibit 1.2). While all this is indeed ‘merely’ sci-fi, it is giving some form to the word, allowing the imagination to be captured in powerful images, sounds and models. What next? A vision of mechanized misery [[lxxviii]] as portrayed in Fritz Lang’s 1927 cult film classic Metropolis? Only this time, instead of being at the mercy of the Machine, we have gone one step further and invited the Machine to reside inside the body, marking it as a ‘technological breakthrough’ as well. As several commentators have noted, “[w]e live in an era that… itself often seems like science fiction, and Metropolis has contributed powerfully to that seeming” [[lxxix]].

Exhibit 1.2     Sci-Fi Film Genre Pointing to the Electrification of Humans

Some of the more notable predictions and social critiques are contained within the following novels: Frankenstein (Shelley 1818), Paris in the 20th Century (Verne 1863), Looking Backward (Bellamy 1888), The Time Machine (Wells 1895), R.U.R. (Capek 1920), Brave New World (Huxley 1932), 1984 (Orwell 1949), I, Robot (Asimov 1950), Foundation (Asimov 1951-53, 1982), 2001: A Space Odyssey (Clarke 1968), Blade Runner (Dick 1968), Neuromancer (Gibson 1984), The Marked Man (Ingrid 1989), The Silicon Man (Platt 1991), Silicon Karma (Easton 1997). The effects of film have been even more substantial on the individual as films have put some form to the predictions. These include: Metropolis (Fritz Lang 1927), Forbidden Planet (Fred Wilcox 1956), Fail Safe (Sidney Lumet 1964), 2001: A Space Odyssey (Stanley Kubrick 1968), THX-1138 (George Lucas 1971), The Terminal Man (Mike Hodges 1974), Zardoz (John Boorman 1974), Star Wars (George Lucas 1977), Moonraker (Lewis Gilbert 1979), Star Trek (Robert Wise 1979), For Your Eyes Only (John Glen 1981), Blade Runner (Ridley Scott 1982), War Games (John Badham 1983), 2010: The Year We Make Contact (Peter Hyams 1984), RoboCop (Paul Verhoeven 1987), Total Recall (Paul Verhoeven 1990), The Terminator series, Sneakers (Phil Alden Robinson 1992), Patriot Games (Phillip Noyce 1992), The Lawnmower Man (Brett Leonard 1992), Demolition Man (Marco Brambilla 1993), Jurassic Park (Steven Spielberg 1993), Hackers (Iain Softley 1995), Johnny Mnemonic (Robert Longo 1995), The Net (Irwin Winkler 1995) [[lxxx]], Gattaca (Andrew Niccol 1997), Enemy of the State (Tony Scott 1998), Fortress 2 (Geoff Murphy 1999), The Matrix (L. Wachowski & A. Wachowski 1999), Mission Impossible 2 (John Woo 2000), The 6th Day (Roger Spottiswoode 2000).
Other notable television series include: Dr Who, Lost in Space, Dick Tracy, The Jetsons, Star Trek, Batman, Get Smart, The Six Million Dollar Man, Andromeda, Babylon 5, Gasaraki, Stargate SG-1, Neon Genesis Evangelion, Farscape, and The X-Files.

6.3 Shifting Cultural Values

Auto-ID and more generally computer and network systems have influenced changes in language, art [[lxxxi]], music and film. An article by Branwyn [[lxxxii]] summarizes these changes well.

Language [[lxxxiii]]: “Computer network and hacker slang is filled with references to “being wired” or “jacking in” (to a computer network), “wetware” (the brain), and “meat” (the body)”.
Music: “Recent albums by digital artists Brian Eno, Clock DVA, and Frontline Assembly sport names like Nerve Net, Man Amplified and Tactical Neural Implant.” See also the 1978 album by Kraftwerk titled “The Man Machine”.
Film: “Science fiction films, from Robocop to the recent Japanese cult film Tetsuo: The Iron Man, imprint our imaginations with images of the new.”

Apart from the plethora of new terms that have been born from the widespread use of IT&T, and more specifically from extropians (many of which have religious connotations or allusions [[lxxxiv]]), it is art, especially body art, that is being heavily influenced by chip implant technology. Mieszkowski [49] believes that “chipification” will be the next big wave in place of tattoos, piercing and scarification (see exhibit 1.3). In the U.S. it was estimated in 2001 that about two hundred Americans had permanently changed their bodies, at around nine hundred dollars per implant, following a method developed by Steve Hayworth and Jon Cobb [25].

Exhibit 1.3     The New Fashion: Bar Code Tattoos, Piercing & Chips

Canadian artist Nancy Nisbet has implanted microchips in her hands to better understand how implant technology may affect human identity. The artist told Scheeres [[lxxxv]], “I am expecting the merger between human and machines to proceed whether we want it to or not…” As far back as 1997, Eduardo Kac “inserted a chip into his ankle during a live performance in Sao Paulo, then registered himself in an online pet database as both owner and animal” [86]. Perhaps the actual implant ceremony was not Kac’s main contribution but the subsequent registration onto a pet database. Other artists like Natasha Vita-More and Stelarc have ventured beyond localized chip implants. Their vision is of a complete prosthetic body that will comprise nanotechnology, artificial intelligence, robotics, cloning, and even nanobots [75]. More calls her future body design Primo 3M Plus. Stelarc’s live performances, however, have been heralded as the closest thing there is to imagining a world where the human body will become obsolete [[lxxxvi]].

A Stelarc performance… usually involves a disturbing mix of amplified sounds of human organs and techno beats, an internal camera projecting images of his innards, perhaps a set of robotic legs or an extra arm, or maybe tubes and wires connecting the performer’s body to the internet with people in another country manipulating the sensors, jerking him into a spastic dance. It’s a dark vision, but it definitely makes you think [75].

Warwick [[lxxxvii]] believes that the new technologies “will dramatically change [art], but not destroy it.”

6.4 Medical Marvels or Human Evolution

As Sackman wrote in 1967, “...the impact of automation on the individual involve[d] a reconstruction of his values, his outlook and his way of life” [[lxxxviii]]. Marshall McLuhan [[lxxxix], [xc]] was one of the first explorers to probe how the psycho-social complex was influenced by electricity. “Electricity continually transforms everything, especially the way people think, and confirms the power of uncertainty in the quest for absolute knowledge” [[xci]]. Numerous examples can be given to illustrate these major cultural changes- from the use of electricity for household warmth, to wide area networks (WANs) enabling voice and data communications across long distances, to magnetic-stripe cards used for credit transactions [[xcii], [xciii], [xciv], [xcv]]. But what of the direct unification of humans and technology, i.e., the fusion between flesh and electronic circuitry [[xcvi], [xcvii], [xcviii]]? Consider for a moment the impact that chip implants have had on the estimated 23,000 cochlear implant recipients in the US. A medical marvel perhaps, but it, too, is not without controversy. There are potentially 500,000 hearing-impaired persons who could benefit from cochlear implants [[xcix]], but not every deaf person wants one.

Some deaf activists… are critical of parents who subject children to such surgery [cochlear implants] because, as one charged, the prosthesis imparts “the nonhealthy self-concept of having had something wrong with one’s body” rather than the “healthy self-concept of [being] a proud Deaf” [[c]].

Scott Bally, Assistant Professor of Audiology at Gallaudet University, has said: “Many deaf people feel as though deafness is not a handicap. They are culturally deaf individuals who have successfully adapted themselves to being deaf and feel as though things like cochlear implants would take them out of their deaf culture, a culture which provides a significant degree of support” [82].

Putting this delicate debate aside, it is here that some delineation can be made between implants that are used to treat an ailment or disability (i.e. giving sight to the blind and hearing to the deaf), and implants that may be used for enhancing human function (e.g. memory). Some citizens are concerned about the direction of the human species as future predictions of fully functional neural implants are being made by credible scientists. “[Q]uestions are raised as to how society as a whole will relate to people walking around with plugs and wires sprouting out of their heads. And who will decide which segments of the society become the wire-heads” [82]? Those who can afford the procedures perhaps? And what of the possibility of brain viruses that could be fatal, and of technological obsolescence that may require people to undergo frequent operations? Maybury [[ci]] believes that humans are already beginning to suffer from a type of “mental atrophy” worse than that which occurred during the industrial revolution, and that the only way to fight it is to hang on to those essential skills that are required for human survival. The question remains whether it is society that shapes technology [[cii]] or technology that shapes society [[ciii]]. Inevitably it is a dynamic process of push and pull that causes cultural transformations over time.

7 Religious Issues

7.1 The Mark of the Beast

Ever since the bar code symbology UPC (Universal Product Code) became widespread, some Christian groups have linked auto-ID to the “mark” in the Book of Revelation (13:18): “the number of the beast… is 666” [[civ], [cv], [cvi]]. Coincidentally, the left (101), centre (01010) and right (101) guard patterns of the UPC bars resemble the encoding of 6, 6, 6 (see exhibit 1.4). As it is now an established standard for every non-perishable item to be bar coded, a close association was made with the prophecy: “so that no one could buy or sell unless he had the mark” (Rev 13:17). In full, verses 16-18 of chapter 13 of Revelation read as follows:

He also forced everyone, small and great, rich and poor, free and slave, to receive a mark on his right hand or on his forehead, so that no one could buy or sell unless he had the mark, which is the name of the beast or the number of his name. This calls for wisdom. If anyone has insight, let him calculate the number of the beast, for it is man’s number. His number is 666. [[cvii]]

According to some Christians, this reference would appear to be alluding to a mark on or in the human body, the prediction being made that the UPC would eventually end up on or under human skin [[cviii]]. As the selection environment of auto-ID devices grew, the interpretation of the prophecy developed further as to the actual guise of the mark. It was no longer interpreted to be ‘just’ the bar code (see exhibit 1.4). Some of the more prominent religious web sites that discuss auto-ID and the number of the beast include: (2003), (2003), (2003), (2003), (2003) and (1996). At first the sites focused on bar code technology; now they have grown to encompass a plethora of auto-ID technologies, especially biometrics and looming chip implants. For a thorough analysis of the background, sources and interpretation of the “number of the beast” see M.G. Michael’s thesis [[cix]].
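The numeric coincidence in the UPC claim above can be checked against the published UPC-A symbology. The sketch below (illustrative only, not drawn from this chapter) lists the standard seven-module digit encodings and shows that each guard pattern prints as two thin bars, just as the right-hand encoding of the digit 6 does; the resemblance is visual, since no guard pattern is literally a digit code.

```python
import re

# Standard UPC-A left-hand (odd-parity) digit encodings.
# 1 = bar module, 0 = space module.
L_CODES = {
    "0": "0001101", "1": "0011001", "2": "0010011", "3": "0111101",
    "4": "0100011", "5": "0110001", "6": "0101111", "7": "0111011",
    "8": "0110111", "9": "0001011",
}
# Right-hand encodings are the bitwise complements of the left-hand ones.
R_CODES = {d: "".join("1" if b == "0" else "0" for b in bits)
           for d, bits in L_CODES.items()}

LEFT_GUARD, CENTER_GUARD, RIGHT_GUARD = "101", "01010", "101"

def bar_widths(bits):
    """Widths of the printed bars (runs of 1s) in a module pattern."""
    return [len(run) for run in re.findall(r"1+", bits)]

# Each guard pattern renders as two thin bars...
print(bar_widths(LEFT_GUARD), bar_widths(CENTER_GUARD))  # [1, 1] [1, 1]
# ...and so does the right-hand code for 6:
print(R_CODES["6"], bar_widths(R_CODES["6"]))            # 1010000 [1, 1]
```

The point the sketch makes is narrow: the guards share the two-thin-bar appearance of a right-hand 6, which is the whole basis of the popular “666” reading, while the actual module sequences differ.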

Card technology such as magnetic-stripe and smart cards became the next focus, as devices that would gradually pave the way for a permanent ID for all citizens globally: “He also forced everyone, small and great, rich and poor, free and slave, to receive a mark…” (Rev 13:16). Biometrics was then introduced, and immediately the association was made that the “mark” [charagma] would appear on the “right hand” (i.e. palmprint or fingerprint) or on the “forehead” (facial/iris recognition) as was supposedly prophesied (Rev. 13:16). For the uses of charagma in early Christian literature see Arndt and Gingrich [[cx]]. Short of calling this group of people fundamentalists, as Woodward [15] refers to one prominent leader, Davies is more circumspect [[cxi]]:

“I think they’re legitimate [claims]. People have always rejected certain information practices for a variety of reasons: personal, cultural, ethical, religious and legal. And I think it has to be said that if a person feels bad for whatever reason, about the use of a body part then that’s entirely legitimate and has to be respected”.

Finally RF/ID transponders made their way into pets and livestock for identification, and that is when some Christian groups announced that the ‘authentic’ mark was now possible, and that it was only a matter of time before it would find its way into citizen applications [[cxii]]. Terry Cook [[cxiii]], for instance, an outspoken religious commentator and popular author, “worries the identification chip could be the ‘mark of the beast’, an identifying mark that all people will be forced to wear just before the end times, according to the Bible” [[cxiv]]. The description of an implant procedure for sows that Geers et al. [69] gives, especially the section about an incision being made on the skin, is what some religious advocates fear may happen to humans as well in the future.

When the thermistor was implanted the sows were restrained with a lasso. The implantation site was locally anaesthetized with a procaine (2%) injection, shaved and disinfected. After making a small incision in the skin, the thermistor was implanted subcutaneously, and the incision was closed by sewing. The position of the thermistor (accuracy 0.1C) was wire-connected to a data acquisition system linked to a personal computer.

“Religious advocates say it [i.e. transponder implants] represents ‘the mark of the Beast’, or the anti-Christ” [[cxv]]. Christians who take this mark, for whatever reason, are said to be denouncing the seal of baptism, and accepting the Antichrist in place of Christ [[cxvi], [cxvii], [cxviii]]. Horn [[cxix]] explains:

[m]any Christians believe that, before long, an antichrist system will appear. It will be a New World Order, under which national boundaries dissolve, and ethnic groups, ideologies, religions, and economics from around the world, orchestrate a single and dominant sovereignty… According to popular Biblical interpretation, a single personality will surface at the head of the utopian administration… With imperious decree the Antichrist will facilitate a one-world government, universal religion, and globally monitored socialism. Those who refuse his New World Order will inevitably be imprisoned or destroyed.

References discussing the New World Order include Barnet and Cavanagh [[cxx]], Wilshire [[cxxi]], and Smith [[cxxii]].

Exhibit 1.4     The Mark of the Beast as Shown on

Companies that specialize in the manufacture of chip implant solutions, whether for animals or for humans, have been targeted by some religious advocates. The bad publicity has not been welcomed by these companies- some have even notably “toned down” the graphic visuals on their web sites so that they do not attract the wrong ‘type’ of web surfers. While they are trying to promote an image of safety and security, some advocates have associated company brands and products with apocalyptic labels. Some of the company and product names include: Biomark, BioWare, BRANDERS, MARC, Soul Catcher, Digital Angel and Therion Corporation. Perhaps the interesting thing to note is that religious advocates and civil libertarians agree that ultimately chip implant technology will be used by governments to control citizens. ADSX is one of the companies that have publicly stated that they do not want adverse publicity after pouring hundreds of thousands of dollars into research and development and the multi-million dollar purchase of the Destron Fearing company. So concerned were they that they even appeared on the Christian talk show The 700 Club, emphasizing that the device would create a lot of benefits and was not meant to fulfill prophecy [60]. A spokesperson for ADSX said: “[w]e don’t want the adverse publicity. There are a number of privacy concerns and religious implications- fundamentalist Christian groups regard [i.e., implanting computer chips] as the Devil’s work” [51]. According to Gary Wohlscheid, the president of The Last Day Ministries, the VeriChip could well be the mark. Wohlscheid believes that out of all the auto-ID technologies with the potential to be the mark, the VeriChip is the closest. About the VeriChip he says, however, “[i]t’s definitely not the final product, but it’s a step toward it. Within three to four years, people will be required to use it. Those that reject it will be put to death” [56].
These are, of course, the positions of those who have entered the debate from the so-called fundamentalist literalist perspective, and they represent the more vocal and visible spectrum of contemporary “apocalyptic” Christianity. In this context fundamentalism seems to have become a common label for anyone within the Christian community who questions the trajectory of technological advancement.

With respect to the potential of brain chips in the perceived quest for “immortality” [13, 14], many Christians across the denominational spectrum see this as an attempt to usurp the Eternal Life promised by God, in Jesus Christ, through the Holy Spirit. This is similar to the case of human cloning, where specialist geneticists are accused of trying to play God by usurping the Creator’s role. However, the area is notoriously grey here: when, for instance, do implants for medical breakthroughs become acceptable versus those required for purposes of clear identification? In the future the technology in question could end up merging the two functions onto a single device. This is a real and very possible outcome when all factors, both market and ethical, are taken on board by the relevant stakeholders. Ultimately, for most members of a believing religious community, this subject revolves around the most important question of individual freedom and the right to choose [[cxxiii], [cxxiv]].

8. Ethical Issues

In an attempt to make our world a safer place we have inadvertently infringed on our privacy and our freedom through the use of surveillance cameras and other ancillary technologies. We equip our children with mobile phones, attach tracking devices to them or make them carry them [[cxxv]] in their bags, and soon we might even be implanting them with microchips [[cxxvi]]. This all comes at a price- yet it seems more and more people are willing to pay it as heinous crimes become common events in a society that should know better. Take the example of 11-year-old Danielle Duval, who is about to have an active chip (i.e. containing a rechargeable battery) implanted in her. Her mother believes that it is no different to tracking a stolen car, simply that it is being used for another, more important, application. Mrs Duval is considering implanting her younger daughter, aged 7, as well, but will wait until the child is a bit older: “so that she fully understands what’s happening” [[cxxvii]]. One could be excused for asking whether Danielle, at the age of 11, can fully comprehend the implications of the procedure she is about to undergo. Waiting until the age of consent would seem more appropriate.

Warwick has said that an urgent debate is required on this matter (i.e. whether every child should be implanted by law), and on whether signals from the chips should be emitted on a 24x7 basis or only triggered during emergencies. Warwick holds the position that “we cannot prejudge ethics” [87]. He believes that ethics can only be debated, and conclusions reached, after people become aware of the technical possibilities once they have been demonstrated. He admits that ethics may differ between countries and cultures [[cxxviii]]. The main ethical problem related to chip implants seems to be that they are under the skin [70] and cannot simply be removed by the user at their convenience. In fact there is nothing to stop anyone from getting multiple implants all over their body, rendering some applications useless. Tien of the Electronic Frontier Foundation (EFF) is convinced that if a technology is there to be abused, whether it is chip implants or national ID cards, then it will be, because that is just human nature [[cxxix]]. Similarly, Kidscape, a charity aimed at reducing the incidence of sexual abuse of children, believes that implants will not act to curb crime. Kidscape holds the position that rather than giving children a false sense of security because they are implanted with a tracking device that could be tampered with by an offender, children should be educated on the possible dangers. Implanted tracking devices may sound entirely foolproof, but emergency personnel, whether police or ambulance, cannot magically appear at the scene of a crime in time to stop an offender from committing violence against a hostage.

8.1 The Prospect of International ID Implants

There are numerous arguments for why implanting a chip in a person is outright unconstitutional. But perhaps the under-explored area, as Gellman puts it, concerns the legal and social issues of who would have power over the chip and the information gathered by its means [49]. Gellman is correct in his summation of the problem, but science has a proven way of going into uncharted territory first and asking the questions about implications later. ADSX, for instance, has already launched the VeriChip solution. Sullivan, a spokesperson for the company, told Salkowski [50]:

“I’m certainly not a believer in the abuse of power,” he offered, suggesting that Congress could always ban export of his company’s device. Of course, he admits he wouldn’t exactly lobby for that law. “I’m a businessman,” he said.

Black [53] makes the observation that the US government might well place constraints on international sales of the VeriChip if it felt it could be used against them by an enemy. Consider the governance issues surrounding GPS technology, which has been in operation a lot longer than human RFID implants.

“Good, neutral, or perhaps undesirable outcomes are now possible… Tension arises between some of the civil/commercial applications and the desire to preclude an adversary’s use of GPS. It is extremely difficult (technically, institutionally, politically, and economically) to combine the nonmilitary benefits of the system that require universality of access, ease of use, and low cost with military requirements for denial of the system to adversaries. Practical considerations require civil/commercial applications to have relatively easy access” [[cxxx]].

From a different angle, Rummler [[cxxxi]] points out that the monitoring and tracking of individuals raises serious legal implications regarding the individual’s capacity to maintain their right to freedom. He wrote: “[o]nce implanted with bio-implant electronic devices, humans might become highly dependent on the creators of these devices for their repair, recharge, and maintenance. It could be possible to modify the person technologically… thus placing them under the absolute control of the designers of the technology.” The Food and Drug Administration’s (FDA) Dr. David Feigal has been vocal about the need for devices such as the VeriChip not to take medical applications lightly, and for companies wishing to specialize in health-related implants to be in close consultation with the FDA [[cxxxii], [cxxxiii]]. There is also the possibility that such developments, i.e. regulating chip implants, may ultimately be used against an individual. The Freedom of Information Act, for instance, already allows U.S. authorities to access automatic vehicle toll-passes to provide evidence in court [2]; there is nothing to suggest this will not happen with RFID transponder implants as well, despite the myriad of promises made by ADSX. Professor Gray is adamant that there is no stopping technological evolution no matter how sinister some technologies may appear, and that we need to become accustomed to the fact that new technologies will continually infringe upon the constitution [49].

8.2 Beyond Chip Implants

Luggables, like mobile phones, do create a sense of attachment between the user and the device, but the devices are still physically separate; they can accidentally be left behind. Wearable computers, on the other hand, are a part of the user: they are worn, and they “create an intimate human-computer-symbiosis in which respective strengths combine” [[cxxxiv]]. Mann calls this human-computer symbiosis “humanistic intelligence” (HI), as opposed to HCI (human-computer interaction).

[W]e prefer not to think of the wearer and the computer with its associated I/O apparatus as separate entities. Instead, we regard the computer as a second brain and its sensory modalities as additional senses, which synthetic synesthesia merges with the wearer’s senses. [[cxxxv]]
Exhibit 1.5     The Process of Transformation

Human-computer electrification is set to make this bond irrevocable (see exhibit 1.5). Once on that path there is no turning back. If at present all this seems impossible - a myth, an unlikely prediction far gone - due to end-user resistance and other obstacles facing the industry today, history should teach us otherwise. This year alone, millions of babies will be born into a world where there are companies on the New York Stock Exchange specializing in chip implant devices for humans. “They” will grow up believing that these technologies are not only “normal” but also quite useful, just like other high-tech technologies before them such as the Internet, PCs and smart cards. Consider the case of Cynthia Tam, aged two, who is an avid computer user:

“[i]t took a couple of days for her to understand the connection between the mouse in her hand and the cursor on the screen and then she was off… The biggest problem for Cynthia’s parents is how to get her to stop… for Cynthia, the computer is already a part of her environment… Cynthia’s generation will not think twice about buying things on the Internet, just like most people today don’t think twice when paying credit card, or using cash points for withdrawals and deposits” [[cxxxvi]].

But you do not have to be a newborn baby to adapt to technological change. Even grandmothers and grandfathers surf the web these days and send emails as a cheaper alternative to post or telephone [74]. And migrants struggling with a foreign language will memorize key combinations to withdraw money, even if they do not fully comprehend the actions they are commanding throughout the process. Schiele [[cxxxvii]] believes that our personal habits are shaped by technological change and that, over time, new technologies that at first seem appropriate only for technophiles eventually find themselves being used by the average person: “[o]ver time our culture will adjust to incorporate the devices.” Gotterbarn is in agreement [10].

We enthusiastically adopt the latest gadget for one use, but then we start to realize that it gives us power for another use. Then there is the inevitable realization that we have overlooked the way it impacts other people, giving rise to professional and ethical issues.

What is apparent, regardless of how far electrophoresis is taken, is that the once irreconcilable gap between human and machine is closing (see exhibit 1.6).

Beyond chip implants for tracking there are the possibilities associated with neural prosthetics and the potential to directly link computers to humans [[cxxxviii]]. Warwick is also well aware that one of the major obstacles to cyber-humans is the associated moral issues [[cxxxix], [cxl]]: who gives anyone the right to conduct complex procedures on a perfectly healthy person, and who will take responsibility for any complications that present themselves? Rummler [131] asks whether it is ethical to be linking computers to humans in the first place, and whether limitations should be placed on which procedures can be conducted, even if they are possible. For instance, could this be considered a violation of human rights? And, more to the point, what will it mean in the future to call oneself “human”? McGrath [[cxli]] asks “how human”?

As technology fills you up with synthetic parts, at what point do you cease to be fully human? One quarter? One third?... At bottom lies one critical issue for a technological age: are some kinds of knowledge so terrible they simply should not be pursued? If there can be such a thing as a philosophical crisis, this will be it. These questions, says Rushworth Kidder, president of the Institute for Global Ethics in Camden, Maine, are especially vexing because they lie at “the convergence of three domains- technology, politics and ethics- that are so far hardly on speaking terms”.

At the point of becoming an electrophorus (i.e. a bearer of electricity), “[y]ou are not just a human linked with technology; you are something different and your values and judgment will change” [[cxlii]]. Some suspect that it will even become possible to alter behavior in people with brain implants [51], whether they will it or not. Maybury [101] believes that “[t]he advent of machine intelligence raises social and ethical issues that may ultimately challenge human existence on earth.”


Exhibit 1.6     Marketing Campaigns that Point to the Electrophorus

Gotterbarn [10] argues precisely that our view of computer technologies generally progresses through several stages:

1) naïve innocence and technological wonder, 2) power and control, and 3) finally, sometimes because of disasters during the second stage, an understanding of the essential relationship between technologies and values.

Bill Joy, the chief technologist of Sun Microsystems, feels a sense of unease about such predictions made by Ray Kurzweil in The Age of Spiritual Machines [138], not only because Kurzweil has proven technically competent in the past, but because of his ultimate vision for humanity: “a near immortality by becoming one with robotic technology” [[cxliii]]. Joy was severely criticized for being narrow-sighted, even a fundamentalist of sorts, after publishing his paper in Wired, but all he did was dare to ask the questions: ‘Do we know what we are doing? Has anyone really carefully thought about this?’ Joy believes [143]:

[w]e are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don’t believe so, but we aren’t trying yet, and the last chance to assert control- the fail-safe point- is rapidly approaching.

Surely there is a pressing need for ethical dialogue [[cxliv]] on auto-ID innovation and, more generally, IT&T. If there has ever been a time when engineers have had to act socially responsibly [[cxlv]], it is now, as we are at a defining crossroads.

The new era of biomedical and genetic research merges the worlds of engineering, computer and information technology with traditional medical research. Some of the most significant and far-reaching discoveries are being made at the interface of these disciplines. [[cxlvi]]

9. Conclusion

The principal objective of this paper was to encourage critical discussion on the exigent topic of human implants in e-business applications by documenting the central social, cultural, religious and ethical issues. The evidence provided indicates that technology-push has been the driving force behind many of the new RFID transponder implant applications, instead of market-pull. What is most alarming is the rate of change in technological capabilities without the commensurate response from informed community involvement or ethical discourse on what these changes actually “mean”, not only for the present but also for the future. It seems that the normal standard now is to introduce a technology, stand back to see its general effects on society, and then act to rectify problems as they arise. The concluding point of this paper is that the long-term side effects of a technology should be considered at the outset and not after the event. One need only bring to mind the Atomic Bomb and the Chernobyl disaster for what is possible, if not inevitable, once a technology is set on its ultimate trajectory [103]. As citizens it is our duty to remain knowledgeable about scientific developments and to discuss the possible ethical implications again and again [10]. In the end we can point the finger at the Mad Scientists [75], but we too must be socially responsible, lest we become our own worst enemy [[cxlvii]]. It is certainly a case of caveat emptor: let the buyer beware.

10. References

[1] Cohen, T., The Tattoo, Savvas, Sydney (1994).

[2] Sanchez-Klein, J., “Cyberfuturist plants chip in arm to test human-computer interaction”, CNN Interactive, armchip.idg/index.html, [Accessed 28 August 1998], pp. 1-2 (1998).

[3] Jones, C., “Kevin Warwick: Saviour of humankind?”, BBC News, [Accessed 4 January 2003], pp. 1-4 (2000).

[4] ADSX, “Homepage”, Applied Digital Solutions, [Accessed 1 March 2004], p. 1 (2004).

[5] ADSX, “VeriChip Corporation”, Applied Digital Solutions, [Accessed 1 April 2004], p. 1 (2004).

[6] Warwick, K., “Professor of Cybernetics, University of Reading”, Kevin Warwick, [Accessed 14 November 2002], pp. 1-2 (2002).

[7] Goldman, J., “Meet ‘The Chipsons’: ID chips implanted successfully in Florida family”, ABC News: techtv, TechTV/techtv_chipfamily020510.html, [Accessed 13 November 2003], pp. 1-2 (2002).

[8] Ramesh, E.M., “Time Enough: consequences of the human microchip implantation”, Franklin Pierce Law Centre, ramesh.htm, [Accessed 1 March 2004], pp. 1-26 (2004).

[9] Unatin, D., “Progress v. Privacy: the debate over computer chip implants”, JOLT: Notes, unatin. php, [Accessed 1 March 2004], pp. 1-3 (2002).

[10] Gotterbarn, D., “Injectable computers: once more into the breach! The life cycle of computer ethics awareness”, inroads- The SIGCSE Bulletin, Vol. 35, No. 4, December, pp. 10-12, (2003).

[11] Covacio, S., “Technological problems associated with the subcutaneous microchips for human identification (SMHId)”, InSITE - Where Parallels Intersect, June, pp. 843-853 (2003).

[12] Warwick, K., “I, Cyborg”, 2003 Joint Lecture: The Royal Society of Edinburgh and The Royal Academy of Engineering, The Royal Society of Edinburgh, pp. 1-16 (2003).

[13] Norman, D.A., “Cyborgs”, Communications of the ACM, Vol. 44, No. 3, March, pp. 36-37 (2001).

[14] Bell, G. & Gray, J., “Futuristic forecasts of tools and technologies: digital immortality”, Communications of the ACM, March, Vol. 44, No. 3, pp. 29-31 (2001).

[15] Woodward, J.D., “Biometrics: privacy’s foe or privacy’s friend?”, Proceedings of the IEEE, Vol. 85, No. 9, pp. 1480-1492 (1997).

[16] Rosenberg, R.S., The Social Impact of Computers, Elsevier Academic Press, California (2004).

[17] Lemonick, M.D., “Future tech is now”, Time Australia, 17 July, pp. 44-79 (1995).

[18] McGinity, M., “Body of the technology: It’s just a matter of time before a chip gets under your skin”, Communications of the ACM, 43(9), September, pp. 17-19 (2000).

[19] Stephan, R., “The ultrahuman revolution”,, [Accessed 29 November 2001], pp. 1-3 (2001).

[20] Sheridan, J.G. et al., “Spectators at a geek show: an ethnographic inquiry into wearable computing”, IEEE The Fourth International Symposium on Wearable Computers, pp. 195-196 (2000).

[21] Lukowicz, P., “The wearARM modular low-power computing core”, IEEE Micro, May-June, pp. 16-28 (2001).

[22] DeFouw, G. & Pratt, V., “The matchbox PC: a small wearable platform”, The Third International Symposium on Wearable Computers, pp. 172-175 (1999).

[23] Salonen, P. et al., “A small planar inverted-F antenna for wearable applications”, IEEE Tenth International Conference on Antennas and Propagation, Vol. 1, pp. 82-85 (1997).

[24] Mann S., “Wearable computing: a first step toward personal imaging”, IEEE Computer, February, pp. 25-32 (1997).

[25] Millanvoye, M., “Teflon under my skin”, UNESCO,, [Accessed 29 November 2001], pp. 1-2 (2001).

[26] Furui, S., “Speech recognition technology in the ubiquitous/wearable computing environment”, IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 6, pp. 3735-3738 (2000).

[27] Pickering, C., “Silicon man lives”, Forbes ASAP,, [Accessed 22 November 2001], pp. 1-2 (1999).

[28] Sydänheimo, L. et al., “Wearable and ubiquitous computer aided service, maintenance and overhaul”, IEEE International Conference on Communications, Vol. 3, pp. 2012-2017 (1999).

[29] Rhodes, B. J. et al., “Wearable computing meets ubiquitous computing: reaping the best of both worlds”, The Third International Symposium on Wearable Computers, pp. 141-149 (1999).

[30] Kaku, M., Visions: how science will revolutionize the 21st century and beyond, Oxford University Press, Oxford (1998).

[31] van Laerhoven, K. & Cakmacki, O., “What shall we teach our pants?”, IEEE The Fourth International Symposium on Wearable Computers, pp. 77-83 (2000).

[32] Kortuem, G. et al., “Context-aware, adaptive wearable computers as remote interfaces to ‘intelligent’ environments”, Second International Symposium on Wearable Computers, pp. 58-65 (1998).

[33] Starner, T., “The challenges of wearable computing: part 2”, IEEE Micro, July-August, pp. 54-67 (2001).

[34] Mills, S. (ed.), Turning Away From Technology: a new vision for the 21st century, Sierra Club Books, San Francisco (1997).

[35] Davies, S., Big Brother: Australia’s growing web of surveillance, Simon and Schuster, Sydney (1992).

[36] Davies, S., Monitor: extinguishing privacy on the information superhighway, PAN, Sydney (1996).

[37] Hibbert, C., “What to do when they ask for your social security number”, in Computerization and Controversy: value conflicts and social choices, (ed.) Rob Kling, Academic Press, New York, pp. 686-696 (1996).

[38] Kusserow, R.P., “The government needs computer matching to root out waste and fraud”, in Computerisation and Controversy: value conflicts and social choices, (ed.) Rob Kling, Academic Press, New York, part 6, section E, pp. 653f (1996).

[39] Privacy Commissioner, Selected Extracts from the Program Protocol Data-Matching Program (Assistance and Tax), Privacy Commission, Sydney (1990).

[40] Jones, D., “UK government launches smart card strategy”, Ctt, Vol. 11, No. 6, February, p. 2 (2000).

[41] Michels, S., “National ID”, Online NewsHour, newshour/bb/fedagencies/jan-june02/id_2-26.html, [Accessed 2 September 2001], pp. 1-8 (2002).

[42] Rosenberg, R.S., The Social Impact of Computers, Sydney, Elsevier, pp. 339-405 (2004).

[43] Brin, D., The Transparent Society: will technology force us to choose between privacy and freedom, Perseus Books, Massachusetts (1998).

[44] Branscomb, A. W., Who Owns Information: from privacy to public access, BasicBooks, USA (1994).

[45] Rothfeder, J., “Invasion of privacy”, PC World, Vol. 13, No. 11, pp. 152-162 (1995).

[46] Newton, J. “Reducing ‘plastic’ counterfeiting”, European Convention on Security and Detection, Vol. 408, pp. 198-201 (1995).

[47] Masterson, U.O., “A day with ‘Professor Cyborg’”, MSNBC, [Accessed 29 November 2001], pp. 1-6 (2000).

[48] Associated Press, “Chip in your shoulder? Family wants info device”, USA Today: Tech, [Accessed 15 October 2002], pp. 1-2 (2002).

[49] Mieszkowski, K., “Put that silicon where the sun don’t shine”,, Parts 1-3, [Accessed 11 November 2001], pp. 1-3 (2000).

[50] Salkowski, J., “Go track yourself”, StarNet Dispatches, http://dispatches., [Accessed 29 November 2001], pp. 1-4 (2000).

[51] LoBaido, A.C., “Soldiers with microchips: British troops experiment with implanted, electronic dog tag”,,, [Accessed 20 November 2001], pp. 1-2 (2001).

[52] Swissler, M.A., “Microchips to monitor meds”, Wired,,1282,39070,00.html, [Accessed 29 November 2001], pp. 1-3 (2000).

[53] Black, J., “Roll up your sleeve – for a chip implant”, Illuminati Conspiracy, [Accessed 15 October 2002], pp. 1-6 (2002).

[54] RFID, “Singapore fights SARS with RFID”, RFID Journal, [Accessed 1 May 2004], pp. 1-2 (2003).

[55] RFID, “Taiwan uses RFID to combat SARS”, RFID Journal, [Accessed 1 May 2004], pp. 1-2 (2003).

[56] Scheeres, J. “They want their id chips now”, Wired News,,1848,50187,00.html, [Accessed 15 October 2002], pp. 1-2 (2002).

[57] Wherify, “Frequently Asked Questions”, Wherify Wireless, [Accessed 15 April 2004], pp. 1-7 (2004).

[58] Scheeres, J., “Kidnapped? GPS to the rescue”, Wired News,,1367,50004,00.html, [Accessed 15 October 2002], pp. 1-2 (2002).

[59] McClimans, F., ‘Is that a chip in your shoulder, or are you just happy to see me?’,, idg/index.html, [Accessed 22 November 2001], pp. 1-4 (1998).

[60] Scheeres, J., “Politician wants to ‘get chipped’”, Wired News,,1282,50435,00.html, [Accessed 15 October 2002], pp. 1-2 (2002).

[61] Horn, T., “Opinionet contributed commentary”, Opinionet, http://www., [Accessed 29 November 2001], pp. 1-4 (2000).

[62] Levi, P., The Drowned and the Saved, trans. Raymond Rosenthal, Summit Books, London (1988).

[63] Lifton, R.J., The Nazi Doctors: medical killing and the psychology of genocide, Basic Books, New York (1986).

[64] McMurchie, L., “Identifying risks in biometric use”, Computing Canada, Vol. 25, No. 6, p. 11, (1999).

[65] Nairne, D., “Building better people with chips and sensors”,, [Accessed 29 November 2001], pp. 1-2 (2000).

[66] National Radiological Protection Board, “Understanding radiation: ionizing radiation and how we are exposed to it”, NRPB, topics/risks/index.htm, [Accessed 1 May 2004], pp. 1-2 (2004).

[67] Australian Communications Authority, Human exposure to radiofrequency electromagnetic energy: information for manufacturers, importers, agents, licensees or operators of radio communications transmitters, Australian regulations, Melbourne (2000).

[68] Salonen, P. et al., “A small planar inverted-F antenna for wearable applications”, IEEE Tenth International Conference on Antennas and Propagation, Vol. 1, pp. 82-85 (1997).

[69] Geers, R. et al., Electronic Identification, Monitoring and Tracking of Animals, CAN International, New York (1997).

[70] Trull, D., “Simple Cyborg”, Parascope, articles/slips/fs29_2.htm, [Accessed 20 November 2001], pp. 1-4 (1998).

[71] Witt, S., “Professor Warwick chips in”, Computerworld, 11 January, p. 89 (1999).

[72] Martin, C.D., “The myth of the awesome thinking machine”, Communications of the ACM, 36(4), pp. 120-133 (1993).

[73] Michael, K., “The automatic identification trajectory: from the ENIAC to chip implants”, in Internet Commerce: digital models for business, E. Lawrence et al., John Wiley and Sons, Queensland, pp. 131-134, 136 (2002).

[74] Tapscott, D., Growing up digital: the rise of the net generation, McGraw- Hill, New York (1998).

[75] Walker, I., “Cyborg dreams: Beyond Human”, Background Briefing ABC Radio National, 4 November, pp. 1-15 (2001)

[76] Anonymous, “Will a chip every day keep the doctor away?”, PhysicsWeb, [Accessed 29 November 2001], pp. 1-2 (2001).

[77] Goldberg, H., “Building a better mMode”, http://www.mmodemagazine. com/features/bettermmode.asp, mMode Magazine, [Accessed 1 April 2004), pp. 1-4 (2004).

[78] Wilmington, M., “Movie review, ‘Metropolis (Re-release)’”,, story, [Accessed 3 May 2004], pp. 1-3 (2004).

[79] McRoy, J., “Science fiction studies”, DePauw University, Vol. 28, No. 3,, [Accessed 3 May 2004], pp. 1-3 (2001).

[80] Anonymous, “The NET”, MovieWeb, index.html, [Accessed 3 May 2004], pp. 1-5 (2001).

[81] King, B., “Robots: It’s an art thing” 0,1294,48253,00.html, [Accessed 4 January 2003], pp. 1-2 (2001).

[82] Branwyn, G., “The desire to be wired”, Wired, September/October (1993).

[83] Schirato, T. & Yell, S. Communication & Cultural Literacy: an introduction, Allen and Unwin, NSW (1996).

[84] Dery, M., Escape Velocity: cyberculture at the end of the century, Hodder and Stoughton, London (1996).

[85] Scheeres, J., “New body art: Chip implants”, Wired News, http://www.,1284,50769,00.html, [Accessed 15 October 2002], pp. 1-2 (2002).

[86] Tysome, T., “Dance of a cyborg”, The Australian, p. 35 (2001).

[87] Warwick, K., “Frequently asked questions”, Professor Kevin Warwick,, [Accessed 20 November 2001], pp. 1-4 (2001).

[88] Sackman, H., Computers, System Science, And Evolving Society: the challenge of man-machine digital systems, Wiley, New York (1967).

[89] McLuhan, M., Understanding Media: the extensions of man, The MIT Press, England (1999).

[90] McLuhan, M. & Powers, B.R., The Global Village: transformations in world life and media in the 21st century, Oxford University Press, New York (1989).

[91] McLuhan, E. & Zingrone, F., Essential McLuhan, BasicBooks, USA (1995).

[92] Ellul, J., The Technological Society, Vintage Books, New York (1964).

[93] Toffler, A., Future Shock, Bantam Books, New York (1970).

[94] Gates, B., The Road Ahead, The Penguin Group, New York (1995).

[95] Negroponte, N., Being Digital, Hodder and Stoughton, Australia (1995).

[96] Moravec, H., Mind Children: the future of robot and human intelligence, Harvard University Press, Cambridge (1988).

[97] Moravec, H., Robot: mere machine to transcendent mind, Oxford University Press, Oxford (1999).

[98] Paul, G.S. & Cox, E.D. Beyond Humanity: cyberevolution and future minds, Charles River Media, Massachusetts (1996).

[99] Sorkin, D.L. & McClanahan, J. “Cochlear implant reimbursement cause for concern”, HealthyHearing, newroot/articles/arc_disp.asp?id=147&catid=1055, [Accessed 3 May 2004], pp. 1-4 (2004).

[100] Weber, D.O., “Me, myself, my implants, my micro-processors and I”, Software Development Magazine, documentID=11149, [Accessed 29 November 2001], pp. 1-6 (2000).

[101] Maybury, M.T., “The mind matters: artificial intelligence and its societal implications”, IEEE Technology and Society Magazine, June/July, pp. 7-15 (1990).

[102] Bijker, W.E. & Law, J. (eds), Shaping Technology/Building Society: studies in sociotechnical change, The MIT Press, Massachusetts (1992).

[103] Pool, R. Beyond Engineering: how society shapes technology, Oxford University Press, New York (1997).

[104] Hristodoulou, M. Hieromonk, “In the last days”, in Geron Paisios, Mount Athos, Greece, (in Greek), pp. 181-192 (1994).

[105] Relfe, M.S., The New Money System, Ministries Inc., Alabama (1982).

[106] Relfe, M.S., When Your Money Fails, League of Prayer, Alabama (1981).

[107] Barker, K. et al. (eds), The NIV Study Bible, Zondervan Publishing House, Michigan, pp. 1939-1940 (1995).

[108] Watkins, T., “WARNING: 666 IS COMING!”, Dial-the-Truth Ministries, [Accessed 1 August 1996], now http://www.av1611. org, pp. 1-6 (1996).

[109] Michael, M.G., The Number of the Beast, 666 (Revelation 13:16-18): Background, Sources and Interpretation, Macquarie University, MA (Hons) Thesis, Sydney, Australia (1998).

[110] Arndt, W.F. & Gingrich, F.W., A Greek-English Lexicon of the New Testament and Other Early Christian Literature, The University of Chicago Press, Chicago, p. 876 (1979).

[111] Roethenbaugh, G., “Simon Davies- Is this the most dangerous man in Europe?”, Biometrics in Human Services, Vol. 2, No. 5, pp. 2-5 (1998).

[112] Decker, S., “Technology raises concerns: Pros and cons of scientific advances weighed as Christians discuss issue”, The Falcon Online Edition,, [Accessed 1 April 2003], pp. 1-3 (2002).

[113] Cook, T.L. The Mark of the New World Order, ASIN, USA (1999).

[114] Newton, C., “U.S. to weigh computer chip implant”, Netscape: Daily News, &id= 200202261956000188605, [Accessed 15 October 2002], pp. 1-2 (2002).

[115] Associated Press, “Chip in your shoulder? Family wants info device”, USA Today: Tech, verichip-family.htm, [Accessed 15 October 2002], pp. 1-2 (2002).

[116] Michael, M.G., “For it is the number of a man”, Bulletin of Biblical Studies, Vol. 19, January-June, pp. 79-89 (2000).

[117] Michael, M.G., “666 or 616 (Rev 13:18): Arguments for the authentic reading of the Seer's conundrum”, Bulletin of Biblical Studies, Vol. 19, July-December, pp. 77-83 (2000).

[118] Bauckham, R., The Climax of Prophecy: Studies on the Book of Revelation, T & T Clark: Edinburgh, pp. 384-452 (1993).

[119] Horn, T., “Opinionet contributed commentary”, Opinionet,, [Accessed 29 November 2001], pp. 1-4 (2000).

[120] Barnet, R.J. & Cavanagh, J., Global Dreams: imperial corporations and the new world order, Simon and Schuster, New York (1994).

[121] Wilshire, B., The Fine Print, Brian Wilshire, Australia (1992).

[122] Smith, B., Warning, Smith Family Evangelism, New Zealand (1980).

[123] Stahl, W.A., God and the Chip: religion and the culture of technology, EDSR, Canada (1999).

[124] Noble, D.F., The Religion of Technology: the divinity of man and the spirit of invention, Penguin Books, England (1999).

[125] Sensormatic, “SafeKids™”, Sensormatic, html/safekids/index.htm, [Accessed 3 June 1999], pp. 1-2 (1999).

[126] Raimundo, N., ‘Digital angel or big brother?’, SCU, http://cseserv.engr. [Accessed 15th December 2002], (2002).

[127] Wilson, J., “Girl to get tracker implant to ease parents’ fears”, The Guardian,,3858,4493297,00.html, [Accessed 15 October 2002], pp. 1-2 (2002).

[128] Ermann, M.D. et al. (eds), Computers, Ethics, and Society, Oxford University Press, New York (1997).

[129] Eng, P., “I, Chip? Technology to meld chips into humans draws closer”,,  chipimplant020225.html, [Accessed 15 October 2002], pp. 1-3 (2002).

[130] Pace, S. et al. (eds), The Global Positioning System: assessing national policies, Rand Corporation, New York (1996).

[131] Rummler, D.M., “Societal issues in engineering”, ENGR 300, pp. 1-3 (2001).

[132] Associated Press, “Company gets okay to sell ID-only computer chip implant”, The Detroit News, 05/technology-457686.htm, [Accessed 15 October 2002] (2002).

[133] Associated Press, “ID chip ready for implant”, USA Today: Tech, http://, [Accessed 15 October 2002], pp. 1-2.

[134] Billinghurst, M. & Starner T., “Wearable devices: new ways to manage information”, IEEE Computer, January, Vol. 32, No. 1, pp. 57-64 (1999).

[135] Mann, S., “Wearable computing: toward humanistic intelligence”, IEEE Intelligent Systems, May/June, pp. 10-15 (2001).

[136] Chan, T., “Welcome to the Internet, baby!”, Telecom Asia, p. 38 (2001).

[137] Schiele, B. et al., “Sensory-augmented computing: wearing the museum’s guide”, IEEE Micro, pp. 44-52.

[138] Kurzweil, R., The Age of Spiritual Machines: when computers exceed human intelligence, Penguin Books, New York (1999).

[139] Irwin, A., “Brain implant lets man control computer by thought”,, [Accessed 22 November 2001], pp. 1-3 (1998).

[140] Warwick, K., “Are chip implants getting under your skin?”, Compiler,, [Accessed 1 March 2004], pp. 1-5 (2003).

[141] McGrath, P., “Technology: Building better humans”, Newsweek, http://, [Accessed 29 November], pp. 1-3 (2001).

[142] Anonymous, “Professor Cyborg”,, feature/1999/10/20/cyborg/index1.html, 3 parts, [Accessed 29 November 2001], pp. 1-3 (1999).

[143] Joy, B., “Why the future doesn’t need us”, Wired, 8.04,, [Accessed 4 January 2003], pp. 1-19 (2000).

[144] Masey, S. “Can we talk? The need for ethical dialogue”, The IEE, p. 4/1, (1998).

[145] Wenk, E., “The design of technological megasystems: new social responsibilities for engineers”, IEEE, pp. 47-61 (1990).

[146] Boehringer, B., “Benefits of the OHSU/OGI merger”, The Oregon Opportunity: A New Era of Medical Breakthroughs, about/opportunity/ohsu_ogi.htm, [Accessed 20 November 2001], pp. 1-2 (2001).

[147] Ebert, R., “Enemy of the State”, Ebert on Movies, http://www.suntimes. com/ebert/ebert_reviews/1998/11/112006.html, pp. 1-3 (2001).


Biographical Note

Dr Katina Michael is a lecturer in Information Technology at the University of Wollongong in Australia. In 1996 she completed her Bachelor of Information Technology degree with a co-operative scholarship from the University of Technology, Sydney (UTS) and in 2003 she was awarded her Doctor of Philosophy with the thesis “The Auto-ID Trajectory” from the University of Wollongong. She has an industrial background in telecommunications and has held positions as a systems analyst with United Technologies and Andersen Consulting. Most of her work experience was acquired as a senior network and business planner with Nortel Networks (1996-2001). In this capacity she consulted for Asia’s largest telecommunication operators and service providers. Katina now teaches and researches in eBusiness and her main academic interests are in the areas of automatic identification devices, third generation wireless applications, geographic information systems, and technology forecasting.

Dr M.G. Michael is a church historian and New Testament scholar. He has spoken at numerous international conferences and has written two highly regarded dissertations on the Book of Revelation. His specialist interests are in apocalypticism, millennial studies, and Orthodox mysticism. He has completed a Doctor of Philosophy at the Australian Catholic University, a Master of Arts (Honours) at Macquarie University, a Master of Theology and Bachelor of Arts at Sydney University and a Bachelor of Theology at the Sydney College of Divinity.