Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”


This chapter presents a set of scenarios involving the GoPro wearable Point of View (PoV) camera. The scenarios are meant to stimulate discussion about acceptable usage contexts with a focus on security and privacy. The chapter provides a wide array of examples of how overt wearable technologies are perceived and how they might, or might not, be welcomed into society. While the scenarios are set at the University of Wollongong campus in Australia, the main implications derived from the fictitious events are useful in drawing out the predicted pros and cons of the technology. The scenarios are interpreted and the main thematic issues are drawn out and discussed. An in-depth analysis takes place around the social implications, the moral and ethical problems associated with such technology, and possible future developments with respect to wearable devices.


This chapter presents the existing, as well as the potential future, implications of wearable computing. Essentially, the chapter builds on the scenarios presented in an IEEE Consumer Electronics Magazine article entitled “Trialability on the Run” (Gokyer & Michael, 2015). In this chapter the scenarios are interpreted qualitatively using thick description and the implications arising from these are discussed using thematic analysis. The scenario analysis is conducted through deconstruction, in order to extract the main themes and to grant the reader a deeper understanding of the possible future implications of the widespread use of wearable technology. First, each of the scenarios is analyzed to draw out the positive and the negative aspects of wearable cameras. Second, the possible future implications stemming from each scenario context are discussed under the following thematic areas: privacy, security, society, anonymity, vulnerability, trust and liberty. Third, direct evidence is provided using the insights of other research studies to support the conclusions reached and to identify plausible future implications of wearable technologies in particular use contexts, and in society at large.

The setting for the scenario is a closed-campus environment (a large Australian university). Specific contexts, such as a lecture theatre, restroom, café, bank, and library, are chosen to provide a breadth of use cases within which to analyze the respective social implications. The legal, regulatory, and policy-specific bounds of the study are taken from current laws, guidelines and normative behavior, and are used as signposts for what should, or should not, be acceptable practice. The outcomes illustrate that the use cases are not so easily interpretable, given the newness of the emerging technology of wearable computing, especially overt head-mounted cameras, which draw a great deal of attention from bystanders. Quite often the use of a head-mounted camera is opposed without qualified reasoning. “Are you recording me? Stop that please!” is a common response to audio-visual bodyworn recording technology in the public space by individuals (Michael & Michael, 2013). Yet companies such as Google have been able to use fleets of cars to gather imagery of homes and streets, with relatively little problem.

There are, indeed, laws that pertain to the misuse of surveillance devices without a warrant, to the unauthorized recording of someone else whether in a public or private space, and to voyeuristic crimes such as upskirting. While there are laws, such as the Workplace Surveillance Act 2005 (NSW), asserting a set of rules for surveillance (watching from above), the law regarding sousveillance (watching from below) is less clear (Clarke, 2012). We found that, while public spaces like libraries and lecture theatres have clear policy guidelines to follow, the actual published policies, and the position taken by security staff, do not in fact negate the potential to indirectly record another. Several times, through informal questioning, we found the strong line “you cannot do that because we have a policy that says you are not allowed to record someone” to be unsubstantiated by real, enforceable university-wide policies. Such shortcomings are now discussed in more detail against scenarios showing various sub-contexts of wearable technology in a closed-campus setting.


The term sousveillance has been defined by Steve Mann (2002) to denote a recording done from a portable device such as a head-mounted display (HMD) unit in which the wearer is a participant in the activity. In contrast to wall-mounted fixed cameras typically used for surveillance, portable devices allow inverse surveillance: recordings from the point of view of those being watched. More generally, point of view (POV) has its foundations in film, and usually depicts a scene through the eyes of a character. Body-worn video-recording technologies now mean that a wearer can shoot film from a first-person perspective of another subject or object in his or her immediate field of view (FOV), with or without a particular agenda.

During the initial rollout of Google Glass, explorers realized that recording other people with an optical HMD unit was not perceived as an acceptable practice, despite the fact that the recording was taking place in a public space. Google’s apparent blunder was to assume that the device, worn by 8,000 individuals, would go unnoticed, like shopping mall closed-circuit television (CCTV). Instead, what transpired was a mixed reaction by the public—some nonusers were curious and even thrilled at the possibilities claimed by the wearers of Google Glass, while some wearers were refused entry to premises, fined, verbally abused, or even physically assaulted by others in the FOV (see Levy, 2014).

Some citizens and consumers have claimed that law enforcement (if approved through the use of a warrant process) and shop owners have every right to surveil a given locale, dependent on the context of the situation. Surveilling a suspect who may have committed a violent crime or using CCTV as an antitheft mechanism is now commonly perceived as acceptable, but having a camera in your line of sight record you—even incidentally—as you mind your own business can be disturbing for even the most tolerant of people.

Wearers of these prototypes, or even of fully-fledged commercial products like the Autographer, claim that they record everything around them as part of a need to lifelog or quantify themselves for reflection. Technology such as the Narrative Clip may not capture audio or video, but even still shots are enough to reveal someone else’s whereabouts, especially if they are innocently posted on Flickr, Instagram, or other social media. Many of these photographs also have embedded location and time-stamp metadata stored by default.

A tourist might not have malicious intent by showing off in front of a landmark, but innocent bystanders captured in the photo could find themselves in a predicament given that the context may be entirely misleading.

Wearable and embedded cameras worn by any citizen carry significant and deep personal and societal implications. A photoborg is one who mounts a camera onto any aspect of the body to record the space around himself or herself (Michael & Michael, 2012). Photoborgs may feel entirely free, masters of their own destiny; even safe that their point of view is being noted for prospective reuse. Indeed, the power that photoborgs have is clear when they wear the camera. It can be even more authoritative than the unrestricted overhead gazing of traditional CCTV, given that sousveillance usually happens at ground level. Although photoborgs may be recording for their own lifelog, they will inevitably capture other people in their field of view, and unless these fellow citizens also become photoborgs themselves, there is a power differential. Sousveillance carries with it huge socioethical, environmental, economic, political, and spiritual overtones. The narrative that informs sousveillance is more relevant than ever before due to the proliferation of new media.

Sousveillance grants citizens the ability to combat the powerful using their own evidentiary mechanism, but it also grants other citizens the ability to put on the guise of the powerful. The evidence emanating from cameras is endowed with obvious limitations, such as the potential for the impairment of the data through loss, manipulation, or misrepresentation (Michael, 2013). The pervasiveness of the camera that sees and hears everything can only be reconciled if we know the lifeworld of the wearer, the context of the event being captured, and how the data will be used by the stakeholder in command.

Sousveillance happens through the gaze of the one wearing the camera, just like a first-person shooter in a video game. In 2003, WIRED published an article (Shachtman, 2003) on the potentiality to lifelog everything about everyone. Shachtman wrote:

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable… The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read… All of this—and more—would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health… This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life.”

This goes to show how any discovery can be tailored toward any end. Lifelogging is meant to sustain the power of the individual through reflection and learning, to enable growth, maturity and development. Here, instead, it has been hijacked by the very same stakeholder against whom it was created to gain protection.

Sousveillance also drags into the equation innocent bystanders going about their everyday business who just wish to be left alone. When we asked wearable 2.0 pioneer Steve Mann in 2009 what one should do if bystanders in a recording in a public space questioned why they were being recorded without their explicit permission, he pointed us to his “request for deletion” (RFD) web page (Mann, n.d.). This is admittedly only a very small part of the solution and, for the most part, untenable. One needs only to view a few minutes of the Surveillance Camera Man Channel to understand that people generally do not wish to be filmed in someone else’s field of view. Some key questions include:

1. In what context has the footage been taken?

2. How will it be used?

3. To whom will the footage belong?

4. How will the footage taken be validated and stored?

Trialability on the Run

In this section, plausible scenarios of the use of wearable cameras in a closed campus setting are presented and analyzed in the story “Trialability on the Run”. Although the scenarios are not based directly on primary sources of evidence, they do provide conflicting perspectives on the pros and cons of wearables. As companies are engaged in ever-shorter market trialing of their products, the scenarios demonstrate what can go wrong with an approach that effectively says: “Let’s unleash the product now and worry about repercussions later; they’ll iron themselves out eventually—our job is solely to worry about engineering.” The pitfalls of such an approach are the unexpected and asymmetric consequences that ensue. For instance, someone wearing a camera breaches my privacy, and, although the recorded evidence has affected no one else, my life is affected adversely. Laws, and especially organizational policies, need to respond quickly as advances in technologies emerge.

“Trialability on the Run” is a “day in the life” scenario that contains nine parts, set on a closed campus in southern New South Wales. The main characters are Anthony, the owner and wearer of the head-mounted GoPro (an overt audio-visual recording device), and his girlfriend Sophie. The narrator follows and observes the pair as they work their way around the campus in various sub-contexts, coming into contact with academic staff, strangers, acquaintances, cashiers, banking personnel, librarians, fellow university students and finally security personnel. Anthony takes the perspective that his head-mounted GoPro is no different from the mounted security surveillance cameras on lampposts and building walls, or from the in-lecture theatre recordings captured by the Echo360 (Echo, 2016), or even from portable smartphone cameras that are handheld. He is bewildered when he draws so much attention to himself as the photoborg camera wearer, since he perceives he is performing exactly the same function as the other cameras on campus and has only the intent of capturing his own lifelog. Although he is not doing anything wrong, Anthony looks different and stands out as a result (Surveillance Camera Man, 2015). His girlfriend, Sophie, is not convinced by Anthony’s blasé attitude and tries to press a counter argument that Anthony’s practice is unacceptable in society.

Scenario 1: The Lecture


In this scenario, the main character, Anthony, arrived at the lecture theatre, in which the lesson had already begun, intending to record the lecture instead of taking notes. Being slightly late, he decided to sit in the very front row. All the students, and eventually the lecturer, saw the head-mounted camera he was wearing. The lecturer continued his lecture without showing any emotion. Some students giggled at the spectacle and others were very surprised by what they observed, as it was probably the first time they had seen someone wearing a camera to record a lecture. The students were generally not bothered by the head-mounted recording device in full view, as it was focused on the lecture material and the lecturer, so proceedings continued as they otherwise would have, had the body-worn camera not been present. Students are very used to surveillance cameras on campus; this was just another camera as far as they were concerned, and besides, no one objected: they were too busy taking notes and listening to instruction about the structure and form of the final examination in their engineering coursework.

Wearable User Rights and Intellectual Property

In some of the lecture theatres on university campuses, there are motion-sensor-based video cameras that make full audio-visual recordings of the lectures (Echo, 2016). Lecturers choose to record their lectures in this manner as a record of the educational content covered, especially for students who were unable to attend the lecture, for those for whom English is a second language, or for those who like to listen to lecture content as a form of revision. At the same time, there are no policies in place to keep students from making their own audio-visual recordings of the lecture in the lecture theatres.

Lecture theatres are considered public spaces and many universities allow students to attend lectures whether or not they are enrolled in that particular course or subject. Anyone from the public could walk into lectures and listen, as there is no keycard access. Similar to the centrally organized Echo360 audio-visual recordings, Anthony is taping the lecture himself, and he does not see any problems with distributing the recording to classmates if someone asks for it to study for the final examination. After all, everyone owns a smartphone, and anyone can record the lecture with the camera on their smartphone or tablet device.

This scenario raises a number of questions that need to be addressed first, such as: “What is the difference between making a recording with a smartphone and with a head-mounted camera?” or, “Does it only become a problem when the recording device is overt and can be seen?” If one juxtaposes a surveillance camera covertly integrated into a light fixture with an overt head-mounted camera, why should the two devices elicit such different responses from bystanders?

These questions do not, however, address the fact that an open discussion is required on whether or not we are ready to see a great deal of these sousveillers in our everyday life, and, if we are not, what we are prepared to do about it. Mann (2005) predicted the use of sousveillance would grow greatly once sousveillance devices acquired non-traditional uses such as making phone calls, taking pictures, and accessing the Internet. This emergence produces a grey area, creating the need for laws, regulations and policies to be amended or created to address specific uses of sousveillance devices in different environments and contexts. Clarke (2014) identifies a range of relevant (Australian) laws to inform policy discussion and notes the inadequacy of current regulation in the face of rapidly emerging technology.

Scenario 2: The Restroom


In the restroom scenario, Anthony walked into a public restroom after his lecture, forgetting that his head-mounted camera was still on and recording. While unintentionally recording, Anthony received different reactions from the people present in the restroom, all of whom saw the camera and suspected some foul play. The first person, who was leaving as Anthony was entering the restroom, did not seem to care; another tried to ignore Anthony and left as soon as he was finished. The last person became disturbed by the fact that he was being recorded in what he obviously deemed to be a private place. Later that day, when Anthony searched for lecture recordings on the tape, he got a sense of wrongdoing after realizing that, in the restroom, he had accidentally left the camera on in record mode. He was surprised, in hindsight, that he had not received any major reactions: no individual openly expressed discontent, and no one put specific questions to him or pronounced their discomfort. If it were not for the facial expressions to which Anthony was privy, he would not have been able to tell that anybody was upset, as there was no verbal cue or physical retaliation. Of course, the innocent bystanders, going about their business, would not have been able to assume that the camera was indeed rolling.

Citizen Privacy, Voyeurism, and a Process of Desensitization

Restrooms, change rooms, and shower blocks on campus are open to the public, but they are also considered private spaces given that people are engaged in private activities (e.g. showering), and are, at times, not fully clothed. The natural corollary, then, would lead to the expectation that some degree of privacy should be granted. Can anyone overtly walk into a public toilet sporting a camera and record you while you are trying to, for modesty’s sake, do what should only be done in the restroom? Is the body-worn technology becoming so ubiquitous that no one even says a word about something that they can clearly see is ethically or morally wrong? Steve Mann has argued that surveillance cameras in the restroom are an invasion of privacy more abhorrent than body-worn cameras owned by everyday people. The direct approachability of the photoborg differs from an impersonal CCTV.

There is a long discussion to be had on personal security. For instance, will we all, one day, be carrying such devices as we seek to lifelog our entire histories, or to acquire an alibi for our whereabouts should we be accused of a given crime, as portrayed in the television drama “The Entire History of You” (Armstrong and Welsh, 2011)? It is very common to find signs prohibiting the use of mobile phones in leisure centers, swimming pools and the like. There remains, however, much to be argued around safety-versus-privacy trade-offs, and whether it is acceptable practice to rely on closed-circuit television (CCTV) in public spaces.

University campuses are bound by a number of laws, at federal or state level, including (in this case) the Privacy Act 1988 (Cth), the Surveillance Devices Act 2007 (NSW), and the Workplace Surveillance Act 2005 (NSW). This scenario points out that, even though there cannot possibly be surveillance cameras in restrooms or change rooms, the Surveillance Devices Act 2007 (NSW) makes no specific provision for sousveillance in those public/private spaces. In Clarke’s (2014) assessment of the NSW Crimes Act 1900, voyeurism offence provisions exist relating to photographs. They pertain to photographs of a sexual and voyeuristic nature, usually showing somebody’s private parts, taken without the consent of the individual and/or in places where a person would reasonably expect to be afforded privacy (toilets, showers, change rooms, etc.). When a person claims to have had his or her privacy breached, however, exceptions apply if s/he was a willing participant in the activity, or if circumstances indicate that the persons involved did not really care whether they were seen by onlookers (Clarke, 2014). The act is even less likely to be illegal if it was conducted in a private place, but with doors open in full view (Clarke, 2014). Thus, the law represents controls over only a narrow range of abuses (Clarke, 2014), and, unless they find themselves in a predicament and seek further advice, the general populace is unaware that the law does not protect them entirely, and that protection depends on the context.

Scenario 3: The Corridor


This scenario depicts a conversation between Anthony, his girlfriend Sophie (a fellow undergraduate coursework student), and their mutual friend Oxford, whom they bumped into in the corridor as they vacated the lecture theatre. Throughout the conversation, Anthony demonstrated confidence in his appearance: he believed wearing a head-mounted camera was not a problem and, consequently, did not think he was doing anything wrong. Sophie, on the other hand, questioned whether body-worn cameras should be used without notifying the people in their vicinity. Oxford, an international student, became concerned about the possible future uses of the recording that featured him. His main concern was that he did not want the footage to be made publicly available, given how he looked and the clothing he was wearing. Although Oxford had no objection to Anthony keeping the footage for his personal archives, he did not wish for it to be splattered all over social media.

Trust, Disproportionality, and Requests for Deletion

The two student perspectives of “recording” a lifelog are juxtaposed. Anthony is indifferent as he feels he is taping “his” life as it happens around him through time. Oxford, on the other hand, believes he has a right to his own image, and that includes video (Branscomb, 1994). Here we see a power and control dominance occurring. The power and control is with the photoborg who has the ability to record, store and share the information gathered. On the other hand, the bystander is powerless and at the mercy of the photoborg, unless he/she voices otherwise explicitly. In addition, bystanders may not be so concerned with an actual live recording for personal archives, but certainly are concerned about public viewing. Often lifelogs are streamed in real-time and near real-time, which does not grant the bystander confidence with respect to acceptable use cases.

In the scenario, Sophie poses a question to those who are being incidentally recorded by Anthony’s GoPro to see whether there is an expectation among her peers to get individual consent prior to a recording taking place. Oxford, the mutual acquaintance of the protagonists, believes that consent is paramount in this process. This raises a pertinent question: what about the practice of lifelogging? Lifeloggers could not possibly have the consent of every single person they encounter in a daily journey. Is lifelogging acceptable insofar as lifeloggers choose not to share recordings online or anywhere public? Mann (2005) argues that a person wishing to do lifelong sousveillance deserves certain legal protections against others who might attempt to disrupt continuity of evidence, say for example, while going through immigration. On the other hand, Harfield (2014) extends the physical conception of a private space in considering the extent to which an individual can expect to be private in a public space, defining audio-visual recording of a subject without their consent in public spaces as a moral wrong and seeing the act of sousveillance as a moral intrusion against personal privacy.

In the scenario, Sophie pointed out that if someone wanted to record another individual around them, they could easily do so covertly using everyday objects with embedded covert cameras, such as a pen, a key fob, a handbag or even their mobile phone. Sophie was able to put into perspective the various gazes from security cameras when compared to sousveillance. The very thought of the mass surveillance she was under at every moment provided a sobering counterbalance, allowing her to experience tolerance for the practice of sousveillance. Yet for Oxford, the security cameras mounted on the walls and ceilings of the Communications Building provided a level of safety for international students. Oxford clearly justified “security cameras for security reasons”, but could not justify additional “in your face” cameras. Oxford did not wish to encounter a sousveiller because the recordings could be made publicly available on the Internet without his knowledge. Further, a clear motive for the recordings had not been conveyed by the camera holder (Michael et al., 2014).

Between 1994 and 1996, Steve Mann conducted a Wearable Wireless Webcam experiment to visually record and continuously stream live video from his wearable computer to the World Wide Web. Operating 24 hours a day (on and off), this had the effective purpose of capturing and archiving day-to-day living from the person’s own perspective (Mann, 2005). Mann has argued that in the future, devices that captured lifelong memories and shared them in real-time would be commonplace and worn continuously (Mann, 2013).

It is true that almost everywhere we go in our daily lives someone, somewhere, is watching. But in the workplace, especially, where there is intent to watch an employee, the law states that individuals must be notified that they are being watched (Australasian Legal Information Institute, 2015). When it comes to sousveillance, will this be the case as well? In Australia, the use, recording, communication or publication of recorded information from a surveillance device under a warrant is protected data and cannot be openly shared according to the Surveillance Devices Act 2004 (Cth). In the near future when we are making a recording with an overt device, a prospective sousveillance law might posit: “You can see that I am recording you but this is for personal use only and as long as I do not share this video with someone you cannot do or say anything to stop me.” Mann (2005) claims that sousveillance, unlike surveillance, will require, and receive, strong legal support through dedicated frameworks for its protection, as well as for its limitation (Mann & Wassell, 2013).

A person can listen to, or visually monitor, a private activity if s/he is a participant in the activity (Australasian Legal Information Institute, 2014). However, the Act forbids a person from installing, using or maintaining an optical surveillance device or a listening device to record a private activity, whether or not the person is a party to the activity. The penalties do not apply where the use of an optical surveillance device or listening device results in the unintentional recording or observation of a private activity (Surveillance Devices Act 1998 (WA)). Clarke (2014) combines optical surveillance device regulation with the regulation for listening devices and concludes that a person can listen to conversations if they are a participant in the activity but cannot make audio or visual recordings. The applications of the law cover only a limited range of situations, and conditions may apply for prosecutions.

Scenario 4: Ordering at the Café


Anthony and Sophie approached the counter of a café to place their orders and Anthony soon found himself engaged in a conversation with the attendants at the serving area about the camera he was wearing. He asked the attendants how they felt about being filmed. The male attendant said he did not like it very much and the female barista said she would not mind being filmed. The manager did not comment on any aspect of the head-mounted GoPro recording taking place, but he did make some derogatory comments about Anthony’s behavior to Sophie. The male attendant became disturbed by the idea of someone recording him while he was at work, and he tried to direct Anthony to the manager, knowing that the manager would not like it either, and that it would disturb him even more. Conversely, the female barista was far from upset about the impromptu recording, acting as if she were on a reality TV show, and taken by the fact that someone seemed to show some interest in her, breaking the normal daily routine.

Exhibitionism, Hesitation, and Unease

People tend to care a great deal about being watched or scrutinized, and this is reflected in their social behaviors and choices, which are altered as a result without them even realizing it (Nettle et al., 2012). Thus, some people who generally do not like being recorded (like the male attendant) might be subconsciously rejecting the idea of having to change their behaviors. Others, like the manager, simply ignore the existence of the device, and others still, like the female barista, feel entirely comfortable in front of a camera, even playing up to it and portraying themselves as someone “they want to be seen as”.

Anthony did not understand why people found the camera on his head disturbing, nor their additional concerns about being recorded. In certain cases where people seemed to show particular interest, Anthony decided to engage them about how they felt about being filmed and tried to understand their reactions to constant ground-level surveillance. Anthony himself had not been educated with respect to campus policy or the laws pertaining to audio-visual recording in a public space. He was unaware that, in Australia, surveillance device legislation differs greatly between states but that, broadly, audio and/or visual recording of a private activity is likely to be illegal whatever the context (Clarke, 2014). An activity is, however, only considered to be “private” when it is taking place inside a building, which in the state of New South Wales includes vehicles. People are generally unaware that prohibitions may not apply if the activity is happening outside a building, regardless of context (Clarke, 2014).

If people were to see someone wearing a head-mounted camera as they were going about their daily routine, it would doubtless gain their attention, as it is presently an unusual occurrence. When we leave our homes, we do not expect pedestrians to be wearing head-mounted cameras, nor (although increasingly we know we are under surveillance in taxis, buses, trains, and other forms of public transport) do we expect bus drivers, our teachers, fellow students, family or friends to be wearing body-worn recording devices. Having said that, policing has had a substantial impact on raising citizen awareness of body-worn audio-visual recording devices. We now have mobile cameras on cars, on highway patrol officers’ helmets, and even on the lapels of particular police officers on foot. While this has helped to decrease the number of unfounded citizen complaints against law enforcement personnel on duty, it can also be seen as a retaliatory strategy against everyday citizens, who now have a smartphone video recorder at hand 24/7.

Although the average citizen does not always feel empowered to question another's authority to record, everyone has the right to question the intended purpose of the video being taken of him or her, and how or where it will be shared. In this scenario, does Anthony have the right to record others as he pleases without their knowledge, either of him making the recording, or of the places where that recording might end up? Would Anthony get the same reaction if he were making the recordings with his smartphone? Owners of smartphones would be hard-pressed to say that they have never taken visual recordings of an activity with bystanders in the background whom they do not know and from whom they have not gained consent. Such examples include children's sporting events, wedding receptions, school events, attractions and points of interest, and more. Most photoborgs use the line of argumentation that says: "How is recording with a GoPro instead of a smartphone any different?" Of course, individuals who object to being subjected to point of view surveillance (PoVS) have potential avenues of protection (including trespass against property, trespass against the person, stalking, harassment, etc.), but these protections are limited in their application (Clarke, 2014). Even so, the person using PoVS technology has access to far more protection than the person they are monitoring, even if they are doing so in an unreasonable manner (Clarke, 2014).

Scenario 5: Finding a Table at the Café


In this scenario, patrons at an on-campus café vacated their chairs almost immediately after Anthony and Sophie sat down at the large table. Anthony and Sophie both realized the camera was driving people away from them. Sophie insisted at that point that Anthony at least stop recording if he was unwilling to take off the device itself. After Cygneta and Klara (Sophie's acquaintances) had joined them at the table, Anthony, interested in individual reactions and trying to prove a point to Sophie, asked Klara how she felt about being filmed. He received the responses he had expected. Klara did not like being filmed one bit by something worn on someone's head. Moreover, despite being a marketing student, she had not even heard of Google Glass when Anthony tried to share his perspective by bringing the technology up in conversation. This fell on deaf ears, he thought, despite Cygneta's view that visual data might well be the future of marketing strategies. Anthony argued that if a technology like Google Glass were to become prevalent on campus in a couple of years, they would not have any say about being recorded by a stranger. Sophie supported Anthony from a factual standpoint, reinforcing that there were no laws in Australia prohibiting video recordings in public. That is, across the States and Territories of Australia, visual surveillance in public places is not subject to general prohibitions, except where the person(s) would reasonably expect their actions to be private because they were engaging in a private act (NSW), or where the person(s) being recorded had a strong case for expecting they would not be recorded (Victoria, WA, NT); in SA, Tasmania and the ACT, legislation permits the recording of other people subject to various provisos (Clarke, 2014).

The reactions of Klara and Cygneta got Sophie thinking about gender, and whether men were more likely than women to become enthralled by technological devices. She could see this happening with drones and wearable technologies like smartwatches, and came to the realization that the GoPro was no different. Some male surfers (including Anthony) and skateboard riders had well and truly begun to use their GoPros to film themselves doing stunts, then sharing these on Instagram. She reflected on whether or not people, in general, would begin to live a life of "virtual replays" as opposed to living in the moment. When reality becomes hard to handle, people tend to escape to a virtual world where they create avatars and act "freely", postponing the hardships of real life; some may even become addicted to this as a more exciting lifestyle. These issues are further explored in the following popular articles: Ghorayshi (2014), Kotler (2014) and Lagorio (2006).

Novelty and Market Potential

The patrons at the first table appeared to find the situation awkward, and they rectified this problem by removing themselves from the vicinity of Anthony and his camera. Klara did not possess adequate knowledge about emerging wearable technology, and she claimed she would not use it even if it were readily available. But once wearable computers like Google Glass permeated the consumer market, Cygneta, who seemed to 'keep up with the Joneses', said she would likely start using them at some point, despite Klara's apparent resistance. While smartphones were a new technology in the 1990s, currently close to one third of the world's population uses them regularly, with 70% projected by 2020 (Ericsson, 2015). One reason this figure is not higher is the prevalence of low-income countries with widespread rural populations and vast terrains; even so, numbers are expected to rise massively in emerging markets. By comparison, wearable computers are essentially advanced versions of existing technology, so their uptake will likely be seamless and even quicker. As with smartphone adoption, as long as they are affordable, wearable computers such as Digital Glass and smartwatches can be expected to be used as much as, or even more than, smartphones, given they are always attached to the body or within arm's reach.

Scenario 6: A Visit to the Bank


When Sophie and Anthony visited the bank, Anthony sat down as Sophie asked for assistance from one of the attendants. Even though Anthony was not the one who needed help, he thought the people working at the bank seemed friendlier than usual towards him. He was asked, in fact, if he wanted assistance with anything, and when he confirmed he did not, no further questioning by the bank manager was conducted. He thought it strange that everyone was so casual about his camera, when everyone else that day had made him feel like a criminal. Again, he was acutely aware that he was in full view of the bank's surveillance camera, but questioned whether anyone was really watching anyway. The couple later queued at the ATM, where Anthony mentioned that, had he had disingenuous intentions, he could easily have been filming people and acquiring their PINs. No one had even attempted to cover up their PIN entry, even though there were now signs near the keypad to "cover up". This entire situation made Sophie feel very uncomfortable and slightly irritated by Anthony. It was, after all, a criminal act to shoulder surf someone's PIN, but to have it on film to replay later was outrageous. It seemed to her that, no matter how much advice people get about protecting their identity or credit from fraud, they just don't seem to pay attention. To Anthony's credit, he too understood the severity of the situation and admitted he felt uncomfortable about the position in which, with no malicious intent, he had accidentally found himself.


This scenario illustrates that people in the workplace who are under surveillance are more likely to help clients. Anthony's camera got immediate attention and a prompt offer: "Can I help you?" When individuals become publicly self-aware that they are being filmed, their propensity to help others generally increases. The feeling of public self-awareness created by the presence of a camera triggers a change in behavior consistent with concern over any damage that could be done to one's reputation (Van Bommel et al., 2012).

Anthony also could not keep himself from questioning the security measures that the bank should be applying, given the increasing incidence of cheap embedded cameras in both stationary and mobile devices. When queuing in front of the ATM for Sophie's cash withdrawal, Anthony noticed that he was unintentionally recording something that could easily be used for criminal activity, and he began to see the possible security breaches that would come with emerging wearables. For example, video evidence can be zoomed in to reveal private data. While some believe that personal body-worn recording devices protect the security of the individual wearer from mass surveillance, rectifying some of the power imbalances, in this instance the recording devices have diminished security by their very presence. It is a paradox, and while it all comes down to the individual ethics of the photoborg, it will not take long for fraudsters to employ such measures.

Scenario 7: In the Library


After the ATM incident, Anthony began to consider more deeply the implications of filming others in a variety of contexts. It was the very first time he had begun to place himself in other people's shoes and see things from their perspective. In doing this, he became more cautious in the library setting. He avoided peering at the computer screens of other users around him, as he could otherwise record what activities they were engaged in online, what they were searching for in their Internet browser, and more. He attracted the attention of certain people he came across in the library, because obviously he looked different, even weird. For the first time that day, he felt like he was going to get into serious trouble when the librarian questioned him about his practice. The librarian claimed that Anthony had absolutely no right to record other people without their permission, as it was against campus policies. Anthony did take this seriously, but he was pretty sure there was no policy against using a GoPro on campus. When Anthony asked the librarian to refer him to the exact policy and its university web link, she could not provide one, despite having clearly stated that his actions were a breach of university rules. She did say, however, that she would call library management to convey her suspicion that someone in the library was in breach of university policy. While this conversation was happening, things not only became less clear for Anthony, but he could sense that matters were escalating in seriousness and that he was about to get into some significant trouble.

Campus Policies, Guidelines, and Normative Expectations

The questions raised in this scenario are not only about privacy but also about the University's willingness to accept certain things as permitted behavior on campus property. Inappropriate filming of other individuals was at the time a hot news item, as many young women were victims of voyeuristic behavior, such as upskirting with mobile phone cameras, and more. Yet many universities simply rely on their "Student Conduct Rules" for support outside criminal intent. For example, a typical student conduct notice states that students have a responsibility to conduct themselves in accordance with:

1. Campus Access and Order Rules,

2. IT Acceptable Use Policy, and

3. Library Code of Conduct.

However, none of these policies typically provide clear guidelines on audiovisual recordings by students.

Campus policies here are approved by the University Council, and various policies address only general surveillance considerations about audio-visual recordings. The Campus Access and Order Rules specifies that University grounds are private properties (University of Wollongong, 2014), and under the common law regarding real property, the lawful occupiers of land have the general right to prevent others from being on, or doing acts on, their land, even if an area on the land is freely accessible to the public (Clarke, 2014). It is Clarke’s latter emphasis which summarises exactly the context of a typical university setting which can be considered a closed-campus but open to the public.

The pace of technological change poses challenges for the law, and deficiencies in regulatory frameworks for point of view surveillance exist in many jurisdictions in Australia (Clarke, 2014). Australian universities as organizations are also bound (in this case) by the Workplace Surveillance Act 2005 (NSW) and the Privacy and Personal Information Protection Act 1998 (NSW) (Australasian Legal Information Institute, 2016), which again say nothing about what is permitted, in rules or policies, regarding sousveillance by a student on campus grounds.

Scenario 8: Security on Campus


Security arrived at the scene of the incident and escorted Anthony to the security office. By this stage Anthony believed this might well become a police matter. Security did not wish to ask Anthony questions about his filming on campus but ostensibly wanted to check whether Anthony's GoPro had been stolen. There had been a spate of car park thefts, and it was for this that Anthony was being investigated. Anthony then thought it appropriate to ask them several questions about the recordings he had made, to which security mentioned the Surveillance Devices Act 2007 (NSW), the signage they had to put up to warn people about the cameras, and the fact that activity was being recorded. Additionally, Anthony was told that CCTV footage could be shared only with the police, and that cameras on campus never faced people but rather the roadways and footpaths. When Anthony reminded security about Google Glass and asked if they had a plan for when Glass would be used on campus, the security manager replied that everything would be thought about when the time arrived. Anthony left to attend a lecture for which he was once again late.

Security Breaches, the Law, and Enforcement

Anthony was not satisfied with the response of the security manager about campus rules pertaining to the filming of others. While Anthony felt very uncomfortable about the footage he had on his camera, he still did not feel that the university’s security office provided adequate guidance on acceptable use. The security manager had tended to skirt around providing a direct response to Anthony, probably because he did not have any concrete answers. First, the manager brought up the Video Piracy Policy topic and then the University’s IT Acceptable Use Policy. Anthony felt that those policies had nothing to do with him. First, he was sure he was not conducting video piracy in any way, and second, he was not using the university’s IT services to share his films with others or exceed his Internet quota, etc. Somehow, the manager connected this by saying that the recording might contain copyrighted material on it, and that it should never be transferred through the university’s IT infrastructure (e.g. network). He also shared a newspaper article with Anthony that was somehow supposed to act as a warning message but it just didn’t make sense to Anthony how all of that was connected to the issue at hand.

Scenario 9: Sophie’s Lecture


Arriving at the lecture theater after the lecture had already begun, Anthony and Sophie opened the door and the lecturer noticed the camera mounted on Anthony’s head. The lecturer immediately became infuriated, asking Anthony to remove the camera and to leave his classroom. Even after Anthony left the class, the lecturer still thought he might be being recorded through the lecture theatre’s part-glass door and so he asked Anthony to leave the corridor as well. The entire time, the GoPro was not recording any of the incidents. The incident became heated, despite Anthony fully accepting the academic’s perspective. It was the very last thing that Anthony had expected by that point in the day. It was absolutely devastating to him.

Ethics, Power, Inequality, and Non-Participation

Every student at an Australian university has academic freedom and is welcome to attend lectures whether or not they are enrolled in the subject. However, it is within the academic instructor's rights to prevent a student from recording his or her class. A lecturer's classes are considered "teaching material", and the lecturer owns the intellectual property in that material (University of Wollongong, 2014). Accordingly, any recording of lectures should be carried out only after consulting the instructor. Some lecturers do not even like the idea of Echo 360, as it can be used for much more than simply recording a lecture for reuse by students: recordings could be used to audit whether staff are doing their jobs properly, employing innovative teaching techniques, demonstrating good or poor knowledge of content, and keeping to time or taking early marks. Some faculty members also consider the classroom to be a sacred meeting place between them and students and would never wish for a camera to invade this intimate gathering. Cameras and recordings would indeed stifle a faculty member's or a student's right to freedom of speech if the video were ever to go public. It would also mean that some students would simply not contribute anything to the classroom if they knew they were being taped, or that someone might scrutinize their perspectives and opinions on controversial matters.

Possible Future Implications Drawn from Scenarios

In the scenarios, in almost every instance, the overt nature of Anthony’s wearable recording device, given it was head-mounted, elicited an instantaneous response. Others displayed a variety of responses and attitudes including that:

1. They liked it,

2. They did not mind it,

3. They were indifferent about it,

4. They did not like it and finally,

5. They were disturbed by it.

Regardless of which category they belonged to, however, they did not explicitly voice their feelings to Anthony, although body language and facial expressions spoke volumes. In this closed campus scenario, the majority of people who came into contact with Anthony fell under the first two categories. It also seems clear that some contexts were especially delicate, for instance, taking the camera (while still recording) into the restroom, an obviously private amenity. It is likely that individuals in the restroom would have had no problem with the GoPro filming outside the restroom setting.

Research into future technologies and their respective social implications is urgent, since many emerging technologies are here right now. Whatever the human mind can conjure is liable to be designed, developed, and implemented. The main concern is how we choose to deal with it. In this final section, issues drawn from the scenarios are speculatively extended to project future implications when wearable computing has become more ubiquitous in society.

Privacy, Security, and Trust

Privacy experts claim that while we might once have been concerned, or felt uncomfortable, with CCTV being as pervasive as it is today, we are shifting from a limited number of big brothers to ubiquitous little brothers (Shilton, 2009). The fallacy of security is the belief that more cameras mean a safer society; statistics, depending on how they are presented, may be misleading about reductions in crime in given hotspots. Criminals do not simply stop committing crime (e.g. selling drugs) because a local council installs a group of multi-directional cameras on a busy public route. On the contrary, crime has been shown to be redistributed or relocated to another proximate geographic location. In a study for the United Kingdom's Home Office (Gill & Spriggs, 2005), only one of the 14 areas studied saw a drop in the number of incidents that could be attributed to CCTV.

Questions of trust seem to be the biggest factor militating against wearable devices that film other people who have not granted their consent to be recorded. Many people may not like to be photographed for reasons we don't quite understand, but it remains their right to say, "No, leave me alone." Others have no trouble being recorded by someone they know, so long as they know they are being recorded before the record button is pushed. Still others show utter indifference, claiming that there is no longer anything personal out in the open. Often, the argument is posed that anyone can watch anyone else walk down a street. This argument fails, however: watching someone cross the road is not the same as recording them cross the road, whether by design or by sheer coincidence. Handing out requests for deletion every time someone asks whether they've been captured on camera is not good enough. Allowing people to opt out "after the fact" is not consent-based and violates fundamental human rights, including the control individuals might have over their own image and the freedom to go about their life as they please (Bronitt & Michael, 2012).

Laws, Regulations, and Policies

At the present time, laws and regulations pertaining to surveillance and listening devices, privacy, telecommunications, crimes, and even workplace relations require amendments to keep pace with advancements in wearable and even implantable sensors. The police need to be seen to enforce the laws they are there to uphold, not to don the very devices they claim are illegal. Policies in campus settings, such as universities, also need to address the seeming imbalance between what is, and is not, possible. The commoditization of such devices will only lead to even greater public interest issues coming to the fore. The laws are clearly outdated, and there is controversy over how to overcome the legal implications of emerging technologies.

Creating new laws for each new device will lead to an endless drafting of legislation, which is not practicable; yet claiming that existing laws can respond to new problems is also unrealistic, as users will seek to get around the law via loopholes in a patchwork of statutes. Cameras create a power imbalance. Initially, only a few people had mobile phones with cameras: now they are everywhere. Then, only some people carried body-worn video recorders for extreme sports: now, increasingly, many are using a GoPro, Looxcie, or Taser Axon glasses. These devices, while still nascent, have been met with some acceptance in various contexts, including some business-centric applications. Photoborgs might feel they are "hitting back" at all the cameras on the walls that are recording 24×7, but this does not cancel out the fact that the photoborgs themselves are doing exactly what they claim a fixed, wall-mounted camera is doing to them.

Future Implications

All of the risks mentioned above are interrelated. If we lack privacy, we lose trust; if we lack security, we feel vulnerable; if we lose our anonymity, we lose a considerable portion of our liberty and when people lose their trust and their liberty, then they feel vulnerable. This kind of scenario is deeply problematic, and portends a higher incidence of depression, as people would not feel they had the freedom to act and be themselves, sharing their true feelings. Implications of this interrelatedness are presented in Figure 1.

Since 100% security does not exist in any technological system, privacy will always be a prominent issue. When security is lacking, privacy becomes an issue, individuals become more vulnerable, and the anonymity of the individual comes into question. A loss of anonymity limits people's liberty to act and speak as they want, and eventually people start losing their trust in each other and in authorities. When people are not free to express their true selves, they become withdrawn and, despite living in a high-tech community, may enter a state of despondency. The real question concerns the future, when it is not people sporting these body-worn devices but automated data collection machines like Knightscope's K5 (Knightscope, 2016). These will indeed be mobile camera surveillance units, converging sousveillance and surveillance in one clean sweep (Perakslis et al., 2014).

Figure 1. Major implications of wearables: the utopian and dystopian views

Future Society

Mann (2013) argues that wearable sousveillance devices that are used in everyday life to store, access, transfer and share information will be commonplace, worn continuously and perhaps even permanently implanted. Michael and Michael (2012, p. 195), in their perception of the age of überveillance, state:

There will be a segment of the consumer and business markets who will adopt the technology for no clear reason and without too much thought, save for the fact that the technology is new and seems to be the way advanced societies are heading. This segment will probably not be overly concerned with any discernible abridgement of their human rights nor the small print ‘terms and conditions agreement’ they have signed, but will take an implant on the promise that they will have greater connectivity to the Internet, and to online services and bonus loyalty schemes more generally.

Every feature added on a wearable device adds another layer of risk to the pre-existing risks. Currently, we may only have capabilities to store, access, transfer and manipulate the gathered data but as the development of technology continues, context-aware software will be able to interpret vast amounts of data into meaningful information that can be used by unauthorized third parties. It is almost certain that the laws will not be able to keep up with the pace of the technology. Accordingly, individuals will have to be alert and aware, and private and public organizations will need to set rules and guidelines to protect their employees’ privacy, as well as their own.

Society’s ability to cope with the ethical and societal problems that technology raises has long lagged behind the development of that technology, and the same can be said for laws and regulations. With no legal protection or social safe zone, members of society are threatened with losing their privacy through wearable technology. When the technology becomes widespread, privacy at work, in schools, in supermarkets, at the ATM, on the Internet, even when walking or sitting in a public space, and so on, is subject to perishability.

The future is already here and, since the development of technology is seemingly unstoppable, there is more to come; but for any possible futures that may come, there needs to be a healthy human factor. “For every expert there’s an equal and opposite expert” (Sowell, 1995, p. 102; also sometimes attributed to Arthur C. Clarke). So even as we are enthusiastic about how data collected through wearable technology will enhance the quality of our daily life, we also have to think cautiously about security and privacy in an era of ubiquitous wearable technology. In this sense, creating digital footprints of our social and personal lives, with the possibility of their being exposed publicly, does not seem to coincide with the idea of a “healthy society”.

One has to ponder: where next? Might we be arguing that we are nearing the point of total surveillance, as everyone begins to record everything around them for “just in case” reasons such as insurance protection, establishing liability, and complaint handling (much as an in-car black box recorder can clear you of wrongdoing in an accident)? How gullible might we become in thinking that images and video footage do not lie, even though a new breed of hackers might manipulate and tamper with digital reality to their own ends? The überveillance trajectory refers to the ultimate potentiality for embedded surveillance devices like swallowable pills with onboard sensors, tags, and transponder IDs placed in the subdermal layer of the skin (Michael & Michael, 2013). Will the new frontier be surveillance of the heart and mind?

Discussion Points

• Does sound recording by wearable devices present any ethical dilemmas?

• Are wearable still cameras more acceptable than wearable video cameras?

• What should one do if bystanders of a recording in a public space question why they are being recorded?

• What themes are evident in the videos and the comments on Surveillance Camera Man Channel at

• What is the difference between making a recording with a smartphone and with a head mounted camera?

• If one juxtaposes a surveillance camera covertly integrated into a light fixture, with an overt head-mounted camera, then why should the two devices elicit a different response from bystanders?

• In what ways is a CCTV in a restroom any different from a photoborg in a restroom?

• Are there gender differences in enthusiasm for certain wearables? Who are the innovators of these technologies?

• What dangers exist around Internet addiction, escapism, and living in a virtual world?

• Are we nearing the point of total information surveillance? Is this a good thing? Will it decrease criminal activity or are we nearing a Minority Report style future?

• Will the new frontier be surveillance of the heart and mind beyond anything Orwell could have envisioned?

• How can the law keep pace with technological change?

• Can federal and state laws be in contradiction over the rights of a photoborg? How?

• Watch the movie The Final Cut. Watch the drama The Entire History of You. What are the similarities and differences? What does such a future mean for personal security and national security?

• Consider in small groups other scenarios where wearables would be welcome as opposed to unwelcome.

• In which locations should body-worn video cameras never be worn?


• What is meant by surveillance, sousveillance and überveillance?

• What is a photoborg? And what is “point of view” within a filming context?

• Research the related terms surveillance, dataveillance, and überveillance.

• What does Steve Mann’s “Request for Deletion” webpage say? Why is it largely untenable?

• Why did Google ultimately decide to focus on industry applications of Glass, rather than the consumer market at large?

• Are we ready to see many (overt or covert) sousveillers in our everyday life?

• Will we all be photoborgs one day, or live in a society where we need to be?

• Do existing provisions concerning voyeurism cover all possible sousveillance situations?

• If lifelogs are streamed in real time or near real time, what can the bystanders shown do about the distribution of their images (if they ever find out)?

• Is lifelong lifelogging feasible? Desirable? Should it be suspended in confidential business meetings, when going through airport security and customs, or in other areas? Which areas?

• Should citizens film their encounters with police, given police are likely to be filming it too?

• Should the person using PoVS technology have more legal protection than persons they are monitoring?

• Are wearables likely to be rapidly adopted and even outpace smartphone use?


Armstrong, J., & Welsh, B. (2011). The Entire History of You. In B. Reisz (Ed.), Black Mirror. London, UK: Zeppetron.

Australasian Legal Information Institute. (2014). Workplace Surveillance Act, 2005 (NSW). Retrieved June 6, 2016, from consol_act/wsa2005245/

Australasian Legal Information Institute. (2015). Surveillance Devices Act, 1998 (WA). Retrieved June 6, 2016, from nsf/main_mrtitle_946_currencies.html

Australasian Legal Information Institute. (2016). Privacy and Personal Information Protection Act 1998. Retrieved June 6, 2016, from legis/nsw/consol_act/papipa1998464/

Branscomb, A. W. (1994). Who Owns Information? From Privacy to Public Access. New York, NY: BasicBooks.

Bronitt, S., & Michael, K. (2012). Human rights, regulation, and national security (introduction). IEEE Technology and Society Magazine, 31(1), 15–16. doi:10.1109/MTS.2012.2188704

Clarke, R. (2012). Point-of-View Surveillance. Retrieved from

Clarke, R. (2014). Surveillance by the Australian media, and its regulation. Surveillance & Society, 12(1), 89–107.

Echo. (2016). Lecture capture: Video is the new textbook. Retrieved from http://

Ericsson. (2015). Ericsson Mobility Report. Retrieved June 6, 2016, from http://

Fernandez Arguedas, V., Izquierdo, E., & Chandramouli, K. (2013). Surveillance ontology for legal, ethical and privacy protection based on SKOS. In IEEE 18th International Conference on Digital Signal Processing (DSP).

Ghorayshi, A. (2014). Google Glass user treated for internet addiction caused by the device. Retrieved June 6, 2016, from

Gill, M., & Spriggs, A. (2005). Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate.

Gokyer, D., & Michael, K. (2015). Digital wearability scenarios: Trialability on the run. IEEE Consumer Electronics Magazine, 4(2), 82–91. doi:10.1109/MCE.2015.2393005

Harfield, C. (2014). Body-worn POV technology: Moral harm. IEEE Technology and Society Magazine, 33(2), 64–72. doi:10.1109/MTS.2014.2319976

Knightscope. (2016). Advanced physical security technology. Knightscope: K5. Retrieved from

Kotler, S. (2014). Legal heroin: Is virtual reality our next hard drug. Retrieved June 6, 2016, from

Lagorio, C. (2006). Is virtual life better than reality? Retrieved June 6, 2016, from

Levy, K. (2014). A surprising number of places have banned Google Glass in San Francisco. Business Insider, 3. Retrieved from google-glass-ban-san-francisco-2014-3

Mann, S. (2002). Sousveillance. Retrieved from

Mann, S. (2005). Sousveillance and cyborglogs: A 30-year empirical voyage through ethical, legal, and policy issues. Presence (Cambridge, Mass.), 14(6), 625–646. doi:10.1162/105474605775196571

Mann, S. (2013). Veillance and reciprocal transparency: Surveillance versus sousveillance, AR glass, lifeglogging, and wearable computing. In IEEE International Symposium on Technology and Society (ISTAS). Toronto: IEEE.

Mann, S. (n.d.). The request for deletion (RFD). Retrieved from http://wearcam.org/rfd.htm

Mann, S., & Wassell, P. (2013). Proposed law on sousveillance. Retrieved from

Michael, K. (2013). Keynote: The final cut—Tampering with direct evidence from wearable computers. In Proc. 5th Int. Conf. Multimedia Information Networking and Security (MINES).

Michael, K., & Michael, M. G. (2012). Commentary on: Mann, S. (2012): Wearable computing. In M. Soegaard & R. Dam (Eds.), Encyclopedia of human-computer interaction. The Foundation. Retrieved from https://www.

Michael, K., & Michael, M. G. (2013a). Computing ethics: No limits to watching? Communications of the ACM, 56(11), 26–28. doi:10.1145/2527187

Michael, K., Michael, M. G., & Perakslis, C. (2014). Be vigilant: There are limits to veillance. In J. Pitt (Ed.), The computer after me. London: Imperial College London Press. doi:10.1142/9781783264186_0013

Michael, M. G., & Michael, K. (Eds.). (2013b). Überveillance and the social implications of microchip implants: Emerging technologies (Advances in human and social aspects of technology). Hershey, PA: IGI Global.

Nettle, D., Nott, K., & Bateson, M. (2012). ‘Cycle thieves, we are watching you’: Impact of a simple signage intervention against bicycle theft. PLoS ONE, 7(12), e51738. doi:10.1371/journal.pone.0051738 PMID:23251615

Perakslis, C., Pitt, J., & Michael, K. (2014). Drones humanus. IEEE Technology and Society Magazine, 33(2), 38–39.

Ruvio, A., Gavish, Y., & Shoham, A. (2013). Consumer’s doppelganger: A role model perspective on intentional consumer mimicry. Journal of Consumer Behaviour, 12(1), 60–69. doi:10.1002/cb.1415

Shactman, N. (2003). A spy machine of DARPA’s dreams. Wired. Retrieved from

Shilton, K. (2009). Four billion little brothers?: Privacy, mobile phones, and ubiquitous data collection. Communications of the ACM, 52(11), 48–53. doi:10.1145/1592761.1592778

Sowell, T. (1995). The Vision of the Anointed: Self-congratulation as a Basis for Social Policy. New York, NY: Basic Books.

Surveillance Camera Man. (2015). Surveillance Camera Man. Retrieved from https://

Tanner, R. J., Ferraro, R., Chartrand, T. L., Bettman, J., & van Baaren, R. (2008). Of chameleons and consumption: The impact of mimicry on choice and preferences. The Journal of Consumer Research, 34(6), 754–766. doi:10.1086/522322

University of Wollongong. (2014a). Campus Access and Order Rules. Retrieved from

University of Wollongong. (2014b). Ownership of Intellectual Property. Retrieved from

University of Wollongong. (2014c). Student Conduct Rules. Retrieved from http://

Van Bommel, M., van Prooijen, J., Elffers, H., & Van Lange, P. (2012). Be aware to care: Public self-awareness leads to a reversal of the bystander effect. Journal of Experimental Social Psychology, 48(4), 926–930. doi:10.1016/j.jesp.2012.02.011

Key Terms and Definitions

Body-Worn Video (BWV): These are cameras embedded in devices that can be worn on the body to record video, typically by law enforcement officers.

Closed-Campus: Refers to any organization or institution that contains a dedicated building(s) on a bounded land parcel offering a range of online and offline services, such as banking, retail, and sporting services. Closed-campus examples include schools and universities.

Closed-Circuit Television (CCTV): Also referred to as video surveillance. CCTV is the use of video cameras to transmit a signal to a specific place. CCTV cameras can be overt (obvious) or covert (hidden).

Digital Glass: Otherwise referred to as wearable eyeglasses which house multiple sensors on board. An example of digital glass is Google Glass. The future of digital glass may well be computer-based contact lenses.

Lifelogging: When a user decides to log his/her life using wearable computing or other devices that have audio-visual capability. It is usually a continuous, 24/7 stream of recording.

Personal Security Devices (PSDs): These are devices that allegedly deter perpetrators from attacking others because they are always on, gathering evidence, and ready to record. PSDs may have an on-board alarm alerting central care services for further assistance.

Policy: An enforceable set of organizational rules and principles used to aid decision-making that have penalties for non-compliance, such as the termination of an employee’s contract with an employer.

Private Space: A geographic space in which one naturally has an expectation of privacy. Some examples include the home, the backyard, and the restroom.

Public Space: A geographic space in which there is no expectation of privacy, save for when someone holds a private conversation in a private context.

Sousveillance: The opposite of surveillance from above; it includes inverse surveillance and is sometimes described as person-to-person surveillance. Citizens can use sousveillance as a mechanism to keep law enforcement officers accountable for their actions.

Surveillance: “Watching from above,” such as CCTV mounted on business buildings. It is the close observation of behaviors, activities, or other changing information by an authority, usually for the purpose of influencing, managing, directing, or protecting the masses.

Citation: Michael, K., Gokyer, D., & Abbas, S. (2017). Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”. In A. Marrington, D. Kerr, & J. Gammack (Eds.), Managing Security Issues and the Hidden Dangers of Wearable Technologies (pp. 238-266). Hershey, PA: IGI Global. doi:10.4018/978-1-5225-1016-1.ch010

Credit Card Fraud


This chapter provides a single-person case study of Mr. Dan DeFilippi, who was arrested for credit card fraud by the US Secret Service in December 2004. The chapter delves into the psychology of a cybercriminal and the inner workings of credit card fraud. A background context of credit card fraud is presented to frame the primary interview. A section on the identification of issues and controversies with respect to carding is then given. Finally, recommendations are made by the convicted cybercriminal turned key informant on how to decrease the rising incidence of cybercrime. A major finding is that credit card fraud is all too easy to enact and merchants need to conduct better staff training to catch fraudsters early. With increases in global online purchasing, international carding networks are proliferating, making it difficult for law enforcement agencies to police unauthorized transactions. Big data may well have a role to play in analyzing behaviors that expose cybercrime.


Fraud is about exploiting weaknesses. They could be weaknesses in a system, such as a lack of controls in a company’s accounting department or a computer security hole, or a weakness in human thinking such as misplaced trust. A cybercriminal finds a weakness with an expected payout high enough to offset the risk and chooses to become involved in the endeavor. This is very much like a traditional business venture except the outcome is the opposite. A business will profit by providing goods or services that its customers value. Fraud takes value away from its victims and only enriches those committing it.

Counterfeit documents rarely need to be perfect. They only need to be good enough to serve their purpose, fooling a system or a person in a given transaction. For example, a counterfeit ID card will be scrutinized more closely by the bouncer at a bar than by a minimum wage cashier at a large department store. Bouncers have incentive to detect fakes since allowing in underage drinkers could have dire consequences for the bar. There is much less incentive to properly train cashiers since fraud makes up a small percentage of retail sales. This is sometimes referred to as the risk appetite and tolerance of an organization (Levi, 2008).

Lack of knowledge and training of store staff is by far the biggest weakness exploited when counterfeit or fraudulent documents are utilized by cybercriminals. If the victim does not know the security features of a legitimate document, they will not know how to spot a fake. For example, Visa and MasterCard are the most widely recognized credit card brands. Their dove and globe holograms are well known, and a card without one would be very suspicious. However, there are other, lesser-known credit card networks such as Discover and American Express. Their security features are not as well recognized, which can be exploited. If a counterfeit credit card has an appearance of legitimacy, it will be accepted.


Dan DeFilippi was a black hat hacker in his teens and early twenties. In college he sold fake IDs, and later committed various scams, including phishing, credit card fraud, and identity theft. He was caught in December 2004. In order to avoid a significant jail sentence, DeFilippi decided to become an informant and work for the Secret Service for two years, providing training and consulting and helping them understand how hackers and fraudsters think. This chapter has been written through his eyes, his practices, and his lessons learned. Cybercriminals do not necessarily have to be perfect at counterfeiting, but they do have to be superior social engineers not to get caught. While most cybercrime now occurs remotely over the Internet, DeFilippi exploited the human factor. A lot of the time, he would walk into a large electronics department store with a fake credit card, buy high-end items like laptops, and then proceed to sell them online for a reduced price. He could make thousands of dollars like this in a single week.

In credit card fraud, the expected payout is so much higher than in traditional crimes, and the risk of being caught is often much lower, making it a crime of choice. Banks often write off fraud with little or no investigation until it reaches value thresholds. It is considered a cost of doing business, and additional investigation is considered to cost more than it is worth. Before 2002, banks in Australia, for instance, used to charge about $250 to investigate an illegal transaction, usually passing the cost on to the customer. Today they usually do not spend effort on investigating such low-value transactions but rather redirect attention to upholding their brand. Since about the mid-2000s, banks have also shared security breaches more openly with one another, which has helped law enforcement task forces respond in a timely manner when investigating cybercrime. Yet local law enforcement continues to struggle with the investigation of electronic fraud due to lack of resources, lack of education, or jurisdictional issues. Fraud cases may span multiple countries, requiring complex cooperation and coordination between law enforcement agencies. A criminal may buy stolen credit cards from someone living on another continent, use them to purchase goods online in state 1, have the goods shipped to state 2 while living in state 3, with the card stolen from someone in state 4.

Online criminal communities and networks, or the online underground, are often structured similarly to a loose gang. New members (newbies) have to earn the community’s trust. Items offered for sale have to be reviewed by a senior member or approved reviewer before being offered to the public. Even when people are considered “trustworthy” there is a high level of distrust between community members due to a significant level of law enforcement and paranoia from past crackdowns. Very few people know anyone by their real identity. Everyone tries to stay as anonymous as possible. Many people use multiple handles and pseudonyms for different online activities, such as one for buying, one or more for selling, and one for online discussion through asynchronous text-based chat. This dilutes their reputation but adds an additional layer of protection.

The most desirable types of fraud in these communities, and for monetary crime in general, involve directly receiving cash instead of goods. Jobs, such as “cashing out” stolen debit cards at ATMs, are sought after by everyone and are handled by the most trusted community members. Due to their desirability, the proceeds are often split unequally, with the card provider taking a majority share of the reward and the “runner” taking a majority of the risk. The types of people in these communities vary from teens looking to get a new computer for free to members of organized crime syndicates. With high unemployment rates, low wages, and low levels of literacy, particularly in developing nations, it is no surprise that a large number of credit card fraud players are eastern European or Russian with suspected ties to organized crime. It is a quick and easy way of making money if you know what you are doing.

Of course, things have changed a little since DeFilippi was conducting his credit card fraud between 2001 and 2004. Law enforcement agencies now have whole task forces dedicated to online fraud. Bilateral and multilateral treaties are in place with respect to cybercrime, although these still lack the buy-in of major state players and even of states where cybercrime is flourishing (Broadhurst, 2006). In terms of how technology has been used to combat credit card fraud, the Falcon system has been able to help detect fraud that would otherwise have gone unnoticed. If the Falcon system identifies any transaction as suspect or unusual, the bank will attempt to get in touch with the cardholder to ascertain whether or not it is an authentic transaction. If individuals cannot be reached directly, then their card is blocked until further confirmation of a given transaction. Banks continue to encourage travelers to contact them when their pattern of credit card use changes, e.g. when travelling abroad. Software platforms nowadays do much of the analytical processing with respect to fraud detection. Predictive analytics methods, not rule-based methods, are changing the way fraud is discovered (Riordan et al., 2012). Additionally, banks have introduced two-factor (a form of multifactor) authentication requirements, which means an online site requires more than just a cardholder’s username and password. Commonly this takes the form of an SMS or a phone call to a predesignated number containing a randomized code. Single-factor authentication is now considered inadequate in the case of high-risk transactions, or the movement of funds to other parties (Aguilar, 2015).
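The pattern-based detection described above can be illustrated with a toy sketch. This is a hypothetical example, not the Falcon system’s actual method: it flags any transaction whose amount deviates sharply (by z-score) from the cardholder’s own spending history, which is the simplest form of the per-customer profiling that modern fraud analytics builds on.

```python
from statistics import mean, stdev

def flag_suspect(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the
    cardholder's spending history (a simple z-score rule).

    history   -- list of the cardholder's past transaction amounts
    amount    -- the new transaction amount to evaluate
    threshold -- how many standard deviations count as "unusual"
    """
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: any change is unusual
    return abs(amount - mu) / sigma > threshold

# A cardholder who normally spends $20-60 per transaction:
history = [25.0, 40.0, 30.0, 55.0, 20.0, 35.0]
print(flag_suspect(history, 45.0))    # typical purchase -> False
print(flag_suspect(history, 2500.0))  # sudden high-value purchase -> True
```

A real system would profile many more dimensions (merchant category, location, time of day, velocity of purchases) and would learn the threshold from labeled fraud data rather than fixing it by hand, but the principle of scoring each transaction against the cardholder’s own baseline is the same.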

Main Focus of Chapter 

Issues, Controversies, Problems

Katina Michael: Dan, let’s start at the end of your story which was the beginning of your reformation. What happened the day you got caught for credit card fraud?

Dan DeFilippi: It was December 2004 in Rochester, New York. I was sitting in my windowless office getting work done, and all of a sudden the door burst open, and this rush of people came flying in. “Get down under your desks. Show your hands. Hands where I can see them.” And before I could tell what was going on, my hands were cuffed behind my back and it was over. That was the end of that chapter of my life.

Katina Michael: Can you tell us what cybercrimes you committed and for how long?

Dan DeFilippi: I had been running credit card fraud, identity theft, document forgery pretty much as my fulltime job for about three years, and before that I had been a hacker.

Katina Michael: Why fraud? What led you into that life?

Dan DeFilippi: Everybody has failures. Not everybody makes great decisions in life. So why fraud? What led me to this? I mean, I had great parents, a great upbringing, a great family life. I did okay in school, and you know, not to stroke my ego too much, but I know I am intelligent and I could succeed at whatever I chose to do. But when I was growing up, one of the things that I’m really thankful for is my parents taught me to think for myself. They didn’t just focus on remembering knowledge. They taught me to learn, to think, to understand. And this is really what the hacker mentality is all about. And when I say hacker, I mean it in the traditional sense. I don’t mean it as somebody in there stealing from your company. I mean it as somebody out there seeking knowledge, testing the edges, testing the boundaries, pushing the limits, and seeing how things work. So growing up, I disassembled little broken electronics and things like that, and as time went on this slowly progressed into, you know, a so-called hacker.

Katina Michael: Do you remember when you actually earned your first dollar by conducting cybercrime?

Dan DeFilippi: My first experience with money in this field was towards the end of my high school. And I realized that my electronics skills could be put to use to do something beyond work. I got involved with a small group of hackers that were trying to cheat advertising systems out of money, and I didn’t even make that much. I made a couple of hundred dollars over, like, a year or something. It was pretty much insignificant. But it was that experience, that first step, that kind of showed me that there was something else out there. And at that time I knew theft and fraud was wrong. I mean, I thought it was stealing. I knew it was stealing. But it spiraled downwards after that point.

Katina Michael: Can you elaborate on how your thinking developed towards earn­ing money through cybercrime?

Dan DeFilippi: I started out with these little things and they slowly, slowly built up and built up and built up, and it was this easy money. So this initial taste of being able to make small amounts, and eventually large amounts of money with almost no work, and doing things that I really enjoyed doing was what did it for me. So from there, I went to college and I didn’t get involved with credit card fraud right away. What I did was, I tried to find a market. And I’ve always been an entrepreneur and very business-minded, and I was at school and I said, “What do people here need? ... I need money, I don’t really want to work for somebody else, I don’t like that.” I realized people needed fake IDs. So I started selling fake IDs to college students. And that again was a taste of easy money. It was work but it wasn’t hard work. And from there, there’s a cross-over here between forged documents and fraud. So that cross-over is what drew me in. I saw these other people doing credit card fraud and making money. I mean, we’re talking about serious money. We’re talking about thousands of dollars a day with only a few hours of work and up.

Katina Michael: You strike me as someone who is very ethical. I almost cannot imagine you committing fraud. I’m trying to understand what went wrong?

Dan DeFilippi: And where were my ethics and morals? Well, the problem is when you do something like this, you need to rationalize it, okay? You can’t worry about it. You have to rationalize it to yourself. So everybody out there committing fraud rationalizes what they’re doing. They justify it. And that’s just how our brains work. Okay? And this is something that comes up a lot on these online fraud forums where people discuss this stuff openly. And the question is posed: “Well, why do you do this? What motivates you? Why, why is this fine with you? Why are you not, you know, opposed to this?” And often, and the biggest thing I see, is like, you know, the Robin Hood scenario- “I’m just stealing from a faceless corporation. It’s victimless.” Of course, all of us know that’s just not true. It impacts the consumers. But everybody comes up with their own reason. Everybody comes up with an explanation for why they’re doing it, and how it’s okay with them, and how they can actually get away with doing it.

Katina Michael: But how does a sensitive young man like you just not realize the impact you were having on others during the time of committing the crimes?

Dan DeFilippi: I’ve never really talked about that too much before... Look, the average person, when they know they’ve acted against their morals, feels they have done wrong; it’s an emotional connection with their failure, and emotionally it feels negative. You feel that you did something wrong; no one has to tell you the crime type, you just know it is bad. Well, when you start doing these kinds of crimes, you lose that discerning voice in your head. I was completely disconnected from my emotions when it came to these types of fraud. I knew that they were ethically wrong, morally wrong, and you know, I have no interest in committing them ever again, but I did not have that visceral reaction to this type of crime. I did not have that guilty feeling of actually stealing something. I would just rationalize it.

Katina Michael: Ok. Could I ask you whether the process of rationalization has much to do with making money? And perhaps, how much money did you actually make in conducting these crimes?

Dan DeFilippi: This is a pretty common question and honestly I don’t have an answer. I can tell you how much I owe the government and that’s ... well, I suppose I owe Discover Card ... I owed $209,000 to Discover Card Credit Card Company in the US. Beyond that, I mean, I didn’t keep track. One of the things I did was, and this is kind of why I got away with it for so long, is I didn’t go crazy. I wasn’t out there every day buying ten laptops. I could have but chose not to. I could’ve worked myself to the bone and made millions of dollars, but I knew if I did that the risk would be significantly higher. So I took it easy. I was going out and doing this stuff one or two days a week, and just living comfortably but not really in major luxury. So honestly, I don’t have a real figure for that. I can just tell you what the government said.

Katina Michael: There is a perception among the community that credit card fraud is sort of a non-violent crime because the “actor” being defrauded is not a person but an organization. Is this why so many people lie to the tax office, for instance?

Dan DeFilippi: Yeah, I do think that’s absolutely true. If we are honest about it, everyone has lied about something in their lifetime. And people... you’re right, you’re absolutely right, that people observe this, and they don’t see it in the big picture. They think of it on the individual level, like I said, and people see this as a faceless corporation, “Oh, they can afford it.” You know, “no big deal”. You know, “Whatever, they’re ripping off the little guy.” You know. People see it that way, and they explain it away much easier than, you know, somebody going off and punching someone in the face and then proceeding to steal their wallet. Even if the dollar figure of the financial fraud is much higher, people are generally less concerned. And I think that’s a real problem because it might entice some people into committing these crimes because they are considered “soft”. And if you’re willing to do small things, it’s going to, as in my case, eventually spiral you downwards. I started with very small fraud, and then got larger. Not that everybody would do that. Not that the police officer taking the burger for free from Burger King is going to step up to, you know, to extortion or something, but certainly it could, could definitely snowball and lead to something.

Katina Michael: It has been about 6 years since you were arrested. Has much has changed in the banking sector regarding triggers or detection of cybercriminal acts?

Dan DeFilippi: Yeah. What credit card companies are doing now is pattern matching and using software to find and root out these kinds of things. I think that’s really key. You know, they recognize patterns of fraud and they flag it and they bring it out. I think using technology to your advantage to identify these patterns of fraud and investigate, report and root them out is probably, you know, one of the best techniques for dollar returns.

Katina Michael: How long were you actually working for the US Secret Service, as a matter of interest? Was it the length of your alleged, or so-called prison term, or how did that work?

Dan DeFilippi: No. So I was arrested early December 2004. I started working with the Secret Service in April 2005, so about six months later. And I worked with them fulltime almost for two years. I cut back on the hours a little bit towards the end, because I went back to university. But it was, it was almost exactly two years, and most of it was fulltime.

Katina Michael: I’ve heard that the US is tougher on cybercrime relative to other crimes. Is this true?

Dan DeFilippi: The punishment for credit card fraud is eight-and-a-half years in the US.

Katina Michael: Do these sentences reduce the likelihood that someone might get caught up in this kind of fraud?

Dan DeFilippi: It’s a contested topic that’s been hotly debated for a long time. And also in ethics, you know, it’s certainly an interesting topic as well. But I think it depends on the type of person. I wasn’t a hardened criminal, I wasn’t the fella down on the street, I was just a kid playing around at first that just got more serious and serious as time went on. You know, I had a great upbringing, I had good morals. And I think to that type of person, it does have an impact. I think that somebody who has a bright future, or could have a bright future, and could throw it all away for a couple of hundred thousand dollars, or whatever, they recognize that, I think. At least the more intelligent people recognize it in that ... you know, “This is going to ruin my life or potentially ruin a large portion of my life.” So, I think it’s obviously not the only deterrent but it can certainly be useful.

Katina Michael: You note that you worked alone. Was this always the case? Did you recruit people to assist you with the fraud and where did you go to find these people?

Dan DeFilippi: Okay. So I mainly worked alone but I did also work with other people, like I said. I was very careful to protect myself. I knew that if I had partners that I worked with regularly it was high risk. So what I did was on these discussion forums, I often chatted with people beyond just doing the credit card fraud, I did other things as well. I sold fake IDs online. I sold the printed cards online. And because I was doing this, I networked with people, and there were a few cases where I worked with other people. For example, I met somebody online. Could have been law enforcement, I don’t know. I would print them a card, send it to them, they would buy something in the store, they would mail back the item, the thing they bought, and then I would sell them online and we would split the money 50/50.

Katina Michael: Was this the manner you engaged others? An equal split?

Dan DeFilippi: Yes, actually, exactly the same deal for instance, with the person I was working with in person, and that person I met through my fake IDs. When I had been selling the fake IDs, I had a network of people that resold for me at the schools. He was one of the people that had been doing that. And then when he found out that I was going to stop selling IDs, I sort of sold him my equipment and he kind of took over. And then he realized I must have something else going on, because why would I stop doing it, it must be pretty lucrative. So when he knew that, you know, he kept pushing me. “What are you doing? Hey, I want to get involved.” And this and that. So it was that person that I happened to meet in person that in the end was my downfall, so to speak.

Katina Michael: Did anyone, say a close family or friend, know what you were doing?

Dan DeFilippi: Absolutely not. No. And I, I made it a point to not let anyone know what I was doing. I almost made it a game, because I just didn’t tell anybody anything. Well, my family I told I had a job, you know, they didn’t know... but all my friends, I just told them nothing. They would always ask me, you know, “Where do you get your money? Where do you get all this stuff?” and I would just say, “Well, you know, doing stuff.” So it was a mystery. And I kind of enjoyed having this mysterious aura about me. You know. What does this guy do? And nobody ever thought it would be anything illegitimate. Everybody thought I was doing something, you know, my own websites, or maybe thought I was doing something like pornography or something. I don’t know. But yeah, I definitely did not tell anybody else. I didn’t want anybody to know.

Katina Michael: What was the most outrageous thing you bought with the money you earned from stolen credit cards?

Dan DeFilippi: More than the money, the outrageous things that I did with the cards are probably what matter. In my case the main motivation was not the money alone; the money was almost valueless to a degree. Anything that anyone could buy with a card in a store, I could get for free. So, this is a mind-set change a fraudster goes through that I didn’t really highlight yet. But money had very little value to me, directly, just because there was so much I could just go out and get for free. So I would just buy stupid random things with these stolen cards. You know, for example, in the case that actually ended up leading to my arrest, we had gone out and we had purchased a laptop before the one that failed, and we bought pizza. You know? So you know, a $10 charge on a stolen credit card for pizza, risking arrest, you know, for, for a pizza. And I would buy stupid stuff like that all the time. And just because I knew it, I had that experience, I could just get away with it mostly.

Katina Michael: You’ve been pretty open with interviews you’ve given. Why?

Dan DeFilippi: It helped me move on and not to keep secrets.

Katina Michael: And on that line of thinking, had you ever met one of your victims? And I don’t mean the credit card company. I actually mean the individual whose credit card you defrauded?

Dan DeFilippi: So I haven’t personally met anyone but I have read statements. So as part of sentencing, the prosecutor solicited statements from victims. And the mind-set is always, “Big faceless corporation, you know, you just call your bank and they just, you know, reverse the charges and no big deal. It takes a little bit of time, but you know, whatever.” And the prosecutor ended up getting three or four statements from individuals who actually were impacted by this, and honestly, you know, I felt very upset after reading them. And I do, I still go back and I read them every once in a while. I get this great sinking feeling, that these people were affected by it. So I haven’t actually personally met anyone but just those statements.

Katina Michael: How much of hacking do you think is acting? To me traditional hacking is someone sort of hacking into a website and perhaps downloading some data. However, in your case, there was a physical presence, you walked into the store and confronted real people. It wasn’t all card-not-present fraud where you could be completely anonymous in appearance.

Dan DeFilippi: It was absolutely acting. You know, I haven’t gone into great detail in this interview, but I did hack credit card information and stuff, that’s where I got some of my info. And I did online fraud too. I mean, I would order stuff off websites and things like that. But yeah, being in the store and playing that role, it was totally acting. It was, like I mentioned, you are playing the part of a normal person. And that normal person can be anybody. You know. You could be a high-roller, or you could just be some college student going to buy a laptop. So it was pure acting. And I like to think that I got reasonably good at it. And I would come up with scenarios ahead of time. I would think of scenarios, and answers to situations. I came up with techniques that I thought worked pretty well to talk my way out of bad situations. For example, if I was going to go up and purchase something, I might say to the cashier before they swiped the card, “Oh, that came to a lot more than I thought it would be. I hope my card works.” So that way, if something happened where the card was declined or it came up call for authorization, I could say, “Oh yeah, I must not have gotten my payment” or something like that. So, yeah, it was definitely acting.

Katina Michael: You’ve mentioned this idea of downward spiraling. Could you elaborate?

Dan DeFilippi: I think this is partially something that happens, and it happens if you’re in this and do this too much. So catching people early on, before this takes effect, is important. Now, when you’re trying to catch people involved in this, you have to really think about these kinds of things. Like, why are they doing this? What motivates them? And the thought process, like I was saying, is definitely very different. In my case, because I had this hacker background, and I wasn’t, you know, like some street thug who just found a computer, I did it for more than just the money. I mean, it was certainly because of the challenge. It was because I was doing things I knew other people weren’t doing. I was kind of this rogue figure, this rebel. And I was learning at the edge. And especially, if I could learn something, or discover something, some technique, that I thought nobody else was using or very few people were using, to me that was a rush. I mean, it’s almost like a drug. Except with a drug, with an addict, you’re chasing that “first high” but can’t get back to it, and with credit card fraud, your “high” is always going up. The more money you make, the better it feels. The more challenges you complete, the better you feel.

Katina Michael: You make it sound so easy. That anyone could get into cybercrime. What makes it so easy?

Dan DeFilippi: So really, you’ve got to fill the holes in the systems so they can’t be exploited. What happens is crackers, i.e. criminal hackers, and fraudsters, look for easy access. If there are ten companies that they can target, and your company has weak security, and the other nine have strong security, they’re going after you. Okay? And the reverse also holds. If your company has strong security and nine others have weak security, well, they’re going to have a field-day with the others and they’re just going to walk past you. You know, they’re just going to skip you and move on to the next target. So you need to patch the holes in your technology and in your organization. I don’t know if you’ve noticed recently, but there’s been all kinds of hacking in the news. The PlayStation Network was hacked, along with a lot of US targets. These are basic things that would have been discovered had they had proper controls in place, or proper security auditing happening.

Katina Michael: Okay, so there is the systems focus of weaknesses. But what about human factor issues?

Dan DeFilippi: So another step relates to personnel: training. Training really is key. And I’m going to give you two stories, very similar but with totally different outcomes, that happened to me. So a little bit more about what I used to do frequently. I would mainly print fake credit cards, put stolen data on those cards and use them in store to purchase items. Electronics, and things like that, to go and re-sell them. And in these two stories, I was at a big-box, well-known electronics retailer, with a card and a matching fake ID. I also made the driver’s licenses to go along with the credit cards. And I was at this first location to purchase a laptop: you pick out your laptop and then go through the standard process. And when committing this type of crime you have to have a certain mindset. You have to think, “I am not committing a crime. I am not stealing here. I am just a normal consumer purchasing things. I am just buying a laptop, just like any other person would go into the store and buy a laptop.” So in this first story, I’m in the store, purchasing a laptop. Picked it out, you know, went through the standard process, and they swiped my card. And it came up with a ‘CFA’, a call for authorization. Now, a call for authorization is a case where the transaction is flagged on the computer and the store actually has to call in and talk to an operator who will then verify additional information to make sure it’s not fraud. If you’re trying to commit fraud, it’s a bad thing. You can’t verify this information, right? So this is a case where it’s very possible that you could get caught, so you try to talk your way out of the situation. You try to walk away, you try to get out of it. Well, in this case, I was unable to escape. I was unable to talk my way out of it, and they did the call for authorization. They called in. We had to go up to the front of the store, there was a customer service desk, and they had somebody up there call it in and discuss this with them. 
And I didn’t overhear what they were saying. I had to stand to the side. About five or ten minutes later, I don’t know, I pretty much lost track of time at that point, they came back to me and they said, “I’m sorry, we can’t complete this transaction because your information doesn’t match the information on the credit card account.” That should have raised red flags. That should have set off the worst alarm bells possible.

Katina Michael: Indeed.

Dan DeFilippi: There should have been security coming up to me immediately. They should have notified higher people in the organization to look into the matter. But rather than doing that, they just came up to me, handed me back my cards and apologized. Poor training. So just like a normal consumer, I acted surprised and alarmed and amused. You know, and I kind of talked my way out of this too, “You know, what are you talking about? I have my ID and here’s my card. Obviously this is the real information.” Whatever. They just let me walk out of the store. And I got out of there as quickly as possible. And you know, basically walked away and drove away. Poor training. Had that person had the proper training to understand what was going on and what the situation was, I probably would have been arrested that day. At the very least, there would have been a foot-chase.

Katina Michael: Unbelievable. That was very poor on the side of the cashier. And the other story you were going to share?

Dan DeFilippi: The second story was the opposite experience. The personnel had proper training. Same situation. Same big-box electronics store, different location. Go in. And this time I was actually with somebody else, who was working with me at the time. We went in together. I was posing as his friend and he was just purchasing a computer. And this time we didn’t really approach it like we normally did. We kind of rushed, because we’d been out for a while and we just wanted to leave, so we rushed it faster than a normal person would purchase a computer. Which was unusual, but not a big deal. The person handling the transaction tried to upsell some things, warranties, accessories, software, and all that stuff, and we just, “No, no, no, we don’t ... we just want to, you know, kind of rush it through.” Which is kind of weird, but okay, it happens.

Katina Michael: I’m sure this would have raised even a little suspicion however.

Dan DeFilippi: So when he went to process the transaction, he asked for the ID with the credit card, which happens at times. But at this point the person I was with started getting a little nervous. He wasn’t as used to it as I was. My biggest thing was I never panicked, no matter what the situation. I always tried not to show nervousness. And so he’s getting nervous. The guy’s checking his ID, swipes the card, okay, finally going to go through this, and call for authorization. Same situation. Except this time, you have somebody who’s trying to do the transaction and he is really, really getting nervous. He’s shifting back and forth. He’s in a cold sweat. He’s fidgeting. Something’s clearly wrong with this transaction. Now, the person who was handling this transaction, the person who was trying to take the card payment, happened to be the manager of the department. He happened to be well-trained. He happened to know and realize that something was very wrong here. Something was not right with this transaction. So the call for authorization came up. Now, again, he had to go to the front of the store. He never let that credit card and fake ID out of his hands. He held on to them tight the whole time. There was no way we could have gotten them back. So he goes up to the front and he says, “All right, well, we’re going to do this.” And we said, “Okay, well, we’ll go and look at the stock while you’re doing it.” I just sort of tried to play it off, and as soon as he walked away, I said, “We need to get out of here.” And we left, leaving behind the ID and card. Some may not realize it as I am retelling the story, but this is what ended up leading to my arrest. They ran his photo off his ID on the local news network, somebody recognized him, turned him in, and he turned me in. So this was an obvious case of good, proper training. This guy knew how to handle the situation, and he not only prevented that fraud from happening, he prevented that laptop from leaving the store. He also helped to catch me, and somebody else, and shut down what I was doing. So clearly, you know, failing to train people leads to failure. You need to have proper training. And you need to be able to handle the situation.

Katina Michael: What did you learn from your time at the Secret Service?

Dan DeFilippi: So a little bit more in-depth on what I observed of cybercriminals when I was working with the Secret Service. Now, this is going to be a little aside here, but it’s relevant. People are arrogant. You have to be arrogant to commit a crime, at some level. You have to think you can get away with it. You’re not going to do it if you think you’re going to get caught. So there’s arrogance there. And this same arrogance can be used against them. Up until the point where I got caught in the story I just told you, the one that led to my arrest, I was arrogant. I actually wasn’t protecting myself as well as I should have been. Had I been investigated closer, had law enforcement been monitoring me, they could have caught me a lot earlier. I left traces back to my office. I wasn’t very careful with protecting my office, and they could have come back and found me. So you can play off arrogance, but also ignorance, obviously. They go hand-in-hand. The more arrogant somebody is, the more risk they’re willing to take. One of the things we found frequently worked to catch people was email. Most people don’t realize that email actually contains the IP address of your computer, the identifier on the Internet that distinguishes who you are. Even a lot of criminals who are very intelligent, who are involved in this stuff, do not realize that email shows this. And it’s very easy: you just look at the source of the email and boom, there you go, you’ve got somebody’s location. This was used countless times, over and over, to catch people. Now, obviously the real big fish, the people who are really intelligent and really into this, take steps to protect themselves from that, but then those are the people who are supremely arrogant.

Katina Michael: Can you give us a specific example?

Dan DeFilippi: One case that happened a few years ago, let’s call the individual “Ted”. He actually ran a number of these online forums. These are “carding” forums, online discussion boards, where people commit these crimes. And he was extremely arrogant. He was extremely, let’s say, egotistical as well. He was very good at what he did. He was a good cracker, though he got caught multiple times. So he actually ran one of these sites, and it was a large site, and in the process he even hacked law enforcement computers and found out information about some of these other operations that were going on. He actually outed some informants, but a lot of people didn’t believe him. And his arrogance is really what led to his downfall. Because he was so arrogant, he thought that he could get away with everything. He thought that he was protecting himself. And the fact of the matter was, law enforcement knew who he was almost the whole time. They tracked him back using basic techniques, just like using email. Actually, email was used as part of the evidence, but they had found him before that. And it was his arrogance that really led to his getting arrested again, because he just didn’t protect himself well enough. I cannot emphasize it enough: this arrogance can really be used against people.

Katina Michael: Do you think that cybercrimes will increase in size and number and impact?

Dan DeFilippi: Financial crime is going up and up. And everybody knows this. The reality is that technology works for criminals as much as it works for businesses. Large organizations just can’t evolve fast enough. They’re slow in comparison to cybercriminals.

Katina Michael: How so?

Dan DeFilippi: A criminal’s going to use any tools they can to commit their crimes. They’re going to stay on top of their game. They’re going to be at the forefront of technology. They’re going to be the ones out there pioneering new techniques, finding the holes in new systems before anybody else to get access to your data. And you combine that with the availability of information. When I started hacking back in the ‘90s, it was not easy to learn. You really pretty much had to go into these chat-rooms and become kind of like an apprentice. You had to have people teach you.

Katina Michael: And today?

Dan DeFilippi: Well after the 2000s, when I started doing the identification stuff, there was easier access to data. There were more discussion boards, places where you could learn about these things, and then today it’s super easy to find any of this information. Myself, I actually wrote some tutorials on how to conduct credit card fraud. I wrote, like, a guide to in-store carding. I included how to go about it, what equipment to use, what to purchase, and it’s all out there in the public domain. You don’t even have to understand any of this. You know, you could know nothing about technology, spend a few hours online searching for this stuff, learn how to do it, and order the stuff overnight and the next day you could be out there going and doing this stuff. That’s how easy it is. And that’s why it’s really going up, in my opinion.

Katina Michael: Do you think credit card fraudsters realize the negative consequences of their actions?

Dan DeFilippi: People don’t realize that there is a real negative consequence to this nowadays. I’m not sure what the laws are in Australia about identity theft and credit card fraud, but in the United States it used to be very, very easy to get away with. If you were caught, it would be a slap on the wrist. Almost nothing would happen to you. It was more like give the money back, and possibly serve jail time if it was a repeat offence, but really that was no deterrent. Then it exploded after the dot-com crash, and a few years ago a new law was passed making identity theft punishable by a mandatory two years in prison. And credit card fraud is considered identity theft in the United States. So you’re guaranteed some time in jail if caught.

Katina Michael: Do you think people are aware of the penalties?

Dan DeFilippi: People don’t realize it. And they think, “Oh, it’s nothing, you know, a slap on the wrist.” There is a need for more awareness and campaigning on this matter. People need to be aware of the consequences of their actions. Had I realized how much time I could serve for this kind of crime, I probably would have stopped sooner. Long story short, because I worked with the Secret Service and trained them for a few years, I managed to keep myself out of prison. Had I not done that, I would have actually been facing eight-and-a-half years. That’s serious, especially for somebody in their early 20s. And really, had that happened, my future would have been ruined, I think. I probably would have become a lifelong criminal, because prisons are basically teaching institutions for crime. So really, had I known, had I realized it, I wouldn’t have done it. And I think especially younger people, if they realize that there are major consequences to these actions, that they can be caught nowadays, that there are people out there looking to catch them, that really would help cut back on this. Also, catching people earlier is of course more ideal. Had I been caught early on, before my mind-set had changed and the emotional ties had been broken, I think I would have definitely stopped before it got this far. It would have made a much bigger impact on me. And that’s it.

Future Research Directions

Due to the availability of information over the Internet, non-technical people can easily commit “technical” crimes. The internet has many tutorials and guides to committing fraud, ranging from counterfeit documents to credit card fraud. Many of the most successful fraudsters are hackers turned carders: those who understand and know how to exploit technology to commit their crimes (Turgeman-Goldschmidt, 2008). They progress from breaking into computers to committing fraud when they discover how much money there is to be made. All humans rationalize their actions. The primary rationalization criminals use when committing fraud is blaming the victim. They claim that the victim should have been more knowledgeable, should have taken more steps to protect themselves, or should have taken some action to avoid the fraud. Confidence scams were legal in the US until a decade ago due to the mindset that it was the victim’s fault for falling for the fraud. There needs to be a lot more research conducted into the psychology of the cybercriminal: although technological solutions abound in the market, this is less a technology problem than a human-factor problem. Patents for making credit cards more secure also abound, yet with near field communication (NFC) cards now on the market, fraud is being propelled as investment continues in insecure devices. One has to wonder why these technologies are being chosen when they just increase the risk appetite. There also has to be more campaigning in schools, informing young people of the consequences of cybercrime, especially given so many schools are now mandating the adoption of tablets and other mobile devices in high school.


Avoiding detection, investigation, and arrest for committing identity theft or electronic fraud is, in most cases, fairly simple when compared to other types of crime. When using the correct tools, the internet allows the perpetrator to maintain complete anonymity through much of the crime (Wall, 2015). In the case of electronic fraud, the only risk to the perpetrator is when receiving the stolen money or goods. In some cases, such as those involving online currencies designed to be untraceable, it may be impossible for authorities to investigate due to the anonymity built into the system. The internet and the broad reach of information are a two-way street, however, and can also work in law enforcement’s favor. Camera footage of a crime, such as someone using a stolen credit card at a department store, can now be easily and inexpensively distributed for the public to see. The same tools that keep criminals anonymous can be used by law enforcement to avoid detection during investigations. As with “traditional” crimes, catching a fraudster comes down to mistakes. A single mistake can unravel the target’s identity. One technique used by the US Secret Service is to check emails sent by a target for the originating IP address, something criminals often overlook. Engaging a target in online chat and subpoenaing IP records from the service provider is often successful as well. Even the most technologically savvy criminal may slip up once and let their true IP address through.
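The header-inspection technique described above can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch, not a forensic tool: real investigations must deal with forged headers, long relay chains, and NAT, and the hosts and addresses in the sample message below are invented documentation examples.

```python
import re
from email import message_from_string

# Invented sample message; relays prepend "Received" headers, so the
# earliest hop (closest to the sender's machine) is listed last.
RAW = """Received: from mail.example.org (mail.example.org [203.0.113.7])
    by mx.recipient.net with ESMTP; Mon, 1 Jun 2015 10:00:00 +0000
Received: from sender-pc (unknown [198.51.100.23])
    by mail.example.org with SMTP; Mon, 1 Jun 2015 09:59:58 +0000
From: suspect@example.org
To: buyer@recipient.net
Subject: laptop for sale

Still interested?
"""

IP_RE = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def originating_ip(raw_message):
    """Return the bracketed IP in the earliest Received header, i.e.
    the hop closest to the sender, or None if no IP is found."""
    msg = message_from_string(raw_message)
    for header in reversed(msg.get_all("Received") or []):
        match = IP_RE.search(header)
        if match:
            return match.group(1)
    return None

print(originating_ip(RAW))  # the sender-side hop: 198.51.100.23
```

Reading the headers bottom-up is the key point: the top Received line only identifies the last relay, while the bottom one usually records the submitting machine, which is exactly the mistake the interview says many fraudsters overlook.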

Many types of fraud can be prevented through education: the general population becomes less vulnerable and law enforcement is more likely to find the perpetrator. A store clerk who is trained to recognize the security features of credit cards, checks, and IDs will be able to catch a criminal in the act. The problem with education is its cost. A store may not find a positive return on investment for the time spent training minimum wage employees. Law enforcement may not have the budget for additional training or the personnel available to investigate the crime. Added security can also prevent certain types of crime. Switching from magnetic stripe to chip-and-PIN payment cards reduced card-present fraud in Europe, but we have since seen the introduction of NFC cards that do not require a PIN for transactions under $100. Consumers may be reluctant to adopt new technologies due to the added process or learning curve. Chip-and-PIN has not been adopted in the USA due to the reluctance of merchants and banks: the cost of the change is seen as higher than the cost of fraud. NFC cards, on the other hand, allegedly add to the convenience of conducting transactions and have seen a higher uptake in Australia. However, some merchants refuse to accept NFC transactions, as fraudsters usually go undetected and the merchant is left with the problems to address. Human exploitation is the largest factor in fraud and can make or break a scam (Hadnagy, 2011). Social engineering can play an important role when exploiting a system. Take using a stolen credit card to purchase an item in a store. If the fraudster appears nervous and distracted, employees may become suspicious. Confidence goes a long way. When purchasing a big-ticket item, the fraudster may suggest to the cashier that he hopes the total is not over his limit or that he hopes his recent payment has cleared. 
When presented with an explanation for failure before a failure happens, the employee is less likely to suspect fraud. However, if more training is invested when new employees start at an organization, the likelihood that basic frauds will be detected is very high. There is also the growing incidence of insider attack, where an employee knowingly accepts an illegitimate card from a known individual and then splits the profits. Loss prevention strategies need to be implemented by organizations, and the sector as a whole needs to address the credit card fraud problem in a holistic manner, with all the relevant stakeholders engaged and working together to crack down on cybercrime.


Aguilar, M. (2015). Here's Why Your Bank Account Is Less Secure Than Your Gmail. Gizmodo. Retrieved from

Broadhurst, R. (2006). Developments in the global law enforcement of cyber-crime. Policing: An International Journal of Police Strategies & Management, 29(3), 408–433. 10.1108/13639510610684674

Hadnagy C. (2011). Social Engineering: The Art of Human Hacking. Indiana: John Wiley.

Herley, C., van Oorschot, P.C., & Patrick, A.S. (2009). Passwords: If We’re So Smart, Why Are We Still Using Them? Financial Cryptography and Data Security, LNCS (Vol. 5628, pp. 230-237).

Levi, M. (2008). Organized fraud and organizing frauds: Unpacking research on networks and organization. Criminology & Criminal Justice, 8(4), 389–419. 10.1177/1748895808096470

Reardon, B., Nance, K., & McCombie, S. (2012). Visualization of ATM Usage Patterns to Detect Counterfeit Cards Usage. Proceedings of the 45th Hawaii International Conference on System Sciences (HICSS), Hawaii (pp. 3081-3088). 10.1109/HICSS.2012.638

Turgeman-Goldschmidt O. (2008). Meanings that hackers assign to their being a hacker. International Journal of Cyber Criminology, 2(2), 382–396.

Wall, D. S. (2015). The Internet as a conduit for criminal activity. In A. Pattavina (Ed.), Information Technology and the Criminal Justice System (pp. 77-98). London: Sage Publications.

Key Terms and Definitions

Authorization: The process of approving an electronic credit card transaction and holding the authorized amount as unavailable until either the merchant clears the transaction or the hold lapses.

Call for Authorization: Also known as CFA. A message that may come up when attempting to purchase something using a credit card. Requires the store to call in and verify the transaction.

Carding: Illegal use of a credit card. When criminals use carding to verify the validity of stolen card data, they test the card by using it to make a small online purchase on a website that has real-time transaction processing. If the transaction is processed successfully, the thief knows the card is still good to use.
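The “testing” step exists because a card number’s format can already be checked offline with the standard Luhn checksum; a live transaction is needed precisely because the checksum proves only that a number is well-formed, not that the account behind it is open. A minimal sketch in Python (the sample number is the widely published Visa test number, not a real account):

```python
def luhn_valid(number):
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any result over 9, and sum; valid numbers
    produce a total divisible by 10."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: well-formed test number
print(luhn_valid("4111111111111112"))  # False: checksum fails
```

This is why fraud-detection systems look for bursts of small “test” transactions rather than malformed numbers: any number that clears the checksum looks superficially legitimate.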

Card-Not-Present Fraud: Card-not-present fraud is when you make purchases over the phone or internet using card details without the card being physically presented.

Credit Card Fraud: Defined as the fraudulent acquisition and/or use of credit cards or card details for financial gain.

Cybercrime: Either crimes where computers or other information technologies are an integral part of an offence or crimes directed at computers or other information technologies (such as hacking or unauthorized access to data).

Hacking: Criminals can hack into databases of account details held by banks that hold customer information, or intercept account details that travel in unencrypted form. Hacking bank computers can lead to the withdrawal of sums of money in excess of account credit balances.

Identity Document Forgery: The process by which identity documents issued by banks are copied and/or modified by unauthorized persons for the purpose of deceiving those who would view the documents about the identity of the bearer.

Merchant Account: An account that allows a business to process credit card transactions.

Risk Appetite and Tolerance: Can be defined as ‘the amount and type of risk that an organization is willing to absorb in order to meet its strategic objectives.’

Citation: DeFilippi, Dan and Katina Michael. "Credit Card Fraud: Behind the Scenes." Online Banking Security Measures and Data Protection. IGI Global, 2017. 263-282. Web. 6 Jan. 2018. doi:10.4018/978-1-5225-0864-9.ch015

Uberveillance and the Social Implications of Microchip Implants: Preface

Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies

In addition to common forms of spatial units such as satellite imagery and street views, emerging automatic identification technologies are exploring the use of microchip implants in order to further track an individual’s personal data, identity, location, and condition in real time.

Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies presents case studies, literature reviews, ethnographies, and frameworks supporting the emerging technologies of RFID implants, while also highlighting the current and predicted social implications of human-centric technologies. This book is essential for professionals and researchers engaged in the development of these technologies, and provides insight and support to those inquiring into embedded micro technologies.


Katina Michael, University of Wollongong, Australia

M.G. Michael, University of Wollongong, Australia


Uberveillance can be defined as an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices into the human body. These embedded technologies can take the form of traditional pacemakers, radio-frequency identification (RFID) tag and transponder implants, smart swallowable pills, nanotechnology patches, multi-electrode array brain implants, and even smart dust to mention but a few form factors. To an extent, head-up displays like electronic contact lenses that interface with the inner body (i.e. the eye which sits within a socket) can also be said to be embedded and contributing to the uberveillance trajectory, despite their default categorisation as body wearables.

Uberveillance has to do with the fundamental who (ID), where (location), and when (time) questions in an attempt to derive why (motivation), what (result), and even how (method/plan/thought). Uberveillance can be a predictive mechanism for a person’s expected behaviour, traits, likes, or dislikes based on historical fact; or it can be about real-time measurement and observation; or it can be something in between. The inherent problem with uberveillance is that facts do not always add up to truth, and predictions or interpretations based on uberveillance are not always correct, even if there is direct visual evidence available (Shih, 2013). Uberveillance is more than closed circuit television feeds, or cross-agency databases linked to national identity cards, or biometrics and ePassports used for international travel. Uberveillance is the sum total of all these types of surveillance and the deliberate integration of an individual’s personal data for the continuous tracking and monitoring of identity, location, condition, and point of view in real-time (Michael & Michael, 2010b).

In its ultimate form, uberveillance has to do with more than automatic identification and location-based technologies that we carry with us. It has to do with under-the-skin technology that is embedded in the body, such as microchip implants. Think of it as Big Brother on the inside looking out. It is like a black box embedded in the body which records and gathers evidence, and in this instance, transmitting specific measures wirelessly back to base. This implant is virtually meaningless without the hybrid network architecture that supports its functionality: making the person a walking online node. We are referring here, to the lowest common denominator, the smallest unit of tracking – presently a tiny chip inside the body of a human being. But it should be stated that electronic tattoos and nano-patches that are worn on the body can also certainly be considered mechanisms for data collection in the future. Whether wearable or bearable, it is the intent and objective which remains important, the notion of “people as sensors.” The gradual emergence of the so-called human cloud, that cloud computing platform which allows for the Internetworking of human “points of view” using wearable recording technology (Nolan, 2013), will also be a major factor in the proactive profiling of individuals (Michael & Michael, 2011).


This present volume aims to equip the general public with much-needed educational information about the technological trajectory of RFID implants through exclusive primary interviews, case studies, literature reviews, ethnographies, surveys and frameworks supporting emerging technologies. It was in 1997 that bioartist Eduardo Kac (Figure 1) implanted a microchip in his leg in a live performance titled Time Capsule in Brazil (Michael & Michael, 2009). The following year, in an unrelated experiment, Kevin Warwick injected an implant into his left arm (Warwick, 2002; K. Michael, 2003). By 2004, the VeriChip Corporation had its VeriChip product approved by the Food and Drug Administration (FDA) (Michael, Michael & Ip, 2008). Since that point there has been a great deal of misinformation and confusion surrounding the microchip implant, but also a lot of build-up on the part of the proponents of implantables.

Figure 1. 

Eduardo Kac implanting an RFID chip in his left leg using an animal injector kit on 11 November 1997. Courtesy Eduardo Kac.

Radio-Frequency Identification (RFID) is not an inherently secure technology; in fact, it can be argued that it is just the opposite (Reynolds, 2004). It is therefore surprising, despite the touted advantages, that someone would wish to implant something beneath the skin for non-medical reasons. One of the biggest issues, not commonly discussed in public forums, has to be the increasing number of people who suffer from paranoid or delusional thoughts with respect to enforced implantation or implantation through stealth. We have already encountered significant problems in the health domain, where, for example, a clinical psychologist can no longer readily discount the claims of patients who identify with having been implanted or tracked and monitored using inconspicuous forms of ID. This will be especially true in the era of smart dust, almost invisible to the naked eye, which has yet to fully arrive. Civil libertarians, religious advocates, and so-called conspiracy theorists will not be the only groups to discuss the real potential of microchipping people, and for this reason the discussion will move into the public policy forum, inclusive of all stakeholders in the value chain.

Significantly, this book will also provide researchers and professionals who are engaged in the development or implementation of emerging services with an awareness of the social implications of human-centric technologies. These implications cannot be ignored by operational stakeholders, such as engineers and the scientific elite, if we hope to enact long-term beneficial change with new technologies that will have a positive impact on humanity. We cannot adopt the attitude that says: let us see how far we can go with technology and worry about the repercussions later. To do so would be short-sighted and would ignore the importance of socio-technical sustainability. Ethics can appear irrelevant to the engineer who is innovating in a market-driven and research-funded environment, although there are some notable exceptions where a middle-of-the-road approach is pursued, notably in the medical and educational contexts. Engineering ethics do, of course, exist, though they are unfortunately often denigrated and misinterpreted as discourses on “goodness” or appeals to the categorical imperative. Nevertheless, industry as a whole has a social responsibility to consumers at large: to ensure that it has considered what the misuse of its innovations might mean in varied settings and scenarios, to ensure that there are limited, if any, health effects from the adoption of particular technologies, and to ensure that adverse event reports are maintained by a centralised administrative office with recognised oversight (e.g. an independent ombudsman).

Equally, government agencies must respond with adequate legislative and regulatory controls to ensure that there are consequences for the misuse of new technologies. It is not enough, for example, for a company like Google to come out and openly “bar” applications for its Glass product, such as biometric recognition and pornography, especially when it is well aware that these are two application areas for which its device will be exploited. Google is trying to maintain its brand by stating clearly that it is not affiliated with negative uses of its product, knowing full well that this proclamation is quite meaningless and by no means legally binding. And here lies one of the great quandaries: few would deny that Google’s search rank and page algorithms have made us beneficiaries of some extraordinary inventiveness.

According to a survey by CAST, one in five persons has reported wanting to see a Google Glass ban (Nolan, 2013). The marketing and design approach nowadays, broadly evident across the universal corporate spectrum, therefore seems to be:

We will develop products and make money from them, no matter how detrimental they may be to society. We will push the legislative/regulatory envelope as much as we can, until someone says: Stop. You’ve gone too far! The best we can do as a developer is place a warning on the packaging, just as on cigarette packets, and if people choose to do the wrong thing our liability as a company is removed completely, because we have provided the prior warning and only see beneficial uses. If our product is used for bad ends, then that is not our problem; the criminal justice system can deal with that occurrence. And if non-users of our technology are entangled in a given controversy, then our best advice to them is to realign the asymmetry by adopting our product.


This edited volume came together over a three-year period. We formed our editorial board and sent out the call for book chapters soon after the IEEE conference we hosted at the University of Wollongong, the International Symposium on Technology and Society (ISTAS), on 7-10 June 2010, sponsored by IEEE’s Society on the Social Implications of Technology (SSIT). The symposium was dedicated to emerging technologies, and a great many papers were presented covering a wide range of views on the debate over the microchipping of people. It was a highlight to see this sober conversation happening between experts coming at the debate from different perspectives, different cultural contexts, and different lifeworlds. A great deal of the spirit of that conversation has taken root in this book. The audio-visual proceedings aired on the Australian Broadcasting Corporation’s much-respected 7.30 Report and received wide coverage in major media outlets. The significance is not in the press coverage but in the fact that the topic is now relevant to the everyday person. Citizens will need to make a personal decision: do I receive an implant or not? Do I carry an identifier on the surface of my skin or not? Do I succumb to 24x7 monitoring by being fully “connected” to the grid or not?

Individuals who were present at ISTAS10 and were also key contributors to this volume include keynote speakers Professor Rafael Capurro, Professor Roger Clarke, Professor Kevin Warwick, Dr Katherine Albrecht, Dr Mark Gasson, Mr Amal Graafstra, and attendees Professor Marcus Wigan, Associate Professor Darren Palmer, Dr Ian Warren, Dr Mark Burdon, and Mr William A. Herbert. Each of these presenters has been an instrumental voice in the discussion on Embedded Surveillance Devices (ESDs) in living things (animals and humans) and on tracking and monitoring technologies. They have dedicated a portion of their professional lives to investigating the possibilities and the effects of a world filled with microchips, beyond those in desktop computers and high-tech gadgetry. They have also been able to connect the practice of an Internet of Things (IoT) not only through machine-to-machine but also through nested forms of machine-to-people-to-machine interactions, and have considered the implications. When one is surrounded by such passionate voices, it is difficult not to be inspired to undertake such an extensive work.

A further backdrop to the book is the series of annual workshops we began in 2006 on the Social Implications of National Security, which have had ongoing sponsorship from the Australian Research Council’s Research Network for a Secure Australia (RNSA). Following ISTAS10, we held a workshop on the “Social Implications of Location-Based Services” at the University of Wollongong’s Innovation Campus and were fortunate to have Professor Rafael Capurro, Professor Andrew Goldsmith, Professor Peter Eklund, and Associate Professor Ulrike Gretzel present their work. Worthy of note, the workshop proceedings, which are available online, have been recognised as major milestones for the Research Network in official government documentation. For example, the Department of the Prime Minister and Cabinet (PM&C), among other high-profile agencies in Australia and abroad, has requested copies of the works for its libraries.

In 2012, the topic of our annual RNSA workshop was “Sousveillance and the Social Implications of Point of View Technologies in Law Enforcement,” held at the University of Sydney. Professor Kevin Haggerty keynoted that event, speaking on a theme titled “Monitoring within and beyond the Police Organisation”; he also later graciously contributed the foreword to this book, as well as presenting on biomimetics at the University of Wollongong. The workshop again acted to bring exceptional voices together to discuss audio-visual body-worn recording technologies, including Professor Roger Clarke, Professor David Lyon, Associate Professor Nick O’Brien, Associate Professor Darren Palmer, Dr Saskia Hufnagel, Dr Jann Karp, Mr Richard Kay, Mr Mark Lyell, and Mr Alexander Hayes.

In 2013, the theme of the National Security workshop was “Unmanned Aerial Vehicles - Pros and Cons in Policing, Security & Everyday Life,” held at Ryerson University in Canada. This workshop had presentations from Professor Andrew Clement, Associate Professor Avner Levin, Mr Ian Hannah, and Mr Matthew Schroyer. It was the first time in the workshop’s eight-year history that it was held outside Australia. While drones are not greatly discussed in this volume, they demonstrate one scenario view of the fulfilment of uberveillance. Case in point: the drone killing machine signifies the importance of a remote-controlled macro-to-micro view. First, something needs to be able to scan the skies to look down on the ground; then, once the target has been identified and tracked, it can be extinguished with ease. One need only look at the Israel Defence Force’s pinpoint strike on Ahmed Jabari, the head of the Hamas Military Wing, to note the intrinsic link between the macro and micro levels of detail (K. Michael, 2012). How much “easier” could this kind of strike have been if the GPS chipset in the mobile phone carried by an individual communicated with a chip implant embedded in the body? RFID can be a tracking mechanism, despite the claims of some researchers that it has only a 10 cm read range. That may well be the case for a typical wall-mounted reader, but a mobile phone can act as a continuous reader if in range, as can a set of traffic lights, lampposts, or even Wi-Fi access nodes, depending on the on-board technology and the power of the reader equipment being used. A telltale example of the potential risks can be seen in the rollout of Real ID driver’s licenses in the USA since the enactment of the REAL ID Act of 2005.
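The point that short-range readers can together form a de facto tracking network can be illustrated with a small sketch. The code below is purely hypothetical: the reader identifiers, coordinates, and timings are invented for illustration, and real deployments would involve tag protocols and reader middleware not modelled here. It shows only the core idea that stitching many individual proximity events together yields a continuous movement trail.

```python
from datetime import datetime, timedelta

# Hypothetical network of fixed short-range readers (id -> x, y location).
# Each read is only a local proximity event, but many readers together
# reconstruct a person's path through the environment.
READERS = {
    "lamppost_1": (0.0, 0.0),
    "traffic_light_2": (0.0, 120.0),
    "wifi_node_3": (80.0, 200.0),
}

def movement_trail(read_events):
    """Order raw (reader_id, timestamp) sightings into a location trail."""
    ordered = sorted(read_events, key=lambda event: event[1])
    return [(timestamp, READERS[reader_id]) for reader_id, timestamp in ordered]

t0 = datetime(2013, 6, 1, 9, 0)
events = [
    ("traffic_light_2", t0 + timedelta(minutes=4)),
    ("lamppost_1", t0),
    ("wifi_node_3", t0 + timedelta(minutes=9)),
]
trail = movement_trail(events)
# trail now lists the tag's locations past all three readers in time order
```

The sketch makes the chapter's point concrete: no single reader "tracks" anyone, yet the aggregate of sightings does.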

In 2013, it was also special to meet some of our book contributors for the first time at ISTAS13, held at the University of Toronto on the theme of “Wearable Computers and Augmediated Reality in Everyday Life,” among them Professor Steve Mann, Associate Professor Christine Perakslis, and Dr Ellen McGee. As so often happens when a thematic interest area brings people together from multiple disciplines, an organic group of interdisciplinary voices has begun to form. The holistic nature of this group is especially stimulating in sharing its diverse perspectives. Building upon these initial conversations, and ensuring they continue as the social shaping of technology occurs in the real world, is paramount.

As we brought together this edited volume, we struck a very fruitful collaboration with Dr Jeremy Pitt, Reader at Imperial College London, contributing a large chapter to his disturbingly wonderful edited volume entitled This Pervasive Day: The Potential and Perils of Pervasive Computing (2012). Jeremy’s book is a considered forecast of the social impact of new technologies, inspired by Ira Levin’s This Perfect Day (1970). Worthy of particular note was our participation in the session entitled “Heaven and Hell: Visions for Pervasive Adaptation” at the European Future Technologies Conference and Exhibition (Paechter, 2011). What is important to draw out from this is that pervasive computing will indeed have a divisive impact on its users: for some it will offer incredible benefits, while for others it will be debilitating in its everyday effect. We hope, similarly, to have remained objective in this edited volume, offering viewpoints from diverse positions on the topic of humancentric RFID. This remained one of our principal aims and fundamental goals.

Questioning technology’s trajectory is extremely important, especially when the technology no longer has a medical corrective or prosthetic application but one based on entertainment and convenience services. What happens to us when we embed a device that we cannot remove of our own accord? Is this fundamentally different to wearing or lugging something around? Without a doubt, it is! And what of those technologies presently being developed in laboratories across the world for microscopic forms of ID and pinhole video capture? What will be their impact on our society with respect to covert surveillance? Indeed, the line between overt and covert surveillance is blurring; the two become indistinguishable when we are surrounded by surveillance and are inside the thick fog itself. Another notion that is completely misconstrued is that there is logic in the equation that trades privacy for convenience. There is no trade-off. The two variables cannot be discussed on an equal footing – you cannot give a little of your privacy away for convenience and hope to have it still intact thereafter. No amount of monetary or value-based recompense will correct this asymmetry, and we would be hoodwinking ourselves if we were suddenly “bought out” by such a business model. There is no consolation for privacy loss. We cannot be made to feel better after giving away a part of ourselves. It is not like scraping one’s knee against the concrete with the expectation that the scab will heal after a few days. Privacy loss is to be perpetually bleeding, perpetually exposed.

Additionally, in the writing of this book we also managed a number of special issue journals in 2010 and 2011, all of which acted to inform the direction of the edited volume as a whole. These included special issues on “RFID – A Unique Radio Innovation for the 21st Century” in the Proceedings of the IEEE (together with Rajit Gadh, George Roussos, George Q. Huang, Shiv Prabhu, and Peter Chu); “The Social Implications of Emerging Technologies” in Case Studies in Information Technology with IGI (together with Dr Roba Abbas); “The Social and Behavioral Implications of Location-Based Services” in the Journal of Location Based Services with Routledge; and “Surveillance and Uberveillance” in IEEE Technology and Society Magazine. In 2013, Katina also guest edited a volume of IEEE Computer on “Big Data: Discovery, Productivity and Policy” with Keith W. Miller. If there are any doubts about the holistic body of work supporting uberveillance, we hope that these internationally recognised journals associated with our guest editorship, among others, indicate the thoroughness and robustness of our approach, and the recognition that others have generously extended to us for the incremental work we have completed.

It should also not go without notice that since 2006 the term uberveillance has been embedded into dozens of graduate and undergraduate technical and non-technical courses across the globe. From the University of New South Wales and Deakin University to the University of Salford, and from the University of Malta right through to the University of Texas at El Paso and Western Illinois University, we are extremely encouraged by correspondence from academics and researchers noting the term’s insertion into course outlines, chosen textbooks, lecture schedules, major assessable items, recommended readings, and research training. These citations have acted to inform and to interrogate the subjects that connect us. That our research conclusions resonate with you, without necessarily implying that you have always agreed with us, is indeed substantial.


Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies follows on from a 2009 IGI Premier Reference Source book titled Automatic Identification and Location-Based Services: From Bar Codes to Chip Implants. This volume consists of 6 sections and 18 chapters, with 7 exclusive addendum primary interviews and panels. The strength of the volume is in its 41 author contributions. Contributors come from diverse professional and research backgrounds in the fields of emerging technologies, law and social policy, including information and communication sciences, administrative sciences and management, criminology, sociology, law and regulation, philosophy, ethics and policy, government, and political science, among others. Moreover, the book will provide insights and support to everyday citizens who may be questioning the trajectory of micro and miniature technologies or the potential for humans to be embedded with electro-magnetic devices. Body-wearable technologies are also directly relevant, as they will act as complementary, if not supplementary, innovations to various forms of implants.

Section 1 is titled “The Veillances,” with a specific background context of uberveillance. This section inspects the antecedents of surveillance, Roger Clarke’s dataveillance thirty years on, Steve Mann’s sousveillance, and MG Michael’s uberveillance. These three neologisms are inspected under the umbrella of the “veillances” (from the French veiller), which stems from the Latin vigilare, meaning to “keep watch” (Oxford Dictionary, 2012).

In 2009, Katina Michael and MG Michael presented a plenary paper titled “Teaching Ethics in Wearable Computing: The Social Implications of the New ‘Veillance’” (K. Michael & Michael, 2009d). It was the first time that surveillance, dataveillance, sousveillance, and uberveillance were considered together at a public gathering. As a specialist term, “veillance” was first used in an important blogpost exploring equiveillance by Ian Kerr and Steve Mann (2006), in which “the valences of veillance” were briefly described. In contrast to Kerr and Mann (2006), Michael and Michael (2006) were pondering the intensification of a state of uberveillance through increasingly pervasive technologies that can provide details ranging from the big-picture view right down to the minuscule personal details.

Alexander Hayes (2010) pictorialised this representation using the triquetra, also known as the trinity knot and Celtic triangle (Figure 2), and describes its application to uberveillance in the educational context in chapter 3. Hayes uses mini cases to illustrate the importance of understanding the impact of body-worn video across sectors. He concludes by warning that commercial entities should not engage in “techno-evangelism” when selling to the education sector, but should rather maintain the purposeful intent of the use of point-of-view and body-worn video recorders within the specific educational context. Hayes also emphasises the urgent need for serious discussion of the socio-ethical implications of wearable computers.

Figure 2. 

Uberveillance triquetra (Hayes, 2010). See also Michael and Michael (2007).


By 2013, K. Michael had published proceedings from the International Symposium on Technology and Society (ISTAS13) using the veillance concept as a theme, with numerous papers submitted to the conference exploring veillance perspectives (Ali & Mann, 2013; Hayes, et al., 2013; K. Michael, 2013; Minsky, et al., 2012; Paterson, 2013). Two other crucial references to veillance include “in press” papers by Michael and Michael (2013) and Michael, Michael, and Perakslis (2014). But what does veillance mean? And how is it understood in different contexts? What does it mean to be watched by a CCTV camera, to have one’s personal details deeply scrutinised, to watch another, or to watch oneself?

Dataveillance (see Interview 1.1), conceived by Roger Clarke of the Australian National University (ANU) in 1988, “is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons” (Clarke, 1988a). According to the Oxford Dictionary, dataveillance is summarised as “the practice of monitoring the online activity of a person or group” (Oxford Dictionary, 2013). It is hard to believe that this term was introduced a quarter of a century ago, in response to government agency data-matching initiatives linking taxation records and social security benefits, among other commercial data mining practices. At the time it was a powerful statement in response to the Australia Card proposal of 1987 (Clarke, 1988b), which was never implemented by the Hawke Government, despite the Howard Government’s attempt to introduce an Access Card almost two decades later in 2005 (Australian Privacy Foundation, 2005). The same issues ensue today, only on a more momentous magnitude, with far more consequences and advanced capabilities in analytics, data storage, and converging systems.

Sousveillance (see chapter 2), conceived by Steve Mann of the University of Toronto in 2002 but practised since at least 1995, is the “recording of an activity from the perspective of a participant in the activity” (Wordnik, 2013). Its initial introduction into the literature came in the inaugural issue of the Surveillance and Society journal in 2003, with a meaning of “inverse surveillance” as a counter to organisational surveillance (Mann, Nolan, & Wellman, 2003). Mann prefers to interpret sousveillance as under-sight, which maintains integrity, contra to surveillance as over-sight, which equates to hypocrisy (Mann, 2004).

Whereas dataveillance is the systematic use of personal data systems in the monitoring of people, sousveillance is the inverse: the continuous capture of personal experience by the participant. For example, dataveillance might include the linking of someone’s tax file number with their bank account details and communications data, a form of monitoring that presents the individual with numerous social dangers (Clarke, 1988a). Sousveillance, on the other hand, is a voluntary act of logging what one might see around oneself as one moves through the world. Surveillance is thus considered watching from above, whereas sousveillance is considered watching from below.
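Clarke's notion of data matching can be made concrete with a minimal sketch. The records, field names, and identifier below are entirely fictitious; the point is only to show how a join on a shared identifier lets two agencies assemble a composite profile that neither held alone.

```python
# Illustrative data-matching in the style Clarke described: records held by
# separate agencies are joined on a shared identifier (here a fictitious
# tax file number), producing a composite profile no single agency held.
tax_records = {"123456789": {"name": "J. Citizen", "income": 52000}}
welfare_records = {"123456789": {"benefit": "unemployment", "amount": 280}}

def match_records(left, right):
    """Join two agency datasets on their shared identifier."""
    profiles = {}
    for key in left.keys() & right.keys():  # identifiers present in both
        profiles[key] = {**left[key], **right[key]}
    return profiles

profiles = match_records(tax_records, welfare_records)
# A matched profile can now flag, e.g., income alongside benefit receipt.
```

The simplicity of the join is precisely what made the 1980s data-matching programs so scalable, and why a universal identifier raised such alarm.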

Uberveillance (see chapter 1) conceived by MG Michael of the University of Wollongong (UOW) in 2006, is commonly defined as: “ubiquitous or pervasive electronic surveillance that is not only ‘always on’ but ‘always with you,’ ultimately in the form of bodily invasive surveillance” (ALD, 2010). The term entered the Macquarie Dictionary of Australia officially in 2008 as “an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body” (Macquarie, 2009, p. 1094). The concern over uberveillance is directly related to the misinformation, misinterpretation, and information manipulation of citizens' data. We can strive for omnipresence through real-time remote sharing and monitoring, but we will never achieve simple omniscience (Michael & Michael, 2009).

Uberveillance is a compound word, conjoining the German über, meaning over or above, with the French veillance. The concept is very much linked to Friedrich Nietzsche’s vision of the Übermensch, a man with powers beyond those of an ordinary human being, like a superman with amplified abilities (Honderich, 1995; M. G. Michael & Michael, 2010b). Uberveillance is exemplified by embedded devices that quantify the self and measure indiscriminately: for example, heart, pulse, and temperature sensor readings emanating wirelessly from the body in binary bits, or even amplified eyes, such as inserted contact lens “glass,” that might provide a visual display and access to the Internet or social networking applications.

Uberveillance brings together all forms of watching from above and from below, from machines that move to those that stand still, from animals and from people, acquired involuntarily or voluntarily using obtrusive or unobtrusive devices (Figure 3) (K. Michael, et al., 2010). The network infrastructure underlies the ability to collect data direct from the sensor devices worn by the individual, and big data analytics ensures an interpretation of the unique behavioral traits of the individual implying more than just predicted movement, but intent and thought (K. Michael & Miller, 2013).

Figure 3. From surveillance to uberveillance (K. Michael, et al., 2009b)

It has been said that uberveillance is that part of the veillance puzzle that brings together the sur, data, and sous to an intersecting point (Stephan, et al., 2012). In uberveillance, there is the “watching” from above component (sur), there is the “collecting” of personal data and public data for mining (data), and there is the watching from below (sous), which can draw together social networks and strangers, all coming together via wearable and implantable devices on/in the human body. Uberveillance can be used for good, but we contend that, independent of its application for non-medical purposes, it will always have an underlying control factor of power and authority (Masters & Michael, 2005; Gagnon, et al., 2013).

Section 2 is dedicated to applications of humancentric implantables in both the medical and non-medical space. Chapter 4 is written by Kevin Warwick, professor of cybernetics at the University of Reading, and his senior research fellow, Dr Mark Gasson. In 1998, Warwick was responsible for Cyborg 1.0, and later for Cyborg 2.0 in 2002. In chapter 4, Warwick and Gasson describe implants, tracking and monitoring functionality, Deep Brain Stimulation (DBS), and magnetic implants. They are pioneers in the implantables arena, but after initially investigating ID and location interactivity in a closed campus environment using humancentric RFID approaches, Warwick has begun to focus his efforts on medical solutions that can aid the disabled, teaming up with Professor Tipu Aziz, a neurosurgeon from the University of Oxford. He has also explored person-to-person interfaces using implantable devices for bi-directional functionality.

Following on from the Warwick and Gasson chapter are two interviews and a modified presentation transcript demonstrating three different kinds of RFID implant applications. Interview 4.1 is with Mr Serafin Vilaplana, the former IT Manager at the Baja Beach Club, who implemented the RFID implants for club patronage in Barcelona, Spain. The RFID implants were used to attract VIP patrons, perform basic access control, and make electronic payments. Katina Michael had the opportunity to interview Serafin after being invited to attend a Women in Engineering (WIE) conference in Spain in mid-2009, organised by the Georgia Institute of Technology. It was on this connected journey that Katina Michael also met with Mark Gasson for the very first time, during a one-day conference at the London School of Economics, and they discussed a variety of incremental innovations in RFID.

In late May 2009, Mr Gary Retherford, a Six Sigma black belt specialising in security, contacted Katina to be formally interviewed after coming across the Michaels’ work on the Internet. Retherford was responsible for instituting an employee access control program using the VeriChip implantable device in 2006. Interview 4.2 presents a candid discussion between Retherford and K. Michael on the risk-versus-reward debate with respect to RFID implantables. While Retherford can see the potential for ID tokens being embedded in the body, Michael raises some very important matters with respect to the security questions inherent in RFID. Plainly, Michael argues that if we invite technology into the body, then we are also inviting a whole host of computer “connectedness” issues (e.g. viruses, denial-of-service attacks, server outages, susceptibility to hacking) into the human body. Retherford believes that these are matters that can be overcome with the right technology, and predicts a time when RFID implant maintenance may well be as straightforward as visiting a Local Service Provider (LSP).

Presentation 4.3 was delivered at IEEE ISTAS10 by Mr Amal Graafstra and can be found on the Internet. This chapter presents the do-it-yourselfer perspective, as opposed to receiving an implant that someone else uses in their operations or commercial applications. Quite possibly, the DIY culture may have an even greater influence on the diffusion of RFID implantables than the commercial arena. DIYers are usually circumspect about commercial RFID implant offerings which they cannot customise, or for which they need an implant injected into a pre-defined bodily location they cannot physically control. Graafstra’s published interview in 2009, as well as his full-length paper on the RFID subculture with K. Michael and M.G. Michael (2010), still stand as the most informative dialogue on the motivations of DIYers. In 2012, Graafstra began his own company touting the benefits of RFID implantables within the DIY/hacking community. Notably, a footer disclaimer statement reads: “Certain things sold at the Dangerous Things Web shop are dangerous. You are purchasing, receiving, and using the items you acquired here at your own peril. You're a big boy/girl now, you can make your own decisions about how you want to use the items you purchase. If this makes you uncomfortable, or you are unable to take personal responsibility for your actions, don't order!”

Chapter 5 closes section 2 and is written by Maria Burke and Chris Speed on applications of technology with an emphasis on memory, knowledge browsing, knowledge recovery, and knowledge sharing. This chapter reports on outcomes from the Tales of Things Electronic Memory (TOTeM) large grant in the United Kingdom. Burke and Speed take a fresh perspective on how technology is influencing societal and organisational change by focusing on Knowledge Management (KM). Rather than explicitly addressing RFID, the chapter explores technologies already widely diffused under the broad category of tagging systems, such as quick response (QR) codes, essentially 2D barcodes. The authors also acknowledge that tagging systems rely on underlying infrastructure, such as wireless networks and, more broadly, the Internet, through devices we carry such as smartphones. In the context of this book, one might also read this chapter with a view to how memory aids might be used to support an ageing population, or those suffering from Alzheimer’s disease, for example.

Section 3 is about the adoption of RFID tags and transponders by various demographics. Christine Perakslis examines the willingness to adopt RFID implants in chapter 6. She looks specifically at how personality factors play a role in the acceptance of uberveillance. She reports on a preliminary study, and compares outcomes from two separate studies conducted in 2005 and 2010. Among her important findings, she discusses RFID implants as lifesaving devices, their use for trackability in case of an emergency, and their potential to increase safety and security and to speed up airport checkpoints. Yet the purpose of the Perakslis study is not to identify implantable applications as such, but to investigate differences between and among personality dimensions and levels of willingness toward implanting an RFID chip in the human body. Specifically, Perakslis examines levels of willingness toward the uberveillance trajectory using the Myers-Briggs Type Indicator (MBTI).

In Interview 6.1, Katina Michael converses with a 16-year-old male from Campbelltown, NSW, about tattoos, implants, and amplification. The interview is telling with respect to the prevalence of the “coolness” factor and group dynamics in youth. Though tattoos have traditionally been used to identify with an affinity group, we learn that implants would only resonate with youth if they were functional in an advanced manner, beyond mere identification purposes. This interview demonstrates the intrinsic connection between technology and the youth subculture, which will more than likely be among the early adopters of implantable devices, yet at the same time remains highly susceptible to peer-group pressure and brand-driven advertising.

In chapter 7, Randy Basham considers the potential for RFID chip technology use in the elderly for surveillance purposes. The chapter not only focuses on adoption of the technology but emphasises the value conflicts that RFID poses to the elderly demographic. Among these conflicts are resistance to change, technophobia, matters of informed consent, the risk of physical harm, Western religious opposition, concerns over privacy and GPS tracking, and transhumanism. Basham, who sits on the Human Services Information Technology Applications (HUSITA) board of directors, provides major insights into resistance to change with respect to humancentric RFID. It is valuable to read Basham’s article alongside the earlier interview transcript of Gary Retherford, to consider how new technologies like RFID implantables may be diffused widely into society. Minors and the elderly are particularly dependent demographics in this space and require special attention. It is pertinent to note that the protests by CASPIAN, led by Katherine Albrecht, blocked the chipping of elderly patients suffering from Alzheimer’s disease in 2007 (Lewan, 2007; ABC, 2007). If one contemplates the trajectory for technology crossover in the surveillance atmosphere, one might imagine an implantable solution with a Unique Lifetime Identifier (ULI) which follows people from cradle to grave and becomes the fundamental componentry that powers human interactions.

Section 4 draws on laws, directives, regulations and standards with respect to challenges arising from the practice of uberveillance. Chapter 8 investigates how the collection of DNA profiles and samples in the United Kingdom is fast becoming uncontrolled. The National DNA Database (NDNAD) of the UK has more than 8% of the population registered, with much higher proportions for minority groups, such as the Black Ethnic Minority (BEM). Author Katina Michael argues that such practices drive further adoption of what one could term national security technologies. However, developments and innovations in this space are fraught with ethical challenges. The risks associated with familial searching, as overlaid with medical research, further compound the possibility that people may carry a microchip implant with some form of DNA identifier linked to a Personal Health Record (PHR). This is particularly pertinent when considering the European Union (EU) decision to step up cross-border police and judicial cooperation in EU countries in criminal matters, allowing for the exchange of DNA profiles between the authorities responsible for the prevention and investigation of criminal offences (see Prüm Treaty).

Chapter 9 presents outcomes from a large Australian Research Council-funded project on the night time economy in Australia. In this chapter, ID scanners and uberveillance are considered in light of trade-offs between privacy and crime prevention. Does instituting ID scanners prevent or minimise crime in particular hot spots, or do they simply cause a chilling effect and trigger the redistribution of crime to new areas? Darren Palmer and his co-authors demonstrate how ID scanners are becoming a normalized precondition of entry into one Australian night time economy. They demonstrate that the implications of technological determinism amongst policy makers, police and crime prevention theories need to be critically assessed, and that the value of ID scanners needs to be reconsidered in context. In chapter 10, Jann Karp writes on global tracking systems in Australian interstate trucking. She investigates driver perspectives and attitudes on the modern practice of fleet management, on the practice of tracking vehicles, and on what that means to truck drivers. Whereas chapter 9 investigates the impact of emerging technology on consumers, chapter 10 gives an employee perspective. While Palmer et al. question the effectiveness of ID scanners in pubs and clubs, Karp poses the challenging question: is locational surveillance of drivers in the trucking industry a help or a hindrance?

Chapter 11, written by Mark Burdon et al., reviews legislative developments in tracking, in relation to the “Do Not Track” initiatives. The chapter focuses on online behavioral profiling, in contrast to chapter 8, which focuses on DNA profiling and sampling. US legislative developments are compared with those in the European Union, New Zealand, Canada and Australia. Burdon et al. provide an excellent analysis of the problems. Recommendations for ways forward are presented in a bid for members of our communities to be able to provide meaningful and educated consent, but also for the appropriate regulation of transborder information flows. This is a substantial piece of work, and one of the most informative chapters on Do Not Track initiatives available in the literature.

Chapter 12 by Kyle Powys Whyte and his nine co-authors from Michigan State University completes section 4 with a paper on emerging standards in the livestock industry. The chapter looks at the benefits of nanobiosensors in livestock traceability systems but does not neglect to raise the social and ethical dimensions related to standardising this industry. Whyte et al. argue that future development of nanobiosensors should include processes that engage diverse actors in ways that elicit productive dialogue on the social and ethical contexts. A number of practical recommendations are presented at the conclusion of the chapter, such as the role of “anticipatory governance” as linked to Science and Technology Studies (STS). One need only consider the findings of this priming chapter, and how these results may be applied in light of the relationship between non-humancentric RFID and humancentric RFID chipping. Indeed, the opening sentence of the chapter points to the potential: “uberveillance of humans will emerge through embedding chips within nonhumans in order to monitor humans.”

Section 5 contains the critical chapter dedicated to the health implications of microchipping living things. In chapter 13, Katherine Albrecht uncovers significant problems related to microchip-induced cancer in mice and rats (Albrecht, 2010). Eleven clinical studies published in oncology and toxicology journals between 1996 and 2006 are examined in detail in this chapter. Albrecht goes beyond the prospective social implications of microchipping humans when she presents the physical adverse reactions to implants in animals. Albrecht concludes her chapter with solid recommendations for policy-makers, veterinarians, pet owners, and oncology researchers, among others. When the original report was first launched, Todd Lewan (2007) of the Associated Press had an article published in the Washington Post titled “Chip Implants Linked to Animal Tumors.” Albrecht is to be commended for this pioneering study, choosing to focus on health-related matters which will increasingly become relevant in the adoption of invasive and pervasive technologies.

The sixth and final section addresses the emerging socio-ethical implications of RFID tags and transponders in humans. Chapter 14 addresses some of the underlying philosophical aspects of privacy within pervasive surveillance. Alan Rubel chooses to investigate the commercial arena, penal supervision, and child surveillance in this book chapter. He asks: what is the potential for privacy loss? The intriguing and difficult question that Rubel attempts to answer is whether privacy losses (and gains) are morally salient. Rubel posits that determining whether privacy loss is morally weighty, or of sufficient moral weight to give rise to a right to privacy, requires an examination of reasons why privacy might be valuable. He describes both instrumental value and intrinsic value and presents a brief discussion on surveillance and privacy value.

Panel 14.1 is a slightly modified transcription of the debate over microchipping people recorded at IEEE ISTAS10. This distinguished panel is chaired by lawyer William Herbert. Panel members included Rafael Capurro, who was a member of the European Group on Ethics in Science and New Technologies (EGE), and who co-authored the landmark opinion published in 2005, “On the ethical aspects of ICT implants in the human body.” Capurro, who is the director of the International Center for Information Ethics, was able to provide a highly specialist ethical contribution to the panel. Mark Gasson and Amal Graafstra, both of whom are RFID implantees, introduced their respective expert testimonies. Chair of the Australian Privacy Foundation Roger Clarke and CASPIAN director Katherine Albrecht represented the privacy and civil liberties positions in the debate. The transcript demonstrates the complexity and multi-layered dimensions surrounding humancentric RFID, and the divisive nature of the issue at hand: whether to microchip people, or not.

In chapter 15 we are introduced to the development of brain computer interfaces, brain machine interfaces and neuromotor prostheses. Here Ellen McGee examines sophisticated technologies that are used for more than just identification purposes. She writes of brain implants that are surgically implanted and affixed, as opposed to simple implantable devices that are injected in the arm with a small injector kit. These advanced technologies will allow for radical enhancement and augmentation. It is clear from McGee’s fascinating work that these kinds of leaps in human function and capability will cause major ethical, safety, and justice dilemmas. McGee clearly articulates the need for discourse and regulation in the broad field of neuroprosthetics. She especially emphasises the importance of privacy and autonomy. McGee concludes that there is an urgent need for debate on these issues, and questions whether or not it is wise to pursue such irreversible developments.

Ronnie Lipschutz and Rebecca Hester complement the work of McGee, going beyond possibilities to the outright assumption that the human will assimilate into the cellular society. They proclaim “We are the Borg!”, and in doing so point to a future scenario where not only bodies are read, but minds as well. They describe “re(b)organization” as the new phenomenon occurring in our society today. Chapter 16 is strikingly challenging for this reason, and makes one speculate about what or who the driving forces behind this cyborgization process are. This chapter will also prove of special interest to those who are conversant with Cartesian theory. Lipschutz and Hester conclude by outlining the very real need for a legal framework to deal with hackers who penetrate biodata systems and alter individuals’ minds and bodies, or who may even kill a person by tampering with or reprogramming their medical device remotely.

Interview 16.1 directly alludes to this cellular society. Videographer Jordan Brown interviews Katina Michael on the notion of the “screen bubble.” What is the screen culture doing to us? Rather than looking up as we walk around, we divert our attention to the screen in the form of a smart phone, iPad, or even a digital wearable glass device. We look down increasingly, and not at each other. We peer into lifeless windows of data, rather than peer into one another’s eyes. What could this mean and what are some of the social implications of this altering of our natural gaze? The discussion between Brown and K. Michael is applicable to not just the implantables space, but to the wearables phenomenon as well.

The question of faith in a data-driven and information-saturated society is adeptly addressed by Marcus Wigan in the Epilogue. Wigan calls for a new moral imperative. He asks the very important question in the context of “who are the vulnerable now?” What is the role of information ethics, and where should targeted efforts be made to address these overarching issues which affect all members of society: from children to the elderly, from the employed to the unemployed, from those in positions of power to the powerless? It is the emblematic conclusion to a book on uberveillance.


ABC. (2007). Alzheimer's patients lining up for microchip. ABC News. Retrieved from

Albrecht, K. (2010). Microchip-induced tumors in laboratory rodents and dogs: A review of the literature 1990–2006. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS10). Wollongong, Australia: IEEE.

Ali, A., & Mann, S. (2013). The inevitability of the transition from a surveillance-society to a veillance-society: Moral and economic grounding for sousveillance. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Australian Privacy Foundation. (2005). Human services card. Australian Privacy Foundation. Retrieved 6 June 2013, from

Clarke, R. (1988a). Information technology and dataveillance. Communications of the ACM, 31(5), 498–512. 10.1145/42411.42413

Clarke, R. (1988b). Just another piece of plastic in your wallet: The ‘Australian card’ scheme. ACM SIGCAS Computers and Society, 18(1), 7–21. 10.1145/47649.47650

Gagnon, M., Jacob, J. D., & Guta, A. (2013). Treatment adherence redefined: A critical analysis of technotherapeutics. Nursing Inquiry, 20(1), 60–70. 10.1111/j.1440-1800.2012.00595.x

Graafstra, A. (2009). Interview 14.2: The RFID do-it-yourselfer. In Michael, K., & Michael, M. G. (Eds.), Innovative automatic identification and location based services: From bar codes to chip implants (pp. 427–449). Hershey, PA: IGI Global.

Graafstra, A., Michael, K., & Michael, M. G. (2010). Social-technical issues facing the humancentric RFID implantee sub-culture through the eyes of Amal Graafstra. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS10). Wollongong, Australia: IEEE.

Hayes, A. (2010). Uberveillance (triquetra). Retrieved 6 May 2013, from

Hayes, A., Mann, S., Aryani, A., Sabbine, S., Blackall, L., Waugh, P., & Ridgway, S. (2013). Identity awareness of research data in veillance and social computing. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Kerr, I., & Mann, S. (n.d.). Exploring equiveillance. ID TRAIL MIX. Retrieved 26 September 2013 from

Levin, I. (1970). This perfect day: A novel. New York: Pegasus.

Lewan, T. (2007, September 8). Chip implants linked to animal tumors. Washington Post. Retrieved from

Macquarie. (2009). Uberveillance. In S. Butler (Ed.), Macquarie dictionary (5th ed.). Sydney, Australia: Sydney University.

Mann, S. (2004). Sousveillance: Inverse surveillance in multimedia imaging. In Proceedings of the 12th Annual ACM International Conference on Multimedia. New York, NY: ACM.

Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: Inventing and using wearable computing devices for data collection in surveillance environments. Surveillance & Society, 1(3), 331–355.

Masters, A., & Michael, K. (2005). Humancentric applications of RFID implants: The usability contexts of control, convenience and care. In Proceedings of the Second IEEE International Workshop on Mobile Commerce and Services. Munich, Germany: IEEE Computer Society.

Michael, K. (2003). The automatic identification trajectory. In Lawrence, E., Lawrence, J., Newton, S., Dann, S., Corbitt, B., & Thanasankit, T. (Eds.), Internet commerce: Digital models for business. Sydney, Australia: John Wiley & Sons.

Michael, K. (2012). Israel, Palestine and the benefits of waging war through Twitter. The Conversation. Retrieved 22 November 2012, from

Michael, K. (2013a). High-tech lust. IEEE Technology and Society Magazine, 32(2), 4–5. 10.1109/MTS.2013.2259652

Michael, K. (Ed.). (2013b). Social implications of wearable computing and augmediated reality in every day life. In Proceedings of IEEE Symposium on Technology and Society. Toronto, Canada: IEEE.

Michael, K., McNamee, A., & Michael, M. G. (2006). The emerging ethics of humancentric GPS tracking and monitoring. In Proceedings of International Conference on Mobile Business. Copenhagen, Denmark: IEEE Computer Society.

Michael, K., & Michael, M. G. (Eds.). (2007). From dataveillance to überveillance and the realpolitik of the transparent society. Wollongong, Australia: Academic Press.

Michael, K., & Michael, M. G. (2009a). Innovative automatic identification and location-based services: From bar codes to chip implants. Hershey, PA: IGI Global. 10.4018/978-1-59904-795-9

Michael, K., & Michael, M. G. (2009c). Predicting the socioethical implications of implanting people with microchips. PerAda Magazine. Retrieved from

Michael, K., & Michael, M. G. (2009d). Teaching ethics in wearable computing: The social implications of the new ‘veillance’. EduPOV. Retrieved June 18, from

Michael, K., & Michael, M. G. (2010). Implementing namebers using implantable technologies: The future prospects of person ID. In Pitt, J. (Ed.), This pervasive day: The potential and perils of pervasive computing (pp. 163–206). London: Imperial College London.

Michael, K., & Michael, M. G. (2011). The social and behavioral implications of location-based services. Journal of Location-Based Services, 5(3-4), 121–137. 10.1080/17489725.2011.642820

Michael, K., & Michael, M. G. (2013). No limits to watching? Communications of the ACM, 56(11), 26–28. 10.1145/2527187

Michael, K., Michael, M. G., & Abbas, R. (2009b). From surveillance to uberveillance (Australian Research Council Discovery Grant Application). Wollongong, Australia: University of Wollongong.

Michael, K., Michael, M. G., & Ip, R. (2008). Microchip implants for humans as unique identifiers: A case study on VeriChip. In Proceedings of Conference on Ethics, Technology, and Identity. Delft, The Netherlands: Delft University of Technology.

Michael, K., Michael, M. G., & Perakslis, C. (2014). Be vigilant: There are limits to veillance. In Pitt, J. (Ed.), The computer after me. London: Imperial College Press.

Michael, K., & Miller, K. W. (2013). Big data: New opportunities and new challenges. IEEE Computer, 46(6), 22–24. 10.1109/MC.2013.196

Michael, K., Roussos, G., Huang, G. Q., Gadh, R., Chattopadhyay, A., & Prabhu, S. (2010). Planetary-scale RFID services in an age of uberveillance. Proceedings of the IEEE, 98(9), 1663–1671. 10.1109/JPROC.2010.2050850

Michael, M. G. (2000). For it is the number of a man. Bulletin of Biblical Studies, 19, 79–89.

Michael, M. G., & Michael, K. (2009). Uberveillance: Microchipping people and the assault on privacy. Quadrant, 53(3), 85–89.

Michael, M. G., & Michael, K. (2010). Towards a state of uberveillance. IEEE Technology and Society Magazine, 29(2), 9–16. 10.1109/MTS.2010.937024

Minsky, M. (2013). The society of intelligent veillance. In Proceedings of IEEE International Symposium on Technology and Society (ISTAS13). Toronto, Canada: IEEE.

Nolan, D. (2013, June 7). The human cloud. Monolith. Retrieved from

Oxford Dictionary. (2012). Dataveillance. Retrieved 6 May 2013, from

Paechter, B., Pitt, J., Serbedzijac, N., Michael, K., Willies, J., & Helgason, I. (2011). Heaven and hell: Visions for pervasive adaptation. In Fet11 essence. Budapest, Hungary: Elsevier. 10.1016/j.procs.2011.12.025

Paterson, N. (2013). Veillances: Protocols & network surveillance. In Proceedings of IEEE International Symposium on Technology and Society(ISTAS13). Toronto, Canada: IEEE.

Pitt J. (Ed.). (2012). This pervasive day: The potential and perils of pervasive computing. London: Imperial College London.

Pitt J. (2014). The computer after me. London: Imperial College Press.

Reynolds, M. (2004). Despite the hype, microchip implants won't deliver security. Gartner. Retrieved 6 May 2013, from

Rodotà, S., & Capurro, R. (2005). Ethical aspects of ICT implants in the human body. Opinion of the European Group on Ethics in Science and New Technologies to the European Commission, 20.

Shih, T. K. (2013). Video forgery and motion editing. In Proceedings of International Conference on Advances in ICT for Emerging Regions. ICT.

Stephan, K. D., Michael, K., Michael, M. G., Jacob, L., & Anesta, E. (2012). Social implications of technology: Past, present, and future. Proceedings of the IEEE, 100(13), 1752–1781. 10.1109/JPROC.2012.2189919

Superman. (1995). In Honderich, T. (Ed.), Oxford companion to philosophy. Oxford, UK: Oxford University Press.

Uberveillance. (2010). In Australian law dictionary. Oxford, UK: Oxford University Press.

Warwick, K. (2002). I, cyborg. London: Century.

Wordnik. (2013). Sousveillance. Retrieved 6 June 2013, from

Towards the Blanket Coverage DNA Profiling and Sampling of Citizens in England, Wales, and Northern Ireland

Katina Michael, University of Wollongong, Australia



The European Court of Human Rights (ECtHR) ruling of S and Marper v United Kingdom will have major implications for the retention of Deoxyribonucleic Acid (DNA) samples, profiles, and fingerprints of innocents stored in England, Wales, and Northern Ireland. In its attempt to develop a comprehensive National DNA Database (NDNAD) for the fight against crime, the UK Government has come under fire for its blanket-style coverage of the DNA sampling of its populace. Figures indicate that the UK Government retains a highly disproportionate number of samples when compared to other nation states in the Council of Europe (CoE), and indeed anywhere else in the world. In addition, the UK Government retains a disproportionate number of DNA profiles and samples of specific ethnic minority groups, such as the Black Ethnic Minority group (BEM). Finally, the S and Marper case demonstrates that innocent children, and in general innocent citizens, are still on the national DNA database, sometimes even without their knowledge. Despite the fact that the S and Marper case concluded with the removal of the biometric data of Mr S and Mr Marper, all other innocent subjects must still apply to their local Metropolitan Police Service to have their fingerprints or DNA removed from the register. This is not only a time-consuming process, but one that is often simply not feasible.


The Police and Criminal Evidence Act of 1984 (UK) (PACE) has undergone major changes since its inception. PACE and the PACE Codes of Practice provide the core framework of police powers and safeguards around stop and search, arrest, detention, investigation, identification and the interviewing of detainees (Police Home Office 2009). In December 2008, following the S and Marper European Court of Human Rights (ECtHR) judgment, PACE underwent a review, and changes took effect on 31 December 2008; however, more changes, especially on the issue of the retention of fingerprints and DNA, are forthcoming. According to the Home Office, the changes expected in PACE are meant to ensure that the “right balance between the powers of the police and the rights and freedoms of the public” is maintained (Police Home Office 2009). On reviewing the legal changes that have taken place since 1984 via a multitude of Acts, it can be said that the United Kingdom (with the exception of Scotland) has, contrary to the claims of the Home Office, experienced a significant imbalance between the powers of the police and the rights and freedoms of the public. In the last 15 years, the rights and freedoms of the public have been severely encroached upon, and police powers significantly increased. The major legislative changes between 1984 and 2008 are reviewed briefly below. They are summarized in a timeline in Figure 1.

Figure 1. Changes to U.K. legislation 1984–2008 that have given the police greater powers and have had an impact on fingerprint and DNA retention (content taken from GeneWatch UK (2009a), adapted into a timeline for readability)

Legislative Changes between 1984 and 2009

PACE was introduced in 1984, one year prior to Dr Jeffreys’s discovery of DNA fingerprinting. Interestingly, PACE allowed the police to ask a doctor to take a blood sample from a suspect during the investigation of a serious crime, but only with the suspect’s express consent. Thus a suspect had to volunteer or “agree” to a blood sample being taken; it could not be taken by force. Even after Jeffreys’s discovery, there was limited use of blood samples for forensic analysis, as tools and techniques were still in their infancy. The Single Locus Probe (SLP) technique in use in early DNA examinations had numerous limitations. While new SLP technology overcame some of these limitations, “the statistical evaluation of SLP DNA evidence brought a new set of problems, perhaps even more difficult to overcome than the preceding technical limitations” (Sullivan 1998). In sections 61-65 the original PACE classified blood samples and scrapings of cells from the inner cheek as intimate in nature. Hair samples (save for pubic hair) were the only type of non-intimate DNA sample that could be retained for forensic analysis without the permission of the suspect, and then only in the course of an investigation into a serious arrestable offence. Although hair cut with scissors rarely provided a good enough sample for SLP profiling, by the late 1980s PCR (polymerase chain reaction) profiling could amplify and type a single strand of hair (Home Office, 2004). This is when mass screenings of DNA samples became possible. To begin with, there was great contention over the admissibility of DNA evidence in a court of law, but this changed as commonplace errors and procedural issues were rectified, new and more modern profiling techniques were introduced, and larger databases for statistical purposes became available.

A significant moment in the fight against crime in the United Kingdom came in 1993, after a Royal Commission on Criminal Justice (Hansard 2003). The Commission was set up because there was a feeling among the community that the criminal justice system was just not working well enough to convict the guilty and exonerate the innocent. Leading up to 1993, there were a number of high-profile miscarriages of justice which weakened the public’s confidence in the criminal justice system, for example, the Birmingham Six, who had been jailed in 1974 for allegedly planting an IRA (Irish Republican Army) bomb that killed 21 people (BBC, 1991). One of the key recommendations coming from the Commission was the setting up of a national forensic DNA database. In the following year, 1994, the Criminal Justice and Public Order Act (CJPOA) introduced amendments to PACE, and in 1995 the National DNA Database (NDNAD) was launched. At first, the Association of Chief Police Officers in England, Wales and Northern Ireland believed that the system would process around 135,000 samples in the first year, but by the end of that year only one quarter of the original target had been loaded into the system, due to significant procedural and technical teething problems related to the database. The expected annual rate was not reached until 1998, as police did not know how to fully exploit the new legislation (Lynch, 2008).

One of the fundamental changes heralded by the CJPOA was the reclassification of particular types of DNA samples from intimate to non-intimate. Authorities knew all too well from their limited experience with DNA since the mid-1980s that “richer” cellular samples were needed if a useable database of the size being projected was going to be possible. Saliva samples and mouth swabs became non-intimate samples, and it followed that non-intimate samples could be taken without the consent of the suspect. Furthermore, police could now conduct the procedure without the assistance of a trained doctor, and if needed by force. The sweeping changes did not stop there; the CJPOA also altered the rules regarding when a DNA sample could be taken. For the first time, DNA samples could be taken not only from people suspected of serious arrestable offences but from those suspected of any recordable offence beyond the most trivial. If a suspect was found guilty, then for the first time since the introduction of PACE the DNA sample could be stored indefinitely. Only if a person was acquitted of a crime, or charges were dropped, would the sample data be destroyed. Minor legislative changes were introduced allowing for the cross-matching of DNA profiles across the whole of the U.K. in 1996 through the Criminal Procedure and Investigations Act, and in 1997 the Criminal Evidence (Amendment) Act enabled non-intimate samples to be taken from prison inmates who had been convicted of serious offences prior to the establishment of the NDNAD.

In 1997 there was a change of government: the Labour Party came to power, and by 1999 Prime Minister Tony Blair announced an aggressive expansion of the NDNAD to contain some 3 million profiles by 2004. In 2001, following the September 11 attacks, the Prevention of Terrorism Act provided that DNA profiles entering the database remained there indefinitely, even if the suspect was acquitted or charges were dropped. PACE was affected by these changes, and even volunteers who had partaken in mass screenings or dragnets and willingly provided their DNA samples remained on the database indefinitely (Beattie, 2009). In 2003, under s. 10 of the Criminal Justice Act (amending s. 63 of PACE), those who were simply arrested or detained at a police station on suspicion of a recordable offence had their DNA sample taken. According to McCartney (2006):

This enables police to take DNA samples from almost all arrestees and preempts technological advances which are expected to see mobile DNA testing kits in the coming years (by omitting the words “in police detention”). It means that a sample (usually a cheek swab) can be taken upon “reasonable suspicion” for an offence, regardless of whether it will indicate guilt or have any possibility of use during the investigation. The law, then, is explicit: anyone who comes under police suspicion is liable to have a DNA sample taken, searched against the samples on the NDNAD, and retained. The course that an investigation takes or whether a prosecution proceeds is of little, if any, significance.

The Criminal Justice Act was yet another extension of police powers and no other nation state had the same freedom to gather and store such personal citizen information. By 2005, the Serious Organised Crime and Police Act extended the uses of the NDNAD to include the identification of deceased persons. By 2008, the Counter-Terrorism Act extended police powers to allow DNA and fingerprints to be taken from persons subject to control orders or those under secret surveillance in the interests of national security.

Numerous legal analysts have been critical of the changes that PACE has undergone since 1984. Ironically, the increase in police powers and the establishment of the NDNAD were originally intended to increase public confidence in the criminal justice system; instead, by going too far, they have eroded citizen trust in the state and impinged on the rights of everyday Britons. Beattie (2009) is rather candid in her assessment of the changes, stating:

[there is] no statutory guidance for decisions about the retention of samples, no readily accessible mechanism whereby individuals can challenge the decision to retain their records (other than judicial review) and no independent oversight by a designated regulatory body.

This assessment strikes at the very heart of the problem. With only a judicial route at one’s disposal to question current practices, an innocent citizen is left almost entirely powerless to battle his or her own government. There is no greater example of this than the DNA sample storage of juveniles between the ages of ten and eighteen, “230,000 of whom were alleged to have been added following legislative changes in 2004, and of whom 24,000 were taken from ‘innocent children’ against whom no charges had been brought …” (Lynch, 2008). This is an utterly disturbing statistic, and one which rightly led to accusations that the Labour government was compiling a database by stealth.

It now seems that PACE “1984” really did lay the seeds to an Orwellian state. According to the most recent Government statistics, 7.39 per cent of the UK population has their DNA profiles retained on the NDNAD (Beattie, 2009). This is an alarming figure when one considers that most other European states have less than 1 per cent of their population on their respective DNA database, and do not keep cellular samples but rather DNA profiles alone and for a defined period of time (Table 1). The U.K. Government would possibly have us believe by these figures that they are dealing with an unusually high crime rate, but the reality is that the figures do not reveal the percentage of persons who have committed violent crimes as opposed to those who have committed petty crimes. Another problem with the NDNAD is that it is highly disproportionate in terms of its recording of citizens by ethnic background. The Guardian newspaper calculated that 37 per cent of black men and 13 per cent of Asian men in the nation are contained in the NDNAD, as compared to only 9 per cent of white men (Lynch, 2008). Liberty has stated that 77 per cent of young black men had records on the NDNAD in 2006 and that black people in general were almost 4 times as likely to appear on the database as white people (Rodgers, 2009).

Table 1. Characteristics of some National DNA Databases

The National DNA Database

The U.K. National DNA Database (NDNAD) of England and Wales was launched in April of 1995 at the Forensic Science Service (FSS) laboratory. It took several years for Northern Ireland to be included in the NDNAD. Before launching the official database, the FSS trialed a small-scale forensic database to ensure the validity of such a system. The FSS began developing DNA testing in 1987 and in 1995 achieved a scientific breakthrough, inventing a chemical that enabled DNA profiling, which led to the establishment of the NDNAD (FSS, 2009a). The NDNAD is the oldest and largest DNA database in the world, with national legislation to foster and support its growth. The U.K. has also adopted a privatized model for forensic science services as related to the criminal justice system (Lynch, 2008). This was not always the case, however: the FSS was once an agency of the Home Office. When it became FSS Ltd. it became a profit-maximizing, government-owned company under almost exclusive contract to the Home Office for forensic services to the police.

Although legislation enabled the police to collect DNA samples, have the FSS process them, and store the resulting DNA profiles on the NDNAD, the expected annual growth rate was not reached until the late 1990s. As one of the main strategic objectives of the NDNAD was to demonstrate a return on investment, the Home Office set out to detect more crimes and thus reduce overall crime rates in the hope of closing the justice gap (McCartney, 2006, p. 175). In April 2000, five years after the establishment of the NDNAD, the UK government announced the DNA Expansion Programme, aimed at getting all known active offenders onto the database, a population estimated at the time to be about 3 million people. Total government investment in the program to March 2005 stood at £240.8 million, which enabled police forces to increase the sampling of suspects, recruit additional crime scene investigators, purchase appropriate equipment, train more police, and so on (Home Office, 2005). Old samples from 1995 to 1999 were also able to be reanalyzed (McCartney, 2006, p. 176). A portion of the profiles were updated to match upgrades in the system software of the NDNAD, from the standard profiling software known as SGM (Second Generation Multiplex), which had an average discrimination power of 1 in 50 million, to SGM Plus profiles, said to reduce the chance of an adventitious match as the size of the NDNAD inevitably increased, fuelled by funding from the Expansion Programme.

An adventitious match is the possibility that two different people have near-identical profiles, yielding a “false positive”, also known in statistics as an α (alpha) error. An adventitious match thus shows a positive result for the matching of two samples (e.g. a crime scene sample and a record on the NDNAD) when in actual fact there is no match at all. In the original NDNAD the risk of an adventitious match using the original SGM profiles was calculated to be 26 per cent, but it has been claimed that since the introduction of the SGM Plus software no adventitious matches have occurred (Nuffield Council, 2007). Sir Alec Jeffreys, however, has warned publicly that the genetic profiles held by police for criminal investigations are not sophisticated enough to prevent false identifications. “Dissatisfied with the discriminatory power of SGM Plus, Jeffreys recommends that following the identification of a suspect, the authority of the match should be tested by reanalyzing the sample at six additional loci” (Lynch 2008, pp. 144-145). Reanalysis of samples (whether from volunteers, suspects, or those convicted) without consent raises additional ethical questions, however, even if it might indeed exonerate a small number of individuals, if anyone at all.
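To see why discrimination power matters more as a database grows, consider a back-of-envelope calculation. This is an illustrative sketch only, assuming independent profiles and a single average match probability; it is not the FSS’s actual risk model, and real forensic match probabilities depend on allele frequencies.

```python
# Rough expected number of adventitious (coincidental) pairwise matches
# in a DNA database, under simplifying assumptions of independence and
# one uniform average match probability. Illustrative only.

def expected_adventitious_matches(n_profiles: int, match_prob: float) -> float:
    """Expected coincidental matches among all pairs of n profiles."""
    n_pairs = n_profiles * (n_profiles - 1) // 2  # number of pairwise comparisons
    return n_pairs * match_prob

# SGM was quoted as having an average discrimination power of ~1 in 50 million.
# With 3 million profiles, tens of thousands of coincidental pairs would be
# expected under these assumptions.
print(expected_adventitious_matches(3_000_000, 1 / 50_000_000))
```

The point of the sketch is only that the number of pairwise comparisons grows quadratically with database size, so a match probability that seems negligible for a single comparison does not remain negligible across millions of profiles, which is why the move to the more discriminating SGM Plus mattered.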

The FSS are aware of the small possibility of an error but believe that the 10 markers currently stored on the database are sufficient (Jha, 2004). In their defense, the FSS claim that the NDNAD is simply a type of intelligence database, and ultimately one is not convicted on mere “intelligence” but on multiple sources of evidence (Koblinsky, Liotti & Oeser-Sweat, 2005, p. 273). Peter Gill of the FSS responded to Jeffreys’ concerns about the need to increase the number of markers per profile by emphasizing that adventitious matches occur quite often when degraded samples are used, and that the jury must make up its mind based on numerous sources of evidence, not DNA evidence in isolation (Jha, 2004). For Jeffreys, storing “unnecessary” personal information on the NDNAD (for instance, of persons previously wrongly suspected of a crime) will only act to over-represent certain ethnic minorities, which could lead to resentment among some citizen subgroups. The other issue that Jeffreys raises is the potential to use DNA sample information at some time in the future, and the risks associated with the potential to reveal health information from those samples; he is strongly opposed to the police gaining access to that kind of information (FSS, 2009).

Looking at some cross-sectional data from the NDNAD can give us a better feel for the size of this databank, which, per capita, stores the largest number of DNA profiles of any nation. By the end of March 2005, the Nuffield Bioethics Council reported that there were 3 million profiles stored on the NDNAD, an estimated 5.2 per cent of the U.K. population, with 40,000 to 50,000 profiles being added monthly. Specifically, the police had retained 3,072,041 criminal justice (CJ) profiles, 12,095 volunteer profiles, and 230,538 scene-of-crime (SOC) profiles (Lynch, 2008, p. 149). The increase in the loading of crime scene samples was due not only to the Expansion Programme but also to the legislative changes noted above (the Criminal Justice Act of 2003 and the Serious Organised Crime and Police Act of 2005) and to innovations in processing capabilities at the FSS. These legislative changes broadened the net of people who would now be added to the databank, in effect lowering the threshold for making it onto the NDNAD. From the perspective of the Association of Chief Police Officers, this was a positive, because it meant getting offenders onto the database earlier in their criminal careers. By the end of December 2005, the NDNAD held around 3.45 million CJ and elimination profiles and 263,923 crime scene sample profiles. At that rate it was predicted that an estimated 25 per cent of the adult male population and 7 per cent of the adult female population would eventually enter the database (Williams & Johnson, 2005). More sober estimates indicate that the overall number of persons to be admitted to the NDNAD would be a little over 10 per cent of the UK population (Table 2) (Jobling & Gill, 2004, p. 745).

Table 2. A NDNAD snapshot using year-end 2007 data

Current NDNAD Statistics

The most recent NDNAD statistics were made public during a parliamentary debate in October 2009 (Hansard, 2009), when new figures for 2007 to 2009 were tabled. Figure 2 is based on the data presented and shows that at the end of March 2007 there were about 151,882 DNA profiles of persons between the ages of 10 and 15 on the NDNAD, constituting about 3 per cent of all DNA profiles. There were 206,449 DNA profiles of persons aged 16 or 17, equating to about 5 per cent of all DNA profiles. Not counting children under the age of 10 whose DNA profiles are stored on the NDNAD, we can estimate that about 9 per cent of the profiles on the NDNAD are of persons under the age of 18. These numbers have the wider community, especially civil liberties groups, other interest groups and key non-government organizations (NGOs), expressing deep concern over the widening criteria for retention on the NDNAD. The matter has now gone through judicial review; while the UK courts declined to recognize the right of innocent people, young children, and those acquitted of a crime to be kept off the NDNAD, the European Court of Human Rights (ECtHR) ruled otherwise. The S and Marper v. United Kingdom case will be the focus of the next section of this paper.

Figure 2. DNA profiles on the NDNAD by age as of end March 2007

Beyond the problem of children on the NDNAD is the disproportionate number of persons of ethnic appearance other than white European who have had their DNA samples taken, analyzed and stored indefinitely. The NDNAD does not record detailed data about one’s ethnicity, but it does categorise each individual by appearance into one of the following ethnic-origin groups: White-South European, White-North European, Asian, Black, Chinese/Japanese/South-East Asian, Middle Eastern, and a further category referred to as Unknown. At first glance the numbers in Figure 3 show that about 77 per cent of the DNA profiles on the NDNAD have come from “White-Europeans” (summing the South and North White European categories), only 7 per cent from “Blacks” and about 5 per cent from “Asians”. But one should not take these percentages at face value. When one analyses these numbers alongside census data, a truer picture emerges. Blacks and Asians do not make up the largest ethnic portion of the UK, so a figure of 7 per cent of profiles from Blacks means that more than 37 per cent of the Black male population in the UK have their DNA profile recorded on the NDNAD, and 5 per cent from “Asians” means that about 13 per cent of the Asian population have their DNA profile recorded on the NDNAD. This is compared with only 9 per cent of the total White population.
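The step from “7 per cent of profiles” to “over 37 per cent of Black men” is a simple base-rate calculation. A minimal sketch follows; the database size and UK Black male population used here are assumed round figures for illustration, not official census or Home Office statistics:

```python
# Convert a group's share of database profiles into the share of that
# group's population held on the database. The figures below are assumed
# round numbers for illustration, not official statistics.

def population_coverage(db_size: int, share_of_db: float,
                        group_population: int) -> float:
    """Fraction of a demographic group with a profile on the database."""
    profiles_from_group = db_size * share_of_db
    return profiles_from_group / group_population

# Assumed: ~3.4M profiles on the database, 7% classed "Black", and
# roughly 650,000 Black men in the UK at the time.
print(round(population_coverage(3_400_000, 0.07, 650_000), 3))
```

The design point is that a small share of a very large database can still cover a large share of a small population, which is exactly the disproportionality the chapter describes.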

Figure 3. DNA profiles on the NDNAD by ethnic appearance as of end March 2007

Some groups refer to this kind of disproportionate ethnic presence on the NDNAD as institutionalized racism. Institutionalized racism can be defined as “that which, covertly or overtly, resides in the policies, procedures, operations and culture of public or private institutions - reinforcing individual prejudices and being reinforced by them in turn” (Lawrence, 1999). It is a structured and systematic form of racism built into institutions. While this researcher would not label the disproportionate ethnic representation in the NDNAD as racism, she does acknowledge that minority ethnic populations, particularly black men, do not stand to benefit from the current UK legislation; rather, the legislation has been to the detriment of minority groups. According to the National Black Police Association of the UK, black men are four times more likely to be stopped and searched than white men. They are also more likely to be arrested and released without charge, let alone convicted, and without being compensated for their ordeal. The NDNAD statistics seem to suggest that black males are more likely to offend than white males, which is a fallacy. This kind of feeling among the Black Ethnic Minority (BEM) community may provoke not only great mistrust in the UK police and the Government but also strong resentment over future life opportunities and freedoms, a concern echoed by Sir Alec Jeffreys. It also means that less competent officers may be inclined, whether mindfully or not, to draw in ethnic minorities in general because they are the “usual” suspects in crimes (Jarrett, 2006). The most up-to-date figures on the profiles that constitute the NDNAD by gender, age and ethnicity can be found in Table 3, an adapted version of the data tabled in Hansard, 27 October 2009, Col. 292W.

Table 3. Most recently released NDNAD profile statistics by gender and ethnic appearance (compare 2008 and 2009). Source: Hansard 27 October 2009 Col292W.

The greatest injustice of the UK legislation on the collection and storage of DNA samples and profiles, however, is that at least 857,000 innocent people remain on the NDNAD who have not been convicted of a crime and may never be. The state of apprehension in which any one of those people must live is difficult to comprehend. For some, such an ordeal would almost certainly breed bitterness, dislike, or even hatred toward the State, and especially the UK Police, on the part of the individual wrongly apprehended. Among the one million innocent people whose DNA sample has been taken are an estimated 100,000 innocent children (Action on Rights for Children, 2007). What are these persons to think and feel? What does it mean for their future, or for employment opportunities requiring security checks? And how might their experience with the Police affect them later in life? Psychologists will always point out that someone treated like a criminal may retaliate as if they were one: “[b]ecause it feels like someone is punishing us by making us feel guilty, we often have an urge to retaliate against those who do” (Stosny, 2008).

But beyond the psychological repercussions on the individual, stemming from what some refer to as “emotional pollution”, is the effort a person must go through to have their details removed from the NDNAD (Geoghegan, 2009), a process that was almost impossible until the S and Marper ECtHR judgment. Since 2004, in England, Wales and Northern Ireland, records are removed and DNA destroyed only under “exceptional circumstances” (Genewatch UK, 2009). And given that the profiles on the NDNAD belong to individual police forces, innocents whose profiles remain on the NDNAD and who wish to have them removed must appeal to their constabulary, although most recently the ACPO has asked officers to ignore the ECtHR ruling (Travis, 2009).

At the end of March 2009, Lord West of Spithead noted that the NDNAD contained DNA profiles and linked DNA samples from approximately 4,859,934 individuals across all police forces, of which an estimated 4,561,201 were from English and Welsh forces (more than 7 per cent of the UK population) (Hansard, 2009). This figure should be compared with those cited in Parliament on 27 October 2009, which indicated a total of 5,056,313 profiles on the NDNAD at the end of March 2008 and 5,617,112 for the same period in 2009 (see Table 3). According to the latest population statistics from the Office for National Statistics (2009), about 61.4 million people reside in the UK, which means that the NDNAD contains profiles of more than 8.36 per cent of the total population of the United Kingdom. This is a rather conservative estimate when one considers that Scotland has a different legislative requirement regarding the retention of DNA profiles.

These specifics are important because they indicate a number of things. First, the UK databank is growing at over 560,000 profiles per annum, in keeping with the rate of 40,000 to 50,000 samples per month. Second, one in nine persons in England, Wales and Northern Ireland is registered on the databank. Third, and more to the point, there are 507,636 DNA profiles of unknown persons. This means either that these samples were collected at crime scenes and have not been matched to “known” persons, or that errors potentially exist in the NDNAD itself. Here an important complementary factor must be underscored in support of the latter claim. If we allege that 507,636 profiles came from scenes of crime (SOC) where the individual has not been identified since April 1995, then we also need to understand that (McCartney, 2006, p. 182):

only 5 per cent of examined crime scenes result in a successful DNA sample being loaded onto the NDNAD, and only 17 per cent of crime scenes are examined, meaning that just 0.85 per cent of all recorded crime produces a DNA sample that can be tested (NDNAD, 2003/04: 23)…

Thus it is very rare for the perpetrator of a serious crime to leave body samples behind, unless it is saliva on a cigarette butt or a drink can or, in more violent crimes such as sexual assaults, semen or some other bodily stain. In the case of some violent crimes like sexual assault, most victims do not report to police, and are unlikely to begin doing so. Many of those who do report do so too late for DNA profiling to be an option. For those who do report in time, the occurrence of sexual intercourse is often not in dispute; the existence or non-existence of consent is the critical matter, and DNA profiling can offer nothing to resolve it. However, in the case of serial rapes, or where there is no real doubt about the identity of the assailant, DNA profiling potentially has a great deal to offer (Freckelton, 1989, p. 29).
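The 0.85 per cent figure in the McCartney quotation above is simply the product of the two attrition rates it cites; a minimal check:

```python
# The share of all recorded crime that yields a testable DNA sample is the
# product of two rates quoted in McCartney (2006): the share of crime scenes
# examined, and the share of examined scenes that yield a loadable sample.
scenes_examined = 0.17   # 17% of crime scenes are examined
samples_loaded = 0.05    # 5% of examined scenes yield a usable DNA sample

crimes_yielding_dna = scenes_examined * samples_loaded
print(f"{crimes_yielding_dna:.2%}")  # prints 0.85%
```

This compounding of attrition rates is why, despite millions of stored profiles, only a tiny fraction of recorded crime ever produces DNA evidence that can be tested.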

Of Dragnets and Mass Screenings

In cases where heinous violent crimes have occurred, often of a serial nature, local police have conducted mass DNA screenings of the population in and around the neighborhoods surrounding the scene of the crime (Butler, 2005, p. 449). It becomes apparent to local police that a mass DNA screening is required when the trail of evidence left behind and other intelligence suggest that the crimes were committed by a single person nearby. A DNA mass screening was used in the very first case in which DNA was used to convict an individual. Mass screenings are now termed intelligence-led screens, and this subtle change in nuance, as of 1999, was of great importance to how the UK perceived its use of DNA evidence in criminal cases. In a talk on DNA technology in 1999, Lynn Fereday of the FSS said:

[t]he screens now are a routine method of policing. This is a major way of saving police resources. What happens is that once a crime is being investigated, and DNA evidence has been found, police immediately do a scoping of who or what area they have to screen. They decide on a select area, and they then look for volunteers in that area. One of the first cases involved a murder of the young girl using STRs …The interesting thing about the mass screens is that although there seem to be some unease about continuing with them here, people are volunteering constantly. They volunteer for a reason, because they know they are innocent. They have nothing to fear, and we will end up with crime detection.

Of course, such comments come from an employee of the FSS. Examples of very early mass screenings in the UK can be found in DNA user conference proceedings (Burton, 1999).

There is no denying that mass screenings have led to convictions of perpetrators who would otherwise have gone unnoticed, but the statement that people volunteer because they are “innocent” or “have nothing to fear” is not entirely true.

In her landmark paper in 2006, Carole McCartney described Operation Minstead, in which the police profiled 1,000 black men in South London in the hunt for a serial rapist and then requested each of them to volunteer a DNA sample. McCartney (2006, p. 180) writes:

Of those, 125 initially refused, leading to “intimidatory” letters from the police, urging re-consideration, and five were arrested, their DNA taken post-arrest and added to the NDNAD. Such actions have raised questions of legality, with arrests only lawful with 'reasonable suspicion' of an individual having committed a criminal act. If the police are to arrest on non-compliance with a DNA request, then that casts non-compliance as a crime--a step that worries civil libertarians and may lose the spirit of cooperation essential in these circumstances.

Table 4 shows an example of a prioritisation grid used to manage DNA intelligence-led screen actions. While it is an early example, and today’s practices are far more sophisticated, it does indicate why an individual approached by the police to volunteer a DNA sample might refuse to do so. Being targeted to donate a sample in a mass screen such as Operation Minstead means you are under some suspicion and fall into one of the priority areas of concern. If you are indeed innocent of a crime, you may refuse to donate a DNA sample for any number of reasons, among them the basic right not to be insulted, particularly by the State. An individual who lives in a mass screen prioritization area and meets the criteria of any number of priorities may feel presumed guilty, may not trust the technology to prove them innocent, or may even fear being accidentally matched to a crime they did not commit.

Table 4. A prioritisation grid to deal with DNA intelligence-led screen actions. Source: Burton (1999).

Now while the police can ask any person in the UK to volunteer a DNA sample, there is some controversy about what happens with a sample once it is analyzed and the individual is proven innocent. If an individual has been eliminated from enquiries, the question remains whether their DNA profile should be retained on the NDNAD. According to Genewatch (2009c):

[i]n these cases, people can consent to having their DNA used only for the inquiry, or give an additional signature if they agree to having their DNA profile added to the database. In Scotland volunteers can change their minds and ask to be removed from the Database, but this is not possible in England and Wales. However, the NDNAD Ethics Group recommended in April 2008 that volunteers should not have their DNA added to the Database at all, and their DNA should be destroyed when the case has ended. This recommendation is likely to be implemented because there is no evidence that adding volunteers' DNA to the database is helping to solve crimes.

Still, this practice has yet to be implemented categorically, and the contention that innocent people should be kept off the NDNAD remains.

Statistics presented by the Home Office will always tout suspect-to-scene and scene-to-scene matches, and provide the numbers of murders, rapes and car crimes in which suspects were identified, but it is very important to note that not all successful matches result in a conviction or even an arrest (McCartney, 2006). So while the statistics might seem to indicate that the NDNAD is returning value for money, overall crime rates in the UK have not fallen (Ministry of Justice, 2009), and the number of persons convicted using DNA evidence remains relatively moderate compared with previous years’ reports. The FSS and the Government will always seek to show that the NDNAD has been an excellent evidential tool that has supported many successful prosecutions and provided important leads in unsolved “cold” cases, but no matter how one looks at it, the storage of innocent persons’ DNA profiles should not be permitted.

Where was the NDNAD Headed?

The Possibility of Blanket Coverage DNA Sampling of All Citizens

Putting the brakes on the NDNAD was not going to be easy. Several challenges had been heard in various local courts, but the applicants were unsuccessful in their attempts to have their fingerprints, DNA samples and profiles destroyed. Of course, some scientists working in the area of forensic analysis continued to dream of databases and databanks that would ideally contain the profiles of every person in the country. This was a view maintained by scientists not only within the UK but as far afield as the United States and New Zealand, although the overwhelming feeling among this community of experts was that such a database would “obviously never be compiled” (Michaelis et al., 2008, p. 106). Still, this goodwill does not halt the potential for DNA databases to become commonplace in the future. In 2005, Koblinsky et al. (p. 290) rightly predicted that more people would find themselves on national DNA databases. They believed it was likely:

… that legislation will be passed that will require juveniles who commit serious crimes to be included in the database. It is possible that eventually every citizen will be required to have his or her profile in a national database despite concerns about privacy issues and constitutional protections.

Such attitudes must be understood in context. It makes sense to forensic analysts and scientifically literate commentators that a larger database would help to capture repeat offenders and thus reduce overall crime rates. Many would not debate the importance of DNA profiling for serious crimes, but there are real issues with applying DNA profiling in a mandatory fashion to the whole populace. Even the Nuffield Bioethics Council was allegedly supportive of the benefits of a universal database. According to Lynch et al. (2008, p. 154) the Council:

…[found] that while the balance of argument and evidence presented in the consultation was against the establishment of a population-wide database, it recommend[ed] that the possibility should be subject to review, given its potential contribution to public safety and the detection of crime, and its potential for reducing discriminatory practices.

In 2005, Koblinsky et al. (p. 163) wrote: “[a]s DNA analysis becomes more and more common in criminal investigations, there will come a day when millions upon millions of people will have been profiled.” We no longer have to look to the future for the fulfillment of such prophecies; they are here now. There are millions upon millions of DNA samples and profiles stored in the UK alone, and the US too is now driving new initiatives on the road to mass DNA profiling (Moore, 2009). The FBI’s CODIS database holds 6.7 million profiles and is expected to accelerate from 80,000 new entries a year to 1.2 million by 2012 (Michaelis et al., p. 105). But it may not be criminal legislation that drives such outlandish figures. One day it is indeed possible that the medical research field will have such an impact on society that “… every citizen’s genetic profile may be stored in a national database. There are many who are concerned about the ramifications of a government agency maintaining such records. It is essential that all DNA data can be encrypted and protected from abuse or unauthorized access” (Koblinsky et al., 2005).

Expanding databanks will clearly have an impact on civil liberties and individual privacy. And while there are those who believe such statements do a “disservice to a society suffering from a constant rise in violent crime” (Melson, 1990), the recent ECtHR ruling is proof enough that we need to reconsider the road ahead. But it is not scientists alone who are providing the impetus for ever larger databanks; politicians and political commentators are also entering the debate. Former New York mayor Rudy Giuliani advocated taking DNA samples of all babies born in American hospitals. This idea would not take much to institute in practice, given that cellular samples (blood) are already taken from babies, with the permission of a parent, to test for common disorders. The same practice exists in Australia, where it is known as the Guthrie Test or, more commonly, the Heel Prick Test (Guthrie Test, 2009). Michaelis et al. (2008, pp. 100-101) comment on the potential for mass DNA sampling at birth but are mindful of the implications for civil liberties and privacy:

Having a databank of all American-born persons would obviously be of great benefit, not only in violent crime investigations but also in cases of missing persons, inheritance disputes, immigration cases and mass casualties such as airline crashes and terrorist acts. The obvious concerns over privacy and civil liberties, however, have caused commentators to urge caution when deciding which samples to include in the databanks.

DNA Developments and Innovations Challenging Ethical Practice

The 13-year Human Genome Project (HGP), conducted by the US Department of Energy and the National Institutes of Health, has gone a long way toward identifying all of the approximately 20,000-25,000 genes in human DNA and determining the sequences of the 3 billion chemical base pairs that make up human DNA. The project was, and still is, surrounded by a number of very challenging ethical, legal and social issues (Table 5). Points 3 and 7 in the table are of particular interest when we consider what it means for someone’s DNA sample to be taken, analyzed, and stored indefinitely in a criminal databank. What kind of psychological impact will this have on the individual, and what stigmatization will follow, first by the individual themselves and then by the community around them? This is particularly the case for minority groups. And what of the potential to “read” someone’s DNA and make judgments about their mode of behavior based on their genetic makeup? Are persons, for instance, more prone to violence because they carry particular genes? Or would generalities based on genetics affect someone’s free will and determine their future because of some preconceived statistical result?

Table 5. Societal concerns arising from the new genetics (adapted from the Human Genome Project, 2009)

Already under research are “DNA identikits” which can describe a suspect’s physical appearance from a DNA sample in the absence of an eyewitness account. At present the FSS provides an ethnic inference service (McCartney, 2006, p. 178). The FSS used this technology in 2008 to investigate the 2005 stabbing of Sally Anne Bowman, although it was not this forensic result that ultimately led the police to her perpetrator (FSS, 2009). Ethnic inference is supplemented by the red hair test, which can detect 84 per cent of redheads (McCartney, 2006, p. 181). Continued research arising from the HGP will inevitably allow very detailed information about a person to be determined in the future. A closely related problem is the advance of familial searching techniques. Given that family members share similar DNA profiles, obtaining the DNA of one individual in a family, let us say the son, can help to determine close matches with other persons in the immediate family, such as a sister, mother, father or first cousin. While only identical twins share exactly the same DNA, a sibling or parent shares a very close match. The technique of familial searching was also used in the Sally Anne Bowman case, without success: a suspect’s DNA was taken and matched against the UK NDNAD, but no exact matches were returned, and a subsequent familial search did not aid the investigation either. Familial searching was first used in 2002, in a rape and murder case, when a list of 100 close matches was returned from the NDNAD to identify a perpetrator who had since died. DNA samples were taken first from living relatives and then from the dead body of the offender, Joe Kappen.

The Risks Associated with Familial Searching and Medical Research

Familial searching has very broad ethical implications. It is conducted on the premise that a rotten apple comes from a rotten tree. Put another way, the old adage goes, “tell me who your friends are and I’ll tell you who you are.” Instead, today we may be making the false connection of “tell me who your friends are and I’ll tell what gene you are”! Interestingly, this latter idea formed the title of a biology paper written by P. Morandini (2009). The point is that we return to models of reputation by association, and these cannot be relied upon to make judgments in a court of law. We learnt all too well in Australia, through the Dr Haneef case, that guilt by association, even guilt by blood-line, is dangerous to civil liberties. Considered another way, some have termed this kind of association based on DNA profiles “genetic redlining.” Genetic redlining can be defined as “the differentiated treatment of individuals based upon apparent or perceived human variation” (Melson, 1990, p. 189). David L. Gollaher discusses the risks of what is essentially genetic discrimination in a 1998 paper.

Perhaps the most disturbing practice that may enter this field, and make things impossible to police in both the “criminal law” arena and the “medical research” field, is the deregulation and privatization of the DNA industry internationally. Future technological innovations will surely spawn the growth of this emerging industry. We have already noted the home-based DNA sampling kits available for less than 100 US dollars, which come with free DNA sample databanking. It will not be long before some citizens volunteer somebody else’s DNA instead of their own, forging consent documentation and the like. The bungle with the first ever UK DNA case shows that even the police could not imagine that Pitchfork (the offender) would have conceived of asking a friend to donate a sample on his behalf. Such cases will inevitably occur in volunteer home sampling methods, as fraudsters attempt to access the DNA samples of friends, strangers or even enemies via commonplace saliva-based sampling techniques. All you need is a pre-packed buccal swab from the DNA company providing the kits and away you go. If this seems an extreme possibility to the reader, consider the “spit kits” that have been issued to public transport drivers who have been harassed by passengers (spat at or otherwise), and who can now collect the DNA sample of an alleged offender and turn it in to the appropriate authorities. No consent of the donor is required here (Lynch, 2008, p. 153).

When we consider how we as a society have traversed to this point of “accepting” the construction and development of such unusually large national databanks as the NDNAD in the UK, we can identify a number of driving forces. Some nations have reached this point of almost indiscriminate storage of DNA profiles primarily due to changes in policing practices and the law, government policy, and innovation in forensic science (the idea that because we can, we should), co-existing with venture capitalists who are backing commercial opportunities and parallel developments in the genetic medical research field. In the case of the UK, PACE changed so much, and there was such a redefinition of what constituted a “recordable offence,” that non-intimate samples could be obtained from individuals, without their consent, for investigation into the following offences (Roberts & Taylor, 2005, pp. 389-390):

unlawfully going onto the playing area at a designated football match; failing to leave licensed premises when asked to do so; taking or destroying rabbits by night; riding a pedal cycle without the owner's consent; allowing alcohol to be carried in vehicles on journeys to or from a designated sporting event.

Consider the Home Office’s August 2008 proposal to expand police powers, which included plans to set up new “short term holding facilities” (STHFs) in shopping centers to take people’s DNA and fingerprints; the proposal was later quashed following the S and Marper ECtHR judgment (Genewatch UK, 2009b).

This is little short of farcical. It makes little sense to take such personal data from an individual when the profile itself cannot be used for investigative purposes. There must be some other motivation behind the sampling of persons who on occasion might find themselves charged with a petty crime and punished by fine, penalty, forfeiture or imprisonment other than in a penitentiary. Why store such petty crime offenders’ DNA profiles indefinitely on the NDNAD? Surely the action of someone who might find themselves, for instance, under the influence of alcohol and refusing to leave a licensed premise when asked to do so is not indicative of their capacity to commit a serious felony in the future. There is a grave issue of proportionality here, commensurate with the crime committed by the individual, and, on the side of the crime itself, a major issue with what constitutes a recordable offence. The original PACE wording stated a “serious arrestable offence” (Ireland, 1989, p. 80), not just any offence. As a result, policing powers were increased significantly, and the individual’s right not to incriminate himself or herself was withdrawn, in conflict with the underpinnings of Common Law (Freckelton, 1989, p. 31):

Our legal system has traditionally eschewed forcing people to incriminate themselves by becoming the instruments of their own downfall. That principle has suffered a number of encroachments in recent years.

It is here that we need to take a step back, reassess the balance needed in a robust criminal justice system, and make the necessary changes to legislation, lest we get so far ahead that recourse becomes a near impossibility.


When one analyses the case of Mr S and Mr Marper, one realises how short of the mark the UK Government has fallen. Instead of upholding the rights of innocent people, it retained their fingerprint and DNA data for “safekeeping.” Some have claimed that this initial boost in the number of samples was purposefully engineered to make the NDNAD statistically meaningful, while others believe it was in line with the more sinister overtones of a surveillance state. One thing is certain: where the courts in England did not provide any recourse for either Mr S or Mr Marper, the European Court of Human Rights ruled by a landslide majority that both Mr S and Mr Marper should have their DNA samples destroyed and their profiles permanently deleted. One of the major issues that triggered this change in the collection of such personal and sensitive data has been the alleged 3,000 individual changes to the PACE Act. The watering down of laws that are meant to uphold justice, but are instead being used to abuse citizen rights, is an extremely worrying trend, and adequate solutions, despite the ECtHR ruling, are still lacking.


Action on Rights for Children. (2007). How many innocent children are being added to the national DNA database? Retrieved from

BBC. (1991). Birmingham six freed after 16 years. Retrieved from

Beattie, K. (2009). S and Marper v UK: Privacy, DNA and crime prevention. European Human Rights Law Review, 2, 231.

Burton, C. (1999). The United Kingdom national DNA database. Interpol. Retrieved from

Butler, J. M. (2005). Forensic DNA typing: Biology, technology, and genetics of STR markers. Oxford, UK: Elsevier.

Fereday, L. (1999). Technology development: DNA from fingerprints. Retrieved from

Forensic Science Service. (2009a). Analytical solutions: DNA solutions. Retrieved from

Forensic Science Service. (2009b). Sally Anne Bowman. Retrieved from

Freckelton, I. (1989). DNA profiling: Forensic science under the microscope. In J. Vernon & B. Selinger (Eds.), DNA and criminal justice (Vol. 2). Academic Press.

Genewatch UK. (2009a). A brief legal history of the NDNAD. Retrieved from

Genewatch UK. (2009b). Police and criminal evidence act (PACE) consultations. Retrieved from

Genewatch UK. (2009c). Whose DNA profiles are on the database? Retrieved from

Geoghegan, J. (2009, October 12). Criticism for police over silence on DNA database. Echo. Retrieved from

Gollaher, D. L. (1998). Genetic discrimination: Who is really at risk? Genetic Testing, 2(1), 13. doi:10.1089/gte.1998.2.13

Guthrie Test (Heel Prick Test). (2009). Discovery. Retrieved from

Hansard. (1993). Royal commission on criminal justice. Retrieved from

Hansard. (2009). DNA databases. Retrieved from

Hansard. (2009). Police: Databases. Retrieved from

Home Office. (2004). Cold cases to be cracked in DNA clampdown. Retrieved from

Home Office. (2005). DNA expansion programme 2000–2005: Reporting achievement. Retrieved from

Human Genome Project. (2009). Human genome project information: Ethical, legal and social issues. Retrieved from

Ireland, S. (1989). What authority should police have to detain suspects to take samples? In J. Vernon & B. Selinger (Eds.), DNA and criminal justice. Retrieved from

Jarrett, K. (2006). DNA breakthrough. National Black Police Association. Retrieved from

Jha, A. (2004, September 9). DNA fingerprinting no longer foolproof. The Guardian. Retrieved from

Jobling, M. A., & Gill, P. (2004). Encoded evidence: DNA in forensic analysis. Nature Reviews Genetics, 5(10), 745. doi:10.1038/nrg1455

Koblinsky, L., Liotti, T. F., & Oeser-Sweat, J. (Eds.). (2005). DNA: Forensic and legal applications. Hoboken, NJ: Wiley.

Lawrence, S. (1999, February 24). What is institutional racism? The Guardian. Retrieved from

Lynch, M. (2008). Truth machine: The contentious history of DNA fingerprinting. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226498089.001.0001

McCartney, C. (2006). Forensic identification and criminal justice: Forensic science justice and risk. Cullompton: Willan Publishing.

McCartney, C. (2006). The DNA expansion programme and criminal investigation. The British Journal of Criminology, 46(2), 189. doi:10.1093/bjc/azi094

Melson, K. E. (1990). Legal and ethical considerations. In L. T. Kirby (Ed.), DNA fingerprinting: An introduction. Oxford, UK: Oxford University Press.

Michaelis, R. C., Flanders, R. G., & Wulff, P. H. (2008). A litigator's guide to DNA: From the laboratory to the courtroom. Burlington, UK: Elsevier.

Ministry of Justice. (2009). Population in custody. Retrieved from

Moore, S. (2009). F.B.I. & states vastly expand DNA databases. The New York Times. Retrieved from

Morandini, P. (2009). Tell me who your friends are and I'll tell what gene you are. Retrieved from

Nuffield Council on Bioethics. (2009). Forensic use of bioinformation: Ethical issues. Retrieved from

Office for National Statistics. (2007). Mid-2006 UK, England and Wales, Scotland and Northern Ireland: 22/08/07. Retrieved from

Office for National Statistics. (2009). UK population grows to 61.4 million. Retrieved from

Parliamentary Office of Science and Technology. (2006). Postnote: The national DNA database. Retrieved from

Police Home Office. (2009). Police and criminal evidence act 1984 (PACE) and accompanying codes of practice. Retrieved from

Roberts, A., & Taylor, N. (2005). Privacy and the DNA database. European Human Rights Law Review, 4, 373.

Rodgers, M. C. (2009). Diane Abbott MP and liberty hold DNA clinic in Hackney. Liberty. Retrieved from

Stosny, S. (2008). Guilt vs. responsibility is powerlessness vs. power: Understanding emotional pollution and power. Anger in the Age of Entitlement. Retrieved from

Travis, A. (2009, August 8). Police told to ignore human rights ruling over DNA: Details of innocent people will continue to be held: Senior officers will not get new guidance for a year. The Guardian. Retrieved from

Williams, R., & Johnson, P. (2005). Inclusiveness, effectiveness and intrusiveness: Issues in the developing uses of DNA profiling in support of criminal investigations. Medical Malpractice: U.S., & International Perspectives, 545.

Key Terms and Definitions

BEM: Black Ethnic Minority group; a group with specific national or cultural traditions distinct from those of the majority of the population.

DNA: Deoxyribonucleic acid (DNA) is a molecule that encodes the genetic instructions used in the development and functioning of all known living organisms and many viruses.

DRAGNETS: In policing, a dragnet is any system of coordinated measures for apprehending criminals or suspects, such as widespread DNA testing, which pressures those who have committed a given act to come forward.

ECtHR: European Court of Human Rights is a supra-national or international court established by the European Convention on Human Rights.

Familial Searching: Familial searching is a second phase step conducted by law enforcement after a search on a DNA database has returned no profile matches. Familial searching attempts to find a match of first-order relatives (e.g. sibling, parent/child) based on a partial match, granting some leads to law enforcement, as opposed to no leads.

HGP: The Human Genome Project is an international scientific research project with a primary goal of determining the sequence of chemical base pairs which make up human DNA, and of identifying and mapping the total genes of the human genome from both a physical and functional standpoint.

Mass Screenings: Occur when the police encourage people residing in a given area, or encourage people who are members of a certain group to volunteer their DNA sample. Mass screenings are supposed to save police resources in apprehending the offender(s) of a criminal activity.

NDNAD: The UK National DNA Database, set up in 1995. As of the end of 2005, it carried the profiles of around 3.1 million people; in March 2012 it contained an estimated 5,950,612 individuals. The database, which grows by 30,000 samples each month, is populated by samples recovered from crime scenes and taken from police suspects and, in England and Wales, anyone arrested and detained at a police station.

PACE: The Police and Criminal Evidence Act 1984 (PACE) (1984 c. 60) is an Act of Parliament which instituted a legislative framework for the powers of police officers in England and Wales to combat crime, as well as providing codes of practice for the exercise of those powers.

Profiling: With respect to DNA, the banding patterns of genetic profiles produced by electrophoresis of treated samples of DNA.

Scene of a Crime: The location where a crime took place, or another location where evidence of the crime may be found. This is the area from which most of the physical evidence is retrieved by law enforcement personnel, crime scene investigators (CSIs) or, in some circumstances, forensic scientists.

SLP: The Single Locus Probe (SLP) is a technique that was used in early DNA examinations and has numerous limitations compared with newer, more advanced techniques.

Citation: Michael, K. (2014). Towards the Blanket Coverage DNA Profiling and Sampling of Citizens in England, Wales, and Northern Ireland. In M. Michael, & K. Michael (Eds.), Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies (pp. 187-207). Hershey, PA: IGI Global. doi:10.4018/978-1-4666-4582-0.ch008

Towards a Conceptual Model of User Acceptance of Location-Based Emergency Services


This paper investigates the introduction of location-based services by government as part of an all-hazards approach to modern emergency management solutions. Its main contribution is in exploring the determinants of an individual’s acceptance or rejection of location services. The authors put forward a conceptual model to better predict why an individual would accept or reject such services, especially with respect to emergencies. While it may be posited by government agencies that individuals would unanimously wish to accept life-saving and life-sustaining location services for their well-being, this view remains untested. The theorised determinants include: visibility of the service solution, perceived service quality features, risks as perceived by using the service, trust in the service and service provider, and perceived privacy concerns. The main concern here is to predict human behaviour, i.e. acceptance or rejection. Given that location-based services are fundamentally a set of electronic services, this paper employs the Technology Acceptance Model (TAM), a special adaptation of the Theory of Reasoned Action (TRA), as the theoretical foundation of its conceptualisation. A series of propositions is drawn from the mutual relationships between the determinants, and a conceptual model is constructed from the determinants, guided by the propositions. It is argued that the conceptual model presented offers the field of location-based services research a justifiable theoretical approach suitable for further empirical research in a variety of contexts (e.g. national security).

1. Introduction

Emergency management (EM) activities have long been practiced in civil society. Such activities evolved from simple precautions and scattered procedures into more sophisticated management processes that include preparedness, protection, response, mitigation and recovery strategies (Canton, 2007). In the twentieth century, governments utilised technologies such as sirens, speakers, radio, television and the internet to communicate and disseminate time-critical information to citizens about impending dangers, during and after hazards. Over the past decade, location-based services (LBS) have been implemented, or considered for implementation, by several countries to geographically deliver warnings, notifications and possibly life-saving information to people (Krishnamurthy, 2002; Weiss et al., 2006; Aloudat et al., 2007; Jagtman, 2010).

LBS take into account the pinpoint geographic position of a given device (handheld, wearable, implantable), and provide the user of the device with value-added information based on the derived locational information (Küpper, 2005; Perusco & Michael, 2007). The location information can be obtained by using various indoor and/or outdoor positioning technologies that differ in their range, coverage, precision, target market, purpose and functionality. Radio frequencies, cellular telecommunications networks and global navigation satellite systems are amongst the main access media used to determine the geographic location of a device (Michael, 2004; Perusco & Michael, 2007). The collected location information can be stored for the purpose of further processing (e.g. analysing the whereabouts of a fleet of emergency service vehicles over a period of time) or combined with other relevant information and sent back to the user in a value-added form (e.g. traffic accidents and alternative routes). The user can either initiate a request for the service, or it is triggered automatically when the device enters, leaves, or comes into the vicinity of a defined geographic area.
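The automatic trigger described above can be sketched as a simple geofence test: the device’s reported position is compared against a defined zone, and the service fires on entry. This is a minimal illustration under stated assumptions; the coordinates, zone radius and warning decision are hypothetical, not any carrier’s actual implementation.

```python
# Minimal geofence sketch: decide whether a device falls inside a defined
# circular warning zone. Zone location and radius below are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in decimal degrees, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_vicinity(device, zone_centre, radius_m):
    """True when the device position falls inside the circular warning zone."""
    return haversine_m(device[0], device[1],
                       zone_centre[0], zone_centre[1]) <= radius_m

# Hypothetical severe-weather warning zone around Wollongong, ~5 km radius.
zone = (-34.4278, 150.8931)
device_inside = (-34.43, 150.90)    # a handset a few hundred metres away
device_outside = (-34.00, 151.20)   # a handset tens of kilometres away

print(in_vicinity(device_inside, zone, 5000))   # True -> push a warning
print(in_vicinity(device_outside, zone, 5000))  # False -> no message sent
```

In a real deployment the same test would run network-side against every active handset’s estimated position, which is why the positioning accuracy discussed next matters: a 150-metre error is negligible against a multi-kilometre warning zone.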

The conventional use of LBS in emergency management is to find the almost exact location of a mobile handset after an emergency call or a distress short message service (SMS) message. Although the accuracy of the positioning results ranges from a few metres up to several kilometres, the current objective of several governments is to regulate the telecommunications carriers to provide the location information within accuracies of between 50 and 150 metres. This type of service is generally known as wireless E911 in North America (i.e. Canada and the United States), E112 in the European Union, and similarly, but not officially, E000 in Australia.

But even with approximate levels of accuracy, LBS applications have the ability to create much more value when they are utilised under an all-hazards approach by government. For example, with LBS in use, government agencies pertinent to the emergency management portfolio can collaborate with the telecommunications carriers in the country to disseminate rapid warnings and relevant safety messages to all active mobile handsets regarding severe weather conditions, an act of terrorism, an impending natural disaster or any other extreme event, if it happened or was about to happen in the vicinity of those handsets. For that reason, LBS solutions are viewed by different governments around the world as an extremely valuable addition to their arrangements for emergency notification purposes (Aloudat et al., 2007; Jagtman, 2010).

However, in relation to LBS and EM, almost no study has explored an individual’s acceptance of utilising the services for emergency management. One might rightly ponder whether any individual would ever forego LBS in a time of emergency. Nonetheless, despite the apparent benefits of this type of electronic service, their commercial utilisation has long raised numerous technical, social, ethical and legal issues amongst users: for example, the quality of the service information being provided, issues related to the right of citizen privacy, and issues concerning the legal liability of service failure or information disclosure (O’Connor & Godar, 2003; Tilson et al., 2004; Perusco et al., 2006; Perusco & Michael, 2007; Aloudat & Michael, 2011). Accordingly, the contribution of this paper is to discuss the potential determinants or drivers of a person’s acceptance or rejection of utilising location-based services for emergency management, and to propose a conceptual research model that comprises the drivers and serves as the theoretical basis needed for further empirical research.

The rest of this paper is organised as follows: Section 2 discusses the factors expected to impact on a person’s perceptions towards the services, and presents the theoretical propositions of the expected relationships between the factors. Section 3 introduces the conceptual model and its theoretical foundation. Section 4 outlines the steps taken to pretest the model via a pilot survey and provides the analysis results of the data collected. Section 5 concludes the paper and discusses the implications of this research work, including the theoretical contributions to the scholarly literature.

2. Determinants of acceptance or rejection

A review of the acceptance and adoption literature was conducted to identify, critically assess and then select the factors that would most likely influence individuals’ beliefs regarding the use of LBS for emergencies. This approach is justified by Taylor and Todd (1995), and Venkatesh and Brown (2001), on the basis that there is a wealth of information systems (IS) acceptance research, which minimises the need to extract beliefs anew for each new acceptance setting. The adopted working definitions for the selected factors are summarised in Table 1.

Table 1. Summary of the constructs and their definitions

Factor | Description of the Adopted Working Definition | Based Upon

  • Individual’s attitude towards the use of LBS
    • Individual’s positive or negative feelings towards using LBS in emergencies. Fishbein and Ajzen (1975)
  • Individual’s intention to use LBS
    • Individual’s decision to engage or not to engage in using LBS in emergencies. Fishbein and Ajzen (1975)
  • Trust
    • The belief that allows a potential LBS user to willingly become vulnerable to the use-case outcome of LBS, having taken the characteristics of LBS into consideration, irrespective of the ability to monitor or control the services or the service provider. Mayer et al., (1995), McKnight and Chervany (2001)
  • Risk as perceived by the potential user
    • Individual’s belief of the potential loss and the adverse consequences of using LBS in emergencies and the probability that these consequences may occur if the services are used. Pavlou and Gefen (2004), Heijden et al., (2005)
  • Perceived usefulness
    • Individual’s perception that using LBS for managing emergencies is useful. Davis et al., (1989)
  • Perceived ease of use
    • The degree to which the prospective user expects LBS to be free of effort. Davis et al., (1989)
  • Visibility
    • The extent to which the actual use of LBS is observable as a solution to its potential users. Agarwal and Prasad (1997)
  • Perceived service qualities
    • Individual’s global judgment relating to the superiority of the service. Parasuraman et al., (1988)
  • Perceived currency
    • Prospective user’s perception of receiving up-to-the-minute service information during emergencies. Zeithaml et al., (2000), Yang et al., (2003)
  • Perceived accuracy
    • Prospective user’s perception about the conformity of LBS with its actual attributes of content, location, and timing. Zeithaml et al., (2000), Yang et al., (2003)
  • Perceived responsiveness
    • Prospective user’s perception of receiving a prompt LBS service during emergencies. Parasuraman et al., (1988), Liljander et al., (2002), Yang et al., (2003)
  • Privacy concerns as perceived by the prospective user
    • Individual’s concerns regarding the level of control by others over personal identifiable information. Stone et al., (1983)
  • Collection
    • The concern that extensive amounts of location information or other personal identifiable data will be collected when using LBS during emergencies. Smith et al., (1996), Junglas and Spitzmuller (2005)
  • Unauthorised secondary use
    • The concern that LBS information is collected for emergency purposes but will be used for other purposes without explicit consent from the individual. Smith et al., (1996), Junglas and Spitzmuller (2005)

A further discussion of each proposed factor and the criteria behind its selection are presented in the following sections.

2.1. The Visibility of Location-Based Emergency Services

Many individuals may not be aware of the possible utilisation of location-based services in emergency management and, therefore, it could be argued that the direct advantages and disadvantages of such utilisation are not visible to them (Pura, 2005; Chang et al., 2007). Individuals who are not aware of the existence of LBS or, basically, do not know anything about the capabilities of this type of electronic service in the domain of emergency management, may not develop an appreciation, or even a depreciation, of the services unless they are properly and repeatedly introduced (exposed) to LBS emergency management solutions. In other words, people may not be able to accurately judge the advantages or disadvantages of LBS unless the application of LBS is visible to them. It should be noted, however, that the exposure effect does not necessarily increase the perceived functionality of the services, but it can greatly enhance or degrade the perceptions of an individual about the usefulness of the services, thus influencing their acceptance or rejection of the services (Thong et al., 2004).

One of the key attributes of the Diffusion of Innovation (DOI) Theory by Rogers (1995) is the construct of observability, which is “the degree to which the results of an innovation are observable to others” (p. 16). Innovation is “an idea, practice, [technology, solution, service] or object that is perceived as new by an individual” (Rogers, 1995, p. 135). Later, observability was recast by Moore and Benbasat (1991) as two distinct constructs: demonstrability and visibility. Demonstrability is “the tangibility of the results of using an innovation,” and visibility is “the extent to which potential adopters see the innovation as being visible in the adoption context” (Agarwal & Prasad, 1997, p. 562). Further interpretation of visibility surmises that an innovation, application, solution, technology or service may not be new, but may be unknown to its prospective users. This is probably the case with LBS and their application, where the services have been around for several years now, yet their general usage rates, especially in the context of emergency management, are still extremely limited worldwide (Frost & Sullivan, 2007; O’Doherty et al., 2007; Aloudat & Michael, 2010).

The main contribution of the DOI theory to this paper is the integration of its visibility construct in the proposed conceptual model. Visibility is defined as the extent to which the actual utilisation of LBS in EM is observed as a solution to its potential users. Considering the arguments above and following a line of reasoning in former studies, such as Karahanna et al., (1999) and Kurnia and Chien (2003), the following proposition is given:

Proposition P1: The perception of an individual of the usefulness of location-based services for emergency management is positively related to the degree to which the services as a solution are visible to him or her.

2.2. The Quality Features of Location-Based Emergency Services

A classic definition of service quality is that it is “a judgment, or attitude, relating to the superiority of the service” (Parasuraman et al., 1988, p. 16). Quality is, therefore, a result of personal subjective understanding and evaluation of the merits of the service. In the context of emergency management, individuals may not always have comprehensive knowledge about the attributes of LBS or the capabilities of the services for emergencies. Consequently, individuals may rely on indirect or inaccurate measures to judge such attributes. There is, therefore, a need to create verifiable, direct measurements that present subjective (perceived) quality in an objective way (determinable dimensions), in order to examine the impact of the quality features of LBS on people’s opinions towards utilising the services for EM.

The quality of electronic services (e-services) has been discerned by several researchers as a multifaceted concept, with different dimensions proposed for different service types (Zeithaml et al., 2002; Zhang & Prybutok, 2005). Unfortunately, in the context of LBS there is no existing consummate set of dimensions that can be employed to measure the impact of LBS quality features on people’s acceptance of the services. Nonetheless, a set by Liljander et al., (2002) can serve as a good candidate for this purpose. The set of Liljander et al. was adapted from the well-known work of Parasuraman et al., (1988), the SERVQUAL model, redesigned to accurately reflect the quality measurements of e-services. The dimensions of Liljander et al., (2002) include reliability, responsiveness, customisation, assurance/trust, and user interface.

Since LBS belong to the family of e-services, most of the aforementioned dimensions in the Liljander et al. model are highly pertinent and can be utilised to the benefit of this research. In addition, as the dimensions are highly adaptable to capture new media (Liljander et al., 2002), it is expected that they would be capable of explaining people’s evaluation of the introduction of LBS into modern emergency management solutions. Moreover, the small number of these dimensions is expected to provide a parsimonious yet reliable approach to studying the impact of LBS quality features on people’s opinions, without the need to employ larger scales such as that of Zeithaml et al., (2000), which comprises eleven dimensions, making it almost impractical to employ along with other theorised constructs in any proposed conceptual model.

The interpretation of the reliability concept follows the definitions of Kaynama and Black (2000), Zeithaml et al., (2000) and Yang et al., (2003) as the accuracy and currency of the product information. For LBS to be considered reliable, the services need to be delivered in the most accurate state possible and within the promised time frame (Liljander et al., 2002). This is highly relevant to emergency situations, taking into account that individuals are most likely on the move and often in time-critical circumstances that demand accurate and current services.

It is reasonable to postulate that the success of LBS utilisation in emergency situations depends on the ability of the government, as the provider of the service, to disseminate the service information to a large number of people in a timely fashion. Since fast response to changing situations, or to people's emergent requests, amounts to providing timely information, timeliness is closely related to responsiveness (Lee, 2005). Therefore, investigating the responsiveness of LBS is also relevant in this context.

Liljander et al.'s (2002) user interface and customisation dimensions are not explicitly pertinent to EM. The user interface dimension comprises factors such as aesthetics, which are hardly relevant to an emergency situation. Customisation refers to the state where information is presented in a format tailored to the user. This can be done for and by the user. As LBS are already customised based on the location of the recipient and the type of information being sent, customisation is an intrinsic quality of the core features of these services.

Therefore, the service quality dimensions that are expected to impact on people's acceptance of LBS for EM include:

1. Perceived currency: the perceived quality of presenting up-to-the-minute service information during emergency situations;

2. Perceived accuracy: the individual's perception of the conformity of LBS with its actual attributes of content, location, and timing;

3. Perceived responsiveness: the individual's perception of receiving a prompt service (Parasuraman et al., 1988; Liljander et al., 2002; Yang et al., 2003).

Although perceived service quality is a representation of a person's subjective expectations of LBS, and not necessarily a true interpretation of the actual attributes of the service, it is expected nonetheless that these perceptions convey the degree of quality the prospective user anticipates in LBS, given that limited knowledge about the actual quality dimensions is available to them in the real world.

It could be posited that an individual's perception of how useful LBS are in emergencies can be highly influenced by the degree to which he or she perceives the services to be accurate, current and responsive. Here, the conceptual model follows the same rationale as TAM, which postulates perceived ease of use as a direct determinant of perceived usefulness. Perceived ease of use is defined as “the degree to which an individual believes that using a particular system would be free of physical and mental effort” (Davis, 1989, p. 320). It is therefore justifiable to postulate that ease of use is related to the technical quality features of LBS, since an individual's evaluation of the ease of the service is highly associated with the convenient design of the service itself. This explains why ease of use has been conceived by several researchers as one of the dimensions of service quality (Zeithaml et al., 2002; Yang et al., 2003; Zhang & Prybutok, 2005).

Building upon these arguments and following the trail of TAM, the LBS quality features of currency, accuracy and responsiveness are theorised in the conceptual model as direct determinants of perceived usefulness and, accordingly, the following propositions are defined:

Proposition P2a: There is a positive relationship between the perceived currency of location-based services and the perceived usefulness of the services for emergency management;

Proposition P2b: There is a positive relationship between the perceived accuracy of location-based services and the perceived usefulness of the services for emergency management;

Proposition P2c: There is a positive relationship between the perceived responsiveness of location-based services and the perceived usefulness of the services for emergency management.

2.3. Risks Associated with Using Location-Based Emergency Services

Risk of varying types exists on a daily basis in a human's life. In extreme situations, such as emergencies and disasters, perceptions of risk stem from the fact that the sequence of events and the magnitude of the outcome are largely unknown or cannot be totally controlled. If one takes into account that risky situations generally affect the confidence of people in technology (Im et al., 2008), then the decision of an individual to accept LBS for EM might be influenced by his or her intuition that these electronic services could easily be disrupted, since the underlying infrastructure may suffer heavily in the severe conditions usually associated with such situations, especially in large-scale disasters. A telling example is Hurricane Katrina, in 2005, which caused serious disruptions throughout New Orleans, Louisiana, and rendered inoperable almost every piece of public and private infrastructure in the city. As a result, uncertainty about the intensity of extreme situations, coupled with their unforeseeable contingencies, may have long-term implications for one's perceptions of the use of all technologies, including LBS, in life-threatening situations such as emergencies.

Since it is rational to believe that individuals would perceive different types of risk in emergencies, it might be highly difficult to examine particular facets of risk separately from one another, since they can all be inextricably intertwined. Therefore, following the theoretical justification of Pavlou (2003), perceived risk is theorised in the conceptual model as a higher-order unidimensional concept.

Perceived risk is defined as the individual's belief about the potential loss and adverse consequences of using LBS in emergencies, and the probability that these consequences may occur if the services are used. Bearing in mind the high uncertainty usually associated with such events, this paper puts forward the following proposition:

Proposition P3: The risks perceived in using location-based services for emergency management have a negative influence on the perceived usefulness of the services.

2.4. People's Trust in Location-Based Emergency Services

Trust has long been regarded as an important aspect of human interactions and mutual relationships. Basically, any intended interaction between two parties proactively requires an element of trust predicated on the degree of certainty in one's expectations or beliefs of the other's trustworthiness (Mayer et al., 1995; Li, 2008). Uncertainty in e-services, including LBS, leads individuals to reason about the capabilities of the services and their expected performance, which eventually brings them either to trust the services and willingly accept to use them, or to distrust the services and simply reject them. In emergencies, individuals may consider the possible risks associated with LBS before using this kind of service. Therefore, individuals are likely to trust the services and engage in a risk-taking relationship if they perceive that the benefits of LBS outweigh the risks. However, if high levels of risk are perceived, then it is most likely that individuals do not have enough trust in the services and, therefore, will not engage in risk-taking behaviour by using them (Mayer et al., 1995). Consequently, it could be posited here that trust in LBS is a pivotal determinant of accepting the services, especially in emergency situations where great uncertainty is always present.

Trust has generally been defined as the belief that allows a person to willingly become vulnerable to the trustee after having taken the characteristics of the trustee into consideration, whether the trustee is another person, a product, a service, an institution or a group of people (McKnight & Chervany, 2001). In the context of LBS, the definition encompasses trust in the service provider (i.e. the government in collaboration with telecommunications carriers) and trust in the services and their underlying infrastructure. This definition is in agreement with the generic model of trust in e-services, which encompasses two types of trust: trust in the government agency controlling and providing the service, and trust in the technology and underlying infrastructure through which the service is provided (Tan & Thoen, 2001; Carter & Bélanger, 2005; Horkoff et al., 2006).

Since the willingness to use the services can be regarded as an indication that the person has taken into account the characteristics of both the services and their provider, including any third parties in between, it is highly plausible that investigating trust propensity in the services would provide a prediction of trust in both LBS and their provider. Some could reasonably argue that trust should be examined with the proposition that the person knows, or at least has a presumption of knowledge about, the services, their usefulness and the potential risks associated with them. Nonetheless, it should be noted here that trust is, ipso facto, a subjective interpretation of the trustworthiness of the services, given the limited knowledge of the actual usage of LBS in the domain of emergency management in the real world.

Despite the general consensus on the existence of a mutual relationship between trust and risk, the two concepts should be investigated separately when examining their impact on the acceptance of LBS, since the two usually show different sets of antecedents (Junglas & Spitzmuller, 2006). Trust and perceived risk are essential constructs when uncertainty is present (Mayer et al., 1995). However, each of the two has a different type of relationship with uncertainty. While uncertainty augments the risk perceptions of LBS, trust reduces the individual's concerns regarding the possible negative consequences of using the services, thus alleviating uncertainty around their performance (Morgan & Hunt, 1994; Nicolaou & McKnight, 2006).

Therefore, as trust in LBS lessens the uncertainty associated with the services, thus reducing the perceptions of risk, this paper theorises that perceived risk is negatively related to an individual's trust in LBS. This is in line with a large body of previous empirical research, which supports the influence of trust on the perceptions of risk (Gefen et al., 2003). Furthermore, by reducing uncertainty, trust is assumed to create a positive perspective regarding the usefulness of the services and to provide expectations of an acceptable level of performance. Accordingly, the following propositions are defined:

Proposition P4: Trust in location-based services positively influences the perceived usefulness of the services for emergency management;

Proposition P5: Trust in location-based services negatively impacts the risks perceived from using the services for emergency management.

2.5. Privacy Concerns Pertaining to Location-Based Emergency Services

In the context of LBS, privacy pertains mainly to the locational information of the person and the degree of control he or she exercises over this type of information. Location information is regarded as highly sensitive data that, when collected over a period of time or combined with other personal information, can reveal a great deal about a person's movements and, in turn, much more than just his or her location. Indeed, Clarke and Wigan (2008) noted that knowing the past and present locations of a person could, amongst other things, enable the discovery of the person's behavioural patterns in a way that could be used, for example, by governments to create a suspicion, or by the private sector to conduct target marketing.

Privacy concerns could originate when individuals become uncomfortable with the perception that there is a constant collection of their personal location information, the idea of its perennial availability to other parties, and the belief that they have incomplete control over the collection, extent, duration, timing or amount of data being collected about them.

The traditional commercial use of LBS, where a great level of detail about the service application is readily available to the end user, may not create much sensitivity towards privacy, since in most cases the explicit consent of the user is a prerequisite for initiating these services. This is the case in the markets of the United States, Europe and Australia (Gow, 2005; Code of Practice of Passive Location Services in the UK, 2006; The Australian Government: Attorney General's Department, 2008). However, in emergencies, pertinent government departments and law enforcement agencies have the power to temporarily waive the person's right to privacy, based on the assumption that consent is already implied when collecting location information in such situations (Gow, 2005; Pura, 2005).

The implications of waiving consent, even temporarily, may have long-term adverse effects on people's perspectives on the services in general. It also has the potential to raise a debate on the extent to which individuals are truly willing to relinquish their privacy in exchange for a sense of continuous security (Perusco et al., 2006). The debate could easily be augmented in the current political climate of the so-called “war on terror”, where governments have started to bestow additional powers on themselves to monitor, track, and gather personal location information in a way that could never have been justified before (Perusco & Michael, 2007). As a result, privacy concerns are no exception in emergency management.

Four privacy concerns were identified by Smith et al. (1996): collection, unauthorised secondary use, errors in storage, and improper access of the collected data. These concerns are also pertinent to LBS (Junglas & Spitzmuller, 2006). Collection is defined as the concern that extensive amounts of location information, or other personally identifiable information, would be collected when using LBS for emergency management. Unauthorised secondary use is the concern that LBS information collected for the purposes of emergency management is ultimately used for other purposes, without explicit consent from the individual. Errors in storage describes the concern that the procedures taken against accidental or deliberate errors in storing location information are inadequate. Improper access is the concern that the stored location information is accessed by parties who do not have the authority to do so.

Two particular privacy concerns, collection and unauthorised secondary use, are integrated into the conceptual model. These concerns are expected to have a direct negative impact on the perceived usefulness of LBS. The constructs of trust and perceived risk are assumed to have mediating effects on the relationship between privacy concerns and perceived usefulness, since both could reasonably be regarded as outcomes of the individual's assessment of those privacy concerns. For instance, if a person does not have much concern about the privacy of his or her location information, then it is most likely that he or she trusts the services, thus perceiving LBS to be beneficial and useful. On the other hand, if perceptions of privacy concerns are high, then the individual would probably not engage in risk-taking behaviour, resulting in lower perceptions of the usefulness of the services.

Accordingly, perceived privacy concerns are theorised in the conceptual model as direct determinants of both trust and perceived risk. While perceived privacy concerns are postulated to have a negative impact on trust in the services, they are theorised to have a positive influence on the risks perceived from using location-based services for emergency management.

Considering the above-mentioned arguments, the following propositions are made:

Proposition P6a: Collection, as a perceived privacy concern, negatively impacts the perceived usefulness of location-based services for emergency management;

Proposition P6b: Unauthorised secondary use, as a perceived privacy concern, negatively impacts the perceived usefulness of location-based services for emergencies;

Proposition P7a: Collection, as a perceived privacy concern, has a negative impact on trust in location-based services;

Proposition P7b: Unauthorised secondary use, as a perceived privacy concern, has a negative impact on trust in location-based services;

Proposition P8a: The risks perceived from using location-based services for emergency management are positively associated with the perceived privacy concern of collection;

Proposition P8b: The risks perceived from using location-based services for emergency management are positively associated with the perceived privacy concern of unauthorised secondary use.


3. The Conceptual Model

The determinants of LBS acceptance are integrated into a conceptual model that extends and builds upon the established theory of reasoned action (TRA), applied in a technology-specific adaptation as a technology acceptance model (TAM). See Figure 1.

Figure 1. The conceptual model of location-based emergency services acceptance

TAM postulates that usage or adoption behaviour is predicted by the individual's intention to use location-based emergency services. The behavioural intention is determined by the individual's attitude towards using the services. Both attitude and intention are postulated as the main predictors of acceptance. The attitude, in turn, is influenced by two key beliefs: the perceived ease of use and the perceived usefulness of LBS. TAM also grants a basis for investigating the influence of external factors on its internal beliefs, attitude, and intention (Davis et al., 1989).

As illustrated in the model in Figure 1, a set of propositions reflecting the theoretical relationships between the determinants of acceptance is presented as arrowed lines that start from the influential factor and end in the dependent construct. The theorised factors supplement TAM's original set and are fully in agreement with its theoretical structural formulation. That is, all of the hypothesised effects of the factors are exhibited on the internal constructs (i.e. attitude and intention) only through the full mediation of the internal beliefs (i.e. perceived usefulness or perceived ease of use).
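The propositions above, together with TAM's core paths, can be summarised as a directed graph. The sketch below uses illustrative construct labels (not the paper's exact variable names) and omits proposition P1, which is defined in an earlier section of the paper; the final assertion checks the full-mediation property just described:

```python
# Directed effects of the conceptual model: (source, target) -> theorised sign
effects = {
    ("perceived_currency", "perceived_usefulness"): "+",        # P2a
    ("perceived_accuracy", "perceived_usefulness"): "+",        # P2b
    ("perceived_responsiveness", "perceived_usefulness"): "+",  # P2c
    ("perceived_risk", "perceived_usefulness"): "-",            # P3
    ("trust", "perceived_usefulness"): "+",                     # P4
    ("trust", "perceived_risk"): "-",                           # P5
    ("collection", "perceived_usefulness"): "-",                # P6a
    ("secondary_use", "perceived_usefulness"): "-",             # P6b
    ("collection", "trust"): "-",                               # P7a
    ("secondary_use", "trust"): "-",                            # P7b
    ("collection", "perceived_risk"): "+",                      # P8a
    ("secondary_use", "perceived_risk"): "+",                   # P8b
    # TAM's internal structure
    ("perceived_ease_of_use", "perceived_usefulness"): "+",
    ("perceived_usefulness", "attitude"): "+",
    ("perceived_ease_of_use", "attitude"): "+",
    ("attitude", "intention"): "+",
}

# Full mediation: no external factor points directly at attitude or intention;
# only the internal beliefs (and attitude itself) may do so.
internal = {"attitude", "intention"}
beliefs = {"perceived_usefulness", "perceived_ease_of_use"}
assert all(src in beliefs | internal for (src, tgt) in effects if tgt in internal)
```

Encoding the paths this way makes the structural claim testable: any added direct arrow from an external factor into attitude or intention would violate the assertion.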


4. The Pilot Study

A pilot survey was conducted in order to test the reliability of the model's constructs. The IS literature places great emphasis on the importance of the piloting stage as part of the model's development (Baker, 1999; Teijlingen & Hundley, 2001). In essence, the pilot survey is an experimental study that aims to collect data from a small set of subjects in order to discover any defects or flaws that can be corrected before the conceptual model is tested in a large-scale survey (Baker, 1999; Zikmund, 2003).

4.1. Measurement of Constructs

To increase construct measurement reliability, most of the items in the survey were adapted from items that have been tested and validated in former studies, to reflect the specific context of this research, i.e. location-based services. It should be emphasised here that the use of existing items from the literature is a completely valid approach (Churchill, 1979).

The scales of TAM's perceived usefulness and perceived ease of use were measured based on the original scales of Davis (1989). Attitude measurement items were adopted from two studies, by Agarwal and Prasad (1999) and Van der Heijden et al. (2001). Intention to use was measured using scales adopted from Junglas and Spitzmuller (2005). Trust measurements were adopted from Mayer et al. (1995) and Junglas and Spitzmuller (2005). Perceived risk items were adopted from Pavlou and Gefen (2004), given their emphasis on emergency management. The items of the visibility construct were adopted from a study by Karahanna et al. (1999). The items of perceived privacy concerns were adopted from Smith et al. (1996) and Junglas and Spitzmuller (2005). The statements of perceived service qualities were not directly available and so were operationalised based on the recommendations of Churchill (1979), who suggested that each statement should express a limited meaning, that its dimensions should be kept simple, and that the wording should be straightforward.

4.2. Survey Design

The survey included an overview and introduction of the application of location-based services in emergency management. In addition, the survey provided the participants with four vignettes. Each vignette depicted a hypothetical scenario about the possible uses of LBS applications for managing potentially hazardous situations. The scenarios covered topics specific to emergencies, such as an impending natural disaster, a situation where a person was particularly in need of medical assistance, severe weather conditions, and a national security issue. Two of the vignettes were designed to present location-based services in a favourable light, and the other two were designed to draw out the potential pitfalls and limitations of LBS in EM. Through the use of vignettes, participants were encouraged to project their true perceptions of the services while, at the same time, creating a meaning related to their potential use in these events. Creating this meaningful attachment in context was very important, as it acted to inform participant responses, given that the utilisation of LBS in EM is still in its nascent stages worldwide.

A self-administered questionnaire was used to collect data from participants. A five-point Likert rating scale was used throughout the questionnaire. The survey, which predominantly yielded quantitative results, also included one open-ended question in order to solicit more detailed responses from the participants.

4.3. The Sample of the Pilot Survey

Six hundred pilot surveys were randomly distributed by hand, in November 2008, to household mailboxes in the Illawarra region and the City of Wollongong, New South Wales, Australia. Participants were asked to return their copies to the researcher within three weeks in the reply-paid envelope enclosed with the survey.

Although this traditional approach is time-consuming and demands a lot of physical effort, it was favoured as it is more resilient to social desirability effects (Zikmund, 2003), where respondents reply in a way they think is more socially appropriate (Cook & Campbell, 1979). In addition, it is generally associated with high perceptions of anonymity, something that cannot be completely assured or guaranteed by other methods of data collection, since they tend to disclose some personal information, such as name, telephone number, email address or IP address, which may cause privacy infringements (Zikmund, 2003; Michaelidou & Dibb, 2006).

The main concern was ending up with a low response rate, an issue several researchers have noted before (Yu & Cooper, 1983; Galpin, 1987; Zikmund, 2003). Indeed, a total of 35 replies were returned, yielding an extremely low response rate of 5.8%. Two incomplete replies were excluded, leaving only 33 usable surveys for the final analysis.

Although it is a desirable goal to end up with a high response rate, in order to have more confidence in the results and to be able to comment on the significance of the findings (Emory & Cooper, 1991; Saunders et al., 2007), it should be noted that the pilot study's main objective is to serve as an initial test (pretest) of the conceptual model; it does not, in any way, attempt to generalise its results to a new population. Therefore, the generalisability of the findings is not an issue of contention here (Morgan & Hunt, 1994).

Nonetheless, there is much discussion in the literature of what constitutes a “good” response rate for a pilot survey and, hence, an acceptable sample size. Hunt et al. (1982), for example, stated that several researchers simply recommended a “small” sample size, while others indicated a sample size between 12 and 30 as sufficient to fulfil the requirements of the analysis. Anderson and Gerbing (1991) pretested a methodology for predicting the performance of measures in a confirmatory factor analysis with a sample size of 20, noting the consistency of this small sample size with the general agreement between researchers that the number should be relatively small. Reynolds et al. (1993) noted that the sample size of pilot surveys is generally small when discussed in the literature, ranging from 5 to 100, depending on the goal of the study.

The main concern, however, when assessing the effect of a low response rate on the validity of the survey is non-response bias (Cummings et al., 2001; Fowler, 2001). The bias stems from the possibility that only those members of the sample population who are interested in the topic of the pilot survey would provide their responses (Fowler, 2001). Nonetheless, if non-respondents' characteristics are systematically similar to those of the respondents, then the non-response bias is not necessarily reduced by an increased response rate (Cummings et al., 2001).

Kanuk and Berenson (1975), in a comprehensive literature review of the factors influencing response rates to mail surveys, examined the significant differences between respondents and non-respondents, taking into account a broad range of personality traits and socio-economic and demographic characteristics. The researchers concluded that the only consistent difference was that respondents tend to be better educated.

Since respondents of this pilot survey were of all levels of education, as illustrated in Table 2 (for example, 7 respondents had a secondary education while 7 had postgraduate degrees, representing both the less educated and the well-educated population), it is argued that non-respondents did not differ significantly from the survey's respondents. This suggests that non-response bias was not present and, therefore, that the low response rate is not an issue here. Thus, the pilot survey, with its low response rate and no systematic differences between respondents and non-respondents, is considered valid for the analysis.

Table 2. Respondents education

The traditional benchmarks in mail survey studies that positioned a 50 percent response rate as adequate and 70 percent as very good (Babbie, 1998) should be reappraised. Current trends of thinking reject these unconditional criterion levels and assertively demand a contextual approach, where the response rate is considered in conjunction with the goal of the study, its design and the nature of its sample (Fife-Schaw, 2000; Fowler, 2001).

4.4. Reliability of the Measurements

Reliability expresses the extent to which the measures in the instrument are free of random errors, thus yielding similar, consistent results if repeated (Yin, 2003; Zikmund, 2003). Reliability reflects the internal consistency of the scale items measuring the same construct for the selected data. Hence, it is basically an evaluation of measurement accuracy (Straub, 1989). Nunnally and Bernstein (1994) recommended the calculation of Cronbach's alpha coefficients to assess reliability. Straub (1989) suggested an alpha value of 0.80 as the lowest accepted threshold. However, Nunnally and Bernstein (1994) stated that 0.60 is acceptable for newly developed measures; otherwise, 0.70 should serve as the lowest cut-off value.

The common threshold value of 0.7 was selected as the minimum acceptable level based on the recommendations of Nunnally and Bernstein (1994) and Agarwal and Karahanna (2000). The results of the analysis are presented in Table 3, revealing acceptable values for nearly all measurements except perceived accuracy, which was found to be 0.684. Accordingly, one highly complex item was excluded and the revised construct was put through another round of validation, after which a higher, acceptable coefficient of 0.724 was yielded.

Table 3. Cronbach’s alpha reliability statistics
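As a concrete illustration of the reliability test above, Cronbach's alpha can be computed directly from the respondent-by-item score matrix. The sketch below uses entirely hypothetical 5-point Likert responses, not the survey's actual data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 3 items on a 5-point scale
scores = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 3))  # 0.957, above the 0.7 cut-off
```

Dropping a weak item and re-running the same computation mirrors the revalidation step described above for the perceived accuracy construct.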

Another reliability scale assessment, through the computation of composite reliability, was also conducted. It is similar in interpretation to Cronbach's alpha test, but it applies the actual loadings of the items and does not assume weight equivalency among them (Chin, 1998). Moreover, Raykov (1997) showed that Cronbach's test may underestimate the reliability of congeneric measures, leaving the researcher with lower-bound estimates of the true reliability scores. As illustrated in Table 4, the results show that all scores far exceed the 0.7 recommended threshold (Hair et al., 2006). Consequently, these results bring more confidence in the conceptual model and its constructs, as they have demonstrated high internal consistency under the evaluation of two separate reliability tests.
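Composite reliability can be reproduced from standardised item loadings. A minimal sketch follows, with illustrative loadings rather than those actually reported in Table 4:

```python
def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    total = sum(loadings)
    error_var = sum(1 - l ** 2 for l in loadings)  # per-item error variance, 1 - lambda^2
    return total ** 2 / (total ** 2 + error_var)

# Hypothetical standardised loadings for a three-item construct
print(round(composite_reliability([0.82, 0.78, 0.85]), 3))  # 0.858, above the 0.7 threshold
```

Unlike Cronbach's alpha, each item contributes through its own loading rather than an assumed equal weight, which is why the two tests can disagree for congeneric measures.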

Table 4. Composite reliability statistics


5. Conclusion and Implications

Despite the large body of research written to augment our understanding of the determinants of acceptance and adoption of location-based services in various usage contexts, theoretical and empirical studies that examine people's acceptance of LBS in the realm of emergencies are scarce. This is clearly a gap in the current research, to which this study makes a significant contribution. This paper is a discussion of unexplored determinants of user acceptance of location-based emergency services. These include the visibility of LBS applications in the context of emergency management; the privacy of individuals and their perceived concerns regarding extensive collection and unauthorised secondary use of the collected data by governments; the risks associated with using LBS for EM; trust in the services and in the service provider; and the currency, accuracy and responsiveness quality features of the services being offered for emergency management.

This paper proposed a conceptual model based on the aforementioned determinants that should serve as the theoretical basis for future empirical examination of acceptance. The model significantly extends and builds upon the theory of reasoned action, applied in a technology-specific adaptation as a technology acceptance model.

Although the conceptual model was built specifically to predict an individual's acceptance of LBS for emergency management, it can nonetheless be used as a generic candidate model in empirical studies to predict people's acceptance of location-based services in other security usage contexts, applications, scenarios or settings. This is possible because all of the theorised factors of the model are highly relevant to the intrinsic characteristics of LBS. Examples of where the model would be deemed particularly useful include law enforcement applications, such as matters related to the surveillance implications of location-based services and location-based evidence capture, and social issues pertaining to the application of the services, such as arrest support, traffic violations or riot control.

In addition, the proposed model can be used not only to identify the predictors of acceptance but also to help service providers design their solutions in a way that fairly meets end-user expectations. For instance, the model identifies perceived usefulness, perceived ease of use and perceived service quality features as expected determinants of acceptance. Once empirically tested, the impact of these factors can provide guidelines to developers of the services to accommodate the right service requirements, reflecting acceptable performance standards for the potential users.

Finally, the application of location-based services in today's society has the potential to raise concerns amongst users. These concerns could easily be augmented in highly sensitive settings, such as emergency management or counter-terrorism solutions. While this paper presents theoretical foundations, it is hoped that the knowledge obtained here can be considered by governments and interested researchers in developing more successful deployment and diffusion strategies for location-based emergency services globally. The purpose of this paper is to help channel such strategies in the right direction.


Agarwal, R., & Prasad, J. (1997). The role of innovation characteristics and perceived voluntariness in the acceptance of information technologies. Decision Sciences, 28(3), 557–582. doi:10.1111/j.1540-5915.1997.tb01322.x.

Aloudat, A., & Michael, K. (2010). Toward the regulation of ubiquitous mobile government: A case study on location-based emergency services in Australia. Electronic Commerce Research, 10(4).

Aloudat, A., & Michael, K. (2011). The socio-ethical considerations surrounding government mandated location-based services during emergencies: An Australian case study. In M. Quigley (Ed.), ICT ethics and security in the 21st century: New developments and applications (1st ed., pp. 129–154). Hershey, PA: IGI Global. doi:10.4018/978-1-60960-573-5.ch007.

Aloudat, A., Michael, K., & Jun, Y. (2007). Location-based services in emergency management – from government to citizens: Global case studies. In P. Mendis, J. Lai, E. Dawson, & H. Abbass (Eds.), Recent advances in security technology (pp. 190–201). Melbourne, Australia: Australian Homeland Security Research Centre.

Canton, L. G. (2007). Emergency management: Concepts and strategies for effective programs (1st ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Carter, L., & Bélanger, F. (2005). The utilization of e-government services: Citizen trust, innovation and acceptance factors. Information Systems Journal, 15(1), 5–25. doi:10.1111/j.1365-2575.2005.00183.x.

Chang, S., Hsieh, Y.-J., Lee, T.-R., Liao, C.-K., & Wang, S.-T. (2007). A user study on the adoption of location based services. In Advances in web and network technologies, and information management (pp. 276–286).

Clarke, R., & Wigan, M. (2008). You are where you have been. In K. Michael, & M. G. Michael (Eds.), Australia and the new technologies: Evidence based policy in public administration (pp. 100–114). Canberra: University of Wollongong.

Code of Practice of Passive Location Services in the UK. (2006). Industry code of practice for the use of mobile phone technology to provide passive location services in the UK. Retrieved August 23, 2007, from UKCoP_location_servs_210706v_pub_clean.pdf

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13(3), 318–340. doi:10.2307/249008.

Frost and Sullivan research service. (2007). Asia Pacific location-based services (LBS) markets. Retrieved August 28, 2007, from http://

Gefen, D., Srinivasan Rao, V., & Tractinsky, N. (2003, January 6-9). The conceptualization of trust, risk and their relationship in electronic commerce: The need for clarifications. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences. IEEE Xplore Database.

Gow, G. A. (2005). Pinpointing consent: Location privacy and mobile phones. In K. Nyíri (Ed.), A sense of place: The global and the local in mobile communication (pp. 139–150). Vienna, Austria: Passagen Verlag.

Horkoff, J., Yu, E., & Liu, L. (2006). Analyzing trust in technology strategies. In Proceedings of the 2006 International Conference on Privacy, Security and Trust: Bridge the Gap Between PST Technologies and Business Services, Markham, Ontario, Canada.

Im, I., Kim, Y., & Han, H.-J. (2008). The effects of perceived risk and technology type on users’ acceptance of technologies. Information & Management, 45(1), 1–9. doi:10.1016/

Jagtman, H. M. (2010). Cell broadcast trials in the Netherlands: Using mobile phone technology for citizens’ alarming. Reliability Engineering & System Safety, 95(1), 18–28. doi:10.1016/j.ress.2009.07.005.

Junglas, I., & Spitzmuller, C. (2006). Personality traits and privacy perceptions: An empirical study in the context of location-based services. In Proceedings of the International Conference on Mobile Business, Copenhagen, Denmark (p. 11). IEEE Xplore Database.

Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information technology adoption across time: A cross-sectional comparison of pre-adoption and post-adoption beliefs. Management Information Systems Quarterly, 23(2), 183–213. doi:10.2307/249751.

Kaynama, S. A., & Black, C. I. (2000). A proposal to assess the service quality of online travel agencies. Journal of Professional Services Marketing, 21(1), 63–68. doi:10.1300/J090v21n01_05.

Krishnamurthy, N. (2002, December 15-17). Using SMS to deliver location-based services. In Proceedings of the 2002 IEEE International Conference on Personal Wireless Communications, New Delhi, India.

Küpper, A. (2005). Location-based services: Fundamentals and operation (1st ed.). Chichester, UK: John Wiley & Sons Ltd. doi:10.1002/0470092335.

Kurnia, S., & Chien, A.-W. J. (2003, June 9-11). The acceptance of online grocery shopping. In Proceedings of the 16th Bled eCommerce Conference, Bled, Slovenia.

Lee, T. (2005). The impact of perceptions of interac­tivity on customer trust and transaction intentions in mobile commerce. Journal of Electronic Commerce Research, 6(3), 165–180.

Li, P. P. (2008). Toward a geocentric framework of trust: An application to organizational trust. Management and Organization Review, 4(3), 413–439. doi:10.1111/j.1740-8784.2008.00120.x.

Liljander, V., Van-Riel, A. C. R., & Pura, M. (2002). Customer satisfaction with e-services: The case of an on-line recruitment portal. In M. Bruhn, & B. Stauss (Eds.), Jahrbuch Dienstleistungsmanagement 2002 – Electronic Services (1st ed., pp. 407–432). Wiesbaden, Germany: Gabler Verlag.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.

McKnight, D. H., & Chervany, N. L. (2001). What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology. International Journal of Electronic Commerce, 6(2), 35–59.

Michael, K. (2004). Location-based services: A vehicle for IT&T convergence. In K. Cheng, D. Webb, & R. Marsh (Eds.), Advances in e-engineering and digital enterprise technology (pp. 467–478). Professional Engineering Pub.

Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222. doi:10.1287/isre.2.3.192.

Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. Journal of Marketing, 58(3), 20. doi:10.2307/1252308.

Nicolaou, A. I., & McKnight, D. H. (2006). Perceived information quality in data exchanges: Effects on risk, trust, and intention to use. Information Systems Research, 17(4), 332–351. doi:10.1287/isre.1060.0103.

O’Connor, P. J., & Godar, S. H. (2003). We know where you are: The ethics of LBS advertising. In B. E. Mennecke, & T. J. Strader (Eds.), Mobile commerce: Technology, theory, and applications (pp. 211–222). Hershey, PA: IGI Global. doi:10.4018/978-1-59140-044-8.ch013.

O’Doherty, K., Rao, S., & Mackay, M. M. (2007). Young Australians’ perceptions of mobile phone content and information services: An analysis of the motivations behind usage. Young Consumers: Insight and Ideas for Responsible Marketers, 8(4), 257–268. doi:10.1108/17473610710838617.

Parasuraman, A., Berry, L., & Zeithaml, V. (1988). SERVQUAL: A multiple-item scale for measuring service quality. Journal of Retailing, 64(1), 12–40.

Pavlou, P. A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the technology acceptance model. International Journal of Electronic Commerce, 7(3), 101–134.

Perusco, L., & Michael, K. (2007). Control, trust, privacy, and security: Evaluating location-based services. IEEE Technology and Society Magazine, 4–16. doi:10.1109/MTAS.2007.335564.

Perusco, L., Michael, K., & Michael, M. G. (2006, October 11-13). Location-based services and the privacy-security dichotomy. In Proceedings of the Third International Conference on Mobile Computing and Ubiquitous Networking, London, UK (pp. 91–98). Research Online: University of Wollongong Database.

Pura, M. (2005). Linking perceived value and loyalty in location-based mobile services. Managing Service Quality, 15(6), 509–538. doi:10.1108/09604520510634005.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York, NY: Free Press.

Smith, H. J., Milberg, S. J., & Burke, S. J. (1996). Information privacy: Measuring individuals’ concerns about organizational practices. Management Information Systems Quarterly, 20(2), 167–196. doi:10.2307/249477.

Tan, Y.-H., & Thoen, W. (2001). Toward a generic model of trust for electronic commerce. International Journal of Electronic Commerce, 5(2), 61–74.

The Australian Government: Attorney General’s Department. (2008). Privacy act 1988: Act No. 119 of 1988 as amended. The Office of Legislative Drafting and Publishing. Retrieved August 2, 2008, from http://Compilation1.nsf/0/63C00ADD09B982ECCA257490002B9D57/$file/Privacy1988_WD02HYP.pdf

Thong, J. Y. L., Hong, W., & Tam, K. Y. (2004). What leads to acceptance of digital libraries? Communications of the ACM, 47(11), 78–83. doi:10.1145/1029496.1029498.

Tilson, D., Lyytinen, K., & Baxter, R. (2004, January 5-8). A framework for selecting a location based service (LBS) strategy and service portfolio. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, Big Island, HI. IEEE Xplore Database.

Weiss, D., Kramer, I., Treu, G., & Küpper, A. (2006, June 26-29). Zone services – An approach for location-based data collection. In Proceedings of the 8th IEEE International Conference on E-Commerce Technology, The 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, San Francisco, CA.

Yang, Z., Peterson, R. T., & Cai, S. (2003). Services quality dimensions of Internet retailing: An exploratory analysis. Journal of Services Marketing, 17(7), 685–700. doi:10.1108/08876040310501241.

Zeithaml, V. A., Parasuraman, A., & Malhotra, A. (2000). A conceptual framework for understanding e-service quality: Implications for future research and managerial practice. MSI Working Paper Series, (Working Paper 00-115), Marketing Science Institute, Cambridge, MA.

Zeithaml, V. A., Parasuraman, A., & Malhotra, A. (2002). Service quality delivery through web sites: A critical review of extant knowledge. Journal of the Academy of Marketing Science, 30(4), 362. doi:10.1177/009207002236911.

Zhang, X., & Prybutok, V. R. (2005). A consumer perspective of e-service quality. IEEE Transactions on Engineering Management, 52(4), 461–477. doi:10.1109/TEM.2005.856568.

Keywords: Acceptance, Location-Based Emergency Services, Privacy, Risk, Service Quality, Technology Acceptance Model (TAM), Theory of Reasoned Action (TRA), Trust, Visibility

Citation: Anas Aloudat, Katina Michael, "Towards a Conceptual Model of User Acceptance of Location Based Emergency Services", International Journal of Ambient Computing and Intelligence, 5(2), 17-34, April-June 2013.