My Research Programme (2002–Now)


Implantable Medical Device Tells All: Uberveillance Gets to the Heart of the Matter

In 2015, I provided evidence at an Australian inquiry into the use of subsection 313(3) of the Telecommunications Act 1997 by government agencies to disrupt the operation of illegal online services [1]. I stated to the Standing Committee on Infrastructure and Communications that mandatory metadata retention laws meant blanket surveillance coverage of Australians and visitors to Australia. The intent behind asking Australian service providers to keep subscriber search history data for up to two years was to grant government and law enforcement organizations the ability to search Internet Protocol–based records in the event of suspected criminal activity.

Importantly, I told the committee that, while instituting programs of surveillance through metadata retention laws would likely help to speed up criminal investigations, we should also note that every individual is a consumer, and such programs ultimately come back to bite innocent people through some breach of privacy or security. Enter the idea of uberveillance, which, I told the committee, is “exaggerated surveillance” that allows for interference [1] and that I believe is a threat to our human rights [2]. I strongly advised that invoking section 313 of the Telecommunications Act 1997 should require judicial oversight through the process of a search warrant. My recommendations fell on deaf ears, and, today, we even have the government deliberating over whether to relax metadata laws to allow information to be accessed for both criminal and civil litigation [3], which includes divorces, child custody battles, and business disputes. In June 2017, Australian Prime Minister Malcolm Turnbull even stated that “global social media and messaging companies” need to assist security services’ efforts to fight terrorism by “providing access to encrypted communications” [52].

Consumer Electronics Leave Digital Data Footprints

Of course, Australia is not alone in having metadata retention laws. Numerous countries have adopted such laws or similar directives since 2005, keeping certain types of data for anywhere from 30 days to indefinitely, although the standard length is somewhere between one and two years. For example, since 2005, Italy has retained subscriber information at Internet cafes for 30 days. I recall traveling to Verona in 2008 for the European Conference on Information Systems, forgetting my passport in my hotel room, and being unable to use an Internet cafe to send a message back home because I was carrying no recognized identity information. When I asked why I was unable to send a simple message, I was handed an antiterrorism information leaflet. Italy also retains telephone data for up to two years and Internet service provider (ISP) data for up to 12 months.

Similarly, the United Kingdom retains all telecommunications data for one to two years. It also maintains postal information (sender and receiver data), banking data for up to seven years, and vehicle movement data for up to two years. In Germany, metadata retention was established in 2008 under the directive Gesetz zur Neuregelung der Telekommunikationsüberwachung und anderer verdeckter Ermittlungsmaßnahmen sowie zur Umsetzung der Richtlinie 2006/24/EG, but it was overturned in 2010 by the Federal Constitutional Court of Germany, which ruled the law unconstitutional because it violated the fundamental right to secrecy of correspondence. In 2015, the issue was revisited, and a compromise was reached to retain telecommunications metadata for up to ten weeks. Mandatory data retention in Sweden was challenged by one holdout ISP, Bahnhof, which was threatened with an approximately US$605,000 fine in November 2014 if it did not comply [4]. It defended its stance of protecting the privacy and integrity of its customers by offering a no-logs virtual private network free of charge [5].

Some European Union countries have been deliberating whether to extend metadata retention to chats and social media, but, in the United States, many corporations voluntarily retain subscriber data, including market giants Amazon and Google. It was reported in The Guardian in 2013 that the United States records Internet metadata for not only itself but the world at large through the National Security Agency (NSA), which uses its MARINA database to conduct pattern-of-life analysis [6]. Additionally, with the 2008 Amendments Act to the Foreign Intelligence Surveillance Act of 1978, the time allotted for warrantless surveillance was increased, and additional provisions were made for emergency eavesdropping. Under section 702 of the Foreign Intelligence Surveillance Act Amendments Act, all American citizens’ metadata is now stored. Phone records are kept by the NSA in the MAINWAY telephony metadata collection database [53], and short message service and other text messages worldwide are retained in DISHFIRE [7], [8].

Emerging Forms of Metadata in an Internet of Things World

Figure 1. An artificial pacemaker (serial number 1723182) from St. Jude Medical, with electrode, which was removed from a deceased patient prior to cremation. (Photo courtesy of Wikimedia Commons.)

The upward movement toward a highly interconnected world through the Web of Things and people [9] will only mean that even greater amounts of data will be retained by corporations and government agencies around the world, extending beyond traditional forms of telecommunications data (e.g., phone records, e-mail correspondence, Internet search histories, metadata of images, videos, and other forms of multimedia). It should not surprise us that even medical devices are being touted as soon to be connected to the Internet of Things (IoT) [10]. Heart pacemakers, for instance, already send a steady stream of data back to the manufacturer’s data warehouse (Figure 1). Cardiac rhythmic data is stored on the implantable cardioverter-defibrillator’s (ICD’s) memory and is transmitted wirelessly to a home bedside monitor. Via a network connection, the data find their way to the manufacturer’s data store (Figure 2).

Figure 2. The standard setup for an EKG. A patient lies in a bed with EKG electrodes attached to his chest, upper arms, and legs. A nurse oversees the painless procedure. The ICD in a patient produces an EKG (A), which can automatically be sent to an ICD manufacturer's data store (B). (Image courtesy of Wikimedia Commons.)

In health speak, the ICD setup in the patient’s home is a type of remote monitoring that usually happens when the ICD recipient is in a state of rest, most often while sleeping overnight. It is a bit like how routine computer data backups happen, when network traffic is at its lowest. In the future, an ICD’s proprietary firmware updates may well travel in the opposite direction, from the manufacturer back down to the device, much like installing a Windows operating system update on a desktop. In the following section, we will explore the implications of access to personal cardiac data emanating from heart pacemakers in two cases.
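To make that data flow concrete, here is a minimal sketch in Python of the kind of nightly upload just described. It is not any manufacturer's actual protocol: the endpoint, payload fields, and JSON encoding are all illustrative assumptions, since real ICD telemetry formats are proprietary.

```python
# A hypothetical nightly telemetry push from a bedside monitor to a
# manufacturer's data store. The endpoint, payload fields, and JSON
# encoding are assumptions; real ICD telemetry protocols are proprietary.
import json
import time
import urllib.request

ENDPOINT = "https://telemetry.example-manufacturer.com/icd"  # hypothetical

def read_icd_log():
    """Stand-in for the short-range wireless read of the ICD's memory."""
    return {
        "device_serial": "0000000",  # placeholder, not a real serial
        "read_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "battery_ok": True,
        "episodes": [{"rhythm": "AF", "duration_s": 42}],  # illustrative
    }

def nightly_upload():
    """Runs while the patient sleeps, like an off-peak backup job."""
    body = json.dumps(read_icd_log()).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on a successful hand-off
```

The point of the sketch is simply that the patient never appears in the loop: the device reads, the monitor forwards, and the manufacturer stores.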

CASE 1: HUGO CAMPOS DENIED ACCESS TO HIS PERSONAL CARDIAC DATA

Figure 3. The conventional radiography of a single-chamber pacemaker. (Photo courtesy of wikimedia commons.)

In 2007, scientist Hugo Campos collapsed at a train station and later was horrified to find out that he had to get an ICD for his genetic heart condition. ICDs usually last about seven years before they require replacement (Figure 3). A few years into wearing the device, being a high-end quantified-self user who measured his sleep, exercise, and even alcohol consumption, Campos became inquisitive about how he might gain access to the data generated by his ICD (Figure 4). He made some requests to the ICD’s manufacturer and was told that he was unable to receive the information he sought, despite his doctor having full access. Some doctors could even remotely download a patient’s historical data on a mobile app for 24/7 support during emergency situations (Figure 5). Campos’s heart specialist did grant him access to written interrogation reports, but Campos only saw him about once every six months after his condition stabilized. Additionally, the logs were of little use to him on paper, as the fields and layout were predominantly decipherable only by a doctor (Figure 6).

Figure 4. The Nike FuelBand is a wearable computer that has become one of the most popular devices driving the so-called quantified-self trend. (Photo courtesy of wikimedia commons.)

Dissatisfied with being denied access, Campos took matters into his own hands and purchased a device on eBay that could help him get at the data. He also attended a specialist ICD course and then intercepted the cardiac rhythms being recorded [11]. He got to the data stream but realized that, to make sense of it from a patient perspective, a patient-centric app had to be built. Campos quickly deduced that regulatory and liability concerns were at the heart of the matter from the manufacturer’s perspective. How does a manufacturer continue to improve its product if it does not continually get feedback from the actual ICDs in the field? If manufacturers offered mobile apps for patients, might patients misread their own diagnoses? Is a manufacturer there to enhance life alone or to make a patient feel better about bearing an ICD? Can an ICD be misused by a patient? Or, in the worst-case scenario, what happens in the case of device failure? Or patient death? Would the proof lie onboard? Would the data tell the true story? These are all very interesting questions.

Figure 5. The Medical Waveform Format Encoding Rule software on a BlackBerry device. It displays medical waveforms, such as the EKG (shown), electroencephalogram, and blood pressure. Some doctors have software that allows them to interrogate EKG information, but patients presently do not have access to their own ICD data. (Photo courtesy of Wikimedia Commons.)

Campos might well have acted to not only get what he wanted (access to his data his own way) but to raise awareness globally as to the type of data being stored remotely by ICDs in patients. He noted in his TEDxCambridge talk in 2011 [12]:

the ICD does a lot more than just prevent a sudden cardiac arrest: it collects a lot of data about its own function and about the patient’s clinical status; it monitors its own battery life; the amount of time it takes to deliver a life-saving shock; it monitors a patient’s heart rhythm, daily activity; and even looks at variations in chest impedance to look if there is build-up of fluids in the chest; so it is a pretty complex little computer you have built into your body. Unfortunately, none of this invaluable data is available to the patient who originates it. I have absolutely no access to it, no knowledge of it.

Doctors, on the other hand, have full 24/7 unrestricted access to this information; even some of the manufacturers of these medical devices offer the ability for doctors to access this information through mobile devices. Compare this with the patients’ experience who have no access to this information. The best we can do is to get a printout or a hardcopy of an interrogation report when you go into the doctor’s office.

Figure 6. An EKG chart: 12 different leads of an EKG of a 23-year-old Japanese man. A similar log was provided to Hugo Campos upon his request for six months’ worth of EKG readings. (Photo courtesy of Wikimedia Commons.)

Campos decided to sue the manufacturer after he was informed that the data being generated by his ICD measuring his own heart activity were “proprietary data” [13]. Perhaps this is the new side of big data. But it is fraught with legal implications and, as far as I am concerned, blatantly dangerous. If we deduce that a person’s natural biometric data (in this instance, the cardiac rhythm of an individual) belong to a third party, then we are headed into murky waters when we speak of even more invasive technology like deep-brain stimulators [14]. It not only means that the device is not owned by the electrophorus (the bearer of technology) [15], [16], but quite possibly that the cardiac rhythms unique to the individual are also owned by the device manufacturer. We should not be surprised. The “Software and Services” section of Google Glass’s terms of use states that Google has the right to “remotely disable or remove any such Glass service from user systems” at its “sole discretion” [17]. Placing this in the context of ICDs means that a third party effectively has the right to switch someone off.

CASE 2: ROSS COMPTON’S PACEMAKER DATA IS SUBPOENAED FOR CRIMINAL INVESTIGATIONS

Enter the Ross Compton case of Middletown, Ohio. M.G. Michael and I have dubbed it one of the first authentic uberveillance cases in the world, because the technology was not just wearable but embedded. The story goes something like this: On 27 January 2017, 59-year-old Ross Compton was indicted on arson and insurance fraud charges. Police gained a search warrant to obtain his heart pacemaker readings (heart and cardiac rhythms) and called his alibi into question. Data from Compton’s pacemaker before, during, and after the fire broke out in his home were disclosed by the pacemaker's manufacturer after a subpoena was served. The insurer’s bill for the damage was estimated at about US$400,000. Police became suspicious of Compton when they traced gasoline to his shoes, trousers, and shirt.

In his statement of events to police, Compton told a story that conflicted with his call to 911. Forensic analysts found traces of multiple fires having been lit in various locations in the home. Yet Compton told police he had rushed his escape, breaking a window with his walking stick to throw some hastily packed bags out and then fleeing the flames himself to safety. Compton also told police that he had an artificial heart with a pump attached, a fact that he thought might help his cause but that was to be his undoing. In this instance, his pacemaker acted akin to a black box recorder on an airplane [18].

After securing the heart pacemaker data set, an independent cardiologist was asked to assess the telemetry data and determine whether Compton’s heart function was commensurate with the exertion needed to make a break with personal belongings during a life-threatening fire [19]. The cardiologist noted that, based on the evidence he was given to interpret, it was “highly improbable” that a man suffering from Compton’s medical conditions could manage to collect, pack, and remove the number of items that he did from his bedroom window, escape himself, and then proceed to carry those items to the front of his house, out of harm’s way (see “Columbo, How to Dial a Murder”). Compton’s own cardio readings, in effect, snitched on him, and none were happier than the law enforcement officer in charge of the case, Lieutenant Jimmy Cunningham, who noted that the pacemaker data, while only a supporting piece of evidence, were vital in proving Compton’s guilt after gasoline was found on his clothing. Evidence-based policing has now well outstripped the more traditional intelligence-led policing approach, entrenched as it is by the new realm of big data availability [20], [21].
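The cardiologist's reasoning can be caricatured in a few lines of Python. This is my own toy reconstruction, not the expert's actual method; the 65-bpm resting rate and the 1.5x exertion multiplier are invented parameters for illustration only.

```python
# Toy version of the alibi test: did the pacemaker log show the
# sustained exertion that packing and carrying bags would demand?
# The 65-bpm resting rate and 1.5x multiplier are invented parameters.
from statistics import mean

def exertion_plausible(samples, t_start, t_end,
                       resting_bpm=65.0, exertion_factor=1.5):
    """samples: list of (unix_time, bpm) pairs from the device log."""
    window = [bpm for t, bpm in samples if t_start <= t <= t_end]
    if not window:
        return None  # no telemetry covers the claimed escape
    return mean(window) >= resting_bpm * exertion_factor

# A flat, near-resting trace over the claimed escape window argues
# against the story: exactly the "highly improbable" finding above.
trace = [(0, 66), (60, 64), (120, 67), (180, 65)]
print(exertion_plausible(trace, 0, 180))  # False
```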

Columbo, How to Dial a Murder [S1] Columbo says to the murderer:
“You claim that you were at the physicians getting your heart examined…which was true [Columbo unravels a roll of EKG readings]…the electrocardiogram, Sir. Just before three o’clock your physician left you alone for a resting trace. At that moment you were lying down in a restful position and your heart showed a calm, slow, easy beat [pointing to the EKG readout]. Look at this part, right here [Columbo points to the reading], lots of sudden stress, lots of excitement, right here at three o’clock, your heart beating like a hammer just before the dogs attacked…Oh you killed him with a phone call, Sir…I’ll bet my life on it. Very simple case. Not that I’m particularly bright, Sir…I must say, I found you disappointing, I mean your incompetence, you left enough clues to sink a ship. Motive. Opportunity. And for a man of your intelligence Sir, you got caught on a lot of stupid lies. A lot.” [S1] Columbo: How to Dial a Murder. Directed by James Frawley. 1978. Los Angeles, CA: Universal Pictures Home Entertainment, 2006. DVD.

Consumer Electronics Tell a Story

Several things are now of interest to the legal community: first and foremost, how is a search warrant for a person’s pacemaker data executed? In case 1, Campos was denied access to his own ICD data stream by the manufacturer, and yet his doctor had full access. In case 2, Compton’s own data provided authorities with the extra evidence they needed to accuse him of fraud. This is yet another example of seemingly private data being used against an individual (in this instance, the person from whose body the data emanated), but, in the future, the data from one person’s pacemaker might well implicate other members of the public. For example, the pacemaker might be able to prove that someone’s heart rate substantially increased during an episode of domestic violence [22] or that an individual was unfaithful in a marriage, based on the cross-matching of his or her time stamp and heart rate data with another person's.
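A hedged sketch of what such cross-matching might look like: align two heart-rate streams on shared time stamps and test whether they rise and fall together. The minute-level bucketing and the 0.7 correlation threshold are my own illustrative assumptions, not an established forensic procedure (note that statistics.correlation requires Python 3.10 or later).

```python
# Align two time-stamped heart-rate streams and test for co-movement.
# Bucketing by minute and the 0.7 cutoff are illustrative choices only.
from statistics import correlation  # Python 3.10+

def aligned(a, b):
    """a, b: dicts mapping unix-minute -> mean bpm in that minute."""
    shared = sorted(set(a) & set(b))
    return [a[m] for m in shared], [b[m] for m in shared]

def co_present(a, b, min_overlap=10, threshold=0.7):
    """True if the two streams overlap enough and move together."""
    xs, ys = aligned(a, b)
    if len(xs) < min_overlap:
        return False  # too little shared data to infer anything
    return correlation(xs, ys) >= threshold
```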

Of course, a consumer electronic does not have to be embedded to tell a story (Figure 7). It can also be wearable or luggable, as in the case of a Fitbit that was used as a truth detector in an alleged rape case that turned out to be completely fabricated [23]. Lawyers are now beginning to experiment with other wearable gadgetry that helps to show the impact of personal injury cases from accidents (work and nonwork related) on a person’s ability to return to his or her normal course of activities [24] (Figure 8). We can certainly expect to see a rise in criminal and civil litigation that makes use of a person’s Android S Health data, for instance, which measure things like steps taken, stress, heart rate, SpO2, and even location and time (Figure 9). But cases like Compton’s open the floodgates.

Figure 7. A Fitbit, which measures calories, steps, distance, and floors. (Photo courtesy of Wikimedia Commons.)

Figure 8. A close-up of a patient wearing the iRhythm ZIO XT patch, nine days after its placement. (Photo courtesy of Wikimedia Commons.)

I have pondered the evidence itself: are heart rate data really any different from other biometric data, such as deoxyribonucleic acid (DNA)? Are they perhaps more revealing than DNA? Should they be dealt with in the same way? For example, is the chain of custody for data coming from a pacemaker equal to that of a DNA sample and profile? In some ways, heart rates can be considered a behavioral biometric [25], whereas DNA is actually a cellular sample [26]. No doubt we will be debating the challenges, and extreme perspectives will be hotly contested. But it seems nothing is off limits. If it exists, it can be used for or against you.

Figure 9. (a) and (b) The health-related data from Samsung's S Health application. Unknown to most is that Samsung has diversified its businesses to become the parent company of one of the world's largest health insurers. (Photos courtesy of Katina Michael.)

The Paradox of Uberveillance

In 2006, M.G. Michael coined the term uberveillance to denote “an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body” [27]. No doubt Michael’s background as a former police officer in the early 1980s, together with his cross-disciplinary studies, had something to do with his insights into the creation of the term [28]. This kind of surveillance does not watch from above; rather, it penetrates the body and watches from the inside, looking out [29].

Furthermore, uberveillance “takes that which was static or discrete…and makes it constant and embedded” [30]. It is real-time location and condition monitoring and “has to do with the fundamental who (ID), where (location), and when (time) questions in an attempt to derive why (motivation), what (result), and even how (method/plan/thought)” [30]. Uberveillance can be used prospectively or retrospectively. It can be applied as a “predictive mechanism for a person’s expected behavior, traits, likes, or dislikes; or it can be based on historical fact” [30].

In 2008, the term uberveillance was entered into the official Macquarie Dictionary of Australia [31]. In research spanning more than two decades on the social implications of implantable devices for medical and nonmedical applications, I predicted [15] that the technological trajectory of implantable devices, once used solely for care purposes, would one day see them used retrospectively for tracking and monitoring purposes. Even if the consumer electronics in question were there to provide health care (e.g., the pacemaker example) or convenience (e.g., a near-field-communication-enabled smartphone), the underlying dominant function of the service would be control [32]. The socioethical implications of pervasive and persuasive emerging technologies have yet to be fully understood, but, increasingly, they will take center stage in court hearings, as DNA evidence did and, subsequently, global positioning system (GPS) data [33].

Medical device implants provide a very rich source of human activity monitoring, such as the electrocardiogram (EKG), heart rate, and more. Companies like Medtronic, among others specializing in implantables, have proposed a future where even healthy people carry a medical implant packed with sensors that could be life sustaining, detecting heart problems (among other conditions), reporting them to a care provider, and signaling when assistance might be required [34]. Heart readings provide an individual’s rhythmic biometrics and, at the same time, can record increases and decreases in activity. One could extrapolate that it won’t be long before our health insurance providers are asking for the same data in exchange for reduced premiums.

Figure 10. A pacemaker cemetery. (Photo courtesy of wikimedia commons.)

The future might well be one where we all carry an implantable black box recorder of some sort [35], an alibi that proves our innocence or guilt, minute by minute (Figure 10). Of course, an electronic eye constantly recording our every move brings a new connotation to the wise words expressed in the story of Pinocchio: always let your conscience be your guide. Future black boxes may not be as forgiving as Jiminy Cricket; they may be more like Black Mirror’s “The Entire History of You” [36]. And if we assume that these technologies, whether implantable, wearable, or even luggable, are to be completely trusted, then we are wrong.

The contribution of M.G. Michael’s uberveillance is in the emphasis that the uberveillance equation is a paradox. Yes, there are near-real-time data flowing continuously from more points of view than ever [37]: closed-circuit TV looking down, smartphones in our pockets recording location and movement, and even implantables in some of us ensuring nontransferability of identity [38]. The proposition is that all this technology, in sum total, is bulletproof and foolproof, omniscient and omnipresent, a God’s-eye view that cannot be challenged, but for the fact that the infrastructure, the devices, and the software are all too human. And while uberveillance is being touted for good through an IoT world that will collectively make us and our planet more sustainable, there is one big crack in the utopian vision: the data can misrepresent, misinform, and be subject to information manipulation [39]. Researchers are already studying the phenomenon of complex visual information manipulation: how to tell whether data have been tampered with or a suspect has been introduced into or removed from a scene of a crime, along with other forensic visual analytics [40]. It is why Vladimir Radunovic, director of cybersecurity and e-diplomacy programs at the DiploFoundation, cited M.G. Michael’s observation that “big data must be followed by big judgment” [41].

What happens in the future if we go down the path of constant bodily monitoring of vital organs and vital signs, where we are all bearing some device or at least wearing one? Will we be in control of our own data or, as seems obvious at present, will we not? And how might self-incrimination play a role in our daily lives? Or, even worse, will we face individual expectations that can be met only by playing to a theater 24/7, so that our health statistics stack up to whatever measure and cross-examination they are put under, personally or publicly [42]? Can we believe the authenticity of every data stream coming out of a sensor onboard consumer electronics? The answer is no.

Having run many years of GPS data-logging experiments, I can say that a lot can go wrong with sensors, and they are susceptible to outside environmental conditions. For instance, they can log your location miles away (even on another continent), the temperature gauge can play up, time stamps can revert to different time zones, the speed of travel can be wildly inaccurate due to propagation delays in satellites, readings may not come at regular intervals due to some kind of interference, and memory overflow and battery issues, while getting better, are still problematic. The long and short of it is that technology cannot be trusted. At best, it can act as supporting evidence but should never replace eyewitness accounts. Additionally, “the inherent problem with uberveillance is that facts do not always add up to truth (i.e., as in the case of an exclusive disjunction T ⊕ T = F), and predictions based on uberveillance are not always correct” [30].
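By way of example, the crudest defense we used against such faults is a plausibility filter. The sketch below, with an assumed 50-m/s (180-km/h) speed ceiling chosen for pedestrian and vehicle logging, drops any fix that implies a physically impossible jump from the previous accepted fix; the ceiling and field layout are illustrative, not a description of any particular logger.

```python
# A minimal sanity filter for GPS logs: reject any fix that implies an
# implausible speed relative to the previously accepted fix. The
# 50-m/s ceiling is an illustrative choice, not a universal constant.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return 2 * R * asin(sqrt(a))

def filter_fixes(fixes, max_speed_mps=50.0):
    """fixes: time-ordered list of (unix_time, lat, lon) tuples."""
    kept = []
    for t, lat, lon in fixes:
        if kept:
            t0, lat0, lon0 = kept[-1]
            dt = t - t0
            if dt <= 0:  # clock went backwards: time-zone/rollover glitch
                continue
            if haversine_m(lat0, lon0, lat, lon) / dt > max_speed_mps:
                continue  # "teleport" fix, e.g., a point logged miles away
        kept.append((t, lat, lon))
    return kept
```

Filters like this salvage a usable track, but they also illustrate the point: the raw stream needed cleaning in the first place.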

Conclusion

While device manufacturers are challenging in court the claims that their ICDs are hackable [43], highly revered security experts like Bruce Schneier are cautioning heavily against going down the IoT path, no matter how inviting it might look. In his acclaimed blog, Schneier recently wrote [44]:

All computers are hackable…The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster…We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

The cardiac implantables market is predicted to reach US$43 billion by 2020 [45]. Obviously, the stakes are high and getting higher with every breakthrough implantable innovation we develop and bring to market. We will need to address some very pressing questions, as Schneier suggests, through some form of regulation if we are to maintain consumer privacy rights and data security. Joe Carvalko, a former telecommunications engineer and U.S. patent attorney, an associate editor of IEEE Technology and Society Magazine, and a pacemaker recipient, has added much to this discussion already [46], [47]. I highly recommend several of his publications, including “Who Should Own In-the-Body Medical Data in the Age of eHealth?” [48] and an ABA publication coauthored with Cara Morris, The Science and Technology Guidebook for Lawyers [49]. Carvalko is a thought leader in this space, and I encourage you to listen to his podcast [50] and also to read his speculative fiction novel Death by Internet [51], which is hot off the press and wrestles with some of the issues raised in this article.

REFERENCES

[1] K. Michael, M. Thistlethwaite, M. Rowland, and K. Pitt. (2015, Mar. 6). Standing Committee on Infrastructure and Communications, Section 313 of the Telecommunications Act 1997. [Online]. Available: http://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;db=COMMITTEES;id=committees%2Fcommrep%2Fd8727a07-ba09-4a91-9920-73d21e446d1d%2F0006;query=Id%3A%22committees%2Fcommrep%2Fd8727a07-ba09-4a91-9920-73d21e446d1d%2F0000%22

[2] S. Bronitt and K. Michael, “Human rights, regulation, and national security,” IEEE Technol. Soc. Mag., vol. 31, pp. 15–16, 2012.

[3] B. Hall. (2016, Dec. 22). Australians’ phone and email records could be used in civil lawsuits. Sydney Morning Herald. [Online]. Available: http://www.smh.com.au/federal-politics/political-news/australians-phone-and-email-records-could-be-used-in-civil-lawsuits-20161222-gtgdy6.html

[4] PureVPN. (2015, Oct. 14). Data retention laws—an update. [Online]. Available: https://www.purevpn.com/blog/data-retention-laws-by-countries/

[5] D. Crawford. (2014, Nov. 18). Renegade Swedish ISP offers all customers VPN. Best VPN. [Online]. Available: https://www.bestvpn.com/blog/11806/renegade-swedish-isp-offers-customers-vpn/

[6] J. Ball. (2013, Oct. 1). NSA stores metadata of millions of web users for up to a year, secret files show. Guardian. [Online]. Available: https://www.theguardian.com/world/2013/sep/30/nsa-americans-metadata-year-documents

[7] J. S. Granick, American Spies: Modern Surveillance, Why You Should Care, and What to Do About It. Cambridge, U.K.: Cambridge Univ. Press, 2017.

[8] A. Gregory, American Surveillance: Intelligence, Privacy, and the Fourth Amendment. Madison: Univ. of Wisconsin Press, 2016.

[9] K. Michael, G. Roussos, G. Q. Huang, A. Chattopadhyay, R. Gadh, B. S. Prabhu, and P. Chu, “Planetary-scale RFID services in an age of uberveillance,” Proc. IEEE, vol. 98, no. 9, pp. 1663–1671, 2010.

[10] N. Lars. (2015, Mar. 26). Connected medical devices, apps: Are they leading the IoT revolution—or vice versa? Wired. [Online]. Available: https://www.wired.com/insights/2014/06/connected-medical-devices-apps-leading-iot-revolution-vice-versa/

[11] H. Campos. (2015). The heart of the matter. Slate. [Online]. Available: http://www.slate.com/articles/technology/future_tense/2015/03/patients_should_be_allowed_to_access_data_generated_by_implanted_devices.html

[12] H. Campos. (2011). Fighting for the right to open his heart data: Hugo Campos at TEDxCambridge 2011. [Online]. Available: https://www.youtube.com/watch?v=oro19-l5M8k

[13] D. Hinckley. (2016, Feb. 22). This big brother/big data business goes way beyond Apple and the FBI. Huffington Post. [Online]. Available: http://www.huffingtonpost.com/david-hinckley/this-big-brotherbigdata_b_9292744.html

[14] K. Michael, “Mental health, implantables, and side effects,” IEEE Technol. Soc. Mag., vol. 34, no. 2, pp. 5–17, 2015.

[15] K. Michael, “The technological trajectory of the automatic identification industry: The application of the systems of innovation (SI) framework for the characterisation and prediction of the auto-ID industry,” Ph.D. dissertation, School of Information Technology and Computer Science, Univ. of Wollongong, Wollongong, Australia, 2003.

[16] K. Michael and M. G. Michael, “Homo electricus and the continued speciation of humans,” in The Encyclopedia of Information Ethics and Security, M. Quigley, Ed. Hershey, PA: IGI Global, 2007, pp. 312–318.

[17] Google Glass. (2014, Aug. 19). Glass terms of use. [Online]. Available: https://www.google.com/glass/termsofuse/

[18] K. Michael and M. G. Michael, “Implementing ‘namebers’ using microchip implants: The black box beneath the skin,” in This Pervasive Day: The Potential and Perils of Pervasive Computing, J. Pitt, Ed. London, U.K.: Imperial College Press, 2011.

[19] D. Smith. (2017, Feb. 4). Pacemaker data used to charge alleged arsonist. Jonathan Turley. [Online]. Available: https://jonathanturley.org/2017/02/04/pacemaker-data-used-to-charge-alleged-arsonist/

[20] K. Michael, “Big data and policing: The pros and cons of using situational awareness for proactive criminalisation,” presented at the Human Rights and Policing Conf., Australian National University, Canberra, Apr. 16, 2013.

[21] K. Michael and G. L. Rose, “Human tracking technology in mutual legal assistance and police inter-state cooperation in international crimes,” in From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (The Second Workshop on Social Implications of National Security), K. Michael and M. G. Michael, Eds. Wollongong, Australia: University of Wollongong, 2007.

[22] F. Gerry, “Using data to combat human rights abuses,” IEEE Technol. Soc. Mag., vol. 33, no. 4, pp. 42–43, 2014.

[23] J. Gershman. (2016, Apr. 21). Prosecutors say Fitbit device exposed fibbing in rape case. Wall Street Journal. [Online]. Available: http://blogs.wsj.com/law/2016/04/21/prosecutors-say-fitbit-device-exposed-fibbing-in-rape-case/

[24] P. Olson. (2014, Nov. 16). Fitbit data now being used in the courtroom. Forbes. [Online]. Available: https://www.forbes.com/sites/parmyolson/2014/11/16/fitbit-data-court-room-personal-injury-claim/#459434e37379

[25] K. Michael and M. G. Michael, “The social and behavioural implications of location-based services,” J. Location Based Services, vol. 5, no. 3–4, pp. 121–137, Sept.–Dec. 2011.

[26] K. Michael, “The European court of human rights ruling against the policy of keeping fingerprints and DNA samples of criminal suspects in Britain, Wales and Northern Ireland: The case of S. and Marper v United Kingdom,” in The Social Implications of Covert Policing (Workshop on the Social Implications of National Security, 2009), S. Bronitt, C. Harfield, and K. Michael, Eds. Wollongong, Australia: University of Wollongong, 2010, pp. 131–155.

[27] M. G. Michael and K. Michael, “National security: The social implications of the politics of transparency,” Prometheus, vol. 24, no. 4, pp. 359–364, 2006.

[28] M. G. Michael, “On the ‘birth’ of uberveillance,” in Uberveillance and the Social Implications of Microchip Implants, M. G. Michael and K. Michael, Eds. Hershey, PA: IGI Global, 2014.

[29] M. G. Michael and K. Michael, “A note on uberveillance,” in From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (The Second Workshop on Social Implications of National Security), M. G. Michael and K. Michael, Eds. Wollongong, Australia: University of Wollongong, 2007.

[30] M. G. Michael and K. Michael, “Toward a state of uberveillance,” IEEE Technol. Soc. Mag., vol. 29, pp. 9–16, 2010.

[31] M. G. Michael and K. Michael, “Uberveillance,” in Fifth Edition of the Macquarie Dictionary, S. Butler, Ed. Sydney, Australia: Sydney University, 2009.

[32] A. Masters and K. Michael, “Lend me your arms: The use and implications of humancentric RFID,” Electron. Commerce Res. Applicat., vol. 6, no. 1, pp. 29–39, 2007.

[33] K. D. Stephan, K. Michael, M. G. Michael, L. Jacob, and E. P. Anesta, “Social implications of technology: The past, the present, and the future,” Proc. IEEE, vol. 100, pp. 1752–1781, 2012.

[34] E. Strickland. (2014, June 10). Medtronic wants to implant sensors in everyone. IEEE Spectrum. [Online]. Available: http://spectrum.ieee.org/tech-talk/biomedical/devices/medtronic-wants-to-implant-sensors-in-everyone

[35] K. Michael, “The benefits and harms of national security technologies,” presented at the Int. Women in Law Enforcement Conf., Hyderabad, India, 2015.

[36] J. Armstrong and B. Welsh. (2011). “The entire history of you,” Black Mirror, C. Brooker, Ed. [Online]. Available: https://www.youtube.com/watch?v=Sw3GIR70HAY

[37] K. Michael, “Sousveillance and point of view technologies in law enforcement,” presented at the Sixth Workshop on the Social Implications of National Security: Sousveillance and Point of View Technologies in Law Enforcement, University of Sydney, Australia, 2012.

[38] K. Albrecht and K. Michael, “Connected: To everyone and everything,” IEEE Technol. Soc. Mag., vol. 32, pp. 31–34, 2013.

[39] M. G. Michael, “The paradox of the uberveillance equation,” IEEE Technol. Soc. Mag., vol. 35, no. 3, pp. 14–16, 20, 2016.

[40] K. Michael, “The final cut—tampering with direct evidence from wearable computers,” presented at the Fifth Int. Conf. Multimedia Information Networking and Security (MINES 2013), Beijing, China, 2013.

[41] V. Radunovic, “Internet governance, security, privacy and the ethical dimension of ICTs in 2030,” IEEE Technol. Soc. Mag., vol. 35, no. 3, pp. 12–14, 2016.

[42] K. Michael. (2011, Sept. 12). The microchipping of people and the uberveillance trajectory. Social Interface. [Online]. Available: http://socialinterface.blogspot.com.au/2011/08/microchipping-of-people-and.html

[43] O. Ford. (2017, Jan. 12). Post-merger Abbott moves into 2017 with renewed focus, still faces hurdles. J.P. Morgan Healthcare Conf. 2017. [Online]. Available: http://www.medicaldevicedaily.com/servlet/com.accumedia.web.Dispatcher?next=bioWorldHeadlines_article&forceid=94497

[44] B. Schneier. (2017, Feb. 1). Security and the Internet of Things: Schneier on security. [Online]. Available: https://www.schneier.com/blog/archives/2017/02/security_and_th.html

[45] IndustryARC. (2015, July 30). Cardiac implantable devices market to reach $43 billion by 2020. GlobeNewswire. [Online]. Available: https://globenewswire.com/news-release/2015/07/30/756345/10143745/en/Cardiac-Implantable-Devices-Market-to-Reach-43-Billion-By-2020.html

[46] J. Carvalko, The Techno-Human Shell: A Jump in the Evolutionary Gap. Mechanicsburg, PA: Sunbury Press, 2013.

[47] J. Carvalko and C. Morris, “Crowdsourcing biological specimen identification: Consumer technology applied to health-care access,” IEEE Consum. Electron. Mag., vol. 4, no. 1, pp. 90–93, 2014.

[48] J. Carvalko, “Who should own in-the-body medical data in the age of ehealth?” IEEE Technol. Soc. Mag., vol. 33, no. 2, pp. 36–37, 2014.

[49] J. Carvalko and C. Morris, The Science and Technology Guidebook for Lawyers. New York: ABA, 2014.

[50] K. Michael and J. Carvalko. (2016, June 20). Joseph Carvalko speaks with Katina Michael on his non-fiction and fiction pieces. [Online]. Available: https://www.youtube.com/watch?v=p4JyVCba6VM

[51] J. Carvalko, Death by Internet. Mechanicsburg, PA: Sunbury Press, 2016.

[52] R. Pearce. (2017, June 7). “No-one’s talking about backdoors” for encrypted services, says PM’s cyber guy. Computerworld. [Online]. Available: https://www.computerworld.com.au/article/620329/no-one-talking-about-backdoors-says-pm-cyber-guy/

[53] M. Ambinder. (2013, Aug. 14). An educated guess about how the NSA is structured. The Atlantic. [Online]. Available: https://www.theatlantic.com/technology/archive/2013/08/an-educated-guess-about-how-the-nsa-is-structured/278697/

Acknowledgment

A short form of this article was presented as a video keynote speech for the Fourth International Conference on Innovations in Information, Embedded and Communication Systems in Coimbatore, India, on 17 March 2017. The video is available at https://www.youtube.com/watch?v=bEKLDhNfZio.

Keywords

Metadata, Electrocardiography, Pacemakers, Heart beat, Telecommunication services, Implants, Biomedical equipment, biomedical equipment, cardiology, criminal law, medical computing, police data processing, transport protocols, implantable medical device, heart, Australian inquiry, government agencies, illegal online services, mandatory metadata retention laws, government organizations, law enforcement organizations, Internet protocol

Citation: Katina Michael, 2017, "Implantable Medical Device Tells All: Uberveillance Gets to the Heart of the Matter", IEEE Consumer Electronics Magazine, Vol. 6, No. 4, Oct. 2017, pp. 107 - 115, DOI: 10.1109/MCE.2017.2714279.

 

Are You Addicted to Your Smartphone, Social Media, and More?

Abstract

Back in 1998, I remember receiving my first second-generation (2G) mobile assignment at telecommunications vendor Nortel: a bid for Hutchison in Australia, a small alternate operator. At that time, I had already grown accustomed to modeling network traffic on traditional voice networks and was beginning to look at the impact of the Internet on data network dimensioning. In Australia, we were still relying on the public switched telephone network to dial up the Internet from homes, the Integrated Services Digital Network in small-to-medium enterprises, and leased lines for larger corporates and the government. But modeling mobile traffic was a different affair.

Figure 1. Russia’s Safe-Selfie campaign flyer.


I remember thinking: how will we begin to categorize subscribers, and what kinds of network patterns could we expect in mobility? I recall beginning to define the market segments into four categories: security (low-end users), road warriors (high-end corporate users), socialites (youth market), and everyday users (average users). Remember, this was even before the rise of the Wireless Application Protocol. While naysayers claimed that the capital expenditure on 2G networks would be prohibitive and that the investments would not be recouped for decades, subscriber usage rapidly increased with devices like the Research in Motion BlackBerry, which allowed for mobile e-mail.

Fast-forward to 2000. I was already knee-deep in third-generation (3G) mobile bids, predicting the cost of 3G spectrum in emerging and developed markets, increasing my categories of subscriber types from four to nine segments, and calculating upload and download rates for top mobile apps like gaming, images (photos and imaging), and e-mail with chunky PowerPoint attachments and other file types. We knew what was coming was big, but perhaps we ourselves, sitting at the coalface, didn’t realize what a big impact it would actually have on our lives and the lives of our children. Our models showed average revenues per user of US$120 per month for corporates. At the time, most of us believed an explosion would take place in the coming decade (though not as big as it turned out to be), even as we preached the mantra that voice was now just another bit of data. In calculating pricing models, we brainstormed with one another: who would spend over 100 min on a mobile? Who would spend hours gaming on a handset rather than on a larger gaming console?

Enter social media, enabled by this wireless Internet protocol (IP) revolution and the rapid increase in diverse mobile hardware from netbooks to tablets to smartphones and smartwatches. Then things rapidly changed again. LinkedIn, Facebook, Twitter, Instagram, Snapchat, and WeChat are all enjoyed by social media users (consumers and professionals) around the globe today, and it is estimated that there will be 2.67 billion social network users by 2018 [1]. Over one-third of consumers worldwide, more than 2.56 billion people, will have a mobile phone by 2018, and more than half of these will have smartphone capability, making feature phones the minority [2].

The Social Media Boom

When Google announced that a staggering 24 billion selfies were uploaded to its servers alone in 2015, consuming 13.7 petabytes of storage space, I stopped and contemplated the meaning of these statistics [3]. What about the zillions of selfies uploaded to Apple’s iCloud or posted to Facebook, Instagram, Snapchat, and Twitter? It suggests that most people are taking at least one selfie a day and sharing their image publicly. The figure is much higher for the impressionable teen market, with a 2015 Google study reporting that youth take, on average, 14 selfies and 16 photos or videos, check social media 21 times, and send 25 text messages per day [4]. This number continues to grow steadily, according to fresh evidence from Pew Internet Research [5], and is now even impacting workplace productivity [6]. In the same year that Google announced the selfie statistics, Russia’s Ministry of Internal Affairs began a Safe-Selfie campaign [7], stating: “When you take a selfie, make sure that you are in a safe place and your life is not in danger!” (Figure 1). This followed, of course, the acknowledged deaths that had occurred while younger and older individuals were in the process of taking selfies; such deaths outnumbered those from shark attacks in 2015 [8]. One cannot fathom it.
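As a back-of-the-envelope check of Google's figures (my arithmetic, not Google's), dividing the storage consumed by the number of selfies gives the implied average file size:

\[
\frac{13.7 \times 10^{15}\ \text{bytes}}{24 \times 10^{9}\ \text{selfies}} \approx 5.7 \times 10^{5}\ \text{bytes} \approx 0.6\ \text{MB per selfie},
\]

which is about what a compressed smartphone photo of that era would occupy, so the two reported numbers are at least internally consistent.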

Noticeable is the adoption of high-tech gadgetry, especially in the childhood to youth markets, with an even greater penetration among teenagers and individuals younger than 34 years. It is rather disturbing to read that 24% of U.S. teens go online “almost constantly” [5], facilitated by the widespread penetration of smartphones and the increasing requirement for tablets in the secondary education system. The sheer affordability of tech gear and its increasing multifunctionality now mean that most people have a digital Swiss Army knife at their disposal in the smartphone. By accessing the Internet via your phone, you can upload pictures, browse websites, navigate locations on maps, and be reachable any time of the day. The lure of killing time while waiting for appointments or riding public transportation means that most people are frequently engaged in some form of interaction through a screen. The short-lived Google Glass was a hands-free solution that would have brought the screen right up to the eye [9], and, while momentarily halted, one can envisage a future where we see everything through filtered lenses. Google Glass Enterprise Edition is now on sale [35]!

The Rise of Internet Addiction

Experts have tried to quantify the amount of time being spent on screens, specific devices (smartphones), and even particular apps (e.g., Facebook), and have identified guidelines for various age groups for appropriate use. Most notable is the work started by Dr. Kimberly Young in 1995 when she established her website netaddiction.com and clinical practice, the Center for Internet Addiction. She has been conducting research on how the Internet changes people’s behavior. Her guideline “3-6-9-12 Screen Smart Parenting” has gained worldwide recognition [10].

Increasingly, we are hearing about social media addiction stories (see “Social Media Addiction” [11] and “Mental Health and Social Media” [36]). We have all heard about the toddler screaming for his or her iPad before breakfast and gamers who are reluctant to come to dinner with the rest of the family (independent of gender, age, or ethnicity) unless they are instant messaged. There is a growing complexity around the diagnosis of various addiction behaviors. Some suffer from Internet addiction broadly, while others are addicted to computer gaming, smartphones, or even social media. It has been postulated by some researchers that most of these modern technology-centric addictions have age-old causes, such as obsessive-compulsive disorder, but they have definitely been responsible for triggering a new breed of what I consider to be yet-to-be-defined medical health issues.

In the last five years especially, much research has begun in the area of online addiction. Various scales for Internet addiction have been developed by psychologists, and there are now even scales for specific technologies, like smartphones. The South Koreans have developed the Smartphone Addiction Scale, the Smartphone Addiction Proneness Scale, and the KS-scale, a short-form scale for Koreans to self-report Internet addiction. Unsurprisingly, these scales are significant for the South Korean market, given that it is the world leader in Internet connectivity, with the world’s fastest average Internet connection speed and roughly 93% of citizens connected. It follows that the greater the penetration of high-speed Internet in a market, the greater the propensity for a subscriber to suffer from some form of online addiction. There are even scales for specific social media applications, e.g., the Bergen Facebook Addiction Scale (BFAS), developed by Dr. Cecilie Andreassen at the University of Bergen in Norway in 2012 (see “BFAS Survey Statements” [12]).
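To make the scale concrete, here is a sketch of how a six-item instrument like the BFAS is typically scored. The five-point response format and the "3 or more on at least four items" cutoff follow commonly cited descriptions of the scale, but treat the sketch as illustrative, not clinical.

```python
# Scoring sketch for a six-item, five-point scale like the BFAS.
# The cutoff (>= 3 on at least four of six items) follows commonly
# cited descriptions of the scale; illustrative only, not a diagnosis.
def bfas_flag(responses):
    """responses: six integers from 1 (very rarely) to 5 (very often)."""
    if len(responses) != 6 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected six responses on a 1-5 scale")
    return sum(r >= 3 for r in responses) >= 4

print(bfas_flag([4, 3, 2, 5, 3, 1]))  # True: four items scored 3 or more
```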

Accessible Internet Feeds the Addiction

Despite its remoteness from the rest of the world, Australia surprisingly does not lag far behind the South Korean market. According to the Australian Bureau of Statistics, in 2013, 94% of Australians were Internet users, although regional areas across Australia do not enjoy the same high-speed access as South Korea, despite the National Broadband Network initiative that was founded in 2009, with the actual rollout beginning in 2015. Yet, alarmingly, one recognized industry report, “Digital Down Under,” stated that 13.4 million Australians spent a whopping 18.8 h a day online [13]. This statistic has been contested but nonetheless defended by Lee Hawksley, managing director of ExactTarget Australia, who oversaw the research. She has gone on record saying, “...49% of Australians have smartphones, which means we are online all the time…from waking to sleep, when it comes to e-mail, immersion, it’s even from the 18–65s; however, obviously with various social media channels the 18–35s are leading the charge.”

According to the same study, roughly one-third of women living in New South Wales are spending almost two-thirds of their day online, and it is women who are 30% more likely than men to suffer anxiety as a result of participating in social media [14]–[16]. This is even greater than the Albrecht and Michael estimate of 2014, which put the time people in developed nations spend behind the screen at an average of 69% of their waking life [17]. That is about 11 h behind screens out of 16 waking hours. But, no doubt, people are no longer sleeping 8 h with technology at arm’s reach in the bedroom, and, as a result, cracks are appearing in relationships and employment, and severe sleep deprivation and other problems are emerging as a result of screen dependencies [18].

It is difficult to say what kinds of specific addictions exist in relation to the digital world, and various countries identify market-relevant scales and measures. While countries like China, Taiwan, and South Korea acknowledge “Internet addiction” as a diagnosed medical condition, other countries, such as the United States, prefer not to be explicit about the condition, as in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V) [19]. Instead, a potential new diagnosis, Internet gaming disorder, is dealt with in an appendix of the DSM-V [20], [21]. Generally, Internet addiction is defined as “the inability of individuals to control their Internet use, resulting in marked distress and/or functional impairment in daily life” [22]. Some practitioners have likened online addiction to substance-based addiction. Usually it manifests predominantly in one of three quite separate but sometimes overlapping subtypes: excessive gaming, sexual preoccupations [23], and e-mail/text/social media messaging [24].

Shared Data and the Need to Know

For now, what has been quantified and is well known is the amount of screen time spent by individuals in front of multiple platforms: Internet-enabled television (e.g., Netflix), game consoles (for video games), desktops (for browsing), tablets (for pictures and editing), and smartphones (for social media messaging). Rest assured, the IP-enabled devices we are enjoying are passing on our details to corporations, which, in the name of billing, now accurately know our family’s every move, digitally chronicling our preferences and habits. It is a form of pervasive social and behavioral biometrics, allowing big business to know, app by app, your individual thoughts. What is happening to all this metadata? Of course, it is being repurposed to give you more of the same, generating even more profit for interested businesses. For some capitalists, there is nothing wrong with this calculated engineering. Giving you more of what you want is the new mantra, but it does have its side effects, obviously.

The Australian Psychological Society issued its “Stress and Wellbeing in Australia” report last year, which included a section on the social media fear of missing out (FOMO) [25]. Alongside FOMO, we also now have a fear of being off the grid, or FOBO, and the fear of no mobile, or NoMo [26]. I personally know adults who will not leave their homes in the morning until they have watched the top ten YouTube videos of the day, or who won’t go to sleep until every last e-mail has been answered, placed in the appropriate folder, and actioned. Screen times are forever increasing, and this has come at the expense of physical exercise and one-to-one time with loved ones.

There are reports of men addicted to video games who cannot keep a nine-to-five job, women suffering from depression and anxiety because they compare their online status with that of their peers, children who message on Instagram throughout the night, and those who are addicted to their work at the expense of all physical relationships around them. Perhaps most disturbing are the increasing cases of online porn exposure among children between the ages of 9 and 13 years in particular [27], cybersexual activities in adolescence, and extreme social media communities that spread disinformation. The assumption is that if an activity is conducted virtually, then it must not be real and must carry no physical repercussions; far from it: online addictions generate a guilt that lingers and is hard to shed. This is particularly true of misdemeanors published to the websphere that can be played back, preventing individuals from forgetting their prior actions or breaking out of stereotypes [28].

A New Tool: The AntiSocial App

FIGURE 2. The app AntiSocial measures the number of unlocks. (Image courtesy of BugBean.)

Endless pings tend to plague smartphone users if their settings have not been tweaked from the defaults [29]. Notifications and alerts are checked while users are driving (even where it is against the law to text and drive), in the middle of a conversation, in bed while being intimate, while using the restroom, or even while taking a shower. But no one has ever measured end-to-end use through actual digital instrumentation in an open market setting. Measurement has been left to self-reporting mechanisms, to desktop applications that monitor how long workers use various work applications or e-mail, or to closed surveys of populations participating in trials. But manual voluntary audit logs are often incomplete or underreport actual usage, and closed trials are not often representative of reality. At best, we can point to the South Korean smartphone verification and management system, which has helped to raise awareness that such a system is needed for intervention [30]. And yet the concern is so high that we can say with some confidence that it won’t take long for companies to come out with socially responsible technologies and software to help us remain in the driver’s seat.

FIGURE 3. AntiSocial measures app usage in minutes, allowing the user to limit or block certain apps based on a predefined consumption. (Image courtesy of BugBean.)

Enter the new app AntiSocial, created by the Melbourne, Australia, software company BugBean, which has consumer interests at heart [31]. Antisocial.io has taken the world by storm, having been downloaded from Google Play by individuals in over 150 countries within just a few months. The fact that it ranked number three among U.K. Google Play downloads after only a few days demonstrates the need for it. It will not only accurately record usage in multiple application contexts but also encourage mindfulness about usage. AntiSocial does not tell users to stop using social media or to stop video gaming for entertainment; rather, it reminds people to consider their digital calorie intake by comparing their behaviors with those of other anonymous users in their age group, occupation, and location. It is not about shaming users but about raising individual awareness and wasting less time. We say we are too busy for this or that, and yet we don’t realize we are getting lost and absorbed in online activities. How do we reclaim some of this time [32]?
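BugBean's implementation is not public in this article, so the sketch below is only my guess at the mechanics being described: counting unlocks, accumulating per-app minutes, and blocking an app once a self-set daily budget is spent. The class, field names, and the 30-minute budget are all assumptions.

```python
# Not BugBean's code: a minimal sketch of the behaviors described in
# the text: unlock counting, per-app minute tracking, and blocking an
# app once a self-set daily budget is exhausted. Names are assumed.
from collections import defaultdict

class UsageTracker:
    def __init__(self, daily_limits_min):
        self.daily_limits = daily_limits_min   # e.g., {"facebook": 30}
        self.used_min = defaultdict(float)     # minutes per app today
        self.unlocks = 0                       # handset unlock counter

    def on_unlock(self):
        self.unlocks += 1

    def on_app_session(self, app, minutes):
        self.used_min[app] += minutes

    def is_blocked(self, app):
        limit = self.daily_limits.get(app)
        return limit is not None and self.used_min[app] >= limit

tracker = UsageTracker({"facebook": 30})
tracker.on_unlock()
tracker.on_app_session("facebook", 31)
print(tracker.is_blocked("facebook"))  # True: over the 30-min budget
```

The design point worth noting is that everything here can run on the handset itself; awareness does not require shipping the usage log to anyone.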

It may well be as simple as switching off the phone in particular settings, deliberately not taking it with you on a given outing, or having a digital detox day once a week or once a month. It might be taking responsibility for the length of your screen time away from the office, or using AntiSocial to block certain apps after a self-determined amount of time has been spent on them on any given day [33]. Whatever your personal solution, taking the AntiSocial challenge is about empowering you, letting you exploit the technology at your fingertips without it exploiting you.

The AntiSocial App Will Help

FIGURE 4. AntiSocial benchmarks smartphone app usage against others in the same age group, occupation, and location. (Image courtesy of BugBean.)

Some of the social problems that arise from smartphone and/or social media addiction include sleep deprivation, anxiety, depression, a drop in grades, and anger management issues. AntiSocial provides a count of the number of unlocks you perform on your handset (Figure 2) and tells you in minutes how long you use each application (Figure 3), including cameras, Facebook and Instagram, and your favorite gaming app. It will help you to compare yourself against others and take responsibility for your use (Figure 4). You might choose to replace that time spent on Facebook with, e.g., time walking the dog, helping your kids with their homework, or even learning to cook a new recipe [34]. There is also a paired version that can be shared between parents and their children, or even colleagues and friends. You might like to set yourself a challenge to detox digitally, just as you might set fitness and weight-loss goals at your local gym. Have fun within a two-week timeframe, declaring yourself the biggest loser (of mobile minutes, that is), and report back to family and friends on what you feel you have gained. You might be surprised how liberating this actually feels.
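
For readers curious about the mechanics, the sketch below shows in Python the kind of bookkeeping a usage monitor must perform: counting unlocks, accumulating per-app foreground minutes, and enforcing a user-defined daily cap. BugBean has not published AntiSocial’s internals, so the class, method names, and limits here are hypothetical illustrations rather than the app’s actual code.

    # Minimal sketch of usage-monitor bookkeeping; names and limits are
    # hypothetical, not BugBean's implementation.
    from collections import defaultdict

    class UsageTracker:
        def __init__(self, daily_limits):
            self.daily_limits = daily_limits      # e.g., {"facebook": 30} minutes/day
            self.minutes = defaultdict(float)     # per-app foreground minutes today
            self.unlocks = 0                      # screen-unlock count today

        def record_unlock(self):
            self.unlocks += 1

        def record_foreground(self, app, minutes):
            """Accumulate foreground time; return True if the app hit its cap."""
            self.minutes[app] += minutes
            limit = self.daily_limits.get(app)
            return limit is not None and self.minutes[app] >= limit

    tracker = UsageTracker({"facebook": 30, "instagram": 20})
    tracker.record_unlock()
    if tracker.record_foreground("facebook", 31.5):
        print("facebook blocked for the rest of the day")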

You’ll come away appreciating the digital world and its conveniences a great deal more. You’ll also likely have a clearer head and not be tempted to snap back a reply online that might hurt another or inadvertently hurt yourself. And you’ll be able to use the AntiSocial app to become more social and start that invaluable conversation with those loved ones around you in the physical space.

References

1. Number of social media users worldwide from 2010 to 2020, 2017, [online] Available: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/.
2. 2 billion consumers worldwide to get smart(phones) by 2016, 2014, [online] Available: https://www.emarketer.com/Article/2-Billion-Consumers-Worldwide-Smartphones-by-2016/1011694.
3. R. Gray, "What a vain bunch we really are! 24 billion selfies were uploaded to Google last year", Daily Mail, 2016, [online] Available: http://www.dailymail.co.uk/sciencetech/article-3619679/What-vain-bunch-really-24-billion-selfies-uploaded-Google-year.html.
4. "Average youth clicks 14 selfies a day says Google study", Daily News and Analysis, 2015, [online] Available: http://www.dnaindia.com/lifestyle/report-average-youth-clicks-14-selfies-a-day-says-google-study-2117522.
5. A. Lenhart, Teens social media & technology overview 2015 Pew Internet Research, 2015, [online] Available: http://www.pewinternet.org/2015/04/09/teens-social-media-technology-2015/.
6. K. Olmstead, C. Lampe, N.B. Ellison, Social media and the workplace, Pew Research Centre, 2016, [online] Available: http://www.pewinternet.org/2016/06/22/social-media-and-the-workplace/.
7. Safe selfie, Ministry of Internal Affairs, 2015, [online] Available: https://xn-b1aew.xn-p1ai/safety_selfie.
8. H. Horton, More people have died by taking selfies this year than by shark attacks, 2015, [online] Available: http://www.telegraph.co.uk/technology/11881900/More-people-have-died-by-taking-selfies-this-year-than-by-shark-attacks.html.
9. K. Michael, "For now we see through a glass darkly", IEEE Technol. Soc. Mag., vol. 32, no. 4, pp. 4-5, 2013.
10. K. Young, "Children and technology: Guidelines for parents—rules for every age", IEEE Technol. Soc. Mag., vol. 36, no. 1, pp. 31-33, 2017.
11. S. Bennett, Social media addiction: Statistics & trends, AdWeek, 2014, [online] Available: http://www.adweek.com/digital/social-media-addiction-stars/.
12. C.S. Andreassen, T. Torsheim, G.S. Brunborg, S. Pallesen, "Development of a Facebook addiction scale", Psychol Rep., vol. 110, no. 2, pp. 501-517, 2012.
13. M.J. Angel, "Living the “iLife”—are Australians Internet junkies?", Sydney Morning Herald, 2013, [online] Available: http://www.smh.com.au/lifestyle/life/living-the-ilife-are-australians-internet-junkies-20130418-2i2fu.html.
14. M. Maldonado, "The anxiety of Facebook", PsychCentral, 2016, [online] Available: https://psychcentral.com/lib/the-anxiety-of-facebook/.
15. R. Williams, "How Facebook can amplify low self-esteem/narcissism/anxiety", PsychologyToday, 2014, [online] Available: https://www.psychologytoday.com/blog/wired-success/201405/how-facebook-can-amplify-low-self-esteemnarcissismanxiety.
16. J. Huntsdale, Unliking Facebook—the social media addiction that has you by the throat, ABC, 2015, [online] Available: http://www.abc.net.au/local/stories/2015/01/23/4177043.htm.
17. K. Albrecht, K. Michael, "We've got to do better", IEEE Technol. Soc. Mag., vol. 33, no. 1, pp. 5-7, 2014.
18. M. Gradisar, A.R. Wolfson, A.G. Harvey, L. Hale, R. Rosenberg, C.A. Czeisler, "The sleep and technology use of Americans: Findings from the National Sleep Foundation's 2011 sleep in America poll", J. Clin. Sleep Med., vol. 9, no. 12, pp. 1291-1299, 2013.
19. R. Pies, "Should DSM-V designate “Internet addiction” a mental disorder?", Psychiatry, vol. 6, no. 2, pp. 31-37, 2009.
20. N.M. Petry, F. Rehbein, C.H. Ko, C.P. O'Brien, "Internet gaming disorder in the DSM-5", Curr. Psychiatry Rep., vol. 17, no. 9, pp. 72, 2015.
21. K. Albrecht, K. Michael, M.G. Michael, "The dark side of video games: Are you addicted?", IEEE Consum. Electron. Mag., vol. 5, no. 1, pp. 107-113, 2015.
22. J.H. Ha, H.J. Yoo, I.H. Cho, B. Chin, D. Shin, J.H. Kum, "Psychiatric comorbidity assessed in Korean children and adolescents who screen positive for Internet addiction", J. Clin. Psychiatry, vol. 67, pp. 821-826, 2006.
23. K. Young, "Help for cybersex addicts and their loved ones", IEEE Technol. Soc. Mag., vol. 35, no. 4, pp. 13-15, 2016.
24. Y.H.C. Yau, M.J. Crowley, L.C. Mayes, M.N. Potenza, "Are Internet use and video-game-playing addictive behaviors? Biological clinical and public health implications for youths and adults", Minerva Psichiatr., vol. 53, no. 3, pp. 153-170, 2012.
25. L. Merrillees, "Psychologists scramble to keep up with growing social media addiction", ABC News, 2016.
26. P. Valdesolo, "Scientists study Nomophobia-fear of being without a mobile phone", Scientific American, 2015, [online] Available: https://www.scientificamerican.com/article/scientists-study-nomophobia-mdash-fear-of-being-without-a-mobile-phone/.
27. M. Ybarra, K.J. Mitchell, "Exposure to Internet pornography among children and adolescents: A national survey", Cyberpsychology and Behaviour, vol. 8, no. 5, pp. 473-486, 2005.
28. K. Michael, M.G. Michael, "The fallout from emerging technologies: Surveillance social networks and suicide", IEEE Technol. Soc. Mag., vol. 30, no. 3, pp. 13-17, 2011.
29. J. Huntsdale, Social media monitoring apps shine spotlight on Internet addiction, ABC News, 2017, [online] Available: http://www.abc.net.au/news/2017-02-22/social-media-addiction-monitoring-app/8292148.
30. S.-J. Lee, M.J. Rho, I.H. Yook, S.-H. Park, K.-S. Jang, B.-J. Park, O. Lee, D.K. Lee, D.-J. Kim, I.Y. Choi, "Design development and implementation of a smartphone overdependence management system for the self-control of smart devices", Appl. Sci., vol. 6, pp. 440-452, 2016.
31. Google Play, 2017, [online] Available: https://play.google.com/store/apps/details?id=com.goozix.antisocial_personal&hl=en.
32. P. Peeke, Hooked hacked hijacked: Reclaim your brain from addictive living: Dr. Pam Peeke TEDxWall Street, 2013, [online] Available: https://www.youtube.com/watch?v=aqhzFd4NUPI.
33. K. Michael, Facts and figures: The rise of social media addiction: What you need to know PC World, 2017, [online] Available: http://www.pcworld.idg.com.au/article/614696/facts-figures-rise-social-media-addiction/.
34. C. Dalgleish, "10 things you should do instead of sitting on social media", The Cusp., 2017, [online] Available: http://thecusp.com.au/10-things-instead-sitting-social-media/15142.
35. A. Myrick, Google Glass makes an unexpected return with its new Enterprise Edition, 2017, [online] Available: https://phandroid.com/2017/07/18/google-glass-enterprise-edition/.
36. Mental Health and Social Media, July 2017, [online] Available: https://www.youtube.com/watch?v=VDx9djMuIFg&t=140s.

Keywords

Social network services, Facebook, Mobile communication, Australia, Telecommunication services, smart phones, social networking (online), smartphone, social media, antisocial app

Citation: Katina Michael, "Are You Addicted to Your Smartphone, Social Media, and More?: The New AntiSocial App Could Help", IEEE Consumer Electronics Magazine, Vol. 6, No. 4, Oct. 2017, pp. 116 - 121, DOI: 10.1109/MCE.2017.2714421

Urban flood modelling using geo-social intelligence

Social media is not only a way to share information among a group of people but also an emerging source of rich primary data that can be crowdsourced for good. The primary function of social media is to allow people to network in near real time, yet the repository of amassed data can also be applied to decision support systems in response to extreme weather events. In this paper, Twitter is used to crowdsource information about several monsoon periods that caused flooding in the megacity of Jakarta, Indonesia. Tweets related to flooding from two previous monsoons were collected and analysed using the hashtag #banjir. By analysing the relationship between the tweets and the flood events, this study aims to create 'trigger metrics' of flooding based on Twitter activity. Such trigger metrics have the advantage of providing a situational overview of flood conditions in near real time, as opposed to formal government flood maps that are produced only on a six- to twelve-hour schedule. The aim is to provide continuous intelligence, rather than make decisions on outdated data gathered at extended discrete intervals.
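
The abstract does not reproduce the trigger-metric formula itself; as a hedged illustration only, one common way to build such a metric is to flag any interval whose hashtag volume exceeds a rolling baseline by several standard deviations. The window length and threshold below are assumptions for the sketch, not the values used in the paper.

    # Illustrative trigger metric: flag an hour when #banjir tweet volume
    # exceeds the rolling mean by k standard deviations. window and k are
    # assumptions, not the paper's parameters.
    from statistics import mean, stdev

    def flood_triggers(counts_per_hour, window=24, k=3.0):
        """Return indices of hours whose tweet count is anomalously high."""
        triggers = []
        for i in range(window, len(counts_per_hour)):
            baseline = counts_per_hour[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if counts_per_hour[i] > mu + k * max(sigma, 1.0):  # guard tiny sigma
                triggers.append(i)
        return triggers

    hourly = [12, 9, 15, 11, 8, 10, 14, 9, 13, 10, 11, 12,
              9, 10, 15, 11, 13, 9, 12, 10, 11, 14, 10, 9, 240]
    print(flood_triggers(hourly))  # the spike in the final hour is flagged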

Citation: Yang, K., Michael, K., Abbas, R. & du Chemin Holderness, T. (2018). Urban flood modelling using geo-social intelligence. International Symposium on Technology and Society, Proceedings (pp. 1-9). IEEE Xplore: IEEE.

Reconnaissance and Social Engineering Risks as Effects of Social Networking

Author Note: This paper is a "living reference work entry". First published in 2014; now in its second edition with minor changes to the original content.

… not what goes into the mouth defiles a man, but what comes out of the mouth, this defiles a man.” Matthew 15:11 (RSV)

Introduction

For decades we have been concerned with how to stop viruses and worms from penetrating organizations and how to keep hackers out of organizations by luring them toward unsuspecting honeypots. In the mid-1990s Kevin Mitnick’s “dark-side” hacking demonstrated, and possibly even glamorized (Mitnick and Simon 2002), the need for organizations to invest in security equipment like intrusion detection systems and firewalls, at every level from perimeter to internal demilitarized zones (Mitnick and Simon 2005).

In the late 1990s, there was a wave of security attacks that stifled worker productivity. During these unexpected outages, employees would take long breaks queuing at the coffee machine, spend time cleaning their desks, and try to look busy shuffling paper in their in- and out-trays. The downtime caused by malware hitting servers worldwide made clear how heavily corporations had come to rely on intranets for content and workflow management, and how little employees could do when they were not connected. Nowadays, everything in the service industry is online, and there is a known vulnerability in the requirement to be always connected. For example, you can cripple an organization if you take away its ability to accept electronic payments online, render its content management system inaccessible through denial-of-service attacks, or hack into its webpage.

When the “Melissa” virus caught employees unaware in 1999, followed in the same year by the “Explorer.zip” worm, Microsoft Office files in public folders were deleted or corrupted. At the time, anecdotal stories indicated that some people (even whole groups) lost several weeks of work after falling victim to the worm that had attacked their hard drives. This led many to seek backup copies of their files, only to find that the backups themselves had never been activated (Michael 2003).

The moral of the story is that for decades we have been preoccupied with stopping data (executables, spam, false log-in attempts, and the like) from entering the organization, when the real problem since the rise of broadband networks, 3G wireless, and more recently social media has been how to stop data from going out of the organization. While this sounds paradoxical, the major concern is not what data traffic comes into an organization but what goes out of it. We have become our own worst enemy when it comes to security in this online-everything world we live in.

In short, data leakage is responsible for most corporate damage, such as the loss of competitive information. You can secure a bucket and make it watertight, put a lid on it, even put a lock on the lid, but if that bucket has even a single tiny hole, its contents will leak out and cause spillage. Such is the dilemma of information security today: while we have become more aware of how to block out unwanted data, the greatest risk to our organization is what leaves it, through the network, through storage devices, via an employee’s personal blog, even the spoken word. It is indeed what most security experts call the “human” factor (Michael 2008).

Reconnaissance of Social Networks for Social Engineering

Social Networking

The Millennials, also known as Gen Ys, have been the subject of great discussion by commentators. If we are to believe what researchers say about Gen Ys, then it is this generation that has voluntarily gone public with private data. This generation, propelled by advancements in broadband wireless, 3G mobiles, and cloud computing, is always connected and always sharing their sentiments and cannot get enough of the new apps. They are allegedly “transparent” with most of their data exchanges. Generally, Gen Ys do not think deeply about where the information they publish is stored, and they are focused on convenience solutions that benefit them with the least amount of rework required. They tend not to like to use products like Microsoft Office and would rather work on Google Drive using Google Docs collaboratively with their peers. They are less concerned with who owns information and more concerned with accessibility and collaboration.

Gen Ys are characterized by creating circles of friends online, doing everything digitally they possibly can, and blogging to their heart’s content. In fact, Google has released a study finding that 80% of Gen Ys make up a new generation dubbed “Gen C.” Gen Cs are known as the YouTube generation and are focused on “creation, curation, connection, and community” (Google 2012). It is generally accepted in the literature that this is the generation that would rather use their personally purchased tools, devices, and equipment for work purposes, because of the ease of carrying their “life” and “work” with them everywhere they go and of seamlessly melding their personal hobbies, interests, and professional skillsets with their workplace (PWC 2012). Bring your own device (BYOD) is a movement that has emerged from this mind-set. It all has to do with customization and personalization, with working with settings that have been defined by the user, and with lifelogging in a very audiovisual way. Above all, the mantra of this generation is Open-Everything. The claim made by Gen Cs is that transparency is a great force to be reckoned with when it comes to accessibility. Gen Cs allegedly define their social network and are what they share, like, blog, and retweet. This is not without risk, although some criminologists have played down fears related to privacy and security concerns (David 2008).

Although online commentators regularly like to place us all into categories based on our age, most people we’ve spoken to through our research do not feel like part of any particular “generation.” Individuals like to think they are smart enough to exploit the technologies for what they need to achieve. People may choose not to embrace social networking for blogging purposes, for instance, but might see how the application can be put to good use within an institutional setting and educational framework. For this reason they might be heavy users of social networking applications like LinkedIn, Twitter, Facebook, and Google Latitude while also understanding their shortcomings and the potential implications of providing a real name, gender, and date of birth, as well as other personal particulars like geotagged photos or live streaming.

This ability to gather and interpret cyber-physical data about individuals and their behaviors is a double-edged sword when related back to a place of work. On the one hand, we have data about someone’s personal encounters that can be placed in context back to a place of employment (Dijst 2009). For instance, a social networking update might read: “In the morning, I met with Katina Michael, we spoke about putting a collaborative grant together on location-based tracking, and then I went and met Microsoft Research Labs to see if they were interested in working with us, and had lunch with person@microsoft.com (+person) (#microsoft) who is a senior software engineer.” This information is pretty innocent on its own, but there are many details in there that might be used for intelligence gathering: (1) a real name, (2) a real e-mail address, (3) an identifiable position in an organization, (4) potential links to an extended social network, and (5) possibly even the real physical location of the meeting, if the individual had a location-tracking feature switched on in their mobile social network app. The underlying point is that you might have nothing to fear by blogging or participating on social networks under your company identity, but your organization might have much to lose.
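
To see how trivially such details can be machine-harvested, consider the short sketch below, which pulls e-mail addresses, tagged colleagues, and hashtags out of the example update with three regular expressions. The patterns are deliberately simplified for illustration; real harvesters are far more thorough.

    # Simplified harvesting of identifiers from a status update.
    import re

    post = ("In the morning, I met with Katina Michael ... and had lunch with "
            "person@microsoft.com (+person) (#microsoft) who is a senior software engineer.")

    emails   = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", post)
    mentions = re.findall(r"\(\+(\w+)\)", post)   # the (+person) convention above
    hashtags = re.findall(r"#(\w+)", post)

    print(emails, mentions, hashtags)
    # ['person@microsoft.com'] ['person'] ['microsoft']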

Social Reconnaissance

Although many of us don’t wish to admit it, we have from time to time conducted social reconnaissance online for any number of reasons. In the most basic of cases, you might be visiting a location you have not previously been to, and you use Google Street View to take a quick look at the dwelling for identification purposes. You might also browse the web for your own name, dubbed “ego surfing,” to see how you have been cited, quoted, and tagged in images, or generally what other people are saying about you. Businesses, too, increasingly keep an eye on what is being said about their brand using automatic web alerts based on hashtags, to the extent that new schemes offering business reputation insurance have begun to emerge. The point here is not whether you conduct social reconnaissance on yourself, your family, your best friend, or even strangers who look enticing, but what hackers might learn about you, your life, and your organization by conducting both social and technical reconnaissance. Yes, indeed, if you didn’t know it already, there are people out there who will (1) spend all their work time looking up what you do (depending on who you are), (2) think about how the information they have gathered can be related back to your place of work, and (3) exploit that knowledge to conduct clever social engineering attacks (Hadnagy 2011).

Chris Hadnagy, founder of social-engineer.org, was recently quoted as saying: “[i]nformation gathering is the most important part of any engagement. I suggest spending over 50 percent of the time on information gathering… Quality information and valid names, e-mails, phone number makes the engagement have a higher chance of success. Sometimes during information gathering you can uncover serious security flaws without even having to test, testing then confirms them” (Goodchild 2012).

It is for this reason that social engineers will focus on the company website, for instance, and build their attack plan off it. Dave Kennedy, CSO of Diebold, corroborates this idea from firsthand experience: “[a] lot of times, browsing through the company website, looking through LinkedIn are valuable ways to understand the company and its structure. We’ll also pull down PDF’s, Word documents, Excel spread sheets and others from the website and extract the metadata which usually tells us which version of Adobe or Word they were using and operating system that was used” (Goodchild 2012).
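
The metadata pull Kennedy describes requires no special tooling. As a hedged sketch, the few lines below read a PDF’s document information with the open-source pypdf library; the library choice and the file name are our assumptions, not Kennedy’s toolchain.

    # Reading PDF metadata with pypdf; the file name is hypothetical.
    from pypdf import PdfReader

    reader = PdfReader("downloaded_report.pdf")
    info = reader.metadata
    if info:
        # Creator/producer strings often reveal the authoring software and OS.
        print("Author:  ", info.author)
        print("Creator: ", info.creator)    # e.g., "Microsoft Word"
        print("Producer:", info.producer)   # e.g., "Adobe PDF Library"

Similar one-liners exist for Office documents, which is precisely why many organizations now scrub metadata before publishing files to the web.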

Most of us know people who do not wish to be photographed, who have painstakingly attempted to untag themselves from a variety of images on social networks, and who have tried to delete their online presence so as to be judged before an interview panel for the person they are today, not the person they were when MySpace or Facebook first came out. But what about the separate group of people who do not acknowledge any fence between their work life and home life, accept personal e-mails on a work account, and are vocal about everything that happens to them on a moment-by-moment basis, with a disclaimer that reads: “anything you read on this page is my own personal opinion and not that of the organization I work for”? Some would say these individuals are terribly naïve and probably not acting in accord with organizational policies. The disclaimer won’t help the company, nor will it help them. Ethical hackers, who have built large empires around their tricks of the trade since the onset of social networking, have spent the last few years trying to educate us all: “data leakage is your biggest problem, folks,” not the fact that you have weak perimeters! You are, in other words, your own worst enemy, because you divulge more than you can afford to the online world.

No one is discounting that there are clear benefits in making tacit knowledge explicit by recording it in one form or another, in openly sharing our research data in a way that is conducive to ethical practices, and in making things more interoperable than they are today. But the world keeps moving so fast that, for the greater part, people are becoming complacent about how they store their datasets and about the repercussions of their actions. Those repercussions do exist, and they are real.

Social Engineering

Expert social engineers have never relied on very sophisticated ways of penetrating security systems. It is worth paying a visit to the social engineering toolkit (SET) at www.social-engineer.org, where you might learn a great deal about ethical hacking (Palmer 2001) and pentesting (Social-Engineer.Org 2012). There, social engineering tools are categorized as physical (e.g., cameras, GPS trackers, pen recorders, and radio-frequency bug kits), computer based (e.g., common user password profilers), and phone based (e.g., caller ID spoofing). In phase 1 of their premeditated attacks, social engineers are merely engaged in observing the information we each willingly put up for grabs. Beyond “the information” itself, subjects and objects are also under surveillance by the social engineers, as these might give further clues toward the potential hack. When there is enough information, a social engineer will move to phase 2, which could mean dumpster diving and collecting as much hard-copy and online evidence as possible (e.g., company website information). Social networks have given social engineers a whole new avenue of investigation. In fact, social networking will keep social engineers in full-time work forever unless we all get a lot smarter about how we use these applications.

In phase 3, the evidence gathered by the hacker is put to use as they claw their way deeper and deeper into organizational systems. It might mean having a few full names and position profiles of employees in a company and then using their “hacting” (hacking and acting) skills to get more and more data. Think of social engineers building on each step as they penetrate deeper and deeper into the administration of an organization. While we might think executives are the least targeted individuals, social engineers are brazen enough to “attack” the personal assistants of executives as well as operational staff. One of the problems associated with social networking is that executives casually hand over their logins and passwords to personal assistants to take care of their online reputations, making it increasingly easy to manipulate and hijack these spaces and use them as proof of a given action. When social engineers gain the level of authority they require to circumvent systems, or are able to use technical reconnaissance to exploit data found via social reconnaissance (or vice versa), they can access an organization’s network resources remotely, free to unleash cross-site scripting, man-in-the-middle attacks, SQL code injection, and the like.

Organizational Risks

We have thus come full circle on what social reconnaissance has to do with social networks. Social networking sites (SNS) provide social engineers with every bit of space they need to conduct their unethical hacking and their own penetration tests. You would not be the first person to admit that you have accepted a “friend” on a LinkedIn invitation without knowing, or even caring, who they are. It is just another e-mail in the inbox to clear out, and pressing accept is usually a lot easier than pressing ignore and then delete, or even blocking them for life.

Consider the problem of police in metropolitan forces creating LinkedIn profiles and accepting friends of friends on their public social network profiles. What are the implications of this from a criminal perspective? Carrying the analogy further, what of the personal gadgets they carry? How many police are currently carrying e-mails on personal mobile phones that, for security reasons, they should not be? Or, even worse, how many police have their Twitter, Facebook, or LinkedIn profiles always connected via their mobile phones? Police forces are rapidly introducing new policies to address these problems, but the problems still exist for mainstream employees of large, medium, and even small organizations. The theft does not have to be complex, like the stealing of software code or other intellectual property in designs and blueprints; it can be as simple as the theft of competitive information like customer lead lists in a Microsoft Access database, or payroll data stored in MYOB, or even the physical device itself.

Penetration testing done periodically can feed back into the development of a more robust information security life cycle, helping those in charge of information governance to act proactively and to help employees understand the implications of their practices (Bishop 2007). Trustwave (2012) advocates four types of assessment and testing. The first is straightforward, traditional physical assessment. The second is client-side penetration testing, which validates whether every staff member is adhering to policies. The third is business intelligence testing, which investigates how employees are using social networking, location-enabled devices, and mobile blogging, to ensure that a company’s reputation is not at risk and to find out what data exists publicly about an organization. And finally, red team testing is when a group of diverse subject-matter experts tries to penetrate a system, reviewing security profiles independently.

No one would ever want to be the cause of the ransacking of their organization’s online information, above and beyond what web-scraping technologies now make widely possible (Poggi et al. 2007). It would help if policies were enforceable within various settings, but these too are difficult to monitor. How does one get the message across that, while blocking unwanted traffic at the door is very important for an organization, what is even more important is noting what goes walkabout from inside the organization out? It will take some years for governance structures to adapt to this kind of thinking, because the security industry and the media have previously been focused, rightly, on denial-of-service (DoS) attacks, botnets, and the like (Papadimitriou and Garcia-Molina 2011). But it really is a chicken-and-egg problem: the more information we give out using social networking sites, the more impetus we give to DoS and DDoS attacks and the proliferation of botnets (Kartaltepe et al. 2010; Huber et al. 2009).

Conclusion

Possibly this entry has not convinced employees that greater care should be taken about what they publish online, on personal blogs, or in the pictures or footage they post to lifelogs or YouTube; but it may have convinced them that the biggest problems in security systems today arise from the information that users post publicly in environments that rely on social networks. This information is just waiting to be harvested by people the users will probably never meet physically. Employers need to educate their staff on company policies periodically and review the policies they create at least every two years. As an employer, you should also consider when your organization last performed a penetration test that took new social networking applications into account. Individuals should extend this kind of pentesting to their own online profiles and review their own personal situation. Sure, you might have nothing to hide, but you might have a lot to lose.

References

  1. Bishop M (2007) About penetration testing. IEEE Secur Privacy 5(6):84–87

  2. David SW (2008) Cybercrime and the culture of fear. Inf Commun Soc 11(6):861–884

  3. Dijst M (2009) ICT and social networks: towards a situational perspective on the interaction between corporeal and connected presence. In: Kitamura R, Yoshii T, Yamamoto T (eds) The expanding sphere of travel behaviour research. Emerald, Bingley

  4. Goodchild J (2012) 3 tips for using the social engineering toolkit, CSO Online - data protection. http://www.csoonline.com/article/05106/3-tips-for-using-the-social-engineering-toolkit. Accessed 3 Dec 2012

  5. Google (2012) Introducing Gen C: the YouTube generation. https://ssl.gstatic.com/think/docs/introducing-gen-c-the-youtube-generation-research-studies.pdf. Accessed 1 Apr 2013

  6. Hadnagy C (2011) Social engineering: the art of human hacking. Wiley, Indianapolis

  7. Huber M, Kowalski S, Nohlberg M, Tjoa S (2009) Towards automating social engineering using social networking sites. In: IEEE international conference on computational science and engineering, CSE’09, Vancouver, vol 3. IEEE, Los Alamitos, pp 117–124

  8. Kartaltepe EJ, Morales JA, Xu S, Sandhu R (2010) Social network-based botnet command-and-control: emerging threats and countermeasures. In: Applied cryptography and network security. Springer, Berlin/Heidelberg, pp 511–528

  9. Michael K (2003) The battle against security attacks. In: Lawrence E, Lawrence J, Newton S, Dann S, Corbitt B, Thanasankit T (eds) Internet commerce: digital models for business. Wiley, Milton, pp 156–159. http://works.bepress.com/michael/63/. Accessed 1 Feb 2013

  10. Michael K (2008) Social and organizational aspects of information security management. In: IADIS e-Society, Algarve, 9–12 Apr 2008. http://works.bepress.com/michael/6/. Accessed 1 Feb 2013

  11. Mitnick K, Simon WL (2002) The art of deception: controlling the Human element of security. Wiley, Indianapolis

  12. Mitnick K, Simon WL (2005) The art of intrusion. Wiley, Indianapolis

  13. Palmer CC (2001) Ethical hacking. IBM Syst J 40(3):769–780

  14. Papadimitriou P, Garcia-Molina H (2011) Data leakage detection. IEEE Trans Knowl Data Eng 23(1):51–63

  15. Poggi N, Berral JL, Moreno T, Gavalda R, Torres J (2007) Automatic detection and banning of content stealing bots for e-commerce. In: NIPS 2007 workshop on machine learning in adversarial environments for computer security. http://people.ac.upc.edu/poggi/publications/N.%20Poggi%20-%20Automatic%20detection%20and%20banning%20of%20content%20stealing%20bots%20for%20e-commerce.pdf. Accessed 1 May 2013

  16. PWC (2012) BYOD (Bring your own device): agility through consistent delivery. http://www.pwc.com/us/en/increasing-it-effectiveness/publications/byod-agility-through-consistent-delivery.html. Accessed 3 Dec 2012

  17. Social-Engineer.Org: Security Through Education (2012) http://www.social-engineer.org/. Accessed 3 Dec 2012

  18. Trustwave (2012) Physical security and social engineering testing. https://www.trustwave.com/socialphysical.php. Accessed 3 Dec 2012

Synonyms

Footprinting; Hacker; Penetration testing; Reconnaissance; Risk; Security; Self-disclosure; Social engineering; Social media; Social reconnaissance; Vulnerabilities

Glossary

Social reconnaissance: A preliminary paper-based or electronic web-based survey to gain personal information about a member or group in your community of interest. The member may be an individual friend or foe, a corporation, or the government

Social engineering: With respect to security, is the art of the manipulation of people while purporting to be someone other than your true self, thus duping them into performing actions or providing secret information

Data leakage: The deliberate or accidental outflow of private data from the corporation to the outside world, in a physical or virtual form

Online social networking: An online social network is a site that allows for the building of social networks among people who share common interests

Malware: The generic term for software that has a malicious purpose. Can take the form of a virus, worm, Trojan horse, and spyware

Citation: Katina Michael, "Reconnaissance and Social Engineering Risks as Effects of Social Networking", in Reda Alhajj and Jon Rokne, Encyclopedia of Social Network Analysis and Mining, 2017, pp. 1-7, DOI: 10.1007/978-1-4614-7163-9_401-1.

Bots Trending Now: Disinformation and Calculated Manipulation of the Masses

Bot Developments

A bot (short for robot) performs highly repetitive tasks by automatically gathering or posting information based on a set of algorithms. Internet-based bots can create new content and interact with other users just as any human would. Bots are not neutral: they always carry an underlying intent toward direct or indirect benefit or harm. The power always lies with the individual(s) or organization(s) unleashing the bot, which is imbued with its developer’s subjectivity and bias [1].

Bots can be overt or covert to subjects; they can deliberately “listen” and then in turn manipulate situations by providing real information or disinformation (also known as automated propaganda). They can target individuals or groups, successfully alter or even disrupt group-think, and equally silence activists trying to bring attention to a given cause (e.g., human rights abuses by governments). On the flip side, bots can be used as counterstrategies to raise awareness of political wrongdoing (e.g., censorship), but they can also be used for terrorist causes appealing to a global theater (e.g., ISIS) [2].

Software engineers and computer programmers have developed bots that can do superior conversational analytics, bots that analyze human sentiment on social media platforms such as Facebook [3] and Twitter [4], and bots that extract value from unstructured data using a plethora of big data techniques. It won’t be long before we have bots to analyze audio using natural language processing, commensurate bots to analyze and respond to videos uploaded to YouTube, and even bots that respond with humanlike speech contextually adapted for age, gender, and even culture. The convergence of this suite of capabilities is known as artificial intelligence [5]. Bots can be invisible; they can appear as 2D embodied agents on a screen (an avatar or dialog window), as 3D objects (e.g., toys), or as humanoid robots (e.g., Bina48 [6] and Pepper).
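
Much of this capability is more mundane than the label “artificial intelligence” suggests. The toy scorer below shows the simplest form of the sentiment analysis mentioned above, a lexicon lookup; the word lists are illustrative only, and production systems use trained models rather than hand-built sets.

    # Toy lexicon-based sentiment scorer; word lists are illustrative only.
    POSITIVE = {"good", "great", "love", "win", "support"}
    NEGATIVE = {"bad", "hate", "lose", "fake", "corrupt"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("I love this policy a great win"))   # positive
    print(sentiment("fake news from a corrupt source"))  # negative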

Bots that Pass the Turing Test

Most consumers who use instant messaging chat programs to interact with their service providers might well not realize that they have likely interacted with a chatbot able to crawl through the provider’s public Internet pages for information acquisition [7]. After three or four interactions with the bot, which can last anywhere between 5 and 10 minutes, a human customer service representative might intervene to give a direct answer to a more complex problem. This is known as a hybrid delivery model, in which bot and human work together to solve a customer inquiry. The customer may detect a slower than usual response in the chat window but is willing to wait, given the asynchronous mode of communications and the mere fact that they don’t have to converse with a real person over the telephone. The benefit to the consumer is said to be bypassing wait times for a human representative, and the benefit to the service provider is saving the cost of human resources, including ongoing training.
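
A minimal sketch of that hybrid model appears below: the bot answers from a small crawled FAQ index and escalates to a human agent after repeated failures. The FAQ entries and the three-strike escalation rule are assumptions for illustration, not any provider’s actual logic.

    # Hybrid bot/human handoff; FAQ entries and the three-strike rule are
    # illustrative assumptions.
    FAQ = {
        "opening hours": "We are open 9am-5pm, Monday to Friday.",
        "reset password": "Use the 'Forgot password' link on the login page.",
    }

    def answer(question, failed_turns):
        for key, reply in FAQ.items():
            if key in question.lower():
                return reply, failed_turns
        failed_turns += 1
        if failed_turns >= 3:                 # escalate to a human representative
            return "Connecting you to a human agent...", failed_turns
        return "Sorry, could you rephrase that?", failed_turns

    turns = 0
    for q in ["What are your opening hours?", "My invoice is wrong",
              "It is still wrong", "Please help"]:
        reply, turns = answer(q, turns)
        print(reply)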

Bots that interact with humans and go undetected as non-human are considered successful in their implementation and are said to pass the Turing Test [8]. In 1950, the English mathematician Alan M. Turing devised the “imitation game,” in which a remote human interrogator, within a fixed time frame, must distinguish between a computer and a human subject based on their replies to various questions [9].

Bot Impacts Across the Globe

Bots usually have Internet/social media accounts that look like real people, generate new content like any human would, and interact with other users. Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the U.S. presidential election [10]. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories [11]. These pervasive bots are said to have swayed public opinion.

Yet bots have not been utilized in the U.S. alone, but also in the U.K. (Brexit’s mood contagion [12]), Germany (fake news [1]), France (robojournalism [13]), Italy (popularity questioned [14]), and even Australia (the Coalition’s fake followers [15]). Unsurprisingly, political bots have also been used in Turkey (Erdogan’s 6,000-strong robot army [16], [17]), Syria (Twitter spambots [18]), Ecuador (surveillance [19]), Mexico (Peñabots [20]), Brazil, Rwanda, Russia (troll houses [21]), China (tracking Tibetan protestors [22]), Ukraine (social bots [23]), and Venezuela (6,000 bots generating anti-U.S. sentiment [24] with #ObamaYankeeGoHome [25]).

Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to show political strength, or deliberate social media messaging to perform sweeping surveillance, bots are polluting political discourse on a grand scale. So much so that some politicians are now calling for action against these automated bots, with everything from demands for ethical conduct in society, to calls for more structured regulation [26] of political parties, to the implementation of criminal penalties for offenders who create and deploy malicious bot strategies.

Provided below are demonstrative examples of the use of bots in Australia, the U.K., Germany, Syria and China, with each example offering an alternative case whereby bots have been used to further specific political agendas.

Fake Followers in Australia

In 2013, the Liberal Party internally investigated the surge in Twitter followers that the then opposition leader Tony Abbott accumulated. On the night of August 10, 2013, Abbott’s Twitter following soared from 157,000 to 198,000 [27]. In the days preceding this period, his following had been growing steadily at about 3,000 per day. The Liberal Party had to declare on its Facebook page that someone had been purchasing “fake Twitter followers for Tony Abbott’s Twitter account,” but a spokeswoman later said it was someone neither connected with the Liberal Party nor associated with the Liberal campaign, and that the damage had been done using a spambot [27], an example of which is shown in Figure 1.

Figure 1. Twitter image taken from [28].

 

The Liberals acted quickly to contact Twitter, which removed only about 8,000 “fake followers”; by that same evening, Mr. Abbott’s following had grown again to 197,000. A later analysis indicated that the Coalition had been spamming Twitter with exactly the same messages from different accounts, most of them not even from Australian shores. The ploy meant that Abbott got about 100,000 likes on Facebook in a single week, previously unheard of for any Liberal Party leader. Shockingly, one unofficial audit noted that about 95% of Mr. Abbott’s 203,000 followers were fake, with 4% “active” and only 1% genuine [15]. The audit was corroborated by the social media monitoring tools StatusPeople and SocialBakers, which determined in a report that around 41% of Abbott’s most recent 50,000 Twitter followers were fake, unmanned Twitter accounts [29]. The social media monitoring companies noted that the number of fake followers was likely even higher. It is well known that most of the Coalition’s supporters do not use social media [30]. Another example of the suspected use of bots during the 2013 election campaign can be seen in Figure 2.

Figure 2. Twitter image taken from [31].
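
Tools such as StatusPeople do not publish their classification rules, but publicly described heuristics score an account on features like follower counts, tweet history, and profile completeness. The sketch below is a hedged illustration of that style of audit; the features and thresholds are assumptions, not any vendor’s algorithm.

    # Illustrative fake-follower heuristic; features and thresholds are assumptions.
    def classify_follower(acct):
        score = 0
        if acct.get("followers", 0) < 10:      score += 1   # almost nobody follows it
        if acct.get("tweets", 0) < 5:          score += 1   # barely ever tweets
        if not acct.get("has_avatar", False):  score += 1   # default profile image
        if acct.get("following", 0) > 1000:    score += 1   # mass-follows others
        return "fake" if score >= 3 else "inactive" if score == 2 else "genuine"

    print(classify_follower({"followers": 2, "tweets": 0,
                             "has_avatar": False, "following": 1500}))  # fake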

Fake Trends and Robo-Journalists in the U.K.

As the U.K.’s June 2016 referendum on European Union membership drew near, researchers discovered that automated social media accounts were swaying votes for and against Britain’s exit from the EU. A recent study found that 54% of accounts were pro-Leave, while 20% were pro-Remain [32]. And of the 1.5 million tweets with hashtags related to the referendum between June 5 and June 12, about half a million were generated by just 1% of the accounts sampled.

As more and more of the citizenry head to social media as their primary information source, bots can sway decisions one way or the other. After the results of Brexit were disclosed, many pro-Remain supporters claimed that social media had had an undue influence by discouraging “Remain” voters from actually going to the polls (see Figure 3) [33]. While there are only 15 million Twitter users in the U.K., it is possible that robo-journalists (content-gathering bots) and human journalists who relied on fake social media content further propelled the “fake news,” affecting more than just the TwitterSphere.

 

Figure 3. Twitter image taken from [34].
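
The “half a million tweets from 1% of accounts” finding is a simple concentration measure: the share of all sampled tweets posted by the most active 1% of accounts. The sketch below computes it for a toy sample; the numbers are invented for illustration.

    # Share of tweets produced by the top pct of accounts (toy data).
    def top_percent_share(tweet_counts, pct=0.01):
        counts = sorted(tweet_counts.values(), reverse=True)
        k = max(1, int(len(counts) * pct))     # size of the top pct of accounts
        return sum(counts[:k]) / sum(counts)

    sample = {f"acct{i}": 1 for i in range(990)}          # 990 accounts, 1 tweet each
    sample.update({f"bot{i}": 300 for i in range(10)})    # 10 hyperactive accounts
    print(f"{top_percent_share(sample):.0%} of tweets from the top 1% of accounts")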

Fake News and Echo Chambers in Germany

German Chancellor Angela Merkel has expressed concern over the potential for social bots to influence this year's German national election [35]. She brought to the fore the ways in which fake news and bots have manipulated public opinion online by spreading false and malicious information. She said: “Today we have fake sites, bots, trolls - things that regenerate themselves, reinforcing opinions with certain algorithms and we have to learn to deal with them” [36]. The right-wing Alternative for Germany (AfD) already has more Facebook likes than Merkel's Christian Democrats (CDU) and the center-left Social Democrats (SPD) combined. Merkel is worried the AfD might use Trump-like strategies on social media channels to sway the vote.

It is not just that bots are generating fake news [35]; the algorithms Facebook deploys, as content is shared between user accounts, create “echo chambers” and outlets for reverberation [37]. In Germany, however, Facebook, which has been criticized for failing to police hate speech, was in 2016 legally classified as a “media company,” which means it will now be held accountable for the content it publishes. While the major political parties have responded by saying they will not utilize “bots for votes,” outside geopolitical forces (e.g., Russia) are now also chiming in, attempting to drive social media sentiment with their own hidden agendas [35].

Spambots and Hijacking Hashtags in Syria

During the Arab Spring, online activists were able to provide eyewitness accounts of uprisings in real time. In Syria, protesters used the hashtags #Syria, #Daraa, and #Mar15 to appeal for support from a global theater [18]. It did not take long for government intelligence officers to threaten online protesters with verbal assaults and one-on-one intimidation techniques. Syrian blogger Anas Qtiesh wrote: “These accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bile and insults” [38]. But when protesters continued despite the harassment, spambots created by the Bahraini company EGHNA were co-opted to create pro-regime accounts [39]. The pro-regime messages then flooded hashtags that had carried pro-revolution narratives.

This essentially drowned out protesters’ voices with irrelevant information, such as photography of Syria. @LovelySyria, @SyriaBeauty, and @DNNUpdates dominated #Syria with a flood of predetermined tweets every few minutes from EGHNA’s media server [40]. Figure 4 provides an example of such tweets. Others who were using Twitter to portray the realities of the conflict in Syria publicly opposed the use of the spambots (see Figure 5) [43].

Figure 4. Twitter image taken from [41].

Figure 5. Twitter image taken from [42].

Since 2014, the Islamic State terror group has “ghost-tweeted” its messages to make it look like it has a large, sympathetic following [44]. This has been a deliberate act to try and attract resources, both human and financial, from global constituents. Tweets have consisted of allegations of mass killings of Iraqi soldiers and more [45]. This activity shows how extremists are employing the same social media strategies as some governments and social activists.

Sweeping Surveillance in China

In May 2016, China was exposed for purportedly fabricating 488 million social media comments annually in an effort to distract users' attention from bad news and politically sensitive issues [46]. A recent three-month study found 13% of messages had been deleted on Sina Weibo (Twitter's equivalent in China) in a bid to crack down on what government officials identified as politically charged messages [47]. It is likely that bots were used to censor messages containing key terms that matched a list of banned words. Typically, this might have included words in Mandarin such as “Tibet,” “Falun Gong,” and “democracy” [48].
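
Keyword-based censorship of this kind is algorithmically trivial, which is part of why it scales so well. The sketch below shows the basic matching step; the banned list is illustrative, and real systems reportedly match thousands of terms, including deliberately misspelled variants.

    # Minimal banned-term filter; the term list is illustrative only.
    BANNED = {"tibet", "falun gong", "democracy"}

    def should_delete(post):
        text = post.lower()
        return any(term in text for term in BANNED)

    posts = ["Lovely weather in Beijing today", "Rally for democracy tonight"]
    print([p for p in posts if not should_delete(p)])
    # ['Lovely weather in Beijing today']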

China employs a classic hybrid model of online propaganda that comes into action only after some period of social unrest or protest, when there is a surge in message volumes. Typically, the task of primary messaging is left to government officials, with backup support from bots, methodically spreading messages of positivity and ensuring political security through pro-government cheerleading. While it is believed that, on average, one in every 178 posts is curated for propaganda purposes, the posts are not continuous and appear to overwhelm dissent only at key times [49]. Distraction online, it seems, is the best way to overcome opposition. That distraction is carried out in conjunction with capping the number of messages that can be sent from “public accounts” with broadcasting capabilities.

What Effect are Bots Having on Society?

The deliberate act of spreading falsehoods via the Internet, and more specifically via social media, to make people believe something that is not true is certainly a form of propaganda. While it might create short-term gains in the eyes of political leaders, it inevitably causes significant public distrust in the long term. In many ways, it is a denial of citizen service that attacks fundamental human rights. It preys on the premise that most citizens in society are like sheep: a game of “follow the leader” ensues, making a mockery of the “right to know.” We are using faulty data to come to phony conclusions, to cast our votes, and to decide our futures. Disinformation on the Internet is now rife, and if the Internet has become our primary source of truth, then we might well believe anything.

REFERENCES

1. S.C. Woolley, "Automating power: Social bot interference in global politics", First Monday, vol. 21, no. 4, 2016.

2. H.M. Roff, D. Danks, J.H. Danks, "Fight ISIS by thinking inside the bot: How we can use artificial intelligence to distract ISIS recruiters", Slate, [online] Available: http://www.slate.com/articles/technology/future_tense/2015/10/using_chatbots_to_distract_isis_recruiters_on_social_media.htm.

3. M. Fidelman, 10 Facebook messenger bots you need to try right now, Forbes, [online] Available: https://www.forbes.com/sites/markfidelman/2016/05/19/10-facebook-messenger-bots-you-need-to-try-right-now/#24546a4b325a.

4. D. Guilbeault, S.C. Woolley, "How Twitter bots are shaping the election", Atlantic, [online] Available: https://www.theatlantic.com/technology/archive/2016/11/election-bots/506072/.

5. K. Hammond, "What is artificial intelligence?", Computer World, [online] Available: http://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html

6. Bina 48 meets Bina Rothblatt - Part Two, [online] Available: https://www.youtube.com/watch?v=G5IqcRILeCc.

7. M. Vakulenko, Beyond the ‘chatbot’ - The messaging quadrant, [online] Available: https://www.visionmobile.com/blog/2016/05/beyond-chatbot-messaging-quadrant.

8. "Turing test: Artificial Intelligence", Encyclopaedia Britannica, [online] Available: https://www.britannica.com/technology/Turing-test.

9. D. Proudfoot, "What Turing himself said about the Imitation Game", Spectrum, [online] Available: http://spectrum.ieee.org/geek-life/history/what-turing-himself-said-about-the-imitation-game.

10. S.C. Woolley, Resource for understanding political bots, [online] Available: http://politicalbots.org/?p=797.

11. N. Byrnes, "How the bot-y politic influenced this election", Technology Rev., [online] Available: https://www.technologyreview.com/s/602817/how-the-bot-y-politic-influenced-this-election/.

12. I. Lapowsky, "Brexit is sending markets diving. Twitter could be making it worse", Wired, [online] Available: https://www.wired.com/2016/06/brexit-sending-markets-diving-twitter-making-worse/.

13. "Bots step into 2016 election news coverage", France 24, [online] Available: http://www.france24.com/en/20161105-bots-step-2016-election-news-coverage.

14. A. Vogt, "Hot or bot? Italian professor casts doubt on politician's Twitter popularity", The Guardian, [online] Available: https://www.theguardian.com/world/2012/jul/22/bot-italian-politician-twitter-grillo.

15. T. Peel, "The Coalition's Twitter fraud and deception", Independent Australia, [online] Available: https://independentaustralia.net/politics/politics-display/the-coalitions-twitter-fraud-and-deception.

16. C. Letsch, "Social media and opposition to blame for protests says Turkish PM", The Guardian, [online] Available: https://www.theguardian.com/world/2013/jun/02/turkish-protesters-control-istanbul-square.

17. E. Poyrazlar, "Turkey's leader bans his own Twitter bot army", Vocativ, [online] Available: http://www.vocativ.com/world/turkey-world/turkeys-leader-nearly-banned-twitter-bot-army/.

18. J.C. York, "Syria's Twitter spambots", The Guardian, [online] Available: https://www.theguardian.com/commentisfree/2011/apr/21/syria-twitter-spambots-pro-revolution.

19. R. Morla, "Ecuadorian websites report on hacking team get taken down", Panam Post, [online] Available: http://panampost.com/rebeca-morla/2015/07/13/ecuadorian-websites-report-on-hacking-team-get-taken-down/.

20. A. Najar, "¿Cuánto poder tienen los Peñabots, los tuiteros que combaten la crítica en México?" [How much power do the Peñabots, the tweeters who counter criticism in Mexico, have?], BBC, [online] Available: http://www.bbc.com/mundo/noticias/2015/03/150317_mexico_internet_poder_penabot_an.

21. S. Walker, "Salutin’ Putin: Inside a Russian troll house", The Guardian, [online] Available: https://www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house.

22. B. Krebs, "Twitter bots target Tibetan protests", Krebs on Security, [online] Available: http://krebsonsecurity.com/2012/03/twitter-bots-target-tibetan-protests/.

23. S. Hegelich, D. Janetzko, "Are social bots on Twitter political actors? Empirical evidence from a Ukrainian social botnet", Proc. Tenth Int. AAAI Conf. Web and Social Media, pp. 579-582, 2016.

24. M.C. Forelle, P.N. Howard, A. Monroy-Hernandez, S. Savage, "Political bots and the manipulation of public opinion in Venezuela", SSRN, [online] Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2635800.

29. H. Polites, The perils of polling Twitter bots, The Australian, [online] Available: http://www.theaustralian.com.au/business/business-spectator/the-perils-of-polling-twitter-bots/news-story/97d733c6650991d20a03d25a4229b42e.

30. A. Bruns, Follower accession: How Australian politicians gained their Twitter followers, SBS, [online] Available: http://www.sbs.com.au/news/article/2013/07/08/follower-accession-how-australian-politicians-gained-their-twitter-followers.

31. S. Fazakerley, Paid parental leave is a winner for Tony Abbott, [online] Available: https://twitter.com/stuartfaz/status/369068662163910656/photo/1?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

32. P.N. Howard, B. Kollanyi, "Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU Referendum", SSRN, [online] Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2798311.

33. A. Bhattacharya, Watch out for the Brexit bots, Quartz, [online] Available: https://qz.com/713980/watch-out-for-the-brexit-bots/.

34. M. A. Carter, N/A, [online] Available: https://twitter.com/rob_cart123/status/746091911354716161?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

35. C. Copley, Angela Merkel fears social bots may manipulate German election, [online] Available: http://www.smh.com.au/world/angela-merkel-fears-social-bots-may-manipulate-german-election-20161124-gsx5cu.html.

36. I. Tharoor, ‘Fake news’ threatens Germany's election too says Angela Merkel, [online] Available: http://www.smh.com.au/world/fake-news-threatens-germanys-election-too-says-angela-merkel-20161123-gsw7kp.html.

37. L. Floridi, "Fake news and a 400-year-old problem: We need to resolve the 'post-truth' crisis", The Guardian, [online] Available: https://www.theguardian.com/technology/2016/nov/29/fake-news-echo-chamber-ethics-infosphere-internet-digital.

38. A. Qtiesh, The blast inside, [online] Available: http://www.anasqtiesh.com/.

39. SYRIA - Syria's Twitter spambots, [online] Available: https://wikileaks.org/gifiles/docs/19/1928607_syria-syria-s-twitter-spam-bots-html.

40. A. Qtiesh, Spam bots flooding Twitter to drown info about #Syria Protests (Updated), [online] Available: https://advox.globalvoices.org/2011/04/18/spam-bots-flooding-twitter-to-drown-info-about-syria-protests/.

41. N/A, [online] Available: https://twitter.com/SyriaBeauty/status/202585453919076353?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

42. N/A, [online] Available: https://twitter.com/BritishLebanese/status/60075290055024640?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

43. L. Shamy, "To everyone who can hear me!", [online] Available: https://twitter.com/Linashamy/status/808422105809387520?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

44. S. Woolley, "Spammers, scammers, and trolls: Political bot manipulation", [online] Available: http://politicalbots.org/?p=295.

45. R. Nordland, A.J. Rubin, "Massacre claim shakes Iraq", The New York Times, [online] Available: https://www.nytimes.com/2014/06/16/world/middleeast/iraq.html?_r=1.

46. S. Oster, "China fakes 488 million social media posts a year: Study", Bloomberg News, [online] Available: https://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism.

47. G. King, J. Pan, M.E. Roberts, "How the Chinese Government fabricates social media posts for strategic distraction, not engaged argument", [online] Available: http://gking.harvard.edu/files/gking/files/50c.pdf.

48. Y. Yang, The perfect example of political propaganda: The Chinese Government's persecution against Falun Gong, [online] Available: http://www.globalmediajournal.com/open-access/the-perfect-example-of-political-propaganda-the-chinese-governments-persecution-against-falun-gong.php?aid=35171.

49. B. Feldman, "How the Chinese Government uses social media to stop dissent", New York Magazine, [online] Available: http://nymag.com/selectall/2016/05/china-posts-propaganda-on-social-media-as-misdirection.htm

ACKNOWLEDGMENT

This article is adapted from an article published in The Conversation titled “Bots without borders: how anonymous accounts hijack political debate,” on January 24, 2017. Read the original article at http://theconversation.com/bots-without-borders-how-anonymous-accounts-hijack-political-debate-70347. Katina Michael would like to thank Michael Courts and Amanda Dunn from The Conversation for their editorial support, and Christiane Barro from Monash University for the inspiration to write the piece. Dr. Roba Abbas was also responsible for integrating the last draft with earlier work.

Citation: Katina Michael, 2017, "Bots Trending Now: Disinformation and Calculated Manipulation of the Masses", IEEE Technology and Society Magazine, Vol. 36, No. 2, pp. 6-11.

Assessing technology system contributions to urban dweller vulnerabilities

Lindsay J. Robertson+, Katina Michael+, Albert Munoz#

+ School of Computing and Information Technology, University of Wollongong, Northfields Ave, NSW 2522, Australia

# School of Management and Marketing, University of Wollongong, Northfields Ave, NSW 2522, Australia

Received 26 March 2017, Revised 16 May 2017, Accepted 18 May 2017, Available online 19 May 2017

https://doi.org/10.1016/j.techsoc.2017.05.002

Highlights

• Individual urban-dwellers have significant vulnerabilities to technological systems.

• The ‘exposure’ of a technological system can be derived from its configuration.

• Analysis of system ‘exposure’ allows valuable insights into vulnerability and its reduction.

Abstract

Urban dwellers are increasingly vulnerable to failures of the technological systems that supply them with goods and services. Extant techniques for the analysis of those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of “exposure”, which is shown to represent the vulnerability contributed by the technological system to the end-user. The measure thus contributes to the theory and practice of vulnerability reduction. The analysis yields both specific and generalisable conclusions.

1. Introduction

1.1. The scope and nature of user vulnerability to technological systems

Today's urban dwelling individuals are end-users who increasingly depend upon the supply of goods and services produced by technological systems. These systems are typically complex [1–4], and as cities and populations grow, the demands placed on these systems lead to redesigns and increases in complexity. End-users often have no alternative means of acquiring essential goods and services, and thus a failure in a technological system has implications for the individual that are disproportionately large compared to the implications for the system operator/owner. End-users may also lack awareness of the technological systems that deliver these goods and services, including their complexity and fragility, yet may be expected to be concerned for their own security. The resulting dependence on technology justifies the concern that the systems providing users with goods and services impose a vulnerability upon them.

Researchers [5–7], alongside the tradition of military strategists [8], have presented a socio-technical perspective on individual vulnerability, drawing attention to the complexity of the technological systems tasked with the provision of essential goods and services. Meanwhile, other researchers have noted the difficulties of detailed performance modelling of such systems [9–11].

The vulnerability of an urban dweller has also been a common topic within the popular press for example “Cyber-attack: How easy is it to take out a smart city?” [12], which speculated how such phenomena as the “Internet of Things” affect the vulnerability of connected systems. Other popular press topics have included the possibility that motor vehicle systems are vulnerable to “hacking” [13].

There is furthermore a widespread recognition that systems involving many 'things that can go wrong' are fragile. Former astronaut and United States Senator John Glenn noted in his 1997 retirement speech [14]: “… the question I'm asked the most often is: ‘When you were sitting in that capsule listening to the count-down, how did you feel?’ ‘Well, the answer to that one is easy. I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of two million parts - all built by the lowest bidder on a government contract’ …” His concern was justified, and most would appreciate that similar concerns apply to more mundane situations than the Mercury mission.

National infrastructure systems are typically a major subset of the technological systems that deliver goods and services to individual end-users. Infrastructure systems are commonly considered to be inherently valuable to socio-economic development, with the maintenance of security and functionality often emphasized by authors such as Gómez et al. (2011) [15]. We argue that infrastructural systems actually have no intrinsic value to the end-user, and remain valuable only until another option can supply the goods and services to the user with lower vulnerability, higher reliability or both. If a house-scale sewage treatment technology were economical, adequately efficient and reliable, then reticulated, centralised sewage systems would have no value. We would also argue that the study of the complete technological systems responsible for delivery of goods or services to an end-user is distinguishable from the study of infrastructural systems.

For the urban apartment dweller, significant and immediate changes to lifestyle quality would occur if any of a list of services became unavailable. To name a few, these services include those that allow the flow of work information, financial transactions, availability of potable water, fuel/power for lighting, heating, cooking and refrigeration, sewage disposal, perishable foods and general transport capabilities. Each of these essential services is supplied by technological systems of significant complexity and faces an undefined range of possible hazards. This paper explores the basis for assessing the extent to which some technological systems contribute to a user's vulnerability.

Perrow [16] asserts that complexity, interconnection and possibility of major harm make catastrophe inevitable. While Perrow's assertion may have intuitive appeal, there is a need for a quantitative approach to the assessment of vulnerability. Journals (e.g. International Journal of Emergency Management, Disasters) are devoted to analysing and mitigating individuals' vulnerabilities to natural disasters. While there is an overlap of topic fields, disaster scenarios characteristically assume geographically-constrained, simultaneous disruption of a multitude of services and also implicitly assume that the geographically unaffected regions can and will supply essential needs during reconstruction. This research does not consider the effects of natural disasters, but rather the potential for component or subsystem disruptions to affect the technological system's ability to deliver goods and services to the end-user.

Some technological systems - such as communications or water distribution systems - transmit the relevant goods and services via “nodes” that serve only to aggregate or distribute the input goods and services. Such systems can be characterised as “homogeneous” and are thus distinguished from systems that progressively create as well as transmit goods, and thus require a combination of processes, input and intermediate streams, and services. The latter type of system is categorized as heterogeneous, and such heterogeneity must be accommodated in an analysis measure.

1.2. Quantification as prelude to change

We propose and justify a quantification of a technological system's contribution to the vulnerability of an urban dwelling end-user who is dependent upon its outputs. The proposed approach can be applied to arbitrary heterogeneous technological systems and can be shown to be a valid measure of an attribute that had previously been only intuitively appreciated. Representative examples are used to illustrate the theory, and preliminary results from these examples illustrate some generalised concerns and approaches to decreasing the urban dwelling end-user's “exposure” to technological systems. The investigation of a system's exposure will allow a user to make an informed assessment of their own vulnerability, and to reduce their exposure by making changes to system aspects within their control. Quantifying a system's exposure will also allow a system's owner to identify weaknesses, and to assess the effect of hypothetical changes.

2. Quantification of an individual's “exposure”: development of theory

2.1. Background to theory

Consider an individual's level of “exposure” under two scenarios: in the first, a specific service can only be supplied to an end-user by a single process, which is dependent on another process, which is in turn dependent on a third process. In the second scenario, the same service can be offered to the user by any one of three identical processes with no common dependencies. Any end-user of the service could reasonably be expected to feel more “exposed” under the first scenario than under the second. Service delivery systems are likely to involve complex processes that include at least some design redundancies, but also single points of failure, and cases where two or more independent failures would deny the supply of the service. For such systems, the “exposure” of the end user may not be obvious, and it would be useful to be able to distinguish quantitatively among alternative configurations.

The literature acknowledges the importance of a technological configuration's contribution to end-user vulnerability [6], yet such studies do not quantitatively assess the significance of the system configuration. Reported approaches to vulnerability evaluation can be broadly categorized according to whether they consider homogeneous or heterogeneous systems, whether they assume a static or a dynamic system response, and whether system configuration is, or is not, used as the basis for the development of metrics. The published literature on risk analysis (including interconnected system risks), resilience analysis, and modelling all have a bearing on the topic, and are briefly summarised below:

Risk analysis may be applied to heterogeneous or homogeneous systems; it does not analyse dynamic system responses, and limits analysis to a qualitative assessment of the effect of brainstormed hazards. Classical risk analysis [17–20] requires an initial description of the system under review; however, practitioners commonly generate descriptions lacking specific system configuration detail. While many variations are possible, it is common for an expert group to carry out the risk analysis by listing all identified hazards and associated harms. Experts then categorise identified harms by severity, and hazards according to the vulnerability of the system and the probability of hazard occurrence. Risk events are then classified by severity, based on harm magnitude and hazard probability. Undertaking a risk analysis is valuable, yet without a detailed system definition to which the assessments of hazard and probability are applied, probability-of-occurrence evaluations may be inaccurate or may fail to identify guided hazards, and the analysis may fail to identify specific weaknesses. Another issue exists if the exercise fails to account for changes to instantaneous system states: if the system is close to design capacity when a hazard occurs, the probability of the hazard causing harm is higher than if the hazard occurred at a time when the system operates at lower capacity. Finally, the use of categories that correlate harm and hazard to generate a risk evaluation is inherently coarse-grained, meaning that changes to system configuration or components may or may not trigger a change to the category that is assigned to the risk.

Another analysis approach is that of “Failure Modes and Effects Analysis” (FMEA) [21], which examines fail or go conditions of each component within a system, ultimately producing a tabulated representation of the permutations and combinations of input “fail” criteria that cause system failure. FMEA is generally used to demonstrate that a design-redundancy requirement of a tightly-defined system is met.

'Resilience' has been the topic of significant research, much of which is dedicated to the characterization of the concept and to definitional consensus. One representative definition [22] is “… the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time …” This definition is interpreted [23–25] quantitatively as a time-domain variable measuring one or more characteristics of the transient response. For complex systems, derivation of the time-domain response to a specific input disruption can be expected to be difficult, and such a derivation will only be valid for one particular initial operational state and disruption event. Each possible system input and initial condition would generate a new time-domain response, and so a virtually infinite number of transient responses would be required to fully characterize the 'resilience' of a single technological system. All such approaches implicitly assume that the disturbance is below an assumed ‘maximum tolerable’ level, so that the technological system's response will be a continuous function, i.e. the system will not actually fail. A further methodological issue is that evaluations of this kind are post-hoc observations, in which feedback from event occurrences leads to design changes. Thus, an implicit assumption exists that the intention of resilient design is to minimise the disturbance to an ongoing provision of goods and services, rather than to prevent output failure. Resilience analysis would also need to examine each permutation and combination of inputs, even when constraining its scope to failures. As the method considers system responses to external stimuli, it requires a detailed knowledge of system configuration, but is only practical for relatively simple cases (the difficulty of modelling large systems has been noted by others [9]).

A third approach constructs a model of the target system in order to infer real-world behaviour. The model - a simplified version of the real-world system - is constructed for the purposes of experimentation or analysis [26]. Applied to the context of end-user vulnerability, published simplifications of communication systems, power systems and water distribution systems commonly assume system homogeneity. For example, a graph theory approach will consider the conveyance of goods and services as a single entity transmitted across a mesh of edges and vertices that each serve to either aggregate or distribute the product. Once a distribution network is represented as a graph, it is possible to mathematically describe the interconnection between specified vertices [10], and to draw conclusions [27] regarding the robustness of the network. Tanaka and colleagues [28] noted that homogeneous networks can be represented in graph theory notation, making graph-theoretic analysis possible. Common graph theory metrics consider the connections of each edge and do not consider the possibility that an edge could carry a different service from another edge. Because graph theory metrics assume a homogeneous system, they cannot be applied directly to heterogeneous systems in which interconnections do not always carry the same goods or services.

2.2. Exposure of a technological system

In order to obtain a value for the technological system's contribution to end-user vulnerability that enables comparisons among system configurations, a quantitative analytical technique is needed. To achieve this, four essential principles are proposed to allow and justify the development of a metric that evaluates the contribution of a heterogeneous technological system to the vulnerability of an individual. These principles are:

(1) Application to individual end-user: an infrastructural system may be quite large and complex. Haimes and Jiang [11] considered complex interactions between different infrastructural systems by assigning values to degrees of interaction: the model allows mathematical exploration of failure effects but (as is acknowledged by these authors) depends on interaction metrics that are difficult to establish. This paper presents an approach that is focussed on a representative single end-user. When an individual user is considered, not only is the performance of the supply system readily defined, but the relevant system description is more hierarchical and less amorphous. Our initial work has also suggested that if failures requiring more than three simultaneous and unrelated hazards are excluded from consideration, then careful modelling can generate a defensible model without feedback loops.

(2) Service level: it is possible not only to describe goods or services that are delivered to the individual (end-user), but also to define a service level at which the specified goods or services either are, or are not, delivered. From a definitional standpoint, this approach allows the output of a technological system to be expressed as a Boolean variable (True/False), and allows the effect of the configuration of a technological system to be measured against a single performance criterion. For some goods/services, additional insights may be possible from separate analyses at different service levels (e.g. water supply analysed at “normal flow” and at “intermittent trickle”); however, for other goods/services (e.g. power supply) a single service level (power on/off) is quite reasonable.

(3) Hazard and weakness link: events external to a technology system only threaten the output of the technology system if the external events align with a weakness in the technology system. If a hazard does not align with a weakness, then it has no significance. Conversely, if a weakness exists within a technological system and has not been identified, then hazards that can align with that weakness are also unlikely to be recognised. If the configuration of a particular technology system is changed, weaknesses may be removed while other weaknesses may be added. Therefore, for each weakness added, an additional set of external events can be newly identified as hazards - and correspondingly, for each weakness that is removed, the associated hazards cease to be significant. Processes capable of failure, and input streams that could become unavailable, are weaknesses that are significant regardless of the number and/or type of hazards, of sufficient magnitude to cause failure, that might align with any specific example of such a weakness.

(4) Hazard probability: some hazards (e.g. extreme weather events) occur randomly, can be assessed statistically, and will have a higher probability of occurrence over a long time period. Terrorist actions or sabotage, in particular, do not occur randomly but must be considered as intelligently (mis)guided hazards. The effect of a guided hazard upon a risk assessment is qualitatively different from the effect of a random hazard: the guided hazard will occur every time the perpetrator elects to cause it, and therefore has a probability of 1.0. It is proposed that the significance of this distinction has not been fully appreciated. A malicious entity will seek out weaknesses, regardless of whether these have been identified by a risk assessment exercise or not. Since random and guided hazards have an equal effect, and both have a probability approaching 1 over a long time period, we argue that a risk assessment based upon the 'probability' (risk) of a hazard occurring is a concept with limited usefulness; vulnerability is more validly assessed by assuming that all hazards (terrorist action, component failure or random natural event) will occur sooner or later, hence having a collective probability of 1.0. As soon as the assumption is made that sooner or later a hazard will occur, assessment of the technological system's contribution to user vulnerability can be refocussed from consideration of hazard probability to consideration of the number and type of weaknesses with which (inevitable) hazards can align.

A heterogeneous technological system may involve an arbitrary number of linked operations, each of which (consistent with the definition stated by Slack et al. [29]) requires inputs, executes some transformation process, and produces an output that is received by a subsequent process and ultimately serves an end-user. If the output of such a system is considered to be the delivery or non-delivery of a nominated service-level output to an individual end-user, then the arbitrary heterogeneous technological system can be described by a configured system of notional AND/OR/NOT functions [30] whose inputs/outputs include unit-operations, input streams, intermediate product streams and services. For example, petrol is dispensed from a petrol station bowser to a car if fuel is present in the bulk tank, the power to a pump is available, the pipework and pump are operational, and the required control signal is valid. Hence, a notional “AND” gate with these five inputs will model the operation of the dispensing system. The valid control signal will be generated when another set of different inputs is present, and the provision of this signal can be modelled by a notional “AND” function with nominated inputs. The approach allows the operational configuration of a heterogeneous technological system to be represented by a Boolean algebraic expression. Fig. 1 illustrates the use of Boolean operations to represent a somewhat more complex technological system.

Fig. 1. Process and stream operations required for system: Boolean representation.

 

Having represented a specific technological system using a Boolean algebraic expression, a 'truth table' can be constructed to display all permutations of process and stream availabilities as inputs, and the technological system output as a single True or False value. From the truth table, count the cases in which a single input failure will cause output failure, and assign that total to the variable “E1”. Count the cases where two input failures (exclusive of inputs whose failure will alone cause output failure) cause output failure, and assign that total to “E2”. Count the cases in which three input failures cause output failure (and where neither single nor double input failures within that triple combination would alone cause output failure), assign that total to the variable “E3”, and proceed similarly for further “E” values. A simple algorithm can generate all permutations of “operate or fail” for every input process and stream. If a “1” is taken to mean “operate” and a “0” to mean “fail”, then a model with n inputs (streams and processes) has 2^n input combinations. If the Boolean expression is evaluated for each binary combination of input states (processes and streams), and the input conditions for each output-fail combination are recorded, the E1 etc. values can be computed (E1 is the number of output-failure conditions in which only a single input has failed). A truth-table approach to generating exposure metrics is illustrated in Fig. 2.

Fig. 2. Evaluation of exposure, by analysis of Boolean expression.
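
To make the enumeration concrete, the following sketch (in Python) applies the truth-table procedure described above to a simplified bowser model. The input names and the spare-pump redundancy are illustrative assumptions, not the configuration analysed in this paper.

from itertools import combinations

# Simplified bowser model (names assumed for illustration): petrol is
# dispensed if fuel, power, pipework and a valid control signal are all
# present, and at least one of two pumps is operational.
INPUTS = ["bulk_fuel", "power", "pipework", "main_pump", "spare_pump",
          "control_signal"]

def output_ok(avail):
    return (avail["bulk_fuel"] and avail["power"] and avail["pipework"]
            and (avail["main_pump"] or avail["spare_pump"])
            and avail["control_signal"])

def exposure(inputs, model, max_k=3):
    # Return [E1, E2, ... Emax_k]: counts of minimal failure combinations.
    # A combination of k failed inputs counts toward Ek only if no smaller
    # subset of it already causes output failure, per the definition above.
    def fails(failed):
        return not model({name: name not in failed for name in inputs})
    counts, minimal_sets = [], []
    for k in range(1, max_k + 1):
        count = 0
        for combo in combinations(inputs, k):
            s = set(combo)
            if any(m <= s for m in minimal_sets):
                continue  # contains a smaller failing set, so not minimal
            if fails(s):
                minimal_sets.append(s)
                count += 1
        counts.append(count)
    return counts

print(exposure(INPUTS, output_ok))  # -> [4, 1, 0]

For this toy model the metric is {4, 1, 0}: four single-point weaknesses, one two-failure combination (both pumps), and no minimal triples.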

The composite metric {E1, E2, E3 … En} is therefore mapped from the Boolean representation of the heterogeneous system, and characterizes the contribution that the weaknesses of the technological configuration make to end-user vulnerability. Indeed, for a given single output at a defined service level - described by a Boolean value representing “available” or “not available” - it is possible to isomorphically map an arbitrary technological system onto a Boolean algebraic expression. Thus, it is possible to create a homomorphic mapping (consistent with the guidance of Suppes [31]) to a composite metric that characterizes the weaknesses of the system. Furthermore, the metric allows for comparison of the exposure level of alternative technological systems and configurations.

Next, we consider whether the measure represents the proposed attribute, by considering validity criteria. Hand [32] states that construct validity “involves the internal structure of the measure and also its expected relationship with other, external measures …” and “… thus refers to the theoretical construction of the test: it very clearly mixes the measurement procedure with the concept definition”. Since the Boolean algebraic expression represents all processes, streams and interactions, it can be directly mapped to a Process Flow Diagram (PFD) and so is an isomorphic mapping of the technological system with respect to processes and streams. The truth table is a homomorphic mapping of output conditions and input combinations, with output values unambiguously derived from the input values, but the configuration cannot be unambiguously derived from the output values. The {E1, E2, E3 … En} values are therefore a direct mapping of the system configuration.

Since the configuration and components of the system are represented by a Boolean expression, and the exposure metric {E1, E2, E3 … En} is assembled directly from the representation of the technological system, it has sufficient “construct validity” in the terms proposed by Hand [32]. The representational validity of this metric to the phenomenon of interest (viz. contribution to individual end-user vulnerability) must still be considered [31,32], and two justifications are proposed. Firstly, the representation of “exposure” using {E1, E2, E3 … En} supports the common system engineering “N+1”, “N+2” design redundancy concepts [33]. Secondly, the cost of achieving a given level of design redundancy can be assumed to be related to “E” values and so enumerating these will support decisions on value propositions of alternative projects, a previously-identified criterion for a valid metric.

Generating an accurate exposure metric as described requires identification of processes and streams, which in practice requires a consideration of representation granularity. If every transistor in a processor chip were considered as a potential cause of failure, the “exposure” value calculated for the computer would be exceedingly high. If, by contrast, the computer were considered as a complete, replaceable unit, then it would be assigned an exposure value of 1. A pragmatic definition of granularity addresses this issue: if some sub-system of interest is potentially replaceable as a unit, and can be attacked separately from other sub-systems, then the sub-system of interest should be considered as a single potential source of failure. This definition allows adequate precision and reproducibility by different practitioners.

Each input to an operation within a technological system will commonly be the output of another technological system, which will itself have a characteristic “exposure”. The contribution of the predecessor system's exposure to the successor system must be calculated. This problem is generalised by considering that each input to a Boolean ‘AND’ or ‘OR’ operation has a composite exposure metric, and developing the principles by which the operation's output can be calculated from these inputs (a code sketch follows the list below). Consider, for example, an AND gate that has three inputs (A, B and C), whose composite exposure metrics are {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }. The contributory exposure components are added component-wise, hence the resulting exposure of the AND operation is {(A1+B1+C1), (A2+B2+C2), (A3+B3+C3) … (An+Bn+Cn)}. The generalised calculation of contributory exposure is more complex for the OR operation. For an OR gate with three inputs (A, B and C), each of which has composite exposure metric {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }:

• The output E1 value is 0

• The output E2 value is 0

• The output E3 value is A1 × B1 × C1, since one failure from each input must occur for the output to fail, and each combination of one single-point weakness per input is a distinct triple failure contributing to the E3 value

• The E4 and subsequent values are calculated in exactly the same way as the E3 value.
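
The combination rules can be expressed compactly in code. The sketch below follows the component-wise AND rule from the text and the reconstructed reading of the OR rule (the output fails only when every input fails, so the lowest non-zero term multiplies the inputs' single-point counts); OR terms beyond the gate's fan-in order, which mix higher-order input failures, are omitted here.

def and_exposure(*metrics):
    # AND operation: any input failure fails the output, so contributory
    # exposures are added component-wise, e.g. E1 = A1 + B1 + C1.
    n = max(len(m) for m in metrics)
    padded = [list(m) + [0] * (n - len(m)) for m in metrics]
    return [sum(col) for col in zip(*padded)]

def or_exposure(*metrics):
    # OR operation with fan-in k: the output fails only when every input
    # fails, so E1 .. E(k-1) are zero and the Ek term counts one
    # single-point weakness per input (reconstructed reading; terms
    # beyond Ek are omitted in this sketch).
    k = len(metrics)
    result = [0] * k
    product = 1
    for m in metrics:
        product *= m[0]
    result[k - 1] = product  # e.g. E3 = A1 * B1 * C1 for three inputs
    return result

print(and_exposure([1, 2, 0], [3, 0, 1]))            # -> [4, 2, 1]
print(or_exposure([2, 1, 0], [1, 1, 0], [3, 0, 0]))  # -> [0, 0, 6]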

Since the contributory system has effectively added streams and processes, the length of the output exposure vector is increased when the contributory system is considered. The proposed approach is therefore to nominate a level to which exposure values will be evaluated. If, for example, this level is set at 2, then the representation would be considered to be complete when it could be shown that no contributory system adds to the E2 values of the represented system.

3. Implications from theory

The current levels of expenditure on infrastructure “hardening” are well reported in the popular press. The theory presented is proposed to be capable, for a defined technological system, of quantitatively comparing the effectiveness of alternative projects. The described measure is also proposed to be capable of differentiating between systems that have higher or lower exposure, and thus of allowing prioritisation of effort. The following examples have undergone a preliminary analysis. The numerical outputs are dependent on system details and boundaries; nevertheless, the authors' preliminary results indicate the anticipated outputs, and are considered to demonstrate the value of the principles. The example studies are diverse and examine well-defined services and service levels for the benefit of a representative individual end-user. Each example involves a technological system (with a range of processes, intermediate streams, and functionalities) and may therefore be expected to include a number of cases in which a single stream/process failure will cause the service delivery to fail - and a number of other cases in which multiple failures would result in the non-delivery of the defined service. The analyses also collectively identify common contributors, technological gaps, and common principles that inform improvement decisions. In each example case, the delivered service and level is established, following which the example is described and the boundaries confirmed. The single-cause-of-failure items (contributors to the E1 value) are assessed first, followed by the dual-combination causes of failure (contributors to the E2 values) and then the E3 values. It is assumed that neither maintenance nor replacement of components is required within the timeframe considered - i.e., outputs will be generated as long as their designed inputs are present and the processes are functional.

3.1. Example 1: Petrol station, supply of fuel to a customer

The “service” in this case is the delivery, within standard fuel specifications (including absence of contaminants) and at standard flow rate, of petrol into a motor vehicle at the forecourt of a petrol station. The scope includes the operation of the forecourt pumps, the underground fuel storage tanks, metering and transactional services. Although storage is significant, the underground tanks must be refilled from fuel held in national reserves frequently relative to the considered timeframe, and since this refilling can only be accomplished by a limited number of approaches, it must be considered. Since many sources supply the bulk collection depot, the analysis will not go further back than the bulk storage depot. Similarly, the station is unlikely to have duplicated power feeders from the nearest substation, and so this supply must be considered. The financial transaction system, and the communications system it uses, must be included in the consideration.

On the assumption that the station is staffed, sanitary facilities (sewage, water) are also required (see Fig. 3). While completely automated “truck stop” fuel facilities exist, facilities as described are common and can reasonably be called representative. The fuel dispensing systems in both cases are almost identical; however, the automated facilities cannot normally carry out cash transactions, and the manned stations commonly sell other goods (food and drink) and may provide toilet facilities.

Fig. 3. Operation of petrol station.

In work not reported here, the exposure metrics of the contributory systems have been estimated as EFTPOS financial transaction {49, 12, 1}, staff facilities system {36, 4, 6}, power {2, 3, 0}. Based on these figures, the total exposure metric for the petrol delivered to the end user is estimated at {92, 20, 8}.
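
As a consistency check on these figures: component-wise addition of the three contributory metrics gives {87, 19, 7}, so the quoted total of {92, 20, 8} implies that the station's own local equipment contributes a further {5, 1, 1}. That residual is inferred here from the quoted totals and is not stated in the text.

external = [sum(v) for v in zip([49, 12, 1], [36, 4, 6], [2, 3, 0])]
print(external)  # -> [87, 19, 7]
local = [t - e for t, e in zip([92, 20, 8], external)]
print(local)     # -> [5, 1, 1], the inferred local residual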

In evaluating the exposure metric, the motor/petrol-pump and pipework connections do not generate E3 values (more than 3 failures would be required to cause a failure of the output function) since the petrol station has four pumps. The petrol station power supply affects several plant items that are local to the petrol station, and so is represented at the level which allows a valid assessment of its exposure contribution. For bulk petrol supply, numerous road system paths exist, and tankers and drivers are capable of bringing fuel from the refinery to the station, so these do not affect the E3 values. The electricity distribution system has more than three feed-in power stations and is assumed to have at least 3 High Voltage (HV) lines to major substations; however, local substations commonly have only two voltage-breakdown transformers, and a single supply line to the petrol station would be common. The local substation and feeders are assumed to be different for the sewage treatment plant, the bulk petrol synthesis and the banking system clearing-house (and are accounted for in the exposure metrics for those systems), but the common HV power system (national grid) does not contribute to the E3 values, and so it is not double-counted in assessing the power supply exposure of the petrol station and contributory systems. While EFTPOS and sewage systems have had high availability, this analysis emphasizes the large contribution they make to the user's total exposure, and thus suggests options for investigation.

This example illustrates the significance of the initial determination of system boundaries. This example's output is defined as fuel supplied to a user's vehicle at a representative petrol station. Other studies might consider a user seeking petrol within a greater geographic region (e.g. a neighbourhood). In that case the boundaries would be determined to consider alternative petrol stations, subsystems that are common to local petrol stations (power, sewage, financial transactions, bulk fuel) and subsystems (e.g. local pumps) where design redundancy is achieved by access to multiple stations.

3.2. Example 2: Sewage system services for apartment-dweller

Consider the service of the sanitary removal of human waste, as required, via the lavatory installed in an urban apartment discharging to a wastewater treatment plant. The products of the treatment operation are environmentally acceptable treated water discharged to waterways, and solid waste at environmentally acceptable specifications sent to landfill.

The technology description assumes that the individual user lives in an urban area with a population of 200,000 to 500,000. This size is selected because it is representative of a large number of cities. An informal survey of the configuration of sewage systems used by cities within this size range reveals a level of uniformity, and hence the configuration in the example is considered “representative”.

The service is required as needed; it commences from the water-flushed lavatory and ends with the disposal of treated material. Electric power supplies to pumping stations and to local water pumps are unlikely to have multiple feeders and will be considered back to the nearest substation. The substation can be expected to have multiple feeders, and so it is not considered necessary to trace the electric power supply further back. Significant pumping stations would commonly have a “Local/Manual” alternative control capability, in which remote control is normal but an operator can select “manual” at the pump station and thereafter operate the pumps and valves locally.

Operationally, the cistern will flush if town water is available and the lift pump is operational and power is available to the lift pump. The lavatory will flush if the cistern is full. The waste will be discharged to the first pumping station if the pipework is intact (gravity fed). The first pumping station will operate if either the main or the backup pump is operational, and power is available, and either an operator is available or the control signal is present. The control signal will be available if the signal lines are operational and the signal equipment is operational and power for servo motors is available and the remote operator or sensor is operational. The waste will be delivered to the treatment station if the major supply pipework is operational. The treatment station will be operational (i.e. able to separate and coalesce the sewage into a solid phase suitable for landfill, and an environmentally benign liquid that can be discharged to sea or river) if the sedimentation tank and discharge system are operable and the biofilm contactor is operational and the clarifier/aerator is operational and the clarifier sludge removal system is operational and power supply is available and operators are available. The sludge can be removed if roads are operational and driver, truck and fuel are available.
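
For illustration, the pumping-station and control-signal conditions above translate directly into the notional AND/OR form; the following fragment is a sketch with assumed names, not the authors' model.

def pumping_station_ok(s):
    # First pumping station: (main OR backup pump) AND power AND
    # (operator present OR control signal valid), per the description above.
    return ((s["main_pump"] or s["backup_pump"]) and s["power"]
            and (s["operator"] or s["control_signal"]))

def control_signal_ok(s):
    # Control signal: signal lines AND signal equipment AND servo power
    # AND (remote operator OR sensor), per the description above.
    return (s["signal_lines"] and s["signal_equipment"] and s["servo_power"]
            and (s["remote_operator"] or s["sensor"]))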

Several single sources of failure (contributors to the E1 value) can be discerned: the power supply to the local water pump, the gravity-fed pipework from lavatory to first pumping station, and the power supply to the first pump station. If manual control of the pumping station is possible, then the control system contributes to the E2 value; otherwise the control system wiring and logic will contribute to the E1 value. Assuming duplicate pumping station pumps, these contribute to the E2 value. Few of the treatment plant processes will be duplicated, and so these will contribute to the E1 values. For the urban population under consideration, the treatment plant is unlikely to have dual power feeds, and so the power supply contributes to the E1 value.

For real treatment plants, most processes will include bypasses or overflow provisions. If the service is interpreted to specify the discharge of environmentally acceptable waste, then these bypasses are irrelevant to this study. However, if the “service” were defined with relaxed environmental impact requirements, then the availability of bypasses would mean that treatment plant processes would contribute to E2 values. Common reports of untreated sewage discharge following heavy rainfall events indicate that the resilience of treatment plants is low.

The numerical values of exposure presented in Table 1 are based on a representative design. It is noted that design detail may vary for specific systems.

Table 1. Contributions to exposure of sewage system.

Preliminary research included commissioning a risk analysis, by a professional practitioner, of a sewage system defined to an identical scope, components and configuration as the target system. While the risk analysis generated useful results, it failed to identify all of the weaknesses that were identified by the exposure analysis.

3.3. Example 3: Supply of common perishable food to apartment-dweller

For the third example, a supply of a perishable food will be considered. The availability to a (representative) individual consumer of whole milk with acceptable bacteriological, nutritional and organoleptic properties will be considered to represent the “service” and associated service level. Fresh whole milk is selected because it is a staple food and requires a technological system that is similar to that required by other nominally processed fresh food. Nevertheless, the technological system is not trivial - the pasteurisation step requires close control if bacteriological safety is to be obtained without deterioration of the nutritional and taste qualities.

At the delivery point, a working refrigerator and electric power are required. Transit to the apartment requires electric power to operate the elevator. Transport from the retail outlet requires fuel, driver, operational vehicle, and roads. The retail outlet requires staff, electric power, functional sewage system, functional water supply, functional lighting, communications and stocktaking system and access to financial transaction capability. Transport to the retail outlet requires fuel, driver, roadways and operational trucks. The processing and packaging system requires pasteurisation equipment, control system, Cleaning In Place (CIP) system, CIP chemical supply, pipework-and-valve changeover system for CIP, electric-powered pumps, electrically operated air compressors, fuel and fired hot water heaters, packaging equipment, packaging material supplies, water supply, waste water disposal, sewage system and skilled operators. Neither the processing facilities, nor the retail outlet, nor the apartment would commonly have duplicated electric power feeders, and so these and the associated protection systems must be considered back to the nearest substation. The substation can be assumed to have multiple input feeders, and so the electric power system need not be considered any further upstream of the substation. The heating system (hot water boiler and hot water circulation system) for the pasteurizer would commonly be fired with fuel oil, but includes enough storage that it would commonly be supplied directly from a fuel wholesaler.

The processes leading from raw milk collection up to the point where pasteurised and chilled milk is packaged are defined in considerable detail by regulatory bodies, and can therefore be considered to be representative. There will be variation in packaging equipment types; however, each of these can be considered as a single process and single “weakness”. Distribution to supermarkets, retail sales and distribution to individual dwellings are similar across most of the western world and are considered to be adequately representative. Both the processing plant and the retail outlet are staffed, and so each requires an operational sewage disposal system and water supply.

Delivery of the milk will be achieved if the refrigeration unit in the apartment is operational and power is supplied to it and packaged product is supplied. Packaged product can be supplied if apartment elevator and power are available and individual transport from the retail outlet is functional and retailing facilities exist and are supplied with power and are staffed and have functional financial transaction systems. The retail facility can be staffed if skilled persons are available and transport allows them to access the facility and staff facilities (water, sewage systems) are operational. The sewage system is functional if water supply is available and pumping station is functional and is supplied with power and control systems are available. The bulk consumer packs are delivered to the retail outlet if road systems and drivers and vehicles and fuel is available. The packaged product is available from the processing facility if fuel is available to heat the pasteurizer and power is available to operate pumps and control system is operable and skilled operators are available and homogeniser is operational and compressed air for packaging equipment is available and packaging equipment is operational. Product can be made if CIP chemicals are available and CIP waste disposal is operational. Since very many suppliers and transport options are capable of supplying raw milk to the processing depot, the study need not consider functions/processes that are further upstream to the processing depot, i.e. the on-farm processes or the raw milk delivery to the processing depot.
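
As with the sewage example, the top level of this description collapses into a single AND of availability conditions; the fragment below is a sketch with assumed names, with the upstream subsystems reduced to Boolean inputs.

def milk_delivered(s):
    # Apartment-level delivery conditions from the description above
    # (illustrative sketch; each flag stands for a whole subsystem).
    return (s["refrigerator"] and s["apartment_power"] and s["elevator"]
            and s["transport_from_retail"] and s["retail_power"]
            and s["retail_staffed"] and s["payment_system"]
            and s["packaged_product_supply"])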

Several single sources of failure (contributors to the E1 value) can be identified: the power supply to the refrigerator (and cabling to the closest substation); roads, fuel, vehicle and driver; and staff and power supply (and cabling to substation) at the retail outlet. Staff facilities (and hence the exposure contributions from the sewage system) must be considered. Provided the retail outlet is able to accept cash or paper-credit notes, the payment system contributes to the E2 value; however, if the retail outlet is constrained to electronic payments then many processes associated with the communications and banking systems will contribute to the E1 value. Roads and fuel for bulk distribution will contribute to the E1 value, whereas drivers and trucks contribute to higher E values, since many drivers and trucks can be assumed to be available. The power supply to the processing and packing facility will contribute to the E1 value. The tight specifications and regulatory standards for consumer-quality milk will generally not allow any bypassing of processes, and so the major processes (reception, pasteurisation, standardisation, homogenisation and packaging) will all contribute to the E1 values. The milk processing and packaging facility will also need fuel for the CIP system and will need staff facilities - and hence the exposure contribution of the sewage system must be considered.

The examples demonstrate three commonalities. Firstly, it is both practical and informative to evaluate contributors to the E1, E2, etc. variables for a broad range of cases and technologies. Secondly, sources of vulnerability that are specific to the examples can be identified; and thirdly, principles for the reduction of vulnerability can be readily articulated. In the petrol station supply example, eliminating the operator (and with it the needs for sewage and water) would achieve a greater reduction in exposure than retaining the operator and allowing cash transactions.

Some common themes can also be inferred from the examples, suggesting general principles for exposure reduction that are likely to be applicable to other cases. The example studies contain intermediate streams: if such intermediate streams have specifications that are publicly available (open-source), there is increased opportunity for service substitution from multiple sources, and a reduction in the associated exposure values. Proprietary software and data storage are a prominent example of a lack of standardisation, despite the availability of such approaches as XML.

Currently, electronic financial transactions require secure communications between an EFTPOS terminal and the host system, and the verification and execution of the transaction between the vendor's bank account and the purchaser's bank account. These processes are necessarily somewhat opaque. Completely decentralised transactions are possible as long as both the purchaser and the vendor agree upon a medium of exchange. The implications of either accepting or not accepting a proposed medium of exchange are profound for the “exposure” created.

Although the internet was designed to be fault tolerant, its design requires network controllers to determine the path taken by data, and in practice this has resulted in huge proportions of traffic being routed through a small number of high-bandwidth channels. This is a significant issue: if the total intercontinental internet traffic (including streamed movies, person-to-person video chats, website service and also EFTPOS financial transaction data) were to be routed to a low-bandwidth channel, the financial transaction system would probably fail.

Technological systems such as the generation of power using nuclear energy are currently only economic at very large scale, and hence represent dependencies for a large number of systems and users. Conversely, a system generating power using photovoltaic cells, or using external combustion (tolerant of wide variations in fuel specification) based on microalgae grown at village level, would probably be inherently less “exposed”.

In every case where a single-purpose process forms an essential part of a system, it represents a source of weakness. By contrast, any “general purpose” or “re-purpose-able” component can, by definition, contribute to decreasing exposure. Human beings' ability to apply “common sense”, and their unlimited capacity for re-purposing, make them the epitome of multi-purpose processors. The capability to incorporate humans into a technological system is possibly the single most effective way to reduce “exposure”. The “capability to incorporate” may require changes that do not inherently affect the operation of a system but merely add a capability, such as the inclusion of a hand-crank capability in a petrol pump.

The examples (fuel supply, sewage disposal, perishable food) also uncover the existence of technological gaps for which a solution would decrease exposure. Such gaps include: the capability to store significant electrical energy locally (a more significant gap than the capability to generate locally); a truly decentralised/distributed and secure communication system, and an associated knowledge storage/access system; a fully decentralised financial system that allows safe storage of financial resources and safe transactions; a decentralised sewage treatment technology; less centralised technological approaches to the supply of both food and water; and a transport system capable of significant load-carrying (though not necessarily high speed) with low-specification roadways and broadly-specified energy supplies.

4. Discussion

The detailed definition of a technological system allows a more rigorous process for the identification of hazards, by ensuring that all system weaknesses are considered. Calculating the exposure level of a technological system is not proposed as a replacement for risk analysis, but as a technique that offers specific insights and also increases the rigour of risk analysis. A measure of contribution-to-vulnerability is both simplified and enhanced in value if the measure is indexed to the delivery of specific goods/services, at defined levels, to an individual end-user. This approach allows clarification of the extent to which a given project will benefit the individual. The analysis is applicable to any technological system supplying a specified deliverable at a given service level to a user. It is recognised that while some hazards (e.g. a major natural or man-made disaster) may affect more than one system, the analysis of the technological exposure of each system remains valid and valuable. The analysis of hazard probability is of limited value over long timeframes or when hazards are guided; characterising the number and types of weaknesses in a technological system is a better indicator of the vulnerability it contributes to the person who depends on its outputs.

An approach to quantifying the “exposure” of a technological system has been defined and justified as a valid representation of the contribution made to the vulnerability of the individual end-user. The approach generates a fine-grained metric {E1, E2, E3 … En} that is shown to measure the vulnerability incurred by the end-user: calculation of the metric is tedious but not conceptually difficult; the measure is readily verified and replicated; and the calculated values allow detailed investigation of the effect of hypothesised changes to a target system. The approach has been illustrated with a number of examples, and although only preliminary analyses have been made, the practicality and utility of the approach have been demonstrated. Only a small number of example studies have been presented, although they have been selected to address a range of needs experienced by actual urban-dwellers. In each case the scope and technologies used are proposed to be representative, and hence conclusions drawn from the example studies can be considered to be significant.

Even the preliminary analyses of the examples have indicated two distinct categories of contributors to vulnerability: weaknesses that are located close to the point of final consumption, and highly centralised technological systems such as communications, banking, sewage, water treatment and power generation. In both of these categories the user's exposure is high owing to limited design redundancy; however, the user's exposure could be reduced by selecting or deploying technology subsystems with lower exposure close to the point of use, and by using public standards to encourage multiple opportunities for service substitution. The exposure metric has been shown to provide a measure of the vulnerability contributed by a given technological system to the individual end-user, and to be applicable to representative examples of technological systems. Although results are preliminary, the metric has been shown to allow the derivation of both specific and generalised conclusions. The measure can integrate with, and add value to, existing techniques such as risk analysis.

The approach is proposed as a theory of exposure, including conceptual definitions, domain limitations, relationship-building and predictions that are proposed [34] as essential criteria for a useful theory.

Acknowledgement

This research is supported by an Australian Government Research Training Program (RTP) Scholarship.

References

[1] J. Forrester, Industrial Dynamics, MIT Press, Boston, MA (1961)

[2] R. Ackoff, Towards a system of systems concepts, Manag. Sci., 17 (11) (1971), pp. 661-671

[3] J. Sterman, Business Dynamics: Systems Thinking and Modelling for a Complex World, McGraw-Hill Boston (2000)

[4] I. Eusgeld, C. Nan, S. Dietz, System-of-systems approach for interdependent critical infrastructures, Reliab. Eng. Syst. Saf., 96 (2011), pp. 679-686

[5] T. Forester, P. Morrison, Computer unreliability and social vulnerability, Futures, 22 (1990), pp. 462-474

[6] B. Martin, Technological vulnerability, Technol. Soc., 18 (1996), pp. 511-523

[7] L. Robertson, From societal fragility to sustainable robustness: some tentative technology trajectories, Technol. Soc., 32 (2010), pp. 342-351

[8] M. Kress, Operational Logistics: the Art and Science of Sustaining Military Operations, 1-4020-7084-5, Kluwer Academic Publishers (2002)

[9] L. Li, Q.-S. Jia, H. Wang, R. Yuan, X. Guan, Enhancing the robustness and efficiency of scale-free network with limited link addition, KSII Trans. Internet Inf. Syst., 6 (2012), pp. 1333-1353

[10] A. Yazdani, P. Jeffrey, Resilience enhancing expansion strategies for water distribution systems: a network theory approach, Environ. Model. Softw., 26 (2011), pp. 1574-1582

[11] Y.Y. Haimes, P. Jiang, Leontief based model of risk in complex interconnected infrastructures, ASCE J. Infrastruct. Syst., 7 (2001), pp. 1-12

[12] Cyber-attack: how easy is it to take out a smart city?, New Scientist (4 August 2015), https://www.newscientist.com/article/dn27997-cyber-attack-how-easy-is-it-to-take-out-a-smart-city/, retrieved 22 Mar 2017

[13] Jeep drivers can be HACKED to DEATH: all you need is the car's IP address, The Register (21 Jul 2015), https://www.theregister.co.uk/2015/07/21/jeep_patch/, retrieved 22 Mar 2017

[14] J. Glenn (Sen.), quotation from retirement speech (1997), http://www.historicwings.com/features98/mercury/seven-left-bottom.html, retrieved Feb 2014

[15] C. Gómez, M. Buriticá, M. Sánchez-Silva, L. Dueñas-Osorio, Optimization-based decision-making for complex networks in disastrous events, Int. J. Risk Assess. Manag., 15 (5/6) (2011), pp. 417-436

[16] C. Perrow, Normal Accidents: Living with High-risk Technologies, Basic Books, New York (1984)

[17] P. Chopade, M. Bikdash, Critical infrastructure interdependency modelling: using graph models to assess the vulnerability of smart power grid and SCADA networks, 8th International Conference and Expo on Emerging Technologies for a Smarter World, CEWIT 2011 (2011)

[18] ISO GUIDE 73, Risk Management — Vocabulary, (2009)

[19] A. Gheorghe, D. Vamadu, Towards QVA - Quantitative vulnerability assessment: a generic practical model, J. Risk Res., 7 (2004), pp. 613-628

[20] ISO/IEC 31010, Risk Management - Risk Assessment Techniques (2009)

[21] MIL-STD-1629A, Failure Modes and Effects Analysis (FMEA) (1980)

[22] Y.Y. Haimes, On the definition of resilience in systems, Risk Anal., 29 (2009), pp. 498-501, 10.1111/j.1539-6924.2009.01216.x (see p. 498)

[23] K. Khatri, K. Vairavamoorthy, A new approach of risk analysis for complex infrastructure systems under future uncertainties: a case of urban water systems, Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences (2011), pp. 846-856

[24] Y. Shuai, X. Wang, L. Zhao, Research on measuring method of supply chain resilience based on biological cell elasticity theory, IEEE Int. Conf. Industrial Eng. Eng. Manag. (2011), pp. 264-268

[25] A. Munoz, M. Dunbar, On the quantification of operational supply chain resilience, Int. J. Prod. Res., 53 (22) (2015), pp. 6736-6751

[26] A.M. Law, Simulation Modelling and Analysis, McGraw-Hill, New York (2007)

[27] A.-L. Barabási, E. Bonabeau, Scale-free networks, Sci. Am., 288 (2003), pp. 60-69

[28] G. Tanaka, K. Morino, K. Aihara, Dynamical robustness in complex networks: the crucial role of low-degree nodes, Nat. Sci. Rep., 2/232 (2012)

[29] N. Slack, A. Brandon-Jones, R. Johnston, Operations Management, 7th ed. (2013), ISBN-13: 978-0273776208, ISBN-10: 0273776207

[30] ISO/IEC 9075–2, Information Technology – Database Languages – SQL, (2011)

[31] P. Suppes, Measurement theory and engineering, Dov M. Gabbay, Paul Thagard, John Woods (Eds.), Handbook of the Philosophy of Science, Philosophy of Technology and Engineering Sciences, vol. 9, Elsevier BV (2009)

[32] D. Hand, Measurement Theory and Practice: The World through Quantification, Wiley (2004), ISBN 978-0-470-68567-9

[33] Ponemon Institute, 2013 Study on Data Center Outages (2013), http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/2013_emerson_data_center_outages_sl-24679.pdf, retrieved Dec 2016

[34] J.G. Wacker, A definition of theory: research guidelines for different theory-building research methods in operations management, J. Oper. Manag., 16 (4) (1998), pp. 361-385

Vitae

L Robertson is a professional engineer with a range of interests, including researching the level and causes of vulnerability that common technologies incur for individual end-users.

Dr Katina Michael, SMIEEE, is a professor in the School of Computing and Information Technology at the University of Wollongong. She has a BIT (UTS), MTransCrimPrev (UOW), and a PhD (UOW). She previously worked for Nortel Networks as a senior network and business planner until December 2002. Katina is a senior member of the IEEE Society on the Social Implications of Technology where she has edited IEEE Technology and Society Magazine for the last 5+ years.

Albert Munoz is a Lecturer in the School of Management, Operations and Marketing, Faculty of Business, at the University of Wollongong. Albert holds a PhD in Supply Chain Management from the University of Wollongong. His research interests centre on experimentation with systems under uncertain conditions, typically using discrete event and system dynamics simulations of manufacturing systems and supply chains.

1 Abbreviation: Failure Modes and Effects Analysis, FMEA.

2 The terminology used is typical for Australasia: other locations may use different terminology (e.g. “gas” instead of “petrol”).

Keywords

Technological vulnerability, Exposure, Urban individual, Risk


Citation: Lindsay J. Robertson, Katina Michael, Albert Munoz, "Assessing technology system contributions to urban dweller vulnerabilities", Technology in Society, Vol. 50, August 2017, pp. 83-92, DOI: https://doi.org/10.1016/j.techsoc.2017.05.002

Societal Implications of Wearable Technology

Source: https://www.apadmi.com/the-implications-of-wearable-technology-for-healthcare-organisations/ (opening art only)

Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”

Abstract

This chapter presents a set of scenarios involving the GoPro wearable Point of View (PoV) camera. The scenarios are meant to stimulate discussion about acceptable usage contexts with a focus on security and privacy. The chapter provides a wide array of examples of how overt wearable technologies are perceived and how they might/might not be welcomed into society. While the scenario is based at the University of Wollongong campus in Australia, the main implications derived from the fictitious events are useful in drawing out the predicted pros and cons of the technology. The scenarios are interpreted and the main thematic issues are drawn out and discussed. An in-depth analysis takes place around the social implications, the moral and ethical problems associated with such technology, and possible future developments with respect to wearable devices.

Introduction

This chapter presents the existing, as well as the potential future, implications of wearable computing. Essentially, the chapter builds on the scenarios presented in an IEEE Consumer Electronics Magazine article entitled: “Trialability on the Run” (Gokyer & Michael, 2015). In this chapter the scenarios are interpreted qualitatively using thick description and the implications arising from these are discussed using thematic analysis. The scenario analysis is conducted through deconstruction, in order to extract the main themes and to grant the reader a deeper understanding of the possible future implications of the widespread use of wearable technology. First, each of the scenarios is analyzed to draw out the positive and the negative aspects of wearable cameras. Second, the possible future implications stemming from each scenario context are discussed under the following thematic areas: privacy, security, society, anonymity, vulnerability, trust and liberty. Third, direct evidence is provided using the insights of other research studies to support the conclusions reached and to identify plausible future implications of wearable technologies in particular use contexts in society at large.

The setting for the scenario is a closed-campus environment (a large Australian university). Specific contexts, such as a lecture theatre, restroom, café, bank, and library, are chosen to provide a breadth of use cases within which to analyze the respective social implications. The legal, regulatory, and policy-specific bounds of the study are taken from current laws, guidelines and normative behavior, and are used as signposts for what should, or should not, be acceptable practice. The outcomes illustrate that the use cases are not so easily interpretable, given the newness of the emerging technology of wearable computing, especially overt head-mounted cameras, which draw a great deal of attention from bystanders. Quite often the use of a head-mounted camera is opposed without qualified reasoning. “Are you recording me? Stop that please!” is a common response by individuals to audio-visual body-worn recording technology in the public space (Michael & Michael, 2013). Yet companies such as Google have been able to use fleets of cars to gather imagery of homes and streets, with relatively little problem.

There are, indeed, laws that pertain to the misuse of surveillance devices without a warrant, to the unauthorized recording of someone else whether in a public or private space, and to voyeuristic crimes such as upskirting. While there are laws, such as the Workplace Surveillance Act 2005 (NSW), asserting a set of rules for surveillance (watching from above), the law regarding sousveillance (watching from below) is less clear (Clarke, 2012). We found that, while public spaces like libraries and lecture theatres have clear policy guidelines to follow, the actual published policies, and the position taken by security staff, do not in fact negate the potential to indirectly record another. Several times, through informal questioning, we found the strong line “you cannot do that because we have a policy that says you are not allowed to record someone” to be unsubstantiated by real, enforceable university-wide policies. Such shortcomings are now discussed in more detail against scenarios showing various sub-contexts of wearable technology in a closed-campus setting.

Background

The term sousveillance has been defined by Steve Mann (2002) to denote a recording done from a portable device such as a head-mounted display (HMD) unit in which the wearer is a participant in the activity. In contrast to wall-mounted fixed cameras typically used for surveillance, portable devices allow inverse surveillance: recordings from the point of view of those being watched. More generally, point of view (POV) has its foundations in film, and usually depicts a scene through the eyes of a character. Body-worn video-recording technologies now mean that a wearer can shoot film from a first-person perspective of another subject or object in his or her immediate field of view (FOV), with or without a particular agenda.

During the initial rollout of Google Glass, explorers realized that recording other people with an optical HMD unit was not perceived as an acceptable practice, despite the fact that the recording was taking place in a public space. Google’s apparent blunder was to assume that the device, worn by 8,000 individuals, would go unnoticed, like shopping mall closed-circuit television (CCTV). Instead, what transpired was a mixed reaction by the public—some nonusers were curious and even thrilled at the possibilities claimed by the wearers of Google Glass, while some wearers were refused entry to premises, fined, verbally abused, or even physically assaulted by others in the FOV (see Levy, 2014).

Some citizens and consumers have claimed that law enforcement (if approved through the use of a warrant process) and shop owners have every right to surveil a given locale, dependent on the context of the situation. Surveilling a suspect who may have committed a violent crime or using CCTV as an antitheft mechanism is now commonly perceived as acceptable, but having a camera in your line of sight record you—even incidentally—as you mind your own business can be disturbing for even the most tolerant of people.

Wearers of these prototypes, or even of fully-fledged commercial products like the Autographer (see http://www.autographer.com/), claim that they record everything around them as part of a need to lifelog or quantify themselves for reflection. Technology such as the Narrative Clip may not capture audio or video, but even still shots are enough to reveal someone else’s whereabouts, especially if they are innocently posted on Flickr, Instagram, or other social media. Many of these photographs also have embedded location and time-stamp metadata stored by default.
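To make the point concrete, the short sketch below (Python, using the Pillow imaging library) shows how readily such embedded time and location metadata can be read from a photograph. The file name is hypothetical, and photos whose EXIF data has been stripped will simply yield nothing.

```python
# A minimal sketch of how easily a photo's embedded metadata can be read.
# "holiday.jpg" is a hypothetical file name used for illustration only.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("holiday.jpg")
exif = img._getexif() or {}          # raw EXIF tags, if present

for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)
    if tag == "DateTimeOriginal":
        print("Taken at:", value)    # time stamp stored by default
    elif tag == "GPSInfo":
        gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
        # Raw rational coordinates; conversion to decimal degrees omitted.
        print("Location:", gps.get("GPSLatitude"), gps.get("GPSLongitude"))
```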

A tourist might not have malicious intent by showing off in front of a landmark, but innocent bystanders captured in the photo could find themselves in a predicament given that the context may be entirely misleading.

Wearable and embedded cameras worn by any citizen carry significant and deep personal and societal implications. A photoborg is one who mounts a camera onto any aspect of the body to record the space around himself or herself (Michael & Michael, 2012). Photoborgs may feel entirely free, masters of their own destiny, even safe in the knowledge that their point of view is being noted for prospective reuse. Indeed, the power that photoborgs have is clear when they wear the camera. It can be even more authoritative than the unrestricted overhead gazing of traditional CCTV, given that sousveillance usually happens at ground level. Although photoborgs may be recording for their own lifelog, they will inevitably capture other people in their field of view, and unless these fellow citizens also become photoborgs themselves, there is a power differential. Sousveillance carries with it huge socioethical, environmental, economic, political, and spiritual overtones. The narrative that informs sousveillance is more relevant than ever before due to the proliferation of new media.

Sousveillance grants citizens the ability to combat the powerful using their own evidentiary mechanism, but it also grants other citizens the ability to put on the guise of the powerful. The evidence emanating from cameras is endowed with obvious limitations, such as the potential for the impairment of the data through loss, manipulation, or misrepresentation (Michael, 2013). The pervasiveness of the camera that sees and hears everything can only be reconciled if we know the lifeworld of the wearer, the context of the event being captured, and how the data will be used by the stakeholder in command.

Sousveillance happens through the gaze of the one wearing the camera, just like a first-person shooter in a video game. In 2003, WIRED published an article (Shachtman, 2003) on the potentiality to lifelog everything about everyone. Shachtman wrote:

The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable… The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read… All of this—and more—would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health… This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life.”

This goes to show how any discovery can be tailored toward any end. Lifelogging is meant to sustain the power of the individual through reflection and learning, to enable growth, maturity and development. Here, instead, it has been hijacked by the very same stakeholder against whom it was created to gain protection.

Sousveillance also drags into the equation innocent bystanders going about their everyday business who just wish to be left alone. When we asked wearable 2.0 pioneer Steve Mann in 2009 what one should do if bystanders in a recording in a public space questioned why they were being recorded without their explicit permission, he pointed us to his “request for deletion” (RFD) web page (Mann, n.d.). This is admittedly only a very small part of the solution and, for the most part, untenable. One just needs to view a few minutes of the Surveillance Camera Man Channel (http://tinyurl.com/lsrl6u9) to understand that people generally do not wish to be filmed in someone else’s field of view. Some key questions include:

1. In what context has the footage been taken?

2. How will it be used?

3. To whom will the footage belong?

4. How will the footage taken be validated and stored?

Trialability on the Run

In this section, plausible scenarios of the use of wearable cameras in a closed-campus setting are presented and analyzed in the story “Trialability on the Run”. Although the scenarios are not based directly on primary sources of evidence, they do provide conflicting perspectives on the pros and cons of wearables. As companies engage in ever-shorter market trialing of their products, the scenarios demonstrate what can go wrong with an approach that effectively says: “Let’s unleash the product now and worry about repercussions later; they’ll iron themselves out eventually—our job is solely to worry about engineering.” The pitfalls of such an approach are the unexpected and asymmetric consequences that ensue: for instance, someone wearing a camera breaches my privacy and, although the recorded evidence has affected no one else, my life is affected adversely. Laws, and organizational policies especially, need to respond quickly as advances in technologies emerge.

“Trialability on the Run” is a “day in the life” scenario that contains nine parts, set on a closed campus in southern New South Wales. The main characters are Anthony, the owner and wearer of the head-mounted GoPro (an overt audio-visual recording device), and his girlfriend Sophie. The narrator follows and observes the pair as they work their way around the campus in various sub-contexts, coming into contact with academic staff, strangers, acquaintances, cashiers, banking personnel, librarians, fellow university students and finally security personnel. Anthony takes the perspective that his head-mounted GoPro is no different from the mounted security surveillance cameras on lampposts and building walls, from the in-lecture-theatre recordings captured by the Echo360 (Echo, 2016), or even from portable smartphone cameras that are handheld. He is bewildered that he draws so much attention to himself as the photoborg camera wearer, since he perceives he is performing exactly the same function as the other cameras on campus and has only the intent of capturing his own lifelog. Although he is not doing anything wrong, Anthony looks different and stands out as a result (Surveillance Camera Man, 2015). His girlfriend, Sophie, is not convinced by Anthony’s blasé attitude and tries to press a counter-argument that Anthony’s practice is unacceptable in society.

Scenario 1: The Lecture

Context

In this scenario, the main character, Anthony, arrived at the lecture theatre after the lesson had already begun, intending to record the lecture instead of taking notes. Being slightly late, he decided to sit in the very front row. All the students, and eventually the lecturer, saw the head-mounted camera he was wearing. The lecturer continued his lecture without showing any emotion. Some students giggled at the spectacle and others were very surprised by what they observed, as it was quite probably the first time they had seen someone wearing a camera to record a lecture. The students were generally not bothered by the head-mounted recording device in full view, as it was focused on the lecture material and the lecturer, so proceedings continued as they otherwise would have, had the body-worn camera not been present. Students are very used to surveillance cameras on campus; this was just another camera as far as they were concerned, and besides, no one objected: they were too busy taking notes and listening to instruction about the structure and form of the final examination in their engineering coursework.

Wearable User Rights and Intellectual Property

In some of the lecture theatres on university campuses, there are motion sensor based video cameras that make full audio-visual recordings of the lectures (Echo, 2016). Lecturers choose to record their lectures in this manner as available evidence of educational content covered for students, especially for those who were unable to attend the lecture, for those for whom English is a second language or for those who like to listen to lecture content as a form of revision. In this regard, there are no policies in place to keep the students from making audio-visual recordings of the lecture in the lecture theatres.

Lecture theatres are considered public spaces and many universities allow students to attend lectures whether or not they are enrolled in that particular course or subject. Anyone from the public could walk into lectures and listen, as there is no keycard access. Similar to centrally organized Echo 360 audio-visual recordings, Anthony is taping the lecture himself and he does not see any problems with distributing the recording to classmates if someone asks for it to study for the final examination. After all, everyone owns a smartphone and anyone can record the lecture with the camera on their smartphones or tablet device.

This scenario raises a small number of questions that need to be addressed foremost, such as “What is the difference between making a recording with a smartphone and with a head-mounted camera?” or “Does it only start being a problem when the recording device is overt and can be seen?” If one juxtaposes a surveillance camera covertly integrated into a light fixture with an overt head-mounted camera, why should the two devices elicit such different responses from bystanders?

These questions do not, however, address the fact that an open discussion is required on whether or not we are ready to see a great deal of these sousveillers in our everyday life, and if we are not, what are we prepared to do about it? Mann (2005) predicted the use of sousveillance would grow greatly when the sousveillance devices acquired non-traditional uses such as making phone calls, taking pictures, and having access to the Internet. This emergence produces a grey area, generating the requirement for laws, legislation, regulations and policies having to be amended or created to address specific uses of the sousveillance devices in different environments and contexts. Clarke (2014) identifies a range of relevant (Australian) laws to inform policy discussion and notes the inadequacy of current regulation in the face of rapidly emerging technology.

Scenario 2: The Restroom

Context

In the restroom scenario, Anthony walked into a public restroom after his lecture, forgetting that his head-mounted camera was still on and recording. While unintentionally recording, Anthony received different reactions from the people present in the restroom, all of whom saw the camera and suspected some foul play. The first person, who was leaving as Anthony was entering the restroom, did not seem to care; another tried to ignore Anthony and left as soon as he was finished. The last person became disturbed by the fact that he was being recorded in what he obviously deemed to be a private place. Later that day, when Anthony searched for the lecture recordings on the tape, he got a sense of wrongdoing after realizing that, in the restroom, he had accidentally left the camera on in record mode. He was surprised, in hindsight, that he did not get any major reactions: no individual openly expressed discontent, and he received no specific questions or pronouncements of discomfort. If it were not for the facial expressions to which Anthony was privy, he would not have been able to tell that anybody was upset, as there was no verbal cue or physical retaliation. Of course, the innocent bystanders, going about their business, would not have been able to assume that the camera was indeed rolling.

Citizen Privacy, Voyeurism, and a Process of Desensitization

Restrooms, change rooms, and shower blocks on campus are open to the public, but they are also considered private spaces given that people are engaged in private activities (e.g. showering), and are, at times, not fully clothed. The natural corollary, then, would lead to the expectation that some degree of privacy should be granted. Can anyone overtly walk into a public toilet sporting a camera and record you while you are trying to, for modesty’s sake, do what should only be done in the restroom? Is the body-worn technology becoming so ubiquitous that no one even says a word about something that they can clearly see is ethically or morally wrong? Steve Mann has argued that surveillance cameras in the restroom are an invasion of privacy more abhorrent than body-worn cameras owned by everyday people. The direct approachability of the photoborg differs from an impersonal CCTV.

There is a long discussion to be had on personal security. For instance, will we all, one day, be carrying such devices as we seek to lifelog our entire histories, or acquire an alibi for our whereabouts should we be accused of a given crime, as portrayed in film in the drama “The Entire History of You” (Armstrong and Welsh, 2011)? It is very common to find signs prohibiting the use of mobile phones in leisure centers, swimming pools and the like. There remains, however, much to be argued around safety versus privacy trade-offs, if it is acceptable practice to rely on closed circuit television (CCTV) in public spaces.

University campuses are bound by a number of laws, at federal or state level, including (in this case) the Privacy Act 1988 (Cth), the Surveillance Devices Act 2007 (NSW), and the Workplace Surveillance Act 2005 (NSW). This scenario points out that, even though there cannot possibly be surveillance cameras in restrooms or change rooms, the Surveillance Devices Act 2007 (NSW) does not specify provisions about sousveillance in those public/private spaces. In Clarke’s (2014) assessment of the NSW Crimes Act 1900, voyeurism offence provisions exist relating to photographs. They pertain to photographs that are of a sexual and voyeuristic nature, usually showing somebody’s private parts. These photographs are also taken without the consent of the individual and/or taken in places where a person would reasonably expect to be afforded privacy (toilets, showers, change rooms, etc.). When a person claims to have had his or her privacy breached, however, exceptions to this rule apply if s/he is a willing participant in the activity, or if circumstances indicate that the persons involved did not really care if they were seen by onlookers (Clarke, 2014). It is even less likely to be illegal if the act was conducted in a private place but with doors open in full view (Clarke, 2014). Thus, the law represents controls over a narrow range of abuses (Clarke, 2014), and, unless they find themselves in a predicament and seek further advice, the general populace is unaware that the law does not protect them entirely and that much depends on the context.

Scenario 3: The Corridor

Context

This scenario depicts a conversation with Sophie (Anthony’s girlfriend and fellow undergraduate coursework student) in the corridor, where Anthony bumped into their mutual friend, Oxford, as they vacated the lecture theatre. Throughout the conversation, Anthony demonstrated confidence in his appearance. He believed wearing a head-mounted camera was not a problem and, consequently, did not think he was doing anything wrong. Sophie, on the other hand, questioned whether body-worn cameras should be used without notifying the people in their vicinity. Oxford, an international student, became concerned about the possible future uses of the recording that featured him. His main concern was that he did not want the footage to be made publicly available, given how he looked and the clothing he was wearing. Although Oxford had no objection to Anthony keeping the footage for his personal archives, he did not wish for it to be splattered all over social media.

Trust, Disproportionality, and Requests for Deletion

The two student perspectives of “recording” a lifelog are juxtaposed. Anthony is indifferent as he feels he is taping “his” life as it happens around him through time. Oxford, on the other hand, believes he has a right to his own image, and that includes video (Branscomb, 1994). Here we see a power and control dominance occurring. The power and control is with the photoborg who has the ability to record, store and share the information gathered. On the other hand, the bystander is powerless and at the mercy of the photoborg, unless he/she voices otherwise explicitly. In addition, bystanders may not be so concerned with an actual live recording for personal archives, but certainly are concerned about public viewing. Often lifelogs are streamed in real-time and near real-time, which does not grant the bystander confidence with respect to acceptable use cases.

In the scenario, Sophie poses a question to those who are being incidentally recorded by Anthony’s GoPro to see whether there is an expectation among her peers to get individual consent prior to a recording taking place. Oxford, the mutual acquaintance of the protagonists, believes that consent is paramount in this process. This raises a pertinent question: what about the practice of lifelogging? Lifeloggers could not possibly have the consent of every single person they encounter in a daily journey. Is lifelogging acceptable insofar as lifeloggers choose not to share recordings online or anywhere public? Mann (2005) argues that a person wishing to do lifelong sousveillance deserves certain legal protections against others who might attempt to disrupt continuity of evidence, say for example, while going through immigration. On the other hand, Harfield (2014) extends the physical conception of a private space in considering the extent to which an individual can expect to be private in a public space, defining audio-visual recording of a subject without their consent in public spaces as a moral wrong and seeing the act of sousveillance as a moral intrusion against personal privacy.

In the scenario, Sophie pointed out that if someone wanted to record another individual around them, they could easily do so covertly using everyday objects with embedded covert cameras, such as a pen, a key fob, a handbag or even a mobile phone. Sophie was able to put into perspective the various gazes from security cameras when compared with sousveillance. The very thought of the mass surveillance she was under at every moment provided a sobering counterbalance, allowing her to experience tolerance for the practice of sousveillance. Yet for Oxford, the security cameras mounted on the walls and ceilings of the Communications Building provided a level of safety for international students. Oxford clearly justified “security cameras for security reasons”, but could not justify additional “in your face” cameras. Oxford did not wish to come across a sousveiller, because the recordings could be made publicly available on the Internet without his knowledge. Further, a clear motive for the recordings had not been conveyed by the camera holder (Michael et al., 2014).

Between 1994 and 1996, Steve Mann conducted a Wearable Wireless Webcam experiment to visually record and continuously stream live video from his wearable computer to the World Wide Web. Operating 24 hours a day (on and off), this had the effective purpose of capturing and archiving day-to-day living from the person’s own perspective (Mann, 2005). Mann has argued that in the future, devices that captured lifelong memories and shared them in real-time would be commonplace and worn continuously (Mann, 2013).

It is true that almost everywhere we go in our daily lives someone, somewhere, is watching. But in the workplace especially, where there is intent to watch an employee, the law states that individuals must be notified that they are being watched (Australasian Legal Information Institute, 2015). When it comes to sousveillance, will this be the case as well? In Australia, the use, recording, communication or publication of recorded information from a surveillance device under a warrant is protected data and cannot be openly shared, according to the Surveillance Devices Act 2004 (Cth). In the near future, when we are making a recording with an overt device, a prospective sousveillance law might posit: “You can see that I am recording you, but this is for personal use only, and as long as I do not share this video with someone, you cannot do or say anything to stop me.” Mann (2005) claims that sousveillance, unlike surveillance, will require, and receive, strong legal support through dedicated frameworks for its protection, as well as for its limitation (Mann & Wassell, 2013).

A person can listen to, or visually monitor, a private activity if s/he is a participant in the activity (Australasian Legal Information Institute, 2014). However, the legislation forbids a person from installing, using or maintaining an optical surveillance device or a listening device to record a private activity, whether the person is a party to the activity or not. The penalties do not apply to the use of an optical surveillance device or listening device resulting in the unintentional recording or observation of a private activity (Surveillance Devices Act 1998 (WA)). Clarke (2014) combines optical surveillance device regulation with the regulation for listening devices and concludes that a person can listen to conversations if they are a participant in the activity but cannot make audio or visual recordings. The applications of the law cover only a limited range of situations, and conditions may apply for prosecutions.

Scenario 4: Ordering at the Café

Context

Anthony and Sophie approached the counter of a café to place their orders, and Anthony soon found himself engaged in a conversation with the attendants at the serving area about the camera he was wearing. He asked the attendants how they felt about being filmed. The male attendant said he did not like it very much, and the female barista said she would not mind being filmed. The manager did not comment on any aspect of the head-mounted GoPro recording taking place, but he did make some derogatory comments about Anthony’s behavior to Sophie. The male attendant became disturbed by the idea of someone recording him while he was at work, and he tried to direct Anthony to the manager, knowing that the manager would not like it either, and that it would disturb him even more. Conversely, the female barista was far from upset about the impromptu recording, acting as if she were on a reality TV show, and taken by the fact that someone seemed to show some interest in her, breaking the normal daily routine.

Exhibitionism, Hesitation, and Unease

People tend to care a great deal about being watched over or scrutinized, and this is reflected in their social behaviors and choices, which are altered as a result without them even realizing it (Nettle et al., 2012). Thus, some people who generally do not like being recorded (like the male attendant) might be subconsciously rejecting the idea of having to change their behaviors. Others, like the manager, simply ignore the existence of the device, and others still, like the female attendant, feel entirely comfortable in front of a camera, even playing up and portraying themselves as someone “they want to be seen as”.

Anthony did not understand why people found the camera on his head disturbing, nor why they had additional concerns about being recorded. In certain cases where people seemed to show particular interest, Anthony decided to engage others about how they felt about being filmed and tried to understand what their reactions were to constant ground-level surveillance. Anthony himself had not been educated with respect to campus policy or the laws pertaining to audio-visual recording in a public space. Anthony was unaware that in Australia, surveillance device legislation differs greatly between states but, broadly, audio and/or visual recording of a private activity is likely to be illegal whatever the context (Clarke, 2014). An activity is, however, only considered to be “private” when it is taking place inside a building, and in the state of New South Wales this includes vehicles. People, however, are generally unaware that prohibitions may not apply if the activity is happening outside a building, regardless of context (Clarke, 2014).

If people were to see someone wearing a head-mounted camera as they were going about their daily routine, it would doubtless gain their attention, as it is presently an unusual occurrence. When we leave our homes, we do not expect pedestrians to be wearing head-mounted cameras, nor (although increasingly we know we are under surveillance in taxis, buses, trains, and other forms of public transport) do we expect bus drivers, our teachers, fellow students, family or friends to be wearing body-worn recording devices. Having said that, policing has had a substantial impact on raising citizen awareness of body-worn audio-visual recording devices. We now have mobile cameras on cars, on highway patrol police officer helmets, and even on the lapels of particular police officers on foot. While this has helped to decrease the number of unfounded citizen complaints against law enforcement personnel on duty, it is also seen as a retaliatory strategy in response to everyday citizens who now have a smartphone video recorder at hand 24x7.

Although the average citizen does not always feel empowered to question another’s authority to record, everyone has the right to question the intended purpose of the video being taken of him or her, and how or where it will be shared. In this scenario, does Anthony have the right to record others as he pleases without their knowledge, either of him making the recording, or of the places where that recording might end up? Would Anthony get the same reaction if he were making the recordings with his smartphone? Owners of smartphones would be hard-pressed to say that they have never taken visual recordings of an activity where there are bystanders in the background whom they do not know and from whom they have not gained consent. Such examples include children’s sporting events, wedding receptions, school events, attractions and points of interest, and a whole lot more. Most photoborgs use the line of argumentation that says: “How is recording with a GoPro instead of a smartphone any different?” Of course, individuals who object to being subjected to point of view surveillance (PoVS) have potential avenues of protection (including trespass against property, trespass against the person, stalking, harassment, etc.), but these protections are limited in their applications (Clarke, 2014). Even so, the person using PoVS technology has access to far more protection than the person they are monitoring, even if they are doing so in an unreasonable manner (Clarke, 2014).

Scenario 5: Finding a Table at the Café

Context

In this scenario, patrons at an on-campus café vacated their chairs almost immediately after Anthony and Sophie sat down at the large table. Anthony and Sophie both realized the camera was driving people away from them. Sophie insisted at that point that Anthony at least stop recording if he was unwilling to take off the device itself. After Cygneta and Klara (Sophie’s acquaintances) had joined them at the table, Anthony, interested in individual reactions and trying to prove a point to Sophie, asked Klara how she felt about being filmed. He received the responses that he had expected. Klara did not like being filmed one bit by something worn on someone’s head. Moreover, despite being a marketing student, she had not even heard of Google Glass when Anthony tried to share his perspective on the issue by bringing up the technology in conversation. This fell on deaf ears, he thought, although Cygneta suggested that visual data might well be the future of marketing strategies. Anthony tried to make the argument that if a technology like Google Glass were to become prevalent on campus in a couple of years, they would not have any say about being recorded by a stranger. Sophie supported Anthony from a factual standpoint, reinforcing that there were no laws in Australia prohibiting video recordings in public. That is, across the States and Territories of Australia, visual surveillance in public places is not subject to general prohibitions, except where persons would reasonably expect their actions to be private if they were engaging in a private act (NSW), or where the persons being recorded had a strong case for expecting they would not be recorded (Victoria, WA, NT); in SA, Tasmania and the ACT, legislation permits the recording of other people subject to various provisos (Clarke, 2014).

The reactions of Klara and Cygneta got Sophie thinking about gender, and whether men were more likely than women to be enthralled by technological devices. She could see this happening with drones and wearable technologies like smart watches, and came to the realization that the GoPro was no different. Some male surfers (including Anthony) and skateboard riders had well and truly begun to use their GoPros to film themselves doing stunts, then sharing these on Instagram. She reflected on whether or not people, in general, would begin to live a life of “virtual replays” as opposed to living in the moment. When reality becomes hard to handle, people tend to escape to a virtual world where they create avatars and act “freely”, leading to the postponement of the hardships of real life; some may even become addicted to this seemingly more exciting lifestyle. These issues are further explored in the following popular articles: Ghorayshi (2014), Kotler (2014) and Lagorio (2006).

Novelty and Market Potential

The patrons at the first table appeared to find the situation awkward, and they rectified this problem by removing themselves from the vicinity of Anthony and his camera. Klara did not possess adequate knowledge about emerging wearable technology, and she claimed she would not use it even if it were readily available. Cygneta, by contrast, who seemed to ‘keep up with the Joneses’, said she would likely start using wearable computers like Google Glass at some point once they permeated the consumer market, despite Klara’s apparent resistance. While smartphones were a new technology in the 1990s, close to one third of the world’s population now use them regularly, with 70% projected by 2020 (Ericsson, 2015). One reason this number is not bigger is the prevalence of low-income countries with widespread rural populations and vast terrains; the numbers are expected to rise massively in emerging markets. By comparison, wearable computers are essentially advanced versions of existing technology, and thus uptake of wearable technologies will likely be seamless and even quicker. As with smartphone adoption, as long as they are affordable, wearable computers such as Digital Glass and smartwatches can be expected to be used as much as, or even more than, smartphones, given they are always attached to the body or within arm’s reach.

Scenario 6: A Visit to the Bank

Context

When Sophie and Anthony visited the bank, Anthony sat down as Sophie asked for assistance from one of the attendants. Even though Anthony was not the one who needed help, he thought the people working at the bank seemed friendlier than usual towards him. He was asked, in fact, if he wanted some assistance with anything, and when he confirmed he did not, no further questioning by the bank manager was conducted. He thought it strange that everyone was so casual about his camera, when everyone else that day had made him feel like a criminal. Again, he was acutely aware that he was in full view of the bank’s surveillance camera, but questioned whether anyone was really watching anyway. The couple later queued up at the ATM, where Anthony mentioned that, had he harboured some disingenuous intentions, he could easily have been filming people and acquiring their PINs. No one had even attempted to cover up their PIN entry, even though there were now signs near the keypad to “cover up”. This entire situation made Sophie feel very uncomfortable and slightly irritated by Anthony. It was, after all, a criminal act to shoulder-surf someone’s PIN, but to have it on film to replay later was outrageous. It seemed to her that, no matter how much advice people get about protecting their identity or credit from fraud, they just don’t seem to pay attention. To Anthony’s credit, he, too, understood the severity of the situation and admittedly felt uncomfortable about the position in which, with no malicious intent, he had accidentally found himself.

Security

This scenario illustrates that people in the workplace who are under surveillance are more likely to help clients. Anthony’s camera got immediate attention and a forward request: “Can I help you?” When individuals become publicly self-aware that they are being filmed, their propensity to help others generally increases. The feeling of public self-awareness created by the presence of a camera triggers a change in behavior in accordance with a pattern that signifies concern for any damage that could be done to one’s reputation (Van Bommel et al., 2012).

Anthony also could not keep himself from questioning the security measures that the bank should be applying given the increasing incidence of cheap embedded cameras in both stationary and mobile devices. When queuing in front of the ATM for Sophie’s cash withdrawal, Anthony noticed that he was recording, unintentionally, something that could easily be used for criminal activities, and he started seeing the possible security breaches that would come with emerging wearables. For example, video evidence can be zoomed in to reveal private data. While some believe that personal body-worn recording devices protect the security of the individual wearer from mass surveillance, rectifying some of the power imbalances, in this instance the recording devices have diminished security by their very presence. It is a paradox, and while it all comes down to the individual ethics of the photoborg, it will not take long for fraudsters to employ such measures.

Scenario 7: In the Library

Context

After the ATM incident, Anthony began to consider more deeply the implications of filming others in a variety of contexts. It was the very first time he had begun to place himself in other people’s shoes and see things from their perspective. In doing this, he became more cautious in the library setting. He avoided staring at the computer screens of other users around him, as he could then record what activities they were engaged in online, what they were searching for on their Internet browser, and more. He attracted the attention of certain people he came across in the library, because he obviously looked different, even weird. For the first time that day, he felt like he was going to get into serious trouble when he was talking to the librarian, who was questioning him about his practice. The librarian claimed that Anthony had absolutely no right to record other people without their permission, as it was against campus policies. Anthony did take this seriously, but he was pretty sure there was no policy against using a GoPro on campus. When Anthony asked the librarian to refer him to the exact policy and university web link, she could not provide a link, despite having clearly stated that his actions were a breach of university rules. She did say, however, that she would be calling library management to convey to them that she suspected someone was in the library in breach of university policy. While this conversation was happening, things not only began to become less clear for Anthony, but he could sense that things were escalating in seriousness and that he was about to get into some significant trouble.

Campus Policies, Guidelines, and Normative Expectations

The questions raised in this scenario are not only about privacy but also about the University’s willingness to accept certain things as permitted behavior on campus property. The inappropriate filming of other individuals was at the time a hot news item, as many young women were victims of voyeuristic behavior, such as upskirting with mobile phone cameras, and more. Yet many universities simply rely on their “Student Conduct Rules” for support outside criminal intent. For example, a typical student conduct notice states that students have a responsibility to conduct themselves in accordance with:

1. Campus Access and Order Rules,

2. IT Acceptable Use Policy, and

3. Library Code of Conduct.

However, none of these policies typically provide clear guidelines on audiovisual recordings by students.

Campus policies here are approved by the University Council, and various policies address only general surveillance considerations about audio-visual recordings. The Campus Access and Order Rules specify that University grounds are private property (University of Wollongong, 2014), and under the common law regarding real property, the lawful occupiers of land have the general right to prevent others from being on, or doing acts on, their land, even if an area on the land is freely accessible to the public (Clarke, 2014). It is Clarke’s latter emphasis which summarises exactly the context of a typical university setting, which can be considered a closed campus that is nevertheless open to the public.

The pace of technological change poses challenges for the law, and deficiencies in regulatory frameworks for point of view surveillance exist in many jurisdictions in Australia (Clarke, 2014). Australian universities as organizations are also bound (in this case) by the Workplace Surveillance Act 2005 (NSW) and the Privacy and Personal Information Protection Act 1998 (NSW) (Australasian Legal Information Institute, 2016), which again do not specify what is permitted, in terms of rules or policies, regarding acts of sousveillance committed by a student on campus grounds.

Scenario 8: Security on Campus

Context

Security arrived at the scene of the incident and escorted Anthony to the security office. By this stage Anthony believed that this might well become a police matter. Security did not wish to ask Anthony questions about his filming on campus but ostensibly wanted to check whether or not Anthony’s GoPro had been stolen. There had been a spate of car park thefts, and it was for this that Anthony was being investigated. Anthony then thought it appropriate to ask them several questions about the recordings he had made, to which security mentioned the Surveillance Devices Act 2007 (NSW), how they had to put up signage to warn people about the cameras, and the fact that activity was being recorded. Additionally, Anthony was told that CCTV footage could be shared only with the police, and that cameras on campus were never facing people but were facing toward the roadways and footpaths. When Anthony reminded the security staff about Google Glass and asked if they had a plan for when Glass would be used on the campus, the security manager replied that everything would be thought about when the time arrived. Anthony left to attend a lecture for which he was once again late.

Security Breaches, the Law, and Enforcement

Anthony was not satisfied with the response of the security manager about campus rules pertaining to the filming of others. While Anthony felt very uncomfortable about the footage he had on his camera, he still did not feel that the university’s security office provided adequate guidance on acceptable use. The security manager had tended to skirt around providing a direct response to Anthony, probably because he did not have any concrete answers. First, the manager brought up the Video Piracy Policy topic and then the University’s IT Acceptable Use Policy. Anthony felt that those policies had nothing to do with him. First, he was sure he was not conducting video piracy in any way, and second, he was not using the university’s IT services to share his films with others or exceed his Internet quota, etc. Somehow, the manager connected this by saying that the recording might contain copyrighted material on it, and that it should never be transferred through the university’s IT infrastructure (e.g. network). He also shared a newspaper article with Anthony that was somehow supposed to act as a warning message but it just didn’t make sense to Anthony how all of that was connected to the issue at hand.

Scenario 9: Sophie’s Lecture

Context

Arriving at the lecture theatre after the lecture had already begun, Anthony and Sophie opened the door, and the lecturer noticed the camera mounted on Anthony’s head. The lecturer immediately became infuriated, asking Anthony to remove the camera and to leave his classroom. Even after Anthony left the class, the lecturer still thought he might be being recorded through the lecture theatre’s part-glass door, and so he asked Anthony to leave the corridor as well. The entire time, the GoPro was not recording any of the incidents. The incident became heated, despite Anthony fully accepting the academic’s perspective. It was the very last thing that Anthony had expected by that point in the day, and it was absolutely devastating to him.

Ethics, Power, Inequality, and Non-Participation

Every student at an Australian university has academic freedom and is welcome to attend lectures whether or not they are enrolled in the subject. However, it is the academic instructor’s right to keep a student from recording his or her class. A lecturer’s classes are considered “teaching material”, and the lecturer owns the intellectual property of his/her teaching material (University of Wollongong, 2014). In keeping with the aforementioned statements, any recording of lectures should be carried out after consulting with the instructor. Some lecturers do not even like the idea of Echo 360, as it can be used for much more than simply recording a lecture for reuse by students. Lecture recordings could be used to audit staff: to surveil whether staff are doing their job properly, displaying innovative teaching techniques, demonstrating poor or good knowledge of content, and sticking to time or taking early marks. Some faculty members also consider the classroom to be a sacred meeting place between them and students and would never wish for a camera to invade this intimate gathering. Cameras and recordings would indeed stifle a faculty member’s or a student’s right to freedom of speech if the video were ever to go public. It would also mean that some students would simply not contribute anything to the classroom if they knew they were being taped, or that someone might scrutinize their perspectives and opinions on controversial matters.

Possible Future Implications Drawn from Scenarios

In the scenarios, in almost every instance, the overt nature of Anthony’s wearable recording device, given it was head-mounted, elicited an instantaneous response. Bystanders displayed a variety of responses and attitudes, including that:

1. They liked it,

2. They did not mind it,

3. They were indifferent about it,

4. They did not like it and finally,

5. They were disturbed by it.

Regardless of which category they belonged to, however, they did not explicitly voice their feelings to Anthony, although body language and facial expressions spoke volumes. In this closed-campus scenario, the majority of people who came into contact with Anthony fell into the first two categories. It also seems clear that some contexts were especially delicate, for instance, taking the camera (while still recording) into the restroom, an obviously private amenity. It is likely that the same individuals would have had no problem with the GoPro filming outside the restroom setting.

Research into future technologies and their respective social implications is urgent, since many emerging technologies are here right now. Whatever the human mind can conjure is liable to be designed, developed, and implemented. The main concern is how we choose to deal with it. In this final section, issues drawn from the scenarios are speculatively extended to project future implications when wearable computing has become more ubiquitous in society.

Privacy, Security, and Trust

Privacy experts claim that, while we might once have been concerned or felt uncomfortable with CCTV being as pervasive as it is today, we are now shifting from a limited number of big brothers to ubiquitous little brothers (Shilton, 2009). The fallacy of security is the belief that more cameras mean a safer society; they do not necessarily, and statistics, depending on how they are presented, may mislead about reductions in crime in given hotspots. Criminals do not simply stop committing crime (e.g. selling drugs) because a local council installs a group of multi-directional cameras on a busy public route. On the contrary, crime has been shown to be redistributed or relocated to another proximate geographic location. In a study for the United Kingdom’s Home Office (Gill & Spriggs, 2005), only one area of the 14 studied saw a drop in the number of incidents that could be attributed to CCTV.

Questions of trust seem to be the biggest factor militating against wearable devices that film other people who have not granted their consent to be recorded. Many people may not like to be photographed for reasons we do not quite understand, but it remains their right to say, “No, leave me alone.” Others have no trouble being recorded by someone they know, so long as they know they are being recorded before the record button is pushed. Still others show utter indifference, claiming that there is no longer anything personal out in the open. Often, the argument is posed that anyone can watch anyone else walk down a street. This argument fails, however: watching someone cross the road is not the same as recording them crossing the road, whether by design or by sheer coincidence. Handing out requests for deletion every time someone asks whether they have been captured on camera is not good enough. Allowing people to opt out “after the fact” is not consent-based and violates fundamental human rights, including the control individuals might have over their own image and the freedom to go about their lives as they please (Bronitt & Michael, 2012).

Laws, Regulations, and Policies

At the present time, laws and regulations pertaining to surveillance and listening devices, privacy, telecommunications, crimes, and even workplace relations require amendment to keep pace with advancements in wearable and even implantable sensors. The police need to be seen to be enforcing the laws they are there to uphold, not donning the very devices they claim are illegal. Policies in campus settings, such as universities, also need to address the seeming imbalance between what is, and is not, permissible. The commoditization of such devices will only bring even greater public interest issues to the fore. The laws are clearly outdated, and there is controversy over how to overcome the legal implications of emerging technologies.

Creating new laws for each new device would lead to an endless drafting of legislation, which is not practicable, while claiming that existing laws can respond to new problems is unrealistic, as users will seek to get around the law via loopholes in a patchwork of statutes. Cameras create a power imbalance. Initially, only a few people had mobile phones with cameras; now they are everywhere. Then, only some people carried body-worn video recorders for extreme sports; now, increasingly, many are using a GoPro, Looxcie, or Taser Axon glasses. These devices, while still nascent, have been met with some acceptance in various contexts, including some business-centric applications. Photoborgs might feel they are “hitting back” at all the cameras on the walls that are recording 24×7, but this does not cancel out the fact that the photoborgs themselves are doing exactly what they claim the fixed, wall-mounted cameras are doing to them.

Future Implications

All of the risks mentioned above are interrelated. If we lack privacy, we lose trust; if we lack security, we feel vulnerable; if we lose our anonymity, we lose a considerable portion of our liberty; and when people lose their trust and their liberty, they feel vulnerable. This kind of scenario is deeply problematic and portends a higher incidence of depression, as people would not feel free to act, to be themselves, or to share their true feelings. The implications of this interrelatedness are presented in Figure 1.

Since 100% security does not exist in any technological system, privacy will always be a prominent issue. When security is lacking, privacy is compromised, individuals become more vulnerable, and the anonymity of the individual comes into question. A loss of anonymity limits people’s liberty to act and speak as they want, and eventually people start losing their trust in each other and in authorities. When people are not free to express their true selves, they become withdrawn and, despite living in a high-tech community, may enter a state of despondency. The real question concerns the future, when it is not people sporting these body-worn devices but automated data collection machines like Knightscope’s K5 (Knightscope, 2016). These will indeed be mobile camera surveillance units, converging sousveillance and surveillance in one clean sweep (Perakslis et al., 2014).

Figure 1. Major implications of wearables: the utopian and dystopian views

Future Society

Mann (2013) argues that wearable sousveillance devices used in everyday life to store, access, transfer, and share information will become commonplace, worn continuously and perhaps even permanently implanted. Michael and Michael (2012, p. 195), writing on the age of überveillance, state:

There will be a segment of the consumer and business markets who will adopt the technology for no clear reason and without too much thought, save for the fact that the technology is new and seems to be the way advanced societies are heading. This segment will probably not be overly concerned with any discernible abridgement of their human rights nor the small print ‘terms and conditions agreement’ they have signed, but will take an implant on the promise that they will have greater connectivity to the Internet, and to online services and bonus loyalty schemes more generally.

Every feature added to a wearable device adds another layer of risk on top of the pre-existing risks. Currently, we may only have the capability to store, access, transfer, and manipulate the gathered data, but as the technology develops, context-aware software will be able to interpret vast amounts of data into meaningful information that can be used by unauthorized third parties. It is almost certain that the laws will not be able to keep up with the pace of the technology. Accordingly, individuals will have to be alert and aware, and private and public organizations will need to set rules and guidelines to protect their employees’ privacy, as well as their own.

Society’s ability to cope with the ethical and societal problems that technology raises has long lagged behind the development of that technology, and the same can be said for laws and regulations. With no legal protection and no social safe zone, members of society are threatened with losing their privacy through wearable technology. When the technology becomes widespread, privacy at work, in schools, in supermarkets, at the ATM, on the Internet, even while walking or sitting in a public space, becomes perishable.

The future is already here and, since the development of technology is seemingly unstoppable, there is more to come; but whatever futures do come, there needs to be a healthy human factor. “For every expert there’s an equal and opposite expert” (Sowell, 1995, p. 102; also sometimes attributed to Arthur C. Clarke). So even as we enthuse about how data collected through wearable technology will enhance the quality of our daily lives, we also have to think carefully about security and privacy in an era of ubiquitous wearable technology. In this sense, creating digital footprints of our social and personal lives, with the possibility of their being exposed publicly, does not seem to coincide with the idea of a “healthy society”.

One has to ponder: where next? Might we be arguing that we are nearing the point of total surveillance, as everyone begins to record everything around them for “just in case” reasons such as insurance protection, establishing liability, and complaint handling (much as an in-car black box recorder can clear you of wrongdoing in an accident)? How gullible might we become in thinking that images and video footage do not lie, even though a new breed of hackers might manipulate and tamper with digital reality to their own ends? The überveillance trajectory refers to the ultimate potentiality for embedded surveillance devices, like swallowable pills with onboard sensors, tags, and transponder IDs, placed in the subdermal layer of the skin (Michael & Michael, 2013). Will the new frontier be surveillance of the heart and mind?

Discussion Points

• Does sound recording by wearable devices present any ethical dilemmas?

• Are wearable still cameras more acceptable than wearable video cameras?

• What should one do if bystanders of a recording in a public space question why they are being recorded?

• What themes are evident in the videos and the comments on Surveillance Camera Man Channel at http://www.liveleak.com/c/surveillancecameraman?

• What is the difference between making a recording with a smartphone and with a head mounted camera?

• If one juxtaposes a surveillance camera covertly integrated into a light fixture, with an overt head-mounted camera, then why should the two devices elicit a different response from bystanders?

• In what ways is a CCTV in a restroom any different from a photoborg in a restroom?

• Are there gender differences in enthusiasm for certain wearables? Who are the innovators of these technologies?

• What dangers exist around Internet addiction, escapism, and living in a virtual world?

• Are we nearing the point of total information surveillance? Is this a good thing? Will it decrease criminal activity or are we nearing a Minority Report style future?

• Will the new frontier be surveillance of the heart and mind beyond anything Orwell could have envisioned?

• How can the law keep pace with technological change?

• Can federal and state laws be in contradiction over the rights of a photoborg? How?

• Watch the movie The Final Cut. Watch the drama The Entire History of You. What are the similarities and differences? What does such a future mean for personal security and national security?

• Consider in small groups other scenarios where wearables would be welcome as opposed to unwelcome.

• In which locations should body-worn video cameras never be worn?

Questions

• What is meant by surveillance, sousveillance and überveillance?

• What is a photoborg? And what is “point of view” within a filming context?

• Research the related terms surveillance, dataveillance, and überveillance.

• What does Steve Mann’s “Request for Deletion” webpage say? Why is it largely untenable?

• Why did Google ultimately decide to focus on industry applications of Glass, rather than the total consumer market?

• Are we ready to see many (overt or covert) sousveillers in our everyday life?

• Will we all be photoborgs one day, or live in a society where we need to be?

• Do existing provisions concerning voyeurism cover all possible sousveillance situations?

• If lifelogs are streamed in real time or near real time, what can the bystanders shown do about the distribution of their images (if they ever find out)?

• Is lifelong lifelogging feasible? Desirable? Should it be suspended in confidential business meetings, when going through airport security and customs, or in other areas? Which areas?

• Should citizens film their encounters with police, given police are likely to be filming it too?

• Should the person using point-of-view surveillance (PoVS) technology have more legal protection than the persons they are monitoring?

• Are wearables likely to be rapidly adopted and even outpace smartphone use?

References

Armstrong, J., & Welsh, B. (2011). The Entire History of You. In B. Reisz (Ed.), Black Mirror. London, UK: Zeppetron.

Australasian Legal Information Institute. (2014). Workplace Surveillance Act, 2005 (NSW). Retrieved June 6, 2016, from http://www.austlii.edu.au/au/legis/nsw/consol_act/wsa2005245/

Australasian Legal Information Institute. (2015). Surveillance Devices Act, 1998 (WA). Retrieved June 6, 2016, from https://www.slp.wa.gov.au/legislation/statutes.nsf/main_mrtitle_946_currencies.html

Australasian Legal Information Institute. (2016). Privacy and Personal Information Protection Act 1998. Retrieved June 6, 2016, from http://www.austlii.edu.au/au/legis/nsw/consol_act/papipa1998464/

Branscomb, A. W. (1994). Who Owns Information? From Privacy to Public Access. New York, NY: BasicBooks.

Bronitt, S., & Michael, K. (2012). Human rights, regulation, and national security (introduction). IEEE Technology and Society Magazine, 31(1), 15–16. doi:10.1109/MTS.2012.2188704

Clarke, R. (2012). Point-of-View Surveillance. Retrieved from http://www.rogerclarke.com/DV/PoVS.html

Clarke, R. (2014). Surveillance by the Australian media, and its regulation. Surveillance & Society, 12(1), 89–107.

Echo. (2016). Lecture capture: Video is the new textbook. Retrieved from http://echo360.com/what-you-can-do/lecture-capture

Ericsson. (2015). Ericsson Mobility Report. Retrieved June 6, 2016, from http://www.ericsson.com/mobility-report

Fernandez Arguedas, V., Izquierdo, E., & Chandramouli, K. (2013). Surveillance ontology for legal, ethical and privacy protection based on SKOS. In IEEE 18th International Conference on Digital Signal Processing (DSP).

Ghorayshi, A. (2014). Google Glass user treated for internet addiction caused by the device. Retrieved June 6, 2016, from https://www.theguardian.com/science/2014/oct/14/google-glass-user-treated-addiction-withdrawal-symptoms

Gill, M., & Spriggs, A. (2005). Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate.

Gokyer, D., & Michael, K. (2015). Digital wearability scenarios: Trialability on the run. IEEE Consumer Electronics Magazine, 4(2), 82–91. doi:10.1109/MCE.2015.2393005

Harfield, C. (2014). Body-worn POV technology: Moral harm. IEEE Technology and Society Magazine, 33(2), 64–72. doi:10.1109/MTS.2014.2319976

Knightscope. (2016). Advanced physical security technology. Knightscope: K5. Retrieved from http://knightscope.com/

Kotler, S. (2014). Legal heroin: Is virtual reality our next hard drug. Retrieved June 6, 2016, from http://www.forbes.com/sites/stevenkotler/2014/01/15/legal-heroin-is-virtual-reality-our-next-hard-drug/#225d03c27472

Lagorio, C. (2006). Is virtual life better than reality? Retrieved June 6, 2016, from http://www.cbsnews.com/news/is-virtual-life-better-than-reality/

Levy, K. (2014). A surprising number of places have banned Google Glass in San Francisco. Business Insider. Retrieved from http://www.businessinsider.com/google-glass-ban-san-francisco-2014-3

Mann, S. (2002). Sousveillance. Retrieved from http://wearcam.org/sousveillance.htm

Mann, S. (2005). Sousveillance and cyborglogs: A 30-year empirical voyage through ethical, legal, and policy issues. Presence (Cambridge, Mass.), 14(6), 625–646. doi:10.1162/105474605775196571

Mann, S. (2013). Veillance and reciprocal transparency: Surveillance versus sousveillance, AR glass, lifeglogging, and wearable computing. In IEEE International Symposium on Technology and Society (ISTAS). Toronto: IEEE.

Mann, S. (n.d.). The request for deletion (RFD). Retrieved from http://wearcam.org/rfd.htm

Mann, S., & Wassell, P. (2013). Proposed law on sousveillance. Retrieved from http://wearcam.org/MannWassellLaw.pdf

Michael, K. (2013). Keynote: The final cut—Tampering with direct evidence from wearable computers. In Proc. 5th Int. Conf. Multimedia Information Networking and Security (MINES).

Michael, K., & Michael, M. G. (2012). Commentary on: Mann, S. (2012): Wearable computing. In M. Soegaard & R. Dam (Eds.), Encyclopedia of human-computer interaction. The Interaction-Design.org Foundation. Retrieved from https://www.interaction-design.org/encyclopedia/wearable_computing.html

Michael, K., & Michael, M. G. (2013a). Computing ethics: No limits to watching? Communications of the ACM, 56(11), 26–28. doi:10.1145/2527187

Michael, K., Michael, M. G., & Perakslis, C. (2014). Be vigilant: There are limits to veillance. In J. Pitt (Ed.), The computer after me. London: Imperial College London Press. doi:10.1142/9781783264186_0013

Michael, M. G., & Michael, K. (Eds.). (2013b). Überveillance and the social implications of microchip implants: Emerging technologies (Advances in human and social aspects of technology). Hershey, PA: IGI Global.

Nettle, D., Nott, K., & Bateson, M. (2012). ‘Cycle thieves, we are watching you’: Impact of a simple signage intervention against bicycle theft. PLoS ONE, 7(12), e51738. doi:10.1371/journal.pone.0051738 PMID:23251615

Perakslis, C., Pitt, J., & Michael, K. (2014). Drones humanus. IEEE Technology and Society Magazine, 33(2), 38–39.

Ruvio, A., Gavish, Y., & Shoham, A. (2013). Consumer’s doppelganger: A role model perspective on intentional consumer mimicry. Journal of Consumer Behaviour, 12(1), 60–69. doi:10.1002/cb.1415

Shachtman, N. (2003). A spy machine of DARPA’s dreams. Wired. Retrieved from http://archive.wired.com/techbiz/media/news/2003/05/58909?currentPage=al

Shilton, K. (2009). Four billion little brothers?: Privacy, mobile phones, and ubiquitous data collection. Communications of the ACM, 52(11), 48–53. doi:10.1145/1592761.1592778

Sowell, T. (1995). The Vision of the Anointed: Self-congratulation as a Basis for Social Policy. New York, NY: Basic Books.

Surveillance Camera Man. (2015). Surveillance Camera Man. Retrieved from https://www.youtube.com/watch?v=jzysxHGZCAU

Tanner, R. J., Ferraro, R., Chartrand, T. L., Bettman, J., & van Baaren, R. (2008). Of chameleons and consumption: The impact of mimicry on choice and preferences. The Journal of Consumer Research, 34(6), 754–766. doi:10.1086/522322

University of Wollongong. (2014a). Campus Access and Order Rules. Retrieved from http://www.uow.edu.au/about/policy/UOW058655.html

University of Wollongong. (2014b). Ownership of Intellectual Property. Retrieved from http://www.uow.edu.au/about/policy/UOW058689.html

University of Wollongong. (2014c). Student Conduct Rules. Retrieved from http://www.uow.edu.au/about/policy/UOW058723.html

Van Bommel, M., van Prooijen, J., Elffers, H., & Van Lange, P. (2012). Be aware to care: Public self-awareness leads to a reversal of the bystander effect. Journal of Experimental Social Psychology, 48(4), 926–930. doi:10.1016/j.jesp.2012.02.011

Key Terms and Definitions

Body-Worn Video (BWV): These are cameras that are embedded in devices that can be worn on the body to record video, typically by law enforcement officers.

Closed-Campus: Refers to any organization or institution that contains a dedicated building(s) on a bounded land parcel, offering a range of online and offline services, such as banking, retail, and sporting facilities. Closed-campus examples include schools and universities.

Closed-Circuit Television (CCTV): Also referred to as video surveillance. CCTV is the use of video cameras to transmit a signal to a specific place. CCTV cameras can be overt (obvious) or covert (hidden).

Digital Glass: Otherwise referred to as wearable eyeglasses which house multiple sensors on board. An example of digital glass is Google Glass. The future of digital glass may well be computer-based contact lenses.

Lifelogging: When a user decides to log his or her life using wearable computing or other devices that have audio-visual capability. It usually takes the form of a continuous recording, 24/7.

Personal Security Devices (PSDs): These are devices that allegedly deter perpetrators from attacking others because they are always on, gathering evidence, and ready to record. PSDs may have an on-board alarm alerting central care services for further assistance.

Policy: An enforceable set of organizational rules and principles used to aid decision-making that have penalties for non-compliance, such as the termination of an employee’s contract with an employer.

Private Space: Somewhere geographic in which one naturally has an expectation of privacy. Some examples include the home, the backyard, and the restroom.

Public Space: Somewhere geographic in which there is no expectation of privacy, save for when someone holds a private conversation in a private context.

Sousveillance: The opposite of surveillance from above, which includes inverse surveillance, also sometimes described as person-to-person surveillance. Citizens can use sousveillance as a mechanism to keep law enforcement officers accountable for their actions.

Surveillance: “Watching from above,” such as CCTV mounted on business buildings; behaviors, activities, or other changing information placed under the watchful eye of authority, usually for the purpose of influencing, managing, directing, or protecting the masses.

Citation: Michael, K., Gokyer, D., & Abbas, S. (2017). Societal Implications of Wearable Technology: Interpreting “Trialability on the Run”. In A. Marrington, D. Kerr, & J. Gammack (Eds.), Managing Security Issues and the Hidden Dangers of Wearable Technologies (pp. 238-266). Hershey, PA: IGI Global. doi:10.4018/978-1-5225-1016-1.ch010

Credit Card Fraud

Abstract

This chapter provides a single-person case study of Mr. Dan DeFilippi, who was arrested for credit card fraud by the US Secret Service in December 2004. The chapter delves into the psychology of a cybercriminal and the inner workings of credit card fraud. A background context of credit card fraud is presented to frame the primary interview. A section on the identification of issues and controversies with respect to carding is then given. Finally, recommendations are made by the convicted cybercriminal turned key informant on how to decrease the rising incidence of cybercrime. A major finding is that credit card fraud is all too easy to enact and that merchants need to conduct better staff training to catch fraudsters early. With increases in global online purchasing, international carding networks are proliferating, making it difficult for law enforcement agencies to police unauthorized transactions. Big data may well have a role to play in analyzing behaviors that expose cybercrime.

Introduction

Fraud is about exploiting weaknesses. They could be weaknesses in a system, such as a lack of controls in a company’s accounting department or a computer security hole, or a weakness in human thinking such as misplaced trust. A cybercriminal finds a weakness with an expected payout high enough to offset the risk and chooses to become involved in the endeavor. This is very much like a traditional business venture except the outcome is the opposite. A business will profit by providing goods or services that its customers value. Fraud takes value away from its victims and only enriches those committing it.

Counterfeit documents rarely need to be perfect. They only need to be good enough to serve their purpose, fooling a system or a person in a given transaction. For example, a counterfeit ID card will be scrutinized more closely by the bouncer at a bar than by a minimum wage cashier at a large department store. Bouncers have incentive to detect fakes since allowing in underage drinkers could have dire consequences for the bar. There is much less incentive to properly train cashiers since fraud makes up a small percentage of retail sales. This is sometimes referred to as the risk appetite and tolerance of an organization (Levi, 2008).

Lack of knowledge and training of store staff is by far the biggest weakness exploited when counterfeit or fraudulent documents are utilized by cybercriminals. If the victims do not know the security features of a legitimate document, they will not know how to spot a fake. For example, Visa and MasterCard are the most widely recognized credit card brands. Their dove and globe holograms are well known; a card without one would be very suspicious. However, there are other, lesser-known credit card networks, such as Discover and American Express, whose security features are not as well recognized, and this can be exploited. If a counterfeit credit card has an appearance of legitimacy, it will be accepted.

Background

Dan DeFilippi was a black hat hacker in his teens and early twenties. In college he sold fake IDs, and later he committed various scams, including phishing, credit card fraud, and identity theft. He was caught in December 2004. In order to avoid a significant jail sentence, DeFilippi decided to become an informant and work for the Secret Service for two years, providing training and consulting and helping agents understand how hackers and fraudsters think. This chapter has been written through his eyes, his practices, and his learnings. Cybercriminals do not necessarily have to be perfect at counterfeiting, but they do have to be superior social engineers not to get caught. While most cybercrime now occurs remotely over the Internet, DeFilippi exploited the human factor. A lot of the time, he would walk into a large electronics department store with a fake credit card, buy high-end items like laptops, and then proceed to sell them online at a reduced price. He could make thousands of dollars this way in a single week.

In credit card fraud, the expected payout is much higher than in traditional crimes and the risk of being caught is often much lower, making it a crime of choice. Banks often write off fraud with little or no investigation until it reaches value thresholds. It is considered a cost of doing business, and further investigation is considered to cost more than it is worth. Before 2002, banks in Australia, for instance, would charge about $250 to investigate an illegal transaction, usually passing the cost on to the customer. Today, they usually do not spend effort investigating such low-value transactions but rather redirect attention to upholding their brand. Since about the mid-2000s, banks have also more openly shared security breaches with one another, which has helped law enforcement task forces respond to cybercrime in a timely manner. Yet local law enforcement continues to struggle with the investigation of electronic fraud due to a lack of resources, education, or jurisdictional reach. Fraud cases may span multiple countries, requiring complex cooperation and coordination between law enforcement agencies. A criminal may buy stolen credit cards from someone living on another continent, use them to purchase goods online in state 1, have the goods shipped to state 2 while living in state 3, with the card stolen from someone in state 4.

Online criminal communities and networks, or the online underground, are often structured similarly to a loose gang. New members (newbies) have to earn the community’s trust. Items offered for sale have to be reviewed by a senior member or approved reviewer before being offered to the public. Even when people are considered “trustworthy,” there is a high level of distrust between community members, due to significant law enforcement attention and paranoia from past crackdowns. Very few people know anyone by their real identity. Everyone tries to stay as anonymous as possible. Many people use multiple handles and pseudonyms for different online activities, such as one for buying, one or more for selling, and one for online discussion through asynchronous text-based chat. This dilutes their reputation but adds an additional layer of protection.

The most desirable types of fraud in these communities, and in monetary crime in general, involve directly receiving cash instead of goods. Jobs such as “cashing out” stolen debit cards at ATMs are sought after by everyone and are handled by the most trusted community members. Because of their desirability, the proceeds are often split unequally, with the card provider taking the majority share of the reward and the “runner” taking the majority of the risk. The types of people in these communities vary from teens looking to get a new computer for free to members of organized crime syndicates. With high unemployment rates, low wages, and low levels of literacy, particularly in developing nations, it is no surprise that a large number of credit card fraud players are Eastern European or Russian, with suspected ties to organized crime. It is a quick and easy way of making money if you know what you are doing.

Of course, things have changed a little since DeFilippi was conducting his credit card fraud between 2001 and 2004. Law enforcement agencies now have whole task forces dedicated to online fraud. Bilateral and multilateral treaties are in place with respect to cybercrime, although these still lack the buy-in of major state players and even of states where cybercrime is flourishing (Broadhurst, 2006). In terms of how technology has been used to combat credit card fraud, the Falcon system has been able to help detect fraud that would otherwise have gone unnoticed. If the Falcon system identifies any transaction as suspect or unusual, the bank will attempt to get in touch with the cardholder to ascertain whether or not it is an authentic transaction. If individuals cannot be reached directly, their card is blocked until further confirmation of the given transaction. Banks continue to encourage travelers to contact them when their pattern of credit card use changes, e.g. when travelling abroad. Software platforms nowadays do much of the analytical processing with respect to fraud detection: predictive analytics methods, not rule-based methods, are changing the way fraud is discovered (Riordan et al., 2012). Additionally, banks have introduced two-factor (also known as multifactor) authentication requirements, which means an online site requires more than just a cardholder’s username and password. Commonly this takes the form of an SMS or a phone call to a predesignated number containing a randomized code. Single-factor authentication is now considered inadequate in the case of high-risk transactions or the movement of funds to other parties (Aguilar, 2015).
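To make the contrast between rule-based and predictive screening concrete, the following is a minimal sketch in Python. It is illustrative only: the transaction features, thresholds, and the use of scikit-learn’s IsolationForest are assumptions made for this example, not a description of the Falcon system or of any bank’s actual model.

```python
# Illustrative sketch: a fixed rule vs. a learned, pattern-based screen for
# card transactions. All features and numbers are invented for the example;
# this is not the Falcon system or any bank's production model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one past transaction for a cardholder:
# [amount_usd, hour_of_day, km_from_home, merchant_risk_score]
history = np.array([
    [42.00, 18, 3.0, 0.1],
    [12.50,  9, 1.2, 0.1],
    [88.00, 20, 5.5, 0.2],
    [15.00, 12, 2.0, 0.1],
    [60.00, 19, 4.1, 0.2],
    [35.00, 17, 2.8, 0.1],
])

def rule_based_flag(txn):
    """Fixed rule: flag only very large or very distant purchases."""
    amount, hour, distance, risk = txn
    return amount > 5000 or distance > 5000

# Predictive screen: learn this cardholder's normal spending pattern,
# then score new transactions by how far outside it they fall.
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_txn = np.array([[950.00, 3, 800.0, 0.9]])  # large, 3 a.m., far from home
print("rule-based flag:", rule_based_flag(new_txn[0]))         # False: under both thresholds
print("pattern-based flag:", model.predict(new_txn)[0] == -1)  # True: -1 marks an outlier
```

A transaction flagged this way is what triggers the “call for authorization” that DeFilippi describes in the interview below; the model does not prove fraud, it only signals that the purchase does not fit the cardholder’s history.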

Main Focus of Chapter 

Issues, Controversies, Problems

Katina Michael: Dan, let’s start at the end of your story which was the beginning of your reformation. What happened the day you got caught for credit card fraud?

Dan DeFilippi: It was December 2004 in Rochester, New York. I was sitting in my windowless office getting work done, and all of a sudden the door burst open, and this rush of people came flying in. “Get down under your desks. Show your hands. Hands where I can see them.” And before I could tell what was going on, my hands were cuffed behind my back and it was over. That was the end of that chapter of my life.

Katina Michael: Can you tell us what cybercrimes you committed and for how long?

Dan DeFilippi: I had been running credit card fraud, identity theft, document forgery pretty much as my fulltime job for about three years, and before that I had been a hacker.

Katina Michael: Why fraud? What led you into that life?

Dan DeFilippi: Everybody has failures. Not everybody makes great decisions in life. So why fraud? What led me to this? I mean, I had great parents, a great upbringing, a great family life. I did okay in school, and you know, not to stroke my ego too much, but I know I am intelligent and I could succeed at whatever I chose to do. But when I was growing up, one of the things that I’m really thankful for is my parents taught me to think for myself. They didn’t just focus on remembering knowledge. They taught me to learn, to think, to understand. And this is really what the hacker mentality is all about. And when I say hacker, I mean it in the traditional sense. I don’t mean it as somebody in there stealing from your company. I mean it as somebody out there seeking knowledge, testing the edges, testing the boundaries, pushing the limits, and seeing how things work. So growing up, I disassembled little broken electronics and things like that, and as time went on this slowly progressed into, you know, a so-called hacker.

Katina Michael: Do you remember when you actually earned your first dollar by conducting cybercrime?

Dan DeFilippi: My first experience with money in this field was towards the end of my high school. And I realized that my electronics skills could be put to use to do something beyond work. I got involved with a small group of hackers that were trying to cheat advertising systems out of money, and I didn’t even make that much. I made a couple of hundred dollars over, like, a year or something. It was pretty much insignificant. But it was that experience, that first step, that kind of showed me that there was something else out there. And at that time I knew theft and fraud was wrong. I mean, I thought it was stealing. I knew it was stealing. But it spiraled downwards after that point.

Katina Michael: Can you elaborate on how your thinking developed towards earning money through cybercrime?

Dan DeFilippi: I started out with these little things and they slowly, slowly built up and built up and built up, and it was this easy money. So this initial taste of being able to make small amounts, and eventually large amounts of money with almost no work, and doing things that I really enjoyed doing was what did it for me. So from there, I went to college and I didn’t get involved with credit card fraud right away. What I did was, I tried to find a market. And I’ve always been an entrepreneur and very business-minded, and I was at school and I said, “What do people here need? ... I need money, I don’t really want to work for somebody else, I don’t like that.” I realized people needed fake IDs. So I started selling fake IDs to college students. And that again was a taste of easy money. It was work but it wasn’t hard work. And from there, there’s a cross-over here between forged documents and fraud. So that cross-over is what drew me in. I saw these other people doing credit card fraud and making money. I mean, we’re talking about serious money. We’re talking about thousands of dollars a day with only a few hours of work and up.

Katina Michael: You strike me as someone who is very ethical. I almost cannot imagine you committing fraud. I’m trying to understand what went wrong?

Dan DeFilippi: And where were my ethics and morals? Well, the problem is when you do something like this, you need to rationalize it, okay? You can’t worry about it. You have to rationalize it to yourself. So everybody out there committing fraud rationalizes what they’re doing. They justify it. And that’s just how our brains work. Okay? And this is something that comes up a lot on these online fraud forums where people discuss this stuff openly. And the question is posed: “Well, why do you do this? What motivates you? Why, why is this fine with you? Why are you not, you know, opposed to this?” And often, and the biggest thing I see, is like, you know, the Robin Hood scenario: “I’m just stealing from a faceless corporation. It’s victimless.” Of course, all of us know that’s just not true. It impacts the consumers. But everybody comes up with their own reason. Everybody comes up with an explanation for why they’re doing it, and how it’s okay with them, and how they can actually get away with doing it.

Katina Michael: But how does a sensitive young man like you just not realize the impact they were having on others during the time of committing the crimes?

Dan DeFilippi: I’ve never really talked about that too much before... Look, the average person, when they know they’ve acted against their morals, feels they have done wrong; it’s an emotional connection with their failure and emotionally it feels negative. You feel that you did something wrong; no one has to tell you the crime type, you just know it is bad. Well, when you start doing these kinds of crimes, you lose that discerning voice in your head. I was completely disconnected from my emotions when it came to these types of fraud. I knew that they were ethically wrong, morally wrong, and you know, I have no interest in committing them ever again, but I did not have that visceral reaction to this type of crime. I did not have that guilty feeling of actually stealing something. I would just rationalize it.

Katina Michael: Ok. Could I ask you whether the process of rationalization has much to do with making money? And perhaps, how much money did you actually make in conducting these crimes?

Dan DeFilippi: This is a pretty common question and honestly I don’t have an answer. I can tell you how much I owe the government and that’s ... well, I suppose I owe Discover Card ... I owed $209,000 to Discover Card Credit Card Company in the US. Beyond that, I mean, I didn’t keep track. One of the things I did was, and this is kind of why I got away with it for so long, is I didn’t go crazy. I wasn’t out there every day buying ten laptops. I could have but chose not to. I could’ve worked myself to the bone and made millions of dollars, but I knew if I did that the risk would be significantly higher. So I took it easy. I was going out and doing this stuff one or two days a week, and just living comfortably but not really in major luxury. So honestly, I don’t have a real figure for that. I can just tell you what the government said.

Katina Michael: There is a perception among the community that credit card fraud is sort of a non-violent crime because the “actor” being defrauded is not a person but an organization. Is this why so many people lie to the tax office, for instance?

Dan DeFilippi: Yeah, I do think that’s absolutely true. If we are honest about it, everyone has lied about something in their lifetime. And people... you’re right, you’re absolutely right, that people observe this, and they don’t see it in the big picture. They think of it on the individual level, like I said, and people see this as a faceless corporation, “Oh, they can afford it.” You know, “no big deal”. You know, “Whatever, they’re ripping off the little guy.” You know. People see it that way, and they explain it away much easier than, you know, somebody going off and punching someone in the face and then proceeding to steal their wallet. Even if the dollar figure of the financial fraud is much higher, people are generally less concerned. And I think that’s a real problem because it might entice some people into committing these crimes because they are considered “soft”. And if you’re willing to do small things, it’s going to, as in my case, eventually spiral you downwards. I started with very small fraud, and then got larger. Not that everybody would do that. Not that the police officer taking the burger for free from Burger King is going to step up to, you know, to extortion or something, but certainly it could, could definitely snowball and lead to something.

Katina Michael: It has been about 6 years since you were arrested. Has much changed in the banking sector regarding triggers or detection of cybercriminal acts?

Dan DeFilippi: Yeah. What credit card companies are doing now is pattern matching and using software to find and root out these kinds of things. I think that’s really key. You know, they recognize patterns of fraud and they flag it and they bring it out. I think using technology to your advantage to identify these patterns of fraud and investigate, report and root them out is probably, you know, one of the best techniques for dollar returns.

Katina Michael: How long were you actually working for the US Secret Service, as a matter of interest? Was it the length of your alleged, or so-called prison term, or how did that work?

Dan DeFilippi: No. So I was arrested early December 2004. I started working with the Secret Service in April 2005, so about six months later. And I worked with them fulltime almost for two years. I cut back on the hours a little bit towards the end, because I went back to university. But it was, it was almost exactly two years, and most of it was fulltime.

Katina Michael: I’ve heard that the US is tougher on cybercrime relative to other crimes. Is this true?

Dan DeFilippi: The punishment for credit card fraud is eight-and-a-half years in the US.

Katina Michael: Do these sentences reduce the likelihood that someone might get caught up in this kind of fraud?

Dan DeFilippi: It’s a contested topic that’s been hotly debated for a long time. And also in ethics, you know, it’s certainly an interesting topic as well. But I think it depends on the type of person. I wasn’t a hardened criminal, I wasn’t the fella down on the street, I was just a kid playing around at first that just got more serious and serious as time went on. You know, I had a great upbringing, I had good morals. And I think to that type of person, it does have an impact. I think that somebody who has a bright future, or could have a bright future, and could throw it all away for a couple of hundred thousand dollars, or whatever, they recognize that, I think. At least the more intelligent people recognize it in that ... you know, “This is going to ruin my life or potentially ruin a large portion of my life.” So, I think it’s obviously not the only deterrent but it can certainly be useful.

Katina Michael: You note that you worked alone. Was this always the case? Did you recruit people to assist you with the fraud and where did you go to find these people?

Dan DeFilippi: Okay. So I mainly worked alone but I did also work with other people, like I said. I was very careful to protect myself. I knew that if I had partners that I worked with regularly it was high risk. So what I did was on these discussion forums, I often chatted with people beyond just doing the credit card fraud, I did other things as well. I sold fake IDs online. I sold the printed cards online. And because I was doing this, I networked with people, and there were a few cases where I worked with other people. For example, I met somebody online. Could have been law enforcement, I don’t know. I would print them a card, send it to them, they would buy something in the store, they would mail back the item, the thing they bought, and then I would sell them online and we would split the money 50/50.

Katina Michael: Was this the manner you engaged others? An equal split?

Dan DeFilippi: Yes, actually, exactly the same deal for instance, with the person I was working with in person, and that person I met through my fake IDs. When I had been selling the fake IDs, I had a network of people that resold for me at the schools. He was one of the people that had been doing that. And then when he found out that I was going to stop selling IDs, I sort of sold him my equipment and he kind of took over. And then he realized I must have something else going on, because why would I stop doing it, it must be pretty lucrative. So when he knew that, you know, he kept pushing me. “What are you doing? Hey, I want to get involved.” And this and that. So it was that person that I happened to meet in person that in the end was my downfall, so to speak.

Katina Michael: Did anyone, say a close family or friend, know what you were doing?

Dan DeFilippi: Absolutely not. No. And I, I made it a point to not let anyone know what I was doing. I almost made it a game, because I just didn’t tell anybody anything. Well, my family I told I had a job, you know, they didn’t know... but all my friends, I just told them nothing. They would always ask me, you know, “Where do you get your money? Where do you get all this stuff?” and I would just say, “Well, you know, doing stuff.” So it was a mystery. And I kind of enjoyed having this mysterious aura about me. You know. What does this guy do? And nobody ever thought it would be anything illegitimate. Everybody thought I was doing something, you know, my own websites, or maybe thought I was doing something like pornography or something. I don’t know. But yeah, I definitely did not tell anybody else. I didn’t want anybody to know.

Katina Michael: What was the most outrageous thing you bought with the money you earned from stolen credit cards?

Dan DeFilippi: More than the money, it is probably the outrageous things that I did with the cards that matter. In my case the main motivation was not the money alone; the money was almost valueless to a degree. Anything that anyone could buy with a card in a store, I could get for free. So, this is a mind-set change a fraudster goes through that I didn’t really highlight yet. But money had very little value to me, directly, just because there was so much I could just go out and get for free. So I would just buy stupid random things with these stolen cards. You know, for example, in the incident that actually ended up leading to my arrest, we had gone out and we had purchased a laptop before the one that failed, and we bought pizza. You know? So you know, a $10 charge on a stolen credit card for pizza, risking arrest, you know, for, for a pizza. And I would buy stupid stuff like that all the time. And just because I knew it, I had that experience, I could just get away with it mostly.

Katina Michael: You’ve been pretty open with interviews you’ve given. Why?

Dan DeFilippi: It helped me move on and not to keep secrets.

Katina Michael: And on that line of thinking, had you ever met one of your victims? And I don’t mean the credit card company. I actually mean the individual whose credit card you defrauded?

Dan DeFilippi: So I haven’t personally met anyone but I have read statements. So as part of sentencing, the prosecutor solicited statements from victims. And the mind-set is always, “Big faceless corporation, you know, you just call your bank and they just, you know, reverse the charges and no big deal. It takes a little bit of time, but you know, whatever.” And the prosecutor ended up getting three or four statements from individuals who actually were impacted by this, and honestly, you know, I felt very upset after reading them. And I do, I still go back and I read them every once in a while. I get this great sinking feeling, that these people were affected by it. So I haven’t actually personally met anyone but just those statements.

Katina Michael: How much of hacking do you think is acting? To me traditional hacking is someone sort of hacking into a website and perhaps downloading some data. However, in your case, there was a physical presence, you walked into the store and confronted real people. It wasn’t all card-not-present fraud where you could be completely anonymous in appearance.

Dan DeFilippi: It was absolutely acting. You know, I haven’t gone into great detail in this interview, but I did hack credit card information and stuff, that’s where I got some of my info. And I did online fraud too. I mean, I would order stuff off websites and things like that. But yeah, the being in the store and playing that role, it was totally acting. It was, like I mentioned, you are playing the part of a normal person. And that normal person can be anybody. You know. You could be a high-roller, or you could just be some college student going to buy a laptop. So it was pure acting. And I like to think that I got reasonably good at it. And I would come up with scenarios. You know, ahead of time. I would think of scenarios. And answers to situations. I came up with techniques that I thought worked pretty well to talk my way out of bad situations. For example, if I was going to go up and purchase something, I might say to the cashier, before they swiped the card, I’d say, “Oh, that came to a lot more than I thought it would be. I hope my card works.” So that way, if something happened where the card was declined or it came up call for authorization, I could say, “Oh yeah, I must not have gotten my payment” or something like that. So, yeah, it was definitely acting.

Katina Michael: You’ve mentioned this idea of downward spiraling. Could you elaborate?

Dan DeFilippi: I think this is partially something that happens and it happens if you’re in this and do this too much. So catching people early on, before this takes effect, is important. Now, when you’re trying to catch people involved in this, you have to really think about these kinds of things. Like, why are they doing this? Why are they motivated? And the thought process, like I was saying, is definitely very different. In my case, because I had this hacker background, and I wasn’t, you know, like some street thug who just found a computer, I did it for more than just the money. I mean, it was certainly because of the challenge. It was because I was doing things I knew other people weren’t doing. I was kind of this rogue figure, this rebel. And I was learning at the edge. And especially, if I could learn something, or discover something, some technique, that I thought nobody else was using or very few people were using it, to me that was a rush. I mean, it’s almost like a drug. Except with a drug, with an addict, you’re chasing that “first high” but can’t get back to it, and with credit card fraud, your “high” is always going up. The more money you make, the better it feels. The more challenges you complete, the better you feel.

Katina Michael: You make it sound so easy. That anyone could get into cybercrime. What makes it so easy?

Dan DeFilippi: So really, you’ve got to fill the holes in the systems so they can’t be exploited. What happens is crackers, i.e. criminal hackers, and fraudsters look for easy access. If there are ten companies that they can target, and your company has weak security while the other nine have strong security, they’re going after you. Okay? Also, in the reverse: if your company has strong security and nine others have weak security, well, they’re going to have a field-day with the others and they’re just going to walk past you. You know, they’re just going to skip you and move on to the next target. So you need to patch the holes in your technology and in your organization. I don’t know if you’ve noticed recently, but there’s been all kinds of hacking in the news. The PlayStation Network was hacked, along with a lot of US targets. These are basic things that would have been discovered had they had proper controls in place, or proper security auditing happening.

Katina Michael: Okay, so there is the systems focus of weaknesses. But what about human factor issues?

Dan DeFilippi: So another step to the personnel is training. Training really is key. And I’m going to give you two stories, very similar but with totally different outcomes, that happened to me. So a little bit more about what I used to do frequently. I would mainly print fake credit cards, put stolen data on those cards and use them in store to go and purchase items. Electronics, and things like that, to go and re-sell them. So ... and in these two stories, I was at a big-box, well-known electronics retailer, with a card with a matching fake ID. I also made the driver’s licenses to go along with the credit cards. And I was at this first location to purchase a laptop. So pick up your laptop and then go through the standard process. And when committing this type of crime you have to have a certain mindset. So you have to think, “I am not committing a crime. I am not stealing here. I am just a normal consumer purchasing things. So I am just buying a laptop, just like any other person would go into the store and buy a laptop.” So in this first story, I’m in the store, purchasing a laptop. Picked it out, you know, went through the standard process, they went and swiped my card. And it came up with a ‘CFA’ – call for authorization. Now, a call for authorization is a case where it’s flagged on the computer and you actually have to call in and talk to an operator that will then verify additional information to make sure it’s not fraud. If you’re trying to commit fraud, it’s a bad thing. You can’t verify this, right? So this is a case where it’s very possible that you could get caught, so you try to talk your way out of the situation. You try to walk away, you try to get out of it. Well, in this case, I was unable to escape. I was unable to talk my way out of it, and they did the call for authorization. They called in. We had to go up to the front of the store, there was a customer service desk, and they had somebody up there call it in and discuss this with them. And I didn’t overhear what they were saying. I had to stand to the side. About five or ten minutes later, I don’t know, I pretty much lost track of time at that point, they come back to me and they said, “I’m sorry, we can’t complete this transaction because your information doesn’t match the information on the credit card account.” That should have raised red flags. That should have meant the worst alarm bells possible.

Katina Michael: Indeed.

Dan DeFilippi: There should have been security coming up to me immediately. They should have notified higher people in the organization to look into the matter. But rather than doing that, they just came up to me, handed me back my cards and apologized. Poor training. So just like a normal consumer, I act surprised and alarmed and amused. You know, and I kind of talked my way out of this too, “You know, what are you talking about? I have my ID and here’s my card. Obviously this is the real information.” Whatever. They just let me walk out of the store. And I got out of there as quickly as possible. And you know, basically walked away and drove away. Poor training. Had that person had the proper training to understand what was going on and what the situation was, I probably would have been arrested that day. At the very least, there would have been a foot-chase.

Katina Michael: Unbelievable. That was very poor on the side of the cashier. And the other story you were going to share?

Dan DeFilippi: The second story was the opposite experience. The personnel had proper training. Same situation. Different store. Same big-box electronic store at a different place. Go in. And this time I was actually with somebody else, who was working with me at the time. We go in together. I was posing as his friend and he was just purchasing a computer. And this time we, we didn’t really approach it like we normally did. We kind of rushed because we’d been out for a while and we just wanted to leave, so we kind of rushed it faster than a normal person would purchase a computer. Which was unusual, but not a big deal. The person handling the transaction tried to upsell, upsell some things, warranties, accessories, software, and all that stuff, and we just, “No, no, no, we don’t ... we just want to, you know, kind of rush it through.” Which is kind of weird, but okay, it happens.

Katina Michael: I’m sure this would have raised even a little suspicion however.

Dan DeFilippi: So when he went to process the transaction, he asked for the ID with the credit card, which happens at times. But at this point the person I was with started getting a little nervous. He wasn’t as used to it as I was. My biggest thing was I never panicked, no matter what the situation. I always tried to not show nervousness. And so he’s getting nervous. The guy’s checking his ID, swipes the card, okay, finally going to go through this, and call for authorization. Same situation. Except for this time, you have somebody here who’s trying to do the transaction and he is really, really getting nervous. He’s shifting back and forth. He’s in a cold sweat. He’s fidgeting. Something’s clearly wrong with this transaction. Now, the person who was handling this transaction, the person who was trying to take the card payment and everything, it happened to be the manager of this department store. He happened to be well-trained. He happened to know and realize that something was very wrong here. Something was not right with this transaction. So the call for authorization came up. Now, again, he had to go to the front of the store. He, he never let that credit card and fake ID out of his hands. He held on to them tight the whole time. There was no way we could have gotten them back. So he goes up to the front and he says, “All right, well, we’re going to do this.” And we said, “Okay, well, we’ll go and look at the stock while you’re doing it.” You know. I just sort of tried to play it off, and as soon as he walked away, I said, “We need to get out of here.” And we left, leaving behind the ID and card. Some may not realize it as I am retelling the story, but this is what ended up leading to my arrest. They ran his photo off his ID on the local news network, somebody recognized him, turned him in, and he turned me in. So this was an obvious case of good, proper training. This guy knew how to handle the situation, and he not only prevented that fraud from happening, he prevented that laptop from leaving the store. But he also helped to catch me, and somebody else, and shut down what I was doing. So clearly, you know, failing to train people leads to failure. Okay? You need to have proper training. And you need to be able to handle the situation.

Katina Michael: What did you learn from your time at the Secret Service?

Dan DeFilippi: So, a little bit more in-depth on what I observed of cybercriminals when I was working with the Secret Service. Now, this is going to be a little aside here, but it’s relevant. People are arrogant. You have to be arrogant to commit a crime, at some level. You have to think you can get away with it. You’re not going to do it if you think you’re going to get caught. So there’s arrogance there. And this same arrogance can be used against them. Up until the point where I got caught, in the story I just told you that led to my arrest, I was arrogant. I actually wasn’t protecting myself as well as I should have been. Had I been investigated closer, had law enforcement been monitoring me, they could have caught me a lot earlier. I left traces back to my office. I wasn’t very careful with protecting my office, and they could have come back and found me. So you can play off arrogance, but also ignorance, obviously. They go hand-in-hand. The more arrogant somebody is, the more risk they’re willing to take. One of the things we found frequently works to catch people is email. Most people don’t realize that email actually contains the IP address of your computer. This is the identifier on the Internet that distinguishes who you are. Even a lot of criminals who are very intelligent, who are involved in this stuff, do not realize that email shows this. And it’s very easy. You just look at the source of the email and boom, there you go, you’ve got somebody’s location. This was used countless times, over and over, to catch people. Now, obviously the real big fish, the people who are really intelligent and really into this, take steps to protect themselves from that, but then those are the people who are supremely arrogant.

Katina Michael: Can you give us a specific example?

Dan DeFilippi: One case that happened a few years ago; let’s call the individual “Ted”. He actually ran a number of these online forums. These are “carding” forums, online discussion boards where people commit these crimes. And he was extremely arrogant. He was extremely, let’s say, egotistical as well. He was very good at what he did. He was a good cracker, though he got caught multiple times. So he actually ran one of these sites, and it was a large site, and in the process he even hacked law enforcement computers and found out information about some of these other operations that were going on. He actually outed some informants, but a lot of people didn’t believe him. And his arrogance is really what led to his downfall. Because he was so arrogant, he thought that he could get away with everything. He thought that he was protecting himself. And the fact of the matter was, law enforcement knew who he was almost the whole time. They tracked him back using basic techniques, just like using email. Actually, email was used as part of the evidence, but they had found him before that. And it was his arrogance that really led to his getting arrested again, because he just didn’t protect himself well enough. I really cannot emphasize it enough: this can be used against people.

Katina Michael: Do you think that cybercrimes will increase in size and number and impact?

Dan DeFilippi: Financial crime is going up and up. And everybody knows this. The reality is that technology works for criminals as much as it works for businesses. Large organizations just can’t evolve fast enough. They’re slow in comparison to cybercriminals.

Katina Michael: How so?

Dan DeFilippi: A criminal’s going to use any tools they can to commit their crimes. They’re going to stay on top of their game. They’re going to be at the forefront of technology. They’re going to be the ones out there pioneering new techniques, finding the holes in new systems before anybody else to get access to your data. They’re going to be the ones out there. And combine that with the availability of information. When I started hacking back in the ‘90s, it was not easy to learn. You really pretty much had to go into these chat-rooms and become kind of like an apprentice. You had to have people teach you.

Katina Michael: And today?

Dan DeFilippi: Well, in the 2000s, when I started doing the identification stuff, there was easier access to data. There were more discussion boards, places where you could learn about these things, and today it’s super easy to find any of this information. I actually wrote some tutorials on how to conduct credit card fraud. I wrote, like, a guide to in-store carding. I included how to go about it, what equipment to use, what to purchase, and it’s all out there in the public domain. You don’t even have to understand any of this. You could know nothing about technology, spend a few hours online searching for this stuff, learn how to do it, order the equipment overnight, and the next day you could be out there doing it. That’s how easy it is. And that’s why it’s really going up, in my opinion.

Katina Michael: Do you think credit card fraudsters realize the negative consequences of their actions?

Dan DeFilippi: People don’t realize that there is a real negative consequence to this nowadays. I’m not sure what the laws are in Australia on identity theft and credit card fraud, but in the United States it used to be very, very easy to get away with. If you were caught, it would be a slap on the wrist. Almost nothing would happen to you. It was more like give the money back, and possibly serve jail time if it was a repeat offence, but really that was no deterrent. Then it exploded after the dot-com crash, and a few years ago a new law was passed making identity theft punishable by a mandatory two years in prison. And credit card fraud is considered identity theft in the United States. So you’re guaranteed some time in jail if caught.

Katina Michael: Do you think people are aware of the penalties?

Dan DeFilippi: People don’t realize it. And they think, “Oh, it’s nothing, you know, a slap on the wrist.” There is a need for more awareness and campaigning on this matter. People need to be aware of the consequences of their actions. Had I realized how much time I could serve for this kind of crime, I probably would have stopped sooner. Long story short, because I worked with the Secret Service and trained them for a few years, I managed to keep myself out of prison. Had I not done that, I would have actually been facing eight-and-a-half years. That’s serious, especially for somebody in their early 20s. And really, had that happened, my future would have been ruined, I think. I probably would have become a lifelong criminal, because prisons are basically teaching institutions for crime. So really, had I known, had I realized it, I wouldn’t have done it. And I think especially younger people, if they realize the major consequences of these actions, that they can be caught nowadays, and that there are people out there looking to catch them, that really would help cut back on this. Catching people earlier is of course also ideal. Had I been caught early on, before my mind-set had changed and the emotional ties had been broken, I think I would have definitely stopped before it got this far. It would have made a much bigger impact on me. And that’s it.

Future Research Directions

Due to the availability of information over the internet, non-technical people can easily commit “technical” crimes. The internet hosts many tutorials and guides to committing fraud, ranging from counterfeit documents to credit card fraud. Many of the most successful offenders are hackers turned carders, those who understand and know how to exploit technology to commit their crimes (Turgeman-Goldschmidt, 2008). They progress from breaking into computers to committing fraud when they discover how much money there is to be made. All humans rationalize their actions. The primary rationalization criminals use when committing fraud is blaming the victim: they claim that the victim should have been more knowledgeable, should have taken more steps to protect themselves, or should have taken some action to avoid the fraud. Confidence scams were legal in the US until a decade ago due to the mindset that it was the victim’s fault for falling for the fraud. There needs to be much more research conducted into the psychology of the cybercriminal. Of course, technological solutions abound in the market, but this is less a technology problem than a human-factor problem. Patents for technologies that make credit cards more secure abound, yet with near field communication (NFC) cards now on the market, fraud is being propelled as investment continues in insecure devices. One has to wonder why these technologies are being chosen when they simply increase the risk appetite. There also has to be more campaigning in schools, informing young people of the consequences of cybercrime, especially given that so many schools are now mandating the adoption of tablets and other mobile devices in high school.

Conclusion

Avoiding detection, investigation, and arrest for committing identity theft or electronic fraud is, in most cases, fairly simple when compared to other types of crime. When using the correct tools, the internet allows the perpetrator to maintain complete anonymity through much of the crime (Wall, 2015). In the case of electronic fraud, the only risk to the perpetrator is when receiving the stolen money or goods. In some cases, such as those involving online currencies designed to be untraceable, it may be impossible for authorities to investigate due to the anonymity built into the system. But the internet and the broad reach of information form a two-way street and can also work in law enforcement’s favor. Camera footage of a crime, such as someone using a stolen credit card at a department store, can now be easily and inexpensively distributed for the public to see. The same tools that keep criminals anonymous can be used by law enforcement to avoid detection during investigations. As with “traditional” crimes, catching a fraudster comes down to mistakes: a single mistake can unravel the target’s identity. One technique used by the US Secret Service is to check emails sent by a target for the originating IP address, a detail that is often overlooked. Engaging a target in online chat and subpoenaing IP records from the service provider is often successful as well. Even the most technologically savvy criminal may slip up once and let their true IP address through.
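
As a concrete illustration of this header-inspection technique, the short Python sketch below pulls candidate IP addresses out of an email’s Received: headers. It is a minimal sketch, not the Secret Service’s actual tooling: the file path and the assumption that the message is stored as a raw .eml file are hypothetical, and Received: headers can be forged, so only the hops added by servers the investigator trusts are reliable evidence.

    import re
    import sys
    from email import message_from_binary_file
    from email.policy import default

    # IPv4 addresses as they appear inside Received: headers.
    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def received_ips(path):
        # Received: headers are prepended by each relay, so the last entry
        # in the list is usually the hop closest to the sender, i.e. the
        # best candidate for the originating IP.
        with open(path, "rb") as f:
            msg = message_from_binary_file(f, policy=default)
        hops = msg.get_all("Received", [])
        return [(i, IPV4.findall(hop)) for i, hop in enumerate(hops)]

    if __name__ == "__main__":
        # Usage (hypothetical file name): python trace.py message.eml
        for hop, ips in received_ips(sys.argv[1]):
            print(f"hop {hop}: {ips}")

In practice, any address recovered this way would be corroborated against ISP subscriber records obtained by subpoena, as noted above.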

Many types of fraud can be prevented through education. The general population becomes less vulnerable, and law enforcement is more likely to find the perpetrator. A store clerk who is trained to recognize the security features of credit cards, checks, and IDs will be able to catch a criminal in the act. The problem with education is its cost. A store may not find a positive return on investment for the time spent training minimum-wage employees. Law enforcement may not have the budget for additional training or the personnel available to investigate the crime.

Added security can also prevent certain types of crime. Switching from magnetic stripe to chip-and-PIN payment cards reduced card-present fraud in Europe, but more recently we have seen the introduction of NFC cards that do not require a PIN for transactions under $100 (a toy illustration of such a floor-limit rule is sketched at the end of this section). Consumers may be reluctant to adopt new technologies due to the added process or learning curve. Chip and PIN has not been adopted in the USA due to the reluctance of merchants and banks: the cost of the change is seen as higher than the cost of fraud. NFC cards, on the other hand, allegedly add to the convenience of conducting transactions and have seen a higher uptake in Australia. However, some merchants refuse to accept NFC transactions, as fraudsters usually go undetected and the merchant is left with the losses to address.

Human exploitation is the largest factor in fraud and can make or break a scam (Hadnagy, 2011). Social engineering can play an important role when exploiting a system. Take using a stolen credit card to purchase an item in a store. If the fraudster appears nervous and distracted, employees may become suspicious. Confidence goes a long way. When purchasing a large-ticket item, the fraudster may suggest to the cashier that he hopes the total is not over his limit, or that he hopes his recent payment has cleared. When presented with an explanation for failure before a failure happens, the employee is less likely to suspect fraud. However, if there is more training invested when new employees start at an organization, the likelihood that basic frauds will be detected is very high. There is also the growing incidence of insider attack, where an employee knowingly accepts an illegitimate card from a known individual and then splits the profits. Loss prevention strategies need to be implemented by organizations, and the sector as a whole needs to address the credit card fraud problem in a holistic manner, with all the relevant stakeholders engaged and working together to crack down on cybercrime.
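
To make the NFC floor-limit point above concrete, here is a toy Python sketch of the kind of cardholder-verification rule at issue: contactless transactions under a fixed limit skip the PIN, which is exactly the convenience/fraud trade-off discussed. The limit value, constant, and function names are illustrative assumptions, not any card scheme’s published rules.

    from decimal import Decimal

    # Assumed floor limit for illustration only (e.g. AUD 100).
    CONTACTLESS_NO_PIN_LIMIT = Decimal("100.00")

    def requires_pin(amount: Decimal, contactless: bool) -> bool:
        """Return True if the terminal should demand cardholder verification."""
        if not contactless:
            return True  # chip-and-PIN path: always verify
        # NFC path: no PIN below the floor limit. Convenient, but it means a
        # stolen card can be tapped repeatedly for sub-limit amounts.
        return amount >= CONTACTLESS_NO_PIN_LIMIT

    assert requires_pin(Decimal("99.95"), contactless=True) is False
    assert requires_pin(Decimal("100.00"), contactless=True) is True
    assert requires_pin(Decimal("20.00"), contactless=False) is True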

References

Aguilar, M. (2015). Here's Why Your Bank Account Is Less Secure Than Your Gmail. Gizmodo. Retrieved from http://gizmodo.com/heres-why-your-bank-account-is-less-secure-than-your-gm-1683777281

Broadhurst, R. (2006). Developments in the global law enforcement of cyber-crime. Policing: An International Journal of Police Strategies & Management, 29(3), 408–433. doi:10.1108/13639510610684674

Hadnagy C. (2011). Social Engineering: The Art of Human Hacking. Indiana: John Wiley.

Herley, C., van Oorschot, P. C., & Patrick, A. S. (2009). Passwords: If We’re So Smart, Why Are We Still Using Them? In Financial Cryptography and Data Security, LNCS (Vol. 5628, pp. 230-237).

Levi, M. (2008). Organized fraud and organizing frauds: Unpacking research on networks and organization. Criminology & Criminal Justice, 8(4), 389–419. doi:10.1177/1748895808096470

Reardon, B., Nance, K., & McCombie, S. (2012). Visualization of ATM Usage Patterns to Detect Counterfeit Cards Usage. Proceedings of the 45th Hawaii International Conference on System Sciences (HICSS), Hawaii (pp. 3081-3088). doi:10.1109/HICSS.2012.638

Turgeman-Goldschmidt O. (2008). Meanings that hackers assign to their being a hacker. International Journal of Cyber Criminology, 2(2), 382–396.

Wall, D. S. (2015). The Internet as a conduit for criminal activity. In A. Pattavina (Ed.), Information Technology and the Criminal Justice System (pp. 77-98). London: Sage Publications.

Key Terms and Definitions

Authorization: The process of approving an electronic credit card transaction and holding the corresponding balance as unavailable until either the merchant clears the transaction or the hold lapses.

Call for Authorization: Also known as a CFA. A message that may come up when attempting to purchase something using a credit card, requiring the store to call the card issuer and verify the transaction.

Carding: Illegal use of a credit card. When criminals use carding to verify the validity of stolen card data, they test the card by using it to make a small online purchase on a website that processes transactions in real time. If the transaction is processed successfully, the thief knows the card is still good to use.

Card-Not-Present Fraud: Fraud committed when purchases are made over the phone or internet using card details without the card being physically presented.

Credit Card Fraud: Defined as the fraudulent acquisition and/or use of credit cards or card details for financial gain.

Cybercrime: Either crimes where computers or other information technologies are an integral part of an offence or crimes directed at computers or other information technologies (such as hacking or unauthorized access to data).

Hacking: Criminals can hack into databases of account details held by banks that hold customer information, or intercept account details that travel in unencrypted form. Hacking bank computers can lead to the withdrawal of sums of money in excess of account credit balances.

Identity Document Forgery: The process by which identity documents issued by banks are copied and/or modified by unauthorized persons for the purpose of deceiving those who would view the documents about the identity of the bearer.

Merchant Account: An account that allows businesses to process credit card transactions.

Risk Appetite and Tolerance: Can be defined as ‘the amount and type of risk that an organization is willing to absorb in order to meet its strategic objectives’.

Citation: DeFilippi, Dan and Katina Michael. "Credit Card Fraud: Behind the Scenes." Online Banking Security Measures and Data Protection. IGI Global, 2017. 263-282. Web. 6 Jan. 2018. doi:10.4018/978-1-5225-0864-9.ch015

Sociology of the docile body

Abstract

[Image: Cover of Michel Foucault, Discipline and Punish: The Birth of the Prison, trans. Alan Sheridan (Penguin Social Sciences).]

Embedded radio-frequency identification (RFID), sensor technologies, biomedical devices and a new breed of nanotechnologies are now being commercialized within a variety of contexts and use cases. As these technologies gather momentum in the marketplace, consumers will need to navigate the changing cybernetic landscape. The trichotomy facing consumers is: (1) to adopt RFID implants as a means of self-expression or to resolve a technological challenge; (2) to adopt RFID implants for diagnostic or prosthetic purposes to aid in restorative health; or (3) to face enforced adoption stemming from institutional or organizational top-down control that has no direct benefit to the end-user. This paper uses the penal metaphor to explore the potential negative impact of enforced microchipping. The paper concludes with a discussion of the importance of protecting human rights and freedoms and the right to opt out of sub-dermal devices.

Section I. Introduction

Radiofrequency identification (RFID) implant technology, sensor technology, biomedical devices, and nanotechnology continue to find increasing application in a variety of vertical markets. Significant factors leading to continued innovation include: convergence in devices, miniaturisation, storage capacity, and materials. The most common implantable devices are used in the medical domain, for example, heart pacemakers and implantable cardioverter defibrillators (ICDs). In non-medical applications, implantable devices are used for identification, [close-range] location and condition monitoring, care and convenience use cases [1].

RFID implants can be passive or active, and predominantly have a function to broadcast a unique ID when triggered by a reader within a specific read range. Sensors onboard an RFID device can, for instance, provide additional data such as an individual's temperature reading, pulse rate and heart rate. Biomedical devices usually have a specific function, like the provision of an artificial knee or hip, and can contain RFID and other specific sensors. An example cited in Ratner & Ratner that demonstrates the potential for nanotechnology to bring together RFID, sensors, and the biomedical realms is to inject nanobots into a soldier's bloodstream. “The sensors would circulate through the bloodstream and could be monitored at a place where blood vessels are closest to the surface, such as the eye… While quite invasive, so-called in vivo sensors could also have other uses in continually monitoring the health of a soldier” [2], p. 42f.

The next step in the miniaturization path for RFID microchips is nanotechnology, which allows for working at the nanoscale, that is, the molecular level [3], p. 90. Humancentric implants are discussed in [4], pp. 198-214, in the context of the ethical and social implications of nanotechnology. Regardless of the breakthroughs to come in these humancentric embedded surveillance devices (ESDs), the discussion will soon move beyond merely how the technologies are aiding humanity, whether such technologies are mobilized to aid human health or to impair it. The fundamental concerns will rest with human willingness to adopt the technology, and not with what the technology claims to eradicate in and of itself. In order to later contextualize the issues surrounding the human right of refusal, this paper will now present a material view of implantable technologies in their nascent stage. A clear distinction will be made between nanotechnologies that can be used as a mechanism of control versus, for example, bio-medical technologies that are freely chosen and designed for the sole purpose of improving human health, with no benefit extending beyond the aid of the individual.

Section II. Previous Work

Although cybernetic technologies have boundless potential to surface under an array of interchangeable names, for the purpose of this paper RFID implants will be investigated, given the degree of global attention they have experienced [5]–[6][7][8]. In Western civilization, RFID is being used for tracking merchandise, and similar devices are used in our family pets to locate them should they roam astray [9]. Now RFID is being considered for 24/7 human location monitoring. In order to offer a pragmatic perspective that does not deviate from one source of research to another, Hervé Aubert's 2011 article entitled “RFID technology for human implant devices” [10] is utilized as the primary source of data, given its seminal contribution to the field.

A. Experimental Stages of Cybernetic Innovations

Aubert investigates one type of RFID known as the VeriChip™, a device presently engineered to provide a data-bank of important records on the individual [5], in particular the application of a personal health record (PHR) for high-risk patients [11], [12]. In addition, this implantable RFID, known for the remote identification of persons or animals, is being considered for the purpose of protective human surveillance [13]. RFID devices are being considered not only for identifying and locating humans, but for their potential to “remotely control human biological functions” [10], [14], p. 676. According to Aubert, this nano-technology is not viable as a ‘spychip’ with current-day technologies: it cannot successfully be connected to a Global Positioning System (which offers real-time tracking), as GPS would require an implant that far surpasses the size of what could realistically be embedded in the human body, and would therefore defeat the notion of a submicron global surveillance system for monitoring human activity. However, there is nothing to say that off-body data receivers, powered by wireless supplies, cannot be stationed at short range to monitor passive responders such as subdermal RFIDs [15]–[16][17]. Currently the anticipated read range depends on the inductive coupling, whose operating frequency is measured in MHz [5].

Aubert concludes by arguing that RFID implants are not suitable for real-time tracking of humans, as their capability to transmit the location of the body is too limited in range, permitting receivers to read passive implanted devices only within a free-space range of 10 cm or less. This limitation makes communication with GPS satellites in an attempt to locate bodies impossible. Once again, this is not to refute the claim that interrogators, stationed territorially, can in turn transmit their data to a centralized global positioning system. Regardless, researchers are arguing that nanotechnologies “[w]ill not exclusively revolve around the idea of centralization of surveillance and concentration of power, […but its greatest potential for negative impact will be centred around] constant observation at decentralized levels” [18], p. 283. In addition, depending on the context, monitoring does not have to be continuous but discrete to provide particular types of evidence. It may well be enough to read an RFID at a given access node point (either on entry or exit), or to know that a given unique ID is inside a building, or even headed in a given direction [19]. Two or more points of reading can also provide intricate details about distance, speed, and time, as equipment readers have their own GPS and IP location [20], [21]. It would be simple enough to tether an implant to a mobile phone or any other device with an onboard GPS chipset. Nokia, for instance, had an RFID reader in one of its 2004 handsets [22].
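
To make this point concrete, the Python sketch below shows how two reads of the same tag at fixed reader locations already imply distance, elapsed time, and average speed. It is a toy example under stated assumptions: the tag ID, reader coordinates, and timestamps are invented, and a real deployment would draw reader positions from the equipment's own GPS or IP location records.

    from dataclasses import dataclass
    from datetime import datetime
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class ReadEvent:
        tag_id: str     # unique ID broadcast by the implant or tag
        lat: float      # fixed latitude of the reader ("interrogator")
        lon: float      # fixed longitude of the reader
        when: datetime  # timestamp of the read

    def haversine_km(a: ReadEvent, b: ReadEvent) -> float:
        # Great-circle distance between the two reader locations.
        dlat = radians(b.lat - a.lat)
        dlon = radians(b.lon - a.lon)
        h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(h))

    def average_speed_kmh(a: ReadEvent, b: ReadEvent) -> float:
        # Average speed implied by two reads of the same tag.
        assert a.tag_id == b.tag_id, "speed only makes sense for the same tag"
        hours = (b.when - a.when).total_seconds() / 3600.0
        return haversine_km(a, b) / hours

    # Hypothetical reads of one tag at an entry node and an exit node.
    entry = ReadEvent("tag-42", -34.4278, 150.8931, datetime(2016, 10, 20, 9, 0))
    exit_ = ReadEvent("tag-42", -34.4054, 150.8784, datetime(2016, 10, 20, 9, 30))
    print(f"average speed: {average_speed_kmh(entry, exit_):.1f} km/h")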

Although such technologies are far from perfected, at least to the degree of synoptic centralization, and beyond concerns surrounding information privacy, subdermal implants designed for the surveillance of humans are being identified as a central ethical challenge [23]. In particular, chips may be either injected subdermally or worn externally on the body, as with a PayBand [24] or FitBit. This in itself is not what creates the most obvious challenge, but rather that such devices have the potential to be implemented with or without the individual's consent, provoking discussion around the need for legislation to keep pace with technological advances [25]. Although the chip is being suggested for use in a number of ways, bioethicists suggest that before these new applications of nanotechnologies become a present-day reality, “[w]e need to examine carefully the very real dangers that RFID implants could pose to our privacy and our freedom” [5], p. 27. Despite this concern, skin-embedded devices are being employed in a multiplicity of ways, more recently by the biohacking communities who are increasingly commercialising their ideas and prototypes [26].

Aubert lists various possible health benefits of embedded RFID chips, such as the following: “[t]o transmit measurements of chemical or biological data inside the body”, as well as to “[m]onitor biological activity” while modifying physiological functions and offering various therapeutic means, such as patient monitoring, for example of glucose concentrations in patients with diabetes [10], p. 676. Another possible health benefit is the potential for monitoring brain activity through “[t]ransponders embedded within the skull” [10], p. 681. Increasingly, implants are being used in techniques such as deep brain stimulation (DBS) and vagus nerve stimulation (VNS) to treat a variety of illnesses [27]. As outlined in Aubert's 2011 article, these transponders communicate with implanted probes, enabling localized microstimulation to be administered in response to neuron signals sent.

At this point, it becomes necessary to distinguish that which is engineered to monitor human organs and is freely adopted as a mechanism to improve one's health from that which comes into effect through a top-down implementation, in which the individual is given no choice about adoption. These two scenarios were demonstrated in a TEDx talk delivered by Katina Michael in 2012 within the “convenience/care” versus “control” contexts [28].

B. Human Versus Machine

[Image: A chain gang in South Carolina, c. 1929-1931. Photo: Doris Umann. Source: “Docile Bodies”, Vestoj, http://vestoj.com/docile-bodies/]

There is a needful distinction between human and machine: between biomedical technology designed, for example, to improve human health or to serve as a means of self-expression (freely chosen by the individual), and technology designed for a benefit external to the individual, which has the ability to be used as a mechanism of control over the citizen. For example, a heart monitor, created to sustain a human, is designed only with the intention of benefiting the patient in a life-sustaining way; such a device has no apparatus external to this cause that could be used to invoke power over the individual, and it is therefore designed with no additional mandate other than improving or maintaining the individual's health [29]. Generally, the decision to adopt such a biomedical implant device is made by the patient, in most developed nations through a process of consent. Because such a device currently has no mechanism for top-down control, stakeholders (i.e., hospitals, medical device purchasers, inbound logistics managers or buyers) have no hidden agenda for adoption. This type of bio-medical device currently possesses no ability to monitor any type of human activity that could contribute to an imbalance of power over the user (in this instance the patient).

More recently, one of the largest suppliers of biomedical devices, Medtronic, has begun to blur the line between devices for care and devices for control. Apart from the hard line that most manufacturers of implants hold on who owns the data emanating from the device [30], companies specialising in biomedical devices are now beginning to engage with other, secondary uses of their implants [31]. Just as wearable devices such as the FitBit are now being used for evidentiary purposes, it will not be long before biomedical devices originally introduced for prosthetic or diagnostic purposes are used to set individualised health insurance premiums, and more. As noted by [29], even in care-related implant applications there is an underlying dimension of control that may propel function creep or scope creep. These are the types of issues that bring science and the arts together. George Grant wrote [32], p. 17:

The thinker who has most deeply pondered our technological destiny has stated that the new copenetrated arts and sciences are now proceeding to the apogee of their determining power around the science of cybernetics; […] the mobilization of the objective arts and sciences at their apogee comes more and more to be unified around the planning and control of human activity.

Section III. Research Approach

Hence, it is important to understand the trichotomy of skin-embedded technologies: technology adoption as a post-modern indicator of the autonomous self exercising human rights [33]; acceptable bio-Western technologies whose sole function is to improve one's existing health conditions (also freely chosen by the individual); and technologies that have the potential to be used as mechanisms of organizational control, implanted through imposed order [34]. When disambiguating the ways in which technology can be used, it is essential to understand that this differentiation requires no thorough understanding of the purpose of the biotechnology or its utility, as the plumb line rests not on the trichotomy of the technology's utility but on the individual's moral freedom and human right to accept or refuse. Therefore, the plumb line remains, not the device's distinct utility, but freedom of choice.

Currently, the question is being posed as to whether legislation will keep pace, which suggests either that a higher articulation of our former constitution is required, or that new legislation be enacted that will explicitly defend the right of the individual to choose for oneself [35].

The ways in which sub-dermal technology may aid correctional facilities' endeavors will be more thoroughly expounded in the next section. A historical look at a specific top-down and bottom-up institution will be examined, not as a raw set of material facts, but in order to draw an inference between the incremental process of correctional ideologies, which remains a prevailing influence today, and the turning of the individual's outward gaze toward self-censorship [36]. Some researchers argue it is highly improbable that laws will be enacted to enforce subdermal devices, with the exception of use on criminals [37]. Therefore, this next section is devoted to an investigation of the penal system.

Section IV. The Penal Metaphor

Because the prisoner is noted as the central focus of a possible industry en route to legalizing the implantation of sub-dermal RFIDs, it becomes imperative to investigate the penal system from an ideological perspective in order to assess its susceptibility [38], pp. 157-249; [39], p. 35. This paper will conclude that there needs to be a distinction between spatial autonomy and moral autonomy, as moral freedom is the higher good, and the right to attain this good supersedes losses that could be incurred as a result of the state invoking disciplinary measures [32].

Generation after generation, civilization oscillates over freedom of choice, blurring the distinction between freely adopting governing rules of belief, following an individualized interrogation of their ethical underpinnings, and conforming to a systematic ruling government without understanding its fundamental doctrine. Often such systems strive to maintain order by imposing indoctrinations, in which the people accept the ideologies of the dominant class through a constant infiltration of information not conducive to the independent thinking of the autonomous self; it is argued that when this knowledge becomes singular it is a form of soft despotism [40]. Through various mechanisms of social control, such as a prevailing slant propagated through the media, an onslaught of persons embodied in space has been led to a place where the individual is losing the ability to see the distinction and thereby to choose for oneself. The specific slant contained within the dominant message is directing Western society to a place imbued with an external message whose constancy softly coerces the viewer or listener in one specific direction [32].

A. A Look at the System as an Apparatus of Control

As the high-tech industry evolves, the media continues to endorse such change, and those adopting a consumerist mentality continue to commoditize their own bodies as a source of consumer capitalism [41] through the latest technological upgrade. It only stands to reason that human adaptation to body-modifying devices will become more and more acceptable as a means to live within society, function in commerce and progress in self-actualization [42]. The authors of this paper argue that when any such movement coerces the people in one specific direction, it is a form of soft despotism, whether invoked intentionally or otherwise [40].

It is within this investigation of the governing forces over the masses that the focus is taken away from the history of the penal institution in itself and placed on the state's reliance on cumulative rationale. Theorists argue that it is this over-reliance on human rationale that is propelling history in one specific direction and thus becomes the force evoking a certain type of social order and governance [43].

In order to elucidate Ann Light's notion of how biotechnology can turn us from outside within, she first turns our attention to the penal system [36]. Theorists argue that the open persecution of punishment found within the penal process has radically shifted to become less detectable and more hidden [44]. This is a far cry from the open persecution experienced by, let us say, Joan of Arc [45]; now, largely due to humanitarianism, the public spectacle of the executioner leading the persecuted to the stake appears to the public who witness it an act of savagery equivalent to the crime itself [44]. Hence the mechanism becomes more hidden, and in this sense is argued to be less pervasive [44]. But is it?

Theorists view the apparatus of the persecutor as moving from control over the body to a much more sophisticated apparatus, which slackens the hold on the tangible physical body in exchange for a far more intricate part of the self. This shifts the focus from the external body to the human mind, which is considered as the seat of the soul and the final battleground [46]. Theorists go on to state that these more sophisticated systems of control will only be confirmed to actually exist as history unfolds [36].

The Panopticon, for example, is a model that can be deemed a control mechanism that is less pervasive, as it moves away from physical punishment to psychological punishment [44]. Specifically, the sanctioned individual believes the monitoring of their behavior to be constant, shifting what may be merely periodic surveillance into a continual presence. The constancy found in this form of surveillance is argued to imprint permanence on the human cognition [36]. It is what M.G. Michael has termed uberveillance—a type of big brother on the inside looking out [47]. In order that the reader may have a clearer understanding of the Panopticon, below is a description of Bentham's institution:

“The hollow interior of the circular Panopticon has an incongruous resemblance to a dovecote with all the doves behind bars. The prisoners' cells are in the circumference, but are open at all times to inspection from the observation tower in the center of the building. The theory of the Panopticon relies on the fiction that each prisoner, alone in his cell, believes that he is under constant observation: yet it is patently impossible that the contractor and his small staff within the central tower could watch 3,000 prisoners at once. So that the prisoners may not know whom he is watching, or whether he is present at all, the contractor must at all times be invisible; and Bentham thought much about deceptive lighting systems to preserve the illusion of the contractor's permanent presence, a “dark spot” at the center of the Panopticon. Observation of a single prisoner for several hours, followed by punishment for any misdemeanors, would convince all the rest of this constant vigilance. Although the contraptions such as Venetian blinds, pinholes and speaking tubes which delighted Bentham have lost some technological credibility, the general principle is readily applicable to modern methods of surveillance” [48], pp. 4-5.

Upon reviewing the detailed description of the institution designed by Bentham, it is easy to see how the panoptic system supports the shift from the body to the mind, which then turns the imprisoned body's gaze inward [36]. Out of fear of punishment, the embodied experience is to begin to self-monitor.

Although some argue Bentham's Panopticon never came to fruition, Michael Ignatieff views it as a “[s]ymbolic caricature of the characteristic features of disciplinary thinking [of] his age” [48], p. 5. Crowther argues:

[According to] Bentham, the Panopticon was not an enclosed relationship between the prisoner and the state, removed from the outside world, but a prison constantly open to public scrutiny. The contractor in his watchtower could be joined at any minute not only by magistrates, but by the prisoners' relatives, the curious, or the concerned, “The great open committee of the tribunal of the world.”

This invokes two types of control of the incarcerated: according to sociology theorists, a top-down approach to surveillance is referred to as organizational surveillance, whereas a bottom-up approach, in which the common citizen becomes the watch-guard, is referred to as inverse surveillance [49]. Bentham became aware of the possible negative impact that constant surveillance by the state and the public could produce on the prisoners' sensibilities, and therefore suggested that the prisoner wear a disguise. The mask would conceal the individual's identity, while each unique disguise would represent the crime that had been committed. Hence, Bentham did make a frail attempt to resolve the way in which the apparatus' constancy could impair one's well-being [48].

The Panopticon illustrated here is merely representational, as the physical apparatus of control is being reflected upon as a means for the reader to relate to the modern-day ideological shift within organizational control that is designed to turn the gaze of the end-user, the prisoner, and such, to self-monitoring. Western civilization, which once employed an external gaze that had previously sought a voice in politics, for instance, is being turned from outside within. According to Ann Light [36], digital technology is promoting this shift.

Section V. Discussion

A. The Impact of Bio-Tech Constancy on the Human Psyche

Whether this surveillance transpires every moment of every day [50], or just in the sanctioned individual's mind, is of little importance, as it is the unknown, the fear of what is “ever-lurking”, that has the greatest potential to negatively impact the human psyche. When the interrogator is no longer human but the receptor is a machine, something even more demoralizing transpires, as the removal of human contact can be likened to placing the prisoner in a type of mechanical quarantine [36], [51].

Embedded surveillance devices (although currently only engineered to accommodate short range, such as within a correctional facility) can be considered the all-seeing pervasive eye, the interrogator. However, the individual being tracked may lack knowledge about what is on the other side, the receptor. This can create a greater monster than real life, as it adds insurmountable pressure due to the unknown and the inability to understand the boundaries and limitations of the surveillance technology. This becomes that much more of an infringement when the device is placed under the individual's skin. Illustratively speaking, rather than seeing it as it is, such as a mark of servitude, a passive information bank, a personal identifier, or a location monitor, the inductive coupling device has the potential to be mistakenly deemed the predator. In support of this notion, modern-day scholars are referring to the reader as the interrogator.

As earlier stated, in this instance the external public gaze of the community and the state will shift from the external all-seeing eye to that which is internalized—regardless of whether the device is passive or active. Over and above Foucault's notion of self-policing, this process could be further accentuated by the person's inability to comprehend the full purpose or limitations of the surveillance ID system they are under. This internalization has the potential to create a feeling of “the beast within” rather than the threat being from without. The writers of this paper argue that this form of internalization of the gaze within the body will heighten the negative impact on one's psyche—ultimately negatively impacting one's state of consciousness [52].

In this sense, Bentham's panoptic vision was never really defeated but is now merely considered at a higher level of sophistication or barbarianism—depending on which way it is looked upon. Rather than institutions embracing practices designed to rehabilitate the prisoner and bring the individual to an eventual state of freedom, bio-tech adoption could impair the recovery process—its constancy heightening psychological fears—making it near impossible for the gaze to ever be disabled within the mind of the end-user. Hence, as Bentham's notion of a free enterprise is accepted on a much more hidden level, and the self turns to policing its own actions, this utter enclosure can be argued to lead the human body to a state of utter docility. This is a subject of debate for psychologists, bioethicists and social scientists alike, and, in support of the phenomenologist, must also include the insider's perspective.

Section VI. Conclusion

Imprisonment is transpiring on many levels and can be argued to be the system that has led Western civilization incrementally to the place it is today, where moral relativism rules the people, causing the moral voice of conviction designed for political and public engagement to be displaced by a turning inward to oneself as a form of self-expression [34]. This may be seen as the result of top-down governing institutes esteeming systematic rationale over the individual's voice—inadvertently marginalizing the embodied self in favor of other forces such as the economy. As the ruling system continues to overextend its control, it ever so gently coerces society in one direction only, massaging the spirit of Epicureanism, which endorses human passion to have full rein over one's own body as the final self-embodied means of conveying a message, while the governing institutions can easily rule over a docile society. In this sense, bio-tech with its constancy may be seen as just one more apparatus designed to control the mind—although hidden, it most certainly is invasive. With current considerations for adoption, it brings to the forefront Orwell's claim in 1984: “Nothing was your own except the few cubic centimetres inside your skull” [53], p. 27.

References

1. K. Michael, A. Masters, "Applications of human transponder implants in mobile commerce", Proceedings of the 8th World Multiconference on Systemics Cybernetics and Informatics, pp. 505-512, 2004.

2. D. Ratner, M. A. Ratner, Nanotechnology and Homeland Security, New Jersey: Prentice Hall, 2005.

3. M. H. Fulekar, Nanotechnology: Importance and Applications, New York: I K International Publishing House, 2010.

4. F. Allhoff et al., What is Nanotechnology and Why Does it Matter? From Science to Ethics, West Sussex: Wiley Blackwell, 2010.

5. K. R. Foster, J. Jaeger, "RFID inside - The murky ethics of implanted chips", IEEE Spectrum, vol. 44, pp. 24-29, 2007.

6. A. Masters, K. Michael, "Lend me your arms: The use and implications of humancentric RFID", Electronic Commerce Research and Applications, vol. 6, pp. 29-39, 2007.

7. K. Michael, M. G. Michael, "The diffusion of RFID implants for access control and epayments: a case study on Baja Beach Club in Barcelona", IEEE Symposium on Technology and Society, pp. 242-252, 2010.

8. K. Michael, M. G. Michael, J. Pitt, "Implementing ‘Namebars’ Using Microchip Implants: The Black Box Beneath the Skin" in This Pervasive Day: The Potential and Perils of Pervasive Computing, London: Imperial College London Press, pp. 163-206, 2010.

9. W. A. Herbert, "No Direction Home: Will the Law Keep Pace With Human Tracking Technology to Protect Individual Privacy and Stop Geoslavery", Law and Policy for the Information Society, vol. 2, pp. 436, 2006.

10. H. Aubert, "RFID technology for human implant devices", Comptes Rendus Physique, vol. 12, pp. 675-683, 2011.

11. K. Michael et al., "Microchip implants for humans as unique identifiers: a case study on VeriChip", Conference on Ethics Technology and Identity (ETI), pp. 81-84, 2008.

12. K. Michael, "The technological trajectory of the automatic identification industry: the application of the systems of innovation (SI) framework for the characterisation and prediction of the auto-ID industry", 2003.

13. A. Masters, K. Michael, "Lend me your arms: The use and implications of humancentric RFID", Electronic Commerce Research and Applications, vol. 6, pp. 29-39, 2007.

14. M. Michaud-Shields, "Personal Augmentation – The Ethics and Operational Considerations of Personal Augmentation in Military Operations", Canadian Military Journal, vol. 15, 2014.

15. "JOVIX", GPS vs. RFID, May 2016, [online] Available: http://atlasrfid.com/jovix-education/auto-id-basics/gps-vs-rfid/.

16. M. Roberti, Has RFID Been Integrated With GPS?, September 2016, [online] Available: http://www.rfidjournal.com/blogs/experts/entry?10729.

17. R. Ip et al., "Location and Interactive services not only at your fingertips but under your skin", IEEE International Symposium on Technology and Society, pp. 1-7, 2009.

18. J. van den Hoven, P. E. Vermaas, "Nano-Technology and Privacy: On Continuous Surveillance Outside the Panopticon", Journal of Medicine & Philosophy, vol. 32, pp. 283-297, 2007.

19. K. Michael, T. Y. Chew, Locat'em: Towards Hierarchical Positioning Systems, 2005, [online] Available: http://works.bepress.com/kmichael/145/.

20. K. Michael et al., "The emerging ethics of humancentric GPS tracking and monitoring", International Conference on Mobile Business, pp. 34-44, 2006.

21. K. Michael et al., "Location-Based Intelligence - Modeling Behavior in Humans using GPS", Proceedings of the International Symposium on Technology and Society, pp. 1-8, 2006.

22. B. Violino, Nokia Unveils RFID Phone Reader, March 2004, [online] Available: http://www.rfidjournal.com/articles/view?834.

23. K. Michael, M. G. Michael, "The social, cultural, religious and ethical implications of automatic identification", Proceedings of the Seventh International Conference in Electronic Commerce Research, pp. 433-450, 2004.

24. D. Buckey, DirectCash Payments Inc. Announces Launch of DC TAG, August 2015, [online] Available: http://pay.band/tag/visa-paywave/.

25. A. Friggieri et al., "The legal ramifications of microchipping people in the United States of America: A state legislative comparison", International Symposium on Technology and Society, pp. 1-8, 2009.

26. L. McIntyre et al., "RFID: Helpful New Technology or Threat to Privacy and Civil Liberties?", IEEE Potentials, vol. 34, pp. 13-18, 2015.

27. K. Michael, "Mental Health Implantables and Side Effects", IEEE Technology and Society Magazine, vol. 34, pp. 5-7, 2015.

28. K. Michael, TEDxUWollongong: Microchipping People, May 2012, [online] Available: https://www.youtube.com/watch?v=fnghvVR5Evc.

29. A. Masters, K. Michael, "Humancentric applications of RFID implants: the usability contexts of control convenience and care", The Second IEEE International Workshop on Mobile Commerce and Services, pp. 32-41, 2005.

30. N. Olson, Joseph Carvalko, A Review of The TechnoHuman Shell, December 2013, [online] Available: http://ieet.org/index.php/IEET/print/8510.

31. E. Strickland, Medtronic Wants to Implant Sensors in Everyone, June 2014, [online] Available: http://spectrum.ieee.org/tech-talk/biomedical/devices/medtronic-wants-to-implant-sensors-in-everyone.

32. G. Grant, Technology & Justice, Ontario: House of Anansi Press Ltd, 1986.

33. S. R. Bradley-Munn, K. Michael, "Whose Body Is It? The Body as Physical Capital in a Techno-Society", IEEE Consumer Electronics Magazine, vol. 5, 2016.

34. S. R. Bradley-Munn et al., "The Social Phenomenon of Body-Modifying in a World of Technological Change: Past, Present, Future", IEEE Norbert Wiener in the 21st Century, Melbourne, 2016.

35. Y. Poullet, "Data protection legislation: What is at stake for our society and democracy?", Computer Law and Security Review, vol. 25, pp. 211-226, 2009.

36. A. Light, "The Panopticon reaches within: how digital technology turns us inside out", Identity in the Information Society, vol. 3, pp. 583-598, 2010.

37. K. Johnson et al., "Consumer Awareness in Australia on the Prospect of Humancentric RFID Implants for Personalized Applications", The Sixth International Conference on Mobile Business, 2007.

38. D. Klitou, Privacy-Invading Technologies and Privacy by Design: Safeguarding Privacy, Liberty and Security in the 21st Century, London: Springer, 2014.

39. M. N. Gasson et al., Human ICT Implants: Technical, Legal and Ethical, The Hague: Springer, 2012.

40. P. A. Rahe, Soft Despotism, Democracy's Drift: What Tocqueville Teaches Today, New Haven: Yale University Press, 2009.

41. C. Klesse, C. Malacrida, J. Low, "Part XIV: Consumer Bodies ‘Modern Primitivism’: Non-mainstream Body Modification and Racialized Representation" in Sociology of the Body: A reader, Don Mills, Ontario: Oxford University Press, 2008.

42. A. H. Maslow, "A Theory of Human Motivation", Psychological Review, vol. 50, pp. 370-396, 1943.

43. P. Rahe et al., Soft Despotism, Democracy's Drift: What Tocqueville Teaches Today (The Heritage Foundation: First Principles Series Report #28 on Political Thought), September 2009, [online] Available: http://www.heritage.org/research/reports/2009/09/softdespotism-democracys-drift-what-tocqueville-teaches-today.

44. M. Foucault, Discipline & Punish: The Birth of the Prison, New York: Vintage Books, 1977.

45. A. Williamson, Biography of Joan of Arc (Jeanne d'Arc), April 1999, [online] Available: http://joan-ofarc.org/joanofarc_biography.html.

46. F. Frangipane, The Three Battlegrounds: An In-Depth View of the Three Arenas of Spiritual Warfare: The Mind the Church and the Heavenly Places, Cedar Rapids: Arrow Publications, Inc., 1989.

47. K. Michael, M. G. Michael, From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (The Social Implications of National Security), Wollongong, 2007.

48. A. Crowther, "Penal Peepshow: Bentham's Prison that Never Was", Times Literary Supplement, vol. 23, pp. 4-5, February 1996.

49. T. Timan, N. Oudshoorn, "Mobile cameras as new technologies of surveillance? How citizens experience the use of mobile cameras in public nightscapes", Surveillance & Society, vol. 10, pp. 167-181, 2012.

50. B. Welsh, "The Entire History of You" in Black Mirror, UK, 2011.

51. C. Malacrida, J. Low, Sociology of the Body: A Reader, Don Mills, Ontario: Oxford University Press, 2008.

52. K. Michael, J. Pitt et al., "Be Vigilant: There are Limits to Veillance" in The ComputerAfter Me, London:, pp. 189-204, 2014.

53. G. Orwell, 1984, London: Signet Classic.

Keywords: Radio-frequency identification, Implants, Biomedical monitoring, Global Positioning System, Surveillance, Context, social sciences, cybernetics, prosthetics, radiofrequency identification, docile body sociology, penal metaphor, institutional top-down control, organizational top-down control, restorative health, diagnostic purpose, prosthetic purpose, RFID implants, cybernetic landscape, nanotechnology, biomedical device, sensor technology, human rights, freedom of choice, opt-out, penal control, constancy

Citation: S.B. Munn, Katina Michael, M.G. Michael, "Sociology of the docile body", 2016 IEEE International Symposium on Technology and Society (ISTAS16), 20-22 Oct. 2016, Kerala, India, DOI: 10.1109/ISTAS.2016.7764047