My Research Programme (2002 - Now)

Schmidt Futures Foundation Grant Proposal

It was a great experience to be a part of the Schmidt Futures Foundation Challenge, and great to see an emphasis on community. Thank you to the City of Tempe for all their support, and especially to the School for the Future of Innovation in Society. We have worked solidly on this project since last September, with nearly weekly meetings. Thanks also to our pitch coach Viktor Brandtneris for his invaluable expertise! Thanks as well to Andrew Nelson and Jacqueline, who were with us every step of the way, and to President Crow, who backed the project and hosted the inaugural event! What a top experience.


Thad Miller (Team Leader), with Devon, Luke, David, Rosa, Katina

Alliance for the American Dream discussion: https://schmidtfutures.com/our-work/alliance-american-dream/

ASU awarded Schmidt Futures: https://asunow.asu.edu/20180424-asu-awarded-schmidt-futures-grant

Schmidt Futures Finalists: https://sfis.asu.edu/node/2682

Example Project: Autonomous Vehicles for Mobility Access: How Self-Driving Cars Can Reduce Transportation Expenses for Middle-Class Arizonans

ASU and the city of Tempe will collaboratively develop public-private partnerships making use of autonomous shuttles to provide low-cost mobility and transit services for middle-income families in Tempe. This project will target several key population centers in the city to provide quality, cost-efficient mobility where it is currently absent. This is expected to decrease annual transportation expenditures by up to $5,000 per household.

Implantable Medical Device Tells All

Implantable Medical Device Tells All: Uberveillance Gets to the Heart of the Matter

In 2015, I provided evidence at an Australian inquiry into the use of subsection 313(3) of the Telecommunications Act 1997 by government agencies to disrupt the operation of illegal online services [1]. I told the Standing Committee on Infrastructure and Communications that mandatory metadata retention laws amounted to blanket surveillance of Australians and visitors to Australia. The intent behind asking Australian service providers to keep subscriber search history data for up to two years was to grant government and law enforcement organizations the ability to search Internet Protocol–based records in the event of suspected criminal activity.

Importantly, I told the committee that, while instituting programs of surveillance through metadata retention laws would likely help to speed up criminal investigations, every individual is also a consumer, and such programs ultimately come back to bite innocent people through some breach of privacy or security. Enter the idea of uberveillance, which, I told the committee, is “exaggerated surveillance” that allows for interference [1] and that I believe is a threat to our human rights [2]. I strongly advised that invoking section 313 of the Telecommunications Act 1997 should require judicial oversight through the process of a search warrant. My recommendations fell on deaf ears, and today we even have the government deliberating over whether it should relax metadata laws to allow information to be accessed for both criminal and civil litigation [3], which includes divorces, child custody battles, and business disputes. In June 2017, Australian Prime Minister Malcolm Turnbull even stated that “global social media and messaging companies” need to assist security services’ efforts to fight terrorism by “providing access to encrypted communications” [52].

Consumer Electronics Leave Digital Data Footprints

Of course, Australia is not alone in having metadata retention laws. Numerous countries have adopted these laws or similar directives since 2005, keeping certain types of data for anywhere between 30 days and indefinitely, although the standard length is somewhere between one and two years. For example, since 2005, Italy has retained subscriber information at Internet cafes for 30 days. I recall traveling to Verona in 2008 for the European Conference on Information Systems, forgetting my passport in my hotel room, and being unable to use an Internet cafe to send a message back home because I was carrying no recognized identity information. When I asked why I was unable to send a simple message, I was handed an antiterrorism information leaflet. Italy also retains telephone data for up to two years and Internet service provider (ISP) data for up to 12 months.

Similarly, the United Kingdom retains all telecommunications data for one to two years. It also maintains postal information (sender and receiver data), banking data for up to seven years, and vehicle movement data for up to two years. In Germany, metadata retention was established in 2008 under the directive Gesetz zur Neuregelung der Telekommunikationsüberwachung und anderer verdeckter Ermittlungsmaßnahmen sowie zur Umsetzung der Richtlinie 2006/24/EG, but it was overturned in 2010 by the Federal Constitutional Court of Germany, which ruled the law unconstitutional because it violated the fundamental right to secrecy of correspondence. In 2015, the issue was revisited, and a compromise was reached to retain telecommunications metadata for up to ten weeks. Mandatory data retention in Sweden was challenged by one holdout ISP, Bahnhof, which was threatened with an approximately US$605,000 fine in November 2014 if it did not comply [4]. Bahnhof defended its stance of protecting the privacy and integrity of its customers by offering a no-logs virtual private network free of charge [5].

Some European Union countries have been deliberating whether to extend metadata retention to chats and social media, while, in the United States, many corporations voluntarily retain subscriber data, including market giants Amazon and Google. The Guardian reported in 2013 that the United States records Internet metadata not only for itself but for the world at large, with the National Security Agency (NSA) using its MARINA database to conduct pattern-of-life analysis [6]. Additionally, the 2008 Amendments Act to the Foreign Intelligence Surveillance Act of 1978 increased the time allotted for warrantless surveillance and added provisions for emergency eavesdropping. Under section 702 of the Foreign Intelligence Surveillance Act of 1978 Amendments Act, all American citizens’ metadata is now stored. Phone records are kept by the NSA in the MAINWAY telephony metadata collection database [53], and short message service and other text messages worldwide are retained in DISHFIRE [7], [8].

Emerging Forms of Metadata in an Internet of Things World

Figure 1. An artificial pacemaker (serial number 1723182) from St. Jude Medical, with electrode, which was removed from a deceased patient prior to cremation. (Photo courtesy of Wikimedia Commons.)

The upward movement toward a highly interconnected world through the Web of Things and people [9] will only mean that even greater amounts of data will be retained by corporations and government agencies around the world, extending beyond traditional forms of telecommunications data (e.g., phone records, e-mail correspondence, Internet search histories, metadata of images, videos, and other forms of multimedia). It should not surprise us that even medical devices are being touted as soon to be connected to the Internet of Things (IoT) [10]. Heart pacemakers, for instance, already send a steady stream of data back to the manufacturer’s data warehouse (Figure 1). Cardiac rhythmic data is stored on the implantable cardioverter-defibrillator’s (ICD’s) memory and is transmitted wirelessly to a home bedside monitor. Via a network connection, the data find their way to the manufacturer’s data store (Figure 2).

Figure 2. The standard setup for an EKG. A patient lies in a bed with EKG electrodes attached to his chest, upper arms, and legs. A nurse oversees the painless procedure. The ICD in a patient produces an EKG (A), which can automatically be sent to an ICD manufacturer's data store (B). (Image courtesy of Wikimedia Commons.)

In health speak, the ICD setup in the patient’s home is a form of remote monitoring that usually takes place when the ICD recipient is at rest, most often while sleeping overnight. It is a bit like how routine computer data backups are scheduled for when network traffic is at its lowest. In the future, an ICD’s proprietary firmware updates may well travel in the other direction, remotely from the manufacturer down to the device, much like a Windows operating system update being installed on a desktop. In the following section, we will explore the implications of access to personal cardiac data emanating from heart pacemakers in two cases.
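To make this data path concrete, the sketch below models a bedside monitor that uploads buffered ICD episodes only during an overnight window. It is a minimal illustration under invented assumptions: the device interface, record fields, endpoint, and transmission window are all hypothetical and do not reflect any manufacturer’s actual protocol.

```python
# Hypothetical sketch of a bedside monitor's nightly telemetry upload.
# The device interface, record fields, endpoint, and window are invented;
# this is not any manufacturer's actual protocol.
import json
from datetime import datetime

UPLOAD_WINDOW_HOURS = range(1, 5)  # 01:00-04:59, while the patient is likely at rest


class SimulatedICD:
    """Stand-in for the implanted device's short-range wireless interface."""

    def read_memory(self):
        # In reality: rhythm episodes and device diagnostics buffered on the ICD.
        return [{"episode": "irregular rhythm detected", "duration_s": 42}]


def send_to_datastore(endpoint: str, payload: str) -> None:
    """Stand-in for the network hop to the manufacturer's data store."""
    print(f"POST {endpoint}: {payload}")


def nightly_upload(icd: SimulatedICD, endpoint: str) -> None:
    """Forward buffered episodes once the overnight window is open."""
    if datetime.now().hour in UPLOAD_WINDOW_HOURS:
        payload = json.dumps({"uploaded_at": datetime.now().isoformat(),
                              "episodes": icd.read_memory()})
        send_to_datastore(endpoint, payload)


if __name__ == "__main__":
    # A real monitor would loop continuously; a single pass is shown here.
    nightly_upload(SimulatedICD(), "https://example.invalid/telemetry")
```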

CASE 1: HUGO CAMPOS DENIED ACCESS TO HIS PERSONAL CARDIAC DATA

Figure 3. A conventional radiograph of a single-chamber pacemaker. (Photo courtesy of Wikimedia Commons.)

In 2007, scientist Hugo Campos collapsed at a train station and was later horrified to find out that he had to get an ICD for his genetic heart condition. ICDs usually last about seven years before they require replacement (Figure 3). A few years into wearing the device, Campos, a high-end quantified-self user who measured his sleep, exercise, and even alcohol consumption, became curious about how he might gain access to the data generated by his ICD (Figure 4). He made several requests to the ICD’s manufacturer and was told that he could not receive the information he sought, despite his doctor having full access. Some doctors could even remotely download a patient’s historical data on a mobile app for 24/7 support during emergency situations (Figure 5). Campos’s heart specialist did grant him access to written interrogation reports, but Campos only saw him about once every six months after his condition stabilized. Additionally, the printed logs were of little use to him, as the fields and layout were largely decipherable only by a doctor (Figure 6).

Figure 4. The Nike FuelBand is a wearable computer that has become one of the most popular devices driving the so-called quantified-self trend. (Photo courtesy of Wikimedia Commons.)

Dissatisfied at being denied access, Campos took matters into his own hands and purchased a device on eBay that could help him get at the data. He also attended a specialist ICD course and then intercepted the cardiac rhythms being recorded [11]. He got to the data stream but realized that, to make sense of it from a patient perspective, a patient-centric app had to be built. Campos quickly deduced that regulatory and liability concerns were at the heart of the matter from the manufacturer’s perspective. How does a manufacturer continue to improve its product if it does not continually get feedback from the actual ICDs in the field? If manufacturers offered mobile apps for patients, might patients misread their own diagnoses? Is a manufacturer there to enhance life alone, or also to make a patient feel better about bearing an ICD? Can an ICD be misused by a patient? Or, in the worst-case scenario, what happens in the case of device failure? Or patient death? Would the proof lie onboard? Would the data tell the true story? These are all very interesting questions.

Figure 5. Medical waveform format encoding rule software on a BlackBerry device. It displays medical waveforms, such as the EKG (shown), electroencephalogram, and blood pressure. Some doctors have software that allows them to interrogate EKG information, but patients presently do not have access to their own ICD data. (Photo courtesy of Wikimedia Commons.)

Campos may well have acted not only to get what he wanted (access to his data his own way) but also to raise global awareness of the type of data being stored remotely from ICDs in patients. He noted in his TEDxCambridge talk in 2011 [12]:

the ICD does a lot more than just prevent a sudden cardiac arrest: it collects a lot of data about its own function and about the patient’s clinical status; it monitors its own battery life; the amount of time it takes to deliver a life-saving shock; it monitors a patient’s heart rhythm, daily activity; and even looks at variations in chest impedance to look if there is build-up of fluids in the chest; so it is a pretty complex little computer you have built into your body. Unfortunately, none of this invaluable data is available to the patient who originates it. I have absolutely no access to it, no knowledge of it.

Doctors, on the other hand, have full 24/7 unrestricted access to this information; even some of the manufacturers of these medical devices offer the ability for doctors to access this information through mobile devices. Compare this with the patients’ experience who have no access to this information. The best we can do is to get a printout or a hardcopy of an interrogation report when you go into the doctor’s office.

Figure 6. An EKG chart. Twelve different derivations (leads) of an EKG of a 23-year-old Japanese man. A similar log was provided to Hugo Campos upon his request for six months’ worth of EKG readings. (Photo courtesy of Wikimedia Commons.)

Campos decided to sue the manufacturer after he was informed that the data generated by his ICD measuring his own heart activity were “proprietary data” [13]. Perhaps this is the new side of big data, but it is fraught with legal implications and, as far as I am concerned, blatantly dangerous. If we accept that a person’s natural biometric data (in this instance, an individual’s cardiac rhythm) belong to a third party, then we are headed into murky waters when we speak of even more invasive technology like deep-brain stimulators [14]. It not only means that the device is not owned by the electrophorus (the bearer of technology) [15], [16], but quite possibly that the cardiac rhythms unique to the individual are also owned by the device manufacturer. We should not be surprised. The “Software and Services” section of Google Glass’s terms of use states that Google has the right to “remotely disable or remove any such Glass service from user systems” at its “sole discretion” [17]. Placing this in the context of ICDs means that a third party effectively has the right to switch someone off.

CASE 2: ROSS COMPTON’S PACEMAKER DATA IS SUBPOENAED FOR CRIMINAL INVESTIGATIONS

Enter the Ross Compton case of Middletown, Ohio. M.G. Michael and I have dubbed it one of the first authentic uberveillance cases in the world, because the technology was not just wearable but embedded. The story goes something like this: On 27 January 2017, 59-year-old Ross Compton was indicted on arson and insurance fraud charges. Police gained a search warrant to obtain his pacemaker readings (heart rate and cardiac rhythms) and called his alibi into question. Data from Compton’s pacemaker before, during, and after the fire broke out in his home were disclosed by the pacemaker manufacturer after a subpoena was served. The insurer’s bill for the damage was estimated at about US$400,000. Police became suspicious of Compton when they found traces of gasoline on his shoes, trousers, and shirt.

In his statement of events to police, Compton told a story that conflicted with his 911 call. Forensic analysts found traces of multiple fires having been lit in various locations in the home. Yet Compton told police he had rushed his escape, breaking a window with his walking stick to throw some hastily packed bags out and then fleeing the flames himself to safety. Compton also told police that he had an artificial heart with a pump attached, a fact that he thought might help his cause but that was to be his undoing. In this instance, his pacemaker acted akin to a black box recorder on an airplane [18].

After the pacemaker data set was secured, an independent cardiologist was asked to assess the telemetry data and determine whether Compton’s heart function was commensurate with the exertion needed to make a break with personal belongings during a life-threatening fire [19]. The cardiologist noted that, based on the evidence he was given to interpret, it was “highly improbable” that a man suffering from the medical conditions that Compton did could manage to collect, pack, and remove the number of items that he did from his bedroom window, escape himself, and then carry those items to the front of his house, out of harm’s way (see “Columbo, How to Dial a Murder”). Compton’s own cardiac readings, in effect, snitched on him, and none were happier than the law enforcement officer in charge of the case, Lieutenant Jimmy Cunningham, who noted that the pacemaker data, while only a supporting piece of evidence, were vital in proving Compton’s guilt after gasoline was found on his clothing. Evidence-based policing, entrenched by the new realm of big data availability, has now well outstripped the more traditional intelligence-led policing approach [20], [21].

Columbo, How to Dial a Murder [S1] Columbo says to the murderer:
“You claim that you were at the physician’s getting your heart examined…which was true [Columbo unravels a roll of EKG readings]…the electrocardiogram, Sir. Just before three o’clock your physician left you alone for a resting trace. At that moment you were lying down in a restful position and your heart showed a calm, slow, easy beat [pointing to the EKG readout]. Look at this part, right here [Columbo points to the reading], lots of sudden stress, lots of excitement, right here at three o’clock, your heart beating like a hammer just before the dogs attacked…Oh you killed him with a phone call, Sir…I’ll bet my life on it. Very simple case. Not that I’m particularly bright, Sir…I must say, I found you disappointing, I mean your incompetence, you left enough clues to sink a ship. Motive. Opportunity. And for a man of your intelligence Sir, you got caught on a lot of stupid lies. A lot.”

[S1] Columbo: How to Dial a Murder. Directed by James Frawley. 1978. Los Angeles, CA: Universal Pictures Home Entertainment, 2006. DVD.

Consumer Electronics Tell a Story

Several things are now of interest to the legal community: first and foremost, how is a search warrant for a person’s pacemaker data executed? In case 1, Campos was denied access to his own ICD data stream by the manufacturer, and yet his doctor had full access. In case 2, Compton’s own data provided authorities with the extra evidence they needed to accuse him of fraud. This is yet another example of seemingly private data being used against an individual (in this instance, the person from whose body the data emanated), but in the future the data from one person’s pacemaker might well implicate other members of the public. For example, a pacemaker might be able to prove that someone’s heart rate substantially increased during an episode of domestic violence [22] or that an individual was unfaithful in a marriage, based on the cross-matching of his or her time-stamp and heart-rate data with another person’s.

Of course, a consumer electronic does not have to be embedded to tell a story (Figure 7). It can also be wearable or luggable, as in the case of a Fitbit that was used as a truth detector in an alleged rape case that turned out to be completely fabricated [23]. Lawyers are now beginning to experiment with other wearable gadgetry that helps to show the impact of personal injury from accidents (work and nonwork related) on a person’s ability to return to his or her normal course of activities [24] (Figure 8). We can certainly expect to see a rise in criminal and civil litigation that makes use of a person’s Samsung S Health data, for instance, which measure things like steps taken, stress, heart rate, SpO2, and even location and time (Figure 9). But cases like Compton’s open the floodgates.

Figure 7. A Fitbit, which measures calories, steps, distance, and floors. (Photo courtesy of Wikimedia Commons.)

Figure 8. A close-up of a patient wearing the iRhythm ZIO XT patch, nine days after its placement. (Photo courtesy of Wikimedia Commons.)

I have pondered the evidence itself: are heart rate data really any different from other biometric data, such as deoxyribonucleic acid (DNA)? Are they perhaps more revealing than DNA? Should they be dealt with in the same way? For example, is the chain of custody for data coming from a pacemaker equal to that of a DNA sample and profile? In some ways, heart rate can be considered a behavioral biometric [25], whereas DNA is actually a cellular sample [26]. No doubt we will be debating these challenges, and extreme perspectives will be hotly contested. But it seems nothing is off limits: if it exists, it can be used for or against you.

Figure 9. (a) and (b) Health-related data from Samsung’s S Health application. Unknown to most is that Samsung has diversified its businesses to become the parent company of one of the world’s largest health insurers. (Photos courtesy of Katina Michael.)

The Paradox of Uberveillance

In 2006, M.G. Michael coined the term uberveillance to denote “an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body” [27]. No doubt Michael’s background as a former police officer in the early 1980s, together with his cross-disciplinary studies, had something to do with his insights into the creation of the term [28]. This kind of surveillance does not watch from above; rather, it penetrates the body and watches from the inside, looking out [29].

Furthermore, uberveillance “takes that which was static or discrete…and makes it constant and embedded” [30]. It is real-time location and condition monitoring and “has to do with the fundamental who (ID), where (location), and when (time) questions in an attempt to derive why (motivation), what (result), and even how (method/plan/thought)” [30]. Uberveillance can be used prospectively or retrospectively. It can be applied as a “predictive mechanism for a person’s expected behavior, traits, likes, or dislikes; or it can be based on historical fact” [30].

In 2008, the term uberveillance was entered into the official Macquarie Dictionary of Australia [31]. In research spanning more than two decades on the social implications of implantable devices for medical and nonmedical applications, I predicted [15] that the technological trajectory of implantable devices once used solely for care purposes would one day see them used retrospectively for tracking and monitoring. Even if the consumer electronics in question were there to provide health care (e.g., the pacemaker example) or convenience (e.g., a near-field-communication-enabled smartphone), the underlying dominant function of the service would be control [32]. The socioethical implications of pervasive and persuasive emerging technologies have yet to be fully understood, but they will increasingly take center stage in court hearings, much as DNA evidence did and, subsequently, global positioning system (GPS) data [33].

Medical device implants provide a very rich source of human activity monitoring, including the electrocardiogram (EKG), heart rate, and more. Companies like Medtronic, among others specializing in implantables, have proposed a future in which even healthy people carry a medical implant packed with sensors that could be life sustaining, detecting heart problems (among other conditions), reporting them to a care provider, and signaling when assistance might be required [34]. Heart readings provide an individual’s rhythmic biometrics and, at the same time, record increases and decreases in activity. One could extrapolate that it won’t be long before our health insurance providers are asking for the same evidence in exchange for reduced premiums.

Figure 10. A pacemaker cemetery. (Photo courtesy of Wikimedia Commons.)

The future might well be one in which we all carry an implantable black box recorder of some sort [35], an alibi that proves our innocence or guilt, minute by minute (Figure 10). Of course, an electronic eye constantly recording our every move brings a new connotation to the wise words expressed in the story of Pinocchio: always let your conscience be your guide. The future black boxes may not be as forgiving as Jiminy Cricket; they may be more like Black Mirror’s “The Entire History of You” [36]. And if we assume that these technologies, whether implantable, wearable, or even luggable, are to be completely trusted, then we are wrong.

The contribution of M.G. Michael’s uberveillance lies in its emphasis that the uberveillance equation is a paradox. Yes, there are near-real-time data flowing continuously from more points of view than ever [37]: closed-circuit TV looking down, smartphones in our pockets recording location and movement, and even implantables in some of us ensuring nontransferability of identity [38]. The proposition is that all this technology in sum total is bulletproof and foolproof, omniscient and omnipresent, a God’s-eye view that cannot be challenged, except that the infrastructure, the devices, and the software are all too human. And while uberveillance is being touted for good through an IoT world that will collectively make us and our planet more sustainable, there is one big crack in the utopian vision: the data can misrepresent, misinform, and be subject to information manipulation [39]. Researchers are already studying the phenomenon of complex visual information manipulation: how to tell whether data have been tampered with, whether a suspect has been introduced into or removed from a crime scene, and other forensic visual analytics [40]. It is why Vladimir Radunovic, director of cybersecurity and e-diplomacy programs at the DiploFoundation, cited M.G. Michael’s contribution that “big data must be followed by big judgment” [41].

What happens in the future if we go down the path of constant bodily monitoring of vital organs and vital signs, where we are all bearing some device or at least wearing one? Will we be in control of our own data or, as seems obvious at present, will we not? How might self-incrimination play a role in our daily lives? Or, even worse, might we face expectations that can only be met by playing to a theater 24/7, so that our health statistics stack up to whatever measure and cross-examination they are put under, personally or publicly [42]? Can we believe the authenticity of every data stream coming out of a sensor onboard consumer electronics? The answer is no.

Having run many years of GPS data-logging experiments, I can say that a lot can go wrong with sensors, and they are susceptible to outside environmental conditions. For instance, they can log your location miles away (even on another continent), the temperature gauge can play up, time stamps can revert to different time zones, the speed of travel can be wildly inaccurate due to propagation delays in satellites, readings may not come at regular intervals due to some kind of interference, and memory overflow and battery issues, while getting better, are still problematic. The long and short of it is that technology cannot be trusted. At best, it can act as supporting evidence but should never replace eyewitness accounts. Additionally, “the inherent problem with uberveillance is that facts do not always add up to truth (i.e., as in the case of an exclusive disjunction T + T = F), and predictions based on uberveillance are not always correct” [30].
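To illustrate the kinds of failures listed above, here is a minimal sketch of the sanity checks one might run over a raw GPS log before relying on it as evidence. The record format, thresholds, and failure labels are illustrative assumptions, not the actual tooling used in my experiments.

```python
# Sanity checks over a raw GPS log; record format, thresholds, and failure
# labels are illustrative only, not the tooling used in the experiments above.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt


@dataclass
class Fix:
    timestamp: datetime
    lat: float
    lon: float


def haversine_km(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def flag_suspect_fixes(fixes, max_speed_kmh=200.0, max_gap=timedelta(minutes=5)):
    """Yield (index, reason) pairs for fixes that look physically implausible."""
    for i in range(1, len(fixes)):
        prev, curr = fixes[i - 1], fixes[i]
        gap = curr.timestamp - prev.timestamp
        if gap <= timedelta(0):
            yield i, "time stamp went backwards (possible time-zone revert)"
            continue
        if gap > max_gap:
            yield i, f"irregular logging interval ({gap})"
        speed = haversine_km(prev, curr) / (gap.total_seconds() / 3600)
        if speed > max_speed_kmh:
            yield i, f"implausible speed of {speed:.0f} km/h (position jump?)"


log = [
    Fix(datetime(2013, 5, 1, 9, 0, 0), -34.405, 150.878),  # Wollongong
    Fix(datetime(2013, 5, 1, 9, 1, 0), -34.406, 150.879),
    Fix(datetime(2013, 5, 1, 9, 2, 0), 48.857, 2.352),     # sudden jump to Paris
]
for index, reason in flag_suspect_fixes(log):
    print(f"fix {index}: {reason}")
```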

Conclusion

While device manufacturers are challenging in court the possibility that their ICDs are hackable [43], highly revered security experts like Bruce Schneier are cautioning heavily against going down the IoT path, no matter how inviting it might look. On his acclaimed blog, Schneier recently wrote [44]:

All computers are hackable…The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster…We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

The cardiac implantables market is predicted to reach US$43 billion by 2020 [45]. Obviously, the stakes are high and getting higher with every breakthrough implantable innovation we develop and bring to market. We will need to address some very pressing questions, as Schneier suggests, through some form of regulation if we are to maintain consumer privacy rights and data security. Joe Carvalko, a former telecommunications engineer and U.S. patent attorney, an associate editor of IEEE Technology and Society Magazine, and a pacemaker recipient, has already added much to this discussion [46], [47]. I highly recommend several of his publications, including “Who Should Own In-the-Body Medical Data in the Age of eHealth?” [48] and an ABA publication coauthored with Cara Morris, The Science and Technology Guidebook for Lawyers [49]. Carvalko is a thought leader in this space, and I encourage you to listen to his podcast [50] and to read his speculative fiction novel, Death by Internet [51], which is hot off the press and wrestles with some of the issues raised in this article.

REFERENCES

[1] K. Michael, M. Thistlethwaite, M. Rowland, and K. Pitt. (2015, Mar. 6). Standing Committee on Infrastructure and Communications, Section 313 of the Telecommunications Act 1997. [Online]. Available: http://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;db=COMMITTEES;id=committees%2Fcommrep%2Fd8727a07-ba09-4a91-9920-73d21e446d1d%2F0006;query=Id%3A%22committees%2Fcommrep%2Fd8727a07-ba09-4a91-9920-73d21e446d1d%2F0000%22

[2] S. Bronitt and K. Michael, “Human rights, regulation, and national security,” IEEE Technol. Soc. Mag., vol. 31, pp. 15–16, 2012.

[3] B. Hall. (2016, Dec. 22). Australians’ phone and email records could be used in civil lawsuits. Sydney Morning Herald. [Online]. Available: http://www.smh.com.au/federal-politics/political-news/australians-phone-and-email-records-could-be-used-in-civil-lawsuits-20161222-gtgdy6.html

[4] PureVPN. (2015, Oct. 14). Data retention laws—an update. [Online]. Available: https://www.purevpn.com/blog/data-retention-laws-by-countries/

[5] D. Crawford. (2014, Nov. 18). Renegade Swedish ISP offers all customers VPN. Best VPN. [Online]. Available: https://www.bestvpn.com/blog/11806/renegade-swedish-isp-offers-customers-vpn/

[6] J. Ball. (2013, Oct. 1). NSA stores metadata of millions of web users for up to a year, secret files show. Guardian. [Online]. Available: https://www.theguardian.com/world/2013/sep/30/nsa-americans-metadata-year-documents

[7] J. S. Granick, American Spies: Modern Surveillance, Why You Should Care, and What to Do About It. Cambridge, U.K.: Cambridge Univ. Press, 2017.

[8] A. Gregory, American Surveillance: Intelligence, Privacy, and the Fourth Amendment. Madison: Univ. of Wisconsin Press, 2016.

[9] K. Michael, G. Roussos, G. Q. Huang, A. Chattopadhyay, R. Gadh, B. S. Prabhu, and P. Chu, “Planetary-scale RFID services in an age of uberveillance,” Proc. IEEE, vol. 98, no. 9, pp. 1663–1671, 2010.

[10] N. Lars. (2015, Mar. 26). Connected medical devices, apps: Are they leading the IoT revolution—or vice versa? Wired. [Online]. Available: https://www.wired.com/insights/2014/06/connected-medical-devices-apps-leading-iot-revolution-vice-versa/

[11] H. Campos. (2015). The heart of the matter. Slate. [Online]. Available: http://www.slate.com/articles/technology/future_tense/2015/03/patients_should_be_allowed_to_access_data_generated_by_implanted_devices.html

[12] H. Campos. (2011). Fighting for the right to open his heart data: Hugo Campos at TEDxCambridge 2011. [Online]. Available: https://www.youtube.com/watch?v=oro19-l5M8k

[13] D. Hinckley. (2016, Feb. 22). This big brother/big data business goes way beyond Apple and the FBI. Huffington Post. [Online]. Available: http://www.huffingtonpost.com/david-hinckley/this-big-brotherbigdata_b_9292744.html

[14] K. Michael, “Mental health, implantables, and side effects,” IEEE Technol. Soc. Mag., vol. 34, no. 2, pp. 5–17, 2015.

[15] K. Michael, “The technological trajectory of the automatic identification industry: The application of the systems of innovation (SI) framework for the characterisation and prediction of the auto-ID industry,” Ph.D. dissertation, School of Information Technology and Computer Science, Univ. of Wollongong, Wollongong, Australia, 2003.

[16] K. Michael and M. G. Michael, “Homo electricus and the continued speciation of humans,” in The Encyclopedia of Information Ethics and Security, M. Quigley, Ed. Hershey, PA: IGI Global, 2007, pp. 312–318.

[17] Google Glass. (2014, Aug. 19). Glass terms of use. [Online]. Available: https://www.google.com/glass/termsofuse/

[18] K. Michael and M. G. Michael, “Implementing ‘namebers’ using microchip implants: The black box beneath the skin,” in This Pervasive Day: The Potential and Perils of Pervasive Computing, J. Pitt, Ed. London, U.K.: Imperial College Press, 2011.

[19] D. Smith. (2017, Feb. 4). Pacemaker data used to charge alleged arsonist. Jonathan Turley. [Online]. Available: https://jonathanturley.org/2017/02/04/pacemaker-data-used-to-charge-alleged-arsonist/

[20] K. Michael, “Big data and policing: The pros and cons of using situational awareness for proactive criminalisation,” presented at the Human Rights and Policing Conf., Australian National University, Canberra, Apr. 16, 2013.

[21] K. Michael and G. L. Rose, “Human tracking technology in mutual legal assistance and police inter-state cooperation in international crimes,” in From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (The Second Workshop on Social Implications of National Security), K. Michael and M. G. Michael, Eds. Wollongong, Australia: University of Wollongong, 2007.

[22] F. Gerry, “Using data to combat human rights abuses,” IEEE Technol. Soc. Mag., vol. 33, no. 4, pp. 42–43, 2014.

[23] J. Gershman. (2016, Apr. 21). Prosecutors say Fitbit device exposed fibbing in rape case. Wall Street Journal. [Online]. Available: http://blogs.wsj.com/law/2016/04/21/prosecutors-say-fitbit-device-exposed-fibbing-in-rape-case/

[24] P. Olson. (2014, Nov. 16). Fitbit data now being used in the courtroom. Forbes. [Online]. Available: https://www.forbes.com/sites/parmyolson/2014/11/16/fitbit-data-court-room-personal-injury-claim/#459434e37379

[25] K. Michael and M. G. Michael, “The social and behavioural implications of location-based services,” J. Location Based Services, vol. 5, no. 3–4, pp. 121–137, Sept.–Dec. 2011.

[26] K. Michael, “The European court of human rights ruling against the policy of keeping fingerprints and DNA samples of criminal suspects in Britain, Wales and Northern Ireland: The case of S. and Marper v United Kingdom,” in The Social Implications of Covert Policing (Workshop on the Social Implications of National Security, 2009), S. Bronitt, C. Harfield, and K. Michael, Eds. Wollongong, Australia: University of Wollongong, 2010, pp. 131–155.

[27] M. G. Michael and K. Michael, “National security: The social implications of the politics of transparency,” Prometheus, vol. 24, no. 4, pp. 359–364, 2006.

[28] M. G. Michael, “On the ‘birth’ of uberveillance,” in Uberveillance and the Social Implications of Microchip Implants, M. G. Michael and K. Michael, Eds. Hershey, PA: IGI Global, 2014.

[29] M. G. Michael and K. Michael, “A note on uberveillance,” in From Dataveillance to Überveillance and the Realpolitik of the Transparent Society (The Second Workshop on Social Implications of National Security), M. G. Michael and K. Michael, Eds. Wollongong, Australia: University of Wollongong, 2007.

[30] M. G. Michael and K. Michael, “Toward a state of uberveillance,” IEEE Technol. Soc. Mag., vol. 29, pp. 9–16, 2010.

[31] M. G. Michael and K. Michael, “Uberveillance,” in Fifth Edition of the Macquarie Dictionary, S. Butler, Ed. Sydney, Australia: Sydney University, 2009.

[32] A. Masters and K. Michael, “Lend me your arms: The use and implications of humancentric RFID,” Electron. Commerce Res. Applicat., vol. 6, no. 1, pp. 29–39, 2007.

[33] K. D. Stephan, K. Michael, M. G. Michael, L. Jacob, and E. P. Anesta, “Social implications of technology: The past, the present, and the future,” Proc. IEEE, vol. 100, pp. 1752–1781, 2012.

[34] E. Strickland. (2014, June 10). Medtronic wants to implant sensors in everyone. IEEE Spectrum. [Online]. Available: http://spectrum.ieee.org/tech-talk/biomedical/devices/medtronic-wants-to-implant-sensors-in-everyone

[35] K. Michael, “The benefits and harms of national security technologies,” presented at the Int. Women in Law Enforcement Conf., Hyderabad, India, 2015.

[36] J. Armstrong and B. Welsh. (2011). “The entire history of you,” Black Mirror, C. Brooker, Ed. [Online]. Available: https://www.youtube.com/watch?v=Sw3GIR70HAY

[37] K. Michael, “Sousveillance and point of view technologies in law enforcement,” presented at the Sixth Workshop on the Social Implications of National Security: Sousveillance and Point of View Technologies in Law Enforcement, University of Sydney, Australia, 2012.

[38] K. Albrecht and K. Michael, “Connected: To everyone and everything,” IEEE Technol. Soc. Mag., vol. 32, pp. 31–34, 2013.

[39] M. G. Michael, “The paradox of the uberveillance equation,” IEEE Technol. Soc. Mag., vol. 35, no. 3, pp. 14–16, 20, 2016.

[40] K. Michael, “The final cut—tampering with direct evidence from wearable computers,” presented at the Fifth Int. Conf. Multimedia Information Networking and Security (MINES 2013), Beijing, China, 2013.

[41] V. Radunovic, “Internet governance, security, privacy and the ethical dimension of ICTs in 2030,” IEEE Technol. Soc. Mag., vol. 35, no. 3, pp. 12–14, 2016.

[42] K. Michael. (2011, Sept. 12). The microchipping of people and the uberveillance trajectory. Social Interface. [Online]. Available: http://socialinterface.blogspot.com.au/2011/08/microchipping-of-people-and.html

[43] O. Ford. (2017, Jan. 12). Post-merger Abbott moves into 2017 with renewed focus, still faces hurdles. J.P. Morgan Healthcare Conf. 2017. [Online]. Available: http://www.medicaldevicedaily.com/servlet/com.accumedia.web.Dispatcher?next=bioWorldHeadlines_article&forceid=94497

[44] B. Schneier. (2017, Feb. 1). Security and the Internet of Things: Schneier on security. [Online]. Available: https://www.schneier.com/blog/archives/2017/02/security_and_th.html

[45] IndustryARC. (2015, July 30). Cardiac implantable devices market to reach $43 billion by 2020. GlobeNewswire. [Online]. Available: https://globenewswire.com/news-release/2015/07/30/756345/10143745/en/Cardiac-Implantable-Devices-Market-to-Reach-43-Billion-By-2020.html

[46] J. Carvalko, The Techno-Human Shell: A Jump in the Evolutionary Gap. Mechanicsburg, PA: Sunbury Press, 2013.

[47] J. Carvalko and C. Morris, “Crowdsourcing biological specimen identification: Consumer technology applied to health-care access,” IEEE Consum. Electron. Mag., vol. 4, no. 1, pp. 90–93, 2014.

[48] J. Carvalko, “Who should own in-the-body medical data in the age of ehealth?” IEEE Technol. Soc. Mag., vol. 33, no. 2, pp. 36–37, 2014.

[49] J. Carvalko and C. Morris, The Science and Technology Guidebook for Lawyers. New York: ABA, 2014.

[50] K. Michael and J. Carvalko. (2016, June 20). Joseph Carvalko speaks with Katina Michael on his non-fiction and fiction pieces. [Online]. Available: https://www.youtube.com/watch?v=p4JyVCba6VM

[51] J. Carvalko, Death by Internet. Mechanicsburg, PA: Sunbury Press, 2016.

[52] R. Pearce. (2017, June 7). “No-one’s talking about backdoors” for encrypted services, says PM’s cyber guy. Computerworld. [Online]. Available: https://www.computerworld.com.au/article/620329/no-one-talking-about-backdoors-says-pm-cyber-guy/

[53] M. Ambinder. (2013, Aug. 14). An educated guess about how the NSA is structured. The Atlantic. [Online]. Available: https://www.theatlantic.com/technology/archive/2013/08/an-educated-guess-about-how-the-nsa-is-structured/278697/

Acknowledgment

A short form of this article was presented as a video keynote speech for the Fourth International Conference on Innovations in Information, Embedded and Communication Systems in Coimbatore, India, on 17 March 2017. The video is available at https://www.youtube.com/watch?v=bEKLDhNfZio.

Keywords

Metadata, Electrocardiography, Pacemakers, Heart beat, Telecommunication services, Implants, Biomedical equipment, cardiology, criminal law, medical computing, police data processing, transport protocols, implantable medical device, heart, Australian inquiry, government agencies, illegal online services, mandatory metadata retention laws, government organizations, law enforcement organizations, Internet Protocol

Citation: Katina Michael, 2017, "Implantable Medical Device Tells All: Uberveillance Gets to the Heart of the Matter", IEEE Consumer Electronics Magazine, Vol. 6, No. 4, Oct. 2017, pp. 107 - 115, DOI: 10.1109/MCE.2017.2714279.

 

Are You Addicted to Your Smartphone, Social Media, and More?

Abstract

Back in 1998, I remember receiving my first second-generation (2G) mobile assignment at telecommunications vendor Nortel: a bid for Hutchison in Australia, a small alternate operator. At that time, I had already grown accustomed to modeling network traffic on traditional voice networks and was beginning to look at the impact of the Internet on data network dimensioning. In Australia, we were still relying on the public switched telephone network to dial up the Internet from homes, the Integrated Services Digital Network in small-to-medium enterprises, and leased lines for larger corporates and the government. But modeling mobile traffic was a different affair.

Figure 1. Russia’s Safe-Selfie campaign flyer.


I remember thinking: how will we begin to categorize subscribers, and what kinds of network patterns could we expect in mobility? I recall defining four market segments: security (low-end users), road warriors (high-end corporate users), socialites (the youth market), and everyday users (average users). Remember, this was even before the rise of the Wireless Application Protocol. While naysayers claimed that the capital expenditure on 2G networks would be prohibitive and that the investments would not be recouped for decades, subscribers’ usage rapidly increased with devices like the Research in Motion BlackBerry, which allowed for mobile e-mail.

Fast-forward to 2000. I was already knee-deep in third-generation (3G) mobile bids, predicting the cost of 3G spectrum in emerging and developed markets, increasing my categories of subscriber types from four to nine segments, and calculating upload and download rates for top mobile apps like gaming, images (photos and imaging), and e-mail with chunky PowerPoint attachments and other file types. We knew what was coming was big, but perhaps we ourselves, sitting at the coalface, didn’t realize what an impact it would actually have on our lives and the lives of our children. Our models showed average revenues per user of US$120 per month for corporates. At the time, most of us believed an explosion would take place in the coming decade (though not as big as it turned out to be), even as we preached the mantra that voice was now just another bit of data. In calculating pricing models, we brainstormed with one another: who would spend over 100 min on a mobile? Or who would spend hours gaming on a handset rather than a larger gaming console?
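For readers curious about what this kind of dimensioning exercise looks like in practice, below is a minimal sketch of a busy-hour traffic calculation using the classic Erlang B formula. The segment sizes and per-subscriber minutes are invented for illustration; this is not the actual model used in those bids.

```python
# Illustrative busy-hour dimensioning sketch using the Erlang B formula.
# Segment sizes and per-subscriber minutes are invented; this is not the
# actual model used in the bids described above.

def channels_for_grade_of_service(offered_erlangs: float, target_blocking: float = 0.02) -> int:
    """Smallest channel count whose Erlang B blocking probability meets the target."""
    blocking = 1.0
    channels = 0
    while blocking > target_blocking:
        channels += 1
        # Standard Erlang B recursion: B(n) = A*B(n-1) / (n + A*B(n-1))
        blocking = (offered_erlangs * blocking) / (channels + offered_erlangs * blocking)
    return channels

# Hypothetical subscriber segments: (subscribers, busy-hour minutes per subscriber)
segments = {
    "security (low end)": (50_000, 0.5),
    "road warriors":      (20_000, 3.0),
    "socialites":         (80_000, 1.5),
    "everyday users":    (150_000, 1.0),
}

offered = sum(subs * mins / 60.0 for subs, mins in segments.values())  # in erlangs
print(f"Busy-hour offered traffic: {offered:,.0f} Erlangs")
print(f"Channels needed for 2% blocking: {channels_for_grade_of_service(offered):,}")
```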

Enter social media, enabled by this wireless Internet protocol (IP) revolution and the rapid increase in diverse mobile hardware from netbooks to tablets to smartphones and smartwatches. Then things rapidly changed again. LinkedIn, Facebook, Twitter, Instagram, Snapchat, and WeChat are all enjoyed by social media users (consumers and professionals) around the globe today, and it is estimated that there will be 2.67 billion social network users by 2018 [1]. Over one-third of consumers worldwide, more than 2.56 billion people, will have a mobile phone by 2018, and more than half of these will have smartphone capability, making feature phones the minority [2].

The Social Media Boom

When Google announced that a staggering 24 billion selfies were uploaded to its servers alone in 2015, consuming 13.7 petabytes of storage space, I stopped and contemplated the meaning of these statistics [3]. What about the zillions of selfies uploaded to Apple’s iCloud or posted to Facebook, Instagram, Snapchat, and Twitter? It suggests that many people are taking at least one selfie a day and sharing their image publicly. The figure is much higher for the impressionable teen market, with a 2015 Google study reporting that youth take, on average, 14 selfies and 16 photos or videos, check social media 21 times, and send 25 text messages per day [4]. This number continues to grow steadily, according to fresh evidence from Pew Internet Research [5], and is now even impacting workplace productivity [6]. In the same year that Google announced the selfie statistics, Russia’s Ministry of Internal Affairs began a Safe-Selfie campaign [7], stating: “When you take a selfie, make sure that you are in a safe place and your life is not in danger!” (Figure 1). This was prompted, of course, by the acknowledged deaths that had occurred while people, young and old, were in the process of taking selfies; the frequency of such deaths outnumbered shark attack fatalities in 2015 [8]. One can hardly fathom it.
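As a quick back-of-the-envelope check, the figures reported in [3] can be turned into more tangible numbers (decimal petabytes assumed here):

```python
# Back-of-the-envelope check on the figures reported in [3];
# decimal petabytes (10**15 bytes) assumed.
selfies = 24e9            # selfies uploaded to Google servers in 2015
storage_bytes = 13.7e15   # 13.7 PB of storage consumed

print(f"Average size per selfie: {storage_bytes / selfies / 1e6:.2f} MB")  # ~0.57 MB
print(f"Selfies uploaded per day: {selfies / 365:,.0f}")                   # ~65.8 million
```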

Noticeable is the adoption of high-tech gadgetry across the childhood and youth markets, with even greater penetration among teenagers and individuals younger than 34. It is rather disturbing to read that 24% of U.S. teens go online “almost constantly” [5], facilitated by the widespread penetration of smartphones and the increasing requirement for tablets in the secondary education system. The sheer affordability of tech gear and its increasing multifunctionality now mean that most people carry a digital Swiss Army knife in the form of a smartphone. By accessing the Internet via your phone, you can upload pictures, browse websites, navigate locations on maps, and be reachable at any time of the day. The temptation to kill time while waiting for appointments or riding public transportation means that most people are frequently engaged in some form of interaction through a screen. The short-lived Google Glass was a hands-free solution that would have brought the screen right up to the eye [9], and while it was momentarily halted, one can envisage a future where we see everything through filtered lenses. Google Glass Enterprise Edition is now on sale [35]!

The Rise of Internet Addiction

Experts have tried to quantify the amount of time being spent on screens, specific devices (smartphones), and even particular apps (e.g., Facebook), and have identified guidelines for various age groups for appropriate use. Most notable is the work started by Dr. Kimberly Young in 1995 when she established her website netaddiction.com and clinical practice, the Center for Internet Addiction. She has been conducting research on how the Internet changes people’s behavior. Her guideline “3-6-9-12 Screen Smart Parenting” has gained worldwide recognition [10].

Increasingly, we are hearing social media addiction stories (see “Social Media Addiction” [11] and “Mental Health and Social Media” [36]). We have all heard about the toddler screaming for his or her iPad before breakfast and the gamers who are reluctant to come to dinner with the rest of the family (independent of gender, age, or ethnicity) unless they are instant messaged. There is a growing complexity around the diagnosis of various addiction behaviors. Some suffer from Internet addiction broadly, while others are addicted to computer gaming, smartphones, or even social media. Some researchers have postulated that most of these modern technology-centric addictions have age-old causes, such as obsessive-compulsive disorder, but the technologies have definitely been responsible for triggering a new breed of what I consider to be yet-to-be-defined medical health issues.

In the last five years especially, much research has begun in the area of online addiction. Various scales for Internet addiction have been developed by psychologists, and there are now even scales for specific technologies, like smartphones. The South Koreans have developed the Smartphone Addiction Scale, the Smartphone Addiction Proneness Scale, and the KS-scale, a short-form scale for Koreans to self-report Internet addiction. Unsurprisingly, these scales are significant for the South Korean market, given that it is the world leader in Internet connectivity, with the world’s fastest average connection speed and roughly 93% of citizens online. It follows that the greater the penetration of high-speed Internet in a market, the greater the propensity for a subscriber to suffer from some form of online addiction. There are even scales for specific social media applications, e.g., the Bergen Facebook Addiction Scale (BFAS) developed by Dr. Cecilie Andreassen at the University of Bergen in Norway in 2012 (see “BFAS Survey Statements” [12]).
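To make concrete how a short self-report scale of this kind is scored, here is a minimal sketch. The six-component structure and the 1–5 response format follow published descriptions of the BFAS, but the cutoff values below are illustrative assumptions only; consult [12] for the validated instrument and its scoring rules.

```python
# Illustrative scoring sketch for a six-item, 5-point self-report scale such
# as the BFAS. The cutoff values are illustrative; see [12] for the validated
# instrument and scoring rules.
from typing import Sequence

CORE_ELEMENTS = ("salience", "mood modification", "tolerance",
                 "withdrawal", "conflict", "relapse")


def screen(responses: Sequence[int], item_cutoff: int = 3, min_items: int = 4) -> bool:
    """Return True if enough items score at or above the cutoff.

    responses -- one answer per core element, each 1 (very rarely) to 5 (very often).
    """
    if len(responses) != len(CORE_ELEMENTS):
        raise ValueError("expected one response per core element")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(r >= item_cutoff for r in responses) >= min_items


print(screen([4, 3, 5, 2, 3, 4]))  # True under this illustrative rule
```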

Accessible Internet Feeds the Addiction

Despite its remoteness from the rest of the world, Australia surprisingly does not lag far behind the South Korean market. According to the Australian Bureau of Statistics, in 2013, 94% of Australians were Internet users, although regional areas across Australia do not enjoy the same high-speed access as South Korea, despite the National Broadband Network initiative that was founded in 2009, with the actual rollout beginning in 2015. Yet, alarmingly, one recognized industry report, “Digital Down Under,” stated that 13.4 million Australians spent a whopping 18.8 h a day online [13]. This statistic has been contested but nonetheless defended by Lee Hawksley, managing director of ExactTarget Australia, who oversaw the research. She has gone on record saying, “...49% of Australians have smartphones, which means we are online all the time…from waking to sleep, when it comes to e-mail, immersion, it’s even from the 18–65s; however, obviously with various social media channels the 18–35s are leading the charge.”

According to the same study, roughly one-third of women living in New South Wales spend almost two-thirds of their day online. And women are 30% more likely than men to suffer anxiety as a result of participating in social media [14]–[16]. This is even greater than the Albrecht and Michael deduction of 2014, which estimated that people in developed nations spend an average of 69% of their waking life behind a screen [17]. That is about 11 h behind screens out of 16 waking hours. But, no doubt, people are no longer sleeping 8 h with technology at arm’s reach in the bedroom, and, as a result, screen dependencies are opening cracks in relationships and employment and causing severe sleep deprivation, among other problems [18].

It is difficult to say what kinds of specific addictions exist in relation to the digital world, and various countries have adopted market-relevant scales and measures. While countries like China, Taiwan, and South Korea acknowledge “Internet addiction” as a diagnosed medical condition, other countries, such as the United States, prefer not to be explicit about the condition, for example in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V) [19]. Instead, a potential new diagnosis, Internet gaming disorder, is dealt with in an appendix of the DSM-V [20], [21]. Generally, Internet addiction is defined as “the inability of individuals to control their Internet use, resulting in marked distress and/or functional impairment in daily life” [22]. Some practitioners have likened online addiction to substance-based addiction. Usually it manifests predominantly in one of three quite separate but sometimes overlapping subtypes: excessive gaming, sexual preoccupations [23], and e-mail/text/social media messaging [24].

Shared Data and the Need to Know

For now, what has been quantified and is well known is the amount of screen time individuals spend in front of multiple platforms: Internet-enabled television (e.g., Netflix), game consoles (for video games), desktops (for browsing), tablets (for pictures and editing), and smartphones (for social media messaging). Rest assured, the IP-enabled devices we are enjoying are passing on our details to corporations, which, in the name of billing, now accurately know our family’s every move, digitally chronicling our preferences and habits. It is a form of pervasive social and behavioral biometrics, allowing big business to know, app by app, your individual habits and even thoughts. What is happening to all this metadata? Of course, it is being repurposed to give you more of the same, generating even more profit for interested businesses. For some capitalists, there is nothing wrong with this calculated engineering. Giving you more of what you want is the new mantra, but it obviously has its side effects.

The Australian Psychological Society issued its “Stress and Wellbeing in Australia” report last year, which included a section on the social media fear of missing out (FOMO) [25]. Alongside FOMO, we also now have a fear of being off the grid, or FOBO, and the fear of no mobile, or NoMo [26]. I personally know adults who will not leave their homes in the morning unless they have watched the top ten YouTube videos of the day, or who won’t go to sleep until every last e-mail has been answered, filed in the appropriate folder, and actioned. Screen times are forever increasing, and this has come at the expense of physical exercise and one-to-one time with loved ones.

There are reports of men addicted to video games who cannot keep a nine-to-five job, women suffering from depression and anxiety because they compare their online status with that of their peers, children who message on Instagram throughout the night, and those who are addicted to their work at the expense of all the physical relationships around them. Perhaps most disturbing are the increasing cases of online porn exposure among children between the ages of 9 and 13 in particular [27], cybersexual activity in adolescence, and extreme social media communities that spread disinformation. The assumption is that if it is conducted virtually, it must not be real and has no physical repercussions; far from it: online addictions generate a guilt that lingers and is hard to shed. This is particularly true of misdemeanors published to the websphere that can be played back, preventing individuals from forgetting their prior actions or breaking out of stereotypes [28].

A New Tool: The AntiSocial App

FIGURE 2. The app AntiSocial measures the number of unlocks. (Image courtesy of BugBean.)

Endless pings tend to plague smartphone users whose settings have not been tweaked from the defaults [29]. Notifications and alerts are checked while users are driving (even if it is against the law to text and drive), in the middle of a conversation, in bed while being intimate, while using the restroom, or even while taking a shower. But no one has ever measured end-to-end use through actual surveying of digital instrumentation in an open market setting. It has been left to self-reporting mechanisms, to desktop applications that monitor how long workers use various work applications or e-mail, or to closed surveys of populations participating in trials. Manual voluntary audit logs are often incomplete or underreport actual usage, and closed trials are not often representative of reality. At best, we can point to the South Korean smartphone verification and management system, which has helped to raise awareness that such a system is needed for intervention [30]. And yet the concern is so high that we can say with some confidence that it won’t take long for companies to come out with socially responsible technologies and software to help us remain in the driver’s seat.

FIGURE 3. AntiSocial measures app usage in minutes, allowing the user to limit or block certain apps based on a predefined consumption. (Image courtesy of BugBean.)

Enter the new app AntiSocial, created by BugBean, a Melbourne, Australia, software company with consumer interests at heart [31]. Antisocial.io has taken the world by storm and has been downloaded from Google Play by individuals in over 150 countries within just a few months. The fact that it ranked number three among U.K. Google Play downloads after only a few days demonstrates the need for it. It not only accurately records usage across multiple application contexts but also encourages mindfulness about that usage. AntiSocial does not tell users to stop using social media or to stop video gaming for entertainment; it reminds people to consider their digital calorie intake by comparing their behaviors with those of other anonymous users in their age group, occupation, and location. It is not about shaming users but about raising individual awareness and wasting less time. We say we are too busy for this or that, and yet we don’t realize we are getting lost and absorbed in online activities. How do we reclaim some of this time [32]?

It may well be as simple as switching off the phone in particular settings, deliberately not taking it with you on a given outing, or having a digital detox day once a week or once a month. It might be taking responsibility for the length of screen time you have when you are away from the office or using AntiSocial to block certain apps after a self-determined amount of time has been spent on the app on any given day [33]. Whatever your personal solution, taking the AntiSocial challenge is about empowering you, and letting you exploit the technology at your fingertips without it exploiting you.

The AntiSocial App Will Help

FIGURE 4. AntiSocial benchmarks smartphone app usage against others in the same age group, occupation, and location. (Image courtesy of BugBean.)

Some of the social problems that arise from smartphone and/or social media addiction include sleep deprivation, anxiety, depression, a drop in grades, and anger management issues. AntiSocial counts the number of unlocks you perform on your handset (Figure 2) and tells you in minutes how long you use each application (Figure 3), including the camera, Facebook and Instagram, and your favorite gaming app. It will help you compare yourself against others and take responsibility for your use (Figure 4). You might choose to replace time spent on Facebook with, for example, time walking the dog, helping your kids with their homework, or even learning to cook a new recipe [34]. There is also a paired version that can be shared between parents and their children, or even between colleagues and friends. You might like to set yourself a challenge to detox digitally, just as you might set a fitness or weight-loss challenge at your local gym. Have fun within a two-week timeframe, declaring yourself the biggest loser (of mobile minutes, that is) and reporting back to family and friends on what you feel you have gained. You might be surprised how liberating this actually feels.
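For a concrete sense of what an app in this category computes, the short Python sketch below tallies unlock events and per-app minutes from a hypothetical usage log, compares them with an assumed peer-group benchmark, and flags an app that exceeds a self-set daily cap. The log format, benchmark figures, and limits are illustrative assumptions, not BugBean's actual implementation.

```python
from collections import defaultdict

# Hypothetical usage log: (event_type, app_name, duration_minutes).
# event_type is either "unlock" or "app_session".
usage_log = [
    ("unlock", None, 0),
    ("app_session", "Facebook", 12),
    ("unlock", None, 0),
    ("app_session", "Instagram", 7),
    ("app_session", "Facebook", 25),
    ("unlock", None, 0),
    ("app_session", "Camera", 3),
]

# Assumed peer-group daily averages (minutes) for the same age/occupation/location.
peer_benchmark = {"Facebook": 22, "Instagram": 15, "Camera": 5}
daily_limit_minutes = {"Facebook": 30}  # self-determined cap, as described in the article

unlocks = sum(1 for event, _, _ in usage_log if event == "unlock")
minutes_per_app = defaultdict(int)
for event, app, minutes in usage_log:
    if event == "app_session":
        minutes_per_app[app] += minutes

print(f"Unlocks today: {unlocks}")
for app, minutes in minutes_per_app.items():
    note = ""
    benchmark = peer_benchmark.get(app)
    if benchmark is not None:
        note = f" (peer average: {benchmark} min)"
    if minutes > daily_limit_minutes.get(app, float("inf")):
        note += " -- limit reached, app would be blocked"
    print(f"{app}: {minutes} min{note}")
```

Running the sketch prints the unlock count, the per-app minutes against the assumed peer averages, and a "blocked" note for the app that exceeded its self-set limit.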

You’ll come away appreciating the digital world and its conveniences a great deal more. You’ll also likely have a clearer head and not be tempted to snap back a reply online that might hurt another or inadvertently hurt yourself. And you’ll be able to use the AntiSocial app to become more social and start that invaluable conversation with those loved ones around you in the physical space.

References

1. Number of social media users worldwide from 2010 to 2020, 2017, [online] Available: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/.
2. 2 billion consumers worldwide to get smart(phones) by 2016, 2014, [online] Available: https://www.emarketer.com/Article/2-Billion-Consumers-Worldwide-Smartphones-by-2016/1011694.
3. R. Gray, "What a vain bunch we really are! 24 billion selfies were uploaded to Google last year", Daily Mail, 2016, [online] Available: http://www.dailymail.co.uk/sciencetech/article-3619679/What-vain-bunch-really-24-billion-selfies-uploaded-Google-year.html.
4. "Average youth clicks 14 selfies a day says Google study", Daily News and Analysis, 2015, [online] Available: http://www.dnaindia.com/lifestyle/report-average-youth-clicks-14-selfies-a-day-says-google-study-2117522.
5. A. Lenhart, Teens social media & technology overview 2015 Pew Internet Research, 2015, [online] Available: http://www.pewinternet.org/2015/04/09/teens-social-media-technology-2015/.
6. K. Olmstead, C. Lampe, N.B. Ellison, Social media and the workplace, Pew Research Centre, 2016, [online] Available: http://www.pewinternet.org/2016/06/22/social-media-and-the-workplace/.
7. Safe selfie, Ministry of Internal Affairs, 2015, [online] Available: https://xn-b1aew.xn-p1ai/safety_selfie.
8. H. Horton, More people have died by taking selfies this year than by shark attacks, 2015, [online] Available: http://www.telegraph.co.uk/technology/11881900/More-people-have-died-by-taking-selfies-this-year-than-by-shark-attacks.html.
9. K. Michael, "For now we see through a glass darkly", IEEE Technol. Soc. Mag., vol. 32, no. 4, pp. 4-5, 2013.
10. K. Young, "Children and technology: Guidelines for parents—rules for every age", IEEE Technol. Soc. Mag., vol. 36, no. 1, pp. 31-33, 2017.
11. S. Bennett, Social media addiction: Statistics & trends, AdWeek, 2014, [online] Available: http://www.adweek.com/digital/social-media-addiction-stars/.
12. C.S. Andreassen, T. Torsheim, G.S. Brunborg, S. Pallesen, "Development of a Facebook addiction scale", Psychol Rep., vol. 110, no. 2, pp. 501-517, 2012.
13. M.J. Angel, "Living the “iLife”—are Australians Internet junkies?", Sydney Morning Herald, 2013, [online] Available: http://www.smh.com.au/lifestyle/life/living-the-ilife-are-australians-internet-junkies-20130418-2i2fu.html.
14. M. Maldonado, "The anxiety of Facebook", PsychCentral, 2016, [online] Available: https://psychcentral.com/lib/the-anxiety-of-facebook/.
15. R. Williams, "How Facebook can amplify low self-esteem/narcissism/anxiety", PsychologyToday, 2014, [online] Available: https://www.psychologytoday.com/blog/wired-success/201405/how-facebook-can-amplify-low-self-esteemnarcissismanxiety.
16. J. Huntsdale, Unliking Facebook—the social media addiction that has you by the throat, ABC, 2015, [online] Available: http://www.abc.net.au/local/stories/2015/01/23/4177043.htm.
17. K. Albrecht, K. Michael, "We've got to do better", IEEE Technol. Soc. Mag., vol. 33, no. 1, pp. 5-7, 2014.
18. M. Gradisar, A.R. Wolfson, A.G. Harvey, L. Hale, R. Rosenberg, C.A. Czeisler, "The sleep and technology use of Americans: Findings from the National Sleep Foundation's 2011 sleep in America poll", J. Clin. Sleep Med., vol. 9, no. 12, pp. 1291-1299, 2013.
19. R. Pies, "Should DSM-V designate “Internet addiction” a mental disorder?", Psychiatry, vol. 6, no. 2, pp. 31-37, 2009.
20. N.M. Petry, F. Rehbein, C.H. Ko, C.P. O'Brien, "Internet gaming disorder in the DSM-5", Curr. Psychiatry Rep., vol. 17, no. 9, pp. 72, 2015.
21. K. Albrecht, K. Michael, M.G. Michael, "The dark side of video games: Are you addicted?", IEEE Consum. Electron. Mag., vol. 5, no. 1, pp. 107-113, 2015.
22. J.H. Ha, H.J. Yoo, I.H. Cho, B. Chin, D. Shin, J.H. Kum, "Psychiatric comorbidity assessed in Korean children and adolescents who screen positive for Internet addiction", J. Clin. Psychiatry, vol. 67, pp. 821-826, 2006.
23. K. Young, "Help for cybersex addicts and their loved ones", IEEE Technol. Soc. Mag., vol. 35, no. 4, pp. 13-15, 2016.
24. Y.H.C. Yau, M.J. Crowley, L.C. Mayes, M.N. Potenza, "Are Internet use and video-game-playing addictive behaviors? Biological clinical and public health implications for youths and adults", Minerva Psichiatr., vol. 53, no. 3, pp. 153-170, 2012.
25. L. Merrillees, "Psychologists scramble to keep up with growing social media addiction", ABC News, 2016.
26. P. Valdesolo, "Scientists study Nomophobia-fear of being without a mobile phone", Scientific American, 2015, [online] Available: https://www.scientificamerican.com/article/scientists-study-nomophobia-mdash-fear-of-being-without-a-mobile-phone/.
27. M. Ybarra, K.J. Mitchell, "Exposure to Internet pornography among children and adolescents: A national survey", Cyberpsychology and Behaviour, vol. 8, no. 5, pp. 473-486, 2005.
28. K. Michael, M.G. Michael, "The fallout from emerging technologies: Surveillance social networks and suicide", IEEE Technol. Soc. Mag., vol. 30, no. 3, pp. 13-17, 2011.
29. J. Huntsdale, Social media monitoring apps shine spotlight on Internet addiction, ABC News, 2017, [online] Available: http://www.abc.net.au/news/2017-02-22/social-media-addiction-monitoring-app/8292148.
30. S.-J. Lee, M.J. Rho, I.H. Yook, S.-H. Park, K.-S. Jang, B.-J. Park, O. Lee, D.K. Lee, D.-J. Kim, I.Y. Choi, "Design development and implementation of a smartphone overdependence management system for the self-control of smart devices", Appl. Sci., vol. 6, pp. 440-452, 2016.
31. Google Play, 2017, [online] Available: https://play.google.com/store/apps/details?id=com.goozix.antisocial_personal&hl=en.
32. P. Peeke, Hooked hacked hijacked: Reclaim your brain from addictive living: Dr. Pam Peeke TEDxWall Street, 2013, [online] Available: https://www.youtube.com/watch?v=aqhzFd4NUPI.
33. K. Michael, Facts and figures: The rise of social media addiction: What you need to know PC World, 2017, [online] Available: http://www.pcworld.idg.com.au/article/614696/facts-figures-rise-social-media-addiction/.
34. C. Dalgleish, "10 things you should do instead of sitting on social media", The Cusp., 2017, [online] Available: http://thecusp.com.au/10-things-instead-sitting-social-media/15142.
35. A. Myrick, Google Glass makes an unexpected return with its new Enterprise Edition, 2017, [online] Available: https://phandroid.com/2017/07/18/google-glass-enterprise-edition/.
36. Mental Health and Social Media, July 2017, [online] Available: https://www.youtube.com/watch?v=VDx9djMuIFg&t=140s.

Keywords

Social network services, Facebook, Mobile communication, Australia, Telecommunication services, smart phones, social networking (online), smartphone, social media, antisocial app

Citation: Katina Michael, "Are You Addicted to Your Smartphone, Social Media, and More?: The New AntiSocial App Could Help", IEEE Consumer Electronics Magazine, Vol. 6, No. 4, Oct. 2017, pp. 116 - 121, DOI: 10.1109/MCE.2017.2714421

Reconnaissance and Social Engineering Risks as Effects of Social Networking

Author Note: This paper is a "living reference work entry". Published first in 2014, now in second edition with minor changes to original content.

… not what goes into the mouth defiles a man, but what comes out of the mouth, this defiles a man.” Matthew 15:11 (RSV)

Introduction

For decades we have been concerned with how to stop viruses and worms from penetrating organizations and how to keep hackers out of organizations by luring them toward unsuspecting honeypots. In the mid-1990s Kevin Mitnick’s “dark-side” hacking demonstrated, and possibly even glamorized (Mitnick and Simon 2002), the need for organizations to invest in security equipment like intrusion detection systems and firewalls, at every level from perimeter to internal demilitarized zones (Mitnick and Simon 2005).

In the late 1990s, there was a wave of security attacks that stifled worker productivity. During these unexpected outages, employees would take long breaks queuing at the coffee machine, spend time cleaning their desks, and try to look busy shuffling paper in their in- and out-trays. The downtime caused by malware hitting servers worldwide made it clear that corporations had come to rely so heavily on intranets for content and workflow management that employees were left with very little to do when they were not connected. Nowadays, everything in the service industry is online, and there is a known vulnerability in the requirement to be always connected. For example, you can cripple an organization by taking away its ability to accept electronic payments online, rendering its content management system inaccessible through a denial-of-service attack, or hacking into its webpage.

When the “Melissa” virus caught employees unaware in 1999, followed in the same year by the “Explorer.zip” worm, Microsoft Office files in public folders were deleted or corrupted. At the time, anecdotal stories indicated that some people (even whole groups) lost several weeks of work after falling victim to the worm that had attacked their hard drives. This led many to seek backup copies of their files, only to find that the backups themselves had never been activated (Michael 2003).

The moral of the story is that for decades we have been preoccupied with stopping data (executables, spam, false log-in attempts, and the like) from entering the organization, when the real problem since the rise of broadband networks, 3G wireless, and more recently social media has been how to stop data from going out of the organization. While this sounds paradoxical, the major concern is not what data traffic comes into an organization but what goes out of it. We have become our own worst enemy when it comes to security in this online-everything world we live in.

In short, data leakage is responsible for most corporate damage, such as the loss of competitive information. You can secure a bucket and make it watertight, put a lid on it, even put a lock on the lid, but if that bucket has even a single tiny hole, its contents will leak out and cause spillage. Such is the dilemma of information security today – while we have become more aware of how to block out unwanted data, the greatest risk to our organization is what leaves it – through the network, through storage devices, via an employee’s personal blog, even the spoken word. It is indeed what most security experts call the “human” factor (Michael 2008).
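As a minimal sketch of this egress-focused mindset, the Python snippet below performs the kind of outbound check a basic data-loss-prevention filter might apply, flagging messages that contain a card-like number or an internal project codename. The patterns and the sample message are invented for illustration; real DLP products use far richer rule sets.

```python
import re

# Illustrative patterns only: a credit-card-like number and an assumed internal codename.
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "project_codename": re.compile(r"\bProject\s+Falcon\b", re.IGNORECASE),
}

def flag_outbound_message(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outgoing message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

outgoing = "Attaching the Project Falcon pricing sheet, card 4111 1111 1111 1111 for the booking."
print(flag_outbound_message(outgoing))  # ['card_number', 'project_codename']
```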

Reconnaissance of Social Networks for Social Engineering

Social Networking

The Millennials, also known as Gen Ys, have been the subject of great discussion by commentators. If we are to believe what researchers say about Gen Ys, then it is this generation that has voluntarily gone public with private data. This generation, propelled by advancements in broadband wireless, 3G mobiles, and cloud computing, is always connected and always sharing their sentiments and cannot get enough of the new apps. They are allegedly “transparent” with most of their data exchanges. Generally, Gen Ys do not think deeply about where the information they publish is stored, and they are focused on convenience solutions that benefit them with the least amount of rework required. They tend not to like to use products like Microsoft Office and would rather work on Google Drive using Google Docs collaboratively with their peers. They are less concerned with who owns information and more concerned with accessibility and collaboration.

Gen Ys are characterized by creating circles of friends online, doing everything they possibly can digitally, and blogging to their hearts’ content. In fact, Google has released a study finding that 80% of Gen Ys make up a new generation dubbed “Gen C.” Gen Cs are known as the YouTube generation and are focused on “creation, curation, connection, and community” (Google 2012). It is generally accepted in the literature that this is the generation that would rather use their personally purchased tools, devices, and equipment for work purposes, because of the ease of carrying their “life” and “work” with them everywhere they go and of seamlessly melding their personal hobbies, interests, and professional skillsets with their workplace (PWC 2012). Bring your own device (BYOD) is a movement that has emerged from this type of mind-set. It all has to do with customization and personalization, with working with settings that have been defined by the user, and with lifelogging in a very audiovisual way. Above all, the mantra of this generation is open-everything. The claim made by Gen Cs is that transparency is a great force to be reckoned with when it comes to accessibility. Gen Cs allegedly define their social network and are what they share, like, blog, and retweet. This is not without risk, even though some criminologists have played down the associated privacy and security fears (David 2008).

Although online commentators regularly like to place us all into categories based on our age, most people we have spoken to through our research do not feel they belong to any particular “generation.” Individuals like to think they are smart enough to exploit the technologies for what they need to achieve. People may generally choose not to embrace social networking for blogging purposes, for instance, but might see how the application can be put to good use within an institutional setting and educational framework. For this reason they might be heavy users of social networking applications like LinkedIn, Twitter, Facebook, and Google Latitude but also understand their shortcomings and the potential implications of providing a real name, gender, and date of birth, as well as other personal particulars like geotagged photos or live streaming.

This ability to gather and interpret cyber-physical data about individuals and their behaviors cuts both ways when related back to a place of work. On the one hand, we have data about someone’s personal encounters that can be placed in context back to a place of employment (Dijst 2009). For instance, a social networking update might read: “In the morning, I met with Katina Michael, we spoke about putting a collaborative grant together on location-based tracking, and then I went and met Microsoft Research Labs to see if they were interested in working with us, and had lunch with person@microsoft.com (+person) (#microsoft) who is a senior software engineer.” This information is pretty innocent on its own, but it contains a lot of details that can be harvested: (1) a real name, (2) a real e-mail address, (3) an identifiable position in an organization, (4) potential links to an extended social network, and (5) possibly even the real physical location of the meeting, if the individual had a location-tracking feature switched on in their mobile social network app. The underlying point here is that you might have nothing to fear by blogging or participating on social networks under your company identity, but your organization might have much to lose.
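To illustrate how much of that detail can be harvested automatically, the sketch below pulls e-mail addresses, mentioned handles, and hashtags out of a status update with simple regular expressions. It is a toy reconstruction of social reconnaissance, using text adapted from the hypothetical update above, not a real harvesting tool.

```python
import re

status_update = (
    "In the morning, I met with Katina Michael about a grant on location-based tracking, "
    "then had lunch with person@microsoft.com (+person) (#microsoft), a senior software engineer."
)

emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", status_update)
mentions = re.findall(r"\(\+(\w+)\)", status_update)
hashtags = re.findall(r"#(\w+)", status_update)

# Each item below is a reconnaissance lead: an address to phish, a handle to
# profile, and an organization to map against LinkedIn.
print("E-mail addresses:", emails)     # ['person@microsoft.com']
print("Mentioned handles:", mentions)  # ['person']
print("Organizations/tags:", hashtags) # ['microsoft']
```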

Social Reconnaissance

Although many of us don’t wish to admit it, we have all, from time to time, conducted social reconnaissance online for any number of reasons. In the most basic of cases, you might be visiting a location you have not previously been to, and you use Google Street View to take a quick look at what the dwelling looks like for identification purposes. You might also browse the web for your own name, dubbed “ego surfing,” to see how you have been cited, quoted, and tagged in images, or generally what other people are saying about you. Businesses, too, are increasingly keeping an eye on what is being said about their brand using automatic web alerts based on hashtags, to the extent that new schemes offering insurance for business reputation have begun to emerge. Now, my point here is not whether or not you conduct social reconnaissance on yourself, or your family, or your best friend, or even strangers who look enticing, but what hackers out there might learn about you, your life, and your organization by conducting both social and technical reconnaissance. Yes, indeed, if you didn’t know it already, there are people out there who will (1) spend all their work time looking up what you do (depending on who you are), (2) think about how the information they have gathered can be related back to your place of work, and (3) exploit that knowledge to conduct clever social engineering attacks (Hadnagy 2011).

Chris Hadnagy, founder of social-engineer.org, was recently quoted as saying: “[i]nformation gathering is the most important part of any engagement. I suggest spending over 50 percent of the time on information gathering… Quality information and valid names, e-mails, phone number makes the engagement have a higher chance of success. Sometimes during information gathering you can uncover serious security flaws without even having to test, testing then confirms them” (Goodchild 2012).

It is for this reason that social engineers will focus on the company website, for instance, and build their attack plan off that. Dave Kennedy, CSO of Diebold, complements this idea by firsthand experience: “[a] lot of times, browsing through the company website, looking through LinkedIn are valuable ways to understand the company and its structure. We’ll also pull down PDF’s, Word documents, Excel spread sheets and others from the website and extract the metadata which usually tells us which version of Adobe or Word they were using and operating system that was used” (Goodchild 2012).
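The document-metadata harvesting Kennedy describes takes only a few lines; as a hedged sketch, the snippet below uses the pypdf library to print the author and producer fields of every PDF in a folder of downloaded company documents. The folder name is hypothetical, and pypdf is assumed to be installed.

```python
from pathlib import Path

from pypdf import PdfReader  # assumed to be installed: pip install pypdf

def harvest_pdf_metadata(folder: str) -> None:
    """Print author and producer-software fields for every PDF found under `folder`."""
    for pdf_path in Path(folder).rglob("*.pdf"):
        meta = PdfReader(pdf_path).metadata or {}
        # Fields such as /Author and /Producer often reveal usernames and the
        # exact office-software versions in use inside the organization.
        print(pdf_path.name, meta.get("/Author"), meta.get("/Producer"))

if __name__ == "__main__":
    harvest_pdf_metadata("downloaded_company_documents")  # hypothetical folder name
```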

Most of us know of people who do not wish to be photographed and who have painstakingly attempted to un-tag themselves from a variety of images on social networks, who have tried to delete their online presence and be judged before an interview panel for the person they are today, not the person they were when MySpace or Facebook first came out. But what about the separate group of people who do not acknowledge that there is a fence between their work life and home life, accept personal e-mails on a work account, and then are vocal about everything that happens to them on a moment-by-moment basis with a disclaimer that reads: “anything you read on this page is my own personal opinion and not that of the organization I work for.” Some would say these individuals are terribly naïve and are probably not acting in accord with organizational policies. The disclaimer won’t help the company nor will it help them. Ethical hackers, who have built large empires around their tricks of the trade since the onset of social networking, have spent the last few years trying to educate us all – “data leakage is your biggest problem folks” not the fact that you have weak perimeters! You are, in other words, your own worst enemy because you divulge more than you can afford to, to the online world.

No one is discounting that there are clear benefits in making tacit knowledge explicit by recording it in one form or another, or openly sharing our research data in a way that is conducive to ethical practices, and making things more interoperable than what they are today – but the world keeps moving so fast that for the greater part people are becoming complacent with how they store their datasets and the repercussions of their actions. But the repercussions do exist, and they are real.

Social Engineering

Expert social engineers have never relied on very sophisticated ways of penetrating security systems. It is worth paying a visit to the social engineering toolkit (SET) at www.social-engineer.org, where you might learn a great deal about ethical hacking (Palmer 2001) and pentesting (Social-Engineer.Org 2012). Here social engineering tools are categorized as physical (e.g., cameras, GPS trackers, pen recorders, and radio-frequency bug kits), computer based (e.g., common user password profilers), and phone based (e.g., caller ID spoofing). In phase 1 of their premeditated attacks, social engineers merely observe the information we each willingly put up for grabs. And beyond “the information” itself, subjects and objects are also under surveillance by the social engineers, as these might give further clues to the potential hack. Once enough information has been gathered, the social engineer moves to phase 2, which could mean dumpster diving and collecting as much hard-copy and online evidence as possible (e.g., company website info). Social networks have given social engineers a whole new avenue of investigation. In fact, social networking will keep social engineers in full-time work forever unless we all get a lot smarter about how we use these applications.

In phase 3, the evidence gathered by the hacker is put to use as they claw their way deeper and deeper into organizational systems. It might mean having a few full names and position profiles of employees in a company and then using their “hacting” (hacking and acting) skills to get more and more data. Think of social engineers building on each step, penetrating deeper and deeper into the administration of an organization. While we might think executives are the least targeted individuals, social engineers are brazen enough to “attack” the personal assistants of executives as well as operational staff. One of the problems associated with social networking is that executives casually hand over their logins and passwords to personal assistants to take care of their online reputations, making it increasingly easy to manipulate and hijack these spaces and use them as proof of a given action. When social engineers gain the level of authority they require to circumvent systems, or are able to use technical reconnaissance to exploit data found via social reconnaissance (or vice versa), they can gain access to an organization’s network resources remotely, free to unleash cross-site scripting, man-in-the-middle attacks, SQL injection, and the like.

Organizational Risks

We have thus come full circle on what social reconnaissance has to do with social networks. Social networking sites (SNS) provide social engineers with every bit of space they need to conduct their unethical hacking and their own penetration tests. You would not be the first person to admit that you have accepted a “friend” on a LinkedIn invitation without knowing who they are, or even caring who they are. Just another e-mail in the inbox to clear out, so pressing accept is usually a lot easier than pressing ignore and then delete or even blocking them for life.

Consider the problem of police in metropolitan forces creating LinkedIn profiles and accepting friends of friends on their public social network profiles. What are the implications of this from a criminal perspective? Carrying the analogy further, what of the personal gadgets they carry? How many police officers are currently carrying e-mails on personal mobile phones that, for security reasons, they should not be? Or, even worse, how many have their Twitter, Facebook, or LinkedIn profiles always connected via their mobile phones? Police forces are rapidly introducing new policies to address these problems, but the problems nonetheless still exist for mainstream employees of large, medium, and even small organizations. The theft does not have to be complex, like the stealing of software code or other intellectual property in designs and blueprints; it can be as simple as the theft of competitive information like customer lead lists in a Microsoft Access database, or payroll data stored in MYOB, or even the physical device itself.

Penetration testing done periodically can be used as feedback into the development of a more robust information security life cycle that can help those in charge of information governance act proactively and help employees understand the implications of their practices (Bishop 2007). Trustwave (2012) advocates four types of assessment and testing. The first is straightforward, traditional physical assessment. The second is client-side penetration testing, which validates whether every staff member is adhering to policies. The third is business intelligence testing, which investigates how employees are using social networking, location-enabled devices, and mobile blogging, to ensure that a company’s reputation is not at risk and to find out what data exists publicly about an organization. And finally, red team testing is when a group of diverse subject matter experts tries to penetrate a system, reviewing security profiles independently.
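One way to operationalize these four assessments is as a simple review schedule; the sketch below encodes them as a small data structure with assumed review intervals. The cadence and wording are illustrative assumptions, not Trustwave's prescription.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    name: str
    purpose: str
    interval_months: int  # assumed review cadence, not a mandated one

ASSESSMENT_PLAN = [
    Assessment("Physical assessment", "traditional on-site security review", 12),
    Assessment("Client-side penetration test", "check staff adherence to policies", 6),
    Assessment("Business intelligence test",
               "map what is publicly known about the organization via social media and mobile blogging", 6),
    Assessment("Red team test", "independent multi-disciplinary attempt to penetrate the system", 12),
]

for a in ASSESSMENT_PLAN:
    print(f"{a.name}: every {a.interval_months} months -- {a.purpose}")
```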

No one would ever want to be the cause behind the ransacking of their organization’s online information above and beyond the web scraping technologies becoming widely available (Poggi et al. 2007). It would help if policies were enforceable within various settings but these too are difficult to monitor. How does one get the message across that while blocking unwanted traffic at the door is very important for an organization, what is even more important is noting what goes walkabout from inside the organization out? It will take some years for governance structures to adapt to this kind of thinking because the security industry and the media have previously been rightly focused on Denial of Service (DoS) attacks and botnets and the like (Papadimitriou and Garcia-Molina 2011). But it really is a chicken and egg problem – the more information we give out using social networking sites, the more we are giving impetus to DoS, DDoS, and the proliferation of botnets (Kartaltepe et al. 2010; Huber et al. 2009).

Conclusion

Possibly this entry has not convinced employees that greater care should be taken with what they publish online, on personal blogs, or in the pictures and footage they post to lifelogs or YouTube, but it may have convinced them that the biggest problems in security today arise from the information that users post publicly in environments that rely on social networks. This information is just waiting to be harvested by people whom users will probably never meet physically. Employers need to educate their staff on company policies periodically and review the policies they create no less than every two years. As an employer, you should also consider when your organization last performed a penetration test that took new social networking applications into account. Individuals should extend this kind of pentesting to their own online profiles and review their own personal situation. Sure, you might have nothing to hide, but you might have a lot to lose.

References

  1. Bishop M (2007) About penetration testing. IEEE Secur Privacy 5(6):84–87

  2. David SW (2008) Cybercrime and the culture of fear. Inf Commun Soc 11(6):861–884

  3. Dijst M (2009) ICT and social networks: towards a situational perspective on the interaction between corporeal and connected presence. In: Kitamura R, Yoshii T, Yamamoto T (eds) The expanding sphere of travel behaviour research. Emerald, Bingley

  4. Goodchild J (2012) 3 tips for using the social engineering toolkit, CSO Online - data protection. http://www.csoonline.com/article/05106/3-tips-for-using-the-social-engineering-toolkit. Accessed 3 Dec 2012

  5. Google (2012) Introducing Gen C: the YouTube generation. http://sl.static.om/hink/ocs/ntroducing-gen-c-the-youtube-generationesearch-studies.df. Accessed 1 Apr 2013

  6. Hadnagy C (2011) Social engineering: the art of human hacking. Wiley, Indianapolis

  7. Huber M, Kowalski S, Nohlberg M, Tjoa S (2009) Towards automating social engineering using social networking sites. In: IEEE international conference on computational science and engineering, CSE’09, Vancouver, vol 3. IEEE, Los Alamitos, pp 117–124

  8. Kartaltepe EJ, Morales JA, Xu S, Sandhu R (2010) Social network-based botnet command-and-control: emerging threats and countermeasures. In: Applied cryptography and network security. Springer, Berlin/Heidelberg, pp 511–528

  9. Michael K (2003) The battle against security attacks. In: Lawrence E, Lawrence J, Newton S, Dann S, Corbitt B, Thanasankit T (eds) Internet commerce: digital models for business. Wiley, Milton, pp 156–159. http://works.bepress.com/kmichael/63/. Accessed 1 Feb 2013

  10. Michael K (2008) Social and organizational aspects of information security management. In: IADIS e-Society, Algarve, 9–12 Apr 2008. http://works.bepress.com/kmichael/6/. Accessed 1 Feb 2013

  11. Mitnick K, Simon WL (2002) The art of deception: controlling the Human element of security. Wiley, Indianapolis

  12. Mitnick K, Simon WL (2005) The art of intrusion. Wiley, Indianapolis

  13. Palmer CC (2001) Ethical hacking. IBM Syst J 40(3):769–780

  14. Papadimitriou P, Garcia-Molina H (2011) Data leakage detection. IEEE Trans Knowl Data Eng 23(1):51–63

  15. Poggi N, Berral JL, Moreno T, Gavalda R, Torres J (2007) Automatic detection and banning of content stealing bots for e-commerce. In: NIPS 2007 workshop on machine learning in adversarial environments for computer security. http://eople.c.pc.du/poggi/ublications/.%2oggi%2-%2utomatic%2etection%2nd%2anning%2 of%2ontent%2tealing%2ots%2or%2-commerce.df. Accessed 1 May 2013

  16. PWC (2012) BYOD (Bring your own device): agility through consistent delivery. http://www.pwc.com/us/en/increasing-it-effectiveness/publications/byod-agility-through-consistent-delivery.html. Accessed 3 Dec 2012

  17. Social-Engineer.Org: Security Through Education (2012) http://www.social-engineer.org/. Accessed 3 Dec 2012

  18. Trustwave (2012) Physical security and social engineering testing. https://www.trustwave.com/socialphysical.php. Accessed 3 Dec 2012

Synonyms

Footprinting; Hacker; Penetration testing; Reconnaissance; Risk; Security; Self-disclosure; Social engineering; Social media; Social reconnaissance; Vulnerabilities

Glossary

Social reconnaissance: A preliminary paper-based or electronic web-based survey to gain personal information about a member or group in your community of interest. The member may be an individual friend or foe, a corporation, or the government

Social engineering: With respect to security, the art of manipulating people while purporting to be someone other than your true self, thus duping them into performing actions or divulging confidential information

Data leakage: The deliberate or accidental outflow of private data from the corporation to the outside world, in a physical or virtual form

Online social networking: An online social network is a site that allows for the building of social networks among people who share common interests

Malware: The generic term for software that has a malicious purpose. Can take the form of a virus, worm, Trojan horse, and spyware

Citation: Katina Michael, "Reconnaissance and Social Engineering Risks as Effects of Social Networking", in Reda Alhajj and Jon Rokne (eds.), Encyclopedia of Social Network Analysis and Mining, 2017, pp. 1-7, DOI: 10.1007/978-1-4614-7163-9_401-1.

Bots Trending Now: Disinformation and Calculated Manipulation of the Masses

Bot Developments

A bot (short for robot) performs highly repetitive tasks by automatically gathering or posting information based on a set of algorithms. Internet-based bots can create new content and interact with other users just as a human would. Bots are not neutral: they always carry an underlying intent toward direct or indirect benefit or harm. The power lies with the individual(s) or organization(s) unleashing the bot, which is imbued with its developer's subjectivity and bias [1].

Bots can be overt or covert to their subjects; they can deliberately “listen” and then in turn manipulate situations by providing real information or disinformation (also known as automated propaganda). They can target individuals or groups, successfully alter or even disrupt group-think, and equally silence activists trying to bring attention to a given cause (e.g., human rights abuses by governments). On the flipside, bots can be used as counterstrategies to raise awareness of political wrongdoing (e.g., censorship), but they can also be used for terrorist causes appealing to a global theater (e.g., ISIS) [2].

Software engineers and computer programmers have developed bots that can do superior conversational analytics, bots to analyze human sentiment in social media platforms such as Facebook [3] and Twitter [4], and bots to get value out of unstructured data using a plethora of big data techniques. It won't be long before we have bots to analyze audio using natural language processing, commensurate bots to analyze and respond to uploaded videos on YouTube, and even bots that respond with humanlike speech contextually adapted for age, gender, and even culture. The convergence of this suite of capabilities is known as artificial intelligence [5]. Bots can be invisible, or they can appear as a 2D embodied agent on a screen (an avatar or dialog screen), as a 3D object (e.g., a toy), or as a humanoid robot (e.g., Bina [6] and Pepper).
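A content-posting bot needs remarkably little machinery: a message source, a posting call, and a timer. The sketch below shows such a skeleton in Python; post_update is a deliberately hypothetical placeholder rather than any real platform's API, and the scripted messages and interval are invented for illustration.

```python
import itertools
import time

TALKING_POINTS = [
    "Candidate X will cut your taxes #election",
    "Candidate X stands for families #election",
]

def post_update(text: str) -> None:
    """Placeholder for a real platform API call (e.g., an HTTP POST with an access token)."""
    print(f"[bot] posting: {text}")

def run_bot(interval_seconds: int = 900, max_posts: int = 4) -> None:
    # Cycle through scripted talking points at a fixed interval -- the repetitive,
    # automated behaviour that distinguishes a bot account from a human one.
    for count, message in enumerate(itertools.cycle(TALKING_POINTS), start=1):
        post_update(message)
        if count >= max_posts:
            break
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_bot(interval_seconds=1)  # short interval used here for demonstration only
```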

Bots that Pass the Turing Test

Most consumers who use instant messaging chat programs to interact with their service providers might well not realize that they have likely interacted with a chat bot able to crawl through the provider's public Internet pages for information acquisition [7]. After three or four interactions with the bot, which can last anywhere between 5 and 10 minutes, a human customer service representative might intervene to provide a direct answer to a more complex problem. This is known as a hybrid delivery model, in which bot and human work together to solve a customer inquiry. The customer may detect a slower than usual response in the chat window but is willing to wait, given the asynchronous mode of communication and the mere fact that they don't have to converse with a real person over the telephone. The benefit to the consumer is said to be bypassing hold queues and wait times for a representative, and the benefit to the service provider is the saved cost of human resources, including ongoing training.
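The hybrid delivery model can be sketched as a simple hand-off rule: the bot answers from a scripted FAQ and escalates to a human when no answer is found or after a set number of turns. The FAQ entries and the four-turn threshold below are assumptions made for illustration only.

```python
FAQ = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
}
MAX_BOT_TURNS = 4  # assumed hand-off threshold, in the spirit of the 3-4 interactions noted above

def answer(question: str, turn: int) -> str:
    if turn > MAX_BOT_TURNS:
        return "[human agent] Let me take over from here."
    for keyword, reply in FAQ.items():
        if keyword in question.lower():
            return f"[bot] {reply}"
    # No scripted answer available: escalate early rather than guess.
    return "[human agent] A customer representative will respond shortly."

for i, q in enumerate(["What are your opening hours?", "My invoice is wrong in a weird way"], start=1):
    print(answer(q, i))
```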

Bots that interact with humans and go undetected as non-human are considered successful in their implementation and are said to pass the Turing Test [8]. In 1950, English mathematician Alan M. Turing devised the “imitation game,” in which a remote human interrogator must, within a fixed time frame, distinguish between a computer and a human subject based on their replies to various questions posed by the interrogator [9].

Bot Impacts Across the Globe

Bots usually have Internet/social media accounts that look like real people, generate new content like any human would, and interact with other users. Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the U.S. presidential election [10]. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories [11]. These pervasive bots are said to have swayed public opinion.

Yet bots have not been utilized in the U.S. alone; they have also appeared in the U.K. (Brexit's mood contagion [12]), Germany (fake news [1]), France (robojournalism [13]), Italy (popularity questioned [14]), and even Australia (the Coalition's fake followers [15]). Unsurprisingly, political bots have also been used in Turkey (Erdogan's 6,000-strong robot army [16], [17]), Syria (Twitter spambots [18]), Ecuador (surveillance [19]), Mexico (Peñabots [20]), Brazil, Rwanda, Russia (troll houses [21]), China (tracking Tibetan protestors [22]), Ukraine (social bots [23]), and Venezuela (6,000 bots generating anti-U.S. sentiment [24] with #ObamaYankeeGoHome [25]).

Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to show political strength, or deliberate social media messaging to perform sweeping surveillance, bots are polluting political discourse on a grand scale. So much so, that some politicians themselves are now calling for action against these autobots - with everything from demands for ethical conduct in society, to calls for more structured regulation [26] for political parties, to even implementation of criminal penalties for offenders creating and implementing malicious bot strategies.

Provided below are demonstrative examples of the use of bots in Australia, the U.K., Germany, Syria, and China, with each example offering a different case in which bots have been used to further specific political agendas.

Fake Followers in Australia

In 2013, the Liberal Party internally investigated the surge in Twitter followers that the then Opposition Leader Tony Abbott had accumulated. On the night of August 10, 2013, Abbott's Twitter following soared from 157,000 to 198,000 [27]. In the days preceding this period, his following had been growing steadily at about 3,000 per day. The Liberal Party had to declare on its Facebook page that someone had been purchasing “fake Twitter followers for Tony Abbott's Twitter account,” but a spokeswoman later said it was someone neither connected with the Liberal Party nor associated with the Liberal campaign, and that the damage had been done using a spambot [27], an example of which is shown in Figure 1.

Figure 1. Twitter image taken from [28].

 

The Liberals acted quickly to contact Twitter, which removed only about 8,000 “fake followers,” and by that same evening Mr. Abbott's following had grown again to 197,000. A later analysis indicated that the Coalition had been spamming Twitter with exactly the same messages from different accounts, most of them not even from Australian shores. The ploy meant that Abbott got about 100,000 likes on Facebook in a single week, previously unheard of for any Liberal Party leader. Shockingly, one unofficial audit noted that about 95% of Mr. Abbott's 203,000 followers were fake, with 4% “active” and only 1% genuine [15]. The audit was corroborated by the social media monitoring tools StatusPeople and SocialBakers, which determined in a report that around 41% of Abbott's most recent 50,000 Twitter followers were fake, unmanned Twitter accounts [29]. The social media monitoring companies noted that the number of fake followers was likely even higher. It is well known that most of the Coalition's supporters do not use social media [30]. Another example of the suspected use of bots during the 2013 election campaign can be seen in Figure 2.

Figure 2. Twitter image taken from [31].
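Audits of the kind cited above rest on simple heuristics; the sketch below classifies follower records as fake, inactive, or genuine using assumed rules (default avatar with no tweets, extreme following-to-follower ratios). It is written in the spirit of tools such as StatusPeople, not as their actual algorithm, and the sample records are fabricated.

```python
def classify_follower(account: dict) -> str:
    """Crude heuristic classification of a follower record (illustrative rules only)."""
    if account["tweets"] == 0 and account["default_avatar"]:
        return "fake"
    if account["following"] > 50 * max(account["followers"], 1):
        return "fake"  # follows thousands, followed by almost no one
    if account["tweets"] < 5:
        return "inactive"
    return "genuine"

followers = [
    {"tweets": 0, "default_avatar": True, "followers": 0, "following": 800},
    {"tweets": 2, "default_avatar": False, "followers": 10, "following": 50},
    {"tweets": 540, "default_avatar": False, "followers": 300, "following": 280},
]

labels = [classify_follower(f) for f in followers]
for label in ("fake", "inactive", "genuine"):
    share = 100 * labels.count(label) / len(labels)
    print(f"{label}: {share:.0f}%")
```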

Fake Trends and Robo-Journalists in the U.K.

As the U.K.'s June 2016 referendum on European Union membership drew near, researchers discovered that automated social media accounts were swaying votes for and against Britain's exit from the EU. A recent study found that 54% of the accounts examined were pro-Leave, while 20% were pro-Remain [32]. And of the 1.5 million tweets with hashtags related to the referendum between June 5 and June 12, about half a million were generated by just 1% of the accounts sampled.
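That degree of concentration is easy to check arithmetically: given per-account tweet counts, compute what share of all tweets the most active 1% of accounts produced. The sample data below are fabricated to mimic a mostly human population with a small automated minority; they are not the referendum dataset.

```python
import random

random.seed(1)

# Fabricated sample: most accounts tweet a handful of times, while a small
# automated minority tweets hundreds of times about the same hashtags.
tweet_counts = [random.randint(1, 5) for _ in range(9900)] + \
               [random.randint(100, 200) for _ in range(100)]

tweet_counts.sort(reverse=True)
top_one_percent = tweet_counts[: len(tweet_counts) // 100]
share = sum(top_one_percent) / sum(tweet_counts)
print(f"Top 1% of accounts produced {share:.0%} of all tweets")  # roughly one third in this sample
```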

As more and more of the citizenry head to social media as their primary information source, bots can sway decisions one way or the other. After the Brexit results were disclosed, many pro-Remain supporters claimed that social media had exerted an undue influence by discouraging “Remain” voters from actually going to the polls [33] (refer to Figure 3). While there are only 15 million Twitter users in the U.K., it is possible that robo-journalists (content-gathering bots) and human journalists who relied on fake social media content further propelled the “fake news,” affecting more than just the TwitterSphere.

 

Figure 3. Twitter image taken from [34].

Fake News and Echo Chambers in Germany

German Chancellor Angela Merkel has expressed concern over the potential for social bots to influence this year's German national election [35]. She brought to the fore the ways in which fake news and bots have manipulated public opinion online by spreading false and malicious information. She said: “Today we have fake sites, bots, trolls - things that regenerate themselves, reinforcing opinions with certain algorithms and we have to learn to deal with them” [36]. The right-wing Alternative for Germany (AfD) already has more Facebook likes than Merkel's Christian Democrats (CDU) and the center-left Social Democrats (SPD) combined. Merkel is worried the AfD might use Trump-like strategies on social media channels to sway the vote.

It is not just that bots are generating fake news [35]; the algorithms Facebook uses to share content between user accounts also create “echo chambers” and outlets for reverberation [37]. In Germany, however, Facebook, which has been criticized for failing to police hate speech, was legally classified in 2016 as a “media company,” which means it will now be held accountable for the content it publishes. While the major political parties have responded by saying they will not use “bots for votes,” outside geopolitical forces (e.g., Russia) are also chiming in, attempting to drive social media sentiment with their own hidden agendas [35].

Spambots and Hijacking Hashtags in Syria

During the Arab Spring, online activists were able to provide eyewitness accounts of uprisings in real time. In Syria, protesters used the hashtags #Syria, #Daraa, and #Mar15 to appeal for support from a global theater [18]. It did not take long for government intelligence officers to threaten online protesters with verbal assaults and one-on-one intimidation techniques. Syrian blogger Anas Qtiesh wrote: “These accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bile and insults” [38]. But when protesters continued despite the harassment, spambots created by the Bahraini company EGHNA were co-opted to create pro-regime accounts [39]. The pro-regime messages then flooded hashtags that had carried pro-revolution narratives.

This essentially drowned out protesters' voices with irrelevant information - such as photography of Syria. @LovelySyria, @SyriaBeauty and @DNNUpdates dominated #Syria with a flood of predetermined tweets every few minutes from EGHNA's media server [40]. Figure 4 provides an example of such tweets. Others who were using Twitter to portray the realities of the conflict in Syria publicly opposed the use of the spambots (see Figure 5) [43].

Figure 4. Twitter image taken from [41].

Figure 5. Twitter image taken from [42].

Since 2014, the Islamic State terror group has “ghost-tweeted” its messages to make it look like it has a large, sympathetic following [44]. This has been a deliberate act to try to attract resources, both human and financial, from global constituents. Tweets have included allegations of mass killings of Iraqi soldiers and more [45]. This activity shows how extremists are employing the same social media strategies as some governments and social activists.

Sweeping Surveillance in China

In May 2016, China was exposed for purportedly fabricating 488 million social media comments annually in an effort to distract users' attention from bad news and politically sensitive issues [46]. A recent three-month study found 13% of messages had been deleted on Sina Weibo (Twitter's equivalent in China) in a bid to crack down on what government officials identified as politically charged messages [47]. It is likely that bots were used to censor messages containing key terms that matched a list of banned words. Typically, this might have included words in Mandarin such as “Tibet,” “Falun Gong,” and “democracy” [48].

China employs a classic hybrid model of online propaganda that comes into action only after a period of social unrest or protest, when there is a surge in message volumes. Typically, the task of primary messaging is left to government officials, with backup support from bots, methodically spreading messages of positivity and ensuring political security through pro-government cheerleading. While on average one in every 178 posts is believed to be curated for propaganda purposes, the posts are not continuous and appear to overwhelm dissent only at key times [49]. Distraction, it seems, is the best way to overcome opposition online. That distraction is carried out in conjunction with a cap on the number of messages that can be sent from “public accounts” that have broadcasting capabilities.
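A quick back-of-the-envelope calculation puts these figures in perspective: 488 million fabricated comments a year is well over a million a day, and a one-in-178 curation rate implies an underlying stream of tens of billions of posts. The arithmetic below simply restates the published estimates [46], [49]; it adds no new data.

```python
fabricated_per_year = 488_000_000  # reported estimate of fabricated comments per year [46]
per_day = fabricated_per_year / 365
print(f"Fabricated comments per day: {per_day:,.0f}")  # roughly 1.34 million

posts_per_curated = 178  # one in every 178 posts estimated to be propaganda [49]
implied_total_posts = fabricated_per_year * posts_per_curated
print(f"Implied total posts per year: {implied_total_posts:,.0f}")  # roughly 87 billion
```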

What Effect are Bots Having on Society?

The deliberate act of spreading falsehoods via the Internet, and more specifically via social media, to make people believe something that is not true is certainly a form of propaganda. While it might create short-term gains in the eyes of political leaders, it inevitably causes significant public distrust in the long term. In many ways, it is a denial of citizen service that attacks fundamental human rights. It preys on the premise that most citizens in society are like sheep: a game of “follow the leader” ensues, making a mockery of the “right to know.” We are using faulty data to come to phony conclusions, to cast our votes, and to decide our futures. Disinformation on the Internet is now rife - and if the Internet has become our primary source of truth, then we might well believe anything.

REFERENCES

1. S.C. Woolley, "Automating power: Social bot interference in global politics", First Monday, vol. 21, no. 4.

2. H.M. Roff, D. Danks, J.H. Danks, "Fight ISIS by thinking inside the bot: How we can use artificial intelligence to distract ISIS recruiters", Slate, [online] Available: http://www.slate.com/articles/technology/future_tense/2015/10/using_chatbots_to_distract_isis_recruiters_on_social_media.htm.

3. M. Fidelman, 10 Facebook messenger bots you need to try right now, Forbes, [online] Available: https://www.forbes.com/sites/markfidelman/2016/05/19/10-facebook-messenger-bots-you-need-to-try-right-now/#24546a4b325a.

4. D. Guilbeault, S.C. Woolley, "How Twitter bots are shaping the election", Atlantic, [online] Available: https://www.theatlantic.com/technology/archive/2016/11/election-bots/506072/.

5. K. Hammond, "What is artificial intelligence?", Computer World, [online] Available: http://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html

6. Bina 48 meets Bina Rothblatt - Part Two, [online] Available: https://www.youtube.com/watch?v=G5IqcRILeCc.

7. M. Vakulenko, Beyond the ‘chatbot’ - The messaging quadrant, [online] Available: https://www.visionmobile.com/blog/2016/05/beyond-chatbot-messaging-quadrant.

8. "Turing test: Artificial Intelligence", Encyclopaedia Britannica, [online] Available: https://www.britannica.com/technology/Turing-test.

9. D. Proudfoot, "What Turing himself said about the Imitation Game", Spectrum, [online] Available: http://spectrum.ieee.org/geek-life/history/what-turing-himself-said-about-the-imitation-game.

10. S.C. Woolley, Resource for understanding political bots, [online] Available: http://politicalbots.org/?p=797.

11. N. Byrnes, "How the bot-y politic influenced this election", Technology Rev., [online] Available: https://www.technologyreview.com/s/602817/how-the-bot-y-politic-influenced-this-election/.

12. I. Lapowsky, "Brexit is sending markets diving. Twitter could be making it worse", Wired, [online] Available: https://www.wired.com/2016/06/brexit-sending-markets-diving-twitter-making-worse/.

13. "Bots step in to 2016 election news coverage", France 24, [online] Available: http://www.france24.com/en/20161105-bots-step-2016-election-news-coverage.

14. A. Vogt, "Hot or bot? Italian professor casts doubt on politician's Twitter popularity", The Guardian, [online] Available: https://www.theguardian.com/world/2012/jul/22/bot-italian-politician-twitter-grillo.

15. T. Peel, "The Coalition's Twitter fraud and deception", Independent, [online] Available: https://independentaustralia.net/politics/politics-display/the-coalitions-twitter-fraud-and-deception.

16. C. Letsch, "Social media and opposition to blame for protests says Turkish PM", The Guardian, [online] Available: https://www.theguardian.com/world/2013/jun/02/turkish-protesters-control-istanbul-square.

17. E. Poyrazlar, "Turkey's leader bans his own Twitter bot army", Vocativ, [online] Available: http://www.vocativ.com/world/turkey-world/turkeys-leader-nearly-banned-twitter-bot-army/.

18. J.C. York, "Syria's Twitter spambots", The Guardian, [online] Available: https://www.theguardian.com/commentisfree/2011/apr/21/syria-twitter-spambots-pro-revolution.

19. R. Morla, "Ecuadorian websites report on hacking team get taken down", Panam Post, [online] Available: http://panampost.com/rebeca-morla/2015/07/13/ecuadorian-websites-report-on-hacking-team-get-taken-down/.

20. A. Najar, "¿Cuánto poder tienen los Peñabots, los tuiteros que combaten la crítica en México?" [How much power do the Peñabots, the tweeters who counter criticism in Mexico, have?], BBC, [online] Available: http://www.bbc.com/mundo/noticias/2015/03/150317_mexico_internet_poder_penabot_an.

21. S. Walker, "Salutin’ Putin: Inside a Russian troll house", The Guardian, [online] Available: https://www.theguardian.com/world/2015/apr/02/putin-kremlin-inside-russian-troll-house.

22. B. Krebs, "Twitter bots target Tibetan protests", Krebs on Security, [online] Available: http://krebsonsecurity.com/2012/03/twitter-bots-target-tibetan-protests/.

23. S. Hegelich, D. Janetzko, "Are social bots on Twitter political actors? Empirical evidence from a Ukrainian social botnet", Proc. Tenth Int. AAAI Conf. Web and Social Media, pp. 579-582, 2016.

24. M.C. Forelle, P.N. Howard, A. Monroy-Hernandez, S. Savage, "Political bots and the manipulation of public opinion in Venezuela", SSRN, [online] Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2635800.

29. H. Polites, The perils of polling Twitter bots, The Australian, [online] Available: http://www.theaustralian.com.au/business/business-spectator/the-perils-of-polling-twitter-bots/news-story/97d733c6650991d20a03d25a4229b42e.

30. A. Bruns, Follower accession: How Australian politicians gained their Twitter followers, SBS, [online] Available: http://www.sbs.com.au/news/article/2013/07/08/follower-accession-how-australian-politicians-gained-their-twitter-followers.

31. S. Fazakerley, Paid parental leave is a winner for Tony Abbott, [online] Available: https://twitter.com/stuartfaz/status/369068662163910656/photo/1?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

32. P.N. Howard, B. Kollanyi, "Bots #StrongerIn and #Brexit: Computational propaganda during the UK-EU Referendum", SSRN, [online] Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2798311.

33. A. Bhattacharya, Watch out for the Brexit bots, [online] Available: https://qz.com/713980/watch-out-for-the-brexit-bots/.

34. M. A. Carter, N/A, [online] Available: https://twitter.com/rob_cart123/status/746091911354716161?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

35. C. Copley, Angela Merkel fears social bots may manipulate German election, [online] Available: http://www.smh.com.au/world/angela-merkel-fears-social-bots-may-manipulate-german-election-20161124-gsx5cu.html.

36. I. Tharoor, ‘Fake news’ threatens Germany's election too says Angela Merkel, [online] Available: http://www.smh.com.au/world/fake-news-threatens-germanys-election-too-says-angela-merkel-20161123-gsw7kp.html.

37. F. Floridi, "Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis", The Guardian, [online] Available: https://www.theguardian.com/technology/2016/nov/29/fake-news-echo-chamber-ethics-infosphere-internet-digital.

38. A. Qtiesh, The blast inside, [online] Available: http://www.anasqtiesh.com/.

39. SYRIA - Syria's Twitter spambots, [online] Available: https://wikileaks.org/gifiles/docs/19/1928607_syria-syria-s-twitter-spam-bots-html.

40. A. Qtiesh, Spam bots flooding Twitter to drown info about #Syria Protests (Updated), [online] Available: https://advox.globalvoices.org/2011/04/18/spam-bots-flooding-twitter-to-drown-info-about-syria-protests/.

41. N/A, [online] Available: https://twitter.com/SyriaBeauty/status/202585453919076353?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

42. N/A, [online] Available: https://twitter.com/BritishLebanese/status/60075290055024640?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

43. L. Shamy, To everyone who can hear me!, [online] Available: https://twitter.com/LinaShamy/status/808422105809387520?ref_src=twsrc%5Etfw&ref_url=http%3A%2F%2Ftheconversation.com%2Fbots-without-borders-how-anonymous-accounts-hijack-political-debate-70347.

44. S. Woolley, Spammers, scammers, and trolls: Political bot manipulation, [online] Available: http://politicalbots.org/?p=295.

45. R. Nordland, A.J. Rubin, Massacre claim shakes Iraq, New York Times, [online] Available: https://www.nytimes.com/2014/06/16/world/middleeast/iraq.html?_r=1.

46. S. Oster, "China fakes 488 million social media posts a year: Study", Bloomberg News, [online] Available: https://www.bloomberg.com/news/articles/2016-05-19/china-seen-faking-488-million-internet-posts-to-divert-criticism.

47. G. King, J. Pan, M.E. Roberts, How the Chinese Government fabricates social media posts for strategic distraction, not engaged argument, [online] Available: http://gking.harvard.edu/files/gking/files/50c.pdf.

48. Y. Yang, The perfect example of political propaganda: The Chinese Government's persecution against Falun Gong, [online] Available: http://www.globalmediajournal.com/open-access/the-perfect-example-of-political-propaganda-the-chinese-governments-persecution-against-falun-gong.php?aid=35171.

49. B. Feldman, How the Chinese Government uses social media to stop dissent, New York Magazine, [online] Available: http://nymag.com/selectall/2016/05/china-posts-propaganda-on-social-media-as-misdirection.htm.

ACKNOWLEDGMENT

This article is adapted from an article published in The Conversation titled “Bots without borders: how anonymous accounts hijack political debate,” on January 24, 2017. The original article is available at http://theconversation.com/bots-without-borders-how-anonymous-accounts-hijack-political-debate-70347. Katina Michael would like to thank Michael Courts and Amanda Dunn from The Conversation for their editorial support, and Christiane Barro from Monash University for the inspiration to write the piece. Dr. Roba Abbas was also responsible for integrating the last draft with earlier work.

Citation: Katina Michael, 2017, "Bots Trending Now: Disinformation and Calculated Manipulation of the Masses", IEEE Technology and Society Magazine, Vol. 36, No. 2, pp. 6-11.

Assessing technology system contributions to urban dweller vulnerabilities

Lindsay J. Robertson+, Katina Michael+, Albert Munoz#

+ School of Computing and Information Technology, University of Wollongong, Northfields Ave, NSW 2522, Australia

# School of Management and Marketing, University of Wollongong, Northfields Ave, NSW 2522, Australia

Received 26 March 2017, Revised 16 May 2017, Accepted 18 May 2017, Available online 19 May 2017

https://doi.org/10.1016/j.techsoc.2017.05.002

Highlights

• Individual urban-dwellers have significant vulnerabilities to technological systems.

• The ‘exposure’ of a technological system can be derived from its configuration.

• Analysis of system ‘exposure’ allows valuable insights into vulnerability and its reduction.

Abstract

Urban dwellers are increasingly vulnerable to failures of the technological systems that supply them with goods and services. Extant techniques for the analysis of those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of “exposure”, which is shown to represent the vulnerability contributed by the technological system to the end-user. The measure thus contributes to the theory and practice of vulnerability reduction. Application of the metric to representative examples yields both case-specific and generalisable conclusions.

1. Introduction

1.1. The scope and nature of user vulnerability to technological systems

Today's urban-dwelling individuals are end-users who increasingly depend upon the supply of goods and services produced by technological systems. These systems are typically complex [1–4], and as cities and populations grow, demands placed on these systems lead to redesigns and increases in complexity. End-users often have no alternative means of acquiring essential goods and services, and thus a failure in a technology system has implications for the individual that are disproportionately large compared to the implications for the system operator/owner. End-users may also lack awareness of the technological systems that deliver these goods and services, inclusive of system complexity and fragility, yet may be expected to be concerned for their own security. The resulting dependence on technology justifies concern about the vulnerability that users incur from the systems that provide them with goods and services.

Researchers [5–7], alongside the tradition of military strategists [8], have presented a socio-technical perspective on individual vulnerability, drawing attention to the complexity of the technological systems tasked with the provision of essential goods and services. Meanwhile, other researchers have noted the difficulties of detailed performance modelling of such systems [9–11].

The vulnerability of an urban dweller has also been a common topic within the popular press, for example “Cyber-attack: How easy is it to take out a smart city?” [12], which speculated on how such phenomena as the “Internet of Things” affect the vulnerability of connected systems. Other popular press topics have included the possibility that motor vehicle systems are vulnerable to “hacking” [13].

There is furthermore a widespread recognition that systems involving many 'things that can go wrong' are fragile. In his 1997 retirement speech [14], former astronaut and United States Senator John Glenn mentioned: “… the question I'm asked the most often is: ‘When you were sitting in that capsule listening to the count-down, how did you feel?’ ‘Well, the answer to that one is easy. I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of two million parts - all built by the lowest bidder on a government contract’ …” His concern was justified, and most would appreciate that similar concerns apply to more mundane situations than the Mercury mission.

National infrastructure systems are typically a major subset of the technological systems that deliver goods and services to individual end-users. Infrastructure systems are commonly considered to be inherently valuable to socio-economic development, with the maintenance of security and functionality often emphasized by authors such as Gómez et al. (2011) [15]. We argue that infrastructural systems actually have no intrinsic value to the end-user, and are only valuable until another option can supply the goods and services to the user with lower vulnerability, higher reliability or both. If a house-scale sewage treatment technology were economical, adequately efficient and reliable, then reticulated, centralised sewage systems would have no value. We would also argue that the study of complete technological systems responsible for delivery of goods or services to an end-user, is distinguishable from the study of infrastructural systems.

For the urban apartment dweller, significant and immediate changes to lifestyle quality would occur if any of a list of services became unavailable. To name a few, these services would include those that allow the flow of work information, financial transactions, availability of potable water, fuel/power for lighting, heating, cooking and refrigeration, sewage disposal, perishable foods and general transport capabilities. Each of these essential services is supplied by technological systems of significant complexity and faces an undefined range of possible hazards. This paper explores the basis for assessing the extent to which some technological systems contribute to a user's vulnerability.

Perrow [16] asserts that complexity, interconnection and possibility of major harm make catastrophe inevitable. While Perrow's assertion may have intuitive appeal, there is a need for a quantitative approach to the assessment of vulnerability. Journals (e.g. International Journal of Emergency Management, Disasters) are devoted to analysing and mitigating individuals' vulnerabilities to natural disasters. While there is an overlap of topic fields, disaster scenarios characteristically assume geographically-constrained, simultaneous disruption of a multitude of services and also implicitly assume that the geographically unaffected regions can and will supply essential needs during reconstruction. This research does not consider the effects of natural disasters, but rather the potential for component or subsystem disruptions to affect the technological system's ability to deliver goods and services to the end-user.

Some technological systems, such as communications or water distribution systems, transmit the relevant goods and services via “nodes” that serve only to aggregate or distribute them. Such systems can be characterised as “homogeneous” and are thus distinguished from systems that progressively create as well as transmit goods, and therefore require a combination of processes, input and intermediate streams, and services. The latter type of system is categorised as heterogeneous, and such heterogeneity must be accommodated in an analysis measure.

1.2. Quantification as prelude to change

We propose and justify a quantification of a technological system's contribution to the vulnerability of an urban-dwelling end-user who is dependent upon its outputs. The proposed approach can be applied to arbitrary heterogeneous technological systems and can be shown to be a valid measure of an attribute that had previously been only intuitively appreciated. Representative examples are used to illustrate the theory, and preliminary results from these examples illustrate some generalised concerns and approaches to decreasing an urban-dwelling end-user's “exposure” to technological systems. The investigation of a system's exposure will allow a user to make an informed assessment of their own vulnerability, and to reduce their exposure by making changes to system aspects within their control. Quantifying a system's exposure will allow a system's owner to identify weaknesses, and to assess the effect of hypothetical changes.

2. Quantification of an individual's “exposure”: development of theory

2.1. Background to theory

Consider an individual's level of “exposure” under two scenarios: in the first, a specific service can only be supplied to an end-user by a single process, which is dependent on another process, which is in turn dependent on a third process. In the second scenario, the same service can be offered to the user by any one of three identical processes with no common dependencies. Any end-user of the service could reasonably be expected to feel more “exposed” under the first scenario than under the second. Service delivery systems are likely to involve complex processes that include at least some design redundancies, but also include single points of failure, and cases where two or more independent failures would deny the supply of the service. For such systems, the “exposure” of the end-user may not be obvious, but it would be useful to be able to distinguish quantitatively among alternative configurations.
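Anticipating the exposure metric {E1, E2, E3 …} developed in Section 2.2, the two scenarios can be contrasted directly (the process labels P1, P2 and P3 are illustrative):

Scenario 1: Service = P1 AND P2 AND P3, giving {E1, E2, E3} = {3, 0, 0} - any one of three single failures denies the service.

Scenario 2: Service = P1 OR P2 OR P3, giving {E1, E2, E3} = {0, 0, 1} - no single or double failure denies the service; only the simultaneous failure of all three independent processes does.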

The literature acknowledges the importance of a technological configuration's contribution to end-user vulnerability [6], yet such studies do not quantitatively assess the significance of the system configuration. Reported approaches to vulnerability evaluation can be broadly categorised according to whether they consider homogeneous or heterogeneous systems, whether they assume a static or a dynamic system response, and whether system configuration is or is not used as the basis for the development of metrics. The published literature on risk analysis (including interconnected system risks), on resilience analysis, and on modelling all has a bearing on the topic, and each area is briefly summarised below:

Risk analysis may be applied to heterogeneous or homogeneous systems; it does not analyse dynamic system responses and limits analysis to a qualitative assessment of the effect of brainstormed hazards. Classical risk analysis [17–20] requires an initial description of the system under review; however, practitioners commonly generate descriptions lacking specific system-configuration detail. While many variations are possible, it is common for an expert group to carry out the risk analysis by listing all identified hazards and associated harms. Experts then categorise identified harms by severity, and hazards according to the vulnerability of the system and the probability of hazard occurrence. Risk events are then classified by severity, based on harm magnitude and hazard probability. Undertaking a risk analysis is valuable, yet without a detailed system definition to which the assessments of hazard and probability are applied, probability-of-occurrence evaluations may be inaccurate or may fail to identify guided hazards, and the analysis may fail to identify specific weaknesses. Another issue arises if the exercise fails to account for changes to instantaneous system states: if the system is close to design capacity when a hazard occurs, the probability of the hazard causing harm is higher than if the hazard occurred when the system was operating at lower capacity. Finally, the categories that correlate harm and hazard to generate a risk evaluation are inherently coarse-grained, meaning that changes to system configuration or components may or may not trigger a change to the category that is assigned to the risk.

Another analysis approach is that of “Failure Modes and Effects Analysis” (FMEA) [21], which examines fail or go conditions of each component within a system, ultimately producing a tabulated representation of the permutations and combinations of input “fail” criteria that cause system failure. FMEA is generally used to demonstrate that a design-redundancy requirement of a tightly-defined system is met.

'Resilience' has been the topic of significant research, much of which is dedicated to the characterisation of the concept and to definitional consensus. One representative definition [22] is “… the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time …” This definition is interpreted [23–25] quantitatively as a time-domain variable measuring one or more characteristics of the transient response. For complex systems, derivation of time-domain responses to a specific input disruption can be expected to be difficult, and such a derivation will only be valid for one particular initial operational state and disruption event. Each possible system input and initial condition would generate a new time-domain response, and so a virtually infinite number of transient responses would be required to fully characterise the 'resilience' of a single technological system. All such approaches implicitly assume that the disturbance is below an assumed ‘maximum tolerable’ level, so that the technological system's response will be a continuous function, i.e. the system will not actually fail. A further methodological issue is that evaluations of this kind are post-hoc observations, in which feedback from event occurrences leads to design changes. Thus, an implicit assumption exists that the intention of resilient design is to minimise the disturbance to an ongoing provision of goods and services, rather than to prevent output failure. Resilience analysis examines each permutation and combination of inputs, constraining its scope to failures as inputs. Because this method considers system responses to external stimuli, it requires a detailed knowledge of the system configuration, but is only practical for relatively simple cases (the difficulty of modelling large systems has been noted by others [9]).

A third approach constructs a model of the target system in order to infer real-world behaviour. The model - as a simplified version of the real-world system - is constructed for the purposes of experimentation or analysis [26]. Applied to the context of end-user vulnerability, published simplifications of communication systems, power systems and water distribution systems commonly assume system homogeneity. For example, a graph theory approach will consider the conveyance of goods and services as a single entity transmitted across a mesh of edges and vertices that each serve either to disperse or to distribute the product. Once a distribution network is represented as a graph, it is possible to mathematically describe the interconnection between specified vertices [10], and to draw conclusions [27] regarding the robustness of the network. Tanaka and colleagues [28] noted that homogeneous networks can be represented in graph theory notation, making graph-theoretic analyses possible. Common graph theory metrics consider the connections of each edge and do not consider the possibility that an edge could carry a different service from another edge. Because graph theory metrics assume a homogeneous system, these metrics cannot be applied directly to heterogeneous systems in which interconnections do not always carry the same goods or services.
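As a hedged illustration of the homogeneous-network analyses described above (the small network and its node names are invented for this sketch, not drawn from the cited studies), a graph-theoretic robustness measure such as node connectivity can be computed once a distribution network is expressed as a graph; the same computation has no direct interpretation when different edges carry different goods or services.

import networkx as nx

# Invented homogeneous distribution network: one source feeding a consumer
# through two independent junctions (i.e. one level of path redundancy).
G = nx.Graph()
G.add_edges_from([
    ("source", "junction_a"), ("source", "junction_b"),
    ("junction_a", "consumer"), ("junction_b", "consumer"),
])

# Node connectivity between source and consumer: the minimum number of
# intermediate nodes whose removal disconnects supply (2 in this sketch).
print(nx.node_connectivity(G, "source", "consumer"))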

2.2. Exposure of a technological system

In order to obtain a value for the technological system's contribution to end-user vulnerability that enables comparisons among system configurations, a quantitative analytical technique is needed. To achieve this, four essential principles are proposed to allow and justify the development of a metric that evaluates the contribution of a heterogeneous technological system to the vulnerability of an individual. These principles are:

(1) Application to an individual end-user: an infrastructural system may be quite large and complex. Haimes and Jiang [11] considered complex interactions between different infrastructural systems by assigning values to degrees of interaction: the model allows mathematical exploration of failure effects but (as is acknowledged by these authors) depends on interaction metrics that are difficult to establish. This paper presents an approach that is focussed on a representative single end-user. When an individual user is considered, not only is the performance of the supply system readily defined, but the relevant system description is more hierarchical and less amorphous. Our initial work has also suggested that if failures requiring more than three simultaneous and unrelated hazards are excluded from consideration, careful modelling can generate a defensible model without feedback loops.

(2) Service level: it is possible to not only describe goods or services that are delivered to the individual (end-user), but also to define a service level at which the specified goods or services either are-, or are-not delivered. From a definitional standpoint, this approach allows the output of a technological system to be expressed as a Boolean variable (True/False), and allows the effect of the configuration of a technological system to be measured against a single performance criterion. For some goods/services, additional insights may be possible from separate analyses at different service levels (e.g. water supply analyzed at “normal flow” and at “intermittent trickle”) however for other goods/services (e.g. power supply) a single service level (power on/off) is quite reasonable.

(3) Hazard and weakness link: events external to a technology system only threaten the output of the technology system if the external events align with a weakness in the technology system. If a hazard does not align with a weakness then it has no significance. Conversely, if a weakness exists within a technological system and has not been identified, then hazards that can align with that weakness are also unlikely to be recognised. If the configuration of a particular technology system is changed, some weaknesses may be removed while others may be added. Therefore, for each weakness added, an additional set of external events can be newly identified as hazards - and correspondingly, for each weakness that is removed, the associated hazards cease to be significant. Processes capable of failure, and input streams that could become unavailable, are weaknesses that are significant regardless of the number and/or type of hazards of sufficient magnitude to cause failure that might align with any specific example of such a weakness.

(4) Hazard probability: some hazards (e.g. extreme weather events) occur randomly, can be assessed statistically, and will have a higher probability of occurrence over a long time period. Terrorist actions or sabotage, in particular, do not occur randomly but must be considered as intelligently (mis)guided hazards. The effect of a guided hazard upon a risk assessment is qualitatively different from the effect of a random hazard: the guided hazard will occur every time the perpetrator elects to cause it, and therefore has a probability of 1.0. It is proposed that the significance of this distinction has not been fully appreciated. A malicious entity will seek out weaknesses, regardless of whether these have been identified by a risk assessment exercise or not. Since random and guided hazards have an equal effect, and have a probability approaching 1 over a long time period, we argue that a risk assessment based upon the 'probability' (risk) of a hazard occurring is a concept with limited usefulness, and that vulnerability is more validly assessed by assuming that all hazards (terrorist action, component failure or random natural event) will occur sooner or later, hence having a collective probability of 1.0. As soon as the assumption is made that sooner or later a hazard will occur, assessment of the technological system's contribution to user vulnerability can be refocussed from consideration of hazard probability to consideration of the number and type of weaknesses with which (inevitable) hazards can align.

A heterogeneous technological system may involve an arbitrary number of linked operations, each of which (consistent with the definition stated by Slack et al. [29]) requires inputs, executes some transformation process, and produces an output that is received by a subsequent process and ultimately serves an end-user. If the output of such a system is considered to be the delivery or non-delivery of a nominated service-level output to an individual end-user, then the arbitrary heterogeneous technological system can be described by a configured system of notional AND/OR/NOT functions [30] whose inputs/outputs include unit operations, input streams, intermediate product streams and services. For example, petrol is dispensed from a petrol station bowser to a car if fuel is present in the bulk tank, power to the pump is available, the pipework and pump are operational and the required control signal is valid. Hence, a notional “AND” gate with these five inputs will model the operation of the dispensing system. The valid control signal will be generated when another set of different inputs is present, and the provision of this signal can be modelled by a notional “AND” function with nominated inputs. The approach allows the operational configuration of a heterogeneous technological system to be represented by a Boolean algebraic expression. Fig. 1 illustrates the use of Boolean operations to represent a somewhat more complex technological system.

Fig. 1. Process and stream operations required for system: Boolean representation.
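As a minimal sketch of this Boolean encoding (in Python, with input names paraphrased from the petrol-bowser description above; the inputs to the control-signal gate are hypothetical placeholders, since the text does not name them):

# Notional AND gate with the five inputs listed for the dispensing operation.
def petrol_dispensed(bulk_fuel, pump_power, pipework, pump, control_signal):
    return bulk_fuel and pump_power and pipework and pump and control_signal

# The control signal is itself the output of another notional AND gate;
# these three inputs are hypothetical examples, not taken from Fig. 1.
def control_signal_valid(console_power, comms_link, operator_authorisation):
    return console_power and comms_link and operator_authorisation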

 

Having represented a specific technological system using a Boolean algebraic expression, a 'truth table' can be constructed to display all permutations of process and stream availabilities as inputs, and the technological system output as a single True or False value. From the truth table, count the cases in which a single input failure will cause output failure, and assign that total to the variable “E1”. Count the cases where two input failures (exclusive of inputs whose failure will alone cause output failure) cause output failure, and assign that total to E2. Count the cases in which three input failures cause output failure (and where neither single nor double input failures within that triple combination would alone cause output failure) and assign that total to the variable “E3”, and similarly for further “E” values. A simple algorithm can generate all permutations of “operate or fail” for every input process and stream. If a “1” is taken to mean “operate” and a “0” to mean “fail”, then for a model with n inputs (streams and processes) there are 2^n input combinations. If the Boolean expression is evaluated for each binary representation of input states (processes and streams), and the output condition (operate or fail) is recorded together with the input conditions for each output-fail combination, the E1, E2, etc. values can be computed (E1 is the number of output-failure conditions in which only a single input has failed). A truth-table approach to generating exposure metrics is illustrated in Fig. 2.

Fig. 2. Evaluation of exposure, by analysis of Boolean expression.
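A brute-force sketch of this truth-table procedure (in Python; the function and input names are this sketch's own, and each Ek is counted as the number of minimal failure combinations of size k, consistent with the definitions above):

from itertools import combinations

def exposure(inputs, output_fails, max_order=3):
    # Count, for k = 1 .. max_order, the combinations of exactly k failed
    # inputs that cause output failure and contain no smaller failing
    # combination (i.e. the Ek values as defined in the text).
    E = [0] * max_order
    minimal_sets = []
    for k in range(1, max_order + 1):
        for combo in combinations(inputs, k):
            failed = set(combo)
            if any(m <= failed for m in minimal_sets):
                continue  # already fails via a smaller combination
            if output_fails(failed):
                minimal_sets.append(failed)
                E[k - 1] += 1
    return E

# Worked check against the petrol-bowser gate above (illustrative names):
BOWSER_INPUTS = ["bulk_fuel", "pump_power", "pipework", "pump", "control_signal"]
def bowser_fails(failed):
    return len(failed) > 0            # a single AND gate fails if any input fails
print(exposure(BOWSER_INPUTS, bowser_fails))   # -> [5, 0, 0]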

The composite metric {E1, E2, E3 … En} is therefore mapped from the Boolean representation of the heterogeneous system, and characterises the weaknesses of that system in terms of the contribution of the technological configuration to end-user vulnerability. Indeed, for a given single output at a defined service level - described by a Boolean value representing “available” or “not available” - it is possible to isomorphically map an arbitrary technological system onto a Boolean algebraic expression. Thus, it is possible to create a homomorphic mapping (consistent with the guidance of Suppes [31]) to a composite metric that characterises the weakness of the system. Furthermore, the metric allows for comparison of the exposure levels of alternative technological systems and configurations.

Next, we consider whether the measure represents the proposed attribute, by considering validity criteria. Hand [32] states that construct validity “involves the internal structure of the measure and also its expected relationship with other, external measures …” and “… thus refers to the theoretical construction of the test: it very clearly mixes the measurement procedure with the concept definition”. Since the Boolean algebraic expression represents all processes, streams and interactions, it can be directly mapped to a Process Flow Diagram (PFD) and so is an isomorphic mapping of the technological system with respect to processes and streams. The truth table is a homomorphic mapping of output conditions and input combinations, with output values unambiguously derived from the input values, but the configuration cannot be unambiguously derived from the output values. The {E1, E2, E3 … En} values are therefore a direct mapping of the system configuration.

Since the configuration and components of the system are represented by a Boolean expression, and the exposure metric {E1, E2, E3 … En} is assembled directly from the representation of the technological system, it has sufficient “construct validity” in the terms proposed by Hand [32]. The representational validity of this metric to the phenomenon of interest (viz. contribution to individual end-user vulnerability) must still be considered [31,32], and two justifications are proposed. Firstly, the representation of “exposure” using {E1, E2, E3 … En} supports the common system engineering “N+1”, “N+2” design redundancy concepts [33]. Secondly, the cost of achieving a given level of design redundancy can be assumed to be related to “E” values and so enumerating these will support decisions on value propositions of alternative projects, a previously-identified criterion for a valid metric.

Generating an accurate exposure metric as described requires identification of processes and streams, which in practice requires a consideration of representation granularity. If every transistor in a processor chip were considered as a potential cause of failure, the “exposure” value calculated for the computer would be exceedingly high. If, by contrast, the computer were considered as a complete, replaceable unit, then it would be assigned an exposure value of 1. A pragmatic definition of granularity addresses this issue: if some sub-system of interest is potentially replaceable as a unit, and can be attacked separately from other sub-systems, then the sub-system of interest should be considered as a single potential source of failure. This definition allows adequate precision and reproducibility by different practitioners.

Each input to an operation within a technological system will commonly be the output of another technological system, which will itself have a characteristic “exposure”. The contribution of the predecessor system's exposure to the successor system must be calculated. This problem is generalised by considering that each input to a Boolean ‘AND’ or ‘OR’ operation has a composite exposure metric, and developing the principles by which the operation's output can be calculated from these inputs; a short code sketch after the list below illustrates both rules. Consider, for example, an AND gate that has three inputs (A, B and C), whose inputs have composite exposure metrics {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }. The contributory exposure components are added component-wise, hence the resulting exposure of the AND operation is {(A1+B1+C1), (A2+B2+C2), (A3+B3+C3) … (An+Bn+Cn)}. The generalised calculation of contributory exposure is more complex for the OR operation. For an OR gate with three inputs (A, B and C), each of which has composite exposure metric {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }:

• The output E1 value is 0

• The output E2 value is 0

• The output E3 value is 2((A1−1)+(B1−1)+(C1−1)),((A2−1)+(B2−1)+(C2−1)), ((A3−1)+(B3−1)+(C3−1)) since one fail from each input must occur for the output to fail, however each remaining combination of fails contributes to the E3 value

• The E4 and subsequent values are calculated in exactly the same way as the E3 value.
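A minimal sketch of these combination rules (in Python; the AND rule follows the component-wise addition stated above, while the OR rule is simplified to its leading term - one single-point failure drawn from each branch - with higher-order terms, and the fuller expression given in the text, omitted):

def combine_and(*branches):
    # AND operation: any branch failure fails the output, so exposure
    # vectors are added component-wise (shorter vectors padded with zeros).
    n = max(len(b) for b in branches)
    return [sum(b[i] if i < len(b) else 0 for b in branches) for i in range(n)]

def combine_or(*branches):
    # OR operation: the output fails only when every branch fails, so the
    # first len(branches)-1 components are zero. The next component is
    # approximated here as the product of each branch's E1 value.
    leading = 1
    for b in branches:
        leading *= b[0] if b else 0
    return [0] * (len(branches) - 1) + [leading]

print(combine_and([49, 12, 1], [36, 4, 6], [2, 3, 0]))   # -> [87, 19, 7]
print(combine_or([3, 0, 0], [2, 1, 0], [1, 0, 0]))       # -> [0, 0, 6]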

Since the contributory system has effectively added streams and processes, the length of the output exposure vector is increased when the contributory system is considered. The proposed approach is therefore to nominate a level to which exposure values will be evaluated. If, for example, this level is set at 2, then the representation would be considered to be complete when it could be shown that no contributory system adds to the E2 values of the represented system.

3. Implications from theory

The current levels of expenditure on infrastructure “hardening” are well reported in the popular press. The theory presented is proposed to be capable, for a defined technological system, of quantitatively comparing the effectiveness of alternative projects. The described measure is also proposed to be capable of differentiating between systems that have higher or lower exposure, and thus of allowing prioritisation of effort. The following examples have undergone a preliminary analysis. The numerical outputs are dependent on system details and boundaries; nevertheless, the authors' preliminary results indicate the form of output that is anticipated, and are considered to demonstrate the value of the principles. The example studies are diverse and examine well-defined services and service levels for the benefit of a representative individual end-user. Each example involves a technological system (with a range of processes, intermediate streams, and functionalities) and may therefore be expected to include a number of cases in which a single stream/process failure will cause the service delivery to fail - and a number of other cases in which multiple failures would result in the non-delivery of the defined service. The analyses also collectively identify common contributors, technological gaps, and common principles that inform improvement decisions. In each example case, the delivered service and service level are established, after which the example is described and the boundaries confirmed. The single-cause-of-failure items (contributors to the E1 value) are assessed first, followed by the dual-combination causes of failure (contributors to the E2 values) and then the E3 values. It is assumed that neither maintenance nor replacement of components is required within the timeframe considered - i.e., outputs will be generated as long as their designed inputs are present and the processes are functional.

3.1. Example 1: Petrol station, supply of fuel to a customer

The “service” in this case is the delivery, within standard fuel specifications (including absence of contaminants) and at standard flow rate, of petrol into a motor vehicle at the forecourt of a petrol station. The scope includes the operation of the forecourt pumps, the underground fuel storage tanks, metering and transactional services. Although on-site storage is significant, the refilling of the underground tanks from fuel stored in national reserves can only be accomplished by a limited number of approaches, must occur frequently relative to the considered timeframe, and so must be considered. Since many sources supply the bulk collection depot, the analysis will not go further back than the bulk storage depot. Similarly, the station is unlikely to have duplicated power feeders from the nearest substation and so this supply must be considered. The financial transaction system, and the communications system it uses, must be included in the consideration.

On the assumption that the station is staffed, sanitary facilities (sewage, water) are also required (see Fig. 3). While completely automated “truck stop” fuel facilities exist, facilities as described are common and can reasonably be called representative. The fuel dispensing systems in both cases are almost identical; however, the automated facilities cannot normally carry out cash transactions, and the manned stations commonly sell other goods (food and drink) and may provide toilet facilities.

Fig. 3. Operation of petrol station.

In work not reported here, the exposure metrics of the contributory systems have been estimated as EFTPOS financial transaction {49, 12, 1}, staff facilities system {36, 4, 6}, power {2, 3, 0}. Based on these figures, the total exposure metric for the petrol delivered to the end user is estimated at {92, 20, 8}.
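Under the component-wise (AND) combination rule of Section 2.2, the quoted contributory metrics account for {87, 19, 7} of this total; the remaining {5, 1, 1}, attributed here to station-local processes and streams, is a back-calculated assumption of this sketch rather than a figure reported in the analysis:

eftpos           = [49, 12, 1]
staff_facilities = [36,  4, 6]
power            = [ 2,  3, 0]
station_local    = [ 5,  1, 1]   # assumed split: forecourt pumps, pipework, bulk tank, etc.

total = [sum(component) for component in zip(eftpos, staff_facilities, power, station_local)]
print(total)                      # -> [92, 20, 8]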

In evaluating the exposure metric, the motor/petrol-pump and pipework connections do not generate E3 values (more than three failures would be required to cause a failure of the output function) since the petrol station has four pumps. The petrol station power supply affects several plant items that are local to the petrol station, and so is represented at the level which allows a valid assessment of its exposure contribution. For bulk petrol supply, numerous road-system paths exist, and many tankers and drivers are capable of bringing fuel from the refinery to the station, so these do not affect the E3 values. The electricity distribution system has more than three feed-in power stations and is assumed to have at least three High Voltage (HV) lines to major substations; however, local substations commonly have only two voltage-breakdown transformers, and a single supply line to the petrol station would be common. The local substation and feeders are assumed to be different for the sewage treatment plant, the bulk petrol synthesis and the banking system clearing-house (and are accounted for in the exposure metrics for those systems), but the common HV power system (national grid) does not contribute to the E3 values, and so it is not double-counted in assessing the power supply exposure of the petrol station and contributory systems. While EFTPOS and sewage systems have had high availability, this analysis emphasizes the large contribution they make to the user's total exposure, and thus suggests options for investigation.

This example illustrates the significance of the initial determination of system boundaries. The output in this example is defined as fuel supplied to a user's vehicle at a representative petrol station. Other studies might consider a user seeking petrol within a greater geographic region (e.g. a neighbourhood). In that case the boundaries would be determined so as to consider alternative petrol stations, the subsystems that are common to local petrol stations (power, sewage, financial transactions, bulk fuel), and the subsystems (e.g. local pumps) where design redundancy is achieved by access to multiple stations.

3.2. Example 2: Sewage system services for apartment-dweller

Consider the service of the sanitary removal of human waste, as required, via the lavatory installed in an urban apartment discharging to a wastewater treatment plant. The products of the treatment operation are environmentally acceptable treated water discharged to waterways, and solid waste meeting environmentally acceptable specifications sent to landfill.

The technology description assumes that the individual user lives in an urban area with a population of 200,000 to 500,000. This size is selected because it is representative of a large number of cities. An informal survey of the configuration of sewage systems used by cities within this size range reveals a level of uniformity, and hence the configuration in the example is considered “representative”.

The service is required as needed, commences with the water-flushed lavatory, and ends with the disposal of treated material. Electric power supplies to pumping stations and to local water pumps are unlikely to have multiple feeders and will be considered back to the nearest substation. The substation can be expected to have multiple feeders, and so it is not considered necessary to trace the electric power supply further “back”. Significant pumping stations would commonly have a “Local/Manual” alternative control capability, in which remote control is normal but an operator can select “manual” at the pump station and thereafter operate the pumps and valves locally.

Operationally, the cistern will flush if town water is available and the lift pump is operational and power is available to the lift pump. The lavatory will flush if the cistern is full. The waste will be discharged to the first pumping station if the pipework is intact (gravity fed). The first pumping station will operate if either the main or the backup pump is operational and power is available and either an operator is available or the control signal is present. The control signal will be available if signal lines are operational and signal equipment is operational and power for servo motors is available and a remote operator or sensor is operational. The waste will be delivered to the treatment station if the major supply pipework is operational. The treatment station will be operational (i.e. able to separate and coalesce the sewage into a solid phase suitable for landfill, and an environmentally benign liquid that can be discharged to sea or river) if the sedimentation tank and discharge system are operable and the biofilm contactor is operational and the clarifier/aerator is operational and the clarifier sludge removal system is operational and power supply is available and operators are available. The sludge can be removed if roads are operational and driver and truck and fuel are available.
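The chain of conditions above can be transcribed almost directly into the Boolean form of Section 2.2; the sketch below (in Python, with argument names paraphrased from the description and covering only some of the stages) is illustrative rather than a complete model:

def cistern_flushes(town_water, lift_pump_ok, lift_pump_power):
    return town_water and lift_pump_ok and lift_pump_power

def first_pump_station_runs(main_pump, backup_pump, station_power, operator, control_signal):
    # either pump, plus power, plus either a local operator or the remote control signal
    return (main_pump or backup_pump) and station_power and (operator or control_signal)

def treatment_plant_runs(sedimentation, biofilm_contactor, clarifier_aerator,
                         sludge_removal, plant_power, operators):
    return (sedimentation and biofilm_contactor and clarifier_aerator
            and sludge_removal and plant_power and operators)

def service_delivered(cistern, gravity_pipework, pump_station, trunk_pipework, treatment):
    return cistern and gravity_pipework and pump_station and trunk_pipework and treatment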

Several single sources of failure (contributors to the E1 value) can be discerned: the power supply to the local water pump, the gravity-fed pipework from the lavatory to the first pumping station, and the power supply to the first pump station. If manual control of the pumping station is possible, then the control system contributes to the E2 value; otherwise the control system wiring and logic will contribute to the E1 value. Assuming duplicate pumping station pumps, these contribute to the E2 value. Few of the treatment plant processes will be duplicated, and so these will contribute to the E1 values. For the urban population under consideration, the treatment plant is unlikely to have dual power feeds, and so the power supply contributes to the E1 value.

For real treatment plants, most processes will include bypasses or overflow provisions. If the service is interpreted to specify the discharge of environmentally acceptable waste, then these bypasses are irrelevant to this study. However, if the “service” were defined with relaxed environmental impact requirements, then the availability of bypasses would mean that treatment plant processes would contribute to E2 values. Common reports of untreated sewage discharge following heavy rainfall events indicate that the resilience of treatment plants is low.

The numerical values of exposure presented in Table 1 are based on a representative design. It is noted that design detail may vary for specific systems.

Table 1. Contributions to exposure of sewage system.

Preliminary research included the commissioning of a risk analysis, by a professional practitioner, of a sewage system defined to the same scope, components and configuration as the target system. While the risk analysis generated useful results, it failed to identify all of the weaknesses that were identified by the exposure analysis.

3.3. Example 3: Supply of common perishable food to apartment-dweller

For the third example, a supply of a perishable food will be considered. The availability to a (representative) individual consumer, of whole milk with acceptable bacteriological, nutritional and organoleptic properties will be considered to represent the “service” and associated service level. Fresh whole milk is selected because it is a staple food and requires a technological system that is similar to that required by other nominally processed fresh food. Nevertheless, the technological system is not trivial - the pasteurisation step requires close control if bacteriological safety is to be obtained without deterioration of the nutritional and taste qualities.

At the delivery point, a working refrigerator and electric power are required. Transit to the apartment requires electric power to operate the elevator. Transport from the retail outlet requires fuel, a driver, an operational vehicle, and roads. The retail outlet requires staff, electric power, a functional sewage system, functional water supply, functional lighting, communications and stocktaking systems, and access to financial transaction capability. Transport to the retail outlet requires fuel, drivers, roadways and operational trucks. The processing and packaging system requires pasteurisation equipment, a control system, a Cleaning In Place (CIP) system, CIP chemical supply, a pipework-and-valve changeover system for CIP, electric-powered pumps, electrically operated air compressors, fuel and fired hot water heaters, packaging equipment, packaging material supplies, water supply, waste water disposal, a sewage system and skilled operators. Neither the processing facility, nor the retail outlet, nor the apartment would commonly have duplicated electric power feeders, and so these and their associated protection systems must be considered back to the nearest substation. The substation can be assumed to have multiple input feeders, and so the electric power system need not be considered any further upstream of the substation. The heating system (hot water boiler and hot water circulation system) for the pasteurizer would commonly be fired with fuel oil, but includes enough storage that it would commonly be supplied directly from a fuel wholesaler.

The processes leading from raw milk collection up to the point where pasteurised and chilled milk is packaged are defined in considerable detail by regulatory bodies - and can therefore be considered to be representative. There will be variation in packaging equipment types; however, each of these can be considered as a single process and a single “weakness”. Distribution to supermarkets, retail sales and distribution to individual dwellings are similar across most of the western world and are considered to be adequately representative. Both the processing plant and the retail outlet are staffed and so require an operational sewage disposal system and water supply.

Delivery of the milk will be achieved if the refrigeration unit in the apartment is operational and power is supplied to it and packaged product is supplied. Packaged product can be supplied if apartment elevator and power are available and individual transport from the retail outlet is functional and retailing facilities exist and are supplied with power and are staffed and have functional financial transaction systems. The retail facility can be staffed if skilled persons are available and transport allows them to access the facility and staff facilities (water, sewage systems) are operational. The sewage system is functional if water supply is available and the pumping station is functional and is supplied with power and control systems are available. The bulk consumer packs are delivered to the retail outlet if road systems and drivers and vehicles and fuel are available. The packaged product is available from the processing facility if fuel is available to heat the pasteurizer and power is available to operate pumps and the control system is operable and skilled operators are available and the homogeniser is operational and compressed air for packaging equipment is available and packaging equipment is operational. Product can be made if CIP chemicals are available and CIP waste disposal is operational. Since very many suppliers and transport options are capable of supplying raw milk to the processing depot, the study need not consider functions/processes that are further upstream of the processing depot, i.e. the on-farm processes or the raw milk delivery to the processing depot.

Several single sources of failure (contributors to the E1 value) can be identified: the power supply to the refrigerator (and cabling to the closest substation); roads, fuel, vehicle and driver; and staff and power supply (and cabling to the substation) at the retail outlet. Staff facilities (and hence the exposure contributions from the sewage system) must be considered. Provided the retail outlet is able to accept cash or paper credit notes, the payment system contributes to the E2 value; however, if the retail outlet is constrained to electronic payments, then many processes associated with the communications and banking systems will contribute to the E1 value. Roads and fuel for bulk distribution will contribute to the E1 value, whereas drivers and trucks contribute to higher E values, since many drivers and trucks can be assumed to be available. The power supply to the processing and packing facility will contribute to the E1 value. The tight specifications and regulatory standards for consumer-quality milk will generally not allow any bypassing of processes, and so each of the major processes (reception, pasteurisation, standardisation, homogenisation and packaging) will contribute to the E1 values. The milk processing and packaging facility will also need fuel for the Cleaning In Place (CIP) system and will need staff facilities - and hence the exposure contribution of the sewage system must be considered.

The examples demonstrate three commonalities. Firstly, it is both practical and informative to evaluate contributors to the E1, E2, etc. variables for a broad range of cases and technologies. Secondly, sources of vulnerability that are specific to the examples can be identified, and thirdly, principles for the reduction of vulnerability can be readily articulated. In the petrol station supply example, one could consider whether eliminating the operator (and with it the need for sewage and water services) yields a greater reduction in exposure than retaining the operator and allowing cash transactions.

Some common themes can also be inferred from the examples, suggesting general principles for exposure reduction that are likely to be applicable to other cases. The example studies contain intermediate streams: if such intermediate streams have specifications that are publicly available (open-source), there is increased opportunity for service substitution from multiple sources, and a reduction in the associated exposure values. Proprietary software and data storage are a prominent example of lack of standardisation despite the availability of such approaches as XML.

Currently, electronic financial transactions require secure communications between an EFTPOS terminal and the host system, and the verification and execution of the transaction between the vendor's bank account and the purchaser's bank account. These processes are necessarily somewhat opaque. Completely decentralised transactions are possible as long as both the seller and the purchaser agree upon a medium of exchange. The implications of either accepting or not accepting a proposed medium of exchange are profound for the “exposure” created.

Although the internet was designed to be fault tolerant, its design requires network controllers to determine the path taken by data, and in practice this has resulted in a huge proportion of traffic being routed through a small number of high-bandwidth channels. This is a significant issue: if the total intercontinental internet traffic (including streamed movies, person-to-person video chats, website service and also EFTPOS financial transaction data) were to be routed to a low-bandwidth channel, the financial transaction system would probably fail.

Technological systems such as the generation of power using nuclear energy are currently only economic at very large scale, and hence represent dependencies for a large number of systems and users. Conversely, a system generating power using photovoltaic cells, or using external combustion (tolerant of wide variations in fuel specification) based on microalgae grown at village level, would probably be inherently less “exposed”.

In every case where a single-purpose process forms an essential part of a system, it represents a source of weakness. By contrast, any “general purpose” or “re-purpose-able” component can, by definition, contribute to decreasing exposure. Human beings' ability to apply “common sense” and their unlimited capacity for re-purposing make them the epitome of multi-purpose processors. The capability to incorporate humans into a technological system is possibly the single most effective way to reduce “exposure”. The “capability to incorporate” may require some changes that do not inherently affect the operation of a system but merely add a capability, such as the inclusion of a hand-crank capability in a petrol pump.

The examples (fuel supply, sewage disposal, perishable food) also uncover the existence of technological gaps for which a solution would decrease exposure. Such gaps include: the capability to store significant electrical energy locally (a more significant gap than the capability to generate locally); a truly decentralised/distributed and secure communication system, and an associated knowledge storage/access system; a fully decentralised financial system that allows safe storage of financial resources and safe transactions; a decentralised sewage treatment technology; less centralised technological approaches for the supply of both food and water; and a transport system capable of significant load-carrying, though not necessarily high speed, with low-specification roadways and broadly specified energy supplies.

4. Discussion

The detailed definition of a technological system allows a more rigorous process for identification of hazards, by ensuring that all system weaknesses are considered. Calculating the exposure level of a technological system is not proposed as a replacement for risk analysis, but as a technique that offers specific insights and also increases the rigour of risk analysis. A measure of contribution-to-vulnerability is both simplified and enhanced in value if the measure is indexed to the delivery of specific goods/services, at defined levels, to an individual end-user. This approach will allow clarification of the extent to which a given project will benefit the individual. The analysis is applicable to any technological system supplying a specified deliverable at a given service level to a user. It is recognised that while some hazards (e.g. a major natural or man-made disaster) may affect more than one system, the analysis of the technological exposure of each system remains valid and valuable. The analysis of hazard probability is of limited value over long timeframes or when hazards are guided, and characterising the number and types of weaknesses in a technological system is a better indicator of the vulnerability it contributes to the person who depends on its outputs. An approach to quantifying the “exposure” of a technological system has been defined and justified as a valid representation of the contribution made to the vulnerability of the individual end-user. The approach generates a fine-grained metric {E1, E2, E3 … En} that is shown to accurately measure the vulnerability incurred by the end-user: calculation of the metric is tedious but not conceptually difficult; the measure is readily able to be verified and replicated; and the calculated values allow detailed investigation of the effect of hypothesised changes to a target system. The approach has been illustrated with a number of examples, and although only preliminary analyses have been made, the practicality and utility of the approach have been demonstrated. Only a small number of example studies have been presented, although they have been selected to address a range of needs experienced by actual urban dwellers. In each case the scope and technologies used are proposed to be representative, and hence conclusions drawn from the example studies can be considered to be significant.

Even the preliminary analyses of the examples have indicated two distinct categories of contributors to vulnerability: weaknesses that are located close to the point of final consumption, and highly centralised technological systems such as communications, banking, sewage, water treatment and power generation. In both of these categories the user's exposure is high despite limited design redundancy; however, the user's exposure could be reduced by selecting or deploying technology subsystems with lower exposure close to the point of use, and by using public standards to encourage multiple opportunities for service substitution. The exposure metric has been shown to provide a measure of the vulnerability contributed by a given technological system to the individual end-user, and has been applied to representative examples of technological systems. Although results are preliminary, the metric has been shown to allow the derivation of both specific and generalised conclusions. The measure can integrate with, and add value to, existing techniques such as risk analysis.

The approach is proposed as a theory of exposure, including the conceptual definitions, domain limitations, relationship-building and predictions that are identified [34] as essential criteria for a useful theory.

Acknowledgement

This research is supported by an Australian Government Research Training Program (RTP) Scholarship.

References

[1] J. Forrester, Industrial Dynamics, MIT Press, Boston, MA (1961)

[2] R. Ackoff, Towards a system of systems concepts, Manag. Sci., 17 (11) (1971), pp. 661-671

[3] J. Sterman, Business Dynamics: Systems Thinking and Modelling for a Complex World, McGraw-Hill Boston (2000)

[4] I. Eusgeld, C. Nan, S. Dietz, System-of-systems approach for interdependent critical infrastructures, Reliab. Eng. Syst. Saf., 96 (2011), pp. 679-686

[5] T. Forester, P. Morrison, Computer unreliability and social vulnerability, Futures, 22 (1990), pp. 462-474

[6] B. Martin, Technological vulnerability, Technology in Society, 18 (1996), pp. 511-523

[7] L. Robertson, From societal fragility to sustainable robustness: some tentative technology trajectories, Technol. Soc., 32 (2010), pp. 342-351

[8] M. Kress, Operational Logistics: the Art and Science of Sustaining Military Operations, 1-4020-7084-5, Kluwer Academic Publishers (2002)

[9] L. Li, Q.-S. Jia, H. Wang, R. Yuan, X. Guan, Enhancing the robustness and efficiency of scale-free network with limited link addition, KSII Trans. Internet Inf. Syst., 6 (2012), pp. 1333-1353

[10] A. Yazdani, P. Jeffrey, Resilience enhancing expansion strategies for water distribution systems: a network theory approach, Environ. Model. Softw., 26 (2011), pp. 1574-1582

[11] Y.Y. Haimes, P. Jiang, Leontief based model of risk in complex interconnected infrastructures, ASCE J. Infrastruct. Syst., 7 (2001), pp. 1-12

[12] Cyber-attack: How Easy Is it to Take Out a Smart City?, New Scientist, 4 August 2015, https://www.newscientist.com/article/dn27997-cyber-attack-how-easy-is-it-to-take-out-a-smart-city/ (retrieved 22 Mar 2017)

[13] Jeep Drivers Can Be HACKED to DEATH: All You Need Is the Car's IP Address, The Register, 21 Jul 2015, https://www.theregister.co.uk/2015/07/21/jeep_patch/ (retrieved 22 Mar 2017)

[14] J. Glenn (Sen.), Quotation from retirement speech (1997), http://www.historicwings.com/features98/mercury/seven-left-bottom.html (retrieved Feb 2014)

[15] C. Gómez, M. Buriticá, M. Sánchez-Silva, L. Dueñas-Osorio, Optimization-based decision-making for complex networks in disastrous events, Int. J. Risk Assess. Manag., 15 (5/6) (2011), pp. 417-436

[16] C. Perrow, Normal Accidents: Living with High-risk Technologies, Basic Books, New York (1984)

[17] P. Chopade, M. Bikdash, Critical infrastructure interdependency modelling: using graph models to assess the vulnerability of smart power grid and SCADA networks, 8th International Conference and Expo on Emerging Technologies for a Smarter World, CEWIT 2011 (2011)

[18] ISO GUIDE 73, Risk Management — Vocabulary, (2009)

[19] A. Gheorghe, D. Vamanu, Towards QVA — quantitative vulnerability assessment: a generic practical model, J. Risk Res., 7 (2004), pp. 613-628

[20] ISO/IEC 31010, Risk Management — Risk Assessment Techniques (2009)

[21] FMEA, MIL-STD-1629A, Failure Modes and Effects Analysis (1980)

[22] Y.Y. Haimes, On the definition of resilience in systems, Risk Anal., 29 (2009), pp. 498-501, https://doi.org/10.1111/j.1539-6924.2009.01216.x (see p. 498)

[23] K. Khatri, K. Vairavamoorthy, A new approach of risk analysis for complex infrastructure systems under future uncertainties: a case of urban water systems, in: Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences (2011), pp. 846-856

[24] Y. Shuai, X. Wang, L. Zhao, Research on measuring method of supply chain resilience based on biological cell elasticity theory, IEEE Int. Conf. Industrial Eng. Eng. Manag. (2011), pp. 264-268

[25] A. Munoz, M. Dunbar, On the quantification of operational supply chain resilience, Int. J. Prod. Res., 53 (22) (2015), pp. 6736-6751

[26] A.M. Law, Simulation Modelling and Analysis, McGraw-Hill, New York (2007)

[27] A.-L. Barabási, E. Bonabeau, Scale-free networks, Sci. Am., 288 (2003), pp. 60-69

[28] G. Tanaka, K. Morino, K. Aihara, Dynamical robustness in complex networks: the crucial role of low-degree nodes, Nat. Sci. Rep., 2/232 (2012)

[29] N. Slack, A. Brandon-Jones, R. Johnston, Operations Management, 7th ed. (2013), ISBN-13: 978-0273776208, ISBN-10: 0273776207

[30] ISO/IEC 9075–2, Information Technology – Database Languages – SQL, (2011)

[31] P. Suppes, Measurement theory and engineering, Dov M. Gabbay, Paul Thagard, John Woods (Eds.), Handbook of the Philosophy of Science, Philosophy of Technology and Engineering Sciences, vol. 9, Elsevier BV (2009)

[32] D. Hand, Measurement Theory and Practice: The World through Quantification, Wiley (2004), ISBN: 978-0-470-68567-9

[33] Ponemon Institute, 2013 Study on Data Center Outages (2013), http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/2013_emerson_data_center_outages_sl-24679.pdf (retrieved Dec 2016)

[34] J.G. Wacker, A definition of theory: research guidelines for different theory-building research methods in operations management, J. Oper. Manag., 16 (4) (1998), pp. 361-385

Vitae

L Robertson is a professional engineer with a range of interests, including researching the level and causes of vulnerability that common technologies incur for individual end-users.

Dr Katina Michael, SMIEEE, is a professor in the School of Computing and Information Technology at the University of Wollongong. She has a BIT (UTS), MTransCrimPrev (UOW), and a PhD (UOW). She previously worked for Nortel Networks as a senior network and business planner until December 2002. Katina is a senior member of the IEEE Society on the Social Implications of Technology where she has edited IEEE Technology and Society Magazine for the last 5+ years.

Albert Munoz is a Lecturer in the School of Management, Operations and Marketing, Faculty of Business, at the University of Wollongong. Albert holds a PhD in Supply Chain Management from the University of Wollongong. His research interests centre on experimentation with systems under uncertain conditions, typically using discrete event and system dynamics simulations of manufacturing systems and supply chains.

1 Abbreviation: Failure Modes and Effects Analysis, FMEA.

2 The terminology used is typical for Australasia: other locations may use different terminology (e.g. “gas” instead of “petrol”).

Keywords

Technological vulnerability, Exposure, Urban individual, Risk

Citation: Lindsay J. Robertson, Katina Michael, Albert Munoz, "Assessing technology system contributions to urban dweller vulnerabilities", Technology in Society, Vol. 50, August 2017, pp. 83-92, DOI: https://doi.org/10.1016/j.techsoc.2017.05.002