Assessing technology system contributions to urban dweller vulnerabilities

Lindsay J. Robertson+, Katina Michael+, Albert Munoz#

+ School of Computing and Information Technology, University of Wollongong, Northfields Ave, NSW 2522, Australia

# School of Management and Marketing, University of Wollongong, Northfields Ave, NSW 2522, Australia

Received 26 March 2017, Revised 16 May 2017, Accepted 18 May 2017, Available online 19 May 2017

https://doi.org/10.1016/j.techsoc.2017.05.002

Highlights

• Individual urban-dwellers have significant vulnerabilities to technological systems.

• The ‘exposure’ of a technological system can be derived from its configuration.

• Analysis of system ‘exposure’ allows valuable insights into vulnerability and its reduction.

Abstract

Urban dwellers are increasingly vulnerable to failures of technological systems that supply them with goods and services. Extant techniques for the analysis of those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of "exposure", which is shown to represent the vulnerability contributed by the technological system to the end-user. The measure thus contributes to the theory and practice of vulnerability reduction. The results support both case-specific and more general conclusions.

1. Introduction

1.1. The scope and nature of user vulnerability to technological systems

Today's urban-dwelling individuals are end-users who increasingly depend upon the supply of goods and services produced by technological systems. These systems are typically complex [1–4], and as cities and populations grow, the demands placed on these systems lead to redesigns and increases in complexity. End-users often have no alternative means of acquiring essential goods and services, and thus a failure in a technological system has implications for the individual that are disproportionately large compared with the implications for the system operator/owner. End-users may also lack awareness of the technological systems that deliver these goods and services, including their complexity and fragility, yet may be expected to be concerned for their own security. This dependence on technology justifies the observed concern about the vulnerability that the systems providing goods and services impose upon the users who rely on them.

Researchers [5–7], alongside the tradition of military strategists [8], have presented a socio-technical perspective on individual vulnerability, drawing attention to the complexity of the technological systems tasked with the provision of essential goods and services. Meanwhile, other researchers have noted the difficulties of detailed performance modelling of such systems [9–11].

The vulnerability of an urban dweller has also been a common topic within the popular press, for example "Cyber-attack: How easy is it to take out a smart city?" [12], which speculated on how phenomena such as the "Internet of Things" affect the vulnerability of connected systems. Other popular press topics have included the possibility that motor vehicle systems are vulnerable to "hacking" [13].

There is, furthermore, widespread recognition that systems involving many 'things that can go wrong' are fragile. Former astronaut and United States Senator John Glenn mentioned in his 1997 retirement speech [14]: "… the question I'm asked the most often is: 'When you were sitting in that capsule listening to the count-down, how did you feel?' 'Well, the answer to that one is easy. I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of two million parts - all built by the lowest bidder on a government contract' …" His concern was justified, and most would appreciate that similar concerns apply to more mundane situations than the Mercury mission.

National infrastructure systems are typically a major subset of the technological systems that deliver goods and services to individual end-users. Infrastructure systems are commonly considered to be inherently valuable to socio-economic development, with the maintenance of security and functionality often emphasized by authors such as Gómez et al. (2011) [15]. We argue that infrastructural systems actually have no intrinsic value to the end-user, and are only valuable until another option can supply the goods and services to the user with lower vulnerability, higher reliability or both. If a house-scale sewage treatment technology were economical, adequately efficient and reliable, then reticulated, centralised sewage systems would have no value. We would also argue that the study of the complete technological systems responsible for delivery of goods or services to an end-user is distinguishable from the study of infrastructural systems.

For the urban apartment dweller, significant and immediate changes to lifestyle quality would occur if any of a list of services became unavailable. To name a few, these services include those that allow the flow of work information, financial transactions, availability of potable water, fuel/power for lighting, heating, cooking and refrigeration, sewage disposal, perishable foods and general transport capabilities. Each of these essential services is supplied by technological systems of significant complexity and faces an undefined range of possible hazards. This paper explores the basis for assessing the extent to which such technological systems contribute to a user's vulnerability.

Perrow [16] asserts that complexity, interconnection and possibility of major harm make catastrophe inevitable. While Perrow's assertion may have intuitive appeal, there is a need for a quantitative approach to the assessment of vulnerability. Journals (e.g. International Journal of Emergency Management, Disasters) are devoted to analysing and mitigating individuals' vulnerabilities to natural disasters. While there is an overlap of topic fields, disaster scenarios characteristically assume geographically-constrained, simultaneous disruption of a multitude of services and also implicitly assume that the geographically unaffected regions can and will supply essential needs during reconstruction. This research does not consider the effects of natural disasters, but rather the potential for component or subsystem disruptions to affect the technological system's ability to deliver goods and services to the end-user.

Some technological systems, such as communications or water distribution systems, transmit the relevant goods and services via "nodes" that serve only to aggregate or distribute the input goods and services. Such systems can be characterised as "homogeneous", and are thus distinguished from systems that progressively create as well as transmit goods, and therefore require a combination of processes, input- and intermediate-streams and services. The latter type of system is categorized as heterogeneous, and such heterogeneity must be accommodated in an analysis measure.

1.2. Quantification as prelude to change

We propose and justify a quantification of a technological system's contribution to the vulnerability of an urban-dwelling end-user who is dependent upon its outputs. The proposed approach can be applied to arbitrary heterogeneous technological systems and can be shown to be a valid measure of an attribute that had previously been only intuitively appreciated. Representative examples are used to illustrate the theory, and preliminary results from these examples illustrate some generalised concerns and approaches to decreasing the urban-dwelling end-user's "exposure" to technological systems. The investigation of a system's exposure will allow a user to make an informed assessment of their own vulnerability, and to reduce their exposure by making changes to system aspects within their control. Quantifying a system's exposure will also allow a system's owner to identify weaknesses, and to assess the effect of hypothetical changes.

2. Quantification of an individual's “exposure”: development of theory

2.1. Background to theory

Consider an individual's level of "exposure" in two scenarios: a first scenario in which a specific service can only be supplied to an end-user by a single process, which is dependent on another process, which is in turn dependent on a third process; and a second scenario in which the same service can be offered to the user by any one of three identical processes with no common dependencies. Any end-user of the service could reasonably be expected to feel more "exposed" under the first scenario than under the second. Service delivery systems are likely to involve complex processes that include at least some design redundancies, but also single points of failure, and cases where two or more independent failures would deny the supply of the service. For such systems, the "exposure" of the end-user may not be obvious, but it would be useful to be able to distinguish quantitatively among alternative configurations.
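As a minimal worked illustration (anticipating the counting rules developed in Section 2.2 below), the first scenario is a chain of three single points of failure and would be characterised by the composite metric {E1, E2, E3} = {3, 0, 0}, whereas the second scenario fails only if all three independent processes fail, giving {0, 0, 1}: the larger E1 value signals the more "exposed" end-user.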

The literature acknowledges the importance of a technological configuration's contribution to end-user vulnerability [6], yet such studies do not quantitatively assess the significance of the system configuration. Reported approaches to vulnerability evaluation can be broadly categorized according to whether they consider homogeneous or heterogeneous systems, whether they assume a static or a dynamic system response, and whether the system configuration is or is not used as the basis for the development of metrics. The published literature on risk analysis (including interconnected system risks), resilience analysis, and modelling all have a bearing on the topic, and these are briefly summarised below:

Risk analysis may be applied to heterogeneous or homogeneous systems; it does not examine dynamic system responses, and limits analysis to a qualitative assessment of the effect of brainstormed hazards. Classical risk analysis [17–20] requires an initial description of the system under review; however, practitioners commonly generate descriptions that lack specific system configuration detail. While many variations are possible, it is common for an expert group to carry out the risk analysis by listing all identified hazards and associated harms. Experts then categorise identified harms by severity, and hazards according to the vulnerability of the system and the probability of hazard occurrence. Risk events are then classified by severity (based on harm magnitude) and hazard probability. Undertaking a risk analysis is valuable, yet without a detailed system definition to which the assessments of hazard and probability are applied, probability-of-occurrence evaluations may be inaccurate or fail to identify guided hazards, and the analysis may fail to identify specific weaknesses. Another issue arises if the exercise fails to account for changes to instantaneous system states: if the system is close to design capacity when a hazard occurs, the probability of the hazard causing harm is higher than if the hazard occurred at a time when the system is operating at lower capacity. Finally, the use of categories that correlate harm and hazard to generate a risk evaluation is inherently coarse-grained, meaning that changes to system configuration or components may or may not trigger a change to the category that is assigned to the risk.

Another analysis approach is that of “Failure Modes and Effects Analysis” (FMEA) [21], which examines fail or go conditions of each component within a system, ultimately producing a tabulated representation of the permutations and combinations of input “fail” criteria that cause system failure. FMEA is generally used to demonstrate that a design-redundancy requirement of a tightly-defined system is met.

'Resilience' has been the topic of significant research, much of which is dedicated to the characterization of the concept and to definitional consensus. One representative definition [22] is "… the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time …". This definition is interpreted [23–25] quantitatively as a time-domain variable measuring one or more characteristics of the transient response. For complex systems, derivation of the time-domain response to a specific input disruption can be expected to be difficult, and such a derivation will only be valid for one particular initial operational state and disruption event. Each possible system input and initial condition would generate a new time-domain response, and so a virtually infinite number of transient responses would be required to fully characterize the 'resilience' of a single technological system. All such approaches implicitly assume that the disturbance is below an assumed 'maximum tolerable' level, so that the technological system's response will be a continuous function, i.e. the system will not actually fail. A further methodological issue is that evaluations of this kind are post-hoc observations, where feedback from event occurrences leads to design changes. Thus, an implicit assumption exists that the intention of resilient design is to minimise the disturbance to an ongoing provision of goods and services, rather than to prevent output failure. Resilience analysis examines each permutation and combination of inputs, constraining its scope to failures. Because the method considers system responses to external stimuli, it requires a detailed knowledge of the system configuration, but is only practical for relatively simple cases (the difficulty of modelling large systems has been noted by others [9]).

A third approach constructs a model of the target system in order to infer real-world behaviour. The model, as a simplified version of the real-world system, is constructed for the purposes of experimentation or analysis [26]. Applied to the context of end-user vulnerability, published simplifications of communication systems, power systems and water distribution systems commonly assume system homogeneity. For example, a graph theory approach will consider the conveyance of goods and services as a single entity transmitted across a mesh of edges and vertices that each serve to either disperse or distribute the product. Once a distribution network is represented as a graph, it is possible to mathematically describe the interconnection between specified vertices [10], and to draw conclusions [27] regarding the robustness of the network. Tanaka and colleagues [28] noted that homogeneous networks can be represented using graph theory notation, making graph-theoretic analyses possible. Common graph theory metrics consider the connections of each edge and do not consider the possibility that an edge could carry a different service from another edge. Because graph theory metrics assume a homogeneous system, they cannot be applied directly to heterogeneous systems in which interconnections do not always carry the same goods or services.

2.2. Exposure of a technological system

In order to obtain a value for a technological system's contribution to end-user vulnerability that enables comparisons among system configurations, a quantitative analytical technique is needed. To achieve this, four essential principles are proposed to allow and justify the development of a metric that evaluates the contribution of a heterogeneous technological system to the vulnerability of an individual. These principles are:

(1) Application to individual end-user: an infrastructural system may be quite large and complex. Haimes and Jiang [11] considered complex interactions between different infrastructural systems by assigning values to degrees of interaction: the model allows mathematical exploration of failure effects but (as is acknowledged by these authors) depends on interaction metrics that are difficult to establish. This paper presents an approach that is focussed on a representative single end-user. When an individual user is considered, not only is the performance of the supply system readily defined, but the relevant system description is more hierarchical and less amorphous. Our initial work has also suggested that if failures requiring more than three simultaneous and unrelated hazards are excluded from consideration, then careful modelling can generate a defensible model without feedback loops.

(2) Service level: it is possible not only to describe goods or services that are delivered to the individual (end-user), but also to define a service level at which the specified goods or services either are, or are not, delivered. From a definitional standpoint, this approach allows the output of a technological system to be expressed as a Boolean variable (True/False), and allows the effect of the configuration of a technological system to be measured against a single performance criterion. For some goods/services, additional insights may be possible from separate analyses at different service levels (e.g. water supply analyzed at "normal flow" and at "intermittent trickle"); however for other goods/services (e.g. power supply) a single service level (power on/off) is quite reasonable.

(3) Hazard and weakness link: events external to a technological system only threaten the output of that system if the external events align with a weakness in the system. If a hazard does not align with a weakness then it has no significance. Conversely, if a weakness exists within a technological system and has not been identified, then hazards that can align with that weakness are also unlikely to be recognised. If the configuration of a particular technological system is changed, some weaknesses may be removed while other weaknesses may be added. For each weakness added, an additional set of external events can be newly identified as hazards, and correspondingly for each weakness that is removed, the associated hazards cease to be significant. Processes capable of failure, and input streams that could become unavailable, are weaknesses whose significance does not depend on the number or type of hazards of sufficient magnitude to cause failure that might align with any specific example of such a weakness.

(4) Hazard probability: some hazards (e.g. extreme weather events) occur randomly, can be assessed statistically, and will have a higher probability of occurrence over a long time period. Terrorist actions or sabotage, in particular, do not occur randomly but must be considered as intelligently (mis)guided hazards. The effect of a guided hazard upon a risk assessment is qualitatively different from the effect of a random hazard: the guided hazard will occur every time the perpetrator elects to cause it, and therefore has a probability of 1.0. It is proposed that the significance of this distinction has not been fully appreciated. A malicious entity will seek out weaknesses, regardless of whether these have been identified by a risk assessment exercise or not. Since random and guided hazards have an equal effect, and both have a probability approaching 1 over a long time period, we argue that a risk assessment based upon the 'probability' (risk) of a hazard occurring is a concept with limited usefulness, and that vulnerability is more validly assessed by assuming that all hazards (terrorist action, component failure or random natural event) will occur sooner or later, hence having a collective probability of 1.0. As soon as the assumption is made that sooner or later a hazard will occur, assessment of the technological system's contribution to user vulnerability can be refocussed from consideration of hazard probability to consideration of the number and type of weaknesses with which (inevitable) hazards can align.

A heterogeneous technological system may involve an arbitrary number of linked operations, each of which (consistent with the definition stated by Slack et al. [29]) requires inputs, executes some transformation process, and produces an output that is received by a subsequent process and ultimately serves an end-user. If the output of such a system is considered to be the delivery or non-delivery of a nominated service-level output to an individual end-user, then the arbitrary heterogeneous technological system can be described by a configured system of notional AND/OR/NOT functions [30] whose inputs/outputs include unit-operations, input streams, intermediate product streams and services. For example, petrol is dispensed from a petrol station bowser to a car if fuel is present in the bulk tank, power to the pump is available, the pipework and the pump are operational, and the required control signal is valid. Hence, a notional "AND" gate with these 5 inputs will model the operation of the dispensing system. The valid control signal will be generated when another set of different inputs is present, and the provision of this signal can be modelled by a further notional "AND" function with nominated inputs. The approach allows the operational configuration of a heterogeneous technological system to be represented by a Boolean algebraic expression. Fig. 1 illustrates the use of Boolean operations to represent a somewhat more complex technological system.

Fig. 1. Process and stream operations required for system: Boolean representation.
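As a minimal sketch of this Boolean representation (in Python, with illustrative input names that are assumptions rather than the authors' model), the dispensing example can be written as nested notional "AND" functions:

```python
# Minimal sketch of the petrol-dispensing example as notional Boolean "AND" functions.
# Input names are illustrative assumptions, not taken from the source system model.

def control_signal_valid(state):
    # the valid control signal is itself the output of another notional "AND"
    return state["ctrl_wiring"] and state["ctrl_logic"]

def petrol_dispensed(state):
    # petrol is dispensed only if every required input is available
    return (state["bulk_fuel"] and state["pump_power"]
            and state["pipework"] and state["pump"]
            and control_signal_valid(state))

# Example: a failed pump denies the service.
state = dict(bulk_fuel=True, pump_power=True, pipework=True,
             pump=False, ctrl_wiring=True, ctrl_logic=True)
print(petrol_dispensed(state))  # False
```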

 

Having represented a specific technological system using a Boolean algebraic expression, a 'truth table' can be constructed to display all permutations of process and stream availabilities as inputs, and the technological system output as a single True or False value. From the truth table, the cases in which a single input failure will cause output failure are counted, and that total is assigned to the variable "E1". The cases where two input failures (exclusive of inputs whose failure will alone cause output failure) cause output failure are counted, and that total is assigned to E2. A further count is made of the cases in which three input failures cause output failure (and where neither single nor double input failures within that triple combination would alone cause output failure), and that total is assigned to the variable "E3"; further "E" values are derived similarly. A simple algorithm can generate all permutations of "operate or fail" for every input process and stream. If a "1" is taken to mean "operate" and a "0" to mean "fail", then a model with n inputs (streams and processes) has 2^n input combinations. If the Boolean expression is evaluated for each binary combination of input states (processes and streams), and the input conditions for each output-fail combination are recorded, then the E1 etc. values can be computed (E1 being the number of output-failure conditions in which only a single input has failed). A truth-table approach to generating exposure metrics is illustrated in Fig. 2.

Fig. 2. Evaluation of exposure, by analysis of Boolean expression.

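As an illustration of how such counts can be generated programmatically, the following sketch (Python; a hypothetical illustration rather than the authors' implementation) enumerates failure combinations up to a nominated level instead of tabulating all 2^n truth-table rows; the resulting counts are the same, because any combination containing a smaller failing combination is excluded by the definitions above.

```python
from itertools import combinations

def exposure(system, inputs, max_level=3):
    """Count E1..E_max_level for a Boolean system model.

    system : callable taking {input_name: bool} (True = operate, False = fail)
             and returning True if the service is delivered.
    inputs : list of input names (processes and streams).
    """
    minimal_cuts = []               # minimal failed-input sets that deny the output
    counts = [0] * max_level        # counts[k-1] will hold E_k

    def output_fails(failed):
        state = {name: (name not in failed) for name in inputs}
        return not system(state)

    for k in range(1, max_level + 1):
        for combo in combinations(inputs, k):
            failed = set(combo)
            # exclude combinations that already contain a smaller failing set
            if any(cut <= failed for cut in minimal_cuts):
                continue
            if output_fails(failed):
                minimal_cuts.append(failed)
                counts[k - 1] += 1
    return counts

# The two scenarios of Section 2.1:
names = ["p1", "p2", "p3"]
print(exposure(lambda s: s["p1"] and s["p2"] and s["p3"], names))  # [3, 0, 0]
print(exposure(lambda s: s["p1"] or s["p2"] or s["p3"], names))    # [0, 0, 1]
```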

The composite metric {E1, E2, E3 … En} is therefore mapped from the Boolean representation of the heterogeneous system, and characterizes the weaknesses of that system, i.e. the contribution of the technological configuration to end-user vulnerability. Indeed, for a given single output at a defined service level (described by a Boolean value representing "available" or "not available") it is possible to isomorphically map an arbitrary technological system onto a Boolean algebraic expression. Thus, it is possible to create a homomorphic mapping (consistent with the guidance of Suppes [31]) to a composite metric that characterizes the weaknesses of the system. Furthermore, the metric allows comparison of the exposure level of alternative technological systems and configurations.

Next, we consider whether the measure represents the proposed attribute, by considering validity criteria. Hand [32] states that construct validity “involves the internal structure of the measure and also its expected relationship with other, external measures …” and “… thus refers to the theoretical construction of the test: it very clearly mixes the measurement procedure with the concept definition”. Since the Boolean algebraic expression represents all processes, streams and interactions, it can be directly mapped to a Process Flow Diagram (PFD) and so is an isomorphic mapping of the technological system with respect to processes and streams. The truth table is a homomorphic mapping of output conditions and input combinations, with output values unambiguously derived from the input values, but the configuration cannot be unambiguously derived from the output values. The {E1, E2, E3 … En} values are therefore a direct mapping of the system configuration.

Since the configuration and components of the system are represented by a Boolean expression, and the exposure metric {E1, E2, E3 … En} is assembled directly from the representation of the technological system, it has sufficient “construct validity” in the terms proposed by Hand [32]. The representational validity of this metric to the phenomenon of interest (viz. contribution to individual end-user vulnerability) must still be considered [31,32], and two justifications are proposed. Firstly, the representation of “exposure” using {E1, E2, E3 … En} supports the common system engineering “N+1”, “N+2” design redundancy concepts [33]. Secondly, the cost of achieving a given level of design redundancy can be assumed to be related to “E” values and so enumerating these will support decisions on value propositions of alternative projects, a previously-identified criterion for a valid metric.

Generating an accurate exposure metric as described, requires identification of processes and streams, which in practice requires a consideration of representation granularity. If every transistor in a processor chip were considered as a potential cause of failure, the “exposure” value calculated for the computer would be exceedingly high. If by contrast, the computer were considered as a complete, replaceable unit, then it would be assigned an exposure value of 1. A pragmatic definition of granularity will address this issue: if some sub-system of interest is potentially replaceable as a unit, and can be attacked separately from other sub-systems, then the sub-system of interest should be considered as a single potential source of failure. This definition allows adequate precision and reproducibility by different practitioners.

Each input to an operation within a technological system will commonly be the output of another technological system, which will itself have a characteristic "exposure". The contribution of the predecessor system's exposure to the successor system must be calculated. This problem is generalised by considering that each input to a Boolean 'AND' or 'OR' operation has a composite exposure metric, and developing the principles by which the operation's output exposure can be calculated from these inputs. Consider, for example, an AND operation with three inputs (A, B and C) having composite exposure metrics {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }. The contributory exposure components are added component-wise, hence the resulting exposure of the AND operation is {(A1+B1+C1), (A2+B2+C2), (A3+B3+C3) … (An+Bn+Cn)}. The generalised calculation of contributory exposure is more complex for the OR operation. For an OR operation with three inputs (A, B and C), each of which has composite exposure metric {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }:

• The output E1 value is 0

• The output E2 value is 0

• The output E3 value is derived from the terms ((A1−1)+(B1−1)+(C1−1)), ((A2−1)+(B2−1)+(C2−1)) and ((A3−1)+(B3−1)+(C3−1)), since one fail from each input must occur for the output to fail; each remaining combination of fails then contributes to the E3 value

• The E4 and subsequent values are calculated in exactly the same way as the E3 value.

Since the contributory system has effectively added streams and processes, the length of the output exposure vector is increased when the contributory system is considered. The proposed approach is therefore to nominate a level to which exposure values will be evaluated. If, for example, this level is set at 2, then the representation would be considered to be complete when it could be shown that no contributory system adds to the E2 values of the represented system.
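A brief sketch of the component-wise AND-composition rule and of truncation to a nominated evaluation level is given below (Python; illustrative only, and covering only the AND rule, since the OR composition described above is handled differently). The example vectors are made-up values, not drawn from the case studies.

```python
def and_compose(*exposures, level=3):
    """Exposure of an AND operation: component-wise sum of the input exposure
    vectors, truncated/padded to the nominated evaluation level."""
    padded = [list(e[:level]) + [0] * (level - len(e[:level])) for e in exposures]
    return [sum(column) for column in zip(*padded)]

# Illustrative (made-up) contributory exposures feeding an AND operation:
print(and_compose([2, 1, 0], [1, 3, 2], [0, 0, 1]))  # -> [3, 4, 3]
```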

3. Implications from theory

The current levels of expenditure on infrastructure "hardening" are well reported in the popular press. The theory presented is proposed to be capable, for a defined technological system, of quantitatively comparing the effectiveness of alternative projects. The described measure is also proposed to be capable of differentiating between systems that have higher or lower exposure, thus allowing prioritisation of effort. The following examples have undergone a preliminary analysis. The numerical outputs are dependent on system details and boundaries; nevertheless, the authors' preliminary results indicate the output that is anticipated, and are considered to demonstrate the value of the principles. The example studies are diverse and examine well-defined services and service levels for the benefit of a representative individual end-user. Each example involves a technological system (with a range of processes, intermediate streams, and functionalities) and may therefore be expected to include a number of cases in which a single stream/process failure will cause the service delivery to fail, and a number of other cases in which multiple failures would result in the non-delivery of the defined service. The analyses also collectively identify common contributors, technological gaps, and common principles that inform improvement decisions. In each example case, the delivered service and service level are established, following which the example is described and the boundaries confirmed. The single-cause-of-failure items (contributors to the E1 value) are assessed first, followed by the dual-combination causes of failure (contributors to the E2 value) and then the E3 values. It is assumed that neither maintenance nor replacement of components is required within the timeframe considered, i.e. outputs will be generated as long as their designed inputs are present and the processes are functional.

3.1. Example 1: Petrol station, supply of fuel to a customer

The "service" in this case is the delivery, within standard fuel specifications (including absence of contaminants) and at standard flow rate, of petrol into a motor vehicle at the forecourt of a petrol station. The scope includes the operation of the forecourt pumps, the underground fuel storage tanks, metering and transactional services. Although storage is significant, the refilling of the underground tanks from fuel held in national reserves can only be accomplished by a limited number of approaches, must occur frequently relative to the considered timeframe, and so must be considered. Since many sources supply the bulk collection depot, the analysis will not go further back than the bulk storage depot. Similarly, the station is unlikely to have duplicated power feeders from the nearest substation, and so this supply must be considered. The financial transaction system, and the communications system it uses, must be included in the consideration.

On the assumption that the station is staffed, sanitary facilities (sewage, water) are also required (see Fig. 3). While completely automated “truck stop” fuel facilities exist, facilities as described are common and can reasonably be called representative. The fuel dispensing systems in both cases are almost identical, however the automated facilities cannot normally carry out cash transactions, and the manned stations commonly sell other goods (food and drink) and may provide toilet facilities.

Fig. 3. Operation of petrol station.

In work not reported here, the exposure metrics of the contributory systems have been estimated as EFTPOS financial transaction {49, 12, 1}, staff facilities system {36, 4, 6}, power {2, 3, 0}. Based on these figures, the total exposure metric for the petrol delivered to the end user is estimated at {92, 20, 8}.
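As an arithmetic illustration of the AND-composition rule of Section 2.2: if the forecourt-local processes and streams (pumps, pipework, local power connection, bulk tank and so on) are assumed to contribute {5, 1, 1} - an illustrative figure, not a value reported by the authors' analysis - then component-wise addition of the contributory exposures gives {49+36+2+5, 12+4+3+1, 1+6+0+1} = {92, 20, 8}, consistent with the total quoted above.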

In evaluating the exposure metric, the motor/petrol-pump and pipework connections do not generate E3 values (more than three failures would be required to cause a failure of the output function) since the petrol station has four pumps. The petrol station power supply affects several plant items that are local to the petrol station, and so is represented at the level which allows a valid assessment of its exposure contribution. For bulk petrol supply, numerous road system paths exist, and many tankers and drivers are capable of bringing fuel from the refinery to the station, so these do not affect the E3 values. The electricity distribution system has more than three feed-in power stations and is assumed to have at least three High Voltage (HV) lines to major substations; however local substations commonly have only two voltage-breakdown transformers, and a single supply line to the petrol station would be common. The local substation and feeders are assumed to be different for the sewage treatment plant, the bulk petrol synthesis and the banking system clearing-house (and are accounted for in the exposure metrics for those systems), but the common HV power system (national grid) does not contribute to the E3 values, and so is not double-counted in assessing the power supply exposure of the petrol station and contributory systems. While EFTPOS and sewage systems have had high availability, this analysis emphasizes the large contribution they make to the user's total exposure, and thus suggests options for investigation.

This example illustrates the significance of the initial determination of system boundaries. The output in this example is defined as fuel supplied to a user's vehicle at a representative petrol station. Other studies might consider a user seeking petrol within a greater geographic region (e.g. a neighbourhood). In that case the boundaries would be determined to consider alternative petrol stations, the subsystems that are common to local petrol stations (power, sewage, financial transactions, bulk fuel), and subsystems (e.g. local pumps) for which design redundancy is achieved by access to multiple stations.

3.2. Example 2: Sewage system services for apartment-dweller

Consider the service of the sanitary removal of human waste, as required, via the lavatory installed in an urban apartment discharging to a wastewater treatment plant. The products of the treatment operation are environmentally acceptable treated water discharged to waterways, and solid waste meeting environmentally acceptable specifications sent to landfill.

The technology description assumes that the individual user lives in an urban area with a population of 200,000 to 500,000. This size is selected because it is representative of a large number of cities. An informal survey of the configuration of sewage systems used by cities within this size range, reveals a level of uniformity, and hence the configuration in the example is considered “representative”.

The service is required as needed; it commences from the water-flushed lavatory and ends with the disposal of treated material. Electric power supplies to pumping stations and to local water pumps are unlikely to have multiple feeders and will be considered back to the nearest substation. The substation can be expected to have multiple feeders, and so it is not considered necessary to consider the electric power supply further "back". Significant pumping stations would commonly have a "Local/Manual" alternative control capability, in which remote control is normal but an operator can select "manual" at the pump station and thereafter operate the pumps and valves locally.

Operationally, the cistern will flush if town water is available and the lift pump is operational and power is available to the lift pump. The lavatory will flush if the cistern is full. The waste will be discharged to the first pumping station if the pipework is intact (gravity fed). The first pumping station will operate if either the main or the backup pump is operational and power is available and either an operator is available or the control signal is present. The control signal will be available if the signal lines are operational and the signal equipment is operational and power for servo motors is available and a remote operator or sensor is operational. The waste will be delivered to the treatment station if the major supply pipework is operational. The treatment station will be operational (i.e. able to separate and coalesce the sewage into a solid phase suitable for landfill, and an environmentally benign liquid that can be discharged to sea or river) if the sedimentation tank and discharge system are operable and the biofilm contactor is operational and the clarifier/aerator is operational and the clarifier sludge removal system is operational and power supply is available and operators are available. The sludge can be removed if roads are operational and a driver and truck and fuel are available.
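A partial sketch of this chain is given below (Python; the input names and truncated scope are illustrative assumptions, not the authors' full model). It shows how the OR-redundancies (duplicate pumps, and operator-or-remote control) move weaknesses out of E1 and into E2:

```python
# Truncated, illustrative model of the sewage chain described above.

def first_pump_station(s):
    pumping = s["main_pump"] or s["backup_pump"]        # duplicated pumps
    control = s["operator"] or s["control_signal"]      # local/manual fallback
    return pumping and s["station_power"] and control

def sewage_service(s):
    flush = s["town_water"] and s["lift_pump"] and s["lift_pump_power"]
    return flush and s["gravity_pipework"] and first_pump_station(s)

# Feeding this model (nine inputs) to the exposure() sketch of Section 2.2 gives
# E1 = 5 (town water, lift pump, lift-pump power, gravity pipework, station power)
# and E2 = 2 (the pump pair and the control pair) for this truncated scope.
```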

Several single sources of failure (contributors to the E1 value) can be discerned: the power supply to the local water pump, the gravity-fed pipework from the lavatory to the first pumping station, and the power supply to the first pumping station. If manual control of the pumping station is possible, then the control system contributes to the E2 value; otherwise the control system wiring and logic will contribute to the E1 value. Assuming duplicate pumping station pumps, these contribute to the E2 value. Few of the treatment plant processes will be duplicated, and so these will contribute to the E1 values. For the urban population under consideration, the treatment plant is unlikely to have dual power feeds, and so its power supply contributes to the E1 value.

For real treatment plants, most processes will include bypasses or overflow provisions. If the service is interpreted to specify the discharge of environmentally acceptable waste, then these bypasses are irrelevant to this study. However, if the "service" were defined with relaxed environmental impact requirements, then the availability of bypasses would mean that treatment plant processes would contribute to E2 values. Common reports of untreated sewage discharge following heavy rainfall events indicate that the resilience of treatment plants is low.

The numerical values of exposure presented in Table 1 are based on a representative design; design detail may vary for specific systems.

Table 1. Contributions to exposure of sewage system.

Preliminary research included the commissioning of a risk analysis, by a professional practitioner, of a sewage system defined to the same scope, components and configuration as the target system. While the risk analysis generated useful results, it failed to identify all of the weaknesses that were identified by the exposure analysis.

3.3. Example 3: Supply of common perishable food to apartment-dweller

For the third example, the supply of a perishable food will be considered. The availability to a (representative) individual consumer of whole milk with acceptable bacteriological, nutritional and organoleptic properties will be taken to represent the "service" and associated service level. Fresh whole milk is selected because it is a staple food and requires a technological system that is similar to that required by other minimally processed fresh foods. Nevertheless, the technological system is not trivial: the pasteurisation step requires close control if bacteriological safety is to be obtained without deterioration of the nutritional and taste qualities.

At the delivery point, a working refrigerator and electric power are required. Transit to the apartment requires electric power to operate the elevator. Transport from the retail outlet requires fuel, a driver, an operational vehicle, and roads. The retail outlet requires staff, electric power, a functional sewage system, a functional water supply, functional lighting, communications and stocktaking systems, and access to financial transaction capability. Transport to the retail outlet requires fuel, drivers, roadways and operational trucks. The processing and packaging system requires pasteurisation equipment, a control system, a Cleaning In Place (CIP) system, CIP chemical supply, the pipework-and-valve changeover system for CIP, electric-powered pumps, electrically operated air compressors, fuel and fired hot water heaters, packaging equipment, packaging material supplies, water supply, waste water disposal, a sewage system and skilled operators. Neither the processing facilities, nor the retail outlet, nor the apartment would commonly have duplicated electric power feeders, and so these and the associated protection systems must be considered back to the nearest substation. The substation can be assumed to have multiple input feeders, and so the electric power system need not be considered any further upstream of the substation. The heating system (hot water boiler and hot water circulation system) for the pasteurizer would commonly be fired with fuel oil, but with enough storage that it would commonly be supplied directly from a fuel wholesaler.

The processes leading from raw milk collection up to the point where pasteurised and chilled milk is packaged are defined in considerable detail by regulatory bodies, and can therefore be considered to be representative. There will be variation in packaging equipment types; however each of these can be considered as a single process and single "weakness". Distribution to supermarkets, retail sales and distribution to individual dwellings are similar across most of the western world and are considered to be adequately representative. Both the processing plant and the retail outlet are staffed, and so require an operational sewage disposal system and water supply.

Delivery of the milk will be achieved if the refrigeration unit in the apartment is operational and power is supplied to it and packaged product is supplied. Packaged product can be supplied if apartment elevator and power are available and individual transport from the retail outlet is functional and retailing facilities exist and are supplied with power and are staffed and have functional financial transaction systems. The retail facility can be staffed if skilled persons are available and transport allows them to access the facility and staff facilities (water, sewage systems) are operational. The sewage system is functional if water supply is available and pumping station is functional and is supplied with power and control systems are available. The bulk consumer packs are delivered to the retail outlet if road systems and drivers and vehicles and fuel is available. The packaged product is available from the processing facility if fuel is available to heat the pasteurizer and power is available to operate pumps and control system is operable and skilled operators are available and homogeniser is operational and compressed air for packaging equipment is available and packaging equipment is operational. Product can be made if CIP chemicals are available and CIP waste disposal is operational. Since very many suppliers and transport options are capable of supplying raw milk to the processing depot, the study need not consider functions/processes that are further upstream to the processing depot, i.e. the on-farm processes or the raw milk delivery to the processing depot.

Several single sources of failure (contributors to the E1 value) can be identified: the power supply to the refrigerator (and cabling to the closest substation); roads, fuel, vehicle and driver; and the staff and power supply (and cabling to the substation) at the retail outlet. Staff facilities (and hence the exposure contributions from the sewage system) must be considered. Provided the retail outlet is able to accept cash or paper credit notes, the payment system contributes to the E2 value; however, if the retail outlet is constrained to electronic payments then many processes associated with the communications and banking systems will contribute to the E1 value. Roads, and fuel for bulk distribution, will contribute to the E1 value, whereas drivers and trucks contribute to higher E values, since many drivers and trucks can be assumed to be available. The power supply to the processing and packing facility will contribute to the E1 value. The tight specifications and regulatory standards for consumer-quality milk will generally not allow any bypassing of processes, and so each of the major processes (reception, pasteurisation, standardisation, homogenisation and packaging) will contribute to the E1 values. The milk processing and packaging facility will also need fuel for the Cleaning in Place (CIP) system and will need staff facilities, and hence the exposure contribution of the sewage system must again be considered.

The examples demonstrate three commonalities. Firstly, it is both practical and informative to evaluate contributors to the E1, E2 etc. variables for a broad range of cases and technologies. Secondly, sources of vulnerability that are specific to the examples can be identified, and thirdly, principles for reduction of vulnerability can be readily articulated. In the petrol station supply example, eliminating the operator (and with it the needs for sewage and water) would yield a greater reduction in exposure than retaining the operator and allowing cash transactions.

Some common themes can also be inferred from the examples, suggesting general principles for exposure reduction that are likely to be applicable to other cases. The example studies contain intermediate streams: if such intermediate streams have specifications that are publicly available (open-source), there is increased opportunity for service substitution from multiple sources, and a reduction in the associated exposure values. Proprietary software and data storage are a prominent example of a lack of standardisation despite the availability of such approaches as XML.

Currently, electronic financial transactions require secure communications between an EFTPOS terminal and the host system, and the verification and execution of the transaction between the vendor's bank account and the purchaser's bank account. These processes are necessarily somewhat opaque. Completely decentralised transactions are possible as long as both the seller and the purchaser agree upon a medium of exchange; the implications of either accepting or not accepting a proposed medium of exchange are profound for the "exposure" created. Although the internet was designed to be fault tolerant, its design requires network controllers to determine the path taken by data, and in practice this has resulted in huge proportions of traffic being routed through a small number of high-bandwidth channels. This is a significant issue: if the total intercontinental internet traffic (including streamed movies, person-to-person video chats, website service and also EFTPOS financial transaction data) had to be rerouted through a low-bandwidth channel, the financial transaction system would probably fail.

Technological systems such as the generation of power using nuclear energy are currently only economic at very large scale, and hence represent dependencies for a large number of systems and users. Conversely, a system generating power using photovoltaic cells, or using external combustion (tolerant of wide variations in fuel specification) based on microalgae grown at village level, would probably be inherently less "exposed". In every case where a single-purpose process forms an essential part of a larger process, it represents a source of weakness. By contrast, any "general purpose" or "re-purpose-able" component can, by definition, contribute to decreasing exposure. Human beings' ability to apply "common sense" and their unlimited capacity for re-purposing make them the epitome of multi-purpose processors. The capability to incorporate humans into a technological system is possibly the single most effective way to reduce "exposure". The "capability to incorporate" may require some changes that do not inherently affect the operation of a system but merely add a capability, such as the inclusion of a hand-crank capability in a petrol pump.

The examples (fuel supply, sewage disposal, perishable food) also uncover the existence of technological gaps where a solution would decrease exposure. Such gaps include: the capability to store significant electrical energy locally (a more significant gap than the capability to generate locally); a truly decentralised/distributed and secure communication system, and an associated knowledge storage/access system; a fully decentralised financial system that allows safe storage of financial resources and safe transactions; a decentralised sewage treatment technology; less centralised technological approaches for the supply of both food and water; and a transport system capable of significant load-carrying (though not necessarily high speed), with low-specification roadways and broadly-specified energy supplies.

4. Discussion

The detailed definition of a technological system allows a more rigorous process for the identification of hazards, by ensuring that all system weaknesses are considered. Calculating the exposure level of a technological system is not proposed as a replacement for risk analysis, but as a technique that offers specific insights and also increases the rigour of risk analysis. A measure of contribution-to-vulnerability is both simplified and enhanced in value if it is indexed to the delivery of specific goods/services, at defined levels, to an individual end-user. This approach will allow clarification of the extent to which a given project will benefit the individual. The analysis is applicable to any technological system supplying a specified deliverable at a given service level to a user. It is recognised that while some hazards (e.g. a major natural or man-made disaster) may affect more than one system, the analysis of the technological exposure of each system remains valid and valuable.

We have argued that the analysis of hazard probability is of limited value over long timeframes or when hazards are guided, and that characterising the number and types of weaknesses in a technological system is a better indicator of the vulnerability which it contributes to the person who depends on its outputs. An approach to quantifying the "exposure" of a technological system has been defined and justified as a valid representation of the contribution made to the vulnerability of the individual end-user. The approach generates a fine-grained metric {E1, E2, E3 … En} that is shown to accurately measure the vulnerability incurred by the end-user: calculation of the metric is tedious but not conceptually difficult; the measure is readily verified and replicated; and the calculated values allow detailed investigation of the effect of hypothesized changes to a target system.

The approach has been illustrated with a number of examples, and although only preliminary analyses have been made, the practicality and utility of the approach have been demonstrated. Only a small number of example studies have been presented, although they have been selected to address a range of needs experienced by actual urban-dwellers. In each case the scope and technologies used are proposed to be representative, and hence conclusions drawn from the example studies can be considered to be significant.

Even the preliminary analyses of the examples have indicated two distinct categories of contributors to vulnerability: weaknesses that are located close to the point of final consumption, and highly centralised technological systems such as communications, banking, sewage, water treatment and power generation. In both of these categories the user's exposure is high, reflecting limited design redundancy; it could, however, be reduced by selecting or deploying technology subsystems with lower exposure close to the point-of-use, and by using public standards to encourage multiple opportunities for service substitution. The exposure metric provides a measure of the vulnerability contributed by a given technological system to the individual end-user, and has been shown to be applicable to representative examples of technological systems. Although the results are preliminary, the metric has been shown to allow the derivation of both specific and generalised conclusions. The measure can integrate with, and add value to, existing techniques such as risk analysis.

The approach is offered as a theory of exposure, including the conceptual definitions, domain limitations, relationship-building and predictions that have been proposed [34] as essential criteria for a useful theory.

Acknowledgement

This research is supported by an Australian Government Research Training Program (RTP) Scholarship.

References

[1] J. Forrester, Industrial Dynamics, MIT Press, Boston, MA (1961)

[2] R. Ackoff, Towards a system of systems concepts, Manag. Sci., 17 (11) (1971), pp. 661-671

[3] J. Sterman, Business Dynamics: Systems Thinking and Modelling for a Complex World, McGraw-Hill Boston (2000)

[4] I. Eusgeld, C. Nan, S. Dietz, System-of-systems approach for interdependent critical infrastructures, Reliab. Eng. Syst. Saf., 96 (2011), pp. 679-686

[5] T. Forester, P. Morrison, Computer unreliability and social vulnerability, Futures, 22 (1990), pp. 462-474

[6] B. Martin, Technological vulnerability, Technol. Soc., 18 (1996), pp. 511-523

[7] L. Robertson, From societal fragility to sustainable robustness: some tentative technology trajectories, Technol. Soc., 32 (2010), pp. 342-351

[8] M. Kress, Operational Logistics: the Art and Science of Sustaining Military Operations, 1-4020-7084-5, Kluwer Academic Publishers (2002)

[9] L. Li, Q.-S. Jia, H. Wang, R. Yuan, X. Guan, Enhancing the robustness and efficiency of scale-free network with limited link addition, KSII Trans. Internet Inf. Syst., 6 (2012), pp. 1333-1353

[10] A. Yazdani, P. Jeffrey, Resilience enhancing expansion strategies for water distribution systems: a network theory approach, Environ. Model. Softw., 26 (2011), pp. 1574-1582

[11] Y.Y. Haimes, P. Jiang, Leontief based model of risk in complex interconnected infrastructures, ASCE J. Infrastruct. Syst., 7 (2001), pp. 1-12

[12] Cyber-attack: How Easy Is it to Take Out a Smart City?. (New Scientist, 4 August 2015), (https://www.newscientist.com/article/dn27997-cyber-attack-how-easy-is-it-to-take-out-a-smart-city/, retrieved 22 Mar 2017).

[13] Jeep Drivers Can Be HACKED to DEATH: All You Need Is the Car's IP Address (e.g. The Register, 21 Jul 2015). (https://www.theregister.co.uk/2015/07/21/jeep_patch/, retrieved 22 Mar 2017).

[14] J. Glenn (Sen.), Quotation from retirement speech (1997), seen online at http://www.historicwings.com/features98/mercury/seven-left-bottom.html, Feb 2014

[15] C. Gómez, M. Buriticá, M. Sánchez-Silva, L. Dueñas-Osorio, Optimization-based decision-making for complex networks in disastrous events, Int. J. Risk Assess. Manag., 15 (5/6) (2011), pp. 417-436

[16] C. Perrow, Normal Accidents: Living with High-risk Technologies, Basic Books, New York (1984)

[17] P. Chopade, M. Bikdash, Critical infrastructure interdependency modelling: using graph models to assess the vulnerability of smart power grid and SCADA networks, 8th International Conference and Expo on Emerging Technologies for a Smarter World, CEWIT 2011(2011)

[18] ISO GUIDE 73, Risk Management — Vocabulary, (2009)

[19] A. Gheorghe, D. Vamadu, Towards QVA - Quantitative vulnerability assessment: a generic practical model, J. Risk Res., 7 (2004), pp. 613-628

[20] ISO/IEC 31010, Risk Management— Risk Assessment Techniques', (2009)

[21] FMEA, MIL-STD-1629A, Failure Modes and Effects Analysis (1980)

[22] Y.Y. Haimes, On the definition of resilience in systems, Risk Anal., 29 (2009), pp. 498-501, 10.1111/j.1539-6924.2009.01216.x, 2009 Note page 498

[23] K. Khatri, K. Vairavamoorthy, A new approach of risk analysis for complex infrastructure systems under future uncertainties: a case of urban water systems, in: Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences (2011), pp. 846-856

[24] Y. Shuai, X. Wang, L. Zhao, Research on measuring method of supply chain resilience based on biological cell elasticity theory, IEEE Int. Conf. Industrial Eng. Eng. Manag. (2011), pp. 264-268

[25] A. Munoz, M. Dunbar, On the quantification of operational supply chain resilience, Int. J. Prod. Res., 53 (22) (2015), pp. 6736-6751

[26] A.M. Law, Simulation Modelling and Analysis, McGraw-Hill, New York (2007)

[27] L. Barabási, E. Bonabeau, Scale-free networks, Sci. Am., 288 (2003), pp. 60-69

[28] G. Tanaka, K. Morino, K. Aihara, Dynamical robustness in complex networks: the crucial role of low-degree nodes, Nat. Sci. Rep., 2/232 (2012)

[29] N. Slack, A. Brandon-Jones, R. Johnston, Operations Management, 7th ed. (2013), ISBN-13: 978-0273776208, ISBN-10: 0273776207

[30] ISO/IEC 9075–2, Information Technology – Database Languages – SQL, (2011)

[31] P. Suppes, Measurement theory and engineering, Dov M. Gabbay, Paul Thagard, John Woods (Eds.), Handbook of the Philosophy of Science, Philosophy of Technology and Engineering Sciences, vol. 9, Elsevier BV (2009)

[32] D. Hand, Measurement Theory and Practice: the World through Quantification, Wiley (2004), ISBN: 978-0-470-68567-9

[33] Ponemon Institute, 2013 Study on Data Center Outages (2013), http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/2013_emerson_data_center_outages_sl-24679.pdf, retrieved Dec 2016

[34] J.G. Wacker, A definition of theory: research guidelines for different theory-building research methods in operations management, J. Oper. Manag., 16 (4) (1998), pp. 361-385

Vitae

L. Robertson is a professional engineer with a range of interests, including researching the level and causes of vulnerability that common technologies incur for individual end-users.

Dr Katina Michael, SMIEEE, is a professor in the School of Computing and Information Technology at the University of Wollongong. She has a BIT (UTS), MTransCrimPrev (UOW), and a PhD (UOW). She previously worked for Nortel Networks as a senior network and business planner until December 2002. Katina is a senior member of the IEEE Society on the Social Implications of Technology where she has edited IEEE Technology and Society Magazine for the last 5+ years.

Albert Munoz is a Lecturer in the School of Management, Operations & Marketing in the Faculty of Business at the University of Wollongong. Albert holds a PhD in Supply Chain Management from the University of Wollongong. His research interests centre on experimentation with systems under uncertain conditions, typically using discrete event and system dynamics simulations of manufacturing systems and supply chains.

1 Abbreviation: Failure Modes and Effects Analysis, FMEA.

2 The terminology used is typical for Australasia: other locations may use different terminology (e.g. “gas” instead of “petrol”).

Keywords

Technological vulnerability, Exposure, Urban individual, Risk


Citation: Lindsay J. Robertson, Katina Michael, Albert Munoz, "Assessing technology system contributions to urban dweller vulnerabilities", Technology in Society, Vol. 50, August 2017, pp. 83-92, DOI: https://doi.org/10.1016/j.techsoc.2017.05.002

Cloud computing data breaches: a socio-technical review of literature

Abstract


As more and more personal, enterprise and government data, services and infrastructure move to the cloud for storage and processing, the potential for data breaches increases. Major corporations that have outsourced some of their IT requirements to the cloud have already become victims of cyber attacks. Who is responsible and how to respond to these data breaches are just two pertinent questions facing cloud computing stakeholders who have entered an agreement on cloud services. This paper reviews literature in the domain of cloud computing data breaches using a socio-technical approach. Socio-technical theory encapsulates three major dimensions: the social, the technical, and the environmental. The outcomes of the search are presented in a thematic analysis. The seven key themes identified from the literature were: security, data availability, privacy, trust, data flow, service level agreements, and regulation. The paper considers complex issues, anticipating the need for a better way to deal with breaches that affect not only the enterprise and cloud computing provider but, more importantly, end-users who rely on online services and have had their credentials compromised.

Section I. Introduction

Traditionally, enterprise networks were managed by internal IT staff who had access to the underlying infrastructure that stored and processed organizational data. Cloud computing has emerged to overcome traditional barriers such as limited IT budgets, increased use of outdated technology and the inability of corporations to expand IT infrastructure services to users when needed [1]. Cloud computing is Internet-based infrastructure and application service delivery through a controlled and manageable environment that is provided with a pay-as-you-go agreement structure. Cloud computing has acted to lower hardware and software costs [2]. Buyya et al. [3] analogize that cloud computing is similar to utility-based services such as water, electricity, gas and telephony. Cloud computing allows for adjusting resources in an ad-hoc manner for a predefined duration with minimal management effort [4]. Customers only pay for what is utilized, in an affordable manner, and computing requirements can be scaled down when no longer needed [3].

While cloud computing is seen as a utility, [5] state that cloud computing models are undeveloped technology structures that have immense potential for improvement. This is despite the argument of [6] that cloud computing concepts are not new and that models have been adopted from technologies such as time-sharing mainframes, clustering and grid computing. Yet [7] elaborates that cloud computing technology is far more advanced than other technology and has outpaced the regulatory environment because it transcends legal boundaries. For example, cloud computing has allowed data to reside somewhere other than the data owner's home location [8]. There are three layers generally acknowledged “as a service” within the cloud computing context: infrastructure, platform, and software. Business customers (e.g. online merchants) may opt for one or more cloud service layers depending on the needs of their company, and the needs of end-users (i.e. the customer's customer).

A. Infrastructure as a Service

Infrastructure as a Service (IaaS) enables the cloud consumer to acquire and provision hardware infrastructure services through the use of cloud provider web interfaces [9]. Through an abstracted view of the hardware, consumers are able to provision infrastructure on a pay-as-you-go basis that can be adjusted in an ad-hoc manner [10]. The IaaS delivery model also provides the ability to provision system images, scale storage and processing requirements and define network topologies through the cloud provider's user interface management portal [10]. The infrastructure is offered through time-shared facilities that allow storage, processing and network services to be utilized as a service [1]. According to [6, p. 44] IaaS “allows companies to essentially rent a data center environment without the need and worry to create and maintain the same data center footprint in their own company”.
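
As a rough illustration of the provisioning model described above, the sketch below requests a virtual machine from a hypothetical IaaS REST endpoint; the URL, token and payload fields are invented for the example and do not correspond to any real provider's API.

# Illustrative only: provisions a virtual machine against a *hypothetical*
# IaaS provider REST API; endpoint, fields and token are invented.
import requests

API = "https://iaas.example.com/v1"          # hypothetical endpoint
TOKEN = "replace-with-api-token"             # hypothetical credential

def provision_vm(image: str, cpus: int, ram_gb: int, storage_gb: int) -> dict:
    """Request a new virtual machine; billing is pay-as-you-go per hour."""
    payload = {
        "image": image,            # system image to boot
        "cpus": cpus,              # compute sizing, adjustable later (ad-hoc scaling)
        "ram_gb": ram_gb,
        "storage_gb": storage_gb,  # block storage can be resized on demand
    }
    resp = requests.post(
        f"{API}/instances",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()             # e.g. instance id, IP address, hourly rate

if __name__ == "__main__":
    vm = provision_vm("ubuntu-22.04", cpus=2, ram_gb=4, storage_gb=50)
    print("Provisioned instance:", vm)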

B. Platform as a Service

Platform as a Service (PaaS) enables cloud customers to deliver web-based applications to their users [8]. PaaS also allows cloud customers to support facilities that provide on-demand web application utilization without the need to manage the underlying complex network infrastructure. According to [8, p. 49], the principal characteristics of PaaS are “services to develop, test, deploy, host, and manage applications to support the application development life cycle”. An added benefit of PaaS is that it allows cloud customers to test developed applications without the need to utilize organizationally-owned infrastructure [6].

C. Software as a Service

Software as a Service (SaaS) allows cloud customers to utilize software resources through a web-based user interface [8]. The SaaS model allows cloud customers to use software applications without the need to store, process and maintain backend infrastructure and platform repositories [6]. The level of abstraction increases as cloud customers migrate from IaaS to SaaS delivery models, and hence responsibility for the SaaS model is handed to cloud providers [11]. Furthermore, [12] discuss multi-tenant SaaS architecture, in which tenants share common resources and underlying instances of both database and object code.

Section II. Security

Several authors [13] [14] [15] agree that security concerns are among the biggest issues that must be addressed to enable growth in cloud computing services. The use of public clouds demands tighter restrictions that cloud providers must incorporate into their service models. The legal requirements that cloud providers must adhere to are yet to be standardized and as a result remain the biggest obstacle to continued substantial growth of the cloud model [14]. Svantesson and Clarke [5] emphasize that the issue of security within the cloud computing context should be reviewed rigorously by potential business customers and end-users before adoption, to ensure that confidentiality, integrity, availability and privacy policies are addressed by the provider.

A recent study [16] focuses on explaining the concerns over network boundaries in the cloud computing model, where the risk of attack is increased as a result of outdated security solutions. The continued usage of cloud computing will result in more devices being connected outside the traditional network boundary, which will in turn mean that the underlying data that is stored may be compromised. Similarly, [7] states that whereas once a user was only allowed to log on if they were on the physical network, they can now log on from almost any device with a network connection. In traditional enterprise networks, organisations had access to security settings and configurations, whilst in the cloud computing model the network boundary is managed by the cloud provider.

Subashini and Kavitha [17, p. 3] state that “guaranteeing the security of corporate data in the cloud is difficult, if not impossible”. The state of cloud security is under stress as security threats and vulnerabilities may not be noticed by the cloud customer and their end-users [18]. This in turn raises the alarm for disaster recovery plans to be specified in service level agreements to avoid contract breaches. Kshetri [15] elaborates that security and privacy issues come to the fore as customers start to be concerned that data may be used without the explicit consent of the end-user. To complement the latter, [13] details further concerns such as loss of control over data via malicious or non-malicious intent, an issue that can never be completely eradicated.

A. Cloud Computing Data Security Encryption Keys

There have been numerous studies seeking to secure the cloud computing model from risks and threats, but these have had little impact on the overall industry [19]. The outcome from [19] proposes key-based encryption, demonstrated through simulation modelling, that allows data to be stored on cloud infrastructure with participants accessing certain data according to their encryption key permission. To critique the former, [20] state that security mechanisms that involve encryption key solutions degrade performance levels and do not meet scalability requirements in cloud computing environments. Given the issues of performance and scalability with encryption keys, Esayas [13] also elaborates that encryption keys might not meet business requirements, as such techniques are not suitable for all cloud computing services. Implementing security is essential in overall cloud models, although it increases overheads that diminish the returns and benefits.
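
A minimal sketch of the key-based idea, assuming client-side symmetric encryption with the third-party Python 'cryptography' package (key management and per-participant permissions are out of scope here):

# Minimal sketch of key-based encryption before data leaves the customer:
# only holders of the key can read what the cloud stores.
from cryptography.fernet import Fernet

# Generated and kept by the data owner (or a key-management service),
# never handed to the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer: Jane Doe, card ending 4242"
ciphertext = cipher.encrypt(record)      # what is actually uploaded/stored

# The provider (or an insider) sees only ciphertext; decryption requires the key.
assert cipher.decrypt(ciphertext) == record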

Many industry and academic experts state that encryption keys will pave the way for secure cloud computing, from both a perpetrator and an insider attack point of view. Insider breaches are becoming more common, as attacks may be deliberate or simply the result of administrative error [21]. In review, [18] detail that traditional security solutions change when enterprises adopt cloud computing, and existing encryption standards are outdated for the cloud computing model, inhibiting their effective use for privacy protection. The ability to overcome these issues will allow the cloud customer peace of mind with respect to data integrity and end-user confidence [22].

Another approach to encryption key security is demonstrated using a simulation approach to protect data from unauthorized access and violation. The concept introduces data coloring to protect different types of data. This distorts the original data, and only owners that hold the same color key can view the data. Yet [13] and [20] argue that security keys are highly volatile in cloud environments, so that if the decryption key is mismanaged the data cannot be decrypted. To add further criticism, the simulation-based data coloring security solution is of limited overall usefulness as it relies on simplistic arithmetic calculations [13].
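
The toy sketch below is only an analogy for the data coloring idea described above, not the published algorithm: numeric values are distorted with pseudo-random offsets derived from a secret color key, and only a holder of the same key can reverse the distortion.

# Toy illustration (not the authors' algorithm): "coloring" numeric data by
# adding pseudo-random offsets derived from a secret color key. Stored
# values are distorted; only a holder of the same key can recover them.
import random

def color(values, color_key, scale=1000.0):
    rng = random.Random(color_key)                 # deterministic per key
    return [v + rng.uniform(-scale, scale) for v in values]

def decolor(colored, color_key, scale=1000.0):
    rng = random.Random(color_key)                 # replay the same offsets
    return [c - rng.uniform(-scale, scale) for c in colored]

readings = [72.5, 68.1, 91.4]
stored = color(readings, color_key="owner-secret")
recovered = decolor(stored, color_key="owner-secret")
assert [round(x, 6) for x in recovered] == [round(x, 6) for x in readings]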

B. Disadvantages of Traditional Security Practices in Cloud

Perpetrators and insider attacks are considered high-impact security threats to cloud computing. Pek et al. [23] detail that security issues are not being offset by either hardware or software protocols. In assessment, [17] surveys existing traditional security solutions and concludes that cloud data needs higher levels of security to overcome vulnerabilities and threats. Traditional security models such as intrusion detection systems, intrusion prevention systems and network firewalls do not effectively address the security issues that are being experienced in the cloud computing model [17].

Salah [24] introduces a proof-of-concept cloud-based network security overlay. For this simulation, Salah [24] uses security intrusion detection systems, network firewalls and anti-virus tools that are intended for cloud environments. The results demonstrate that significant cost savings can be achieved with this implementation, although increased network latency and network bandwidth utilization are recorded. [25] emphasize that cloud environments are far greater in complexity and design than traditional enterprise environments. Physical and virtual machines are rapidly being deployed in data centers, and managing the security of this environment using the traditional security methodology is stagnant and unrealistic. For example, Salah [24] has not included solutions for when an intrusion has compromised a particular virtual machine, and how a cloud provider and customer should respond. This is of particular concern as [25] state that once a virtual machine has been compromised the attacker can gain access to the lower-level hypervisor of the machine.

C. Data Security Issues in Virtualised Environments

Virtualization first came onto the market in IBM mainframes through the use of a hypervisor to initiate virtual machines [21]. Virtualization concepts and technical background explanations are not being explicitly detailed to the cloud customer, adding to security concerns. Sensitive data that resides on the cloud computing model is susceptible to threats and vulnerabilities arising from virtualization techniques [26]. A primary design issue is to denote the sensitivity of the data that is being stored and assign low and high security controls for that virtual machine. According to [23], sensitive and non-sensitive data should not be stored on the same physical machine, although this has not been publicized to cloud customers.

Data in virtualized environments, according to [17], is an important topic, as data location and data ownership are key enablers of increasing trust relationships between provider and customer. To complement the former, [14] elaborate that trust can be diminished by concerns relating to data breaches. Data security breaches in virtualized environments can affect one or many tenants that reside on a single physical machine, at times without notifications being issued to customers or consumers [14]. As many tenants reside on a single physical machine, customer data may be accessed by unauthorized personnel if the virtualized environment is compromised [27].

In [28], the definition of sensitive data relates to software configuration data, network configurations and resource allocations for virtualized environments. By comparison, [14] define sensitive data with respect to an individual's social information. Throughout the study, [28] state that current security measures for virtualized environments are lacking and that increased prevention, detection and protection measures need to be in place. These measures include an increase in the level of policy standards and managerial input during cloud provider assessment for cloud services. [29] emphasizes that the lack of service level agreement acknowledgement during cloud provider assessment plays a pivotal role in ruling out important components of cloud services.

D. Outsourcing Sensitive Data to Virtualised Environments

When cloud customers outsource their workload to a cloud provider due to resource constraints or volatile computation requirements, [20] state that “current outsourcing practice operates in plaintext - that is, it reveals both data and computation results to the commercial public cloud”. This should be concerning to cloud customers, as they very often store data that is likely to contain sensitive information (e.g. corporate intellectual property). The management practices for data security in virtualized cloud environments, according to [14], are simply inadequate for sensitive data to be stored. Rocha et al. [21] detail that system and network administrators have log-in credentials to access the virtualization management layer of the physical machine. With this level of access, coupled with plaintext data outsourcing, virtualized cloud environments are arguably not suitable for sensitive data storage [5].

E. Cloud Security Auditing and Certification Compliance

Standards that include auditing and certifications are considered to be inadequate for the cloud computing model [15]. To complement the former, [30] state that auditing and certifications have not been widely implemented and adopted by cloud providers. A set of security standards and best practices are being developed by the Cloud Security Alliance (CSA), although current cloud providers are yet to demonstrate enthusiasm or optimism that these will play a role in avoiding security breaches [15].

F. Billing Monitoring Security Concerns on the Cloud

The continuation of monitoring services from cloud providers offers timely and effective billing solutions for cloud customers. However, this is also a security matter, given providers need to monitor customer traffic to bill accordingly [31]. The lack of standards for monitoring services increases privacy concerns, as cloud customers cannot apply security metrics nor monitor what is being scanned [30]. Pek et al. [23] regard access to the management portal of the cloud computing model as integral to overall security status and to the virtual environments.

G. Cloud Security Requirements and Modelling Approach

In their proposed framework modelling approach, [32] address privacy and security requirements analysis for cloud customers through a rigorous process that selects the most suitable cloud vendor. The conceptual framework incorporates different cloud computing stakeholders, iterative requirements processing and a security modelling language. The authors note that the main limitation of this proposed conceptual framework is that privacy is treated as a subset requirement of security. [33] agree that the lack of service level agreement analysis during the conceptual framework process is a major contributor to ineffectively measuring cloud provider services.

Chen and Zhao [27] develop a data life cycle conceptual framework through a semantic review of current literature. The various stages address everything from initial data generation by cloud customers to how data destruction is performed by cloud providers once a cloud service is terminated. Throughout the conceptual framework, the lack of monitoring of service level agreements with respect to data location, sharing, privacy and security is of particular concern. The conceptual framework process does not provide insights into overall compliance and regulatory status. [5] emphasize that cloud customers and end-users need to acknowledge the importance of cloud provider compliance and regulation status.

The architecture proposed by [34] is a proxy-based cloud service to enable collaboration between multi-cloud consumers and providers on an ad-hoc basis. The concept allows data sharing and processing without establishing agreements, negotiation contracts or business rules. In another study, [25] elaborate on the significance of establishing standard service level agreements and contracts for cloud services and how to monitor them on a continuous basis. Modi [25] also describes underlying internet protocol (IP) and proxy service vulnerabilities. Through these, attacks such as man-in-the-middle, domain name system (DNS) spoofing and address resolution protocol (ARP) spoofing can target proxy-based cloud models. Comparatively, [34] and [35] looked at collaboration between multi-vendor and customer clouds in an alternative way. Yang's [35] simulation involved service level agreements for customers whilst participating in cloud federation services. The measured components of the SLA included Quality of Service (QoS) attributes such as connection latency, bandwidth and threshold limits. The security that [35] incorporated in the simulation included encryption and authentication methods that are standard practice for online activities.

Yang and Jia [36] introduce their concept of enabling dynamic auditing of data that is stored on a cloud service through a conceptual framework. They define the key categories that need attention: increased confidentiality, and dynamic and batch auditing. The results were compelling, as decreased costs of processing these audits were achieved. The involvement of a third-party auditor within the process enabled the avoidance of bias in the results. [17] emphasizes that the compliance and regulatory status of cloud providers is crucial to the cloud customer. The lack of acknowledgement by [36] of the attitude of cloud vendors towards participation and approval was a key difference between the studies. Cloud vendors may inevitably avoid these scenarios and decline to participate in data confidentiality checks.
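
To make the auditing interaction concrete, the sketch below shows a deliberately simplified challenge-response audit: the auditor retains per-block digests recorded at upload time and challenges random blocks. Real schemes such as the dynamic auditing in [36] use cryptographic tags so the provider need not return or rehash raw data; this only illustrates the general pattern.

# Highly simplified sketch of third-party auditing by challenge-response.
import hashlib
import random

BLOCK = 4096

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Upload time: customer/auditor records a digest per block.
data = bytes(random.getrandbits(8) for _ in range(3 * BLOCK))
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
auditor_state = {i: digest(b) for i, b in enumerate(blocks)}

# Audit time: challenge a random subset and verify the provider's responses.
challenge = random.sample(sorted(auditor_state), k=2)
provider_response = {i: digest(blocks[i]) for i in challenge}   # provider recomputes
audit_passed = all(provider_response[i] == auditor_state[i] for i in challenge)
print("audit passed:", audit_passed)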


Section III. Data Availability

A. Multiple Availability Zones

Cloud vendors that have multiple availability zones use this functionality as a method to distribute network load and offload critical services to a larger number of geo-redundant sites. Sun et al. [26] state that replication technology is used in multiple availability zone setups to avoid data loss, although this method is prone to cross-border issues if data is stored in differently regulated jurisdictions. The study by [37] aimed to achieve data availability through the use of virtualization, and raised related security issues. Sun et al. [26] focused on data availability through offloading services to alternative servers for load distribution. Comparatively, [37] insisted that high data availability applications be kept in-house until further developments are made to the cloud computing model, although this article is now somewhat dated.

B. Enhancing Data Security to Maintain Data Availability

In the study by [38], enhanced security was achieved through the utilization of security mechanisms such as double authentication and digital signatures. Data availability was achieved by enabling the data to be securely stored and retrieved. In comparison, the study by [39] aimed to increase data availability through a two-stage process: using a trusted third party to maintain visibility of the security mechanisms that are used, and using enhanced security mechanisms to protect the data. Thus, [38] proposes the solution through an experiment-based case, whereas [39] only demonstrates it through expected security tools and their capabilities. In contrast, using a literature review, [40] indicate that virtualization security is very important to data availability. Their analysis of current security mechanisms finds virtualization security to be under-managed and in need of enhanced management practices.

C. Data Availability Priorities

Sakr et al. [41], in their cloud computing survey, aimed to investigate the cloud challenges that arise by utilizing their developed model. While they identified several advantages, such as utilization and bandwidth improvements, there were substantial drawbacks to cloud storage techniques that raised concerns. Their findings indicated that service availability highly impacts the cloud computing model, as even slight downtime or service degradation would impact the use of the service. Similarly, the study by [42] indicates that performance delivery through the availability of the service was the most significant issue. In critique of [42], its findings were empirically based, as compared to [41] where findings were derived from a literature survey.

Section IV. Privacy

A. Defining Information Privacy in Technology

Meanings of information privacy vary across disciplines. According to the Australian Privacy Law and Practice Report 108: “Information privacy, [is] the establishment of rules governing the collection and handling of personal data such as credit information, and medical and government records.” It is also known as data protection [43]. Information privacy can be considered an important concept when studying cloud computing. It has four sub-components [44]:

  • Psychologically: people need private space;
  • Sociologically: people need to be free to behave… but without the continual threat of being observed;
  • Economically: people need to be free to innovate; and
  • Politically: people need to be free to think and argue and act.

It is important to note that information privacy is not only something that is important to a cloud computing business customer, but also an end-user who is likely to be an everyday consumer.

B. Technological Advances Outpace Privacy Regulation

The Australian Privacy Law and Practice Report 108 noted that the “…Privacy Act regulates the handling of personal information.” Although the Act was originally designed exclusively for public sector agencies, the Information Privacy Principles (IPPs) now have a broader reach [43, p. 138]. Complicating the issue of privacy, especially information privacy, is how it is interpreted, or for that matter ignored, by different legal systems. Gavison [45, p. 465] sums up the problem of privacy in an ever-changing technological world when he writes: “Advances in the technology of surveillance and the recording, storage, and retrieval of information have made it either impossible or extremely costly for individuals to protect the same level of privacy that was once enjoyed.”

C. Sensitive Data Storage on Cloud Infrastructure

The EU Directive defines sensitive data as personal data that includes health records, criminal activities, or religious philosophy [14]. Similarly, [27] define e-commerce and health care systems data as sensitive. [28] defines sensitive data as including personal attributes and security configuration files. Subashini and Kavitha [17] state that sensitive data holds value to the end-user and needs to be protected. In addition, [46] discusses how each cloud customer needs to assess suitability and evaluate the security controls that the cloud provider offers. Sun et al.'s [26] key focus is that cloud customers must first acknowledge that their sensitive data is stored on cloud computing infrastructure, and that cloud providers need to assure them that it is kept confidential. [15] states that cloud customers are cautious about utilizing the cloud computing model to store sensitive data. [14] state that protecting sensitive data in cloud computing is the biggest challenge for cloud customers.

Cloud customers are especially anxious about the release of their information to third-party vendors without their acknowledgement [22]. Sensitive data that is stored on cloud provider infrastructure is often non-aggregated. All data is tightly coupled, thus allowing any stakeholder that can access the data to utilize it [47]. [17] detail that cloud customers holding non-aggregated data are vulnerable to insider breaches, as data can be taken without cloud customer acknowledgement. Non-aggregated data elements are either weakly encrypted or clearly visible. Ter [48] also discusses the importance of cloud customers decoupling sensitive data from non-sensitive data as a minimum standard if cloud computing is utilized. The ability to process large quantities of data and query datasets at immense speed is available using cloud computing [47]. Yet this very capability raises concerns about privacy and sensitive data security mechanisms. Privacy concerns are raised because the cloud provider's practices for data retention and deletion, with respect to virtualization techniques, have not been elaborated [28].

D. EU and AUS Data Privacy

King and Raja [14] detail the privacy rights that cloud customers have if they choose to store data in an EU-based cloud. It follows that cloud providers need to ensure that they act according to local regulations. The complexity arises when a cloud customer in Australia, for instance, is subject to foreign laws because their data is stored in another jurisdiction [46].

Section V. Trust

Cloud providers interpret trust as either a security or a privacy issue [15]. In comparison, [18] state that trust is strengthened by having tighter technical and social means to enable transparency for cloud customers. End-users of cloud computing (i.e. everyday consumers) lack trust, as cloud providers limit the amount of information provided to them directly on data transfer, storage and processing. End-users may also be concerned about confidentiality [49]. A large subset of cloud end-users have concerns that their data may be used inappropriately for other purposes. Nguyen [7, p. 2205] expresses that when a cloud customer “[m]aintain[s] personal property on a third party's premises, he or she retains a reasonable expectation of privacy, even if that third party has the right to access the property for some purposes.”

A. Increasing Cloud Trust with Security Technology Solutions

Wu et al. [50], in their research, enable trust by increasing levels of security. They introduce a trusted third party to provide the secret key for encrypting data storage. This increases the likelihood that consumers have stronger security protections, in the form of secure envelopes, to prevent data violation. This kind of solution, however, incurs higher network traffic costs. In agreement, [19] and [20] note that enhanced security encryption degrades system performance and scalability.

The global dispersion of cloud customers and data centers alters the current domain trust relationship, as cloud customers and servers might not be in the same trusted domain [20]. In comparison with the latter, traditional methods of enabling and enhancing trust are simply unrealistic as the amount of data to process is growing exponentially [49]. Integrity mechanisms that were once used in traditional enterprise data centers focused on independent and isolated servers. Hashing entire files is not feasible in cloud data center technology [49]. This creates uncertainty for cloud consumers that do not have background knowledge in cloud computing. It was also found that cloud customers have little or no knowledge of trust-related issues in cloud computing.
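
A small sketch of the alternative that the infeasibility of whole-file hashing points towards, assuming a simple fixed-size block scheme: per-block digests let either party verify or update a single chunk without streaming the entire file through a hash.

# Sketch of block-level integrity digests instead of a single whole-file hash.
import hashlib

CHUNK = 1 << 20  # 1 MiB blocks

def block_digests(path: str) -> list[str]:
    """Return a SHA-256 digest per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def verify_block(path: str, index: int, expected: str) -> bool:
    """Re-check a single block against its recorded digest."""
    with open(path, "rb") as f:
        f.seek(index * CHUNK)
        return hashlib.sha256(f.read(CHUNK)).hexdigest() == expected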

B. Enhancing Trust from Social and Technical Perspectives

Trust is notably difficult to sustain as it is dynamic in nature and subject to other factors that may influence the cloud customer's behavior [26]. The ability to improve trust using cloud computing is not solely a technical issue; it needs to include social structures [22]. Throughout, [15] describes security and privacy issues as being shaped by the emotions, authority and power of the individuals that use cloud computing resources.

Kshetri [15] details the importance of increasing security while lowering privacy issues by enhancing trust relationships between cloud provider and customer. To support the latter, King and Raja [14] state that security weaknesses will lead to lower consumption of cloud computing and a further decrease in customers handing over data. King and Raja [14] uphold that policymakers need to enforce standards and practices within the cloud computing industry. With respect to customer trust, [51] states that enhancing transparency with respect to security will only act to better support trust. Relating to social trust issues, [28] describe trust in relation to technological and virtualized concepts. Enabling trust between virtualized systems requires overcoming vulnerabilities in hardware and software design. A key security platform used in virtualized environments is the Trusted Platform Module (TPM), an industry standard for establishing a root of trust in hardware design and components [28].

Cloud customer trust concerns are likely to continue as failures in both the technical and social structures of cloud computing remain unresolved [14]. Kshetri [15] states that this is conclusive and ongoing as cloud providers fail to give cloud customers adequate and meaningful information, further diminishing trust. Trusting cloud providers with corporate transactions needs better management [17]. Cloud customers with sensitive data will continue to rationalize and investigate cloud computing. Further research is required in the area of constructing regulatory frameworks that cover trust relationships between all parties in a service level agreement (SLA) [14].

C. Increasing Trust with Service Level Agreement Visibility

According to [5], cloud providers grant minimal visibility into their offered service acceptance terms and agreements. The response time of service deployments is a critical factor for end-user acceptance. Enquiry into, and reading of, the terms and agreements of the proposed service is rare, as most customers (and their end-users) will not have read, or even be aware of, the terms and agreements they sign up to [5]. King and Raja [14] state that trust will be jeopardized as privacy and security concerns continue to rise. [5], [14] and [15] discuss perceptions that trust will be further diminished as cloud providers lack the enthusiasm and impetus to address these concerns. As a result, cloud providers continue to have full authority over customer data [52], although this has recently begun to change, with many suggesting mandatory data breach notification and even commensurate penalties for untimely communication about breaches.

Section VI. Data Flow

A. Data Flow Between Multiple Jurisdictions

In network operations, data flow is essential to the overall planning and lifecycle management tasks of IT departments. Understanding where data is being transferred affects the type of cloud computing model chosen and the overall data storage techniques. Critical and sensitive data belonging to end-users of cloud solutions may contain personal information which cannot be shared with third-party vendors. Fears amongst end-users of cloud computing models are greatest when cross-border data flows occur without prior warning from the cloud provider. Esayas [13] examined the EU Data Protection Directive, which dates back to 1995, and stated that privacy protection was rather limited for the cloud customer as data was being transferred between jurisdictions. In support, [31] detail privacy acts and regulatory bodies in various jurisdictions which clearly lack sufficient power to deny cloud providers the right to transfer data to another jurisdiction.

B. Cloud Infrastructure Outpacing the Legal Framework

Adrian [46] argues that technology developments generally outpacing privacy law and regulation is a key contributor to privacy concerns. This is particularly important to cloud customers, as the legal framework is outdated and insufficient for current cloud computing models [46]. Outdated privacy laws and regulations are difficult to apply to cross-border data flows, as foreign corporations assert ownership of the data that was transferred [5]. Complications and confusion occur when legal frameworks become uncertain to cloud customers. To add to the severity of the problem, cloud customers are often unaware of the specific physical storage location of their data. Australia's lack of updating and enhancing of the Privacy Act 1988 is particularly problematic for cloud customers [46]. Svantesson and Clarke [5, p. 392] state that cloud computing “extends beyond mere compliance with data protection laws to encompass public expectations and policy issues that are not, or not yet, reflected in the law”.

C. Australia and EU Legal Frameworks Compared

The transfer of data over the Internet that cloud providers perform does not correspond to cross-border transfer of data as defined within the Australian Privacy Act 1988 [13]. In comparing the Australian Privacy Act 1988 and the European Union (EU) Data Protection Directive 1995, [14] explicitly mention that the EU Directive prohibits member states from cross-border data transfers to jurisdictions whose laws and regulations are below acceptable standards. In contrast, [46] mentions that conflicting judgements often occur, as enforcing these rules becomes difficult to sustain in foreign countries.

King and Raja [14] explain that EU member states have far tighter privacy laws and regulations compared to Australia when cross-border data flows come into question. The EU Directive gives cloud customers basic rights to their data, and knowledge of where their data is physically stored. Cross-border data flows out of Australia are dissimilar, as the common law is applied to these scenarios, which imposes far fewer restrictions compared to the EU Directive [52]. Simply put, the EU Directive states that cross-border data flows cannot occur if foreign jurisdictions do not have the same levels of enforcement [53].

Compliance and regulation restrict data transfers between certain jurisdictions, depending on the location from which the data originated and where it is being transferred [17]. With respect to the EU Directive, even if the cloud consumer is located outside the EU, data that is generated within the EU cannot be transferred outside the EU [14]. Whether data has been transferred between jurisdictions is often difficult to determine, as the flow of data between jurisdictions can be altered at any time without the cloud customer's acknowledgement [14].

The concern of cloud customers over sensitive data is often overlooked and underestimated as cloud providers continue to transfer data to other jurisdictions. This has also raised concerns, particularly over the sensitive data that end-users of cloud services generate from online applications. Sensitive data stored within traditional enterprise networks has been controlled by authorized personnel with tight restrictions using an access-control matrix. These restrictions comprise both physical security and security solutions such as authorization and cryptography. Regardless of data location, cloud consumers need to have control over data flow between jurisdictions [17].

D. Google Docs Privacy Policy: An Example

In their analysis of the Google Docs Privacy Policy, [5] state that cloud end-users of the Google SaaS model have minimal knowledge of where their data is being transferred and processed. In complement, [53] declare that Google's service agreements accept no liability for the privacy and security of cloud end-user data. The privacy policy does not provide fundamental information about how third-party gadgets collect, manipulate and store cloud end-user data when Google Docs is used [5]. The protection is somewhat inclusive, as data residing within the EU cannot be transferred to non-EU jurisdictions even though the data owners are not EU-based residents [14]. The confusion for end-users of cloud computing is high, as cross-border data flows are often not highlighted and detailed to the cloud end-user during sign-up to a cloud-based application. The claim made by cloud computing providers is that cross-border data flows allow for higher service guarantees to the cloud customer and their respective customers. Acceptable service level agreements for cloud consumers can be taken into consideration whilst developing cloud strategies. A unified service level agreement will help improve confidence for future cloud computing migration [54].

Section VII. Service Level Agreements

Buyya et al. [3], in their seminal study, describe the importance of service level agreements in cloud computing. Service level agreements provide the needed protection between cloud provider and cloud customer. Similarly, [55] also detail that SLAs are important documents that set expectations for both the cloud customer and the provider. With cloud computing being dynamic in nature and resources being adjusted on an ad-hoc basis, [56] discuss the need for the SLA to be self-adaptable and autonomic. For unexpected service disruption to be avoided, cloud providers need to ensure that service guarantees are met in a timely fashion [57].

A. Cloud Computing Service Level Agreement Importance

The issues associated with cloud computing continue to exist and several factors considered by [17] are raised as being important. These include: service level agreements, security and privacy constraints. Service level agreements are pivotal in establishing a contract between provider and customer in the adoption of cloud computing technologies and services. Cloud customers need to be selective and to incorporate security technology and privacy prevention policies within service level agreements [2]. Interpreting SLAs on behalf of cloud customers will enable proper decisions to be made by key managerial staff. SLAs provide customers with the ability to terminate a contract if service levels are not met by the cloud provider.

B. Service Level Agreements and Negotiation Strategies

Karadsheh's [58] findings propose a security model and an SLA negotiation application process, derived from understanding business security requirements prior to facilitating cloud computing activities. Throughout the study, the concept was to build confidence in the enterprise by applying the right requirements. Karadsheh's [58] first step was to perform due diligence on the cloud provider, then apply the needed security policies and establish whether the cloud provider would be able to adhere to them. The remaining component was to negotiate SLAs. Questions based on data location, privacy agreements and backup strategies were treated as measurable attributes, and if these were satisfied, a cloud provider would be selected. To complement this, [33] and [58] discuss the importance of understanding the SLA prior to cloud computing usage, enabling all parties to set their legal and technical expectations.

C. Public Cloud Provider SLA Content Analysis Approach

Pauley [59] designed a transparency scorecard framework to measure security, privacy, auditing and SLA attributes. The scorecard framework questions were based on SLA guarantees, SLA management procedures and records of SLA usage. The scorecard was designed to allow cloud consumers to identify the cloud provider that best suited their intended application. Pauley's [59] approach compared cloud customer requirements with publicly available information from cloud providers and used a self-service method of analysis. In comparison, [60] analyzed cloud provider applicability with SLAs, without reference to security, privacy and audit. Qiu et al. [60] gathered SLAs from public cloud providers that placed no restrictions on viewing their SLAs. The sample size was larger than in [59]. Qiu et al. [60] also applied the content analysis technique to analyze the data within the SLAs and followed up with a case study and interviews with the cloud customer.

The findings of [59] detailed that out of the six public cloud providers chosen (Google, Amazon Web Services, Microsoft Azure, IBM, Terremark, Savvis), only two scored greater than 50% on the SLA scorecard. The results were masked and the cloud providers were not identified. Qiu et al. [60] analyzed more SLA attributes than [59], providing greater insight into the true value of SLAs for cloud computing. Some of the added attributes in the second study that proved significant were definitions of data protection policy, backup policy and regulatory compliance policy, which were missing from the first study.

Baset [29] details the importance of understanding the variability of SLAs from the cloud provider perspective. The author introduces the attributes of service guarantee, time period, granularity, exclusions, service credit and service violation monitoring. These are the key attributes analyzed throughout the study using content analysis of publicly available SLAs. Qiu et al.'s [60] study has additional attributes that define the obligations from both the provider and the customer points of view. An important finding from [29] is that service violation incident reporting was not available in the actual SLA of any cloud provider, save for Amazon Compute, which factored in five days of incident reporting. Provisions requiring acknowledgement from cloud providers when a data breach, disruption or security-related incident occurs are, alarmingly, noted as “not available” within the service level agreements. The study from [29] also indicates that the SLAs analyzed between October 2008 and April 2010 did not change to reflect actual cloud provider technology status. Baset [29] also discusses that enterprise SLAs should comprise more than just availability and performance, covering also privacy, security and disaster recovery.
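
To illustrate how the attributes Baset [29] identifies can be made operational, the sketch below encodes a hypothetical SLA as data and computes a service credit from measured uptime; the guarantee, period and credit tiers are invented numbers, not any real provider's terms.

# Hypothetical SLA attributes expressed as data, with a toy service-credit rule.
sla = {
    "service_guarantee": 99.95,          # availability, percent
    "time_period_days": 30,              # granularity of measurement
    "exclusions": ["scheduled maintenance", "force majeure"],
    "credit_tiers": [(99.0, 25), (99.95, 10)],  # (below this %, credit %)
    "violation_reporting_days": 5,       # customer must report within N days
}

def service_credit(measured_uptime_pct: float) -> int:
    """Return the credit (percent of the monthly bill) owed for the period."""
    for threshold, credit in sorted(sla["credit_tiers"]):
        if measured_uptime_pct < threshold:
            return credit
    return 0

print(service_credit(99.90))   # 10 -> below the 99.95 guarantee, above 99.0
print(service_credit(98.50))   # 25 -> below 99.0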

D. Measuring Cloud Provider Service Level Agreements

In organizational use of cloud computing, defining the SLA is crucial. The service being utilized will be directly affected if the SLA does not fit the cloud consumer's requirements. In their proposed framework, [61] evaluate and rank the SLA attributes of cloud providers. They utilize the Service Measurement Index (SMI), whose attributes are accountability, agility, cost, performance, assurance, security, privacy and usability. The authors extend this concept and introduce user experience as another attribute. They apply the Analytic Hierarchy Process (AHP) so that cloud consumers can evaluate and rank cloud providers based on the attributes of the SMI. The framework is applied in a case study consisting of three cloud providers (Amazon EC2, Microsoft Azure and Rackspace). Based on the user requirements, the attributes are given a ranking matrix, resulting in total weights of the quality of service attributes. The final outcome of the proposed study concluded that S3 (the anonymized name for service provider 3) was the best in terms of performance, although S1 (service provider 1) provided the best quality/cost ratio. In comparison with [29], [59] and [60], [61] introduce the established SMI and AHP frameworks, which evaluate and measure attributes against known metrics rather than analyzing them from an individual customer's perspective.
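
The sketch below is a simplified stand-in for the SMI/AHP ranking described above: attribute scores per provider are weighted by user priorities and summed. A full AHP derives the weights from pairwise comparisons; here they are given directly, and the provider names and scores are invented for illustration.

# Simplified weighted-sum ranking in the spirit of SMI/AHP (illustrative values).
weights = {"performance": 0.4, "cost": 0.3, "security": 0.2, "usability": 0.1}

providers = {                       # scores normalised to 0..1 per attribute
    "S1": {"performance": 0.70, "cost": 0.95, "security": 0.80, "usability": 0.75},
    "S2": {"performance": 0.80, "cost": 0.70, "security": 0.85, "usability": 0.80},
    "S3": {"performance": 0.95, "cost": 0.60, "security": 0.90, "usability": 0.85},
}

def score(attrs: dict) -> float:
    return sum(weights[a] * attrs[a] for a in weights)

ranking = sorted(providers, key=lambda p: score(providers[p]), reverse=True)
for p in ranking:
    print(f"{p}: {score(providers[p]):.3f}")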

E. A Brief Analysis of Google Service Agreements

Svantesson and Clarke's [5] analysis of the Google Docs service terms discusses how cloud customers have very little knowledge of how their data is used and where it resides. [53] also details that Google's service agreements provide no protection for either the privacy or the security of cloud customer data. With respect to cloud customer protection, [62] observes that Google's service agreements state that the Internet search giant has the right to use the content that is obtained and publicly displayed through its Google services. Google can willingly use customer data by accessing, indexing and caching it without the end customer's knowledge [62]. These agreements are enforced often without the knowledge of the cloud customer or the cloud customer's customer [5].

Section VIII. Regulation

Cloud computing regulation in the U.S. has yet to mature and in certain circumstances lacks adequate protection for cloud customer data confidentiality, integrity and availability [14]. Comparing U.S. cloud computing regulation to that of the EU is challenging, as the EU has tighter restrictions on what is deemed acceptable and unacceptable [14]. Current regulatory rights lack the ability to protect data owned by cloud customers when that data is held in a jurisdiction different from that of the data owner [18]. Conflicting regulatory rights across jurisdictions result in foreign laws being applied. Adrian [46] describes how new regulation for cloud computing models is inevitably risky and costly, as change would impact individual entities. Constructing new regulations would impose burdens on existing and established rights, as all entities would need to learn and adapt to new regulations [46]. Similarly, imposing new regulatory laws on an ecosystem that has not yet matured can be a challenging task for all participants involved [20].

Robison's [62] discussion of the United States Stored Communications Act (SCA) implies that this strong and deterministic legal infrastructure is simply outdated for today's technology, including cloud computing. The author describes and contrasts how cloud providers incorporate terms of service (ToS) and privacy policies into the agreed service. In comparison, [7] discusses the Stored Communications Act (SCA) and imposes recommendations and future frameworks. The recommendations include: removing the remote computing services (RCS) and electronic communication services (ECS) categories, requiring warrants, and implementing a statutory suppression remedy in the SCA. The two studies utilized the SCA as a foundation, although [62] rather intended to cooperate with and provide guidelines for future use of cloud computing. Nguyen's [7] objective was rather to propose the alteration of the legal infrastructure itself. Robison's [62] and Nguyen's [7] aim was to reason with cloud providers and cloud customers and allow privacy protection to be strengthened. The removal of the ECS/RCS categories and the issuing of warrants avoids and prevents “searches from turning into fishing expeditions” [7, p. 2213]. Current court orders require fewer grounds to impose a search for data, while warrants allow only searches that are on reasonable grounds.

Section IX. Conclusion

This paper has used a socio-technical approach to review literature in the field of cloud computing. From an analysis of the technical-related works in the field, it is clear that security concerns are among the most critical issues facing stakeholders of the cloud computing value chain. It is apparent that most previous studies have focused on enhancing security technology without examining the actual attacks that have been successfully launched against cloud providers. This indicates that cloud data breaches are ill-defined and under-researched in cloud computing scholarly works. There are two concerns fundamental to cloud computing security that need further attention: the first is pre-cloud computing data breach manageability, and the second is post-cloud computing security manageability. Scholarly works have focused largely on simulating security solutions, although they have underestimated the importance of incorporating externalities within the studies. Externalities concern government and industry related regulations, which are integral components that are presently only scantily mentioned in the literature. Importantly, social, technical and environmental concerns have been largely overlooked, with works focusing on either the socio-technical or the technical-environmental, without reference to all three aspects of the cloud computing value chain.

The second part of this paper examined the social aspect of cloud computing, consisting of privacy and trust-based concerns in previous works. The studies found in this area demonstrated the importance of privacy and trust within cloud computing, not only as supporting continued usage of these services but also as a source of concerns about utilization. At the very heart of cloud data breaches are privacy and trust. Scholarly works that were reviewed also identified issues with respect to environmental concerns, such as data flow issues, regulation and service level agreements that were either misinterpreted or missing from government statutory legislation and potential cloud providers' terms of service. It was obvious from the review of literature that much research to date in the cloud computing field has focused more on technical solutions than on the actual social implications of cloud computing data breaches. This signifies the need for a balanced approach, particularly with respect to the social requirements of cloud customers and of the end-users of cloud solutions who may not even be aware that they are using cloud services.

In terms of the environmental aspect of cloud computing, what we found is that “systems” today not only have a global reach but that technology itself is sprawled over a global landscape. Cloud providers do not simply operate from one location; for the purposes of redundancy, cost, and legal boundaries they may operate various components of a system scattered all over the world. It may even be impossible for the cloud provider to denote which part of a given transaction is occurring locally as opposed to across the border. Previous works, with the exception of a small number of papers, have not addressed this regulatory/legal aspect of cloud computing. Even fewer studies say anything significant about the vulnerability of cloud computing end-users (i.e. everyday consumers) with respect to regulation once a data breach has occurred. What happens when hackers successfully breach a cloud computing service, and personal data from a cloud customer's services is stolen or leaked? Who is informed? How are they informed? When are the end-users of the cloud customer notified of a breach? How is a cloud customer supported for damage to their brand by a successful security breach, and more importantly, how does a consumer of a service based on cloud infrastructure reclaim their personal information once it has been compromised, and how are they compensated for the loss? In conclusion, there is an urgent need for research that takes a balanced approach to cloud computing data breaches and incorporates the end-user, not just the cloud provider and the cloud business customer, into the study. There also needs to be a balance struck between the social, technical and environmental aspects covered in finding a practicable solution to security breaches as they continue to occur, for these are inevitable.


Citation: David Kolevski, Katina Michael, "Cloud computing data breaches a socio-technical review of literature", 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), 8-10 Oct. 2015, Noida, India, DOI: 10.1109/ICGCIoT.2015.7380702

Towards a Conceptual Model of User Acceptance of Location-Based Emergency Services

Abstract

This paper investigates the introduction of location-based services by government as part of an all-hazards approach to modern emergency management solutions. Its main contribution is in exploring the determinants of an individual's acceptance or rejection of location services. The authors put forward a conceptual model to better predict why an individual would accept or reject such services, especially with respect to emergencies. While it may be posited by government agencies that individuals would unanimously wish to accept life-saving and life-sustaining location services for their well-being, this view remains untested. The theorised determinants include: visibility of the service solution, perceived service quality features, risks perceived in using the service, trust in the service and the service provider, and perceived privacy concerns. The main concern here is to predict human behaviour, i.e. acceptance or rejection. Given that location-based services are fundamentally a set of electronic services, this paper employs the Technology Acceptance Model (TAM), a special adaptation of the Theory of Reasoned Action (TRA), as the theoretical foundation of its conceptualisation. A series of propositions is drawn from the mutual relationships between the determinants, and a conceptual model is constructed from the determinants and guided by the propositions. It is argued that the conceptual model presented offers the field of location-based services research a justifiable theoretical approach suitable for further empirical research in a variety of contexts (e.g. national security).

1. Introduction

Emergency management (EM) activities have long been practiced in civil society. Such activities evolved from simple precautions and scattered procedures into more sophisticated management processes that include preparedness, protection, response, mitigation and recovery strategies (Canton, 2007). In the twentieth century, governments have been utilising technologies such as sirens, speakers, radio, television and internet to communicate and disseminate time-critical information to citizens about impending dangers, during and after hazards. Over the past decade, location based services (LBS) have been implemented, or considered for implementation, by several countries to geographically deliver warnings, notifications and possibly life-saving information to people (Krishnamurthy, 2002; Weiss et al., 2006; Aloudat et al., 2007; Jagtman, 2010).

LBS take into account the pinpoint geographic position of a given device (handheld, wearable, implantable) and provide the user of the device with value-added information based on the derived locational information (Küpper, 2005; Perusco & Michael, 2007). The location information can be obtained using various indoor and/or outdoor positioning technologies that differ in their range, coverage, precision, target market, purpose and functionality. Radio frequencies, cellular telecommunications networks and global navigation satellite systems are amongst the main access media used to determine the geographic location of a device (Michael, 2004; Perusco & Michael, 2007). The collected location information can be stored for the purpose of further processing (e.g. analysing the whereabouts of a fleet of emergency service vehicles over a period of time) or combined with other relevant information and sent back to the user in a value-added form (e.g. traffic accidents and alternative routes). The user can either initiate a request for the service, or the service can be triggered automatically when the device enters, leaves or comes within the vicinity of a defined geographic area.
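To make the automatic-trigger idea above concrete, the following is a minimal illustrative sketch (not from the original paper; all names, coordinates and the 500 m buffer are hypothetical) of a geofence check that decides whether a handset should receive a notification when it comes within the vicinity of a defined hazard area:

```python
# Minimal geofence-trigger sketch: notify a device when its reported position
# falls inside, or within a buffer around, a circular hazard zone.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def should_notify(device_pos, hazard_centre, hazard_radius_m, buffer_m=500):
    """True if the device is inside the hazard zone or within a buffer around it."""
    distance = haversine_m(*device_pos, *hazard_centre)
    return distance <= hazard_radius_m + buffer_m

# Hypothetical example: a handset near Wollongong and a nearby severe-weather cell.
handset = (-34.4278, 150.8931)
storm_cell = (-34.45, 150.88)
print(should_notify(handset, storm_cell, hazard_radius_m=3_000))  # True
```

In practice a carrier-side implementation would evaluate such a check against all active handsets in the affected cells, but the trigger logic is essentially the containment test sketched here.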

The conventional use of LBS in emergency management is to find the almost exact location of a mobile handset after an emergency call or a distress short message service (SMS) message. Although the accuracy of the positioning results ranges from a few metres up to several kilometres, the current objective of several governments is to regulate telecommunications carriers to provide the location information within accuracies of between 50 and 150 metres. This type of service is generally known as wireless E911 in North America (i.e. Canada and the United States), E112 in the European Union, and similarly, but not officially, E000 in Australia.

Yet even with approximate levels of accuracy, LBS applications have the ability to create much more value when they are utilised under an all-hazards approach by government. For example, with LBS in use, government agencies pertinent to the emergency management portfolio can collaborate with telecommunications carriers in the country to disseminate rapid warnings and relevant safety messages to all active mobile handsets regarding severe weather conditions, an act of terrorism, an impending natural disaster or any other extreme event that has happened, or is about to happen, in the vicinity of those handsets. For that reason, LBS solutions are viewed by different governments around the world as an extremely valuable addition to their arrangements for emergency notification purposes (Aloudat et al., 2007; Jagtman, 2010).

However, in relation to LBS and EM, almost no study has explored an individual's acceptance of utilising the services for emergency management. One might rightly ponder whether any individual would ever forego LBS in a time of emergency. Nonetheless, despite the apparent benefits of this type of electronic service, its commercial utilisation has long raised numerous technical, social, ethical and legal issues amongst users. Examples range from the quality of the service information being provided, to issues related to the right of citizen privacy, to concerns about legal liability for service failure or information disclosure (O'Connor & Godar, 2003; Tilson et al., 2004; Perusco et al., 2006; Perusco & Michael, 2007; Aloudat & Michael, 2011). Accordingly, the contribution of this paper is to discuss the potential determinants or drivers of a person's acceptance or rejection of location-based services for emergency management, and to propose a conceptual research model that comprises those drivers and serves as the theoretical basis needed for further empirical research.

The rest of this paper is organised as follows: Section 2 discusses the factors expected to impact a person's perceptions towards the services and presents the theoretical propositions about the expected relationships between the factors. Section 3 introduces the conceptual model and its theoretical foundation. Section 4 outlines the steps taken to pretest the model via a pilot survey and provides the analysis results of the data collected. Section 5 concludes the paper and discusses the implications of this research work, including its theoretical contributions to the scholarly literature.

2. Determinants of acceptance or rejection

A review of the acceptance and adoption literature was conducted to identify, critically assess and then select the factors that would most likely influence individuals' beliefs regarding the use of LBS for emergencies. This approach has been justified by Taylor and Todd (1995), and by Venkatesh and Brown (2001), on the basis that there is a wealth of information systems (IS) acceptance research, which minimises the need to extract beliefs anew for each new acceptance setting. The adopted working definitions for the selected factors are summarised in Table 1.

Table 1. Summary of the constructs and their definitions

Factor | Description of the Adopted Working Definition | Based Upon

Individual's attitude towards the use of LBS | Individual's positive or negative feelings towards using LBS in emergencies. | Fishbein and Ajzen (1975)

Individual's intention to use LBS | Individual's decision to engage or not to engage in using LBS in emergencies. | Fishbein and Ajzen (1975)

Trust | The belief that allows a potential LBS user to willingly become vulnerable to the use-case outcome of LBS, having taken the characteristics of LBS into consideration, irrespective of the ability to monitor or control the services or the service provider. | Mayer et al., (1995); McKnight and Chervany (2001)

Risk as perceived by the potential user | Individual's belief of the potential loss and the adverse consequences of using LBS in emergencies, and the probability that these consequences may occur if the services are used. | Pavlou and Gefen (2004); Heijden et al., (2005)

Perceived usefulness | Individual's perception that using LBS for managing emergencies is useful. | Davis et al., (1989)

Perceived ease of use | The degree to which the prospective user expects LBS to be free of effort. | Davis et al., (1989)

Visibility | The extent to which the actual use of LBS is observed as a solution to its potential users. | Agarwal and Prasad (1997)

Perceived service qualities | Individual's global judgment relating to the superiority of the service. | Parasuraman et al., (1988)

Perceived currency | Prospective user's perception of receiving up-to-the-minute service information during emergencies. | Zeithaml et al., (2000); Yang et al., (2003)

Perceived accuracy | Prospective user's perception about the conformity of LBS with its actual attributes of content, location, and timing. | Zeithaml et al., (2000); Yang et al., (2003)

Perceived responsiveness | Prospective user's perception of receiving a prompt LBS service during emergencies. | Parasuraman et al., (1988); Liljander et al., (2002); Yang et al., (2003)

Privacy concerns as perceived by the prospective user | Individual's concerns regarding the level of control by others over personal identifiable information. | Stone et al., (1983)

Collection | The concern that extensive amounts of location information or other personal identifiable data will be collected when using LBS during emergencies. | Smith et al., (1996); Junglas and Spitzmuller (2005)

Unauthorised secondary use | The concern that LBS information is collected for emergency purposes but will be used for other purposes without explicit consent from the individual. | Smith et al., (1996); Junglas and Spitzmuller (2005)

A further discussion of each proposed factor and the criteria behind its selection are presented in the following sections.

2.1. The Visibility of Location-Based Emergency Services

Many individuals may not be aware of the possible utilisation of location-based services in emergency management and, therefore, it could be argued that the direct advantages and disadvantages of such utilisation are not visible to them (Pura, 2005; Chang et al., 2007). Individuals who are not aware of the existence of LBS, or basically do not know anything about the capabilities of this type of electronic service in the domain of emergency management, may not develop an appreciation, or even a depreciation, of the services unless they are properly and repeatedly introduced (exposed) to LBS emergency management solutions. In other words, people may not be able to accurately judge the advantages or disadvantages of LBS unless the application of LBS is visible to them. It should be noted, however, that the exposure effect does not necessarily increase the perceived functionality of the services, but it can greatly enhance or degrade the perceptions of an individual about the usefulness of the services, thus influencing their acceptance or rejection of the services (Thong et al., 2004).

One of the key attributes of the Diffusion of Innovation (DOI) Theory by Rogers (1995) is the construct of observability, which is “the degree to which the results of an innovation are observable to others” (p. 16). Innovation is “an idea, practice, [technology, solution, service] or object that is perceived as new by an individual” (Rogers, 1995, p. 135). Later, observability was treated by Moore and Benbasat (1991) as two distinct constructs of demonstrability and visibility. Demonstrability is “the tangibility of the results of using an innovation,” and visibility is “the extent to which potential adopters see the innovation as being visible in the adoption context” (Agarwal & Prasad, 1997, p. 562). A further interpretation of visibility is that an innovation, application, solution, technology or service may not be new but may still be unknown to its prospective users. This is probably the case with LBS and their application, where the services have been around for several years now, yet their general usage rates, especially in the context of emergency management, are still extremely limited worldwide (Frost & Sullivan, 2007; O'Doherty et al., 2007; Aloudat & Michael, 2010).

The main contribution of the DOI theory to this paper is the integration of its visibility construct in the proposed conceptual model. Visibility is defined as the extent to which the actual utilisation of LBS in EM is observed as a solution to its potential users. Considering the arguments above and following a line of reasoning in former studies, such as Karahanna et al., (1999) and Kurnia and Chien (2003), the following proposition is given:

Proposition P1: The perception of an individual of the usefulness of location-based services for emergency management is positively related to the degree to which the services as a solution are visible to him or her.

2.2. The Quality Features of Location-Based Emergency Services

A classic definition of service quality is that it is “a judgment, or attitude, relating to the superiority of the service” (Parasuraman et al., 1988, p. 16). Quality is, therefore, a result of personal subjective understanding and evaluation of the merits of the service. In the context of emergency management, individuals may not always have comprehensive knowledge about the attributes of LBS in such a context or the capabilities of the services for emergencies. Consequently, individuals may rely on indirect or inaccurate measures to judge such attributes. There is therefore a need to create verifiable direct measurements that present the perceived (subjective) quality in an objective way (determinable dimensions), so that the impact of the quality features of LBS on people's opinions towards utilising the services for EM can be examined.

The quality of electronic services (e-services) has been discerned by several researchers as a multifaceted concept, with different dimensions proposed for different service types (Zeithaml et al., 2002; Zhang & Prybutok, 2005). Unfortunately, in the context of LBS there is no existing consummate set of dimensions that can be employed to measure the impact of LBS quality features on people's acceptance of the services. Nonetheless, a set by Liljander et al., (2002) can serve as a good candidate for this purpose. The set of Liljander et al., was adapted from the well-known SERVQUAL model of Parasuraman et al., (1988), but redesigned to accurately reflect the quality measurements of e-services. The dimensions of Liljander et al., (2002) include reliability, responsiveness, customisation, assurance/trust, and user interface.

Since LBS belong to the family of e-services, most of the aforementioned dimensions in the Liljander et al., model are highly pertinent and can be utilised to the benefit of this research. In addition, as the dimensions are highly adaptable to capture new media (Liljander et al., 2002), it is expected that they would be capable of explaining people's evaluation of the introduction of LBS into modern emergency management solutions. Moreover, the small number of these dimensions is expected to provide a parsimonious yet reliable approach to studying the impact of LBS quality features on people's opinions, without the need to employ larger scales such as that of Zeithaml et al., (2000), which comprises eleven dimensions, making it almost impractical to employ alongside other theorised constructs in any proposed conceptual model.

The interpretation of the reliability concept follows the definitions of Kaynama and Black (2000), Zeithaml et al., (2000) and Yang et al., (2003) as the accuracy and currency of the product information. For LBS to be considered reliable, the services need to be delivered in the most accurate state possible and within the promised time frame (Liljander et al., 2002). This is highly relevant to emergency situations, taking into account that individuals are most likely on the move and often in time-critical circumstances that always demand accurate and current services.

It is reasonable to postulate that the success of LBS utilisation in emergency situations depends on the ability of the government, as the provider of the service, to disseminate the service information to a large number of people in a timely fashion. Because fast response to changing situations, or to people's emergent requests, amounts to providing timely information, timeliness is closely related to responsiveness (Lee, 2005). Therefore, investigating the responsiveness of LBS is also relevant in this context.

Liljander et al.'s (2002) user interface and customisation dimensions are not explicitly pertinent to EM. The user interface dimension comprises factors such as aesthetics, which are of little relevance to an emergency situation. Customisation refers to the state where information is presented in a tailored format to the user; this can be done for and by the user. As LBS are customised based on the location of the recipient and the type of information being sent, customisation is already an intrinsic quality of the core features of these services.

Therefore, the service quality dimensions that are expected to impact on people's acceptance of LBS for EM include:

1. Perceived currency: the perceived quality of presenting up-to-the-minute service information during emergency situations;

2. Perceived accuracy: individual's perception about the conformity of LBS with its actual attributes of content, location, and timing;

3. Perceived responsiveness: individual's perception of receiving a prompt service (Parasuraman et al., 1988; Liljander et al., 2002; Yang et al., 2003).

Although perceived service quality is a representation of a person's subjective expectations of LBS, and not necessarily a true interpretation of the actual attributes of the service, it is expected nonetheless that these perceptions convey the degree of quality the prospective user anticipates in LBS, given that only limited knowledge about the actual quality dimensions is available to them in the real world.

It could be posited that an individual's perception of how useful LBS are in emergencies can be highly influenced by the degree to which he or she perceives the services to be accurate, current and responsive. Here, the conceptual model follows the same rationale as TAM, which postulates perceived ease of use as a direct determinant of perceived usefulness. Perceived ease of use is defined as “the degree to which an individual believes that using a particular system would be free of physical and mental effort” (Davis, 1989, p. 320). It is therefore justifiable to postulate that ease of use is related to the technical quality features of LBS, since an individual's evaluation of the ease of use of the service is highly associated with the design of the service itself. This explains why ease of use has been conceived by several researchers as one of the dimensions of service quality (Zeithaml et al., 2002; Yang et al., 2003; Zhang & Prybutok, 2005).

Building upon the arguments above and following the rationale of TAM, the LBS quality features of currency, accuracy and responsiveness are theorised in the conceptual model as direct determinants of perceived usefulness and, accordingly, the following propositions are defined:

Proposition P2a: There is a positive relationship between the perceived currency of location-based services and the perceived usefulness of the services for emergency management;

Proposition P2b: There is a positive relationship between the perceived accuracy of location-based services and the perceived usefulness of the services for emergency management;

Proposition P2c: There is a positive relationship between the perceived responsiveness of location-based services and the perceived usefulness of the services for emergency management.

2.3. Risks Associated with Using Location-Based Emergency Services

Risk of varying types exists on a daily basis in a human's life. In extreme situations, such as emergencies and disasters, perceptions of risk stem from the fact that the sequence of events and the magnitude of the outcome are largely unknown or cannot be totally controlled. If one takes into account that risky situations generally affect the confidence of people in technology (Im et al., 2008), then the decision of an individual to accept LBS for EM might be influenced by his or her intuition that these electronic services could be easily disrupted, since the underlying infrastructure may suffer heavily in the severe conditions usually associated with such situations, especially in large-scale disasters. A telling example is Hurricane Katrina, in 2005, which caused serious disruptions throughout New Orleans, Louisiana, and rendered inoperable almost every piece of public and private infrastructure in the city. As a result, uncertainty about the intensity of extreme situations, coupled with their unforeseeable contingencies, may have long-term implications for one's perceptions towards the use of all technologies, including LBS, in life-threatening situations such as emergencies.

Because it is rational to believe that individuals would perceive different types of risk in emergencies, it might be highly difficult to examine particular facets of risk separately, since they can all be inextricably intertwined. Therefore, following the theoretical justification of Pavlou (2003), perceived risk is theorised in the conceptual model as a high-order unidimensional concept.

Perceived risk is defined as the individual’s belief of the potential loss and the adverse consequences of using LBS in emergencies and the probability that these consequences may occur if the services are used. Bearing in mind the high uncertainty that is usually associated with such events, this paper puts forward the following proposition:

Proposition P3: The risks perceived in using location-based services for emergency management have a negative influence on the perceived usefulness of the services.

2.4. People's Trust in Location-Based Emergency Services

Trust has long been regarded as an important aspect of human interactions and mutual relationships. Basically, any intended interaction between two parties proactively requires an element of trust predicated on the degree of certainty in one's expectations or beliefs of the other's trustworthiness (Mayer et al., 1995; Li, 2008). Uncertainty in e-services, including LBS, leads individuals to reason about the capabilities of the services and their expected performance, which eventually brings them either to trust the services by willingly accepting to use them or to distrust the services by simply declining to use them. In emergencies, individuals may consider the possible risks associated with LBS before using this kind of service. Therefore, individuals are likely to trust the services and engage in a risk-taking relationship if they perceive that the benefits of LBS outweigh the risks. However, if high levels of risk are perceived, then it is most likely that individuals do not have enough trust in the services and, therefore, will not engage in risk-taking behaviour by using them (Mayer et al., 1995). Consequently, it could be posited that trust in LBS is a pivotal determinant of accepting the services, especially in emergency situations where great uncertainty is always present.

Trust has generally been defined as the belief that allows a person to willingly become vulnerable to the trustee after having taken the characteristics of the trustee into consideration, whether the trustee is another person, a product, a service, an institution or a group of people (McKnight & Chervany, 2001). In the context of LBS, the definition encompasses trust in the service provider (i.e. government in collaboration with telecommunications carriers) and trust in the services and their underlying infrastructure. This definition is in agreement with the generic model of trust in e-services, which encompasses two types of trust: trust in the government agency controlling and providing the service, and trust in the technology and underlying infrastructure through which the service is provided (Tan & Thoen, 2001; Carter & Bélanger, 2005; Horkoff et al., 2006).

The willingness to use the services can be regarded as an indication that the person has taken into account the characteristics of both the services and the provider of the services, including any third parties in between; it is therefore highly plausible that investigating trust propensity in the services would provide a prediction of trust in both LBS and their provider. Some could reasonably argue that trust should be examined with the proposition that the person knows, or at least has a presumption of knowledge, about the services, their usefulness and the potential risks associated with them. Nonetheless, it should be noted here that trust is, ipso facto, a subjective interpretation of the trustworthiness of the services, given the limited knowledge of the actual usage of LBS in the domain of emergency management in the real world.

Despite the general consensus on the existence of a mutual relationship between trust and risk, the two concepts should be investigated separately when examining their impact on the acceptance of LBS, since each usually shows a different set of antecedents (Junglas & Spitzmuller, 2006). Trust and perceived risk are essential constructs when uncertainty is present (Mayer et al., 1995). However, each of the two has a different type of relationship with uncertainty. While uncertainty augments the risk perceptions of LBS, trust reduces the individual's concerns regarding the possible negative consequences of using the services, thus alleviating uncertainty around their performance (Morgan & Hunt, 1994; Nicolaou & McKnight, 2006).

Therefore, as trust in LBS lessens the uncertainty associated with the services, thus reducing the perceptions of risk, this paper theorises that perceived risk is negatively related to an individual's trust in LBS. This is in line with a large body of previous empirical research, which supports the influence of trust on the perceptions of risk (Gefen et al., 2003). Furthermore, by reducing uncertainty, trust is assumed to create a positive perspective regarding the usefulness of the services and to provide expectations of an acceptable level of performance. Accordingly, the following propositions are defined:

Proposition P4: Trust in location-based services positively influences the perceived usefulness of the services for emergency management;

Proposition P5: Trust in location-based services negatively impacts the risks perceived from using the services for emergency management.

2.5. Privacy Concerns Pertaining to Location-Based Emergency Services

In the context of LBS, privacy pertains mainly to the locational information of the person and the degree of control he or she exercises over this type of information. Location information is regarded as highly sensitive data that when collected over a period of time or combined with other personal information can infer a great deal about a person’s movements and in turn reveal more than just one’s location. Indeed, Clarke and Wigan (2008) noted that knowing the past and present locations of a person could, amongst other things, enable the discovery of the person’s behavioural patterns in a way that could be used, for example, by governments to create a suspicion, or by the private sector to conduct target marketing.

Privacy concerns could originate when individuals become uncomfortable with the perception that there is constant collection of their personal location information, with the idea of its perennial availability to other parties, and with the belief that they have incomplete control over the collection, the extent, the duration, the timing or the amount of data being collected about them.

The traditional commercial use of LBS, where a great level of detail about the service application is regularly available to the end user, may not create much sensitivity towards privacy, since in most cases the explicit consent of the user is a prerequisite for initiating these services. This is the case in the markets of the United States, Europe and Australia (Gow, 2005; Code of Practice of Passive Location Services in the UK, 2006; The Australian Government: Attorney General's Department, 2008). However, in emergencies, pertinent government departments and law enforcement agencies have the power to temporarily waive the person's right to privacy, based on the assumption that consent is already implied when collecting location information in such situations (Gow, 2005; Pura, 2005).

The implications of waiving consent, even temporarily, may have long-term adverse effects on people's perspectives towards the services in general. It also has the potential to raise a debate about the extent to which individuals are truly willing to relinquish their privacy in exchange for a sense of continuous security (Perusco et al., 2006). The debate could easily be augmented in the current political climate of the so-called “war on terror”, where governments have started to bestow additional powers on themselves to monitor, track, and gather personal location information in a way that never could have been justified before (Perusco & Michael, 2007). As a result, privacy concerns are no less relevant to emergency management.

Four privacy concerns have been identified previously by Smith et al. (1996): collection, unauthorised secondary use, errors in storage, and improper access to the collected data. These concerns are also pertinent to LBS (Junglas & Spitzmuller, 2006). Collection is defined as the concern that extensive amounts of location information or other personal identifiable information would be collected when using LBS for emergency management. Unauthorised secondary use is defined as the concern that LBS information is collected for the purposes of emergency management but is ultimately used for other purposes without explicit consent from the individual. Errors in storage describes the concern that the procedures taken against accidental or deliberate errors in storing location information are inadequate. Improper access is the concern that the stored location information is accessed by parties who do not have the authority to do so.

Two particular privacy concerns, collection and unauthorised secondary use, are integrated into the conceptual model. These concerns are expected to have a direct negative impact on the perceived usefulness of LBS. The other prominent constructs of trust and perceived risk are assumed to have mediating effects on the relationship between privacy concerns and perceived usefulness, since both constructs (i.e. trust and perceived risk) can reasonably be regarded as outcomes of the individual's assessment of the privacy concerns. For instance, if a person does not have much concern about the privacy of his or her location information, then it is most likely that he or she trusts the services, thus perceiving LBS to be beneficial and useful. On the other hand, if perceptions of privacy concerns were high, then the individual would probably not engage in risk-taking behaviour, resulting in lower perceptions of the usefulness of the services.

Accordingly, perceived privacy concerns are theorised in the conceptual model as direct determinants of both trust and perceived risk. While perceived privacy concerns are postulated to have a negative impact on trust in the services, they are theorised to have a positive influence on the risks perceived from using location-based services for emergency management.

Considering the above-mentioned arguments, the following propositions are made:

Proposition P6a: Collection, as a perceived privacy concern, negatively impacts the perceived usefulness of location-based services for emergency management;

Proposition P6b: Unauthorised secondary use, as a perceived privacy concern, negatively impacts the perceived usefulness of location-based services for emergencies;

Proposition P7a: Collection, as a perceived privacy concern, has a negative impact on trust in location-based services;

Proposition P7b: Unauthorised secondary use, as a perceived privacy concern, has a negative impact on trust in location-based services;

Proposition P8a: The risks perceived from using location-based services for emergency management are positively associated with the perceived privacy concern of collection;

Proposition P8b: The risks perceived from using location-based services for emergency management are positively associated with the perceived privacy concern of unauthorised secondary use.

3. A Conceptual Model of Location-Based Emergency Services Acceptance

The determinants of LBS acceptance are integrated into a conceptual model that extends and builds upon the established theory of reasoned action (TRA), applied in a technology-specific adaptation as a technology acceptance model (TAM). See Figure 1.

Figure 1. The conceptual model of location-based emergency services acceptance

TAM postulates that usage or adoption behaviour is predicted by the individual's intention to use location-based emergency services. The behavioural intention is determined by the individual's attitude towards using the services. Both the attitude and intention are postulated as the main predictors of acceptance. The attitude, in turn, is influenced by two key beliefs: perceived ease of use and perceived usefulness of LBS. TAM also grants a basis for investigating the influence of external factors on its internal beliefs, attitude, and intention (Davis et al., 1989).

As illustrated in the model in Figure 1, the set of propositions that reflect the theoretical relationships between the determinants of acceptance is presented as arrowed lines that start from the influential factor and end at the dependent construct. The theorised factors supplement TAM's original set and are in agreement with its theoretical structural formulation. That is, all of the hypothesised effects of the factors would only be exhibited on the internal constructs (i.e. attitude and intention) through the full mediation of the internal beliefs (i.e. perceived usefulness or perceived ease of use).
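As an aid to reading Figure 1, the sketch below (not part of the original study; construct names are shorthand introduced here) encodes propositions P1 to P8b and the TAM core as signed, directed links, and checks the full-mediation constraint stated above, namely that no external factor points directly at attitude or intention:

```python
# Illustrative encoding of the conceptual model's structure as signed, directed links.
PROPOSED_LINKS = [
    # (source construct, target construct, theorised sign, proposition)
    ("visibility",                 "perceived_usefulness", "+", "P1"),
    ("perceived_currency",         "perceived_usefulness", "+", "P2a"),
    ("perceived_accuracy",         "perceived_usefulness", "+", "P2b"),
    ("perceived_responsiveness",   "perceived_usefulness", "+", "P2c"),
    ("perceived_risk",             "perceived_usefulness", "-", "P3"),
    ("trust",                      "perceived_usefulness", "+", "P4"),
    ("trust",                      "perceived_risk",       "-", "P5"),
    ("collection",                 "perceived_usefulness", "-", "P6a"),
    ("unauthorised_secondary_use", "perceived_usefulness", "-", "P6b"),
    ("collection",                 "trust",                "-", "P7a"),
    ("unauthorised_secondary_use", "trust",                "-", "P7b"),
    ("collection",                 "perceived_risk",       "+", "P8a"),
    ("unauthorised_secondary_use", "perceived_risk",       "+", "P8b"),
    # TAM core (Davis et al., 1989)
    ("perceived_ease_of_use",      "perceived_usefulness", "+", "TAM"),
    ("perceived_usefulness",       "attitude",             "+", "TAM"),
    ("perceived_ease_of_use",      "attitude",             "+", "TAM"),
    ("attitude",                   "intention_to_use",     "+", "TAM"),
]

INTERNAL_BELIEFS = {"perceived_usefulness", "perceived_ease_of_use"}
INTERNAL_OUTCOMES = {"attitude", "intention_to_use"}

# Full-mediation check: no external factor may target attitude or intention directly.
external_to_outcome = [
    link for link in PROPOSED_LINKS
    if link[1] in INTERNAL_OUTCOMES
    and link[0] not in INTERNAL_BELIEFS | INTERNAL_OUTCOMES
]
assert not external_to_outcome  # all external effects pass through the internal beliefs
```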

4. Model Pretesting

A pilot survey was conducted in order to test the reliability of the model's constructs. The IS literature places great emphasis on the importance of the piloting stage as part of a model's development (Baker, 1999; Teijlingen & Hundley, 2001). In essence, the pilot survey is an experimental study that aims to collect data from a small set of subjects in order to discover any defects or flaws that can be corrected before the conceptual model is tested in a large-scale survey (Baker, 1999; Zikmund, 2003).

4.1. Measurement of Constructs

To increase construct measurement reliability, most of the items in the survey, which had been tested and validated in former studies, were adapted to reflect the specific context of this research, i.e. location-based services. It should be emphasised here that the use of existing items from the literature is a completely valid approach (Churchill, 1979).

The scales of TAM's perceived usefulness and perceived ease of use were measured based on the original scales of Davis (1989). Attitude measurement items were adopted from two studies by Agarwal and Prasad (1999) and Van der Heijden et al., (2001). Intention-to-use items were measured using scales adopted from Junglas and Spitzmuller (2005). Trust measurements were adopted from Mayer et al., (1995) and Junglas and Spitzmuller (2005). Perceived risk items were adopted from Pavlou and Gefen (2004), given the emphasis on emergency management. The items of the visibility construct were adopted from a study by Karahanna et al., (1999). The items of perceived privacy concerns were adopted from Smith et al., (1996) and Junglas and Spitzmuller (2005). The statements of perceived service qualities were not directly available but were operationalised based on the recommendations of Churchill (1979), who suggested that each statement should express limited meaning, its dimensions should be kept simple and the wording should be straightforward.

4.2. Survey Design

The survey included an overview and introduction of the application of location-based services in emergency management. In addition, the survey provided the participants with four vignettes. Each vignette depicted a hypothetical scenario about the possible uses of LBS applications for managing potentially hazardous situations. The scenarios covered topics specific to emergencies, such as an impending natural disaster, a situation where a person was particularly in need of medical assistance, severe weather conditions and a national security issue. Two of the vignettes were designed to present location-based services in a favourable light, and the other two were designed to draw out the potential pitfalls and limitations of LBS in EM. Through the use of vignettes, participants were encouraged to project their true perceptions about the services while, at the same time, being involved in creating meaning related to their potential use in these events. Creating this meaningful attachment in context was very important, as it acted to inform participant responses, given that the utilisation of LBS in EM is still in its nascent stages worldwide.

A self-administered questionnaire was used to collect data from participants. A five-point Likert rating scale was used throughout the questionnaire. The survey, which predominantly yielded quantitative results, also included one open-ended question in order to solicit more detailed responses from the participants.

4.3. The Sample of the Pilot Survey

Six hundred pilot surveys were randomly distributed by hand, in November 2008, to households’ mailboxes in the Illawarra region and the City of Wollongong, New South Wales, Australia. Participants were asked to return their copies to the researcher within three weeks in the enclosed reply-paid envelope provided with the survey.

Although this traditional approach is time-consuming and demands a lot of physical effort, it was favoured as it is more resilient to social desirability effects (Zikmund, 2003), where respondents reply in a way they think is more socially appropriate (Cook & Campbell, 1979). In addition, it is generally associated with high perceptions of anonymity, something that may not be completely assured or guaranteed by other methods of data collection, since they tend to disclose some personal information, such as name, telephone number, email address or IP address, which may cause privacy infringements (Zikmund, 2003; Michaelidou & Dibb, 2006).

The main concern was ending up with a low response rate, an issue several researchers have noted before (Yu & Cooper, 1983; Galpin, 1987; Zikmund, 2003). Indeed, a total of 35 replies were returned, yielding an extremely low response rate of 5.8%. Two incomplete replies were excluded, leaving only 33 usable surveys for the final analysis.
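For clarity, the reported figures follow directly from the counts above:

\text{response rate} = \frac{35}{600} \approx 5.8\%, \qquad \text{usable surveys} = 35 - 2 = 33.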

Although it is a desirable goal to end up with a high response rate, to have more confidence in the results and to be able to comment on the significance of the findings (Emory & Cooper, 1991; Saunders et al., 2007), it should be noted that the pilot study's main objective is to serve as an initial test (pretest) of the conceptual model; it does not, in any way, attempt to generalise its results to a new population. Therefore, the generalisability of the findings is not an issue of contention here (Morgan & Hunt, 1994).

Nonetheless, there is much discussion in the literature of what constitutes a “good” response rate for a pilot survey and, hence, its acceptable sample size. Hunt et al., (1982), for example, stated that several researchers simply recommended a “small” sample size, while others indicated a sample size between 12 and 30 as sufficient to fulfil the requirements of the analysis. Anderson and Gerbing (1991) pretested a methodology for predicting the performance of measures in a confirmatory factor analysis with a sample size of 20. They posited the consistency of this small sample size with the general agreement between researchers that the number should be relatively small. Reynolds et al., (1993) noted that the sample size of pilot surveys is generally small when discussed in the literature, ranging from 5 to 100, depending on the goal of the study.

The main concern, however, when assessing the effect of a low response rate on the validity of the survey is non-response bias (Cummings et al., 2001; Fowler, 2001). The bias stems from the possibility that only those in the sample population who are interested in the topic of the pilot survey would provide their responses (Fowler, 2001). Nonetheless, if non-respondents' characteristics are systematically similar to those of the respondents, then non-response bias is not necessarily reduced by an increased response rate (Cummings et al., 2001).

Kanuk and Berenson (1975), in a comprehensive literature review of the factors influencing response rates to mail surveys, examined the significant differences between respondents and non-respondents, taking into account a broad range of personality traits and socio-economic and demographic characteristics. The researchers concluded that the only consistent difference was that respondents tend to be better educated.

Respondents to this pilot survey were of all levels of education, as illustrated in Table 2; for example, 7 respondents had a secondary education while 7 had postgraduate degrees, representing both the less educated and the well-educated population. It is therefore argued that non-respondents did not differ significantly from the survey's respondents, suggesting that non-response bias was not present and, therefore, that the low response rate is not an issue here. Thus, the pilot survey, with its low response rate and no systematic differences between respondents and non-respondents, is considered valid for the analysis.

Table 2. Respondents' education

The traditional benchmarks in mail survey studies that positioned a 50 percent response rate as adequate and 70 percent as very good (Babbie, 1998) should be reappraised. Current thinking rejects these unconditional criterion levels and demands a contextual approach in which the response rate is considered in conjunction with the goal of the study, its design and the nature of its sample (Fife-Schaw, 2000; Fowler, 2001).

4.4. Reliability of the Measurements

Reliability expresses the extent to which the measures in the instrument are free of random errors, thus yielding similar, consistent results if repeated (Yin, 2003; Zikmund, 2003). Reliability reflects the internal consistency of the scale items measuring the same construct for the selected data. Hence, it is basically an evaluation of measurement accuracy (Straub, 1989). Nunnally and Bernstein (1994) recommended the calculation of Cronbach's alpha coefficients to assess reliability. Straub (1989) suggested an alpha value of 0.80 as the lowest accepted threshold. However, Nunnally and Bernstein (1994) stated that 0.60 is acceptable for newly developed measures; otherwise, 0.70 should serve as the lowest cut-off value.

The common threshold value of 0.7 was selected as the minimum acceptable level based on the recommendations of Nunnally and Bernstein (1994) and Agarwal and Karahanna (2000). The results of the analysis are presented in Table 3, revealing acceptable values for nearly all measurements except perceived accuracy, which was found to be 0.684. Accordingly, one highly complex item was excluded and the revised construct was put through another round of validation, after which an acceptable coefficient of 0.724 was obtained.
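For reference, Cronbach's alpha for a k-item scale is conventionally computed as follows (the formula is standard and is not reproduced in the original paper):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the summed scale score; the coefficient approaches 1 as the items co-vary more strongly.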

Table 3. Cronbach’s alpha reliability statistics

Another reliability assessment, through the computation of composite reliability, was also conducted. It is similar in interpretation to Cronbach's alpha test, but it applies the actual loadings of the items and does not assume weight equivalency among them (Chin, 1998). Moreover, Raykov (1997) showed that Cronbach's test may underestimate the reliability of congeneric measures, leaving the researcher with lower-bound estimates of the true reliability scores. As illustrated in Table 4, the results show that all scores far exceed the recommended 0.7 threshold (Hair et al., 2006). Consequently, these results bring more confidence in the conceptual model and its constructs, as they have demonstrated high internal consistency under the evaluation of two separate reliability tests.
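For reference, composite reliability is conventionally computed from the standardised item loadings \lambda_i and their error variances (again, the formula itself is standard and is not reproduced in the paper):

\rho_c = \frac{\left(\sum_i \lambda_i\right)^{2}}{\left(\sum_i \lambda_i\right)^{2} + \sum_i \operatorname{Var}(\varepsilon_i)}.

Because the actual loadings enter the calculation, the items are not assumed to contribute equally, which is the difference from Cronbach's alpha noted above.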

Table 4. Composite reliability statistics


5. Conclusion and Implications

Despite the large body of research written to augment our understanding of the determinants of acceptance and adoption of location-based services in various usage contexts, there is a scarcity of theoretical and empirical studies that examine people's acceptance of LBS in the realm of emergencies. This is clearly a gap in the current research, to which this study makes a significant contribution. This paper is a discussion of unexplored determinants of user acceptance of location-based emergency services. These include the visibility of LBS applications in the context of emergency management; the privacy of individuals and their perceived concerns regarding extensive collection and unauthorised secondary use of the collected data by governments; the risks associated with using LBS for EM; trust in the services and in the service provider; and the currency, accuracy and responsiveness quality features of the services being offered for emergency management.

This paper proposed a conceptual model based on the aforementioned determinants that should serve as the theoretical basis for future empirical examination of acceptance. The model significantly extends and builds upon the theory of reasoned action, applied in a technology-specific adaptation as a technology acceptance model.

Although the conceptual model was built specifically to predict an individual's acceptance of LBS for emergency management, it can nonetheless be used as a generic candidate model in empirical studies to predict people's acceptance of location-based services in other security usage contexts, applications, scenarios or settings. This is possible because all of the theorised factors of the model are highly relevant to the intrinsic characteristics of LBS. Examples of where the model would be deemed particularly useful include law enforcement applications, such as matters related to the surveillance implications of location-based services and location-based evidence capture, and social issues pertaining to the application of the services, such as arrest support, traffic violations or riot control.

In addition, the proposed model can be used not only to identify the predictors of acceptance but also to help service providers design solutions that meet end-user expectations. For instance, the model identifies perceived usefulness, perceived ease of use and perceived service quality as expected determinants of acceptance. Once empirically tested, the measured impact of these factors can guide developers of the services in accommodating service requirements that reflect acceptable performance standards for potential users.
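
As a purely illustrative sketch of how such an empirical test might be specified, the hypothesised relationships can be written in lavaan-style syntax and later estimated with a structural equation modelling package; the construct names below are shorthand labels assumed for illustration, not the paper’s validated instrument items.

# Illustrative structural specification of the proposed acceptance model.
# The string is simply printed here; it could be estimated with an SEM library
# (e.g., semopy in Python or lavaan in R) once survey data are collected.
model_spec = """
# behavioural intention regressed on the theorised determinants
intention_to_use ~ perceived_usefulness + perceived_ease_of_use + trust +
    perceived_risk + privacy_concerns + visibility + perceived_service_quality
# link retained from the technology acceptance model
perceived_usefulness ~ perceived_ease_of_use
"""
print(model_spec)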

Finally, the application of location-based services in today’s society has the potential to raise concerns amongst users, and these concerns could easily be amplified in highly sensitive settings such as emergency management or counter-terrorism. While this paper presents theoretical foundations, it is hoped that the knowledge presented here can be used by governments and interested researchers towards developing more successful deployment and diffusion strategies for location-based emergency services globally. The purpose of this paper is to help channel such strategies in the right direction.

Keywords: Acceptance, Location-Based Emergency Services, Privacy, Risk, Service Quality, Technology Acceptance Model (TAM), Theory of Reasoned Action (TRA), Trust, Visibility

Citation: Anas Aloudat, Katina Michael, "Towards a Conceptual Model of User Acceptance of Location Based Emergency Services", International Journal of Ambient Computing and Intelligence, 5(2), 17-34, April-June 2013.

Heaven and Hell: Visions for Pervasive Adaptation

Abstract

With everyday objects becoming increasingly smart and the “info-sphere” being enriched with nano-sensors and networked to computationally-enabled devices and services, the way we interact with our environment has changed significantly, and will continue to change rapidly in the next few years. Being user-centric, novel systems will tune their behaviour to individuals, taking into account users’ personal characteristics and preferences. But having a pervasive adaptive environment that understands and supports us “behaving naturally” with all its tempting charm and usability, may also bring latent risks, as we seamlessly give up our privacy (and also personal control) to a pervasive world of business-oriented goals of which we simply may be unaware.

1. Visions of pervasive adaptive technologies

This session considered some implications for the future, inviting participants to evaluate alternative utopian/dystopian visions of pervasive adaptive technologies. It was designed to appeal to anyone interested in the personal, social, economic and political impacts of pervasive, ubiquitous and adaptive computing.

The session was sponsored by projects from the FET Proactive Initiative on Pervasive Adaptation (PerAda), which targets technologies and design methodologies for pervasive information and communication systems capable of autonomously adapting in dynamic environments. The session was based on themes from the PerAda book entitled “This Pervasive Day”, to be published in 2011 by Imperial College Press, which includes several authors from the PerAda projects, who are technology experts in artificial intelligence, adaptive systems, ambient environments, and pervasive computing. The book offers visions of “user heaven” and “user hell”, describing technological benefits and useful applications of pervasive adaptation, but also potential threats of technology. For example, positive advances in sensor networks, affective computing and the ability to improve user-behaviour modeling using predictive analytics could be offset by results that ensure that neither our behaviour, nor our preferences, nor even our feelings will be exempt from being sensed, digitised, stored, shared, and even sold. Other potentially undesirable outcomes to privacy, basic freedoms (of expression, representation, demonstration etc.), and even human rights could emerge.

One of the major challenges, therefore, is how to improve pervasive technology (still in its immature phase) in order to optimise benefits and reduce the risks of negative effects. FET research projects are increasingly asked to consider the social and economic impacts of science and technology, and this session aimed to engage scientists in these wider issues, considering some of the less attractive effects as well as the benefits of pervasive adaptation. Future and emerging technology research should focus on the social and economic impacts of practical applications. The prospect of intelligent services increasingly usurping user preferences, as well as a certain measure of human control, creates challenges across a wide range of fields.

2. Format

The networking session took the form of a live debate, primed by several short “starter” talks by “This Pervasive Day” authors who each outlined “heaven and hell” scenarios. The session was chaired by Ben Paechter, Edinburgh Napier University, and coordinator of the PerAda coordination action. The other speakers were as follows:

Pervasive Adaptation and Design Contractualism.

Jeremy Pitt, Imperial College London, UK, editor of “This Pervasive Day”.

This presentation described some of the new channels, applications and affordances for pervasive computing and stressed the need to revisit the user-centric viewpoint of the domain of Human-Computer Interaction. In dealing with the issues of security and trust in such complex systems, capable of widespread data gathering and storage, Pitt suggested that there is a requirement for Design Contractualism, where the designer makes moral and ethical judgments and encodes them in the system. No privacy or security model is of any value if the system developers will not respect the implicit social contract on which the model depends.

Micro-chipping People, The Risk vs Reward Debate

Katina Michael, University of Wollongong, Australia

Michael discussed the rise of RFID chip implantation in people as a surveillance mechanism, making comparisons with the CCTV cameras that are becoming commonplace in streets and buildings worldwide. These devices are ushering in an age of “Uberveillance”, she claims, with corporations, governments and individuals being increasingly tempted to read and record the biometric and locative data of other individuals. This constant tracking of location and monitoring of physical condition raises serious questions concerning security and privacy that researchers will have to face in the near future.

Who is more adaptive: the technology or ourselves?

Nikola Serbedzija, Fraunhofer FIRST, Germany

Serbedzija discussed how today's widespread information technologies may be affecting who we are as humans. We are now entering a world where information is replacing materiality, and where control over our individual data allows us to construct ourselves as we wish to be seen by others. Serbedzija then presented examples of research into ethically critical systems, including a reflective approach to designing empathetic systems that use our personal, physical data to assist us in our activities, for example in vehicle co-driving situations.

3. Conclusion

Following the presentations, the discussion was opened to the floor and panellists answered questions from conference delegates. This was augmented by a “tweet wall”, displayed on screen during the discussion session, to which delegates could send comments and opinions from a Twitter account.

Keywords: Pervasive adaptation, ubiquitous computing, sensor networks, affective computing, privacy, security

Citation: Ben Paechter, Jeremy Pitt, Nikola Serbedzija, Katina Michael, Jennifer Willies, Ingi Helgasona, 2011, "Heaven and Hell: Visions for Pervasive Adaptation", Procedia Computer Science: The European Future Technologies Conference and Exhibition 2011, Vol. 7, pp. 81-82, DOI: https://doi.org/10.1016/j.procs.2011.12.025

Toward a State of Überveillance

Introduction

Überveillance is an emerging concept, and neither its application nor its power has yet fully arrived [38]. For some time, Roger Clarke's [12, p. 498] 1988 dataveillance concept has been prevalent: the “systematic use of personal data systems in the investigation or monitoring of the actions of one or more persons.”

Almost twenty years on, technology has developed so much, and the national security context has altered so greatly [52], that there is a pressing need to formulate a new term to convey both the present reality and the Realpolitik (policy primarily based on power) of our times. However, if it had not been for dataveillance, überveillance could not be. It must be emphasized that dataveillance will always exist - it will provide the scorecard for the engine being used to fulfill überveillance.

Dataveillance to Überveillance

Überveillance takes that which was static or discrete in the dataveillance world, and makes it constant and embedded. Überveillance is not only automatic and concerned with identification, but also with real-time location tracking and condition monitoring. That is, überveillance connotes the ability to automatically locate and identify - in essence the ability to perform automatic location identification (ALI). Überveillance has to do with the fundamental who (ID), where (location), and when (time) questions in an attempt to derive why (motivation), what (result), and even how (method/plan/thought). Überveillance can be a predictive mechanism for a person's expected behavior, traits, likes, or dislikes; or it can be based on historical fact; or it can be something in between. The inherent problem with überveillance is that facts do not always add up to truth (i.e., as in the case of an exclusive disjunction, where T + T = F), and predictions based on überveillance are not always correct.
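
As a purely illustrative sketch (the type and field names below are hypothetical, not drawn from any deployed system), the who/where/when record at the heart of automatic location identification can be pictured as a simple data structure, from which the why, what, and how would then be inferred:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ALIRecord:
    """Hypothetical automatic location identification (ALI) record."""
    subject_id: str                  # who: the identified individual
    latitude: float                  # where: location fix
    longitude: float
    observed_at: datetime            # when: timestamp of the observation
    condition: Optional[str] = None  # optional monitored state, e.g., a physiological reading

# A single data point; an überveillance system would aggregate streams of such records
# in an attempt to predict motivation (why), outcomes (what), and methods (how).
sample = ALIRecord("subject-001", -34.405, 150.878, datetime(2010, 5, 1, 9, 30), "heart rate: normal")
print(sample)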

Überveillance is more than closed circuit television feeds, or cross-agency databases linked to national identity cards, or biometrics and ePassports used for international travel. Überveillance is the sum total of all these types of surveillance and the deliberate integration of an individual's personal data for the continuous tracking and monitoring of identity and location in real time. In its ultimate form, überveillance has to do with more than automatic identification technologies that we carry with us. It has to do with under-the-skin technology that is embedded in the body, such as microchip implants; it is that which cuts into the flesh - a charagma (mark) [61]. Think of it as Big Brother on the inside looking out. This charagma is virtually meaningless without the hybrid network architecture that supports its functionality: making the person a walking online node i.e., beyond luggable netbooks, smart phones, and contactless cards. We are referring here to the lowest common denominator, the smallest unit of tracking - presently a tiny chip inside the body of a human being, which could one day work similarly to the black box.

Implants cannot be left behind, cannot be lost, and supposedly cannot be tampered with; they are always on, can link to objects, and make the person seemingly otherworldly. This act of “chipification” is best illustrated by the ever-increasing uses of implant devices for medical prosthesis and for diagnostics [54]. Humancentric implants are giving rise to the Electrophorus [36, p. 313], the bearer of electric technology; an individual entity very different from the sci-fi notion of Cyborg as portrayed in such popular television series as the Six Million Dollar Man (1974–1978). In its current state, the Electrophorus relies on a device being triggered wirelessly when it enters an electromagnetic field; these properties now mean that systems can interact with people within a spatial dimension, unobtrusively [62]. And it is surely not simple coincidence that alongside überveillance we are witnessing the philosophical reawakening (throughout most of the fundamental streams running through our culture) of Nietzsche's Übermensch - the overcoming of the “all-too-human” [25].

Legal and Ethical Issues

In 2005 the European Group on Ethics (EGE) in Science and New Technologies, established by the European Commission (EC), submitted an Opinion on ICT implants in the human body [45]. The thirty-four-page document outlines legal and ethical issues related to ICT implants, and is based on the European Union Treaty (Article 6), which concerns the “fundamental rights” of the individual. Fundamental rights include human dignity, the right to the integrity of the person, and the protection of personal data. From the legal perspective the following was ascertained [45, pp. 20–21]:

  • the existence of a recognised serious but uncertain risk, currently applying to the simplest types of ICT implants in the human body, requires application of the precautionary principle. In particular, one should distinguish between active and passive implants, reversible and irreversible implants, and between offline and online implants;
  • the purpose specification principle mandates at least a distinction between medical and non-medical applications. However, medical applications should also be evaluated stringently, partly to prevent them from being invoked as a means to legitimize other types of application;
  • the data minimization principle rules out the lawfulness of ICT implants that are only aimed at identifying patients, if they can be replaced by less invasive and equally secure tools;
  • the proportionality principle rules out the lawfulness of implants such as those that are used, for instance, exclusively to facilitate entrance to public premises;
  • the principle of integrity and inviolability of the body rules out that the data subject's consent is sufficient to allow all kinds of implant to be deployed; and
  • the dignity principle prohibits transformation of the body into an object that can be manipulated and controlled remotely - into a mere source of information.

ICT implants for non-medical purposes violate fundamental legal principles. ICT implants also have numerous ethical issues, including the requirement for: non-instrumentalization, privacy, non-discrimination, informed consent, equity, and the precautionary principle (see also [8], [27], [29]). It should be stated, however, that the EGE, while not recommending ICT implants for non-medical applications because they are fundamentally fraught with legal and ethical issues, did state the following [45, p. 32]:

ICT implants for surveillance in particular threaten human dignity. They could be used by state authorities, individuals and groups to increase their power over others. The implants could be used to locate people (and also to retrieve other kinds of information about them). This might be justified for security reasons (early release for prisoners) or for safety reasons (location of vulnerable children).

However, the EGE insists that such surveillance applications of ICT implants may only be permitted if the legislator considers that there is an urgent and justified necessity in a democratic society (Article 8 of the Human Rights Convention) and there are no less intrusive methods. Nevertheless the EGE does not favor such uses and considers that surveillance applications, under all circumstances, must be specified in legislation. Surveillance procedures in individual cases should be approved and monitored by an independent court.

The same general principles should apply to the use of ICT implants for military purposes. Although this Opinion was certainly useful, we have growing concerns about the development of the information society, the lack of public debate and awareness regarding this emerging technology, and the pressing need for regulation, which has not kept pace with developments in this domain.

Herein rests the problem of human rights and striking a “balance” between freedom, security, and justice. First, we contend that it is a fallacy to speak of a balance. In the microchip implant scenario, there will never be a balance, so long as someone else has the potential to control the implant device or the stored data about us that is linked to the device. Second, we are living in a period where chip implants for the purposes of segregation are being discussed seriously by health officials and politicians. We are speaking here of the identification of groups of people in the name of “health management” or “national security.” We will almost certainly witness new, and more fixed, forms of “electronic apartheid.”

Consider the very real case where the “Papua Legislative Council was deliberating a regulation that would see microchips implanted in people living with HIV/AIDS so authorities could monitor their actions” [50]. Similar discussions on “registration” were held regarding asylum seekers and illegal immigrants in the European Union [18]. RFID implants or the “tagging” of populations in Asia (e.g., Singapore) were also considered “the next step” in the containment and eradication of the Severe Acute Respiratory Syndrome (SARS) in 2003 [43]. Apart from disease outbreaks, RFID has also been discussed as a response and recovery device for emergency services personnel dispatched to terrorist disasters [6], and for the identification of victims of natural disasters, such as in the case of the Boxing Day Tsunami [10]. The question remains whether there is a truly legitimate use function of chip implants for the purposes of emergency management as opposed to other applications. Definition plays a critical role in this instance. A similar debate has ensued in the use of the Schengen Information System II in the European Union where differing states have recorded alerts on individuals based on their understanding of a security risk [17].

In June of 2006, legislative analyst Anthony Gad reported in brief 06-13 for the Legislative Reference Bureau [16] that the:

2005 Wisconsin Act 482, passed by the legislature and signed by Governor Jim Doyle on May 30, 2006, prohibits the required implanting of microchips in humans. It is the first law of its kind in the nation reflecting a proactive attempt to prevent potential abuses of this emergent technology.

A number of states in the United States have passed similar laws [63], despite the fact that at the national level, the U.S. Food and Drug Administration [15] has allowed radio frequency identification implants for medical use in humans. The Wisconsin Act [59] states:

The people of the state of Wisconsin, represented in senate and assembly, do enact as follows: SECTION 1. 146.25 of the statutes is created to read: 146.25 Required implanting of microchip prohibited. (1) No person may require an individual to undergo the implanting of a microchip. (2) Any person who violates sub. (1) may be required to forfeit not more than $10,000. Each day of continued violation constitutes a separate offense.

North Dakota followed Wisconsin's example when Governor John Hoeven signed a two-sentence bill into state law on April 4, 2007. The bill was criticized by some who said that while it protected citizens from being “injected” with an implant, it did not prevent someone from making them swallow it [51]. And indeed, there are now a number of swallowable capsule technologies for a variety of purposes that have been patented in the U.S. and worldwide. As in a number of other states, California Governor Arnold Schwarzenegger signed bill SB 362, proposed by state Senator Joe Simitian, barring “employers and others from forcing people to have a radio frequency identification (RFID) device implanted under their skin” [28], [60]. According to the California Office of Privacy Protection [9], this bill

… would prohibit a person from requiring any other individual to undergo the subcutaneous implanting of an identification device. It would allow an aggrieved party to bring an action against a violator for injunctive relief or for the assessment of civil penalties to be determined by the court.

The bill, which went into effect on January 1, 2008, did not receive support from the technology industry, which contended that it was “unnecessary.”

Interestingly, however, it is in the United States that most chip implant applications have occurred, despite the calls for caution. The first human-implantable passive RFID microchip (the VeriChip™) was approved for medical use in October of 2004 by the U.S. Food and Drug Administration. Nine hundred hospitals across the United States have registered to use VeriChip's VeriMed system, and the corporation's focus has now moved to “patient enrollment”, including people with diabetes, Alzheimer's, and dementia [14]. The VeriMed™ Patient Identification System is used for “rapidly and accurately identifying people who arrive in an emergency room and are unable to communicate” [56].

In February of 2006 [55], CityWatcher.com reported two of its employees had “glass encapsulated microchips with miniature antennas embedded in their forearms … merely a way of restricting access to vaults that held sensitive data and images for police departments, a layer of security beyond key cards and clearance codes.” Implants may soon be applied to the corrective services sector [44]. In 2002, 27 of 50 American states were using some form of satellite surveillance to monitor parolees. Similar schemes have been used in Sweden since 1994. In the majority of cases, parolees wear wireless wrist or ankle bracelets and carry small boxes containing the vital tracking and positioning technology. The positioning transmitter emits a constant signal that is monitored at a central location [33]. Despite continued claims by researchers that RFID is only used for identification purposes, Health Data Management disclosed that VeriChip (the primary commercial RFID implant patient ID provider) had enhanced its patient wander application by adding the ability to follow the “real-time location of patients, the ability to define containment areas for different classes of patients, and one-touch alerting. The system now also features the ability to track equipment in addition to patients” [19]. A number of these issues have moved the American Medical Association to produce an ethics code for RFID chip implants [4], [41], [47].

Outside the U.S., we find several applications for human-centric RFID. VeriChip's Scott Silverman stated in 2004 that 7000 chip implants had been given to distributors [57]. Today the number of VeriChip implantees worldwide is estimated to be about 2000. So where did all these chips go? As far back as 2004, a nightclub in Barcelona, Spain [11] and Rotterdam, The Netherlands, known as the Baja Beach Club, was offering “its VIP clients the opportunity to have a syringe-injected microchip implanted in their upper arms that not only [gave] them special access to VIP lounges, but also [acted] as a debit account from which they [could] pay for drinks” [39]. Microchips have also been implanted in a number of Mexican officials in the law enforcement sector [57]. “Mexico's top federal prosecutors and investigators began receiving chip implants in their arms … in order to get access to restricted areas inside the attorney general's headquarters.” In this instance, the implant acted as an access control security device despite the documented evidence that RFID is not a secure technology (see Gartner Research report [42]).

Despite the obvious issues related to security, a few unsolicited studies forecast that VeriChip (now under the new corporate name Positive ID) will sell between 1 million and 1.4 million chips by 2020 [64, p. 21]. While these forecasts may seem overinflated to some researchers, one need only consider the very real possibility that some Americans may opt in to adopting a Class II device that is implantable, life-supporting, or life-sustaining for more affordable and better quality health care (see section C of the Health Care bill titled National Medical Device Registry [65, pp. 1001–1012]). There is also the real possibility that future pandemic outbreaks, even more threatening than the H1N1 influenza, may require all citizens to become implanted for early detection, depending on their travel patterns [66].

In the United Kingdom, The Guardian [58] reported that 11-year-old Danielle Duval had an active chip (i.e., containing a rechargeable battery) implanted in her. Her mother believes that it is no different from tracking a stolen car, albeit for a more important application. Mrs. Duval is considering implanting her younger daughter, age 7, as well, but will wait until the child is a bit older, “so that she fully understands what's happening.” In Tokyo, the Kyowa Corporation in 2004 manufactured a schoolbag with a GPS device fitted into it, to meet parental concerns about crime, and in 2005 Yokohama City children were involved in a four-month RFID bracelet trial using the I-Safety system [53]. In 2007, Trutex, a company in Lancashire, England, was seriously considering fitting the school uniforms it manufactures with RFID [31]. What might be next? Will concerned parents force microchip implants on minors?

Recently, decade-old experimental studies on microchip implants in rats have come to light tying the device to tumors [29]. The American Veterinary Medical Association [3] was so concerned that it released the following statement:

The American Veterinary Medical Association (AVMA) is very concerned about recent reports and studies that have linked microchip identification implants, commonly used in dogs and cats, to cancer in dogs and laboratory animals…. In addition, removal of the chip is a more invasive procedure and not without potential complications. It's clear that there is a need for more scientific research into this technology. [emphasis added]

We see here evidence pointing to the notion of “no return” - an admission that removal of the chip is not easy, and not without complications.

The Norplant System was a levonorgestrel contraceptive implant that had been inserted in over 1 million women in the United States, and over 3.6 million women worldwide, through 1996 [2]. The implants were inserted just under the skin of the upper arm in a surgical procedure under local anesthesia and could be removed in a similar fashion. As of 1997, there were 2700 Norplant suits pending in the state and federal courts across the United States alone. Most of the claims had to do with “pain or damage associated with insertion or removal of the implants … [p]laintiffs have contended that they were not adequately warned, however, concerning the degree or severity of these events” [2]. Thus, concerns about the potential for widespread health implications of humancentric implants have also been around for some time. In 2003, Covacio provided evidence of why implants may adversely affect humans, categorizing the effects as thermal (i.e., whole or partial rise in body heating), stimulation (i.e., excitation of nerves and muscles), and other effects, most of which are currently unknown [13].

Role of Emerging Technologies

Wireless networks are now commonplace. What are not yet common are formal service-level agreements to hand off transactions between different types of networks. These architectures and protocols are being developed, and it is only a matter of time before existing technologies gain the capability to track individuals between indoor and outdoor locations seamlessly, or a new technology is created to do what present-day networks cannot [26]. For instance, a wristwatch-style device with GPS capabilities to be worn under the skin translucently is one idea that was proposed in 1998. Hengartner and Steenkiste [23] forewarn that “[l]ocation is a sensitive piece of information” and that “releasing it to random entities might pose security and privacy risks.”

There is nowhere to hide in this digital society, and nothing remains private (in due course, perhaps, not even our thoughts). Nanotechnology, the engineering of functional systems at the molecular level, is also set to change the way we perceive surveillance - microscopic bugs (some 50,000 times smaller than the width of a human hair) will be more parasitic than even the most advanced silicon-based auto-ID technologies. In the future we may be wearing hundreds of microscopic implants, each relating to an exomuscle or an exoskeleton, and each having the power to interact with literally millions of objects in the “outside world.” The question is not whether state governments will invest in this technology: they are already making these investments [40]. The question is whether the next generation will view this technology as super “cool” and convenient and opt in without comprehending the consequences of their compliance.

The social implications of these über-intrusive technologies will obey few limits and no political borders. They will affect our day-to-day existence and our family and community relations. They will give rise to mental health problems, even more complex forms of paranoia and obsessive compulsive disorder. Many scholars now agree that with the support of modern neuroscience, “the intimate relation between bodily and psychic functions is basic to our personal identity” [45, p. 3]. Religious observances will be affected; for example, in the practice of confession and a particular understanding of absolution from “sin” - people might confess as much as they might want, but the records on the database, the slate, will not be wiped clean. The list of social implications is limited only by our imaginations. The peeping Tom that we carry on the inside will have manifest consequences for that which philosophers and theologians normally term self-consciousness.

Paradoxical Levels of Überveillance

In all of these factors rests the multiple paradoxical levels of überveillance. In the first instance, it will be one of the great blunders of the new political order to think that chip implants (or indeed nanodevices) will provide the last inch of detail required to know where a person is, what they are doing, and what they are thinking. Authentic ambient context will always be lacking, and this could further aggravate potential “puppeteers” of any comprehensive surveillance system. Marcus Wigan captures this critical facet of context when he speaks of “asymmetric information held by third parties.” Second, chip implants will not necessarily make a person smarter or more aware (unless someone can afford chip implants that have that effect), but on the contrary and under the “right” circumstances may make us increasingly unaware and mute. Third, chip implants are not the panacea they are made out to be - they can fail, they can be stolen, they are not tamper-proof, and they may cause harmful effects to the body. They are a foreign object and their primary function is to relate to the outside world not to the body itself (as in the case of pacemakers and cochlear implants). Fourth, chip implants at present do not give a person greater control over her space, but allow for others to control and to decrease the individual's autonomy and as a result decrease interpersonal trust at both societal and state levels. Trust is inexorably linked to both metaphysical and moral freedom. Therefore the naive position routinely heard in the public domain that if you have “nothing to hide, why worry?” misses the point entirely. Fifth, chip implants will create a presently unimaginable digital divide - we are not referring to computer access here, or Internet access, but access to another mode of existence. The “haves” (implantees) and the “have-nots” (non-implantees) will not be on speaking terms; perhaps this suggests a fresh interpretation to the biblical tower of Babel (Gen. 11:9).

In the scenario where a universal ID is instituted, unless the implant is removed within its prescribed time, the body will adopt the foreign object and tie it to tissue. At that moment, there will be no exit strategy and no contingency plan; it will be a life sentence of upgrades, virus protection mechanisms, and inescapable intrusion. Imagine a working situation where your computer - the one that stores all your personal data - has been hit by a worm, and becomes increasingly inoperable and subject to overflow errors and connectivity problems. Now imagine the same thing happening with an embedded implant. There would be little choice other than to upgrade or to opt out of the networked world altogether.

A decisive step towards überveillance will be a unique and “non-refundable” identification number (ID). The universal drive to provide us all with cradle-to-grave unique lifetime identifiers (ULIs), which will replace our names, is gaining increasing momentum, especially after September 11. Philosophers have argued that names are the signification of identity and origin; our names possess both sense and reference [24, p. 602f]. Two of the twentieth century's greatest political consciences (one who survived the Stalinist purges and the other the Holocaust), Aleksandr Solzhenitsyn and Primo Levi, have warned us of the connection between murderous regimes and the numbering of individuals. It is far easier to extinguish an individual if you are rubbing out a number rather than a life history.

Aleksandr Solzhenitsyn recounts in The Gulag Archipelago 1918–56 (2007, p. 346f):

[Corrective Labor Camps] quite blatantly borrowed from the Nazis a practice which had proved valuable to them - the substitution of a number for the prisoner's name, his “I”, his human individuality, so that the difference between one man and another was a digit more or less in an otherwise identical row of figures … [i]f you remember all this, it may not surprise you to hear that making him wear numbers was the most hurtful and effective way of damaging a prisoner's self-respect.

Primo Levi writes similarly in his own well-known account of the human condition in The Drowned and the Saved (1989, p. 94f):

Altogether different is what must be said about the tattoo [the number], an altogether autochthonous Auschwitzian invention … [t]he operation was not very painful and lasted no more than a minute, but it was traumatic. Its symbolic meaning was clear to everyone: this is an indelible mark, you will never leave here; this is the mark with which slaves are branded and cattle sent to the slaughter, and this is what you have become. You no longer have a name; this is your new name.

And many centuries before both Solzhenitsyn and Levi were to become acknowledged as two of the greatest political consciences of our times, an exile on the isle of Patmos - during the reign of the Emperor Domitian - referred to the abuses of the emperor cult which was practiced in Asia Minor away from the more sophisticated population of Rome [37, pp. 176–196]. He was Saint John the Evangelist, commonly recognized as the author of the Book of Revelation (c. A.D. 95):

16 Also it causes all, both small and great, both rich and poor, both free and slave, to be marked on the right hand or the forehead, 17 so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name. 18 This calls for wisdom: let him who has understanding reckon the number of the beast, for it is a human number, its number is six hundred and sixty-six (Rev 13:16–18) [RSV, 1973].

The technological infrastructures—the software, the middleware, and the hardware for ULIs—are readily available to support a diverse range of humancentric applications, and increasingly those embedded technologies which will eventually support überveillance. Multi-national corporations, particularly those involved in telecommunications, banking, and health are investing millions (expecting literally billions in return) in identifiable technologies that have a tracking capability. At the same time the media, which in some cases may yield more sway with people than government institutions themselves, squanders its influence and is not intelligently challenging the automatic identification (auto-ID) trajectory. As if in chorus, blockbuster productions from Hollywood are playing up all forms of biometrics as not only hip and smart, but also as unavoidable mini-device fashion accessories for the upwardly mobile and attractive. Advertising plays a dominant role in this cultural tech-rap. Advertisers are well aware that the market is literally limitless and demographically accessible at all levels (and more tantalizingly from cradle-to-grave consumers). Our culture, which in previous generations was for the better part the vanguard against most things detrimental to our collective well-being, is dangerously close to bankrupt (it already is idol worshipping) and has progressively become fecund territory for whatever idiocy might take our fancy. Carl Bernstein [7] captured the atmosphere of recent times very well:

We are in the process of creating what deserves to be called the idiot culture. Not an idiot sub-culture, which every society has bubbling beneath the surface and which can provide harmless fun; but the culture itself. For the first time the weird and the stupid and the coarse are becoming our cultural norm, even our cultural ideal.

Despite the technological fixation with which most of the world is engaged, there is a perceptible mood of a collective disquiet that something is not as it should be. In the face of that, this self-deception of “wellness” is not only taking a stronger hold on us, but it is also being rationalized and deconstructed on many levels. We must break free of this dangerous daydream to make out the cracks that have already started to appear on the gold tinted rim of this seeming 21st century utopia. The machine, the new technicized “gulag archipelago” is ever pitiless and without conscience. It can crush bones, break spirits, and rip out hearts without pausing.

The authors of this article are not anti-government; nor are they conspiracy theorists (though we now know better than to rule out all conspiracy theories). Nor do they believe that these dark scenarios are inevitable. But we do believe that we are close to the point of no return. Others believe that point is much closer [1]. It remains for individuals to speak up, to argue for, and to demand regulation, as has happened in several states in the United States where Acts have been established to prevent microchipping without an individual's consent, i.e., compulsory electronic tagging of citizens. Our politicians, for a number of reasons, will not legislate on this issue of their own accord, with some few exceptions. It would involve a multifaceted industry and absorb too much of their time, and there is the fear they might be labelled anti-technology or, worse still, as failing to do all that they can to fight against “terror.” This is one of the components of the modern-day Realpolitik, which in its push for a transparent society is bulldozing ahead without any true sensibility for the richness, fullness, and sensitivity of the undergrowth. As an actively engaged community, as a body of concerned researchers with an ecumenical conscience and voice, we can make a difference by postponing or even avoiding some of the doomsday scenarios outlined here.

Finally, the authors would like to underscore three main points. First, nowhere is it suggested in this paper that medical prosthetic or therapeutic devices are unwelcome technological innovations. Second, the positions, projections, and beliefs expressed in this summary do not necessarily reflect those of the individual contributors to this special section. And third, the authors of the papers embrace all that is vital and dynamic in technology, but reject its rampant application and diffusion without studied consideration of its potential effects and consequences.

References

1. "Surveillance Society Clock 23:54", American Civil Liberties Union, Oct. 2007, [online] Available: http://www.aclu.org/privacy/spying/surveillancesocietyclock.html.

2. Norplant system contraceptive inserts, Oct. 2007, [online] Available: http://www.ama-assn.org/ama/pub/category/print/13593.html.

3. "Breaking news: Statement on microchipping", American Veterinary Medical Association, Oct. 2007, [online] Available: http://www.avma.org/aa/microchip/breaking_news_070913_pf.asp.

4. B. Bacheldor, "AMA issues Ethics Code for RFID chip implants", RFID J., Oct. 2007, [online] Available: http://www.rfidjournal.com/article/articleprint/3487/-1/1/.

5. E. Ball, K. Bond, Bess Marion v. Eddie Cafka and ECC Enterprises Inc., Oct. 2007, [online] Available: http://www.itmootcourt.com/2005%20Briefs/Petitioner/Team18.pdf.

6. "Implant chip to identify the dead", BBC News, Jan. 2006, [online] Available: http://news.bbc.co.uk/1/hi/technology/4721175.stm.

7. C. Bernstein, The Guardian, June 1992.

8. P. Burton, K. Stockhausen, The Australian Medical Association's Submission to the Legal and Constitutional Inquiry into the Privacy Act 1988, Oct. 2007, [online] Available: http://www.ama.com.au/web.nsf/doc/WEEN-69X6DV/$file/Privacy_Submission_to_Senate_Committee.doc.

9. California privacy legislation, State of California: Office of Privacy Protection, July 2007, [online] Available: http://www.privacy.ca.gov/califlegis.htm.

10. "Thai wave disaster largest forensic challenge in years: Expert", Channel News Asia, Feb. 2005, [online] Available: http://www.channelnewsasia.com/stories/afp_asiapacific/view/125459/1/.html.

11. C. Chase, "VIP Verichip", Baja Beach House - Zona VIP, Oct. 2007, [online] Available: http://www.baja-beachclub.com/bajaes/asp/zonavip2.aspx.

12. R. A. Clarke, "Information technology and dataveillance", Commun. ACM, vol. 31, no. 5, pp. 498-512, 1988.

13. S. Covacio, "Technological problems associated with the subcutaneous microchips for human identification (SMHId)", InSITE-Where Parallels Intersect, pp. 843-853, June 2003.

14. "13 diabetics implanted with VeriMed RFID microchip at Boston diabetes EXPO", Medical News Today, Oct. 2007, [online] Available: http://www.medicalnewstoday.com/articles/65560.php.

15. "Medical devices; General hospital and personal use devices; classification of implantable radiofrequency transponder system for patient identification and health information", U.S. Food and Drug Administration-Department of Health and Human Services, vol. 69, no. 237, Oct. 2007, [online] Available: http://www.fda.gov/ohrms/dockets/98fr/0427077.htm.

16. A. Gad, "Legislative Brief 06-13: Human Microchip Implantation", Legislative Briefs from the Legislative Reference Bureau, June 2006, [online] Available: http://www.legis.state.wi.us/lrb/pubs/Lb/06Lb13.pdf.

17. E. Guild, D. Bigo, "The Schengen Border System and Enlargement" in Police and Justice Co-operation and the New European Borders, European Monographs, pp. 121-138, 2002.

18. M. Hawthorne, "Refugees meeting hears proposal to register every human in the world", Sydney Morning Herald, July 2003, [online] Available: http://www.smh.com.au/breaking/2001/12/14/FFX058CU6VC.html.

19. "VeriChip enhances patient wander app", Health Data Management, Oct. 2007, [online] Available: http://healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=12361.

20. "VeriChip buys monitoring tech vendor", Health Data Management, July 2005, [online] Available: http://healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=12458.

21. "Chips keep tabs on babies, moms", Health Data Management, Oct. 2005, [online] Available: http://healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=15439.

22. "Baylor uses RFID to track newborns", Health Data Management, July 2007, [online] Available: http://healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=15439.

23. U. Hengartner, P. Steenkiste, "Access control to people location information", ACM Trans. Information Syst. Security, vol. 8, no. 4, pp. 424-456, 2005.

24. "Names" in Oxford Companion to Philosophy, Oxford, U.K.: Oxford Univ. Press, pp. 602f, 1995.

25. "Nietzsche, Friedrich" in Oxford Companion to Philosophy, Oxford, U.K.: Oxford Univ. Press, pp. 619-623, 1995.

26. "RFID tags equipped with GPS", Navigadget, Oct. 2007, [online] Available: http://www.navigadget.com/index.php/2007/06/27/rfid-tags-equipped-with-gps/.

27. "Me & my RFIDs", IEEE Spectrum, vol. 4, no. 3, pp. 14-25, Mar. 2007.

28. K. C. Jones, "California passes bill to ban forced RFID tagging", InformationWeek, Sept. 2007, [online] Available: http://www.informationweek.com/shared/printableArticle.jhtml?articleID=201803861.

29. T. Lewan, "Microchips implanted in humans: High-tech helpers or Big Brother's surveillance tools?", The Associated Press, Oct. 2007, [online] Available: http://abcnews.go.com/print?id=3401306.

30. T. Lewan, "Chip implants linked to animal tumors", Associated Press/WashingtonPost.com, Oct. 2007, [online] Available: http://www.washingtonpost.com/wp-dyn/content/article/2007/09/09/AR2007090900467.html.

31. J. Meikle, "Pupils face tracking bugs in school blazers", The Guardian, Aug. 2007, [online] Available: http://www.guardian.co.uk/uk_news/story/0,2152979,00.

32. K. Michael, Selected Works of Dr. Katina Michael, Wollongong, Australia: Univ. of Wollongong, Oct. 2007, [online] Available: http://ro.uow.edu.au/kmichael/.

33. K. Michael, A. Masters, "Realised applications of positioning technologies in defense intelligence" in Applications of Information Systems to Homeland Security and Defense, IDG Press, pp. 164-192, 2006.

34. K. Michael, A. Masters, "The advancement of positioning technologies in defence intelligence" in Applications of Information Systems to Homeland Security and Defense, IDG Press, pp. 193-214, 2006.

35. K. Michael, M. G. Michael, "Towards chipification: The multifunctional body art of the net generation" in Cultural Attitudes Towards Technology and Communication, Tartu, Estonia, pp. 622-641, 2006.

36. K. Michael, M. G. Michael, "Homo electricus and the continued speciation of humans" in The Encyclopedia of Information Ethics and Security, IGI Global, pp. 312-318, 2007.

37. M. G. Michael, "Ch. IX: Imperial cult" in The Number of the Beast 666 (Revelation 13:16-18): Background Sources and Interpretation, Macquarie Univ., 1998.

38. M. G. Michael, "Überveillance: 24/7 × 365 - People tracking and monitoring", Proc. 29th International Conference of Data Protection and Privacy Commissioners: Privacy Horizons Terra Incognita, 2007-Sept.-25-28, [online] Available: http://www.privacyconference2007.gc.ca/Terra_Incognita_program_E.html.

39. S. Morton, "Barcelona clubbers get chipped", BBC News, Oct. 2007, [online] Available: http://news.bbc.co.uk/2/hi/technology/3697940.stm.

40. D. Ratner, M. A. Ratner, Nanotechnology and Homeland Security: New Weapons for New Wars, U.S.A., New Jersey:Prentice Hall, 2004.

41. J. H. Reichman, "RFID labeling in humans, American Medical Association House of Delegates: Resolution: 6 (A-06)", Reference Committee on Amendments to Constitution and Bylaws, 2006, [online] Available: http://www.ama-assn.org/amal/pub/upload/mm/471/006a06.doc.

42. M. Reynolds, "Despite the hype microchip implants won't deliver security", Gartner Research, Oct. 2007, [online] Available: http://www.gartner.com/DisplayDocument?doc_cd=121944.

43. "Singapore fights SARS with RFID", RFID J., Aug. 2005, [online] Available: http://www.rfidjournal.com/article/articleprint/446/-1/1/.

44. "I am not a number - Tracking Australian prisoners with wearable RFID tech", RFID Gazette, Oct. 2007, [online] Available: http://www. rfidgazette.org/2006/08/i_am_not_a_numb.html.

45. S. Rodotà, R. Capurro, "Ethical aspects of ICT implants in the human body", Opinion of the European Group on Ethics in Science and New Technologies to the European Commission N° 20, Adopted on 16/03/2005, Oct. 2007, [online] Available: http://ec.europa.eu/european_group_ethics/docs/avis20_en.pdf.

46. "Papua Legislative Council deliberating microchip regulation for people with HIV/AIDS", Radio New Zealand International, Oct. 2007, [online] Available: http://www.rnzi.com/pages/news. php?op=read&id=33896.

47. R. M. Sade, "Radio frequency ID devices in humans, Report of the Council on Ethical and Judicial Affairs: CEJA Report 5-A-07", Reference Committee on Amendments to Constitution and Bylaws, Oct. 2007, [online] Available: http://www.ama-assn.org/amal/pub/upload/mm/369/ceja_5a07.pdf.

48. B. K. Schuerenberg, "Implantable RFID chip takes root in CIO: Beta tester praises new mobile device though some experts see obstacles to widespread adoption", Health Data Management, Feb. 2005, [online] Available: http://www.healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=12232.

49. B. K. Schuerenberg, "Patients let RFID get under their skin", Health Data Management, Nov. 2005, [online] Available: http://healthdatamanagement.com/HDMSearchResultsDetails.cfm?articleId=12601.

50. N. D. Somba, "Papua considers 'chipping' people with HIV/AIDS", The Jakarta Post, Oct. 2007, [online] Available: http://www.thejakartapost.com/yesterdaydetail.asp?fileid=20070724.G04.

51. M. L. Songini, "N.D. bans forced RFID chipping; Governor wants a balance between technology, privacy", ComputerWorld, Oct. 2007, [online] Available: http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyId=15&articleId=9016385&intsrc=hm_topic.

52. D. M. Snow, National Security for a New Era: Globalization and Geopolitics, Addison-Wesley, 2005.

53. C. Swedberg, "RFID watches over school kids in Japan", RFID J., Oct. 2007, [online] Available: http://www.rfidjournal.com/article/articleview/2050/1/1/.

54. C. Swedberg, "Alzheimer's care center to carry out VeriChip pilot", RFID J., Oct. 2007, [online] Available: http://www.rfidjournal.com/article/articleview/3340/1/1/.

55. "Chips: High tech aids or tracking tools?", Fairfax Digital: The Age, Oct. 2007, [online] Available: http://www.theage.com.au/news/Technology/Microchip-Implants-Raise-Privacy-Concern/2007/07/22/1184560127138. html. 

56. "VeriChip Corporation adds more than 200 hospitals at the American College of Emergency Physicians (ACEP) Conference", VeriChip News Release, 2007-Oct.-11, [online] Available: http://www.verichipcorp.com/ news/1192106879.

57. W. Weissert, "Microchips implanted in Mexican officials", Associated Press, Oct. 2007, [online] Available: http://www.msnbc.msn.com/id/5439055/.

58. J. Wilson, "Girl to get tracker implant to ease parents' fears", The Guardian, Oct. 2002, [online] Available: http://www.guardian.co.uk/Print/0,3858,4493297,00.html.

59. Wisconsin Act 482, May 2006, [online] Available: http://www.legis.state.wi.us/2005/data/acts/05Act482.pdf.

60. J. Woolfolk, "Back off Boss: Forcible RFID implants outlawed in California", Mercury News, Oct. 2007, [online] Available: http://www.mercurynews.com/portlet/article/html/fragments/print_article.jsp?articleId=7162880.

61. Macquarie Dictionary, Sydney University, pp. 1094, 2009.

62. K. Michael, M. G. Michael, Innovative Automatic Identification and Location-Based Services: From Bar Codes to Chip Implants, PA, Hershey:IGI Global, pp. 401, 2009.

63. A. Friggieri, K. Michael, M. G. Michael, "The legal ramifications of microchipping people in the United States of America - A state legislative comparison", Proc. 2009 IEEE Int. Symp. Technology and Society, pp. 1-8, 2009.

64. A. Marburger, J. Coon, K. Fleck, T. Kremer, VeriChip™: Implantable RFID for The Health Industry, June 2005, [online] Available: http://www.thecivilrightonline.com/docs/Verichip_Implantable%20RFID.pdf.

65. 111th Congress, 1st Session, H.R. 11, A Bill: To provide affordable quality health care for all Americans and reduce the growth in health care spending, and for other purposes, 2010-Apr.-1, [online] Available: http://waysandmeans.house.gov/media/pdf/111/AAHCA09001xml.pdf.

66. Positive ID, Health-ID, May 2010, [online] Available: http://www.positiveidcorp.com/health-id.html.


Citation: M. G. Michael, K. Michael, "Toward a State of Überveillance", IEEE Technology and Society Magazine, vol. 29, no. 2, pp. 9-16, Summer 2010. Date of Publication: 01 June 2010. DOI: 10.1109/MTS.2010.937024.