Assessing technology system contributions to urban dweller vulnerabilities

Lindsay J. Robertson+, Katina Michael+, Albert Munoz#

+ School of Computing and Information Technology, University of Wollongong, Northfields Ave, NSW 2522, Australia

# School of Management and Marketing, University of Wollongong, Northfields Ave, NSW 2522, Australia

Received 26 March 2017, Revised 16 May 2017, Accepted 18 May 2017, Available online 19 May 2017


• Individual urban-dwellers have significant vulnerabilities to technological systems.

• The ‘exposure’ of a technological system can be derived from its configuration.

• Analysis of system ‘exposure’ allows valuable insights into vulnerability and its reduction.


Urban dwellers are increasingly vulnerable to failures of technological systems that supply them with goods and services. Extant techniques for the analysis of those technological systems, although valuable, do not adequately quantify particular vulnerabilities. This study explores the significance of weaknesses within technological systems and proposes a metric of “exposure”, which is shown to represent the vulnerability contributed by the technological system to the end-user. The measure thus contributes to the theory and practice of vulnerability reduction. The results suggest specific and general conclusions.

1. Introduction

1.1. The scope and nature of user vulnerability to technological systems

Today's urban-dwelling individuals are end-users who increasingly depend upon the supply of goods and services produced by technological systems. These systems are typically complex [1–4], and as cities and populations grow, the demands placed on them lead to redesigns and increases in complexity. End-users often have no alternative means of acquiring essential goods and services, and thus a failure in a technological system has implications for the individual that are disproportionately large compared to the implications for the system operator/owner. End-users may also lack awareness of the technological systems that deliver these goods and services, inclusive of system complexity and fragility, yet may be expected to be concerned for their own security. The resulting dependence on technology justifies the observed concern about the vulnerability that users incur from the systems that provide them with goods and services.

Researchers [5–7], alongside the tradition of military strategists [8], have presented a socio-technical perspective on individual vulnerability, drawing attention to the complexity of the technological systems tasked with the provision of essential goods and services. Meanwhile, other researchers have noted the difficulties of detailed performance modelling of such systems [9–11].

The vulnerability of an urban dweller has also been a common topic within the popular press, for example "Cyber-attack: How easy is it to take out a smart city?" [12], which speculated on how such phenomena as the "Internet of Things" affect the vulnerability of connected systems. Other popular press topics have included the possibility that motor vehicle systems are vulnerable to "hacking" [13].

There is, furthermore, widespread recognition that systems involving many 'things that can go wrong' are fragile. Former astronaut and United States Senator John Glenn said in his 1997 retirement speech [14]: "… the question I'm asked the most often is: 'When you were sitting in that capsule listening to the count-down, how did you feel?' Well, the answer to that one is easy. I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of two million parts - all built by the lowest bidder on a government contract …" His concern was justified, and most would appreciate that similar concerns apply to situations more mundane than the Mercury mission.

National infrastructure systems are typically a major subset of the technological systems that deliver goods and services to individual end-users. Infrastructure systems are commonly considered to be inherently valuable to socio-economic development, with the maintenance of security and functionality often emphasized by authors such as Gómez et al. (2011) [15]. We argue that infrastructural systems actually have no intrinsic value to the end-user, and are only valuable until another option can supply the goods and services to the user with lower vulnerability, higher reliability or both. If a house-scale sewage treatment technology were economical, adequately efficient and reliable, then reticulated, centralised sewage systems would have no value. We would also argue that the study of complete technological systems responsible for delivery of goods or services to an end-user, is distinguishable from the study of infrastructural systems.

For the urban apartment dweller, significant and immediate changes to lifestyle quality would occur if any of a list of services became unavailable. To name a few, these services would include those that allow the flow of work information, financial transactions, availability of potable water, fuel/power for lighting, heating, cooking and refrigeration, sewage disposal, perishable foods and general transport capabilities. Each of these essential services is supplied by technological systems of significant complexity, facing an undefined range of possible hazards. This paper explores the basis for assessing the extent to which some technological systems contribute to a user's vulnerability.

Perrow [16] asserts that complexity, interconnection and possibility of major harm make catastrophe inevitable. While Perrow's assertion may have intuitive appeal, there is a need for a quantitative approach to the assessment of vulnerability. Journals (e.g. International Journal of Emergency Management, Disasters) are devoted to analysing and mitigating individuals' vulnerabilities to natural disasters. While there is an overlap of topic fields, disaster scenarios characteristically assume geographically-constrained, simultaneous disruption of a multitude of services and also implicitly assume that the geographically unaffected regions can and will supply essential needs during reconstruction. This research does not consider the effects of natural disasters, but rather the potential for component or subsystem disruptions to affect the technological system's ability to deliver goods and services to the end-user.

Some technological systems, such as communications or water distribution systems, transmit the relevant goods and services via "nodes" that serve only to aggregate or distribute them. Such systems can be characterised as "homogeneous", and are thus distinguished from systems that progressively create as well as transmit goods, and that therefore require a combination of processes, input and intermediate streams, and services. The latter type of system is categorized as heterogeneous, and such heterogeneity must be accommodated in any analysis measure.

1.2. Quantification as prelude to change

We propose and justify a quantification of a technological system's contribution to the vulnerability of an urban-dwelling end-user who depends upon its outputs. The proposed approach can be applied to arbitrary heterogeneous technological systems and can be shown to be a valid measure of an attribute that had previously been only intuitively appreciated. Representative examples are used to illustrate the theory, and preliminary results from these examples illustrate some generalised concerns and approaches to decreasing urban-dwelling end-users' "exposure" to technological systems. The investigation of a system's exposure will allow users to make an informed assessment of their own vulnerability, and to reduce their exposure by making changes to system aspects within their control. Quantifying a system's exposure will also allow a system's owner to identify weaknesses, and to assess the effect of hypothetical changes.

2. Quantification of an individual's “exposure”: development of theory

2.1. Background to theory

Consider an individual's level of "exposure" under two scenarios: in the first, a specific service can be supplied to an end-user only by a single process, which depends on a second process, which in turn depends on a third. In the second scenario, the same service can be offered to the user by any one of three identical processes with no common dependencies. Any end-user of the service could reasonably be expected to feel more "exposed" under the first scenario than under the second. Service delivery systems are likely to involve complex processes that include at least some design redundancies, but also single points of failure, and cases where two or more independent failures would deny the supply of the service. For such systems, the "exposure" of the end-user may not be obvious, but it would be useful to be able to distinguish quantitatively among alternative configurations.
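The intuition behind the two scenarios can be checked mechanically. The following minimal sketch (the three-input structure and function names are illustrative assumptions, not from the source) counts the inputs whose lone failure denies the service:

```python
# Scenario 1: the service needs all of three chained processes.
# Scenario 2: any one of three independent processes suffices.
chain = lambda a, b, c: a and b and c
parallel = lambda a, b, c: a or b or c

def single_points_of_failure(fn, n=3):
    """Count the inputs whose lone failure (False) denies the service."""
    spofs = 0
    for i in range(n):
        states = [True] * n
        states[i] = False          # fail exactly one input
        if not fn(*states):
            spofs += 1
    return spofs

print(single_points_of_failure(chain))     # 3: every process is critical
print(single_points_of_failure(parallel))  # 0: full redundancy
```

The chained configuration exposes the end-user to three single points of failure, while the redundant configuration exposes none, matching the intuitive ranking of the two scenarios.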

The literature acknowledges the importance of a technological configuration's contribution to end-user vulnerability [6], yet such studies do not quantitatively assess the significance of the system configuration. Reported approaches to vulnerability evaluation can be broadly categorized according to whether they consider homogeneous or heterogeneous systems, whether they assume a static or a dynamic system response, and whether system configuration is or is not used as the basis for the development of metrics. The published literature on risk analysis (including interconnected-system risks), resilience analysis, and modelling all bear on the topic, and each is briefly summarised below:

Risk analysis may be applied to heterogeneous or homogeneous systems; it does not analyse dynamic system responses, and limits analysis to a qualitative assessment of the effect of brainstormed hazards. Classical risk analysis [17–20] requires an initial description of the system under review; however, practitioners commonly generate descriptions lacking specific system configuration detail. While many variations are possible, it is common for an expert group to carry out the risk analysis by listing all identified hazards and associated harms. Experts then categorise identified harms by severity, and hazards according to the vulnerability of the system and the probability of hazard occurrence; risk events are then classified by harm magnitude and hazard probability. Undertaking a risk analysis is valuable, yet without a detailed system definition to which the assessments of hazard and probability are applied, probability-of-occurrence evaluations may be inaccurate or may fail to identify guided hazards, and the analysis may fail to identify specific weaknesses. A further issue arises if the exercise fails to account for changes in instantaneous system state: if the system is close to design capacity when a hazard occurs, the probability of the hazard causing harm is higher than if the hazard occurred while the system was operating at lower capacity. Finally, the categories that correlate harm and hazard to generate a risk evaluation are inherently coarse-grained, meaning that changes to system configuration or components may or may not trigger a change in the category assigned to the risk.

Another analysis approach is that of “Failure Modes and Effects Analysis” (FMEA) [21], which examines fail or go conditions of each component within a system, ultimately producing a tabulated representation of the permutations and combinations of input “fail” criteria that cause system failure. FMEA is generally used to demonstrate that a design-redundancy requirement of a tightly-defined system is met.

'Resilience' has been the topic of significant research, much of it dedicated to characterising the concept and reaching definitional consensus. One representative definition [22] is "… the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time …". This definition is interpreted [23–25] quantitatively as a time-domain variable measuring one or more characteristics of the transient response. For complex systems, derivation of the time-domain response to a specific input disruption can be expected to be difficult, and such a derivation is valid only for one particular initial operational state and disruption event. Each possible system input and initial condition would generate a new time-domain response, and so a virtually infinite number of transient responses would be required to fully characterize the 'resilience' of a single technological system. All such approaches implicitly assume that the disturbance is below an assumed 'maximum tolerable' level, so that the technological system's response is a continuous function, i.e. the system does not actually fail. A further methodological issue is that evaluations of this kind are post-hoc observations, in which feedback from event occurrences leads to design changes. Thus, an implicit assumption exists that the intention of resilient design is to minimise the disturbance to an ongoing provision of goods and services, rather than to prevent output failure. Because resilience analysis examines the system's response to each permutation and combination of input disturbance, it requires a detailed knowledge of the system configuration, but is only practical for relatively simple cases (the difficulty of modelling large systems has been noted by others [9]).

A third approach constructs a model of the target system in order to infer real world behaviour. The model - as a simplified version of the real world system - is constructed for the purposes of experimentation or analysis [26]. Applied to the context of end-user vulnerability, published simplifications of communication systems, power systems and water distribution systems commonly assume system homogeneity. For example, a graph theory approach will consider the conveyance of goods and services as a single entity transmitted across a mesh of edges and vertices that each serve to either disperse or distribute the product. Once a distribution network is represented as a graph, it is possible to mathematically describe the interconnection between specified vertices [10], and to draw conclusions [27] regarding the robustness of the network. Tanaka and colleagues [28] noted that it is possible to represent homogeneous networks using graph theory notation and thus make graph theory analyses possible. Common graph theory metrics consider the connections of each edge and do not consider the possibility that an edge could carry a different service from another edge. Because the graph theory metrics assume a homogeneous system, these metrics cannot be applied directly to heterogeneous systems in which interconnections do not always carry the same goods or services.

2.2. Exposure of a technological system

In order to obtain a value for the technological system's contribution to end-user vulnerability that enables comparisons among system configurations, a quantitative analytical technique is needed. To achieve this, four essential principles are proposed to allow and justify the development of a metric that evaluates the contribution of a heterogeneous technological system to the vulnerability of an individual. These principles are:

(1) Application to individual end-user: an infrastructural system may be quite large and complex. Haimes and Jiang [11] considered complex interactions between different infrastructural systems by assigning values to degrees of interaction: the model allows mathematical exploration of failure effects but (as is acknowledged by these authors) depends on interaction metrics that are difficult to establish. This paper presents an approach that is focussed on a representative single end-user. When an individual user is considered, not only is the performance of the supply system readily defined, but the relevant system description is more hierarchical and less amorphous. Our initial work has also suggested that if failures requiring more than three simultaneous and unrelated hazards are excluded from consideration, then careful modelling can generate a defensible model without feedback loops.

(2) Service level: it is possible to not only describe goods or services that are delivered to the individual (end-user), but also to define a service level at which the specified goods or services either are or are not delivered. From a definitional standpoint, this approach allows the output of a technological system to be expressed as a Boolean variable (True/False), and allows the effect of the configuration of a technological system to be measured against a single performance criterion. For some goods/services, additional insights may be possible from separate analyses at different service levels (e.g. water supply analyzed at "normal flow" and at "intermittent trickle"); for other goods/services (e.g. power supply) a single service level (power on/off) is quite reasonable.

(3) Hazard and weakness link: events external to a technology system only threaten the output of the technology system if the external events align with a weakness in the technology system. If a hazard does not align with a weakness then it has no significance. Conversely if a weakness exists within a technological system and has not been identified, then hazards that can align with the weakness are also unlikely to be recognised. If the configuration of a particular technology system is changed, weaknesses may be removed while other weaknesses may be added. Therefore, for each weakness added, an additional set of external events can be newly identified as hazards - and correspondingly for each weakness that is removed, the associated hazards cease to be significant. Processes capable of failure and input streams that could become unavailable, are weaknesses that are significant regardless of the number and/or type of hazards of sufficient magnitude to cause failure, that might align with any specific example of such a weakness.

(4) Hazard probability: some hazards (e.g. extreme weather events) occur randomly, can be assessed statistically, and have a higher probability of occurrence over a long time period. Terrorist actions or sabotage, in particular, do not occur randomly but must be considered intelligently (mis)guided hazards. The effect of a guided hazard upon a risk assessment is qualitatively different from the effect of a random hazard: the guided hazard will occur every time the perpetrator elects to cause it, and therefore has a probability of 1.0. It is proposed that the significance of this distinction has not been fully appreciated. A malicious entity will seek out weaknesses, regardless of whether these have been identified by a risk assessment exercise or not. Since random and guided hazards have an equal effect, and both have a probability approaching 1 over a long time period, we argue that a risk assessment based upon the 'probability' of a hazard occurring is a concept with limited usefulness; vulnerability is more validly assessed by assuming that all hazards (terrorist action, component failure or random natural event) will occur sooner or later, hence having a collective probability of 1.0. Once the assumption is made that sooner or later a hazard will occur, assessment of the technological system's contribution to user vulnerability can be refocussed from consideration of hazard probability to consideration of the number and type of weaknesses with which (inevitable) hazards can align.

A heterogeneous technological system may involve an arbitrary number of linked operations, each of which (consistent with the definition stated by Slack et al. [29]) requires inputs, executes some transformation process, and produces an output that is received by a subsequent process and ultimately serves an end-user. If the output of such a system is considered to be the delivery or non-delivery of a nominated service-level output to an individual end-user, then the arbitrary heterogeneous technological system can be described by a configured system of notional AND/OR/NOT functions [30] whose inputs/outputs include unit-operations, input streams, intermediate product streams and services. For example, petrol is dispensed from a petrol station bowser to a car if fuel is present in the bulk tank, the power to a pump is available, the pipework and pump are operational, and the required control signal is valid. Hence, a notional "AND" gate with these five inputs will model the operation of the dispensing system. The valid control signal is generated when another set of different inputs is present, and the provision of this signal can likewise be modelled by a notional "AND" function with nominated inputs. The approach allows the operational configuration of a heterogeneous technological system to be represented by a Boolean algebraic expression. Fig. 1 illustrates the use of Boolean operations to represent a somewhat more complex technological system.
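The petrol-bowser example can be sketched as nested Boolean functions. This is an illustrative model only; the choice of two inputs (console power and operator authorisation) for the control-signal "AND" function is an assumption, since the text does not enumerate them:

```python
# Petrol dispensing modelled as nested notional AND gates.
# Each argument is a Boolean: True = available/operational, False = failed.

def control_signal(console_power, operator_authorisation):
    # The valid control signal is itself the output of a notional AND
    # function over its own (assumed) inputs.
    return console_power and operator_authorisation

def petrol_dispensed(bulk_fuel, pump_power, pipework, pump,
                     console_power, operator_authorisation):
    # The five-input AND gate from the text, with the control-signal
    # input expanded into its own AND function.
    return (bulk_fuel and pump_power and pipework and pump
            and control_signal(console_power, operator_authorisation))

print(petrol_dispensed(True, True, True, True, True, True))   # service delivered
print(petrol_dispensed(False, True, True, True, True, True))  # empty bulk tank
```

Composing functions in this way mirrors the nesting of notional gates: any system of arbitrary depth reduces to a single Boolean expression over its primitive processes and streams.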

Fig. 1. Process and stream operations required for system: Boolean representation.


Having represented a specific technological system as a Boolean algebraic expression, a 'truth table' can be constructed that displays all permutations of process and stream availabilities as inputs, and the technological system output as a single True or False value. From the truth table, the cases in which a single input failure will cause output failure are counted, and that total is assigned to the variable "E1". The cases where two input failures (exclusive of inputs whose failure will alone cause output failure) cause output failure are counted and assigned to "E2". A further count of the cases in which three input failures cause output failure (and where neither single nor double input failures within that triple combination would alone cause output failure) is assigned to the variable "E3", and similarly for further "E" values. A simple algorithm can generate all permutations of "operate" or "fail" for every input process and stream: if "1" is read as "operate" and "0" as "fail", then a model with n inputs (streams and processes) has 2^n input permutations. If the Boolean expression is evaluated for each binary representation of the input states (processes and streams), and the input conditions for each output-fail combination are recorded, the E1 etc. values can be computed (E1 being the number of output-failure conditions in which only a single input has failed). A truth-table approach to generating exposure metrics is illustrated in Fig. 2.
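The truth-table procedure described above can be sketched as follows. This is one possible implementation, assuming each Ek counts only failure combinations that contain no smaller combination already counted as a cause of output failure:

```python
from itertools import combinations

def exposure(output_fn, n, depth=3):
    """Compute {E1, E2, ..., Edepth} for a Boolean system model.

    output_fn takes n Booleans (True = operate, False = fail) and
    returns the system output state. Ek counts the k-input failure
    combinations that deny the output and contain no smaller
    failure combination that would alone deny it.
    """
    def fails(failed):
        states = [i not in failed for i in range(n)]
        return not output_fn(*states)

    e, counted = [], []
    for k in range(1, depth + 1):
        count = 0
        for combo in combinations(range(n), k):
            s = set(combo)
            # skip combinations containing an already-counted smaller cause
            if any(prev <= s for prev in counted):
                continue
            if fails(s):
                count += 1
                counted.append(s)
        e.append(count)
    return e

# The five-input dispensing "AND" gate: every single failure is fatal.
bowser = lambda *inputs: all(inputs)
print(exposure(bowser, 5))                     # [5, 0, 0]

# Three fully redundant supplies: only a triple failure is fatal.
print(exposure(lambda a, b, c: a or b or c, 3))  # [0, 0, 1]
```

The enumeration cost grows combinatorially with n, which is why the text later proposes nominating a depth to which exposure values are evaluated.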

Fig. 2. Evaluation of exposure, by analysis of Boolean expression.


The composite metric {E1, E2, E3 … En} is therefore mapped from the Boolean representation of the heterogeneous system, and characterizes the weaknesses through which the technological configuration contributes to end-user vulnerability. Indeed, for a given single output at a defined service level - described by a Boolean value representing "available" or "not available" - it is possible to isomorphically map an arbitrary technological system onto a Boolean algebraic expression. Thus, it is possible to create a homomorphic mapping (consistent with the guidance of Suppes [31]) to a composite metric that characterizes the weakness of the system. Furthermore, the metric allows comparison of the exposure levels of alternative technological systems and configurations.

Next, we consider whether the measure represents the proposed attribute, by considering validity criteria. Hand [32] states that construct validity “involves the internal structure of the measure and also its expected relationship with other, external measures …” and “… thus refers to the theoretical construction of the test: it very clearly mixes the measurement procedure with the concept definition”. Since the Boolean algebraic expression represents all processes, streams and interactions, it can be directly mapped to a Process Flow Diagram (PFD) and so is an isomorphic mapping of the technological system with respect to processes and streams. The truth table is a homomorphic mapping of output conditions and input combinations, with output values unambiguously derived from the input values, but the configuration cannot be unambiguously derived from the output values. The {E1, E2, E3 … En} values are therefore a direct mapping of the system configuration.

Since the configuration and components of the system are represented by a Boolean expression, and the exposure metric {E1, E2, E3 … En} is assembled directly from the representation of the technological system, it has sufficient “construct validity” in the terms proposed by Hand [32]. The representational validity of this metric to the phenomenon of interest (viz. contribution to individual end-user vulnerability) must still be considered [31,32], and two justifications are proposed. Firstly, the representation of “exposure” using {E1, E2, E3 … En} supports the common system engineering “N+1”, “N+2” design redundancy concepts [33]. Secondly, the cost of achieving a given level of design redundancy can be assumed to be related to “E” values and so enumerating these will support decisions on value propositions of alternative projects, a previously-identified criterion for a valid metric.

Generating an accurate exposure metric as described, requires identification of processes and streams, which in practice requires a consideration of representation granularity. If every transistor in a processor chip were considered as a potential cause of failure, the “exposure” value calculated for the computer would be exceedingly high. If by contrast, the computer were considered as a complete, replaceable unit, then it would be assigned an exposure value of 1. A pragmatic definition of granularity will address this issue: if some sub-system of interest is potentially replaceable as a unit, and can be attacked separately from other sub-systems, then the sub-system of interest should be considered as a single potential source of failure. This definition allows adequate precision and reproducibility by different practitioners.

Each input to an operation within a technological system will commonly be the output of another technological system, which will itself have a characteristic "exposure", and the contribution of the predecessor system's exposure to the successor system must be calculated. This problem is generalised by considering that each input to a Boolean 'AND' or 'OR' operation has a composite exposure metric, and developing the principles by which the operation's output exposure can be calculated from these inputs. Consider, for example, an AND gate with three inputs (A, B and C) whose composite exposure metrics are {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }. The contributory exposure components are added component-wise, hence the resulting exposure of the AND operation is {(A1+B1+C1), (A2+B2+C2), (A3+B3+C3) … (An+Bn+Cn)}. The generalised calculation of contributory exposure is more complex for the OR operation. For an OR gate with three inputs (A, B and C), each with composite exposure metric {A1, A2, A3 … }, {B1, B2, B3 … } and {C1, C2, C3 … }:

• The output E1 value is 0

• The output E2 value is 0

• The output E3 value is A1·B1·C1, since one single-input failure from each of the three inputs must occur for the output to fail, and each combination of one such failure per input contributes to the E3 value

• The E4 and subsequent values are calculated in exactly the same way as the E3 value.

Since the contributory system has effectively added streams and processes, the length of the output exposure vector is increased when the contributory system is considered. The proposed approach is therefore to nominate a level to which exposure values will be evaluated. If, for example, this level is set at 2, then the representation would be considered to be complete when it could be shown that no contributory system adds to the E2 values of the represented system.
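The AND and OR combination rules can be sketched in code. The component-wise sum for the AND operation follows the text directly; the OR rule here reads the text as requiring one minimal failure set from every input branch (so each Ek sums products of branch contributions whose sizes total k), which is our interpretation rather than a formula confirmed by the source:

```python
from itertools import product as cartesian

def and_combine(*vectors):
    """Exposure of an AND output: component-wise sum of input exposures."""
    n = max(len(v) for v in vectors)
    padded = [list(v) + [0] * (n - len(v)) for v in vectors]
    return [sum(col) for col in zip(*padded)]

def or_combine(*vectors, depth=3):
    """Exposure of an OR output (interpretation, see lead-in): a minimal
    output failure takes one minimal failure set from every branch, so
    Ek sums products of branch terms whose sizes total k."""
    m = len(vectors)
    out = [0] * depth
    # choose a failure-set size ki >= 1 from each branch; sizes sum to k
    for sizes in cartesian(range(1, depth + 1), repeat=m):
        k = sum(sizes)
        if k > depth:
            continue
        term = 1
        for v, ki in zip(vectors, sizes):
            term *= v[ki - 1] if ki <= len(v) else 0
        out[k - 1] += term
    return out

A, B, C = [2, 1, 0], [1, 0, 0], [3, 0, 0]
print(and_combine(A, B, C))  # [6, 1, 0]
print(or_combine(A, B, C))   # [0, 0, 6]  (E3 = A1*B1*C1 = 2*1*3)
```

Under this reading, a three-input OR contributes nothing to E1 or E2 (no single or double failure can deny all three branches), consistent with the bullet points above.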

3. Implications from theory

The current levels of expenditure on infrastructure "hardening" are well reported in the popular press. The theory presented is proposed to be capable, for a defined technological system, of quantitatively comparing the effectiveness of alternative projects. The described measure is also proposed to be capable of differentiating between systems that have higher or lower exposure, and thus of allowing prioritisation of effort. The following examples have undergone a preliminary analysis. The numerical outputs are dependent on system details and boundaries; nevertheless, the authors' preliminary results indicate the output that is anticipated, and are considered to demonstrate the value of the principles. The example studies are diverse and examine well-defined services and service levels for the benefit of a representative individual end-user. Each example involves a technological system (with a range of processes, intermediate streams, and functionalities) and may therefore be expected to include a number of cases in which a single stream/process failure will cause the service delivery to fail - and a number of other cases in which multiple failures would result in the non-delivery of the defined service. The analyses also collectively identify common contributors, technological gaps, and common principles that inform improvement decisions. In each example case, the delivered service and level is established, following which the example is described and the boundaries confirmed. The single-cause-of-failure items (contributors to the E1 value) are assessed first, followed by the dual-combination causes of failure (contributors to the E2 values) and then the E3 values. It is assumed that neither maintenance nor replacement of components is required within the timeframe considered – i.e., outputs will be generated as long as their designed inputs are present and the processes are functional.

3.1. Example 1: Petrol station, supply of fuel to a customer

The "service" in this case is the delivery, within standard fuel specifications (including absence of contaminants) and at standard flow rate, of petrol into a motor vehicle at the forecourt of a petrol station. The scope includes the operation of the forecourt pumps, the underground fuel storage tanks, metering and transactional services. Although on-site storage is significant, refilling of the underground tanks from fuel stored in national reserves can be accomplished by only a limited number of approaches, must occur frequently relative to the considered timeframe, and so must be considered. Since many sources supply the bulk collection depot, the analysis does not go further back than the bulk storage depot. Similarly, the station is unlikely to have duplicated power feeders from the nearest substation, so this supply must be considered. The financial transaction system, and the communications system it uses, must also be included in the consideration.

On the assumption that the station is staffed, sanitary facilities (sewage, water) are also required (see Fig. 3). While completely automated “truck stop” fuel facilities exist, facilities as described are common and can reasonably be called representative. The fuel dispensing systems in both cases are almost identical, however the automated facilities cannot normally carry out cash transactions, and the manned stations commonly sell other goods (food and drink) and may provide toilet facilities.

Fig. 3. Operation of petrol station.

In work not reported here, the exposure metrics of the contributory systems have been estimated as EFTPOS financial transaction {49, 12, 1}, staff facilities system {36, 4, 6}, power {2, 3, 0}. Based on these figures, the total exposure metric for the petrol delivered to the end user is estimated at {92, 20, 8}.
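These contributory metrics compose by element-wise addition. A small sketch of that arithmetic follows; the `station_local` figure is an assumption, inferred only so that the listed contributors reconcile with the quoted total of {92, 20, 8}.

```python
def combine(*metrics):
    """Element-wise sum of {E1, E2, E3} exposure tuples:
    independent contributory systems simply add their weaknesses."""
    return tuple(sum(values) for values in zip(*metrics))

eftpos = (49, 12, 1)
staff_facilities = (36, 4, 6)
power = (2, 3, 0)
station_local = (5, 1, 1)  # assumed residual from the station's own equipment

total = combine(eftpos, staff_facilities, power, station_local)
print(total)  # (92, 20, 8), the total quoted in the text
```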

In evaluating the exposure metric, the motor/petrol-pump and pipework connections do not generate E3 values (since the petrol station has four pumps, more than three failures would be required to cause a failure of the output function). The petrol station power supply affects several plant items that are local to the petrol station, and so is represented at the level which allows a valid assessment of its exposure contribution. For bulk petrol supply, numerous road-system paths exist, and many tankers and drivers are capable of bringing fuel from the refinery to the station, so these do not affect the E3 values. The electricity distribution system has more than three feed-in power stations and is assumed to have at least three High Voltage (HV) lines to major substations; however, local substations commonly have only two voltage-breakdown transformers, and a single supply line to the petrol station would be common. The local substation and feeders are assumed to be different for the sewage treatment plant, the bulk petrol synthesis and the banking system clearing-house (and are accounted for in the exposure metrics for those systems), but the common HV power system (national grid) does not contribute to the E3 values, and so it is not double-counted in assessing the power supply exposure of the petrol station and contributory systems. While the EFTPOS and sewage systems have historically had high availability, this analysis emphasises the large contribution they make to the user's total exposure, and thus suggests options for investigation.

This example illustrates the significance of the initial determination of system boundaries. Here the output is defined as fuel supplied to a user's vehicle at a representative petrol station. Other studies might instead consider a user seeking petrol anywhere within a greater geographic region (e.g. a neighbourhood). In that case the boundaries would be drawn to include alternative petrol stations, distinguishing subsystems that are common to the local stations (power, sewage, financial transactions, bulk fuel) from subsystems (e.g. local pumps) for which design redundancy is achieved by access to multiple stations.

3.2. Example 2: Sewage system services for apartment-dweller

Consider the service of the sanitary removal of human waste, as required, via the lavatory installed in an urban apartment discharging to a wastewater treatment plant. The products of the treatment operation are environmentally acceptable treated water discharged to waterways, and solid waste at environmentally acceptable specifications sent to landfill.

The technology description assumes that the individual user lives in an urban area with a population of 200,000 to 500,000. This size is selected because it is representative of a large number of cities. An informal survey of the configuration of sewage systems used by cities within this size range reveals a level of uniformity, and hence the configuration in the example is considered “representative”.

The service is required as needed; it commences at the water-flushed lavatory and ends with the disposal of treated material. Electric power supplies to pumping stations and to local water pumps are unlikely to have multiple feeders and will be considered back to the nearest substation. The substation can be expected to have multiple feeders, so it is not considered necessary to trace the electric power supply further back. Significant pumping stations would commonly have a “Local/Manual” alternative control capability, in which remote control is normal but an operator can select “manual” at the pump station and thereafter operate the pumps and valves locally.

Operationally, the cistern will flush if town water is available and the lift pump is operational and power is available to the lift pump. The lavatory will flush if the cistern is full. The waste will be discharged to the first pumping station if the pipework is intact (gravity fed). The first pumping station will operate if either the main or the backup pump is operational, and power is available, and either an operator is available or a control signal is present. The control signal will be available if the signal lines are operational and the signal equipment is operational and power for the servo motors is available and a remote operator or sensor is operational. The waste will be delivered to the treatment station if the major supply pipework is operational. The treatment station will be operational (i.e. able to separate and coalesce the sewage into a solid phase suitable for landfill, and an environmentally benign liquid that can be discharged to sea or river) if the sedimentation tank and discharge system are operable and the biofilm contactor is operational and the clarifier/aerator is operational and the clarifier sludge-removal system is operational and power supply is available and operators are available. The sludge can be removed if roads are operational and a driver, a truck and fuel are available.
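The AND/OR chain above can be encoded as a boolean service function, from which the E1 and E2 contributors fall out by enumeration. This is a simplified sketch: the component names condense the description above (the treatment plant's internal processes, for instance, are lumped into one component).

```python
from itertools import combinations

COMPONENTS = [
    "town_water", "lift_pump_power", "gravity_pipework",
    "station_power", "pump_main", "pump_backup",
    "control_signal", "operator", "treatment_plant", "treatment_power",
]

def service_ok(failed):
    up = lambda c: c not in failed
    return (up("town_water") and up("lift_pump_power")
            and up("gravity_pipework") and up("station_power")
            and (up("pump_main") or up("pump_backup"))      # duplicated pumps
            and (up("control_signal") or up("operator"))    # Local/Manual fallback
            and up("treatment_plant") and up("treatment_power"))

def minimal_cuts(max_size):
    cuts = []  # minimal failure combinations found so far
    for size in range(1, max_size + 1):
        for combo in combinations(COMPONENTS, size):
            failed = set(combo)
            if any(cut <= failed for cut in cuts):
                continue  # contains a smaller cut: not minimal
            if not service_ok(failed):
                cuts.append(failed)
    return cuts

cuts = minimal_cuts(2)
e1 = sum(1 for c in cuts if len(c) == 1)  # single points of failure
e2 = sum(1 for c in cuts if len(c) == 2)  # dual-failure combinations
print(e1, e2)  # 6 2
```

In this simplified model the six E1 items are the serial components (water, two power supplies, pipework, treatment plant and its power), while the duplicated pumps and the control/operator pair surface as the two E2 items, matching the reasoning in the next paragraph.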

Several single sources of failure (contributors to the E1 value) can be discerned: the power supply to the local water pump, the gravity-fed pipework from lavatory to first pumping station, and the power supply to the first pump station. If manual control of the pumping station is possible, then the control system contributes to the E2 value; otherwise the control system wiring and logic will contribute to the E1 value. Assuming duplicated pumping station pumps, these contribute to the E2 value. Few of the treatment plant processes will be duplicated, and so these will contribute to the E1 value. For the urban population under consideration, the treatment plant is unlikely to have dual power feeds, and so its power supply also contributes to the E1 value.

For real treatment plants, most processes will include bypasses or overflow provisions. If the service is interpreted to specify the discharge of environmentally acceptable waste, then these bypasses are irrelevant to this study. However, if the “service” were defined with relaxed environmental impact requirements, then the availability of bypasses would mean that treatment plant processes would contribute to E2 values. Common reports of untreated sewage discharge following heavy rainfall events indicate that the resilience of treatment plants is low.

The numerical values of exposure presented in Table 1 are based on a representative design; design detail may vary for specific systems.

Table 1. Contributions to exposure of sewage system.

Preliminary research included the commissioning of a risk analysis by a professional practitioner of a sewage system defined to an identical scope, components and configuration of the target system. While the risk analysis generated useful results, it failed to identify all of the weaknesses that were identified by the exposure analysis.

3.3. Example 3: Supply of common perishable food to apartment-dweller

For the third example, the supply of a perishable food is considered. The availability to a representative individual consumer of whole milk with acceptable bacteriological, nutritional and organoleptic properties is taken to represent the “service” and associated service level. Fresh whole milk is selected because it is a staple food and requires a technological system similar to that required by other minimally processed fresh foods. Nevertheless, the technological system is not trivial: the pasteurisation step requires close control if bacteriological safety is to be obtained without deterioration of the nutritional and taste qualities.

At the delivery point, a working refrigerator and electric power are required. Transit to the apartment requires electric power to operate the elevator. Transport from the retail outlet requires fuel, driver, operational vehicle, and roads. The retail outlet requires staff, electric power, functional sewage system, functional water supply, functional lighting, communications and stocktaking systems, and access to financial transaction capability. Transport to the retail outlet requires fuel, driver, roadways and operational trucks. The processing and packaging system requires pasteurisation equipment, control system, Cleaning In Place (CIP) system, CIP chemical supply, pipework-and-valve changeover system for CIP, electric-powered pumps, electrically operated air compressors, fuel-fired hot water heaters, packaging equipment, packaging material supplies, water supply, waste water disposal, sewage system and skilled operators. Neither the processing facilities, nor the retail outlet, nor the apartment would commonly have duplicated electric power feeders, and so these and associated protection systems must be considered back to the nearest substation. The substation can be assumed to have multiple input feeders, and so the electric power system need not be considered any further upstream of the substation. The heating system (hot water boiler and hot water circulation system) for the pasteuriser would commonly be fired with fuel oil, but with enough storage that it would commonly be supplied directly from a fuel wholesaler.

The processes leading from raw milk collection up to the point where pasteurised and chilled milk is packaged are defined in considerable detail by regulatory bodies, and can therefore be considered representative. There will be variation in packaging equipment types; however, each of these can be considered as a single process and single “weakness”. Distribution to supermarkets, retail sales and distribution to individual dwellings are similar across most of the western world and are considered to be adequately representative. Both the processing plant and the retail outlet are staffed and so require an operational sewage disposal system and water supply.

Delivery of the milk will be achieved if the refrigeration unit in the apartment is operational and power is supplied to it and packaged product is supplied. Packaged product can be supplied if the apartment elevator and power are available and individual transport from the retail outlet is functional and retailing facilities exist and are supplied with power and are staffed and have functional financial transaction systems. The retail facility can be staffed if skilled persons are available and transport allows them to access the facility and staff facilities (water, sewage systems) are operational. The sewage system is functional if water supply is available and the pumping station is functional and is supplied with power and control systems are available. The bulk consumer packs are delivered to the retail outlet if road systems, drivers, vehicles and fuel are available. The packaged product is available from the processing facility if fuel is available to heat the pasteuriser and power is available to operate pumps and the control system is operable and skilled operators are available and the homogeniser is operational and compressed air for the packaging equipment is available and the packaging equipment is operational. Product can be made only if CIP chemicals are available and CIP waste disposal is operational. Since very many suppliers and transport options are capable of supplying raw milk to the processing depot, the study need not consider functions/processes that are further upstream of the processing depot, i.e. the on-farm processes or the raw milk delivery to the processing depot.

Several single sources of failure (contributors to the E1 value) can be identified: the power supply to the refrigerator (and cabling to the closest substation); roads, fuel, vehicle and driver; and staff and power supply (and cabling to substation) at the retail outlet. Staff facilities (and hence the exposure contributions from the sewage system) must be considered. Provided the retail outlet is able to accept cash or paper credit notes, the electronic payment system contributes to the E2 value; however, if the retail outlet is constrained to electronic payments, then many processes associated with the communications and banking systems will contribute to the E1 value. Roads, and fuel for bulk distribution, will contribute to the E1 value, whereas drivers and trucks contribute to higher E values, since many drivers and trucks can be assumed to be available. The power supply to the processing and packing facility will contribute to the E1 value. The tight specifications and regulatory standards for consumer-quality milk will generally not allow any bypassing of processes, and so the major processes (reception, pasteurisation, standardisation, homogenisation and packaging) will all contribute to the E1 values. The milk processing and packaging facility will also need fuel for the Cleaning in Place (CIP) system and will need staff facilities, and hence the exposure contribution of the sewage system must be considered.
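The payment-method observation above can be made concrete. The sketch below (illustrative component names, not drawn from the paper) shows how the same electronic-payment components count against E2 when a cash alternative exists, but against E1 when the outlet is electronic-only.

```python
def payment_contributors(accepts_cash):
    """Sketch of how the payment subsystem's contributors shift
    between E1 and E2 depending on whether cash is accepted.
    The component list is an illustrative simplification."""
    electronic_chain = ["eftpos_terminal", "comms_link", "bank_clearing"]
    if accepts_cash:
        # each electronic component only stops payment jointly with
        # the failure of the cash alternative: dual-failure (E2) items
        return {"E1": 0, "E2": len(electronic_chain)}
    # electronic-only: each component is a single point of failure
    return {"E1": len(electronic_chain), "E2": 0}

print(payment_contributors(accepts_cash=True))   # {'E1': 0, 'E2': 3}
print(payment_contributors(accepts_cash=False))  # {'E1': 3, 'E2': 0}
```

The point is structural: a substitutable service path demotes every component of the substituted chain by one exposure order.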

The examples demonstrate three commonalities. Firstly, it is both practical and informative to evaluate contributors to the E1, E2, etc. values for a broad range of cases and technologies. Secondly, sources of vulnerability that are specific to the examples can be identified, and thirdly, principles for the reduction of vulnerability can be readily articulated. In the petrol station example, a greater reduction in exposure could be obtained by eliminating the operator (and with the operator, the needs for sewage and water facilities) than by retaining the operator and allowing cash transactions.

Some common themes can also be inferred from the examples, as general principles for exposure reduction likely to be applicable to other cases.

The example studies contain intermediate streams: if such intermediate streams have specifications that are publicly available (open-source), there is increased opportunity for service substitution from multiple sources, and a reduction in the associated exposure values. Proprietary software and data storage are a prominent example of a lack of standardisation, despite the availability of such approaches as XML.

Currently, electronic financial transactions require secure communications between an EFTPOS terminal and the host system, and the verification and execution of the transaction between the vendor's bank account and the purchaser's bank account. These processes are necessarily somewhat opaque. Completely decentralised transactions are possible as long as both the buyer and the seller agree upon a medium of exchange, and the implications of either accepting or not accepting a proposed medium of exchange are profound for the “exposure” created.

Although the internet was designed to be fault tolerant, its design requires network controllers to determine the path taken by data, and in practice this has resulted in huge proportions of traffic being routed through a small number of high-bandwidth channels. This is a significant issue: if the total intercontinental internet traffic (including streamed movies, person-to-person video chats, website service and also EFTPOS financial transaction data) were forced onto a low-bandwidth channel, the financial transaction system would probably fail.

Technological systems such as the generation of power using nuclear energy are currently only economic at very large scale, and hence represent dependencies for a large number of systems and users. Conversely, a system generating power using photovoltaic cells, or using external combustion (tolerant of wide variations in fuel specification) based on microalgae grown at village level, would probably be inherently less “exposed”.

In every case where a single-purpose process forms an essential part of a system, it represents a source of weakness. By contrast, any “general-purpose” or “re-purpose-able” component can, by definition, contribute to decreasing exposure. Human beings, with their ability to apply “common sense” and their capacity for re-purposing, are the epitome of multi-purpose processors, and the capability to incorporate humans into a technological system is possibly the single most effective way to reduce “exposure”. The “capability to incorporate” may require changes that do not inherently affect the operation of a system but merely add a capability, such as the inclusion of a hand-crank in a petrol pump.

The examples (fuel supply, sewage disposal, perishable food) also uncover technological gaps where a solution would decrease exposure. Such gaps include: the capability to store significant electrical energy locally (a more significant gap than the capability to generate locally); a truly decentralised/distributed and secure communication system, with an associated knowledge storage/access system; a fully decentralised financial system that allows safe storage of financial resources and safe transactions; a decentralised sewage treatment technology; less centralised technological approaches to the supply of both food and water; and a transport system capable of significant load-carrying (though not necessarily high speed) using low-specification roadways and broadly specified energy supplies.

4. Discussion

The detailed definition of a technological system allows a more rigorous process for the identification of hazards, by ensuring that all system weaknesses are considered. Calculating the exposure level of a technological system is not proposed as a replacement for risk analysis, but as a technique that offers specific insights and also increases the rigour of risk analysis. A measure of contribution-to-vulnerability is both simplified and enhanced in value if the measure is indexed to the delivery of specific goods/services, at defined levels, to an individual end-user. This approach allows clarification of the extent to which a given project will benefit the individual. The analysis is applicable to any technological system supplying a specified deliverable at a given service level to a user. It is recognised that while some hazards (e.g. a major natural or man-made disaster) may affect more than one system, the analysis of the technological exposure of each system remains valid and valuable.

The analysis of hazard probability is of limited value over long timeframes or when hazards are guided; characterising the number and types of weaknesses in a technological system is a better indicator of the vulnerability which it contributes to the person who depends on its outputs. An approach to quantifying the “exposure” of a technological system has been defined and justified as a valid representation of the contribution made to the vulnerability of the individual end-user. The approach generates a fine-grained metric {E1, E2, E3 … En} that is shown to measure the vulnerability incurred by the end-user: calculation of the metric is tedious but not conceptually difficult; the measure is readily verified and replicated; and the calculated values allow detailed investigation of the effect of hypothesised changes to a target system.
The approach has been illustrated with a number of examples, and although only preliminary analyses have been made, the practicality and utility of the approach have been demonstrated. Only a small number of example studies have been presented, although they have been selected to address a range of needs experienced by actual urban dwellers. In each case the scope and technologies used are proposed to be representative, and hence conclusions drawn from the example studies can be considered significant.
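For “differentiating between systems that have higher or lower exposure”, some ordering rule over the metric vectors is needed. The text does not prescribe one; a lexicographic comparison that weights E1 (single points of failure) most heavily is one plausible sketch, and the project figures below (other than the baseline) are invented for illustration.

```python
def less_exposed(a, b):
    """True if metric `a` indicates lower exposure than `b`.
    Lexicographic order, weighting E1 most heavily, is an
    assumed ranking rule, not one the text prescribes."""
    return list(a) < list(b)  # Python sequence comparison is lexicographic

# hypothetical post-project metrics for the petrol-station example
projects = {
    "baseline": (92, 20, 8),
    "accept_cash": (43, 25, 8),  # invented illustrative figures
}
best = min(projects, key=lambda name: list(projects[name]))
print(best)  # accept_cash
```

Under this rule a project that removes single points of failure is preferred even if it slightly raises the count of dual-failure combinations.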

Even the preliminary analyses of the examples have indicated two distinct categories of contributors to vulnerability: weaknesses that are located close to the point of final consumption, and highly centralised technological systems such as communications, banking, sewage, water treatment and power generation. In both of these categories the user's exposure is high where design redundancy is limited; however, the user's exposure could be reduced by selecting or deploying technology subsystems with lower exposure close to the point of use, and by using public standards to encourage multiple opportunities for service substitution. The exposure metric has been shown to provide a measure of the vulnerability contributed by a given technological system to the individual end-user, and to be applicable to representative examples of technological systems. Although results are preliminary, the metric has been shown to allow the derivation of both specific and generalised conclusions. The measure can integrate with, and add value to, existing techniques such as risk analysis.

The approach is proposed as a theory of exposure, including the conceptual definitions, domain limitations, relationship-building and predictions that are proposed [34] as the essential criteria for a useful theory.


This research is supported by an Australian Government Research Training Program (RTP) Scholarship.


[1] J. Forrester, Industrial Dynamics, MIT Press, Boston, MA (1961)

[2] R. Ackoff, Towards a system of systems concepts, Manag. Sci., 17 (11) (1971), pp. 661-671

[3] J. Sterman, Business Dynamics: Systems Thinking and Modelling for a Complex World, McGraw-Hill Boston (2000)

[4] I. Eusgeld, C. Nan, S. Dietz, System-of-systems approach for interdependent critical infrastructures, Reliab. Eng. Syst. Saf., 96 (2011), pp. 679-686

[5] T. Forester, P. Morrison, Computer unreliability and social vulnerability, Futures, 22 (1990), pp. 462-474

[6] B. Martin, Technological vulnerability, Technol. Soc., 18 (1996), pp. 511-523

[7] L. Robertson, From societal fragility to sustainable robustness: some tentative technology trajectories, Technol. Soc., 32 (2010), pp. 342-351

[8] M. Kress, Operational Logistics: the Art and Science of Sustaining Military Operations, 1-4020-7084-5, Kluwer Academic Publishers (2002)

[9] L. Li, Q.-S. Jia, H. Wang, R. Yuan, X. Guan, Enhancing the robustness and efficiency of scale-free network with limited link addition, KSII Trans. Internet Inf. Syst., 6 (2012), pp. 1333-1353

[10] A. Yazdani, P. Jeffrey, Resilience enhancing expansion strategies for water distribution systems: a network theory approach, Environ. Model. Softw., 26 (2011), pp. 1574-1582

[11] Y.Y. Haimes, P. Jiang, Leontief based model of risk in complex interconnected infrastructures, ASCE J. Infrastruct. Syst., 7 (2001), pp. 1-12

[12] Cyber-attack: How Easy Is it to Take Out a Smart City?, New Scientist (4 August 2015) (retrieved 22 Mar 2017)

[13] Jeep Drivers Can Be HACKED to DEATH: All You Need Is the Car's IP Address, The Register (21 Jul 2015) (retrieved 22 Mar 2017)

[14] J. Glenn (Sen.), Quotation from retirement speech (1997), seen online in Feb 2014

[15] C. Gómez, M. Buriticá, M. Sánchez-Silva, L. Dueñas-Osorio, Optimization-based decision-making for complex networks in disastrous events, Int. J. Risk Assess. Manag., 15 (5/6) (2011), pp. 417-436

[16] C. Perrow, Normal Accidents: Living with High-risk Technologies, Basic Books, New York (1984)

[17] P. Chopade, M. Bikdash, Critical infrastructure interdependency modelling: using graph models to assess the vulnerability of smart power grid and SCADA networks, 8th International Conference and Expo on Emerging Technologies for a Smarter World, CEWIT 2011(2011)

[18] ISO GUIDE 73, Risk Management — Vocabulary, (2009)

[19] A. Gheorghe, D. Vamadu, Towards QVA - Quantitative vulnerability assessment: a generic practical model, J. Risk Res., 7 (2004), pp. 613-628

[20] ISO/IEC 31010, Risk Management— Risk Assessment Techniques', (2009)

[21] FMEA, MIL-STD-1629A, Failure Modes and Effects Analysis (1980)

[22] Y.Y. Haimes, On the definition of resilience in systems, Risk Anal., 29 (2009), pp. 498-501, 10.1111/j.1539-6924.2009.01216.x (note p. 498)

[23] K. Khatri, K. Vairavamoorthy, A new approach of risk analysis for complex infrastructure systems under future uncertainties: a case of urban water systems, Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management, Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences (2011), pp. 846-856

[24] Y. Shuai, X. Wang, L. Zhao, Research on measuring method of supply chain resilience based on biological cell elasticity theory, IEEE Int. Conf. Industrial Eng. Eng. Manag. (2011), pp. 264-268

[25] A. Munoz, M. Dunbar, On the quantification of operational supply chain resilience, Int. J. Prod. Res., 53 (22) (2015), pp. 6736-6751

[26] A.M. Law, Simulation Modelling and Analysis, McGraw-Hill, New York (2007)

[27] A.-L. Barabási, E. Bonabeau, Scale-free networks, Sci. Am., 288 (2003), pp. 60-69

[28] G. Tanaka, K. Morino, K. Aihara, Dynamical robustness in complex networks: the crucial role of low-degree nodes, Nat. Sci. Rep., 2/232 (2012)

[29] N. Slack, A. Brandon-Jones, R. Johnston, Operations Management, 7th ed. (2013), ISBN-13: 978-0273776208, ISBN-10: 0273776207

[30] ISO/IEC 9075–2, Information Technology – Database Languages – SQL, (2011)

[31] P. Suppes, Measurement theory and engineering, Dov M. Gabbay, Paul Thagard, John Woods (Eds.), Handbook of the Philosophy of Science, Philosophy of Technology and Engineering Sciences, vol. 9, Elsevier BV (2009)

[32] D. Hand, Measurement Theory and Practice: The World through Quantification, Wiley (2004), ISBN: 978-0-470-68567-9

[33] Ponemon Institute, 2013 Study on Data Center Outages (2013), retrieved Dec 2016

[34] J.G. Wacker, A definition of theory: research guidelines for different theory-building research methods in operations management, J. Oper. Manag., 16 (4) (1998), pp. 361-385


L Robertson is a professional engineer with a range of interests, including researching the level and causes of vulnerability that common technologies incur for individual end-users.

Dr Katina Michael, SMIEEE, is a professor in the School of Computing and Information Technology at the University of Wollongong. She has a BIT (UTS), MTransCrimPrev (UOW), and a PhD (UOW). She previously worked for Nortel Networks as a senior network and business planner until December 2002. Katina is a senior member of the IEEE Society on the Social Implications of Technology where she has edited IEEE Technology and Society Magazine for the last 5+ years.

Albert Munoz is a Lecturer in the school of management, operations & marketing, at the Faculty of Business at the University of Wollongong. Albert holds a PhD in Supply Chain Management from the University of Wollongong. His research interests centre on experimentation with systems under uncertain conditions, typically using discrete event and system dynamics simulations of manufacturing systems and supply chains.

1 Abbreviation: Failure Modes and Effects Analysis, FMEA.

2 The terminology used is typical for Australasia: other locations may use different terminology (e.g. “gas” instead of “petrol”).


Keywords: Technological vulnerability, Exposure, Urban individual, Risk



Citation: Lindsay J. Robertson, Katina Michael, Albert Munoz, "Assessing technology system contributions to urban dweller vulnerabilities", Technology in Society, Vol. 50, August 2017, pp. 83-92, DOI:
