Chapter XVI: Conclusion
THE AUTO-ID TRAJECTORY
This chapter identifies the main outcomes of the study and reflects on the future directions of the technologies under investigation. In concluding we have found, first, that an evolutionary process of development is present in the auto-ID technology system (TS). Incremental steps, whether technological recombinations or mutations, have led to revolutionary changes in the auto-ID industry, both at the device level and at the application level. The evolutionary process in the auto-ID TS does not imply a ‘survival of the fittest’ approach, but rather a model of coexistence in which each particular auto-ID technique follows a path that ultimately influences the success of the whole industry. The patterns of migration, integration and convergence can be considered either mutations or recombinations of existing auto-ID techniques for the creation of new auto-ID innovations. Second, forecasting technological innovation is important in predicting future trends based on past and current events. Analyzing the process of innovation between intervals of widespread diffusion of individual auto-ID technologies sheds light on the auto-ID trajectory. Third, that technology is autonomous by nature has been shown by the changes in the uses of auto-ID: from non-living to living things, from government to commercial applications, and from external identification devices in the form of tags and badges to medical implants inserted under the skin. This does not negate, however, the inherent qualities embedded in auto-ID technologies, predisposing them to be used in certain contexts. What we have witnessed especially in auto-ID is a movement we have termed the auto-ID trajectory: from bar codes to chip implants towards the electrophorus, who will herald the age of uberveillance. Convergence of embedded automatic identification technologies with location-based services will offer unprecedented capabilities, but these capabilities will come at a high price.
The Evolutionary Paradigm
The evolutionary paradigm has shown us that “history matters”. Auto-ID techniques built their foundations on past manual ID techniques, the simplest being facial recognition using human memory. By the 19th century fingerprinting techniques were being discovered, and by the mid 20th century auto-ID technologies were being prototyped. What has happened since has been cumulative technical change at an exhilarating speed. This rapid change, however, would not have been possible if the building blocks had not been cemented by first-generation elementary breakthroughs. As more and more technological advancement occurred within the emerging auto-ID industry, and as supporting infrastructures, skills and tools emerged simultaneously, the use of auto-ID became widespread. Progress fuelled success and success fuelled progress. While the market in the mid 1960s was not ready for auto-ID, decade after decade thereafter, the techniques permeated a diverse range of applications. A domino effect of new auto-ID innovations took place, revolutionizing the way people worked and lived. The conditions for entry were increasingly ‘right’ as ancillary technologies, such as networks, storage devices and database software, proliferated.
The auto-ID explosion was energized by up-and-coming niche technology providers who had a clear vision for their innovations. Bar codes in retail, for instance, were driven by stakeholders who could see both the potential impact the technology would make and the immediate path ahead. Understanding the sequence of events that shaped auto-ID was a major contribution of this study. Better understanding “what happened” means that efforts can be concentrated in the right places in the future. Rosenberg (1994, p. 23) describes the importance of historical analysis in understanding technologies, pointing out that this type of analysis is relevant not only to historians but also to economists and people in other fields.
Forecasting Technological Innovation
One of the downsides to exploratory predictive studies is that some researchers attempt to outdo one another with radical futuristic scenarios. This is not to discount that some of these scenarios may happen ‘eventually’; however, such researchers neglect to use the evidence set before them to follow the path or direction of a particular technology, or set of technologies. This study puts forward the usefulness of frameworks, such as the systems of innovation (SI) approach based on evolutionary theory, for synthesizing data from multiple disciplines to characterize and predict the auto-ID trajectory. The market today is so complex that relying solely on one perspective, albeit the technological, could prove severely misleading. What is required is an interdependence of sources (Drangeld, 1991, pp. 157-179). It was also intentional that predictions were not numbered or tabulated; they are present throughout the work and are more pronounced in the final chapters, where the technological trajectory of auto-ID was explored. The narrative style allowed for analysis throughout. None of the predictions venture beyond 2050, and most focus on the years between 2010 and 2020.
Individual auto-ID techniques and their applications were considered separately at first, then as a single technology system, bringing together evidence that would indicate the direction of auto-ID in the short-term future. The factors explored in each case (in order of their prominence in that particular case) included social, cultural, organizational, institutional, economic, regulatory, legal, political and technical dimensions. What was apparent was the time lag between auto-ID technical breakthroughs and developments in, for instance, global standards, laws and user acceptance. Ethical considerations, it was shown, were also consistently marginalized by technology and service providers until after auto-ID diffusion, an almost “let’s wait and see what happens” approach. Regardless, the technology is set to become even more ingrained in our day-to-day practices, especially for critical-response applications. New auto-ID innovations are most likely to be variations or combinations of existing auto-ID technologies, although there will be particular leaps in the use of multiapplication smart cards, the accuracy of biometric techniques (especially multimodal biometrics), and RFID transponders for human application. The study has attempted to present the forces at play which will continue to set the course of the auto-ID technology system. Of great significance is the convergence of ‘industries’, including auto-ID and biomedical.
Technology is Autonomous
That it is possible to map the future course of a technology does not negate that a given technology could be used for a purpose other than that for which it was originally intended (Westrum, 1991, p. 238). Once a device is released, there is no turning back. As Rummler (2001, p. 3) put it: “the genie simply will not go back into the bottle”. The technology possesses intrinsic controls that can be set off under the right commercial conditions. We assume that we are in control, when in actual fact the technology has an inherent trajectory (Figure 1). No one ever seriously predicted (outside the apocalyptic and sci-fi genres), for instance, that auto-ID technologies would be inserted under the skin at the time the bar code was invented or when magnetic-stripe cards made their debut. Yet today we have this phenomenon occurring, perhaps not at a rapid rate of adoption but at a speed that has made people take note of developments. There is now a company that has hired staff to tour the United States in mobile vans, directly marketing the advantages of RFID transponder implants for emergency services. Consumers who choose to be implanted can do so at centralized clinics across the country or even at registered hospitals. There are even self-proclaimed “RFIDs” (i.e. underground RFID implantees), tech-savvy hobbyists who are building their own applications for interactive spaces. In brief, it is technology that to a large extent shapes society; it drives changes to the way we live, to laws, to our attitudes, and to our beliefs. Once diffused, technology is then molded and shaped by society in specific ways.
While we have the opportunity to consider where the auto-ID trajectory is leading society, especially when it is coupled with leading-edge location-based services, it is our responsibility to think about the future possibilities. For instance, what if civil unrest through continual terrorist threats or attacks, or outbreaks of fatal viruses, caused governments worldwide to introduce RFID implants for security and safety reasons? Would society be ready for such a change? And what types of mechanisms are in place to decide whether this should or should not happen? Already, victims of the Boxing Day Tsunami in Thailand were chipped to help in later identification, and European Union officials have on more than one occasion stipulated the use of RFID tags and transponders to help curb the problem of illegal immigration (Michael and Masters, 2006a; 2006b). There was even the very real suggestion by the Singaporean government that air travelers to and from the state would have to be tagged if the SARS virus were to continue in duration (Michael et al., 2006). For more national security examples related to auto-ID and location-based services see Tootell (2007) and Aloudat, K. Michael and Yan (2007).
WITH AN EYE TO THE FUTURE
Reinterpreting the Meaning of Progress
Progress has often been synonymous with change over time, in a historical context. However, certain types of “progress” are not necessarily advancements. In fact, depending on the perspective taken, some technological progress could actually be considered to have caused social regress. A cochlear implant that gives a deaf person the ability to hear can be viewed as progress without much debate, whereas the proposed ‘Soul Catcher’ chip implant, which will supposedly grant “eternal life” to an individual, is surrounded by many unknowns. Consider the motivations for inventing in primitive times. The discovery of rubbing two sticks together to produce fire for warmth and cooking, of the circular wheel to help move heavy objects, and of sharp stone implements to cut things were all motivated by a practical need to survive. In contrast, today we seek monetary remuneration for inventing. A lot of money is spent on legal advice and on getting an idea patented, and this usually happens only when the inventors believe that they will somehow recoup their costs through the resulting royalties. This leads to another fundamental point: most inventors today are part of corporations whose main goal is profit maximization. Companies measure their “progress” by comparing revenue results and whether these have increased year on year. They are driven more by a need to make money to continue viable operations than by a need to ensure that their product or service offerings are adding real value to human lives. Competition is so fierce in most high-tech areas, and the pace of change so rapid, that economic and commercial discourse takes precedence over moral and ethical reflection.
The Oxford English Dictionary defines ‘ethics’ as “the science of morals; moral principles or ‘code’”, ‘ethical’ as “conforming to a recognized standard”, and ‘moral’ as that “concerned with the distinction between right and wrong” (Brennan, 1996, p. 1/1). “[E]thics has a positive side which describes the human values which help to specify what one should do… what is the ultimate goal of human life or of society and, thus, what are the priorities for the work to be done within the particular activity or profession” (Brown, 1998, p. 301). For instance, an auto-ID manufacturer might ask, “should this particular auto-ID technology be used in this new application area?” The response would most likely be linked to whether the new innovation would equate to more sales, a better company share price, and subsequently greater investor interest. The reality is that whether the technology will negatively impact individual privacy (or raise other similar issues) invariably remains somebody else’s problem throughout the value chain. Consider the studies conducted by K. Michael, McNamee and M.G. Michael (2006) and K. Michael et al. (2006) on the ethics surrounding the precise chronicling of people using GPS-enabled devices, especially as this pertains to social networking (Markoff, 2008). The GPS was once used by the U.S. military alone; today it has become a global commodity used for applications ranging from distance running in marathons (Saponas et al., 2006) to monitoring prisoners serving their sentences at home.
Managing Technological Innovation
The ability to manage technological innovation assumes that the right social institutions are in place to deal with developments. More often than not, however, there is a great divide between technology and society’s ability to cope with that technology. The consumer’s attempt at resisting change initially coincides with the incubation stage of the technology life cycle. Eventually, however, widespread adoption is achieved as the technology begins to shape society bit by bit, and consumers and service providers succumb to a variety of pressures. It seems that individuals in society are too preoccupied with the ever-increasing pace of life to have the time required to contemplate the far-reaching extensions of technological change, thus leaving the decision making to a small group of people. The result is a type of herd behavior. Mass consent in more developed countries (MDCs) to adopt whatever is being flagged as the latest high-tech gadget looks to have overtaken individual reasoning.
Consumers subject themselves to the impacts of these gadgets simply by choosing to adopt them, one after the next. It is almost as if adoption of “new” technology, such as luggables and wearables, were a requirement for a fulfilled existence, because our capacity to remain contented with what we have is lacking. In the case of auto-ID, however, there is an inherent tyrannical quality about devices like smart cards and biometrics; consumers do not choose to have them, as in the case of computers and mobile phones; they are imposed on them by service providers. Among the most authoritarian service providers is the government, which has the ability to issue national ID cards (and other similar mandates) to its citizens. As more and more national and international auto-ID and LBS schemes begin to emerge, the need for adequate social institutions to help society deal with these changes becomes an immediate concern. We cannot rely on a few publicized debates on current affairs television programs to address the fundamental questions. Yet going with the flow seems more effortless than constructive thinking; the masses generally feel like powerless bystanders to these changes or, even worse, indifferent and numb. There is little doubt that as time moves on, newer dimensions of Alvin Toffler’s (1970) “future shock” will distress us (most of us at least) in different ways.
The Need for Safeguard Provisioning
As we prepare for the introduction of fully fledged advanced humancentric location-based applications, we need to be mindful of the potential socio-ethical changes that will occur as a result. These “changes” will also include new perspectives on traditional metaphysics, as embedded-in-the-flesh technologies challenge us to consider potentially “updated” definitions of identity and self-consciousness, for instance. For the present, technological advancement in this space of research and investigation always seems to take precedence over discussion of the potential detrimental effects on individuals or society at large. There need to be adequate and applicable safeguards ‘built in’ if we are to innovate smartly; we cannot hope to ‘bolt on’ band-aid solutions on the chance that things might go wrong. And things normally do go wrong. One need only refer to the historical events in manual identification to consider the possible effects of a well-orchestrated siege on privacy by any world leader or government. Even as early as 1943, observers realized the potential threats of computerized systems which could be operated for wrong ends by “…an unscrupulous government which sets to work to use that machinery for totalitarian purposes” (Clark, 1943, p. 9). Wicklein (1982, pp. 8, 191f) also wrote that “…[t]he biggest threat of a multifaceted, integrated communications system is that a single authority will win control of the whole system and its contents [and] operate it without adequate restraints”.
Who is in Control?
The dynamic nature of the process of innovation indicates that interaction between many different stakeholders leads to the development of a given product or service. It is therefore difficult to single out one particular stakeholder as the primary force in an innovation going from invention to diffusion. Feedback between different stakeholders is a continual process. In the case of auto-ID, it can be argued that the manufacturer of the device is the main instigator, yet this denies other individual stakeholders, like the government, service providers and infrastructure providers, consideration as equally key instruments in the creative process. It is possible that the question “who is in control?” addresses the issue only partially; we should also ask “what is in control?” Is it stakeholders? Is it the technology itself? It is both, working together. Humans need to be aware of this when considering such future possibilities as creating “spiritual machines” and rejecting, in essence, part of what it means to be human. Some scientists may believe that doing away with the flesh will grant the individual ultimate freedom, achieving a type of resurrection on Earth. Consider Moravec’s idea of doing away with the sarcous (i.e. the body) altogether (Dery, 1996, p. 300). But what needs to be foremost in the minds of these visionaries, and of the rest of us, is who or what will be in control of this grand scheme? Who or what will be given the responsibility to run this complex network of online brains? A human? A clone? A robot? All of these are subject to failure, are they not, leading to the potential extinction of the new genus.
When considering the possibilities of human evolution it is important to ponder history. In 1946, the public launch of the ENIAC in the United States stimulated people’s imagination with some very fantastical thoughts (Figure 2 and Figure 3). However, Kevin Warwick’s Cyborg 1.0 project did not receive the same attention; one could observe that society found this breakthrough somewhat lacklustre in comparison. Perhaps what people will find captivating is the next giant leap forward: unwired thought-to-thought communication, or the potential ability to download human consciousness, or, more precisely, the means to live “forever” through some technological course. On visiting the Canadian National Art Gallery in Ottawa in 2000, K. Michael took a picture of an artwork that captured this future vision and its ensuing conflicts precisely. It was titled “Amnesia” (1994), and it showed the face of a mature person encased within an external digital storage media unit linked up to an Apple Computer motherboard (Michael and Michael, 2006). It is interesting to note that the person’s eyelids were shut. One interpretation could be that the person was asleep (but not dead); another, that a future was being followed blindly without enough consideration of what lay ahead. The artwork was reminiscent of the first ENIAC pictures to hit the Sunday papers in the United States. In those famous photos, accessible from the University of Pennsylvania, humans are depicted interacting with the room-sized, floor-to-ceiling, wall-to-wall computer. It was as if people were about to “enter into” the ENIAC. One could imagine panels and panels of “Amnesia” side by side, stacked one on top of the other, i.e. members of a society occupying manifold times more space than the ENIAC and redefining what is meant by such terms as “technological society” and “global village”.
NEARING THE POINT OF NO RETURN
The idea of the human electrophorus is one that no longer exists in the realm of the impossible. This being the case, the time for inclusive dialogue is now, not after widespread diffusion. There are many lessons to be learnt from history, especially from such radical developments as the atomic bomb and the resulting arms race. Bill Joy (2000, p. 11), co-founder and chief scientist of Sun Microsystems, has raised serious fears about continuing unfettered research into “spiritual machines”. He offers the following example as evidence: “As the physicist Freeman Dyson later said, ‘The reason that it was dropped [the atomic bomb] was just that nobody had the courage or the foresight to say no…’ It’s important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then the horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima” (Joy, 2000, p. 11). Jacques Ellul (1964, p. 99) quotes from a lecture given by Soustelle on the atomic bomb: “[s]ince it was possible, it was necessary”.
The question the above extract raises is whether humans will have the foresight to say “no” or “stop” to new innovations that could potentially be a means to a socially destructive scenario, or whether they will continue to make the same mistakes. Implants that may prolong life expectancy by hundreds if not thousands of years might sound ideal, but they could well create unforeseen devastation in the form of technological viruses, plagues, and a different level of crime and violence. The debate is far too complex to enter into here, and is rather a pressing research topic for another work, but if this book has helped to highlight its importance, it has satisfied one of its objectives. Humans may have walked on the moon, and many have dreamed about colonizing other planets, but an attempt to “live forever” through the use of technology seems oblivious to the facts: that the Sun has a finite lifetime, that the Earth could be wiped out by an asteroid gone astray, or that a full-blown nuclear war could break out between the major powers. These are not fatalistic or apocalyptic considerations, just simple probabilities based on scientific and political fact.
To many scientists of the positivist tradition who are anchored to an empirical world view, the notion of whether something is “right” or “wrong” is redundant and in a way irrelevant. To these individuals, a moral stance has little or nothing to do with technological advancement and more to do with an ideological position. A group of these scientists is driven by an attitude of “let’s see how far we can go”, not “is what we are doing the best thing for humanity?”, and certainly not by the thought of “what are the long-term implications of what we are doing here?” The belief is that “science” should not be stopped because it will always make things better. The reality is that it will continue to widen the divide between the “haves” and “have-nots” even further. To the “haves” and “have-nots”, O’Reilly (1999, pp. 973f) adds the “can-nots”. Surely there are more immediate issues at hand than downloading our minds onto hardware. We are not referring here to the medical implant breakthroughs that are helping to save lives but to human extensions. Why not ask whether we have directed our resources to solving the greater scientific issues facing the world, such as sustainable yield for energy resources, rising water temperatures and ozone layer depletion, soil salinity and fresh water shortages? This is not to be idealistic; these are real and compelling issues with real-time proofs.
What we are trying to describe here is the importance of social responsibility, not just for the engineers or professionals working on complex problems who possess the knowledge, but for all humans. “It has been said that what distinguishes professionals is their possession of ‘dangerous knowledge.’ A physician has the means to cure you or kill you. An engineer can design software that is reliable and promotes your safety, or that is critically flawed and precipitates disaster. To repeat an earlier theme, knowledge is power, and the specialized knowledge possessed by professionals gives them power over our lives. Society is rightly concerned, then, that this power is used properly” (Frankel, 1988, p. 199). “[F]ailure to challenge the ‘technological imperative’ can pose serious social and moral implications, and that good technical argument, as defined by the values of effectiveness and efficiency, can be accessory to moral abominations, such as those of Hitler’s Germany” (Flynn & Ross, 2001, p. 208). This is not to say that we are against technological development, but we are wary that not all developments will make things better rather than worse. To an extent it is short-sighted to be like the Jacobs family, who called the debate surrounding chip implants “hullabaloo”. The family saw implants as a “gift” and thought it inconceivable that the technology could be used to “do anything but good” (Associated Press, 2002, p. 2). Perhaps Mr Jacobs did not do enough historical research to consider the possibilities… It was a tragic loss to society, and indeed to this critical debate, when young Derek Jacobs lost his life in a motorcycle accident in 2006.
Cohen and Grace (1994, p. 12) investigated the claim that engineers need not pay attention to social responsibility, concluding that social responsibility should indeed be seen as integral to the performance of an engineer, and that he or she should not only avoid doing harm but seek opportunities to do good. There are “…five circumstances in which the engineer might choose not to hold the health, safety, and welfare of the public paramount: 1) if the engineer believes that the requirement is internally inconsistent, 2) if the engineer’s religious convictions prevent adherence to the requirement, 3) if the engineer believes that the public does not know what is best for it, 4) if the engineer is forced to do otherwise, and 5) if the engineer believes that damage to the environment outweighs short term public interest” (Vesilind, 2001, p. 162). It seems we are facing an ethical dilemma as a human race, even though the answers to the pressing questions and issues appear seemingly straightforward. What has led to this analytical displacement? Perhaps it is a preoccupation with short-term “band-aid” solutions rather than a longer-term perspective. Whatever it may be, we all need to actively and responsibly consider what these next steps should be, since generations to come will be living with the monumental and irreversible consequences of our decisions.
Quidquid agas, prudenter agas, et respice finem. (Whatever you do, do cautiously, and look to the end.) [Gesta Romanorum, cap. 103]
REFERENCES

Aloudat, A., Michael, K., & Yan, J. (2007). Location-Based Services in Emergency Management, from Government to Citizens: Global Case Studies. In P. Mendis, J. Lai, E. Dawson & H. Abbass (Eds.), Recent Advances in Security Technology (pp. 191-201). Melbourne: Australian Homeland Security Research Centre.
Associated Press. (2002, 15 October). Chip in your shoulder? Family wants info device. USA Today: Tech, from http://www.usatoday.com/life/cyber/tech/2002/04/01/verichip-family.htm
Brennan, M. G. (1996, 7 October). Ethics and Technology. Paper presented at the IEE Colloquium on Technology in Medicine.
Brown, W. S. (1998). The interpersonal significance of a fork: what biomedical engineering should do! IEEE WESCON, 301-304.
Clark, C. (1943). The advance to social security. Carlton: Melbourne University Press.
Cohen, S., & Grace, D. (1994). Engineers and social responsibility: an obligation to do good, IEEE Technology and Society Magazine (pp. 12-19).
Dery, M. (1996). Escape Velocity: cyberculture at the end of the century. London: Hodder and Stoughton.
Drangeld, K. E. (1991). Characteristics and trends of innovation in electronics and information technology. In B. Henry (Ed.), Forecasting Technological Innovation (pp. 157-179). Boston: Kluwer Academic Publishers.
Ellul, J. (1964). The Technological Society. New York: Vintage Books.
Flynn, T. R., & Ross, M. S. (2001). Ethical and legal issues related to emerging technologies: reconsidering faculty roles and technical curricula in a new Environment. International Symposium on Technology and Society, 203-209.
Frankel, M. S. (1988). Professional ethics and social responsibility. In Policy Issues in Information and Communication Technologies in Medical Applications (pp. 199-200).
Joy, B. (2000, April). Why the future doesn’t need us. Wired, 8.04, from http://www.wired.com/wired/archive/8.04/joy_pr.html
Markoff, J. (2008, 29 November). You’re Leaving a Digital Trail. What About Privacy? The New York Times. Retrieved 25 January 2009, from http://www.nytimes.com/2008/11/30/business/30privacy.html?ref=business
Michael, K., & Masters, A. (2006a). The Advancement of Positioning Technologies in Defense Intelligence. In H. Abbass & D. Essam (Eds.), Applications of Information Systems to Homeland Security and Defense (pp. 196-220). Hershey, USA: Idea Group Publishing.
Michael, K., & Masters, A. (2006b). Realised Applications of Positioning Technologies in Defense Intelligence. In H. Abbass & D. Essam (Eds.), Applications of Information Systems to Homeland Security and Defense (pp. 167-195). Hershey, USA: Idea Group Publishing.
Michael, K., McNamee, A., & Michael, M. G. (2006). The Emerging Ethics of Humancentric GPS Tracking and Monitoring. Paper presented at the ICMB M-Business Revisited from Speculation to Reality, Piscataway, NJ, USA.
Michael, K., McNamee, A., Michael, M. G., & Tootell, H. I. (2006). Location-based Intelligence- Modeling Behavior in Humans using GPS. Paper presented at the International Symposium on Technology and Society, Piscataway, NJ, USA.
Michael, K., & Michael, M. G. (2006, 28-1 July). Towards chipification: the multifunctional body art of the net generation. Paper presented at the Cultural Attitudes Towards Technology and Communication, Tartu, Estonia.
Michael, K., Stroh, B., Berry, O., Malhauber, A., & Nicholls, T. (2006). The AVIAN flu tracker- A Location Service Proof of Concept. In P. Mendis, J. Lai & E. Dawson (Eds.), Recent Advances in Security Technology: Proceedings of the 2006 RNSA Security Technology Conference (pp. 244-258). Canberra: Australian Homeland Security Research Centre.
O’Reilly, J. (1999). Mind the gap! Electronics and communications in the engineering continuum. Electronics & Communication Engineering Journal, 972-979.
Rosenberg, N. (1994). Exploring the Black Box: technology, economics, and history. Great Britain: Cambridge University Press.
Rummler, D. M. (2001). Societal issues in engineering [Electronic Version], 1-3. Retrieved 6 March.
Saponas, T. S., Lester, J., Hartung, C., & Kohno, T. (2006). Devices That Tell On You: The Nike+iPod Sport Kit. University of Washington, 2007, from http://www.cs.washington.edu/research/systems/privacy.html
Toffler, A. (1981). Future Shock. New York: Bantam Books. (Original work published 1970)
Tootell, H. I. (2007). The Social Impact of Auto-ID and Location Based Services in National Security. University of Wollongong, Wollongong.
Vesilind, P. A. (2001). The engineer shall hold paramount the health, safety, and welfare of the public. Unless, of course,…. IEEE International Symposium on Technology and Society, 162-167.
Westrum, R. (1991). Technologies and Society: the shaping of people and things. California: Wadsworth Publishing Company.
Wicklein, J. (1982). Electronic Nightmare: the home communications set and your freedom. Boston: Beacon Press.
Wu, X. (2001). Being ethical in developing information systems: an issue of methodology or maturity in judgment? Paper presented at the IEEE Proceedings of the 34th Hawaii International Conference on System Sciences.