Saint Augustine of Hippo (354–430 CE), one of the most revered doctors of the ecclesia catholica, might not have been so highly esteemed had he flourished centuries afterwards in a world of uberveillance. One of the unique aspects of Augustine's life that endeared him to the community of the faithful, both past and present, was his rising up from the “fornications” and the “delight in thievery” to become a paradigm for both the eastern and western churches of the penitent who becomes a saint. But would the celebrated bishop and author of The City of God have risen to such prominence and reverence had his early and formative life been chronicled on Facebook and “serialized” on YouTube? Would Augustine's long and grueling years of penitence and good works have been recognized? That we have his stylized and erudite Confessions on paper is another matter altogether; as to its impact, the written record cannot be compared to capturing someone in the act on closed circuit television (CCTV). The audio-visual evidence is there forever, to be rerun at whim by those who have access. And what of the multitude of other canonized “sinners” who in their own time and private space might not only mature by engaging with their humanity, indeed with their flaws and weaknesses, but also aspire to sainthood through repentance? If these “lives of the saints” were rerun before us, would we view such consecrated men and women in the same way? Where context is lacking or missing, all interpretation of content, however compelling to the contrary, must be viewed with a high degree of suspicion.
Even in the political and civil rights arena, for example, had the private lives of colossal and “untouchable” figures such as John F. Kennedy and Martin Luther King been subjected to never-ending uberveillance, how might that not only have affected the biography of these two men, but changed the course of history itself? Moreover, how would knowledge of such bio-intrusive surveillance have altered both Kennedy's and King's decision-making processes and life habits? We know, for instance, particularly from the seminal study of M.F. Keen, that the surveillance of prominent sociologists in the United States played a role in shaping the American sociological tradition. Certainly, J. Edgar Hoover's FBI might have kept a detailed account of the supposed meanderings and subversions of its “suspects,” but these records, whether true or false, were not universally accessible, and were limited given the state of information and communication technology at the time. And what of the private lives of popes and patriarchs, kings and queens, great philanthropists, and other exalted figures: how might they have stood up to the nowadays literal “fly on the wall” shadowing?
The incongruity behind traditional surveillance technologies (including wholesale surveillance and “dataveillance”) is that, generally, individuals of power and influence are not subjected to the extreme and exaggerated types of surveillance techniques designed and planned for everyone else. This holds true except, of course, for occasions of blackmail and industrial espionage, when the powerful and influential make use of whatever apparatus is at their disposal to spy on and turn against their own. It is not our blanket assertion that all influential and powerful people must necessarily be corrupt. It is fundamentally a matter of control revolving around authority, access, and opportunity. We return, then, to the perennial question of who will guard the guards themselves: Quis custodiet ipsos custodes?
Even uniquely enlightened persons such as Siddhartha Gautama and Jesus of Nazareth needed private space not only to engage inwardly and to reflect on their respective missions, but also to do discreet battle with their respective “temptations.” Uberveillance makes private space inch-by-inch obsolete. Private space is that location that we all, saint and sinner alike, need – to make our mistakes in secret, to mature into wisdom, and to discover what we are and are not capable of. In losing large chunks of our privacy we are also forfeiting a critical component of our personal identity, which for a substantial group of philosophers, following on from John Locke, is “the identity of consciousness.” There is, then, the potential for personality disorders to develop, particularly anxiety disorders or phobic neuroses.
The unbridled rush and push to create the transparent society, as David Brin very well described it, has social implications that are largely ignored, or at best marginalized. The social implications of information security measures that are connected to never-ending surveillance, or indeed to other network applications, have serious and often irreversible psychological consequences, of which only a few can be cited here: increased cases of mental illness (new forms of obsessive compulsive disorder and paranoia); a rise in related suicides; decreased levels of trust (in all spheres of relationships); and the impossibility of a “fresh start.” The traditionally received idea of the unconditional absolution of sin in the secrecy of the confessional already does not exist in the world of some religious communities; believers are encouraged to log on and to “confess” online. These types of social networks are especially dangerous for individuals already battling mental illness, who might afterwards deeply regret having uploaded imaginary or real indiscretions for everyone to read.
The author of a noteworthy article published in Newsweek, commenting on the high-profile suicides of two internationally recognized digital technologists, Theresa Duncan and Jeremy Blake, put it well when he surmised that “for some, technology and mental illness have long been thought to exist in a kind of dark symbiosis.” The startling suicides, first of Duncan and soon after that of her partner Blake, for whom “the very technologies that had infused their work and elevated their lives became tools to reinforce destructive delusions,” are a significant, albeit sad, reminder that even those heavily involved in new technologies are not immune from delusional and paranoid torment, whether based on fact or not.
And that is precisely the point: with covert shadowing you can never be completely sure that your paranoia is groundless. Long-term research at a clinical level remains to be conducted on the subject of never-ending surveillance and mental illness. There is some evidence to suggest that a similar paranoia played at least some part in another shocking suicide, that of the Chinese American novelist and journalist Iris Chang, the author of The Rape of Nanking.
The application of technology is rarely unbiased. Once a technique is set in motion and diffused into our society it progressively becomes irreversible, particularly given the key component of interoperability and the vast amounts of capital invested in twenty-first century machinery. However, our comprehension of this hi-tech diffusion has not advanced at a commensurate rate. Cross-disciplinary discourse, public debate, and legislation lag far behind the establishment of the infrastructure and application of the technology. In simple terms, this lag is the “too much change in too short a period of time” which Alvin Toffler famously referred to as “Future Shock.”
The situation is, unfortunately, reminiscent of that time in Alamogordo, New Mexico, in 1945, when some of those engaged in the Manhattan Project, including one of the group's top physicists, the Nobel laureate Enrico Fermi, were taking side bets on the eve of the test on whether they would “ignite the atmosphere” once the atomic bomb was detonated. But the “fallout” from uberveillance is distributed, and it will initially, at least, be invisible to all except the approved operators of the data vacuum. The settings and forebodings of notable dystopian novels, which warn of “dangerous and alienating future societies” – Yevgeny Zamyatin's We (1921), Aldous Huxley's Brave New World (1932), Ayn Rand's Anthem (1938), George Orwell's 1984 (1949), Ray Bradbury's Fahrenheit 451 (1953) – where “dissent is bad” and the deified State “knows all,” are being gradually realized. This is especially worrying, for as Noam Chomsky and others point out, we are concurrently witnessing a “growing democratic deficit.”
Great strides are also being made in the field of biomedical engineering in the application of engineering principles and techniques to the medical field. New technologies will heal and give hope to many who are suffering from life-debilitating and life-threatening diseases. The broken will walk again. The blind will see. The deaf will hear. Even bionic tongues are on the drawing board. Hearts and kidneys and other organs will be built anew. The fundamental point is that society at large must be able to distinguish between positive and negative applications of technological advancements before we diffuse and integrate such innovations into our day-to-day existence.
Nanotechnology, which is behind many of these marvelous medical wonders, will interconnect with the surveillance field and quite literally make the notion of “privacy” – that is, revealing ourselves selectively – an artifact. We must do whatever is in our lawful power to check, mitigate, and legislate against the unwarranted and abusive use of uber-intrusive surveillance applications. We are talking about applications whose incredible capabilities will potentially have the power to dehumanize us and reach into the secret layers of our humanity. These are not unruly exaggerations when we consider that wireless sensors and motes, body area networks (BANs), and brain-computer interfaces (BCIs) are already established technologies, and that the era of mind control, particularly through pioneering advancements in brain-scanning technology, is getting steadily closer.
The argument most often heard in the public domain is “if you have nothing to hide, why worry?” There are, however, at least three problems with this popular mantra. First, freedom implies not only being “free of chains” in the practical sense, to be permitted to go about one's daily business freely and without undue constraint, but nowadays also without one's every move being tracked, monitored, and recorded.
Second, there is a metaphysical freedom connected to trust, which also implies to be able to dream, to think, and to believe without outside coercion.
And finally, whether we care to admit it or not, we all have something to hide. Disruption of any of these freedoms or rights would affect our decision-making processes and contribute to unhealthy personality development, where what we “want” to do (or engage in) becomes what we think we must do (and theatrically engage in).
To artificially build a personality or to hold on to a set system of synthetically engineered beliefs is to deconstruct the human entity to the point where both initiative and creativity (two key components of a healthy individual) are increasingly diminished, and ultimately eradicated. Humancentric implants for surveillance will alter the “inner man” as much as the externals of technological innovation will transform the “outer man.” There are those who would argue that the body is obsolete and should be fused with machines; there are others who would support mind and identity downloading. In the context of such futuristic scenarios, Andrew Ross has aptly spoken of the “technocolonization of the body.” Others on the cutting edge of the digital world are using technology in ways supposedly never intended by the manufacturers.
If the elements of this discussion that might point to the potential mushrooming of new totalitarian regimes seem paradoxical – after all, we are living and reveling in a postmodern and liberal society where the individual cult on a mass scale is idolized and thriving – then we should stand back for a moment and reconsider the emerging picture. Two prominent features of the murderous regimes of Stalin and Hitler were the obsession with state secrecy and the detailed collection of all sorts of evidence documented in scrupulous registers. Related to this collection of information was the well-known and beastly numbering of minorities, prisoners, and political dissidents. In our time, as privacy experts such as David Lyon warn, this type of “social sorting” is in evidence once more. Where are we heading today? In response, a number of states in the United States (including North Dakota and Wisconsin) have already passed anti-chipping bills banning the forced implantation of RFID tags or transponders into people.
In 1902 Georges Méliès' short science-fiction film A Trip to the Moon (Le Voyage dans la Lune) spawned the fantastic tradition of putting celluloid form onto the predictive word. A more recent representative of this tradition is James Bond in Casino Royale (2006). In this movie, Bond becomes a “marked” man, chipped in his left arm, just above the wrist, by his government minders. “So you can keep an eye on me?” the famous spy sarcastically rejoins. The chip is not only for identification purposes but has multiple functions and applications, including the ability to act as a global positioning system (GPS) receiver for chronicling his every move. Later in the film, when Bond is captured by his arch-nemesis, the banker Le Chiffre, he has the microchip, which looks more like a miniature spark plug, cut out of his arm with a blade. These kinds of scenarios are no longer the exclusive domain of the novelist, the conspiracy theorist, the religious apocalypticist, or the intellectual property of the tech-visionary. We have the ability and potential to upgrade these information-gathering mechanisms to unprecedented and sci-fi proportions.
Unique lifetime identifiers are more touted than ever before by both the private and public sectors as they have become increasingly synonymous with tax file and social security numbers. The supposed benefits of this permanent cradle-to-grave identification are energetically broadcast at various national and international forums, and especially in the contexts of white collar crime and national security. We are living in times in which commercial innovations will possibly match the internal complexity of the neuron with the help of the appositely called “labs-on-chips.” Writers dealing with these subjects have been speaking less of future shock and more along the lines of hyper-future shock. The key question, so far as identification and information-gathering technology is concerned, is: How are we as a concerned and informed community going to curb and regulate the broad dispersal and depth-charged reaches of surveillance? And how are we going to do this without denying the many positive and desirable applications of the infrastructures that underlie these technologies, particularly in the domain of healing the sick and the injured?
A great deal of this discussion should revolve around the related ethics of emerging technologies, and as we have noted, this discourse is especially critical when we consider the “unintentional” and hidden consequences of innovation. However, one of the methodological weaknesses in this global debate is the direct focus by some of the interlocutors on meta-ethics alone. What we must understand, if we are to make any practical progress in our negotiations, is that this subject must first be approached from the perspective of normative and applied ethics. The lines of distinction between all three of these approaches will at times remain unclear and even merge, but there are some litmus tests (human rights for example) for determining the morality and the ultimate price of our decisions.
Readers might well be asking what technology has to do with some of the metaphysical issues that we are raising here. Perhaps it would be sensible to periodically remind ourselves, as a recent discerning researcher also has pointed out, that two of our greatest thinkers, Plato and Aristotle, both warned of the inherent dangers of glorifying techne (art, skill). Techne should be subject to “reason and law.” Furthermore, Plato and Aristotle argued that techne represents “imperfect human imitation of nature.” The pertinent question in this instance might be why modern societies have gradually moved away from asking or seeking out these metaphysical connections. Such general apathy, with a few honorable exceptions, towards a philosophical critique of technology can probably be traced to a defensive response of the western economic tradition to Karl Marx's “critique of Victorian progress.”
In relation to surveillance and to ubiquitous location determination technologies, we are at a critical juncture; some might well argue that we long ago decided which road to travel. Maybe these commentators are right. Perhaps there is no longer a place for trusty wisdom in our world. Just the same, full-scale uberveillance has not yet arrived. We must moderate the negative fallout of science and control technology and, as Jacques Ellul would say, “transcend” it: lest its control of us becomes non-negotiable and we ourselves become the frogs in the slowly warming water.
The insightful and expertly considered papers that follow were presented at the IEEE-SSIT International Symposium on Technology and Society (ISTAS) 2010, in Wollongong, Australia. In one way or another each of the writers directly investigates issues related to both the technological and social implication spheres broached in this paper. Though their approaches or methodologies might differ in some evident places, they all agree that the rapid pace of the development and application of new surveillance techniques, without due diligence and involvement of the scientific and public communities at large, has the built-in potential for great disaster, in terms of societal loss of privacy, erosion of freedoms, and disintegration of trust.
Excerpts of this article were originally published in Quadrant magazine in 2009. Quadrant is Australia's leading intellectual journal of ideas, literature, poetry, and historical and political debate.
Citation: M.G. Michael and Katina Michael, 2011, IEEE Technology and Society Magazine, Vol. 30, No. 3, Fall 2011, pp. 13-17, DOI: 10.1109/MTS.2011.942312