Wearable devices with independent computing and networking capabilities change the proximity of people and visual information to the practices of self-presentation and self-perception. This article examines the disruptive effect that wearable technologies like the Digital Eye Glass present in documenting and representing the self in a surveillant world. We look at how the power relationships in self-presentation and self-interpretation are changed by sousveillant apparatuses, and we explore how these practices of “looking” mediate the subject and power in the changing ethics and politics of human-to-human and human-to-computer interaction.
Behind us are the antiquated 19th-century anthropological notions of humans as tool users and the 1960s McLuhanesque ideas of tools as extensions of our senses. In front of us is an abyss where computing moves beyond mobile tablets and wearables, beyond “smart” devices and ambient computing strategies, into our bodies and brains, and into the very definition of what it means to be human. As computers and computing move from desks and carrying cases into our watches and glasses, onto our clothes and bodies, we stand at the precipice of a new definition: what does it mean to be a “mediated” human? What is the relationship between the tool and the human? In this future, we move beyond notions of mediated reality, beyond representation, and into presentation, the construction of what is “real,” and the construction of semiotic and semantic meaning by the human-machine. The coming tsunami of wearable devices represents the normalization of liminal and diffuse spaces that will redefine our understanding of the “human” and the relationships of power, agency, and subjectivity that they mediate.
Wearable technologies—technologies that are worn on the body for extended periods and incorporate circuitry, independent processing, and wireless connectivity—are intercessors for the semiotic integration of the “tool” into our experience of the world. They represent a paradigm shift from the established trajectory of ethical and sociotechnical norms toward an undiscovered country where there is no separation between what is human and what is machine. The human-machine unit creates semiotic meaning about the world around us, yet simultaneously presents a dreamy technological utopia that may be hiding dystopian undercurrents. Wearables may signal a future where machines are an integral unit of our meaning production and where sensemaking becomes dependent on the human-machine. Our practices of looking, presentation of self and interpretation of other, and ways of seeing our environment's affordances are inextricably linked to the human-machine hybrid. We need to be aware that the hybrid is monitored by the corporations and governments that produce, control, and surveil the devices and their supporting infrastructures. From this perspective, wearable devices represent more than just a potential economic disruption; in a broader sense, they represent a disruption of the ethics by which we live.
Much like the digital revolution that brought computing to our homes and workspaces in the 1980s and 1990s, the current movement toward wearable technology is threatening our established norms and blurring the lines between technology and the body, individuals and groups, and power and the subject. For example, consider the much-hyped Google Glass project. Google Glass is a commercialized glasses frame that incorporates video capture, independent processing, connectivity, and visual feedback to allow users immediate contextual interaction with digital information—kind of like a wearable cell phone. Although many of the features of Glass are similar to what smartphones allow people to do today, the form represents a departure from established social norms by allowing unprecedented portability and capture—to go where no digital eye has gone before. This article examines how the mass adoption of wearable computing is disrupting ethical norms in the presentation of self and other through power relationships between individuals and technology, private and public spaces, and surveillance and sousveillance.
Wearables as Disruptive Technology
In The Innovator's Dilemma, Harvard Business School Professor Clayton Christensen categorized technologies as either sustaining or disruptive to an established economic market trajectory. He argued that sustaining technologies rely on incremental improvements that generally maintain the status quo. Disruptive technologies, on the other hand, destabilize established markets or market segments and, through a process of refinement, improvement, and eventual innovation, create new norms. Christensen argued that these technologies, which start at the fringes with early adopters, generally capitalize on inefficiencies, limitations, or gaps in existing technologies. As these disruptive technologies move from the periphery to mass adoption, they establish new opportunities, new norms or market niches, and, through processes of innovation and black-boxing, create a technological paradigm shift that upends previously stabilized social practices and norms.
The Apple iPad can be used to illustrate this point. By the time the original iPad was released in April 2010, mobile computing was an already-established concept. Companies like Bell Canada tried replacing their technicians' laptops with tablets in 2002. Companies like Motion Computing were also developing enterprise markets with analogous, if not homologous, devices. Motion Computing launched a Windows XP-based tablet PC as early as 2005; however, despite some early adoption, many felt that tablets were undesirable. Viseu found that Bell technicians did not want their laptops replaced; they had gotten used to bringing their work laptops home for personal use, which they considered an employee benefit. The embryonic tablet market of the time did not permit similar uses, so the technicians resisted the idea of giving up what was perceived as a perk. Despite the existence of tablets from 2000 through 2010, the culture of computing surrounding early use proved one of the barriers to mass adoption.
The introduction of the iPad in April 2010, along with an established ecosystem of software (mostly from existing iPhone apps), disrupted the desktop, laptop, and netbook markets and, perhaps more significantly, led to major social shifts in the mass adoption and use of mobile computing devices. The disruption of established paradigms and the resulting social, technological, and ethical shifts have since transformed the status quo much more broadly than even Christensen's mainly economic lens could predict. The iPad moved the masses off of desks and tables and contributed to a new norm for mobile and pervasive computing. New practices, like watching media on the second screen, began to augment, rival, and even displace established TV-viewing habits. The iPad proved to be a truly disruptive technology not just in its own market segment, but also through the broad social and ethical changes that accompanied the technology. Likewise, the eventual mass adoption of wearable technologies will be economically and, more importantly, ethically disruptive to human interactions and institutions.
Like tablets, wearables are also moving from the periphery toward mass adoption. In the process, they are moving liminal cyborg identities from the margins of sociotechnological practice toward the mainstream. These devices, which provide individuals continued bidirectional connectivity, will change the presence and immediacy of information in daily experience as well as our presentations and representations of ourselves. Self-presentation, mediated by wearable technologies, has direct and immense implications for the relationship of power between self and other or, as Goffman put it, between the actor, the audience, and the very nature of the “stage” upon which we construct our identities.
Presentation of Self
In the 1950s, Goffman used the stage as a technological metaphor for social interactions between self and the “other” in institutional and institutionalized settings. In his writings and his seminal book, The Presentation of Self in Everyday Life, Goffman frames the performance of self as a theatrical activity that is fundamentally asymmetric. He stresses that actors see from one perspective but are viewed (to paraphrase Lacan) by their audiences from many perspectives. He suggests that individuals can maximize the effectiveness of their performance by hiding aspects of the self in the backstage while allowing other aspects to appear on the front stage. Audiences receive verbal messages from the individual but are simultaneously able to read multiple cues of the performance beyond what is simply said or overtly offered by the presenter. For example, an individual who is trying to portray a particular identity at a border crossing and who appears nervous and fidgety may give inspectors unintended cues about his or her actual identity. Although the individual may wish to be viewed in one way, according to the presented documents, the performance may be interpreted differently by border officials, resulting in asymmetric viewing (i.e., individuals display only chosen aspects of self, while the audience reads more into the performance than was intended).
“Backstage” becomes the metaphor and mechanism by which individuals hide aspects of themselves that are disadvantageous to the goals of an exchange. This private psychological or technological space is where individuals can be themselves and is ultimately a social imperative for what we consider privacy and democracy (see ). According to Goffman, we, as actors, use props and strategies to establish our identities depending on contextual needs. He also argues that we are both actors and audience members in institutionalized exchanges, implying a contextual and iterative exchange between the self and other and suggesting an agency on the part of both parties to construct and deconstruct performances as they occur. Although this exchange does not completely mitigate the asymmetries of individuals dealing with others, it does afford the self a presentation “toolbox” that can help achieve specific, contextually integral identities.
In real life, these exchanges can be seen in basic interactions between individuals who pose for ID pictures at government-authorized agencies. People posing for a photo for a driver's license or other photo ID—a 20th-century technology currently under siege by biometrics and (digital) visual analytics—tend to perform identities by subtly positioning their heads or posture for the picture and, more explicitly, by choosing appropriate outfits for the experience. Before some countries announced rules catering to early face-recognition systems in the mid-2000s, individuals posing for institutional IDs could smile to make themselves seem more friendly, pleasant, and personable. This strategy is significant not only at the initial enrollment into the institution but also later, during future performances, when individuals are asked to provide their photo IDs.
Wearable computing changes the way we present the self in everyday life. First, technological mediation allows for a new stage for presenting the self. Playing on Donna Haraway's cyborgian ideal, wearables like digital eye glasses allow individuals to construct and share potentially unconventional perspectives of self. Although by no means mainstream yet, this is seen in the practice of lifeblogging, which can be framed as an ethically disruptive practice that changes the presentation of self. Also known as cyborglogging, glogging, lifeglogging, lifelogging, and lifecasting, lifeblogging is an individual's continuous broadcast of his or her everyday experience. Lifebloggers present aspects of the self in the context of everyday life. The first person to officially document a lifecast (to stream continuous, live, first-person video) from a wearable camera was Steve Mann on 22 February 1995. Mann serendipitously used his “wearable wireless webcam” as a roving reporter on the MIT campus when he happened to encounter and capture a fire that had broken out. Hove maintained that the wearable web camera went too far and prophesied that constant video broadcasting would have ethically transformative implications. However, Steve Mann was a single individual, so it was impossible to know at that time the exact implications of wearable technology on a mass scale.
Historically, radio and television generally provided curated and produced materials from centralized broadcasters. Lifecasting, by contrast, gathers and broadcasts ubiquitous information “from below,” streaming 24/7 the incidental observations and events captured as part of the continuous flow of information. Ordinary people's audiovisually recorded events become another official record that may be used to challenge authoritative history (serving, for example, as an alibi), a practice Michael and Michael refer to as crowd-sourced sousveillance. Furthermore, these incidental vignettes of life become more real than the truth produced by centralized media channels—a kind of Baudrillardian hyperreality. Somehow, captured experience seems perceptibly more raw, more real, and more genuine than produced media (see, for example, https://giveit100.com, where participants are challenged to create 10-second videos of themselves for 100 days). This is not to say that video captured from below is not subject to the same limitations as video captured from above; both can be tampered with, but crowd-sourced “gazing” usually yields many versions that can corroborate the authenticity of an event, as opposed to the single version of CCTV from above. Even though most of the content stemming from lifecasts can be considered banal, it does demonstrate the potential of mobile and pervasive media to capture marginal and marginalized narratives, news, and views.
This romanticized view of lifeblogging promises to propel the integration of mediated experience into the “everyday” of human experience. This is particularly significant when trying to understand a subject's relationship to the means of power and self-presentation in our everyday lives. With wearables, big “P” power, or the institutionalized mechanism of agency, control, and surveillance of bodies in modern societies, becomes increasingly mediated, surreptitious, ubiquitous, and coconstituted by its subjects. Mediating power changes the subject's ethical conduct in everyday life. However, the promise of lifeblogging to construct individual perspectives hinges on the design and control of the supporting information architecture. Information flow and the power over its storage, analysis, and distribution have great implications for individuals' empowerment. The institutionalization of these new practices of looking, rather than giving agency to the individual, may simply entrench institutions' established power structures. This is particularly true of devices like Google Glass, where the corporation mediates the storage, broadcast, and (likely) analysis of the visual data and user interactions. Further, what a lifeblogger's stream reveals about the blogger's own backstage (and about others) may prove more costly than any benefit gained from attempting to check institutional power and surveillance. For example, a lifeblogger's stream may alert a potential robber that the blogger is away from home, or it may capture other individuals surreptitiously, yielding information that is then used against them by the custodians of the information (Google, in the case of Glass) or third parties with whom the custodians share it (for example, a national security agency).
Steve Mann and the thousands of early adopters of his EyeTap technology were on the fringes of these sociotechnical practices for decades. The promise of those on the fringes and edges, at least according to scholars like Haraway and Derrida, is that they are often the ones who help establish the spaces that eventually define the mainstream. The mass adoption of lifecasting through technologies like Google Glass therefore has significant implications for the ethics of social and institutional interaction, as well as for the overall power relationships of gazing: the visual construction of meaning.
Shifting Gaze of the “Other”
The second potential shift presented by wearables involves the perceptions based on one's self-presentation. The promise of immediacy built into devices like the Digital Eye Glass allows audiences to view an individual's performance and access information about the performer in real time to an unprecedented degree. Wearable computing potentially gives synchronous access to asynchronous information about an individual's backstage to a degree that was previously unimaginable and unavailable. This synchronous access to information about the self, facilitated by ubiquitous, mobile, and wearable computing, challenges a person's ability to maintain a Goffman-style backstage identity. For example, emerging social media allow individuals to present what Sherry Turkle, in Life on the Screen (1995), termed the multiple distributed self, in which one's identity is contextually compartmentalized. On Facebook, an individual can construct one facet of the self for presentation to the “other,” whereas on Twitter, the same “body” may present a different persona or anonym, a different version of the self. This phenomenon has also been discussed as the idio-technopolis. The practice becomes problematic, however, when barriers between different self-presentations break down because of algorithmic surveillance strategies. When a boss surveils an employee on Facebook or a parent “spies” on a child, that individual's backstage is compromised; wearable technologies make this kind of surveillance ever more possible. Moreover, the erosion of an individual's backstage is not limited to a temporal presentation of self. Data captured in one context can be retroactively searched and shifted to a completely different contextual construction of self. This temporal dissonance represents an identity function creep that was too difficult and resource-intensive to perform on mass scales before digitization and the currently evolving signal processing of big data.
The control that groups of “others” exert over an individuated “self” is deeply rooted in the human condition and in the formation of society. In other words, despite changing technological paradigms, the self-presentation, whether on a stage, on an ID card, or through a biometric measurement, remains an archetypical enactment of power over the subject. Mobile, wearable, and ambient computing may, for some, represent an erosion of surveillant control or power on the part of the subject.
Mediating Technologies, Self-Presentation, and Power
The nature of the stage—the technological mediation involved in self-presentation—and the audience's gazing potential are key to understanding the shifting relationships of power embedded in the affordances for meaning that wearable computing represents. Currently, we are experiencing technological power mediation between the subject and “other.” This mediation is essentially a reconfiguration of power—of how technology is used to mediate agency and subject construction. According to Foucault, power is the mechanism by which humans are made into subjects of economies, institutions, and other forms of classification. Power is the underlying means by which we structure our societies and govern our ethical conduct. Human beings are objectified into systems of governance through the production of subjects. Producing an objectified subject is, according to Foucault, a key process in enacting power, and that production is mediated and changed by wearable technologies. In the case of the Digital Eye Glass, the wearer, the audience viewing the broadcast, and anyone caught in the gaze of the glass all become subjugated to the power networks maintained by the device.
For Foucault, the use of looking as a form of internalized discipline has become increasingly relevant when discussing multiple forms of visual and video surveillance. Foucault describes the isolation of citizens' bodies into individualized identities that are constantly under obfuscated and asymmetric surveillance. In Discipline and Punish: The Birth of the Prison, he depicts the prison as the model for a modern economy of power that distributes and internalizes control through gazing based on a generalized (and generalizable) body. In Foucault's panopticon, a system based on asymmetric gazing between guards and prisoners, the watcher sees the body of the prisoner without being seen in return. Agents of the institution generally write, maintain, store, and interpret a record or identity, whereas the system is generally kept opaque to the subject of the gaze. The guards (metaphorical authoritarians) use their ability to “see-but-not-be-seen” to observe and discipline people. This model suggests that we, as citizens, generally observe the rules of the authority in power because we fear repercussions. Foucault calls this internalized discipline. For this surveillance mechanism to monitor mass populations effectively, there is a tension between the forms of localization (from literal to metaphorical imprisonment) and their resource implications for oversight. For the most part, in modern democracies, the gaze of surveillance and the threat of being caught encourage most citizens to obey (become docile bodies). A national ID system may thus be understood as a way of disciplining individuals into localized identities to create “known” docile bodies.
The knowledge produced by the state's gaze upon its citizens becomes a form of power. Bodies are linked (metaphorically localized) through institutional mechanisms to a specific file or record (i.e., identity), institutionalizing the presentation of self. These systems are often biased and depend on gazing asymmetries.
Foucault's metaphor of the citizen as prisoner in an institutional (identity) panopticon “without walls” has revolutionized the understanding and analysis of “power” in modern societies. Arguably, surveillance has pushed well beyond Foucault's vision of one-way gazing onto a localized body. Surveillance has diversified, using new ways of looking (e.g., data analytics) and new power relationships, and it has become abstracted to the level of symbols, or binary codes, aggregated and reconstituted at will by those who control it. Although surveillance and oversight once literally meant “to watch from above,” increasingly, the word surveillance (and the word oversight, its literal English translation) has taken on a broader meaning. Now, the “sight” (veiller) is used more broadly than for visual sensing alone; surveillance now also refers to audio monitoring, pressure sensing (e.g., “smart” floor tiles), and other means of data collection. The “above” (sur) no longer only means to be in a physically high place (such as a high mountaintop lookout, as suggested by Sun Tzu in The Art of War): it is now thought of metaphorically, as in to be in a position of power in a hierarchy (e.g., police keeping watch over citizens, shopkeepers keeping watch over their shoppers), regardless of one's physical location. For example, the computationally intensive surveillance infrastructure depicted in television shows such as Person of Interest is, in fact, often housed in a basement computer room or a deep underground data vault rather than at the top of a high mountain, although its network of cameras often watches us from above. A police officer recording telephone conversations from the basement of a police headquarters is still doing “surveillance,” even if the officer is physically underground, listening intently on earphones.
A presidential oversight committee eavesdropping on that police officer, unbeknownst to the officer, is yet another form of surveillance—meta-surveillance (i.e., a form of “surveillance of the surveillance” that is not necessarily “sousveillance”).
The data, information, and knowledge we generate online and offline are increasingly the subjects of inspection, analysis, and aggregation by those in high places (governments, corporations, and other large organizations), as the NSA data net suggests. Surveillance has become more a matter of collecting and analyzing information than merely “looking down at people.” The very act of “looking” has become abstracted into algorithms and databases hidden behind the data shadows we leave behind. We are no longer “looked at” from a hilltop or a high turret; we are now inspected in government and corporate databases years after our lives have changed. Data from our present and past can be searched and looked at by authorities and corporations across the boundaries of time and space (e.g., from distant cities or at times in the distant future). The “looking” does not stop there. Increasingly, algorithms are being taught to look ahead, to anticipate our potential actions in what is now called predictive analytics. What has remained constant is the power relationship between the gaze and its subject, which continues to favor the institutionalized agent (government, corporate, or hybrid entities).
Society and technology have moved us beyond the monolithic panoptic systems of surveillance into a regime often characterized by multiple levels, methods, and agents of looking. The central thesis that we are regulated by acts of looking and being seen remains a powerful argument. With advances in mobile computing, the surveyed is increasingly a surveyor, just as a Goffman performer is increasingly a member of his own audience. The act of exerting power by gazing at subjects is being further refined and undermined by increasingly fractionated practices of looking.
Politics of Sousveillance
Surveillance scholars have argued that we do not live in virtual Foucauldian prisons—at least not ones without some form of agency. From this perspective, new media, particularly the personalized broadcasting facilitated by devices such as networked wearables, have significant implications for the power dynamics within society; they are the windows that allow individuals to gaze back. Imagine that prisoners in Foucault's panopticon could look back, see their guards, and record their interactions. In The Matrix (1999), human beings are reduced to a shared delusional state and serve as mere batteries for their mechanical overlords. In George Orwell's 1984, citizens cannot see who is behind the telescreen. The power of mobilized media that are always on and able to broadcast and access a network of followers changes the broader notions of surveillance and oppression proposed by The Matrix and 1984. We are entering an age in which people can not only look back but, in doing so, can potentially drive social and political change. To paraphrase the film, no one can tell you what the Matrix is, but once you've seen it, you are immediately embroiled in the power politics of sousveillance.
Sousveillance has the potential to change the relationships embedded in the asymmetric social control that Foucault discussed as the formational characteristic of modern societies: the panoptic gaze. But sousveillance differs critically from surveillance in the relationship of power between the observing gaze and its subjects. Sousveillance represents a “gazing” from below. The viewer is, by definition, at a lower power potential than the subject of the gaze. In this power triangle, if the viewer's incline is small, relatively little effort is required for effective undersight. However, if the incline is steep, much greater power is required for effective change through sousveillance. When coupled with political action, the practice of viewing from below becomes a balancing force that helps democratic societies move the overall state toward a kind of veillance equilibrium, which has been referred to as equiveillance.
Sousveillance allows individuals to enact power over organized gazing units by establishing counter records—distributing data about illegality or abuse of power over individuated subjects. This individuates institutional practices and creates subjects of the institutional “other.” Although this is not new in the entire scheme of human society and experience (institutional watchdogs like civil liberties associations have existed for some time), wearables may extend the scope and potential of individuals opposing institutionalized power.
In relation to systems of institutionalized power, sousveillance can also be conceptualized as a construct of organization. Highly organized, institutionalized surveillance is often only possible through complex institutional systems, whereas less organized, decentralized mechanisms that are more informally distributed could be designated as sousveillant mechanisms. Sousveillance can then be conceptualized as distributed rather than bottom-up gazing. Thus, wearing a camera does not necessarily mean we are “shooting back” against surveillance but that we are distributing the surveillance. This perspective further suggests a kind of continuum, with different orders of gazing that are not mutually exclusive. At the same time, the potential to design systems of distributed undersight with unprecedented capacity for democratization (provided one can afford the technology and access) exists like never before.
Technologies of Sousveillance
Arguably, sousveillance depends more on technology than does surveillance. Technology is one mechanism that can help mediate the asymmetries of power between a viewer and a subject. In the case of surveillance, technology, as in Foucault's panopticon, can intensify the viewer's power over the subject. Although not absolutely necessary for sousveillance, pervasive digital mobile technologies can make sousveillance more effective through archiving, transmission, and distribution. Technology extends the range of veillance by facilitating the capabilities necessary to see (and record) the subject and to mobilize political force against the power incline.
Wearable technologies like the Digital Eye Glass can play a significant role in sousveillance precisely because looking from below is both practically and metaphorically disadvantageous (if unmediated by technology, such as glasses or a telescope). In this case, portable computing has not proved to be enough. In the early days of the World Wide Web, scholars jumped at the gnostic and democratic potentials of portable computing. As with other technologies, once the initial optimism and excitement began to wane, people realized that computing and transmission did not provide the critical mass necessary to create a cascade of large-scale institutional change. In recent years, however, mobile-networked devices have been combined with social networks that can trigger political disruption and change. Coupling portability, capture, storage, and distribution, portable media have allowed us to bring along content, but mobile media (portable media with dedicated Internet infrastructures) provide significant opportunities for individuals to capture power abuse or corruption as well as to quickly distribute and communicate it to others for political action. For example, a personal safety device that simply transmits and records data at a remote location protects the data from being destroyed by an attacker or perpetrator, regardless of whether the perpetrator is a low-level street thug or a high-ranking corrupt police officer. The coalescing of power through wearables and social media distribution represents a mechanism for potential undersight; adding sousveillance to the veillance mix supports the (re)shaping or re-envisioning of society into a more continuous dialogue spectrum between prisoners and guards, politicians and citizens, and bureaucrats and people. This relationship implies that there is power in the act of looking back and in the acts of many people looking, when mediated by technology. 
Even if an individual cannot see his or her guard, the looking back by many provides a kind of backchannel or social check and balance to help ensure that surveillance is conducted within regulatory and sociopolitical boundaries. Looking back allows the individual presenting the self in everyday life to broadcast the subject's reaction and mobilize an audience. Instead of one individual looking back at the panoptic guards, wearable technologies make it possible to have many eyes looking back.
The potential of sousveillance is, however, also clearly linked to who controls the flow of the captured data, who has the resources to analyze the data, and who has the power to act on the information derived from that data. Even if sousveillance becomes a tool for individuals and grassroots organizations, if state agencies are able to surreptitiously tap into the multitude of gathered data, it is far more likely that their analytic resources and established power structures will co-opt these practices and revert to top-down surveillance.
With mobile and pervasive computing quickly becoming part of our reality, sousveillance becomes increasingly possible. This shift toward the mainstream makes it particularly apropos to rethink the politics and ethics of sousveillance. The politics of sousveillance are shaped in part by the channels, media, and technologies that carry the message, which in turn influence its power and efficacy. The relationship between mediated and distributed undersight, technology, communication, and politics begins at the systems design stage. If the media technology is designed to appease corporate interests or institutionalized surveillance lobbies, the final products will likely favor established power interests. As social constructionists of technology like Latour, Callon, Pinch, Bijker, and others have suggested (see  and ), technologies have politics "baked right in," as designers and engineers make decisions and tradeoffs even before users make choices about use. Therefore, a politics of sousveillance can likewise be "baked into" wearable technologies, if gazing neutrality gives individuals more control over the information they see and record than is preloaded by corporations and governments, the custodians and arbiters of the gathered information.
People Looking Back
Although the political recalibration between individuals and institutions resulting from the sur/sousveillant potentials of wearable technologies will have a direct ethical impact on social and political interactions from democracies to dictatorships, more immediate changes can already be seen in the implications for interpersonal interactions—the performing of identity between people in everyday life. The indiscriminate capture of information in public and quasi-public spaces (malls, bars, restaurants) will lead to changes in reasonable expectations of privacy. A fundamental ideal of privacy is that individuals can access information about themselves, correct it when wrong, and challenge the record using evidence. To deny sousveillant technologies is to deny a fundamental aspect of individual power over the institution as subject. But at the same time, is it right to allow privileged technophiles to create subjects of others in public and quasi-public spaces without their consent or their ability to screen or correct the broadcast or archive?
A video short by Infinity AR  asks the viewer to imagine a future where a mediated individual cheats at pool and picks up a bartender, illustrating some of the changes that could develop in daily life with the use of wearable technology . The immediacy of the information provided by the Digital Eye Glass allows the protagonist, an obviously affluent and extraordinary specimen of humanity, to visualize his wardrobe, interact with his Ferrari, and navigate New York City streets. All these seem like credible and innocuous uses of the technology, but when he enters a Manhattan bar, there is a distinct repositioning of how the technology can be used. First, he plays a game of pool against an “ordinary” human—an individual unaugmented by the Digital Eye Glass. The pre-glass human does not benefit from the virtual force lines and angle options the technology provides to the protagonist, and it is easy to see how this lack of information disadvantages him and forces him to use his skill, mind, and imagination. From at least one vantage point, the discrepancy between the two players could be called cheating—steroids for the eyes and mind. The two are clearly on separate playing fields, but this game is also on a different interpersonal playing field from the next exchange.
After the pool game, the augmented human goes to the bar, where an attractive young bartender stands waiting. The face-recognition algorithm, seemingly unprompted, searches for her identity and pulls up her Facebook profile. Farfetched? Perhaps. Depending on the thresholds and tolerances of the algorithm and its access to and integration with social media, the chances of such an exact and instant match seem questionable at this point, but progress is being made . The Digital Eye Glass-equipped human sees before him the woman's profile, birth date, and astrological sign—tools that allow him to engage her in conversation and link to her in the social media universe. As in the case of the pool game, this, too, can be seen as cheating, although this time as a gendered game of power and self-presentation. Men generally tend to be early adopters of new technologies, and as an early adopter, this man clearly has a technological advantage for personalized surveillance. He is able to use the Digital Eye Glass to search out information about the woman prior to, during, and after their encounter. Positioned as an audience member to the woman's self-presentation, the Glass-augmented man is no longer just reading the front-stage performance—the verbal and body-language cues—but is now able to technologically gauge physiological responses (pupil dilation, heart rate) at previously unattainable levels. Moreover, he now has unprecedented access to backstage information about the woman's identity. The technology and supporting infrastructures allow him to "friend" her on Facebook, invite her to his apartment, and "guess" her favorite wine. These capabilities further extend the asymmetries between the identity of the performer/performance and the audience's gaze, increasing the power differentials between the individuals; they could even be considered a form of social engineering.
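The "thresholds and tolerances" at stake here have a concrete algorithmic form: face recognition is typically a nearest-neighbor search over feature embeddings, gated by a similarity threshold. The sketch below is purely illustrative; the embedding vectors, names, and threshold value are invented, and real systems use learned high-dimensional embeddings rather than hand-written lists.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_profile(query, gallery, threshold=0.9):
    """Return (name, score) for the best gallery match, or (None, score)
    if no candidate clears the threshold. Raising the threshold trades
    recall for precision: fewer false "instant matches" of strangers,
    but more genuine matches missed."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None, best_score
```

Where a designer sets that threshold, and whose social-media profiles populate the gallery, are exactly the kinds of decisions "baked in" before any wearer makes a choice about use.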
There is another layer to this example—one that goes beyond the power politics of gender and self-presentation between peers. All the interactions between the individuals, the device, the social networks, and the data produced by the searches, face-recognition algorithms, geopositioning, and other forms of signal processing are likely stored and mined by the service providers. These data can likely be accessed and assessed by third parties, including government agencies, individualized or aggregated, and sold or distributed across multiple networks. M. G. Michael calls this the "axes of access" . The foods eaten by the Glass wearer may be of interest to his health insurance provider, doctor, or local grocery outlets. The clothes the man chooses may be recorded, identified, and used to inform targeted advertising or other algorithmic surveillance. In this space, where decisions were once made privately (what we eat, how we dress) and in private places (personal kitchens, bedrooms, living rooms), the real collides with the virtual. The proximity of information to individual decision-making changes the immediacy of information to power and subject (see augmented reality implemented in contact lenses as an example ).
Wearables make us simultaneously potential sousveillant observers—with our own credible records—and passive drones of data collection for the institutional "other"—be it Google or the NSA. As Michael and Michael  write, "all this monitoring might also mean that we become acutely aware that we are being constantly watched and expected to act in particular ways in particular situations." This negotiation of self ultimately impacts our own ability to be creative, different, diverse, and individual. It is not only privacy that is increasingly at risk but also the wonder of improvisation; we will be playing to a packed theater instead of being comfortable in our own skins and identities. The ethics of looking back are by no means monolithic; sousveillance itself can create subjects of the observed audience. When those who do not have looking-back technology get caught in the veillance net, they become subjects of the power that wearables establish in their agents. People caught in the gaze of the Digital Eye Glass become part of veillance records and data for analysis. The Digital Eye Glass worn into bars, on open streets, and even in operating theaters will invariably challenge reasonable privacy expectations, making casualties of those caught in its gaze and indiscriminately capturing data to be analyzed in real time or retrospectively. Those casualties may also act against the data gatherer in ways never imagined.
As much as we are subjects of institutional gazes, we are increasingly gazing back at institutions using technology, new media, and distributed “cloud” politics. More akin to the telegraph than to radio or television, new media are not only channels for unidirectional broadcasting but incorporate a feedback capacity and a mechanism for organization and action. When wearable devices reach mass adoption, they will change the power politics of looking. Wearables will undoubtedly be disruptive technologies in the economic sense, but they will also likely be disruptive in the ethical and political sense.
Where virtual spaces provide affordances to distribute multiple self-identities, the hybridization of space and information will create new links between performed identities and bodies. Bodies tracked by analytics will inevitably find it harder to seek refuge behind masks or identities performed on different stages. This will change the interactions between people and communities. If a person wearing the Digital Eye Glass can access your various profiles in real time and review your likes, dislikes, and network connections, the first meeting becomes an entirely different interaction than one between two unmediated people. In many ways, the mystery, spontaneity, and discovery are taken away. It is not an interaction of learning in the moment but an interaction of the lived and learned, already time past.
Wearables create a much more complex and nuanced system of mediated looking in which individuals become nodes of sousveillance and surveillance, depending on the intent, context, and proximity of information. Closer integration between external and internal information processing will mark an intermediate point that redefines our understanding of public and personal spaces, our understanding of privacy and instant mass information distribution, and our relationships with our information and ourselves. By breaking down the boundaries between the virtual and the real and by establishing mediated space, technologies like Google Glass, when adopted by the masses, will rewrite any reasonable expectation of privacy.
Wearable technologies promise to augment the world by reducing the distance to information and communication technologies. Access to information and augmenting our looking practices will change what we see and sense as well as how we see and sense our world. When the technologies move beyond mediating our senses, do they become replacements for our senses, our way of understanding the world, and our ways of seeing affordances in the natural landscape? At what point does the technology begin to construct—rather than just mediate or augment—meaning? When the technology becomes an agent in constructing meaning (likely before it is even integrated into traditional biological boundaries), does the tool define what it is to be human? These are important questions to consider, especially at a time when we are already speculating on the socioethical implications of "uberveillance"—embedded surveillance devices for the body—which will usher in an even greater pervasiveness.
1. J. Boone, Just when you thought Google Glass couldn't get creepier: new app allows strangers to ID you just by looking at you. E! Online, Feb. 2014, [online] Available: http://www.eonline.com/news/507361/just-when-you-thought-google-glass-couldn-t-get-creepier-new-app-allows-strangers-to-id-you-just-by-looking-at-you.
2. R. Abbas, K. Michael, M. G. Michael, "Using a social-ethical framework to evaluate location-based services in an internet of things world", Int. Rev. Inform. Ethics, vol. 22, no. 12, pp. 42-73, 2015.
3. W. Bijker, T. Hughes, T. Pinch, D. Douglas, The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, MIT Press, 2012.
4. C. Christensen, The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, Harvard Business Review Press, 1997.
5. I. Kerr, S. Mann, Exploring equiveillance, Jan. 2006, [online] Available: http://wearcam.org/anonequiveillance.htm.
6. J. Ferenbok, "The identity myth: constructing the face in technologies of citizenship", 2009, [online] Available: https://tspace.library.utoronto.ca/bitstream/1807/24306/3/Ferenbok_Joseph_200911_PhD_thesis.pdf.
7. M. Foucault, "The subject and power," afterword in H. Dreyfus and P. Rabinow, Michel Foucault: Beyond Structuralism and Hermeneutics, Univ. of Chicago Press, pp. 208-226, 1983.
8. M. Foucault, Discipline and Punish: The Birth of the Prison, Vintage Books, 1979.
9. K. Gates, Our Biometric Future: The Social Construction of an Emerging Information Technology, 2004.
10. E. Goffman, The Presentation of Self in Everyday Life, Doubleday Anchor Books, 1959.
11. A. Hove, Wearable web camera goes too far. The Tech Online, June 1996, [online] Available: http://tech.mit.edu/V116/N28/mann.28c.html.
12. Infinity augmented reality. YouTube, Sept. 2013, [online] Available: https://www.youtube.com/watch?v=X8z5TozXsSs.
13. D. Lyon, Theorising Surveillance: The Panopticon and Beyond, Willan Publishing, 2006.
14. S. Mann, "Through the glass lightly", IEEE Technol. Soc. Mag., vol. 31, no. 3, pp. 10-14, Dec. 2012.
15. S. Mann, "Wearable computing: a first step toward personal imaging", IEEE Computer, vol. 30, no. 2, pp. 25-32, Feb. 1997.
16. S. Mann, J. Ferenbok, "New media and the power politics of sousveillance in a surveillance-dominated world", Surveillance Soc., vol. 11, no. 1/2, pp. 18-34, 2013.
17. M. McLuhan, Understanding Media: The Extensions of Man, MIT Press, 1964.
18. K. Michael, M. G. Michael, "Historical lessons on ID technology and the consequences of an unchecked trajectory", Prometheus, vol. 24, no. 4, pp. 365-377, 2006.
19. K. Michael, "Editorial: The idio-technopolis", IEEE Technol. Soc. Mag., vol. 31, no. 2, pp. 5-12, June 2012.
20. K. Michael, M. G. Michael, "The social and behavioral implications of location-based services", J. Location Based Services, vol. 5, no. 3–4, pp. 121-137, 2011.
21. K. Michael, M. G. Michael, "Sousveillance and the social implications of point of view technologies in the law enforcement sector".
22. K. Michael, M. G. Michael, "Converging and coexisting systems towards smart surveillance", Awareness Magazine: Self-Awareness in Autonomic Systems, June 2012, [online] Available: http://www.awareness-mag.eu/view.php?source=003989-2012-06-19.
23. K. Michael, M. G. Michael, C. Perakslis, "Be vigilant: There are limits to veillance", in The Computer After Me, Imperial College Press, pp. 189-204, 2014.
24. K. Michael, M. G. Michael, "No limits to watching?", Commun. ACM, vol. 56, no. 11, pp. 26-28, 2013.
25. K. Michael, K. Miller, "Big data: new opportunities and new challenges", IEEE Computer, vol. 46, no. 6, pp. 22-24, June 2013.
26. E. Vallejo, Sight: contact lenses with augmented reality—futuristic video. YouTube, July 2012, [online] Available: http://www.youtube.com/watch?v=GJKwHAvR4uI.
27. A. Viseu, "Simulation and augmentation: Issues of wearable computers", Ethics Inform. Technol., vol. 5, no. 1, pp. 17-26, Mar. 2003.
28. A. Westin, "Privacy and freedom", Wash. Lee Law Rev., vol. 25, no. 1, pp. 166, 1968.
29. M. Wilson, How to make the future of dating less creepy. Co. Design, Jan. 2014, [online] Available: http://www.fastcodesign.com/3024069/how-to-make-the-future-of-dating-less-creepy.
30. D. Wood, S. Elden, "Beyond the panopticon? Foucault and surveillance studies", in Space, Knowledge and Power: Foucault and Geography, Ashgate Publishing, pp. 245-263, 2007.
Keywords: Wearable computing, Tablet computers, Visualization, Semiotics, Biomedical monitoring, Google, Portable computers, wearable computers, human computer interaction, human-to-computer interaction, wearable devices, independent computing, self presentation, self perception, digital eye glass, self interpretation, human-to-human interaction
Citation: Joseph Ferenbok, Steve Mann, Katina Michael, "The Changing Ethics of Mediated Looking: Wearables, veillances, and power", IEEE Consumer Electronics Magazine ( Volume: 5, Issue: 2, April 2016 ), pp. 94 - 102, Date of Publication: 11 April 2016, DOI: 10.1109/MCE.2016.2516139.