Carly Burns of UOW Interviews Katina Michael

1. What are you working on in 2018?

Always working on lots and lots of things at once.

Carly Burns, UOW Research

  • Socio-ethical approaches to robotics and artificial intelligence development and corresponding implications for humans
  • Tangible and intangible risks associated with bio-implantables for the medical and non-medical fields
  • Ongoing two-factor authentication requirements despite the aggressive rollout of biometrics (especially facial recognition and behavioural systems)
  • Wearable cameras and corresponding visual analytics and augmented reality capabilities in law enforcement
  • Blockchain registers and everyday transactional data flows in finance, education, and health
  • Social media pitfalls and technology dependencies, screens and addictions
  • Unauthorised and covert tracking technologies, location information sharing, crowdsourcing and notions of transparency
  • Defining uberveillance at the operational layer with respect to the internet of things and human rights

At the heart of my research is the interplay of engineering, law, policy and society.

2. In regard to your field of study or expertise, what are some of the most innovative or exciting things emerging over the next few years?

  • Exoskeletons in humans, transputation (humans opting for non-human parts), and the ability to do things that were once considered ‘superhuman’ (e.g. carrying 2-3 times one’s body weight, or extending human height through artificial limbs).
  • Brain-to-computer interfaces to help the disabled with basic access to communications and everyday fundamental necessities (e.g. feeding oneself). However, breakthroughs in this space will quickly be adopted by industry for applications in a variety of areas, with the primary focus being entertainment and search services.
  • Smart drug delivery solutions (embedded, swallowable or injectable pills and chips) that allow remote health systems to monitor your drug-taking behaviours and daily exercise routines, and to issue wander and fall-down alerts.
  • An electronic pacemaker the size of an AAA battery (or smaller) acting as the hub for body area networks, akin to a CPU in a computer, allowing for precision medicine and read-write rights in a world built on the Internet of Things ideology.
  • Personal AI services: consider this the rise of a new kind of personal Internet. Services that will be able to gather content and provide you with thought-specific data when you need it. Your life as one long reality-TV episode: captured, ready for playback in visual or audio form, adhering to private-public space differentials. Captured memories and spoken words will be admissible evidence in a future e-court, but also available for new job opportunities. The tacit becomes capturable and can help you get your next job.

3. In regard to your field of study or expertise, what are some of the things readers should be cautious/wary of over the next few years?

  • The technology we are being promised will get very personal and trespass on privacy rights. Whereas in Orwell's 1984 we were assured that at least the contents of our brains were private, today behavioural biometrics alongside detailed transactional data can provide a proactive profile of everyday consumers. Retaining anonymity is difficult, some would say near impossible. We have surveillance cameras, smartphones and watches that track our every movement, smart TVs that watch us in our homes, IoT devices that do motion detection and human activity monitoring in private spaces, and social media with the capacity to store instantaneous thoughts, images and multimedia across contexts. This loss of privacy will have psychological impacts and fallout, whether in increasing rates of mental illness, or in the room we require to develop as human beings, that right to learn and reflect on our mistakes in private. Humans are increasingly becoming decorporealised. We are fast becoming bits and bytes. Companies no longer see us as holistic customers but as pieces of transactional data, as we are socially sorted based on our capacity to part with our dollar and the influence we have on our peer groups.
  • The paperless/cashless paradigm is gathering momentum. It has many benefits for organisations and government, and especially for our environment. But it has major implications for auditability, so-named transparency, the potential for corrupt practices instituted by skilled security hackers, and the need for traceability. Organisational workflows that go paperless will place increasing pressure on staff and administration, triggering a workplace of mere compliance (and box-ticking) as opposed to real thinking and assurance. The cashless part will lead to implicit controls on how money is spent by minority groups (e.g. the disabled, pensioners, the unemployed). This will no doubt impact human freedom and, fundamentally, the right to choose.
  • Over-reliance on wearable and implantable technologies for a host of convenience, care and control solutions. Technology will provide a false sense of security and impact fundamental values of trust in human relationships. More technology does not mean a better life; for some it will mean a dysfunctional life as they wrestle with what it means to be human.
  • It is questionable whether living longer means we age better. Because we are living longer, illnesses like dementia and cancer are increasing at an increasing rate. How do we cope with this burden when it comes to aged care? Send in the robots?
  • We have already seen a robot (e.g. Sophia) recognised as a citizen of Saudi Arabia before fundamental women’s rights have been conclusively recognised in the same state. Robots and their proposed so-called sentience will likely receive special benefits that humans do not possess. Learning to live with these emerging paradigms will take some getting used to: new laws, new policies, new business models. Do robots have rights? And if so, do they supersede human rights? What will happen when “machines start to think” and make decisions (e.g. driverless cars)?

4. Where do you believe major opportunities lie for youth thinking about future career options?

  • This is pretty simple, although I am biased: it is “all things digital”. If I were doing a degree today, I would be heading into biomedical engineering, neuroethics and cybersecurity. On the flip side, I see the huge importance of young people thinking about social services in the very “human” sense. While we are experimenting with brain implants for a variety of illnesses, including the treatment of major depressive disorder, and with DNA and brain scanning technologies for early detection, I would say the need for counsellors (e.g. genetic) and social workers will only continue to increase. We need health professionals, psychologists and psychiatrists who get “digital” problems: a sense of feeling overwhelmed with workloads, with the speed that data travels (instantaneous communications), and so on. Humans are analog; computers are digital. This crossroads will cause individuals great anxiety. It is a paradox: we have never had it so good in terms of working conditions, and yet we seem to have no end of social welfare and mental health problems in our society.
  • At the same time as the world is advancing in communications, and life expectancy continues to grow in most economic systems, an emphasis on food security, renewable energy sources that do not create more problems than they solve, biodiversity and climate change is much needed. What good is the most advanced and super-networked world if population pressures and food security are not being addressed, alongside rising sea levels that cause significant losses? We should not only be using our computing power to model and predict the inevitable changes to the geophysical properties of the earth, but to implement longer-term solutions.

5. In regard to your field of expertise, what is the best piece of advice you could offer to our readers?

  • The future is what we make of it. While computers are helping us to translate better and to advance once-remote villages, I advocate for the preservation of culture and language, music and dance and belief systems. In diversity there is richness. Some might feel the things I’ve spoken about above are hype, others might advocate them as hope, and still others might say this will be their future if they have anything to do with it. Industry and government will dictate that continual innovation is in the best interest of any economy, and I don’t disagree with this basic premise. But innovation for what, and for whom? We seem to be sold the promise of perpetual upgrades to our smartphones, and likely soon to our own brains through memory enhancement options. It will be up to consumers to opt out of the latest high-tech gadgetry, and opt in to a sustainable future. We should not be distracted by the development of our own creations, but rather use them to ensure the preservation of our environment and healthier living. Many are calling for a re-evaluation of how we go about our daily lives. Is the goal to live forever on earth? Or is it to live the good life in all its facets? And this has to do with our human values, both collectively and individually.

Matt Beard of Ethics Centre Interviews Katina Michael

Matthew Beard, Fellow of the Ethics Centre

Dr Matt Beard is a moral philosopher with an academic background in both applied and military ethics. Matt is a Fellow at The Ethics Centre, undertaking research into ethical principles for technology. Matt has taught philosophy and ethics at university level for several years, during which time he has published widely in academic journals and book chapters, and spoken at a number of international conferences. His work has mainly focussed on military ethics, a topic on which he has advised the Australian Army. He has also published academic pieces on torture, cyberwar, medical ethics, weaponising space, sacrifice and the psychological impacts of war on veterans. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his “prolific contribution to public philosophy”. He regularly appears to discuss a range of ethical issues on television, radio, online and in print. He is also a columnist with New Philosopher magazine and a podcaster on the ABC’s Short & Curly, an award-winning children’s podcast aimed at getting families to engage with ethics in a fun and accessible way. Matt is an experienced speaker, writer and educator who brings enthusiasm, rigour and accessibility to his research, speaking and media engagements.

A written questionnaire was answered by Katina Michael on September 28, 2016.

Q&A With Ethics Centre: Fitness Trackers

1. Can you envision any issues associated with health insurers offering wearable technology and the possibility of lower premiums to their customers?

  • Health insurance is a big business. High-tech companies like Samsung have already diversified into this vertical market, making them among the leading players in health insurance. Essentially, consumers who opt into a health program using smartphones or wearables like this are heralding in so-named “surveillance for care”, which still has “surveillance for control” as an underlying element. In essence these consumers are trading some of their fundamental freedoms for lower premiums. Is this the new “price of health insurance”? A major chunk of your personal information?
  • Wearable technologies are also transferable. There is no telling for certain who is wearing the human monitoring device, although over time, even in the space of two weeks, behavioural biometrics can determine who the actual wearer is at any given time from heart rates, pulse rates, stress markers and more (see the sketch after this list). In the not too distant future, disputes would be settled only by means of an implantable sensor device that could not be removed and could determine the wearer with some certainty, despite the pitfalls of cloning devices and the like.
  • Having witnessed what has happened in the car insurance industry, we can also learn a great deal. There, companies like Norwich Union launched services with identified constraints such as curfews, for example for male drivers under 25 years of age. These programs incentivise people to do the right thing, reducing the incidence of accidents during late-night driving, but in no way guarantee that the driver is better off in the longer run. The question is what happens with respect to insurance claims made by consumers that fall short of the “lower premium” standard thresholds of usage, be it the “number of steps”, the “time spent” exercising, the “calories burned daily” or even oxygen saturation levels. If you opt for a Fitbit quantified-self style program, what happens if you (1) don’t wear the Fitbit daily, or (2) have a poor track record of health for personal reasons of any type (e.g. being the primary carer of a child with autism or Down syndrome)? Might this make you less insurable across programs in the future with other health insurance suppliers? It almost takes on a “survival of the fittest” attitude, which discriminates against different members of society at the outset.
  • What is the motivation behind these kinds of programs for health insurers? Surely it is not because they feel good about reducing health premiums for their customers. How will this intimate behavioural data be used? In unrelated events? Perhaps in contradiction to advice provided by doctors? There are many cases where people have posted data about themselves on social media that has rendered their medical leave void, despite legitimate reasons for leave.
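
Here is a minimal, hypothetical illustration of the wearer-matching claim in the second bullet above. The names, baselines and tolerance are all invented for illustration; real behavioural biometrics would use far richer features (gait, pulse variability, stress markers) gathered over weeks.

```python
import statistics

# Hypothetical enrolled profiles: mean resting heart rate (bpm) per known wearer.
ENROLLED = {"alice": 62.0, "bob": 78.0}

def likely_wearer(samples, tolerance=5.0):
    """Return the enrolled person whose baseline best matches the
    observed mean heart rate, or 'unknown' if nothing is close enough."""
    observed = statistics.mean(samples)
    best, gap = "unknown", tolerance
    for person, baseline in ENROLLED.items():
        diff = abs(observed - baseline)
        if diff < gap:
            best, gap = person, diff
    return best

print(likely_wearer([61.0, 63.5, 62.2]))  # -> alice
```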

2. Do these issues outweigh the advantages? Can you see any advantages?

  • In an ideal world we might deem that the advantages far outweigh concerns over bodily and psychological privacy, surveillance, autonomy and human rights. In the end, most people say they are privacy conscious but still opt to take a loyalty card if asked at the checkout counter whether a discount ensues.
  • In an ideal world, sure, I can see advantages to getting healthier and fitter, and being more routine-based about calorie intake and calorie burning. There are many people who use their Fitbits or other smartphone apps to make sure they are doing the minimum level of exercise each day. That cannot be a bad thing if you are in control of your own data and stats.
  • My grave concern over these types of behavioural biometric apps is that the data gathered is used to further exploit the end user. “On the one hand, here is a nice discount because you are such a great example of health; on the other hand, now that I know your general behaviours in everyday life, I can onsell other services to you that I know you’ll need and use.”
  • Once you lose your privacy, you have forgone basic freedoms: you lose the decision-making power to say “hey, today it is pelting down rain, I don’t feel like going out for my daily walk”.
  • Some wearers will also find themselves misusing the data collected, whether because they want to keep pushing the boundaries of how many steps they can do in a working day, or because they are competing and benchmarking themselves against others in like groups.
  • Most people are willing to give away personal information for anything so-named “free”, but the reality is that nothing is free, and that discount you might get is being paid for some other way, most likely through the minute-to-minute dataset that is onsold to a third party for completely different uses.

3. Would insurers have an ethical obligation to inform users if they detected medically significant anomalies in the data they collect?

  • If health insurers recognise an anomaly in the data set gathered because they are cross-matching that data with other clinical outcomes or assessments, then yes they would need to inform their customer.
  • However, once a medical condition is observed, it will be recorded against a health record, and it is possible that a “predisposition” to x or y may well rule that sick individual out of any future form of health insurance. A number of women in the USA have found themselves in this predicament with the change of health policy during the Obama Administration, and have been left with very limited public health care, which hardly helps them to address their diagnosis.
  • Today, in Australia, sufferers have a right to opt out of telling their health insurer that they have a diagnosed “syndrome”, as this could affect their long-term health insurance capacity and coverage. Their syndrome would not be detectable by a Fitbit-style device, but has been medically diagnosed via a DNA test.
  • The other issue that comes to mind is whether or not “big data” will have a role to play in providing “best estimates” of a person carrying a particular type of illness into the future. Data from the Fitbit device might be linked to heart rate data showing the potential for stroke and more. For some people, the news that they are “at risk” is sometimes more of a trigger for a stroke or heart attack than continuing to lead the happy and carefree lifestyle they have. I know many people who would become incredibly depressed on being informed by a health insurer that if they don’t change their behaviours they will likely die prematurely by 20 years or so. It is certainly a complex issue and not as straightforward as some think. These are choices that people have to make.

4. Are there any ethical limits to the ways the collected data could be used?

  • Anything that places an individual in a worse situation than they are in already, whatever that context is, is unethical to begin with.
  • There is a well-known case of a woman who posted her sexual-encounter Fitbit analysis for all to see on various social media channels. The act was not well received by most readers, who called for her to take down the data as being in poor taste, said that she had acted in an improper and unethical manner toward her partner in the usage of these personal statistics, and that she had reduced the most sacred of acts to a “quantified-self” graph.
  • There are some things that should just not be public knowledge outside the self (be it to a health insurer, or the general public), and more and more of this personal space is being eroded because of the myriad of sensors being packed into our smart devices. In some ways these smart devices are too smart for their own good, especially when they are internetworked, allowing benchmarking to take place against control groups or other like groups.
  • There is a lack of transparency and education regarding Fitbit capabilities among the general public. In the wrong hands this data could also be used to harm people.
  • I fully understand the issue of collective awareness. The more individual citizens/consumers pool their data together, the more we can try to identify anomalies and outliers, and learn more about the human body. This is an honourable aim. But the realist in me says that this will greatly disadvantage those in our society who are disabled and live life bound to a wheelchair, who suffer from mental illness or depression and simply find it difficult to participate in daily activities, the elderly, and other minority groups who find being “tracked” abhorrent in any shape or form, for any reason (e.g. Aboriginal communities).
  • I think minors should be exempt from such schemes, though very often health insurance is a family product, so at what point do we say wearables or smartphones for minors are not required, even if the adults opt in to the specific health program?

5. Is there a limit to the types of health behaviour that should be collected (mood, menstrual cycle, food consumption, pregnancy, sexual activity)?

  • I think people should be allowed to track what they want. It is very important that individuals can choose which kinds of data they want to collect. For example, women who wish to better plan activities around their menstrual cycle, or in order to fall pregnant, should be able to keep that data in a calendar-style diary. Women in Canada, for instance, have been lobbying for such an option to track their cycles on the iPhone, but to no avail. Often, professional women require such a reminder to track periods of ovulation and more. This is becoming especially critical as more women decide to have children later in life, valuing study and career development opportunities during their early to mid 30s. Fertility specialists request the tracking of fine-grained data when it comes to couples successfully falling pregnant, but most people do not track this information via pen and paper, though they might well add to the data collected if prompted by an app or wearable device. The device, fitted with a temperature sensor, might provide that opportunity.
  • The question that really brings this to the fore is whether or not any sensitive data which is generated by the human body in particular (e.g. mood or menstrual cycle, or sexual activity) should ever be captured and transmitted to a third party, say in the Cloud. At this point I would have to say this data should be limited and accessible only to the customer opting to measure the information for their personal well-being.
  • I can imagine negative scenarios, like a couple seeking fertility treatment rebates from their health insurer, only to be told: (1) you haven’t been collecting your data properly on your wearable; and (2) we noted there was no attempt at conceiving a child in January or February, so we cannot pay for your June IVF treatment.

6. Do you think today’s technology serves as a substitute/proxy for human virtue (fitness trackers as a substitute for self-control and motivation, for instance)? If so, is this a moral problem or an opportunity?

  • The act of self-awareness and reflection is a wonderful opportunity. The ancients would carry a little notebook tied to their garment with some thread and write their thoughts down. There is nothing wrong with recording information through digital means, save for obvious breaches in security or privacy that may eventuate if the data got into the wrong hands.
  • Yet the unhealthy trend we fall into is thinking that the technology will solve all our problems. It is well known that the novelty effect wears off after the first few days, or even weeks, of usage. People quite often forget they are wearing a device that is linked somehow to their health insurer, as autonomy begins to override the human condition, even at the cost of a higher insurance premium. Some people, of course, would be more regulated and driven than others with respect to this monetary incentive. But what happens if we record “our true state” of health, which will likely not be perfect and continuous given what life throws our way, what then? Surely the health insurer will need to use this in the law of averages? And what are the implications of that?
  • The virtue of self-control and motivation, which is really a qualitative aspect of oneself despite its tendency towards frequency, is indeed being quantified. Self-control has its depth in spiritual, philosophical and ideological positions, not in Fitbit devices. If we say it is an opportunity for us as humans to tie ourselves to a techno-fied statute of limits, then next we will likely be advised whom we should marry, how often we should have sex, whether or not we are worthy of having children (because we carry a, b or c defective gene), and whether we can be employed in certain professions (because we stress too easily, or become anxious). This kind of world was very much observed in This Perfect Day, a novel by Ira Levin, later analysed in This Pervasive Day, edited by Jeremy Pitt and published by Imperial College Press.

7. Anything else to add?

  • Quantifying things in a digital manner can help us to make real changes in the physical world. However, I would not like personal information to come under the scrutiny of anyone but my own self. I certainly would not wish to equip a health provider with this data, because there will inevitably be secondary uses (mostly retrospective) of that health data to which the customer has not explicitly consented, nor would they be fully aware of the implications of making that data available, beyond the lowering of a premium.
  • My concerns about inaccuracies in the data, which have already been proven against Fitbit (due to multiple users on a given device, and limits to sensor accuracy in very fit people), about uncontextualised readings of data, and about the implications for those in lower socio-economic demographic groups especially, suggest this would only lead us further down an uberveillant trajectory.
  • Most importantly, there have already been Fitbit-based legal cases which have proven someone’s innocence or guilt. Quite possibly, these wearable devices might end up contradicting circumstantial evidence or eyewitness accounts. Uberveillance allows for misinformation, misinterpretation, and information manipulation.
  • Fundamentally, strapping devices to our bodies, or implanting ourselves with sensor chips, is a breach of human rights. At present, it is persons on extended supervision orders (ESOs) who need to wear anklet or bracelet devices. In essence, we are taking the criminality portion of the wearer into a health-focused scenario. In a top-down approach, health providers are now asking us to wear devices to give us a health rating (bodily and psychological), and this can only mean a great deal of negative and unintended consequences. What might have started as a great idea to get people moving and healthier, living a longer and better quality of life, might end up precipitating crises as people play to a global theatre (namely their health insurer, and whomever else is watching).

The Screen Bubble - Jordan Brown interviews Katina Michael

So what do I see? I see little tiny cameras in everyday objects. We’ve already been speaking about the Internet of Things—the web of things and people—and these individual objects will come alive once they have a place via IP on the Internet. So you will be able to speak to your fridge; know when energy is being used in your home; your TV will automatically shut off when you leave the room. So all of these appliances will not only be talking with you, but also with the suppliers, the organisations that you bought these devices from. So you won’t have to worry about warranty cards; the physical lifetime of your device will alert you to the fact that you’ve had this washing machine for two years and it requires service. So our everyday objects will become smart and alive, and we will be interacting with them. So it’s no longer people-to-people or people-to-machine communications, but actually the juxtaposition of this, where machines start to talk to people.
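
As a toy illustration of the appliance-to-supplier flow described above, the sketch below shows a washing machine generating its own service alert after two years of ownership. The device identifier, field names and dates are invented for illustration; a real appliance would use whatever messaging scheme its supplier mandates.

```python
import json
from datetime import date, timedelta

# Hypothetical purchase record for the washing machine in the example above.
PURCHASE_DATE = date(2023, 1, 15)
SERVICE_INTERVAL = timedelta(days=730)  # "two years, it requires service"

def service_message(today):
    """Return a supplier-bound JSON payload once servicing falls due,
    else None. All identifiers here are made up."""
    if today - PURCHASE_DATE >= SERVICE_INTERVAL:
        return json.dumps({
            "device_id": "washer-0001",
            "event": "service_due",
            "owned_since": PURCHASE_DATE.isoformat(),
        })
    return None

print(service_message(date(2025, 2, 1)))
```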

Read More

Roger Clarke - the Privacy Expert

In 1971, I was working in the (then) computer industry, and undertaking a 'social issues' unit towards my degree.  A couple of chemical engineering students made wild claims about the harm that computers would do to society.  After spending time debunking most of what they said, I was left with a couple of points that they'd made about the impact of computers on privacy that were both realistic and serious.  I've been involved throughout the four decades since then, as consultant, as researcher and as advocate.

Read More

Community Policing or Big Brother

1. Community policing no longer refers only to “neighbourhood watch”. What exactly is it?

Community policing in the traditional sense can be likened to neighbourhood policing. It’s a policing strategy and philosophy based on the notion that community interaction and support can help control crime, with community members helping to identify suspects, detain vandals and bring problems to the attention of police. Consider it a collaborative effort between law enforcement and the community to identify problems and search for solutions.

Advanced technologies which were once only available to law enforcement and defence personnel are now affordable and commercially available to the everyday consumer. So today we have the capability to conduct “community policing” with a non-collaborative flavour… what we are witnessing and predicting is that there will be two types of community policing: one which is a collaboration between the police and the community, and another which is entirely community-led, that is, policing by the people. It is what we can term sousveillance.

Sousveillance was pioneered in practice by Professor Steve Mann at the University of Toronto in Canada. It can be defined as a type of “inverse surveillance” or “counter-surveillance”. Plainly, it is the “watching of the watchers by the watched; counter surveillance by people not in positions of power or authority.”

We can refer to these individuals as real time auditors of surveillance technologies and of those conducting the surveillance.

2. What type of technology is available to people wishing to spy on others? And what type of people want to spy on others anyway? What are the ethical and legal implications?

There is a whole host of technologies now available to people who “watch”, “spy on” or “check up on” others. These include mobile media technologies, like mobile phones with GPS receivers that can track the user anywhere in the globe down to 1-2 m accuracy; RFID bracelets or tags used in vehicles or on people; and systems such as ANPR (Automated Number Plate Recognition), which uses OCR technology. There are biometrics, and even good old CCTV footage that can be used to record people as they move around crowded places. Our once bulky and expensive technologies have become miniature and mobile, and in some instances can fit on the tip of a ballpoint pen, or be carried around or attached to belt clips or the underbellies of vehicles. There is a whole host of ethical and legal implications, such as: who has the right to watch another person going about their daily business? Who has a right to privacy, to be let alone? Is it against the law to track another person in a public space, overtly or covertly, using technology available off the shelf? What does this mean in relation to human rights? Is there anything that can be considered private today?

3. Can this industry be regulated at all or is it too late?

It’s never too late to introduce reforms, but with mobile media becoming increasingly pervasive and ubiquitous it might become difficult to police. It is not to say, either, that someone is not within their rights to be taking footage of their surrounds… a citizen has as much right to monitor their own movements as someone of equal or higher authority. For instance, it is okay for commercial organisations to record everything that happens both inside and outside their building in the name of physical security, but what about the individual?

This is a difficult and complex debate, as it all stems from the context. For example, we come across a lot of signs telling us we cannot take footage in public change rooms or rest rooms, and we should not. But what of other scenarios where you are taking multimedia evidence for your own protection, for your own record of events?

There is a website where over 37,000 people are now keeping glogs (cyberlogs), taking footage of an event or task in which they are a participant. Is this unethical? It is certainly not illegal. But what if I was an employee of a government department or a large corporation, and wished to conduct sousveillance activities in those buildings? Well, quite possibly I would not have a job… even though trucking organisations are increasingly making their employees carry automatic loggers and trackers of their rest times, whereabouts, and other details that could be considered personal.

In the UK there are tough laws that prohibit individuals from taking multimedia footage of law enforcement personnel during protests… and in places like the Pentagon you cannot take any footage of the building, save for the heavily surveilled section of the Pentagon memorial to the victims of September 11.

4. What are the benefits of community policing?

The benefits of community policing are diverse if the problem is considered from a variety of contexts…

In the traditional, ideal sense of the notion of community policing, having law enforcement personnel closely collaborating with the community is a positive outcome. Change usually happens at the grassroots level, and if the police can establish some rapport with the local community, then chances are it might actually help reduce crime rates. But this doesn’t mean that all the problems will go away. Where there are organisations that have the best intentions for the community, there are also those that do not, and corruption, unfortunately, can breed corruption.

Some have used the idea of policing the police, watching over the watchers. This cannot hurt… we have all seen footage on television or on YouTube of police not acting in accordance with the law, and this has led to some reforms in each of those individual cases.

But the more important matter is one’s right to be taking footage of their own space and to be conducting sousveillance on themselves. If we lose the right to police or record our own activities, then in a way we cease to exist. We all take pictures of our family, loved ones, special events, as a record of our own life… we keep albums at home and home videos…

But community policing, when misused, can lead to husbands tracking their wives, overprotective parents watching their children by logging on remotely 50 times a day to check where they are, or employers enforcing harsh penalties on employees who do not reach mileage targets for deliveries or other sales targets based on speed, distance and time factors. There is a whole host of unanswered questions which are not legislated or regulated in the Privacy Act, the Telecommunications Interception Act, or even the Surveillance Devices Acts in Australia.

5. How does dataveillance work?

For some time Roger Clarke’s dataveillance has been prevalent: the “systematic use of personal data systems in the investigation or monitoring of the actions of one or more persons”. Uberveillance is an above-and-beyond, an exaggerated, an omnipresent 24/7 electronic surveillance. It is a surveillance that is not only “always on” but “always with you” (it is ubiquitous), because the technology that facilitates it, in its ultimate implementation, is embedded within the human body.

The problem with this kind of invasive surveillance is that omnipresence in the ‘material’ world will not always equate with omniscience, hence the real concern for misinformation, misinterpretation, and information manipulation. Uberveillance takes that which was “static” or “discrete” in the dataveillance world, and makes it “constant” and “embedded”. Consider it not only “automatic” and to do with “identification” but also about “location”.

Citation: Russell, K. and K. Michael. (2009). "Community Policing or Big Brother?" SBS World View, from http://www20.sbs.com.au/podcasting/index.php?action=feeddetails&feedid=12&id=29202

Questions About Uberveillance

When did you coin the word uberveillance?

Symbol designed by PhD candidate Alexander Hayes in 2011.

The word uberveillance, also written as überveillance, was coined in 2006 by Dr M.G. Michael, who is presently an honorary senior fellow in the School of Information Systems and Technology. The concept was further developed, defined and expanded together with Dr Katina Michael, a senior lecturer in the same school.

The first time the term was used by M.G. Michael was in a guest lecture he delivered on the “Consequences of Innovation” in the course Information Technology and Innovation. Michael and Katina had long been collaborating on the research, description and trajectory of ‘beneath-the-skin’ technologies within the surveillance space that had the ability not only to identify but also to locate individuals.

The term simply ‘came out’ in a moment of inspiration, when Michael was searching for words to describe the trajectory of embedded technologies. He could find no better way than to bring together the German prefix “über” with the French root word “veillance” to describe the exaggerated surveillance conducted by governments in the name of national security. At that very moment he was thinking aloud in terms of Roger Clarke’s work on dataveillance and Nietzsche’s Übermensch. So it was, you might say, one of those incredible moments of serendipity.

In the same year the term appeared in a peer-reviewed conference paper on locational intelligence, delivered at the IEEE Symposium on Technology and Society in New York, showcasing the potential for 24x7 tracking and monitoring of humans. After also appearing in a volume of Prometheus guest-edited by Michael and Katina on the “Social Implications of National Security”, the term was subsequently used in a national workshop sponsored by the ARC Research Network for a Secure Australia (RNSA). The 2007 workshop, entitled “From Dataveillance to Uberveillance and the Realpolitik of the Transparent Society”, brought together academics and reviewers from different disciplines to discuss the subject. At this event, Professor Roger Clarke’s 20-year contribution to the field of surveillance, and more broadly privacy in Australia, was also celebrated as he delivered a keynote address titled “What ‘überveillance’ is and what to do about it”. In the proceedings of the workshop, uberveillance was embraced by a number of authors, who saw it as an appropriate term to describe the current trajectory of ‘surveillance’ technologies. In 2008 a special issue on Advanced Location Based Services in Computer Communications also published an introductory note on ethics and uberveillance.

Beyond references in IT-related blogs and academic papers, the term has also been featured in Forbes Magazine by Robert Ellis Smith, Quadrant, the National Post by Craig Offman, ABC America Online, Yahoo! Canada’s home page, The Inquirer by Nick Farrell, the Edmonton Sun by Alan Findlay, The Sunday Times Online, WIN News and Southern Cross Channel 10. It has also been highlighted in a number of interviews, such as on ABC Illawarra with N. Rheinberger. The term was also given a special mention in the keynote on the final day of the 29th International Conference of Data Protection and Privacy Commissioners.

How did you come up with that word?

That’s a good question… it certainly did come from very close collaboration, especially over the last seven years, bringing disparate fields of study closer together. We believe the influence came from our cross-disciplinary studies in a number of fields, including Philosophy, Languages, Theology, Ancient History, Information Technology and Law. Most certainly it was also M.G. Michael’s long affinity and love for words, having studied linguistics early in his career and had a number of poems published in some of Australia’s leading poetry journals, such as Southerly and Westerly.

Surveillance as a word just did not describe the full extent of the technological capability available today. For instance, there are commercial organisations now chipping people (willing participants) for a variety of applications, including Alzheimer’s care, entry access, and employment, for the purpose of automatic identification. We needed a word to describe the profoundly intrusive nature of such technologies: it was no longer about Big Brother ‘looking down’, but rather about Big Brother ‘on the inside looking out’. “Uberveillance” also has an onomatopoeic ring to it: when one says the word aloud, its meaning is suggested. Uberveillance is piercing, intrusive, unrelenting, pervasive, constant, and embedded in the body, and in its ultimate form what is captured cannot be edited, reversed or removed. In specific technological terms, uberveillance can be described as an omnipresent electronic surveillance facilitated by technology that makes it possible to embed surveillance devices in the human body.

Uberveillance takes that which was “static” or “discrete” in the dataveillance world, and makes it “constant” and “embedded”. It has to do with the fundamental “who” (ID), “where” (location), “when” (time) questions in an attempt to derive “why” (motivation), “what” (result), and even “how” (method/plan/thought). Uberveillance can be a predictive mechanism for one’s expected behaviour, traits, characteristics, likes or dislikes; or it can be based on historical fact, or something in between.
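
A minimal sketch of that who/where/when structure, and of the speculative leap to “why”, is given below. It is purely illustrative (the identifiers, coordinates and inference rule are invented), but it shows how a stream of discrete observations becomes raw material for prediction, and why misinterpretation is a standing risk.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UberveillanceRecord:
    """One 'constant, embedded' observation: who, where, when."""
    who: str        # ID, e.g. an implant or device identifier
    where: tuple    # location, e.g. (latitude, longitude)
    when: datetime  # timestamp

def infer_why(history):
    """Toy predictive step: mine the who/where/when stream for a
    pattern and speculate about motivation ('why'). Real systems
    would be far more elaborate, and just as fallible."""
    if len(history) < 2:
        return "insufficient data"
    night = [r for r in history if r.when.hour >= 22 or r.when.hour < 5]
    if len(night) > len(history) / 2:
        return "habitual night activity (speculative)"
    return "no pattern detected"

log = [
    UberveillanceRecord("implant-42", (-34.4278, 150.8931), datetime(2013, 5, 1, 23, 10)),
    UberveillanceRecord("implant-42", (-34.4278, 150.8931), datetime(2013, 5, 2, 23, 40)),
]
print(infer_why(log))  # -> habitual night activity (speculative)
```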

What does it mean to you to have the word officially recognised in the Macquarie dictionary?

While the German prefix ‘uber’ was popular in the 1990s for slang terms, the connection between über and the French veiller came about through many long, hard years of research. In a sense we were already describing uberveillance long before Michael conceived of the word; the word just summarised it all very neatly. The use of the German prefix “uber” shows that we are not just talking about typical surveillance, and that in this new state of surveillance we are inherently entering uncharted territory.

Uberveillance had previously made it into several online dictionaries, including Webopedia.com and Merriam-Webster.com, but to get it recognised in Australia’s official dictionary was for us an absolute thrill. It was also a vindication not only for us, but also for a larger group of colleagues both in Australia and internationally with whom we must share this distinction. It clearly evidences the impact of our work over a sustained period of time, especially given the list of words is international and includes terms that have been in use for much longer. We do not know who nominated the word, or how it got onto the list, but it is without a doubt one of the outcomes we will hold as a major achievement.

What sort of uberveillance research is UOW currently doing?

Stay tuned: we are not far from launching a web portal on uberveillance which will showcase our research, that of our students, and that of fellow academic and professional collaborators. Till now our focus has been on ‘proving’ how invasive some technologies can actually be, providing avenues for public discourse, and promoting the use of safeguards.

Uberveillance is not just a ‘cute’ word; there is history and substance behind it, as can be seen in some of the projects we are currently engaged with, and this is what most people find really fascinating. These include: a study on the privacy, trust and security implications of chip implants (e.g. for Alzheimer’s patients); the link between exaggerated surveillance and forms of mental duress (in this instance virtual surveillance conducted by other citizens using webcams, blogs and social networking sites); location based services regulation in Australia to supplement the Unified Privacy Principles (UPP) in the Privacy Act; the ethical implications of the electrophorus (i.e. the bearer of electric technology); discovering the motivations behind underground implantees and why they are embedding technologies under their skin of their own accord; and studying the trade-offs between privacy, value and control in RFID applications, such as e-passports and e-tollways.

Christofer Toumazou - The Biomedical Pioneer

And that’s where I come to a halt, because effectively I think that a deaf person who once heard, lost their hearing, and can have it regained is fine. But actually trying to give someone who can hear super hearing is not fine.

Read More