Matt Beard of Ethics Centre Interviews Katina Michael

Matthew Beard, Fellow of the Ethics Centre

Dr Matt Beard is a moral philosopher with an academic background in both applied and military ethics. Matt is a Fellow at The Ethics Centre, undertaking research into ethical principles for technology. Matt has taught philosophy and ethics at university level for several years, during which time he has published widely in academic journals and book chapters and spoken at a number of international conferences. His work has mainly focussed on military ethics, a topic on which he has advised the Australian Army. He has also published academic pieces on torture, cyberwar, medical ethics, weaponising space, sacrifice and the psychological impacts of war on veterans. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his “prolific contribution to public philosophy”. He regularly appears on television, radio, online and in print to discuss a range of ethical issues. He is also a columnist with New Philosopher magazine and a podcaster on the ABC’s Short & Curly, an award-winning children’s podcast aimed at getting families to engage with ethics in a fun and accessible way. Matt is an experienced speaker, writer and educator who brings enthusiasm, rigour and accessibility to his research, speaking and media engagements.

A written questionnaire was answered by Katina Michael on September 28, 2016.

Q&A With Ethics Centre: Fitness Trackers

1. Can you envision any issues associated with health insurers offering wearable technology and the possibility of lower premiums to their customers?

  • Health insurance is a big business. High-tech companies like Samsung have already diversified into this vertical market, becoming among the world’s leading health insurers. Essentially, consumers who opt into a health program using smartphones or wearables like this are ushering in so-called “surveillance for care”, which still has “surveillance for control” as an underlying element. In essence, these consumers are trading some of their fundamental freedoms for lower premiums. Is this the new “price of health insurance”? A major chunk of your personal information?
  • Wearable technologies are also transferable. There is no telling for certain who is wearing a human monitoring device, although over time, even in the space of two weeks, behavioural biometrics can determine who the actual wearer is at any given time from heart rate, pulse, stress levels and more (a toy sketch of this kind of wearer identification follows this list). In the not too distant future, disputes could only be settled by means of an implantable sensor device that could not be removed and could determine the wearer with some certainty, despite the pitfalls of cloning devices and the like.
  • Having witnessed what has happened in the car insurance industry, we can also learn a great deal. Companies like Norwich Union launched services with constraints such as curfews, for example for male drivers under 25 years of age. These programs incentivise people to do the right thing, reducing the incidence of accidents during late-night driving, but in no way guarantee that the driver is better off in the longer run. The question is what happens with respect to insurance claims made by consumers who fall short of the “lower premium” thresholds of usage, be it the “number of steps”, the “time spent” exercising, the “calories burned daily” or even oxygen saturation levels? If you opt for a Fitbit-style quantified-self program, what happens if you (1) don’t wear the Fitbit daily, or (2) have a poor track record of health for personal reasons of any kind (e.g. being the primary carer of a child with autism or Down syndrome)? Might this make you less insurable across programs in the future with other health insurance suppliers? It almost takes on a “survival of the fittest” attitude, which discriminates against different members of society at the outset.
  • What is the motivation of these kinds of programs for health insurers? Surely it is not because they feel good about reducing health premiums for their customers. How will this intimate behavioural data be used? In unrelated matters? Perhaps in contradiction to advice provided by doctors? There are many cases where people have posted data about themselves on social media, for instance, that has rendered their medical leave void, despite legitimate reasons for that leave.
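By way of illustration, the wearer-identification idea above can be sketched in a few lines of Python. Everything here is a hypothetical assumption for illustration only: the features (mean heart rate and its variability), the enrolment data, and the nearest-profile rule are not any insurer’s or device vendor’s actual method.

```python
# A toy sketch of behavioural-biometric wearer identification: given
# short heart-rate recordings labelled by wearer, guess who is wearing
# the device from a new recording. Data and features are hypothetical.
from statistics import mean, stdev

def features(hr_series):
    """Summarise a heart-rate series (beats per minute) as (mean, spread)."""
    return (mean(hr_series), stdev(hr_series))

def build_profiles(labelled_recordings):
    """Average the feature vectors of each known wearer's recordings."""
    profiles = {}
    for wearer, recordings in labelled_recordings.items():
        feats = [features(r) for r in recordings]
        profiles[wearer] = tuple(mean(axis) for axis in zip(*feats))
    return profiles

def identify(profiles, hr_series):
    """Return the known wearer whose profile is nearest to this recording."""
    f = features(hr_series)
    def distance(profile):
        return sum((a - b) ** 2 for a, b in zip(f, profile))
    return min(profiles, key=lambda w: distance(profiles[w]))

# Hypothetical two-week enrolment data for two wearers.
enrolled = {
    "alice": [[62, 64, 63, 61, 65], [60, 63, 62, 64, 61]],
    "bob":   [[78, 82, 80, 85, 79], [81, 77, 83, 80, 84]],
}
profiles = build_profiles(enrolled)
print(identify(profiles, [79, 81, 84, 78, 82]))  # -> "bob"
```

A real system would draw on far richer behavioural signals over weeks of wear, which is exactly why a transferred device does not stay anonymous for long.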

2. Do these issues outweigh the advantages? Can you see any advantages?

  • In an ideal world we might deem that the advantages far outweigh concerns over bodily and psychological privacy, surveillance, autonomy and human rights. In the end, most people say they are privacy conscious but will still opt to take a loyalty card if asked at the checkout whether they would like a discount.
  • In an ideal world, sure, I can see advantages to getting healthier and fitter, and being more routine-based about calorie intake and calorie burning. There are many people who use their Fitbits or other smartphone apps to make sure they are doing a minimum level of exercise each day. That cannot be a bad thing if you are in control of your own data and stats.
  • My grave concern over these types of behavioural biometric apps is that the data gathered is used to further exploit the end user: “On the one hand, here is a nice discount because you are such a great example of health; on the other hand, now that I know your general behaviours in everyday life, I can onsell other services to you that I know you’ll need and use.”
  • Once you lose your privacy, you have forgone basic freedoms: you lose the decision-making power to say, “Hey, today it is pelting down rain, I don’t feel like going out for my daily walk.”
  • Some wearers will also find themselves misusing the data collected, whether by pushing the boundaries of how many steps they can do in a working day, or by competing and benchmarking themselves against others in like groups.
  • Most people are willing to give away personal information for anything labelled “free”, but the reality is that nothing is free: that discount you might get is being paid for some other way, most likely through the minute-to-minute dataset that is onsold to a third party for completely different uses.

3. Would insurers have an ethical obligation to inform users if they detected medically significant anomalies in the data they collect?

  • If health insurers recognise an anomaly in the gathered data set because they are cross-matching that data with other clinical outcomes or assessments, then yes, they would need to inform their customer.
  • However, once a medical condition is observed, it will be recorded against a health record, and it is possible that a “predisposition” to x or y may well rule that individual out of any future form of health insurance. A number of women in the USA found themselves in this predicament with the change of health policy during the Obama Administration, and were left with very limited public health care which hardly helps them to address their diagnosis.
  • Today, in Australia, sufferers have a right not to tell their health insurer that they have a diagnosed “syndrome”, as this could affect their long-term health insurance capacity and coverage. Such a syndrome would not be detectable by a Fitbit-style device, but has been medically diagnosed via a DNA test.
  • The other issue that comes to mind is whether or not “big data” will have a role to play in providing “best estimates” of a person carrying a particular type of illness into the future. Data from the Fitbit device might be linked to heart rate data showing the potential for stroke and more. For some people, the news that they are “at risk” is more of a trigger for a stroke or heart attack than continuing to lead the happy and carefree lifestyle they have. I know many people who would become incredibly depressed on being informed by a health insurer that if they don’t change their behaviours they will likely die prematurely by 20 years or so. It’s certainly a complex issue and not as straightforward as some think. These are choices that people have to make.

4. Are there any ethical limits to the ways the collected data could be used?

  • Anything that places an individual in a worse situation than they are already in, whatever the context, is unethical to begin with.
  • There is a well-known case of a woman who posted her Fitbit analysis of a sexual encounter for all to see on various social media channels. The act was not well received by most readers, who called for her to take the data down as being in poor taste, argued that she had acted improperly and unethically toward her partner in using these personal statistics, and felt she had reduced the most sacred of acts to a “quantified-self” graph.
  • There are some things that should just not be public knowledge outside the self (be it to a health insurer or the general public), and more and more of this personal space is being eroded because of the myriad of sensors being packed into our smart devices. In some ways these smart devices are too smart for their own good, especially when they are internetworked, allowing benchmarking to take place against control groups or other like groups.
  • There is a lack of transparency and education among the general public regarding Fitbit capabilities. In the wrong hands this data could also be used to harm people.
  • I fully understand the idea of collective awareness. The more individual citizens/consumers pool their data together, the more we can try to identify anomalies and outliers, and learn more about the human body. This is an honourable aim. But the realist in me says that this will greatly disadvantage those members of our society who are disabled and live life bound to a wheelchair, who suffer from mental illness or depression and simply find it difficult to participate in daily activities, the elderly, and other minority groups who find being “tracked” abhorrent in any shape or form for any reason (e.g. Aboriginal communities).
  • I think minors should be exempt from such schemes, though very often health insurance is a family product; so at what point do we say wearables or smartphones are not required for minors, even if the adults opt in to the specific health program?

5. Is there a limit to the types of health behaviour that should be collected (mood, menstrual cycle, food consumption, pregnancy, sexual activity)?

  • I think people should be allowed to track what they want. It is very important that individuals can choose which kinds of data they want to collect. For example, women who wish to plan ahead for activities during their menstrual cycle, or who are trying to fall pregnant, should be able to keep that data in a calendar-style diary. Women in Canada, for instance, have been lobbying for such an option to track their cycles on the iPhone, but to no avail. Often, professional women require such a reminder to track periods of ovulation and more. This is becoming especially critical as more women decide to have children later in life, valuing study and career development opportunities during their early to mid 30s. Fertility specialists request fine-grained tracking data when it comes to couples successfully falling pregnant, but most people do not record this information with pen and paper, though they might well add to the data collected if prompted by an app or wearable device. A device fitted with a temperature sensor might provide that opportunity (a sketch of the idea follows this list).
  • The question that really brings this to the fore is whether any sensitive data generated by the human body in particular (e.g. mood, menstrual cycle or sexual activity) should ever be captured and transmitted to a third party, say in the cloud. At this point I would have to say this data should be limited and accessible only to the customer opting to measure the information for their personal well-being.
  • I can imagine negative scenarios, like a couple seeking fertility treatment rebates from their health insurer, only to be told (1) you haven’t been collecting your data properly on your wearable; and (2) we noted there was no attempt at conceiving a child in January or February, so we cannot pay for your June IVF treatment.
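To make the temperature-sensor point concrete, here is a minimal sketch of the classic basal body temperature method, where a sustained rise in waking temperature suggests ovulation has occurred. The 0.2 °C threshold and the window sizes are illustrative assumptions, not medical guidance or any vendor’s algorithm:

```python
# A minimal sketch of the basal body temperature (BBT) method a
# temperature-equipped wearable might apply: flag the first day of a
# sustained upward shift in waking temperature. Thresholds and window
# sizes below are illustrative assumptions only.

def detect_thermal_shift(temps_c, baseline_days=6, confirm_days=3, rise=0.2):
    """Return the index of the first day of a confirmed shift, or None."""
    for day in range(baseline_days, len(temps_c) - confirm_days + 1):
        baseline = max(temps_c[day - baseline_days:day])
        window = temps_c[day:day + confirm_days]
        if all(t >= baseline + rise for t in window):
            return day
    return None

# Hypothetical cycle of daily waking temperatures (degrees Celsius).
cycle = [36.4, 36.5, 36.4, 36.3, 36.5, 36.4, 36.4, 36.8, 36.9, 36.9, 37.0]
print(detect_thermal_shift(cycle))  # -> 7 (the 0-indexed day of the rise)
```

The ethical question raised above is not whether such a calculation is feasible, but who besides the wearer ever sees its inputs and outputs.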

6. Do you think today’s technology serves as a substitute/proxy for human virtue (fit trackers as a substitute for self control and motivation, for instance)? If so, is this a moral problem or an opportunity?

  • The act of self-awareness and reflection is a wonderful opportunity. The ancients would carry a little notebook tied to their garment with thread and write their thoughts down. There is nothing wrong with recording information through digital means, save for the obvious breaches of security or privacy that may eventuate if the data gets into the wrong hands.
  • Yet the unhealthy trend we fall into is thinking that the technology will solve all our problems. It is a well-known fact that the novelty effect wears off after the first few days, or even weeks, of usage. People quite often forget they are wearing a device that is linked somehow to their health insurer, as autonomy begins to override the human condition, even at the cost of a higher insurance premium. Some people, of course, would be more regulated and driven than others with respect to this monetary incentive. But what happens when we record “our true state” of health, which will likely not be perfect and continuous given what life throws our way; what then? Surely the health insurer will need to apply the law of averages? And what are the implications of this?
  • The virtue of self-control and motivation, which is really a quality-based aspect of oneself despite its tendency towards frequency, is indeed being quantified. Self-control has its depth in spiritual, philosophical and ideological positions, not in Fitbit devices. If we say it is an opportunity for us as humans to tie ourselves to a techno-fied statue of limits, then next we will likely be advised of whom we should marry, how often we should have sex, whether or not we are worthy of having children (because we carry a, b or c defective gene), and whether we can be employed in certain professions (because we stress too easily, or become anxious). This kind of world was very much observed in This Perfect Day, a novel by Ira Levin, which was later analysed in This Pervasive Day, edited by Jeremy Pitt and published by Imperial College Press.

7. Anything else to add?

  • Quantifying things in a digital manner can help us make real changes in the physical world. However, I would not like personal information to come under the scrutiny of anyone but myself. I certainly would not wish to equip a health provider with this data, because there will inevitably be secondary uses (mostly retrospective) of that health data to which the customer has not explicitly consented, nor would they be fully aware of the implications of making that data available, beyond the lowering of a premium.
  • My concerns about inaccurate recordings in the data, which have already been proven against Fitbit (due to multiple users on a given device, and limited sensor accuracy in very fit people), about uncontextualised readings of data, and about the implications for those in lower socio-economic demographic groups especially, are that they would only lead us further down an uberveillant trajectory.
  • Most importantly, there have already been legal cases in which Fitbit data was used to prove someone’s innocence or guilt. Quite possibly, these wearable devices might end up contradicting circumstantial evidence or eyewitness accounts. Uberveillance allows for misinformation, misinterpretation, and information manipulation.
  • Fundamentally, strapping devices to our bodies, or implanting ourselves with sensor chips, is a breach of human rights. At present, it is persons on extended supervision orders (ESOs) who must wear anklet or other bracelet devices. In essence, we are carrying the criminal connotations of the wearer into a health-focused scenario. In a top-down approach, health providers are now asking us to wear devices to give us a health rating (bodily and psychological), and this can only mean a great deal of negative and unintended consequences. What might have started as a great idea to get people moving and healthier, living a longer and better quality of life, might end up throwing people into crises as they play to a global theatre (namely their health insurer, and whoever else is watching).

The Screen Bubble - Jordan Brown interviews Katina Michael

So what do I see? I see little tiny cameras in everyday objects. We’ve already been speaking about the Internet of Things, the web of things and people, and these individual objects will come alive once they have a place via IP on the Internet. So you will be able to speak to your fridge; know when energy is being used in your home; your TV will automatically shut off when you leave the room. So all of these appliances will not only be talking with you, but also with the suppliers, the organisations you bought these devices from. So you won’t have to worry about warranty cards; the physical lifetime of your device will alert you to the fact that you’ve had this washing machine for two years and it requires service. So our everyday objects will become smart and alive, and we will be interacting with them. So it’s no longer people-to-people or people-to-machine communications, but actually the juxtaposition of this, where machines start to talk to people.
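As a rough illustration of the appliance-to-supplier reporting described here, consider the minimal sketch below. The message format, field names and service interval are hypothetical assumptions; a real deployment would use an authenticated IoT protocol such as MQTT rather than printing JSON:

```python
# A minimal sketch of a "smart" appliance that knows its own age and
# tells its supplier when service is due. All names and the two-year
# interval are hypothetical, per the washing-machine example above.
import json
from datetime import date

class SmartWasher:
    SERVICE_INTERVAL_DAYS = 730  # roughly two years

    def __init__(self, device_ip, purchased):
        self.device_ip = device_ip  # the device's place on the Internet
        self.purchased = purchased

    def status_message(self, today):
        """Build the status report the appliance would send its supplier."""
        age = (today - self.purchased).days
        return json.dumps({
            "device_ip": self.device_ip,
            "age_days": age,
            "service_due": age >= self.SERVICE_INTERVAL_DAYS,
        })

washer = SmartWasher("10.0.0.42", date(2023, 1, 1))
print(washer.status_message(date(2025, 1, 10)))
# -> {"device_ip": "10.0.0.42", "age_days": 740, "service_due": true}
```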

Read More

Roger Clarke - the Privacy Expert

In 1971, I was working in the (then) computer industry, and undertaking a 'social issues' unit towards my degree.  A couple of chemical engineering students made wild claims about the harm that computers would do to society.  After spending time debunking most of what they said, I was left with a couple of points that they'd made about the impact of computers on privacy that were both realistic and serious.  I've been involved throughout the four decades since then, as consultant, as researcher and as advocate.

Read More

LapTop Magazine

MG Michael interviewed by LapTop Magazine, September 21, 2008.


These services are especially critical in particular contexts, for instance in care-related applications that help people track a loved one who is suffering from a progressive case of Alzheimer’s disease. In some instances LBS technology can enable Alzheimer’s sufferers to live at home longer, instead of being placed in a facility with 24x7 supervision, which can feel like a prison. This technology can also grant carers more freedom, supporting them in the act of supervision with alerts and alarms when a given set of conditions is met. Location-based services can legitimately be marketed and sold as a safety enhancement, but that still should not mean that a patient’s rights are summarily withdrawn. Ideally, consent would still be necessary.

As for convenience-related solutions, such as the ones you mention (the teen who exceeds the speed limit, or the young driver who breaks his/her curfew or drives into the city at peak hour), this is a different scenario altogether. The teen driver has all his/her cognitive capabilities. There was a study done some years ago now (I think around 2001) regarding the education of young children and the importance of teaching them to ‘sense’ dangerous situations. The organization was all for ‘educating’ the children, so they could detect and discern when something was not right and act appropriately. Giving them a 24x7 location device to carry, which would allow a parent to track their every move during the day and watch it scroll up on a map, was considered by this particular organization to be an ‘evasion’ of responsibility. Basically, a child who was ‘street-smart’ and was not carrying a device that could locate them had a much higher probability of getting out of a difficult situation than the youngster who had a locational device, pressed the emergency button, and then did not know what to do afterward. So being ‘street-smart’ was more of an advantage.

I do not think that these LBS applications enhance trust. In relationships, a lack of trust means that there is also no bonding, no giving, and no risk-taking. A relationship based on trust is about a deep connection to another; no follow-up checks have to be made, save for those that occur in normal day-to-day conversations for the purposes of logistics. The very act of monitoring destroys trust; it implies that one cannot be trusted. Verifiability does not facilitate trust in a human-to-human relationship, but it does facilitate trust in a ‘technical’ sense, e.g. in a human-to-machine relationship: where I am carrying a card that needs to be verified against my credentials in a computer database because there is no other person who can verify it humanly. In addition, how can I learn to trust others if I myself am not trusted? It is possible that the meaning of trust will shift into a more ‘transactional’ context in the future, but if that happens, human relationships are bound to be eroded, and we may find ourselves living in a seemingly ‘trust-less’ society. So is the argument valid that LBS applications enhance trust? Certainly when you are talking about human-to-machine connections, but not when we are talking about human-to-human connections.

In our research (that is, the work that I am doing with my colleague and partner Dr Katina Michael) we are obviously not arguing that we should go back to a time when people lost their lives because they could not make a simple phone call to 911 and have their location revealed automatically to emergency personnel; rather, we are warning that surveillance-style location services will do far more harm than good. Knowing that someone else may be monitoring our location throughout the day, without our consent and/or knowledge, may mean that our decisions are influenced in a certain way, that we act according to what we believe the observer prefers, and that we thus lose our own identity and purpose in the process. Freedom and trust go hand in hand. These are celebrated concepts which have been universally connected to civil liberties by most political societies. Future generations may in fact witness massive shifts in the understanding of traditional metaphysics. The Ancient Greek philosophers warned that we should not be completely taken over by “techne”.

Social LBS deployments, such as friend finders, are growing at mega-speed. We need only note the potential of integrating social networking sites with location-based applications. Today, not only can we know whether or not someone is online, but we may also know their exact location.

Even though friend finder LBS applications look evenly balanced on the surface (i.e. any ‘friend’ who accepts to be on my buddy list has the capability to perform a ‘find’ on me, as I do on them), the power struggle underneath can be a very different story. First of all, we assume ‘consent’ in such an application, but there is no written agreement, no formal contract witnessed by a third party. It is not to say that I could not change my buddy finder settings to make myself ‘invisible’ for a time when I wished to be left alone, but what if my friendship relied on ‘visibility’, on meeting up with the buddies and being constantly seen in particular locations and places? (A minimal sketch of this symmetric find/invisible model follows this paragraph.)
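Here is that sketch; the class and method names are hypothetical, not any real LBS vendor’s API:

```python
# A minimal sketch of a symmetric buddy-finder with an "invisible" flag.
# All names are hypothetical; real friend-finder services differ.

class BuddyFinder:
    def __init__(self):
        self.buddies = {}       # user -> set of mutually accepted buddies
        self.invisible = set()  # users who have hidden themselves
        self.location = {}      # user -> last reported location

    def accept_buddy(self, a, b):
        """Mutual consent: each may now 'find' the other."""
        self.buddies.setdefault(a, set()).add(b)
        self.buddies.setdefault(b, set()).add(a)

    def report(self, user, place):
        self.location[user] = place

    def set_invisible(self, user, hidden=True):
        (self.invisible.add if hidden else self.invisible.discard)(user)

    def find(self, requester, target):
        """Return the target's location only if consented and visible."""
        if target in self.buddies.get(requester, set()) \
                and target not in self.invisible:
            return self.location.get(target)
        return None

app = BuddyFinder()
app.accept_buddy("jo", "sam")
app.report("sam", "Wollongong CBD")
print(app.find("jo", "sam"))  # -> "Wollongong CBD"
app.set_invisible("sam")
print(app.find("jo", "sam"))  # -> None
```

Note that the formal model treats invisibility as a free choice; the social pressure to remain visible, which is the point above, is precisely what the code cannot capture.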

Today, the X-generation count the number of people they are connected to on Facebook as the number of ‘friends’ they have. It is a somewhat superficial notion of friendship, but one that is being increasingly embraced. It is, however, problematic when my BuddyList grows so big that it is no longer useful; it becomes an ‘AnyoneList’, and that is when things can get out of control. One can only imagine how young people could be taken advantage of by deceitful individuals with fake online personas, whom they may then happen to ‘bump into’ in the physical world. Location relates to one’s physical self, whereas online identification relates only to one’s self in a virtual sense. Online there is a chance of psychological damage, but in the physical world there is a lot more at stake, both psychologically and bodily.

Consider also the potential for misinformation. Think about the very real possibility that ‘buddies’ who happen to have formal relationships with one another, such as a husband and wife, are alerted when they are in proximity of one another. Imagine now that the husband sees his best friend at a location near his wife, raising undue suspicion in the overprotective and overcontrolling spouse. What then? The service may well be considered voluntary, but it has unforeseen consequences.

In addition, as the digital divide grows with the adoption of more high-tech gadgetry, more and more members of society are finding it difficult to keep pace. There is nothing to stop ‘buddies’ from hijacking a friend’s phone, setting up a pervasive location service, and then misusing the service to their own ends. The service may indeed be voluntary (one can opt out and opt in as they please), but what if one does not have the knowledge to do so, or feels pressured into opting in by another? What if opting out has consequences, like being left behind by the rest of the group, being excluded because one ‘just didn’t know’ about the short-notice outing organized via LBS? Again we have ‘the haves’ versus the ‘have-nots’… and this will lead to a number of social acceptance problems. Opting out will generally equate to ‘losing out’ and being considered ‘different’. To some degree we can already see this happening with the general use of the mobile phone, texting, and having a presence on MySpace or Facebook, etc.

Control here exists in both the ability to find and the ability to be found. Accordingly, control is the overriding theme encompassing all contexts. Mark Weiser, the founding father of ubiquitous computing, once said that the problem surrounding the introduction of new technologies is “often couched in terms of privacy, [but] is really one of control.” Indeed, given that humans do not by nature trust others to safeguard their individual privacy, in controlling the technology we feel we can also control access to any social implications stemming from it. At its simplest, this highlights the different focus between the end result of using technology and the administration of its use. It becomes the choice between the idea that I am given privacy and the idea that I control how much privacy I have.

Ian Angell - The Economist

And all security fails. It may fail cataclysmically, it may fail catastrophically, or it could be just little failures. But little failures damage individuals catastrophically. The nation may be fairly secure, but individuals become very damaged.

Read More