Dr Matt Beard is a moral philosopher with an academic background in both applied and military ethics. Matt is a Fellow at The Ethics Centre, undertaking research into ethical principles for technology. Matt has taught philosophy and ethics at university level for several years, during which time he has published widely in academic journals and book chapters and spoken at a number of international conferences. His work has mainly focussed on military ethics, a topic on which he has advised the Australian Army. He has also published academic pieces on torture, cyberwar, medical ethics, the weaponisation of space, sacrifice and the psychological impacts of war on veterans. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his “prolific contribution to public philosophy”. He regularly appears on television, radio, online and in print to discuss a range of ethical issues. He is also a columnist with New Philosopher magazine and a podcaster on the ABC’s Short & Curly, an award-winning children’s podcast aimed at getting families to engage with ethics in a fun and accessible way. Matt is an experienced speaker, writer and educator who brings enthusiasm, rigour and accessibility to his research, speaking and media engagements.
A written questionnaire was answered by Katina Michael on September 28, 2016.
Q&A With Ethics Centre: Fitness Trackers
1. Can you envision any issues associated with health insurers offering wearable technology and the possibility of lower premiums to their customers?
- Health insurance is a big business. High-tech companies like Samsung have already diversified into this vertical market, making them one of the world’s leading health insurers. Essentially, consumers who opt into a health program using smartphones or wearables like this are ushering in so-called “surveillance for care”, which still has “surveillance for control” as an underlying element. In essence, these consumers are trading some of their fundamental freedoms for lower premiums. Is this the new “price of health insurance”? A major chunk of your personal information?
- Wearable technologies are also transferable. There is no telling for certain who is wearing a human monitoring device, although over time, even a space of two weeks, behavioural biometrics can determine who the actual wearer is at any given moment from heart rate, pulse, stress levels and more. In the not too distant future, such disputes may only be settled by means of an implantable sensor device that cannot be removed and can determine the wearer with some certainty, despite the pitfalls of device cloning and the like.
- Having witnessed what has happened in the car insurance industry, we can also learn a great deal. Companies like Norwich Union launched services that imposed constraints such as curfews, for example on male drivers under 25. These programs incentivise people to do the right thing, reducing the incidence of accidents during late-night driving, but in no way guarantee that the driver is better off in the longer run. The question is what happens with insurance claims made by consumers who fall short of the “lower premium” thresholds of usage, be it the “number of steps”, the “time spent” exercising, the “calories burned daily”, or even oxygen saturation levels. If you opt for a Fitbit-style quantified-self program, what happens if you (1) don’t wear the Fitbit daily; or (2) have a poor track record of health for personal reasons of any kind (e.g. being the primary carer of a child with autism or Down syndrome)? Might this make you less insurable across programs in the future with other health insurance providers? It almost takes on a “survival of the fittest” attitude, which discriminates against different members of society from the outset.
- What is the motivation behind these kinds of programs for health insurers? Surely it is not because they feel good about reducing health premiums for their customers. How will this intimate behavioural data be used? In unrelated contexts? Perhaps in contradiction to advice provided by doctors? There are many cases where people have posted data about themselves on social media that has rendered their medical leave void, despite legitimate reasons for leave.
2. Do these issues outweigh the advantages? Can you see any advantages?
- In an ideal world we might deem that the advantages far outweigh concerns over bodily and psychological privacy, surveillance, autonomy and human rights. In the end, most people say they are privacy conscious but will still take a loyalty card if offered one at the checkout and a discount ensues.
- In an ideal world, sure, I can see advantages to getting healthier and fitter, and being more routine-based about calorie intake and calories burned. Many people use their Fitbits or other smartphone apps to make sure they are doing a minimum level of exercise each day. That cannot be a bad thing if you are in control of your own data and statistics.
- My grave concern over these types of behavioural biometric apps is that the data gathered is used to further exploit the end user: “On the one hand, here is a nice discount because you are such a great example of health; on the other hand, now that I know your general behaviours in everyday life, I can onsell other services to you that I know you’ll need and use.”
- Once you lose your privacy, you have forgone basic freedoms. You lose the decision-making power to say, “Hey, today it is pelting down rain, I don’t feel like going out for my daily walk.”
- Some wearers will also find themselves misusing the data collected, whether to keep pushing the boundaries of how many steps they can do in a working day, or to compete with and benchmark themselves against others in like groups.
- Most people are willing to give away personal information for anything labelled “free”, but the reality is that nothing is free: the discount you might get is being paid for some other way, most likely through the minute-to-minute dataset that is onsold to a third party for completely different uses.
3. Would insurers have an ethical obligation to inform users if they detected medically significant anomalies in the data they collect?
- If health insurers recognise an anomaly in the data gathered because they are cross-matching it with other clinical outcomes or assessments, then yes, they would need to inform their customer.
- However, once a medical condition is observed, it will be recorded against a health record, and it is possible that a “predisposition” to x or y may well rule that individual out of any future form of health insurance. A number of women in the USA have found themselves in this predicament with the change of health policy during the Obama Administration, and have been left with very limited public health care, which hardly helps them to address their diagnosis.
- Today, in Australia, sufferers have a right not to tell their health insurer that they have a diagnosed “syndrome”, as this could affect their long-term health insurance capacity and coverage. Their syndrome would not be detectable by a Fitbit-style device, but has been medically diagnosed via a DNA test.
- The other issue that comes to mind is whether “big data” will have a role to play in providing “best estimates” of a person carrying a particular type of illness into the future. Data from a Fitbit device might be linked to heart rate data showing the potential for stroke and more. For some people, the news that they are “at risk” is more of a trigger for a stroke or heart attack than continuing to lead the happy and carefree lifestyle they have. I know many people who would become incredibly depressed if informed by a health insurer that unless they change their behaviours they will likely die 20 or so years prematurely. It’s certainly a complex issue and not as straightforward as some think. These are choices that people have to make.
4. Are there any ethical limits to the ways the collected data could be used?
- Anything that places an individual in a worse situation than they are in already, whatever that context is, is unethical to begin with.
- There is a well-known case of a woman who posted her Fitbit analysis of a sexual encounter for all to see on various social media channels. The act was not well received by most readers, who called for her to take down the data as being in poor taste, argued that she had acted improperly and unethically toward her partner in using these personal statistics, and felt that she had reduced the most sacred of acts to a “quantified-self” graph.
- There are some things that should just not be known outside the self (be it by a health insurer or the general public), and more and more of this personal space is being eroded because of the myriad of sensors being packed into our smart devices. In some ways these smart devices are too smart for their own good, especially when they are internetworked, allowing benchmarking to take place against control groups or other like groups.
- There is a lack of transparency and education regarding Fitbit capabilities among the general public. In the wrong hands this data could also be used to harm people.
- I fully understand the issue of collective awareness. The more individual citizens/consumers pool their data together, the more we can try to identify anomalies and outliers, and learn more about the human body. This is an honourable aim. But the realist in me says that this will greatly disadvantage those members of our society who are disabled and live life bound to a wheelchair, who suffer from mental illness or depression and simply find it difficult to participate in daily activities, the elderly, and other minority groups who find being “tracked” abhorrent in any shape or form for any reason (e.g. Aboriginal communities).
- I think minors should be exempt from such schemes, though very often health insurance is a family product, so at what point do we say wearables or smartphones for minors are not required, even if adults opt in to the specific health program?
5. Is there a limit to the types of health behaviour that should be collected (mood, menstrual cycle, food consumption, pregnancy, sexual activity)?
- I think people should be allowed to track what they want. It is very important that individuals can choose which kinds of data they collect. For example, women who wish to plan ahead for activities during their menstrual cycle, or in order to fall pregnant, should be able to keep that data in a calendar-style diary. Women in Canada, for instance, have been lobbying for an option to track their cycles on the iPhone, but to no avail. Often, professional women require such a reminder to track periods of ovulation and more. This is becoming especially critical as more women decide to have children later in life, valuing study and career development opportunities during their early to mid 30s. Fertility specialists request fine-grained tracking of data when it comes to couples successfully falling pregnant, but most people do not record this information with pen and paper; they might well add to the data collected if prompted by an app or wearable device. A device fitted with a temperature sensor might provide that opportunity.
- The question that really brings this to the fore is whether or not any sensitive data which is generated by the human body in particular (e.g. mood or menstrual cycle, or sexual activity) should ever be captured and transmitted to a third party, say in the Cloud. At this point I would have to say this data should be limited and accessible only to the customer opting to measure the information for their personal well-being.
- I can imagine negative scenarios, like a couple seeking fertility treatment rebates from their health insurer, only to be told: (1) you haven’t been collecting your data properly on your wearable; and (2) we noted there was no attempt at conceiving a child in January or February, so we cannot pay for your June IVF treatment.
6. Do you think today’s technology serves as a substitute/proxy for human virtue (fit trackers as a substitute for self control and motivation, for instance)? If so, is this a moral problem or an opportunity?
- The act of self-awareness and reflection is a wonderful opportunity. The ancients would carry a little notebook tied to their garment with thread and write their thoughts down. There is nothing wrong with recording information through digital means, save for the obvious breaches of security or privacy that may eventuate if the data got into the wrong hands.
- Yet the unhealthy trend we fall into is thinking that the technology will solve all our problems. It is well known that the novelty effect wears off after the first few days, or even weeks, of usage. People quite often forget they are wearing a device that is linked somehow to their health insurer, as autonomy reasserts itself over the human condition, even to the detriment of a higher insurance premium. Some people, of course, would be more regulated and driven than others with respect to this monetary incentive. But what happens if we record our “true state” of health, which will likely not be perfect and continuous given what life throws our way, what then? Surely the health insurer will need to apply the law of averages? And what are the implications of that?
- The virtue of self-control and motivation, which is really a qualitative aspect of oneself despite its tendency towards frequency, is indeed being quantified. Self-control has its depth in spiritual, philosophical and ideological positions, not in Fitbit devices. If we say it is an opportunity for us as humans to tie ourselves to a techno-fied statue of limits, then next we will likely be advised whom we should marry, how often we should have sex, whether or not we are worthy to have children (because we carry a, b or c defective gene), and whether we can be employed in certain professions (because we stress too easily, or become anxious). This kind of world was very much observed in This Perfect Day, a novel by Ira Levin, which was later analysed in This Pervasive Day, edited by Jeremy Pitt and published by Imperial College Press.
7. Anything else to add?
- Quantifying things in a digital manner can help us to make real changes in the physical world. However, I would not like personal information to come under the scrutiny of anyone but myself. I certainly would not wish to equip a health provider with this data, because there will inevitably be secondary uses (mostly retrospective) of that health data which the customer has not explicitly consented to, nor would they fully be aware of the implications of making that data available beyond the lowering of a premium.
- My concerns about inaccurate recordings, which have already been proven against Fitbit (due to multiple users on a given device, and limits of sensor accuracy in very fit people), uncontextualised readings of data, and the implications for those in lower socio-economic groups especially, would only lead us further down an uberveillant trajectory.
- Most importantly, there have already been Fitbit-based legal cases that have proven someone’s innocence or guilt. Quite possibly, these wearable devices might end up contradicting circumstantial evidence or eyewitness accounts. Uberveillance allows for misinformation, misinterpretation, and information manipulation.
- Fundamentally, strapping devices to our bodies, or implanting ourselves with sensor chips, is a breach of human rights. At present, it is persons on extended supervision orders (ESOs) who must wear ankle bracelets or similar devices. In essence, we are carrying the criminal connotation of such wearers into a health-focused scenario. In a top-down approach, health providers are now asking us to wear devices that give us a health rating (bodily and psychological), and this can only mean a great many negative and unintended consequences. What might have started as a great idea to get people moving and healthier, living longer with a better quality of life, might end up plunging people into crises as they play to a global theatre (namely their health insurer, and whomever else is watching).