In-depth interviews on the topic of...

Carly Burns of UOW Interviews Katina Michael

1. What are you working on in 2018?

Always working on lots and lots of things at once.

Carly Burns, UOW Research

  • Socio-ethical approaches to robotics and artificial intelligence development and corresponding implications for humans
  • Tangible and intangible risks associated with bio-implantables for the medical and non-medical fields
  • Ongoing two-factor authentication requirements despite the aggressive rollout of biometrics (especially facial recognition and behavioural systems)
  • Wearable cameras and corresponding visual analytics and augmented reality capabilities in law enforcement
  • Blockchain registers and everyday transactional data flows in finance, education, and health
  • Social media pitfalls and technology dependencies, screens and addictions
  • Unauthorised and covert tracking technologies, location information sharing, crowdsourcing and notions of transparency
  • Defining uberveillance at the operational layer with respect to the internet of things and human rights
  • At the heart of my research is the interplay of engineering, law, policy and society.

2. In regards to your field of study or expertise, what are some of the most innovative or exciting things emerging over the next few years?

  • Exoskeletons in humans, transputation (humans opting for non-human parts), and the ability to do things that were once considered ‘superhuman’ (e.g. carrying 2-3 times one’s body weight, or extending human height through artificial limbs).
  • Brain-computer interfaces to help the disabled with basic accessibility of communications and everyday fundamental necessities (e.g. feeding oneself). However, breakthroughs in this space will quickly be adopted by industry for applications in a variety of areas, with the primary focus being entertainment and search services.
  • Smart drug delivery (embedded/swallowable/injectable) pill and chip solutions that allow remote health systems to monitor your drug-taking behaviours and daily exercise routines, and to issue wander/fall-down alerts.
  • An electronic pacemaker the size of a AAA battery (or smaller) acting as the hub for body area networks, akin to a CPU in a computer, allowing for precision medicine and read-write rights to a world built on the Internet of Things ideology.
  • Personal AI services: consider this the rise of a new kind of personal Internet. Services that will be able to gather content and provide you with thought-specific data when you need it. Your life as one long reality-TV episode, captured, ready for playback in visual or audio form, adhering to private-public space differentials. Captured memories and spoken words will be admissible evidence in a future e-court, but also available for new job opportunities. The tacit becomes capturable and can help you get your next job.

3. In regards to your field of study or expertise, what are some of the things readers should be cautious/wary of over the next few years?

  • The technology we are being promised will get very personal and trespass on privacy rights. Whereas in Orwell's 1984 we were assured that at least the contents of our brains were private, today behavioural biometrics, alongside detailed transactional data, can provide some level of proactive profiling of everyday consumers. Retaining anonymity is difficult, some would say near impossible. We have surveillance cameras, smartphones and watches that track our every movement, smart TVs that watch us in our homes, IoT devices that do motion detection and human activity monitoring in private spaces, and social media that has the capacity to store instantaneous thoughts, images and multimedia across contexts. This loss of privacy will have psychological impacts and fallout, whether in increasing rates of mental illness, or in the room we require to develop as human beings, that right to learn from our mistakes and reflect on them in private. Humans are increasingly becoming decorporealised. We are fast becoming bits and bytes. Companies no longer see us as holistic customers, but as pieces of transactional data, as we are socially sorted based on our capacity to part with our dollar and the influence we have on our peer groups.
  • The paperless/cashless paradigm is gathering momentum. It has many benefits for organisations and government, and especially for our environment. But it has major implications for auditability, so-called transparency, the potential for corrupt practices to be instituted by skilled security hackers, and the need for traceability. Organisational workflows that go paperless will place increasing pressure on staff and administration, triggering a workplace of mere compliance (and box-ticking) as opposed to real thinking and assurance. The cashless part will lead to implicit controls on how money is spent by minority groups (e.g. disabled, pensioners, unemployed). This will no doubt impact human freedom, and fundamentally the right to choose.
  • Over-reliance on wearable and implantable technologies for a host of convenience, care and control solutions. Technology will provide a false sense of security and impact on fundamental values of trust in human relationships. More technology does not mean a better life, for some it will mean a dysfunctional life as they wrestle with what it means to be human.
  • It is questionable whether living longer means we age better. Because we are living longer, illnesses like dementia and cancer are increasing at an increasing rate. How do we cope with this burden when it comes to aged care? Send in the robots?
  • We have already seen a robot (e.g. Sophia) recognised as a citizen of Saudi Arabia before fundamental women’s rights have been conclusively recognised in the same state. Robots and their proposed so-called sentience will likely receive special benefits that humans do not possess. Learning to live with these emerging paradigms will take some getting used to: new laws, new policies, new business models. Do robots have rights? And if so, do they supersede human rights? What will happen when “machines start to think” and make decisions (e.g. driverless cars)?

4. Where do you believe major opportunities lie for youth thinking about future career options?

  • This is pretty simple, although I am biased: it is “all things digital”. If I were doing a degree today, I would be heading into biomedical engineering, neuroethics and cybersecurity. On the flip side of this, I see the huge importance of young people thinking about social services in the very “human” sense. While we are experimenting with brain implants for a variety of illnesses, including the treatment of major depressive disorder, and with DNA and brain scanning technologies for early detection, I would say the need for counsellors (e.g. genetic) and social workers will only continue to increase. We need health professionals, psychologists and psychiatrists who get “digital” problems: a sense of feeling overwhelmed with workloads, with the speed at which data travels (instantaneous communications), etc. Humans are analog, computers are digital. This crossroad will cause individuals great anxiety. It is a paradox: we have never had it so good in terms of working conditions, and yet we seem to have no end to social welfare and mental health problems in our society.
  • At the same time as the world is advancing in communications and life expectancy continues to grow in most economies, an emphasis on food security, on renewable energy sources that do not create more problems than they solve, on biodiversity and on climate change is much needed. What good is the most advanced and super-networked world if population pressures and food security are not being addressed, alongside rising sea levels that cause significant losses? We should not only be using our computing power to model and predict the changes that are inevitable to the geophysical properties of the earth, but to implement longer-term solutions.

 
5. In regards to your field of expertise, what is the best piece of advice you could offer to our readers?

  • The future is what we make of it. While computers are helping us to translate better and to advance once-remote villages, I advocate for the preservation of culture and language, music and dance and belief systems. In diversity there is richness. Some might feel the things I’ve spoken about above are hype, others might advocate them as hope, and still others might say this will be their future if they have anything to do with it. Industry and government will dictate continual innovation as being in the best interest of any economy, and I don’t disagree with this basic premise. But innovation for what and for whom? We seem to be sold the promise of perpetual upgrades on our smartphones, and likely soon on our own brains through memory enhancement options. It will be up to consumers to opt out of the latest high-tech gadgetry and opt in to a sustainable future. We should not be distracted by the development of our own creations, but rather use them to ensure the preservation of our environment and healthier living. Many are calling for a re-evaluation of how we go about our daily lives. Is the goal to live forever on earth? Or is it to live the good life in all its facets? And this has to do with our human values, both collectively and individually.

Patient Feedback in Product Lifecycle Management of Brain Pacemakers

The Need for Patient Feedback in the Product Lifecycle Management of Deep Brain Stimulation Devices

Katina Michael interviews Gary Olhoeft

Background

Professor Emeritus Gary Olhoeft of the Colorado School of Mines

This interview was conducted by Katina Michael with Gary Olhoeft, a deep brain stimulation (DBS) device recipient, on September 8, 2017. Katina is a Professor at the University of Wollongong who has been researching the social implications of implantable devices for the last 20 years, and Gary is a retired Emeritus Professor of Geophysics at the Colorado School of Mines. Gary has previously taught a number of subjects related to Advanced Electrical and Electromagnetic Methods, Antennas, Near Surface Field Methods, Ground Penetrating Radar, and Complex Resistivity. Gary had a deep brain stimulator installed in 2009 to help him combat his Parkinson’s Disease. This interview is a participant observer’s first-hand journey into a life dependent on a deep brain stimulator. Of particular interest is the qualified nature of the participant in the field of electromagnetics with respect to his first-hand experience of the benefits, risks and challenges surrounding the device that has been implanted in his body. Katina first came to know of Gary’s work through his open comments in a Gizmodo article in 2017 [i] while looking into the risks associated with biomedical devices in general. Gary has also delivered numerous presentations to the EMR Policy Institute on “Electromagnetic Interference and Medical Implants”, dating back to 2009 [ii]. The interview is broken into two parts.

KATINA MICHAEL: Gary, thank you for your time. We have previously corresponded on two full-length written questionnaires, and now this structured Skype interview. I think within my capacity in the Society for the Social Implications of Technology in the IEEE, we might be able to take some of the issues you raise forward. I think that as more people come on board with various brain implants, heart pacemakers and internal diagnostic devices, the Federal Communications Commission (FCC), the Food and Drug Administration (FDA) and the health insurance industry more generally will have to engage with at least some of the issues that you and other biomedical device recipients have identified from your experience.

GARY OLHOEFT: Thank you for the opportunity.

KATINA MICHAEL: So many people who are designing biomedical devices do not actually realise that patients are awake during some of the DBS procedure. I have found, on the engineering side of the design, that many engineers have never witnessed a DBS going into someone’s brain, or at least understood the actual process of implantation. I have spoken to biomedical engineers in key academic institutions with major funded brain implant projects who have challenged me about whether or not the patient is actually awake during the process. I do find it bewildering at times that some engineers have never spoken to patients or are so withdrawn from the practical side of biomedical device deployment. Engineers tasked with some complex problems sometimes look at solving only a single part of the end-to-end design, without understanding how all the componentry works together.

GARY OLHOEFT: That’s amazing.

KATINA MICHAEL: Yes.

GARY OLHOEFT: I was also amazed to talk to the Chief Engineer at Medtronic about the DBS once. He told me the whole thing was entirely built out of discrete components with no integrated circuits because the FDA has not approved any integrated circuits yet.

KATINA MICHAEL: What do you make of this? That the regulations and the regulatory body responsible is holding things up? What is your personal position?

GARY OLHOEFT: Well, I definitely think that the regulatory body is holding things up. Just look at when the first DBS was installed in France in 1987 [iii]. It was something like 14 years before it was made available in the USA in about 2001 with FDA approval. I got mine in 2009, and they had already sold hundreds of them at that point in America.

KATINA MICHAEL: And for you at that time, there was no other alternative? I assume that if you had not adopted, that your quality of life was going to diminish quickly?

GARY OLHOEFT: That’s right. I would have continued shaking and not been able to write, or I would have avoided reading, walking or talking. Something I think I haven’t told you yet is that my device is also an interleaved device that has two settings that alternate: one is set for me to walk, and the other is set for me to talk. You used to have to choose between the two, but now they can alternate because they are interleaved, so I can do both at the same time.

KATINA MICHAEL: For me Gary, it is nothing short of miracles what they are doing.

GARY OLHOEFT: Yes.

KATINA MICHAEL: And I marvel at these things. Was the FDA correct in waiting those 15 years or so before approving it, or should they have approved it earlier so that other people might have had an improved quality of life in the United States? What do you think about the length of time it took to get approval? Are you critical of it?

GARY OLHOEFT: It depends on what they are talking about. Some of the things they are talking about with genetic modification implants, with virally induced genetic modifications and stem cells, these things are going too fast. A doctor once told me that when they go to the FDA for approval they have to go through trials. The first trial involves a few people. The next trial involves a few tens of people. And then at the approval point there are hundreds of people, but when it is approved possibly thousands or even millions of people will get it, and then all kinds of things can go wrong that they did not anticipate. So you have to be very careful about this stuff. However, the FDA seems to reinvent the wheel, requiring their own testing when adequate testing has already been done in other countries.

KATINA MICHAEL: I agree with you. It is the brain we are talking about after all.

GARY OLHOEFT: The thing that bothers me most is that Apple footage you sent me. You know that clip with Steve Jobs and the Wi-Fi problem?

KATINA MICHAEL: Yes [iv].

GARY OLHOEFT: I would not have liked to have been in that room with a DBS.

KATINA MICHAEL: Yes. Interestingly I was researching that for a talk on business ethics and AI and the future and then we had this correspondence, and I just connected the two things together [v]. And if he could not run an iPod demo with that EMC (electromagnetic compatibility) interference problem when we know he would do exhaustive user testing at launches [vi], then what are we going to do Gary when we have more and more people getting implants and even more potential electromagnetic interference? I am trying to figure out what kind of design solution could tackle this?

GARY OLHOEFT: And there’s a whole bunch of other things that bother me, like the electromagnetic pulse to stop cars on freeways and the devices they have to shock people.

KATINA MICHAEL: The tasers?

GARY OLHOEFT: What about all those people that have implants like me or other kinds of implants? In one of those fictitious mystery shows, someone was depicted as being killed in a bank robbery by an electromagnetic pulse. So we can see these kinds of scenarios are making it into the public eye through the visual press.

KATINA MICHAEL: And that is a fictional account, right?

GARY OLHOEFT: It’s a fictional scenario but it is certainly possible [vii].

KATINA MICHAEL: Yes, it sure is. Exactly. I am talking at the annual conference for the Australian Communications Media Authority (ACMA) next month, and I will be using our discussion today as a single case study to raise awareness here. I am talking on implantables for non-medical applications, and there is presently a great deal of pressure from the biohacking community [viii]. A lot of these guys are my friends given my research area, but are doing some very strange things. Presently some of them are talking about hacking the brain and I am telling them you really should not be doing that without medical expertise even if it is in the name of “citizen science”. Some of them are amateur engineers and others are fully-fledged qualified engineers but not medical people. And I personally feel the brain is not to be experimented with like this. It is reminiscent of what I would call ‘backyard lobotomies’. 

GARY OLHOEFT: It is like DARPA. They have a call out at the moment for a million electrodes inside the brain so they can communicate, not for therapeutic value like mine has [ix], [x].

KATINA MICHAEL: You are likely familiar with the DARPA project from 2012, for a brain implantable device that could be used to aid former service men and women suffering from post-traumatic stress disorder, depression and anxiety [xi]. We did a special issue on this in the IEEE Technology and Society Magazine last year [xii]. They have also claimed this device solution could be used for memory enhancement. It sounds like the cyborgisation of our forces.

GARY OLHOEFT: That’s like what I have. The latest one is more like when you want to remote control a vehicle or something. The September 2017 IEEE Spectrum had an article about Brain Racers using brain controlled avatars to compete in a cyborg Olympics [xiii].

KATINA MICHAEL: Exactly. And we did raise issues in that special which I will send to you. I held a national security workshop on brain implants in the military in 2016 [xiv], at the University of Melbourne where they are doing research on stentrodes. The University of Melbourne is considered to have some leading academics in this space, receiving some partial funding I believe from DARPA [xv]. I then invited some biomedical engineers in the DBS space from the University of Melbourne to participate in the workshop, like Thomas Oxley, but all were unavailable to make it. Thomas incidentally was undergoing training in the USA related to DBS and stentrodes [xvi].

GARY OLHOEFT: Okay.

KATINA MICHAEL: There are so many things going on at present like implantables in your jaw that are so close to the ear that they can allow you to communicate wirelessly so you can hear via your teeth [xvii]. We were looking at these kinds of implants and implications at various workshops including at the University of Wollongong where we have a Centre of Excellence [xviii].

GARY OLHOEFT: It’s not surprising. In the old days, when we had the silver amalgam fillings in teeth, there were people that used to go listening to the radio through their teeth.

KATINA MICHAEL: Yes. There’s a well-known episode of the Partridge Family where Laurie gets braces and her boyfriend’s Walkman interferes with her ability to sing songs when a film crew comes to record music in the family home [xix]. So yes, teeth are amazing; the auditory links there have been well-known for decades and are just being rediscovered by the younger generation.

GARY OLHOEFT: Yes.

KATINA MICHAEL: And then there are the communications for autonomous weapons, or override. Can a human be autonomous, for instance? Last week we were discussing some of the ethics behind overriding someone's decision not to fire or strike at a target [xx]. Or imagine the ability to remotely control a drone just by using your thoughts, versus someone in a remote location executing the fire or strike commands without being in situ, by intercepting that communication stream. Imagine the potential to intercept a person’s thoughts and to make them physically do something. This is where, for me, the waters get muddied. I do not mind the advancements in the Cochlear space, for instance, where deaf persons have the ability to hear music and entertainment through an embedded technological device [xxi]. I think that is another marvel, really. But I’d be interested to hear your opinion about the crossover between the medical and non-medical spaces. Do you think that is just life, that is just how innovation is? That we need to get used to this, or do you believe prosthetics are the only reason we should be implanting people in the brain?

GARY OLHOEFT: I think the only reason we should be implanting people is for therapeutic reasons. For instance, I have a deep brain stimulator for a specific disease, others might have a particular problem or maybe it is to replace a part of the brain that has been damaged physically. Because the question becomes, when are we no longer human anymore if we go beyond prosthetics purposes?

KATINA MICHAEL: Yes.

GARY OLHOEFT: We have problems with driverless cars, and people are talking about mirrored systems and all sorts of electronics in them that interfere with DBS. There was a paper published where researchers took about 10 cars at different times, and they discovered that the ones that were diesel powered did not interfere because they didn’t have any ignition system [xxii]. Conventionally powered cars, which had an electronic ignition system, caused some interference. But electric and hybrid engines caused problems for people with implants [xxiii], [xxiv], [xxv], [xxvi].

KATINA MICHAEL: So do you fear getting in a vehicle at any time? Or is that not the issue, rather it is if you are driving or physically touching parts of the car?

GARY OLHOEFT: No, it’s probably if you are just in the vehicle itself, because of the way they have the wiring in some vehicles. A Prius has 8 computers inside it, Wi-Fi and Bluetooth, and the way they run the wiring from the batteries to the front, it is not twisted wiring, it is just a straight pair of wires. If it were twisted pair there would be a lot less magnetic noise inside the car body.

KATINA MICHAEL: So that’s the car company trying to save money, right?

GARY OLHOEFT: I really don’t know. We have a Prius as well. I’ve tested our car. We have two sets of batteries. The front and right passenger seat are okay but the driver’s position is very noisy. There’s a woman we know, when she drives her Prius, her deep brain stimulator turns off when the car goes into charging mode (while braking) [xxvii].

KATINA MICHAEL: Oh dear, this is a major problem.

GARY OLHOEFT: That’s why I don’t drive.

KATINA MICHAEL: These issues must get more visibility. They can no longer be ignored. This is where consumer electronics come head-to-head with biomedical devices.

GARY OLHOEFT: I’ve also sent you documents that I’ve sent to the FCC and FDA.

KATINA MICHAEL: I read these.

GARY OLHOEFT: I’ve not received any response to these.

KATINA MICHAEL: This is truly an important research area. This topic crosses over engineering, policy and society. It is really about the importance of including the end-user (or patient in this case) in the product lifecycle management (PLM) process.

GARY OLHOEFT: Agreed.

KATINA MICHAEL: You are the first person I have engaged with who has convinced me to go further with this particular research endeavour. Save for some very sporadic papers in the press, and random articles in journal publications about electromagnetic interference issues, it was the Gizmodo article [xxviii] that my husband stumbled across citing you, that has validated our present conversation. It is time to take this very seriously now. We now have so many pacemakers, it is not just heart, it is brain as well. And I cannot even get a good figure for how many there are out there and I keep being asked but different sources state different things.

GARY OLHOEFT: They don’t know because they don’t track them [see the introduction in [xxix]].

KATINA MICHAEL: That is right, but somewhat shocking to me, because surely these numbers exist somewhere. And we have to track them. And I do not mean track the names of people. I do not really want people to be in a database of some sort because they bear an implant. I worry about potential hackers getting access to that, not from the privacy perspective alone, but also because I do not wish to tip off potential predatory hackers into “switching people off”, so to speak, in the future.

GARY OLHOEFT: Sure.

KATINA MICHAEL: My concern is that the more of us who bear these implantables for non-medical reasons in the future, the greater the risks.

GARY OLHOEFT: There is a well-known story of someone who had an internal insulin pump hacked, where the insulin dose was changed so that it could have killed them [xxx], [xxxi].

KATINA MICHAEL: I do wonder Gary if this all has to do with liability issues [xxxii]. There is simply no reason that companies like Medtronic should not be engaging the public on these matters. In fact, it is in their best interest to receive customer feedback.

GARY OLHOEFT: It’s definitely a problem and I don’t know what to do about that….

KATINA MICHAEL: So we need some hard core evidence that someone’s implantable has previously been tampered with?

GARY OLHOEFT: I’ve already raised the issue several times, and Medtronic, my brain implant manufacturer, just sent me the programmer’s manual for the patient. The original one I got was just a couple of pages that had to do with interference. The latest version is 16-18 pages in length on interference. And that is because of the questions I raised about interference and the evidence I showed them.

KATINA MICHAEL: Right.

GARY OLHOEFT: They still won’t admit that their device was defective in one case where I could prove it. My doctor believed me because I showed him the evidence, so he had them replace it at no charge.

KATINA MICHAEL: Okay. I have a question about this. Thank you for the information you sent me regarding your EEG being logged by your DBS implant.

GARY OLHOEFT: It is not the EEG that I sent you, it is the measurement of the magnetic field from induced DBS current.

KATINA MICHAEL: It is the pulse?

GARY OLHOEFT: Right. The pulse height and the pulse frequency.

KATINA MICHAEL: Ok. So I saw the graph which indicated that every second pulse was being skipped.

GARY OLHOEFT: Right.

KATINA MICHAEL: So the question I have is whether you have access to your EEG information? There was a well-known case of Hugo Campos, who wanted access to his ECG information, and last I heard he had taken the manufacturer to court after they claimed they had the right to withhold this data [xxxiii]. He is more interested in data advocacy than anything else [xxxiv]. He was claiming it was “his” heart rate, and his personal biometrics, and that he had every right to have access to that information [xxxv].

GARY OLHOEFT: There are devices on the web that show you how to build something to get an ECG [xxxvi].

KATINA MICHAEL: Exactly. Hugo even enrolled himself in courses meant for biomedical engineers to do this himself, that is, to hack into his own heartbeat information, with the pacemaker device residing in his own body [xxxvii]. So he has been at the forefront of that. But the manufacturer is claiming that “they” own the data [xxxviii]. And my question to you is: is your DBS data uploaded through some mechanism, like the heart pacemaker data that is uploaded on a nightly basis and sent back to base [xxxix]?

GARY OLHOEFT: No, the only time they have access to it is when I visit the doctor’s office. Only when I go to the doctor.

KATINA MICHAEL: You mean to download information or to check its operation?

GARY OLHOEFT: Download information from the pack in my chest. Actually, they store it in there. They print it out in hardcopy because they are afraid of people hacking their computers.

KATINA MICHAEL: Yes.

GARY OLHOEFT: And this is the University of Colorado Hospital.

KATINA MICHAEL: Yes, I totally understand this from my background reading. I’ve seen similar evidence where hardcopies are provided to the patient, but a lot of patients like Hugo Campos are saying hardcopies are not enough: I should be able to have access at any time, and I should be able to tell someone my device is playing up, or look, something is wrong [xl].

GARY OLHOEFT: Right. Remember how I told you about the interleave function? Well, when they set it to the interleave setting for the first time, they didn’t do it right. And I woke up the next morning feeling like I had had 40 cups of coffee. It turns out it was running at twice the frequency it should have been, and I could show that. So I called them up and said you’ve got a problem here, and they fixed it right away. I figured I could measure it independently of the Medtronic device. That’s why I built my own AC wirewound ferrite core magnetometer to monitor my own DBS.

KATINA MICHAEL: Okay.

GARY OLHOEFT: But all the Doctor had was a program that told him whatever Medtronic wanted to tell him. I wanted more information than that, I wanted to actually see it so I built my own.

KATINA MICHAEL: So I saw from your information that you would have had a product out on the market to help others, but the iPhone keeps upgrading, so I get that.

GARY OLHOEFT: It keeps changing faster than I can keep up with it.

KATINA MICHAEL: So I am going to argue that it is their responsibility, the manufacturer’s responsibility to provide this capability.

GARY OLHOEFT: I see no reason why they couldn’t, but I like the idea of a third party providing an independent measurement of whether the implant is working and measuring the parameters directly (pulse height, pulse width, pulse repetition frequency, etc.).
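
As a rough illustration of the kind of independent check Gary describes, the sketch below estimates pulse repetition rate and pulse height from a sampled magnetometer trace. It is a minimal sketch only, assuming the DBS pulses appear as brief peaks well above the noise floor of the recording; the function name, threshold rule and synthetic 130 Hz test signal are illustrative assumptions, not details of Gary's own magnetometer.

```python
import numpy as np

def estimate_pulse_parameters(trace, sample_rate_hz, threshold=None):
    """Return (pulse_rate_hz, mean_pulse_height) from a 1-D magnetometer trace."""
    trace = np.asarray(trace, dtype=float)
    if threshold is None:
        # Crude threshold: assumes pulses stand well clear of the noise floor.
        threshold = trace.mean() + 4.0 * trace.std()

    above = trace > threshold
    # Rising edges (below -> above transitions) mark the start of each detected pulse.
    rising_edges = np.flatnonzero(~above[:-1] & above[1:]) + 1
    if len(rising_edges) < 2:
        return 0.0, 0.0

    intervals_s = np.diff(rising_edges) / sample_rate_hz   # seconds between pulses
    pulse_rate_hz = 1.0 / intervals_s.mean()
    mean_pulse_height = trace[above].mean()
    return pulse_rate_hz, mean_pulse_height

# Synthetic check: a 130 Hz train of 100 microsecond pulses sampled at 10 kHz, plus noise.
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
clean = (np.mod(t, 1.0 / 130) < 100e-6).astype(float)
noisy = clean + 0.05 * np.random.randn(t.size)
rate, height = estimate_pulse_parameters(noisy, fs)
print(f"Estimated pulse rate: {rate:.1f} Hz, mean pulse height: {height:.2f}")
```

A halved estimate relative to the programmed rate would flag the kind of skipped-pulse behaviour Gary describes earlier in the interview.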

KATINA MICHAEL: So I am concerned on a number of fronts, and have been for some time. This in particular is not a huge ask if they are cooperative in the process of an incremental innovation. For example, imagine if Apple collaborated with Medtronic or other providers such as Stryker and so forth, like Cochlear have collaborated with Apple. I think biomedical device manufacturers have to offer this as a service, in layman’s terms for non-engineers. And it must be free and not cost the recipient anything. It is the only way to empower recipients of the pacemakers and for them to feel at ease, without having to go for a visit to a cardiac specialist.

GARY OLHOEFT: I told you about the experience of walking into a Best Buy and having their automated inventory control system turn off my DBS?

KATINA MICHAEL: Yes.

GARY OLHOEFT: Then I used my device to see what frequency it was operating at, and then asked my doctor to change my DBS to a different frequency so that I could walk in and out of Best Buy. So the operator needs to keep such frequency ranges in mind. These inventory control devices are built into walls in stores and malls, so you no longer know that they are even there, or have any warning. But they are there.

KATINA MICHAEL: I know, they are unobtrusive.

GARY OLHOEFT: So there need to be warning signs or other things like that. They seem to be beginning to appear in hospitals and imaging centres, where they say “MRI in use: if you have a cardiac pacemaker or brain pacemaker device, do not enter this room”. But it is a rare thing still. I remember 20 years ago or so when they had “danger: microwave oven in use”.

KATINA MICHAEL: Yes, I remember that.

GARY OLHOEFT: It is like we need a more generic warning than that.

KATINA MICHAEL: What is your feeling with respect to radio frequency identification (RFID)? Or the new payment systems using near field communications? Are they affecting pacemakers? Or is it way too low in terms of emissions?

GARY OLHOEFT: Well, no. There are wireless devices that are low-level that don’t bother me. For example, I have a computer with Wi-Fi, and that doesn’t bother me. That is because I’ve measured it and I know what it is. It is a dosage thing. If I stay near it, the dosage begins to build up, and eventually it can get to a point where it could be a problem. Not necessarily for my DBS but for other things. Heart pacemakers are much closer to the heart, so there is less of a problem. And the length of wiring is much shorter in heart pacemakers. I have a piece of wire that runs from my chest, up my neck, up over the top of my head, and back behind my eyes, and it is almost 18 inches long. That is part of the problem. They could have made that a twisted pair with shielding, like CAT6 wiring, but they didn’t, and the Medtronic people need to fix that one.

KATINA MICHAEL: And Gary I spoke to some researchers last year in June who were talking about not having the battery packs so low, having it closer to the brain and smaller in size. Do you think the problem would dissipate somewhat if the battery pack was closer to the brain?

GARY OLHOEFT: Yes. But then the battery won’t last as long because it’s smaller.

KATINA MICHAEL: Yes, I know that is the issue.

GARY OLHOEFT: They are already trying rechargeable batteries, but you spend all day at the charger: you get 9 hours of charging for only 1 hour of use.

KATINA MICHAEL: No, that is definitely not feasible.

GARY OLHOEFT: So my doctor told me that, and he recommended against that for me.

KATINA MICHAEL: Now here is another question that is a difficult one for you, I think. Do you find a conflict in your heart sometimes? You are trying to help the manufacturer make a better product and you are trying to raise awareness of the important issues that patients face, and yet you are relying on the very product that you are trying to get some response to. Have you ever written to Medtronic and said “This is my name and this is my story- would you allow me to advise you, that is, provide feedback to your design team?” [xli]

GARY OLHOEFT: There is a Vice President that is responsible for R&D who is both a medical doctor and an engineer… I wrote to him several times and never got an answer.

KATINA MICHAEL: Right.

GARY OLHOEFT: But I have told Medtronic’s Chief Engineer that my device is misbehaving, you know, with all those missing pulses. He was quite open about it. I also told him about the interleave problem, that at the time it felt like I had had 40 cups of coffee, and he said that was outside his area of expertise because he built the hardware but someone else programmed it. And you can find there are books out there that might tell you how to program these things. I’ve looked at them, but I don’t agree with the approach they take. They never talk about interference. The default programming for this is 180 repetitions … it’s still the wrong place to start because, in the US, 180 is a multiple (harmonic) of the powerline frequency and close to the frequency used by many security systems and inventory control systems. See my talks on YouTube [xlii], [xliii].
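
For readers unfamiliar with the harmonic relationship Gary mentions, a minimal arithmetic check makes the overlap explicit (assuming the figure of 180 refers to pulses per second and the 60 Hz North American mains frequency):

$$180\ \mathrm{Hz} = 3 \times 60\ \mathrm{Hz}$$

That is, a stimulator programmed at that default rate sits exactly on the third harmonic of the powerline, which is one reason mains-coupled interference could be difficult to separate from the therapeutic pulse train.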

KATINA MICHAEL: I am so concerned about what I am hearing. Concerned that the company is not taking any action. Concerned that we are not teaching our up-and-coming engineers about these problems, because they have to know if they are not going to fall into the same pitfalls down the track as devices get even more sophisticated. I am also concerned that recipients of these brain pacemakers are not given the opportunity to provide proper feedback to design teams directly, and that there is no indirect path in which to do this. A web page does not cut it. There are people like yourself Gary, who are willing and have relevant research expertise, whom these companies should even be welcoming onto their payroll to improve the robustness of their technologies. And I’ve already raised issues like those you are stating with collaborators at the Consortium for Science, Policy & Outcomes at Arizona State University.

GARY OLHOEFT: Well, I’ve tried writing to various organisations and agencies and when possible, giving testimony to FDA, FCC and other agency requests for information.

KATINA MICHAEL: I think it is important to create a safe space where manufacturers, medical practitioners, patients, and policymakers come together to discuss these matters openly. I know there are user groups where patients go to discuss issues, but that serves quite a different function, more of a support group. Until there is some level of openness, it is likely that these issues will continue to cloud future developments. Gary, we need more people like yourself who have real stories to share, that are documented, together with peer-reviewed published research in the domain of interference and DBS. We should continue to write to them and also invite them to workshops and roundtable meetings, and invite representatives from the FDA and FCC. What do you think about this approach?

GARY OLHOEFT: Yes, you can put me down for that. I’ll be involved.

KATINA MICHAEL: Great!

GARY OLHOEFT: Part of the problem is that the FCC authorisation covers 9 kHz up to 300 GHz. And these devices operate below 200 Hz. So the FCC has no regulatory authority over them, except as Part 15 devices. The FDA has no limit, from lasers down to direct current (DC). The FCC has nothing to do with it, so we need to get involved with the FDA. We need to get them to document things at any rate.

KATINA MICHAEL: I also have a question about the length of time that battery packs last in implantable biomedical devices. Could they last longer? One researcher, known as a Cyborg Anthropologist, was speaking to someone from one of these biomedical companies on a plane who said to her that the devices are replaced in 4-5 year periods so that the companies can make more money, like 40,000 dollars for each new device. What do you make of this?

GARY OLHOEFT: Possibly the case. But really, you don’t want to be making bigger battery packs, right? You just want to be able to make better battery technology. For example, do you really want a Lithium ion battery in your body because it lasts longer?

KATINA MICHAEL: Yes, you have a point there. What kind of battery do you have?

GARY OLHOEFT: I don’t know what kind it is. I do have the dimensions for how big it is.

KATINA MICHAEL: Yes, I saw the information you sent, 6x6x2 cm.

GARY OLHOEFT: It’s already presentable “looks-wise”, so I wouldn’t risk it.

KATINA MICHAEL: Ok, I agree with your concerns here. I was just worried about this remark because I have heard it before, replacement biomedical devices being a money generator for the industry [xliv].

GARY OLHOEFT: He must’ve been a marketing type.

KATINA MICHAEL: Yes, he was in sales engineering.

GARY OLHOEFT: Do you know how they have those wireless power transmitters now? The Qi system is the only one I have been able to test, because I’ve been able to get through to them, as there is a potential there for interference [xlv]. So they have given me a device to actually play with.

KATINA MICHAEL: That is great. And a very good example of what we are talking about should be happening.

GARY OLHOEFT: There is a Wireless Power Consortium of other people who work at different frequencies [xlvi]. They are the only ones that give me no response to my letters. So the wireless power transmission people need to be brought into the scope of this somehow.

KATINA MICHAEL: Could you elaborate?

GARY OLHOEFT: These are the people who create devices to recharge batteries for devices that require power transmission.

KATINA MICHAEL: Yes, mobility types of technology devices. And there are lots of those coming and most of them with little testing in the security space. I mean the Internet of Things is promising so much in this market space. I think the last statistic I read that the media caught wind of was 20 billion devices by 2020 [xlvii].

GARY OLHOEFT: You are looking at a house that could have every lightbulb, every appliance, every device in it on the Internet.

KATINA MICHAEL: Yes indeed, we just have to look at the advent of NEST.

GARY OLHOEFT: And yet they are wirelessly transmitting. It would be much better if they were hooked up using fibre optics.

KATINA MICHAEL: Agreed… I mean for me it is also a privacy concern with everything in the house hooked up to the Internet [xlviii]. Last year somebody demonstrated they could set a toaster alight in the IoT scenario [xlix].

GARY OLHOEFT: You know how Google has these cars driving around taking pictures everywhere?

KATINA MICHAEL: Yes, that is part of my research [l].

GARY OLHOEFT: And they also record whatever wireless systems they can get into that is not subject to a password, and then they can record anything that is in it. They lost a lawsuit over that.

KATINA MICHAEL: Yes. There is one that was handed down into the billions in Europe recently. But over the last several years they have been fined very different amounts in different markets. It was very ridiculous that they were fined only a few thousand US dollars in for example, South Korea! [li]

GARY OLHOEFT: That is a joke.

KATINA MICHAEL: At the IEEE Sections Congress last month I spoke to several young people involved with driverless cars. And I don’t know, they were very much discounting the privacy and security issues that will arise. One delegate told me: “it’s all under control”. But I do not think they quite get it Gary. I said to one of them: “but what about the security issues” and he replied: “what issues, we’ve got them all under control, I am not in the slightest concerned about this because we are going to have protocols.” And I pointed to the Jeep Cherokee case that some hackers got to stop in its tracks on a highway in the United States [lii], [liii]. One of my concerns with these driverless cars is that people will die, sizzling in a hot vehicle, where they have been accidentally locked inside by the “car”. And they don’t even have to have pacemakers, it is an issue of simply having a vehicle unlock its doors for a client to exit.

GARY OLHOEFT: There was the case of the hybrid vehicle that was successfully stopped and demonstrated on TV.

KATINA MICHAEL: Yes. And there was also someone wearing an Emotiv device that was steering their vehicle with their thoughts [liv]. I was giving a talk at Wollongong’s innovation hub called iAccelerate last week and I told them this very scenario. What if I hacked into the driver’s thoughts, and steered the car off a cliff?

GARY OLHOEFT: So what will they do between vehicles when the devices start to interfere with one another? 

KATINA MICHAEL: Yes, exactly! And when devices begin interfering with one another more frequently for who knows what reason?

GARY OLHOEFT: We have had situations in which cell phones have stopped working because the network is simply overloaded on highways, or blocked by landslides or just traffic congestion. The Broncos Football Stadium here is undergoing a six million dollar upgrade just so they can get the Wi-Fi working, and now they are building it into every seat. So they now have security systems like airports do, and so I cannot go into the Stadium anymore because of my DBS. I couldn’t sit in a light rail train either.

KATINA MICHAEL: So here is a more metaphysical and existential question. I am so fortunate to be speaking to you! You are alive, you are well in terms of being able to talk and communicate, and yet somehow this sophisticated tech also means that you have had to scale back your access to certain places, almost living off the grid to some degree. So all of this complex tech actually means you are living more simply, perhaps. What does that feel like? It really is a paradox. You are being careful, testing your devices, testing the Wi-Fi, and learning by trial and error on the fly, it seems.

GARY OLHOEFT: Well I have a landline phone against my head right now because I know it doesn’t bother me. I cannot hold a cellular phone within 20 inches of my head.

KATINA MICHAEL: Hmm…

GARY OLHOEFT: So you are right. I mean there are a lot of places I cannot go to, like the School Library or the Public Library because of their system for keeping track of books. It has a very powerful electromagnetic pulse. So when I go to the Library, I go remotely via Virtual Private Network (VPN) on the Internet and fortunately I have access to that. I can also call the librarian who lets me in via the back door.

KATINA MICHAEL: So for me, in one case you are very free, and in the other case, somewhat not free at all. I really do not know how to express that in any other way.

GARY OLHOEFT: I see what you are trying to say but I would be less free without the device because it dramatically improves my functionality and quality of life, but also limits where I can go.

KATINA MICHAEL: I know. I know. I am ever so thankful that you have it and that we are able to talk so freely. I am not one to slow down progress but I am looking at future social implications. One of the things I have been pondering on is the potential to use these brain stimulators in a jail-like way. I am not referring here to torturous uses of brain stimulators, but for example, the possibility of using brain stimulators for repeat offenders in paedophilia for instance, or extreme crimes, whether we would ever get to the point where an implantable would be used for boundary control. Perhaps I am referring here to electronic jails of the future.

GARY OLHOEFT: That gets to be worrisome in a different way. How far away are you, then, from controlling people? 1984 and all that [lv]. These DBS devices are being used now to help with obsessive compulsive disorder (OCD) and neural pain management.

KATINA MICHAEL: Yes, that is what we have pondered in the research we have conducted on uberveillance with MG Michael. So if we can fix the brain with an implantable, then we can also do damage to it [lvi]. It is a bit like the internal insulin pump: if we can help someone receive the right amount of insulin, we can also reverse this process and give an individual the wrong amount to worsen the problem. Predatory hacking is something that will happen, if it is not happening already. That is just the human condition, that people will dabble with that kind of stuff. It is very difficult to talk about this in public because you do not wish to scare or alarm brain pacemaker or any pacemaker recipients, but we do need to raise awareness about this.

GARY OLHOEFT: That would be good because we don’t have enough people talking about these issues.

KATINA MICHAEL: I know. According to the NIH, there are 25 million people who have pacemakers and are vulnerable to cybersecurity hacks [lvii], [lviii]. That is a huge number. And it was you who also told me that 8% of Americans have some form of implant.

GARY OLHOEFT: Well it was 25 million in the year 2000.

KATINA MICHAEL: And the biggest thing? They must never ever link biomedical devices to the Internet of Things. Never. That is probably my biggest worry for the pacemaker community: that the companies will not think about this properly, and that they are going to be thinking of the ease of firmware updates and monitoring rather than the safety of the individual. I envisage it will require a community of people, and I am not short-sighted; it will mean a five-year engagement to make a difference to policy internal to organisations, and for government agencies to listen to the growing needs of biomedical patients. But this too is an educational process and highly iterative. This is not like going down to your local mechanic and getting your car serviced; this is about the potential for things to go wrong, minimising exposure, and ensuring they stay right.

GARY OLHOEFT: I agree.

KATINA MICHAEL: Thank you Gary for your time.

 

Key Terms

Biomedical device: is the integration of a medical device and information system that facilitates life-sustaining care to a patient in need of a prosthetic function. Biomedical devices monitor physiological characteristics through mechanical parts small enough to embed in the human body. Popular biomedical devices include heart pacemakers and defibrillators, brain stimulator and vagus nerve stimulator devices, cochlear and retinal implants, among others. The biomedical device takes what was once a manual function in the human body, and replaces it with an automatic function, for example, helping to pump blood through the heart to sustain circulation.

Biomedical Co-creation: co-creation is a term popularised in the Harvard Business Review in 2000. Biomedical co-creation is a management design strategy, bringing together a company that manufactures a biomedical device and recipients of that device (i.e. patients) in order to jointly produce a mutually valued outcome. Customer perspectives, experiences and views in this instance are vital for the long-term success of biomedical devices.

Deep Brain Stimulation: also known as DBS, is a neurosurgical procedure involving the implantation of a biomedical device called a neurostimulator (also known as a brain pacemaker), which sends electrical impulses, through implanted electrodes, to specific targets in the brain for the treatment of movement and neuropsychiatric disorders. DBS has provided therapeutic uses in otherwise treatment-resistant illnesses like Parkinson's disease, Tourette’s Syndrome, dystonia, chronic pain, major depressive disorder (MDD), and obsessive compulsive disorder (OCD). It is also being considered in the fields of autism and even anxiety-related disorders. The technique is still in its infancy and at the experimental stages with inconclusive evidence in treating MDD or OCD.

Federal Communications Commission: The Federal Communications Commission is an independent agency of the United States government created by statute to regulate interstate communications by radio, television, wire, satellite, and cable. Biomedical devices are not under the regulation of the FCC.

Food and Drug Administration: The Food and Drug Administration (FDA or USFDA) is a federal agency of the United States Department of Health and Human Services, one of the United States federal executive departments. The FDA is responsible for protecting and promoting public health through the control and supervision of a number of domains, among them those relevant to the biomedical device industry including electromagnetic radiation emitting devices (ERED).

Cybersecurity issues: are those that affect biomedical device recipients and place patients at risk of an unauthorised intervention. Hackers can attempt to hijack a device and administer incorrect dosage levels to a recipient by penetrating proprietary code. These hackers are known as predatory hackers, given the harm they can cause persons who rely on life-sustaining technology.

Implantables: are technologies that sense parameters of various diseases and can either transfer data to a remote centre, direct the patient to take a specific action, or automatically perform a function based on what the sensors are reading. There are implantables with sensors that monitor, those that facilitate direct drug delivery, and those that do both.

Participatory Design: is synonymous with a co-design strategy of development of biomedical devices. It is an approach that tries to incorporate various stakeholders in the process of design, such as engineers, medical practitioners, partners, manufacturers, surgeons, patients, ethics and privacy-related NGOs, end-users, to ensure that resultant needs are met.

Product Lifecycle Management: is the process of managing the entire lifecycle of a biomedical device from inception, through engineering design and manufacture, to service and disposal of manufactured products. Importantly, PLM is being extended to the ongoing monitoring of the embedded biomedical device in the patient, remotely using wireless capabilities.

 

References

[i] Kristen V. Brown, July 4, 2017, “Why People with Brain Implants are Afraid to Go Through Automatic Doors”, Gizmodo, http://www.gizmodo.com.au/2017/07/why-people-with-brain-implants-are-afraid-to-go-through-automatic-doors/, Accessed: February 19, 2018.

[ii] Gary Olhoeft, December 7, 2009, “Electromagnetic interference and medical implants”, The EMR Policy Institute, https://youtu.be/jo-B6LWfVzw

[iii] Rosie Spinks, June 13, 2016, “Meet the French neurosurgeon who accidentally invented the “brain pacemaker””, Quartz, https://qz.com/704522/meet-the-french-neurosurgeon-who-accidentally-invented-the-brain-pacemaker/, Accessed: September 16, 2017.

[iv] Staff. “Steve Jobs Most Pissed Off Moments (1997-2010)”, Apple, https://www.youtube.com/watch?v=1-oIL9cLHDc, Accessed: September 15, 2017.

[v] Katina Michael, “The Creative Genius in Us- the Sky’s the Limit or Is It?”, iAccelerate Series, https://www.youtube.com/watch?v=3W-8oR_bjAU, Accessed: October 18, 2017.

[vi] Fred Vogelstein, October 4, 2013, “And Then Steve Said, ‘Let There Be an iPhone’”, The New York Times Magazine, Accessed: September 15, 2017.

[vii] Alexandra Ossola, November 17, 2015, “Tasers May Be Deadly, Study Finds”, http://www.popsci.com/tasers-may-be-unsafe-says-new-report, Popular Science, Accessed: February 17, 2018.

[viii] Katina Michael, February 2, 2018, “The Internet of Us”, RADCOMM2017, http://www.katinamichael.com/seminars/2017/11/2/the-internet-of-us-radcomm2017, Accessed: February 19, 2018.

[ix] Emily Waltz, April 26, 2017, “DARPA to Use Electrical Stimulation to Enhance Military Training”, IEEE Spectrum, https://spectrum.ieee.org/the-human-os/biomedical/devices/darpa-to-use-electrical-stimulation-to-improve-military-training, Accessed: September 15, 2017.

[x] Kristen V. Brown, July 11, 2017, “DARPA Is Funding Brain-Computer Interfaces To Treat Blindness, Paralysis And Speech Disorders”, Gizmodo Australia, https://www.gizmodo.com.au/2017/07/darpa-is-funding-brain-computer-interfaces-to-treat-blindness-paralysis-and-speech-disorders/ Accessed: September 15, 2017.

[xi] Robbin A. Miranda, William D. Casebeer, Amy M. Hein, Jack W. Judy et al., 2015, “DARPA-funded efforts in the development of novel brain–computer interface technologies”, Journal of Neuroscience Methods, Vol. 244, pp. 52-67, http://www.sciencedirect.com/science/article/pii/S0165027014002702, Accessed: September 15, 2017.

[xii] Katina Michael, M.G. Michael, Jai C. Galliot, Rob Nicholls, 2017, “Socio-Ethical Implications of Implantable Technologies in the Military Sector”, IEEE Technology and Society Magazine, Vol. 36, No. 1, March 2017, pp. 7-9, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7879457, Accessed: September 15, 2017.

[xiii] Serafeim Perdikis, Luca Tonin, Jose del R. Millan, 2017, "Brain racers," IEEE Spectrum, Vol. 54, No. 9, September 2017, pp. 44-51, http://www.ieeeexplore.ws/stamp/stamp.jsp?tp=&arnumber=8012239&isnumber=8012214, Accessed: February 16, 2018.

[xiv] Katina Michael, M.G. Michael, Jai C. Galliot, Rob Nicholls, 2016, “The Socio-Ethical Implications of Implantable Technologies in the Military Sector”, 9th Workshop on the Social Implications of National Security, University of Melbourne, Australia, July 12, 2016, http://www.katinamichael.com/sins16, Accessed: September 15, 2017.

[xv] Jane Gardner, February 9, 2016, “Moving with the Power of Thought”, Pursuit, https://pursuit.unimelb.edu.au/articles/moving-with-the-power-of-thought, Accessed: September 15, 2017.

[xvi] Katina Michael, May 8, 2016, “Invitation to speak at the 9th Workshop on the Social Implications of National Security”, Personal Communications with Thomas Oxley.

[xvii] MF+ Staff, 2016, “SoundBite: Hearing Aid on your Teeth”, Sonitus Medical, http://medicalfuturist.com/soundbite-hearing-aid-on-your-teeth/, Accessed: September 15, 2017.

[xviii] Gordon Wallace, Joseph Wang, Katina Michael, 2016, “Public Information Session – Wearable Sensing Technologies: What we have and where we are going!”, Wearables and Implantables Workshop, University of Wollongong, Australia, Innovation Campus, August 19, 2016, http://www.electromaterials.edu.au/uploads/69133/ufiles/Booklet_-_Wearable_and_Implantable_Sensors_Workshop.pdf, Accessed: September 15, 2017.

[xix] Herbert Kenwith and James S. Henerson, 1971 “Laurie Gets Braces”, Partridge Family: Season 1, https://www.youtube.com/watch?v=KzBz1oR8nYE, Accessed: September 15, 2017.

[xx] Katina Michael, August 31, 2017, “The Creative Genius in Us- the Sky’s the Limit or Is It?”, iAccelerate: Illawarra’s Business Incubator, https://www.iaccelerate.com.au/events/882-the-creative-genius-in-us-the-sky-s-the-limit-or-is-it.html, Accessed: September 15, 2017.

[xxi] Emma Hinchcliffe, July 27, 2017, “This made-for-iPhone cochlear implant is a big deal for the deaf community”, Mashable, http://mashable.com/2017/07/26/cochlear-implant-iphone/#9ijsleSJqaqJ, Accessed: September 15, 2017.

[xxii] Ronen Hareuveny, Madhuri Sudan, Malka N. Halgamuge, Yoav Yaffe, Yuval Tzabari, Daniel Namir, Leeka Kheifets, 2015, “Characterization of Extremely Low Frequency Magnetic Fields from Diesel, Gasoline and Hybrid Cars under Controlled Conditions”, Vol. 12, No. 2, pp. 1651–1666.

[xxiii] Nicole Lou, February 27, 2017, “Everyday Exposure to EM Fields Can Disrupt Pacemakers”, MedPage Today/CRTonline.org, https://www.medpagetoday.com/cardiology/arrhythmias/63433, Accessed: February 17, 2018.

[xxiv] Oxana S. Pantchenko, Seth J. Seidman, Joshua W. Guag, 2011, “Analysis of induced electrical currents from magnetic field coupling inside implantable neurostimulator leads”, BioMedical Engineering OnLine, Vol. 10, No. 1, pp. 94, https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-10-94, Accessed: February 17, 2018.

[xxv] Oxana S. Pantchenko, Seth J. Seidman, Joshua W. Guag, Donald M. Witters Jr., Curt L. Sponberg, 2011, “Electromagnetic compatibility of implantable neurostimulators to RFID emitters”, BioMedical Engineering OnLine, Vol. 10, No. 1, pp. 50, https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-10-50, Accessed: February 17, 2018.

[xxvi] Kelly Dustin, 2008, “Evaluation of Electromagnetic Incompatability Concerns for Deep Brain Stimulators”, Disclosures: J. Neurosci. Nurs., Vol. 40, No. 5, pp. 299-303, http://www.medscape.com/viewarticle/582572, Accessed: February 19, 2018.

[xxvii] Joel M. Moskowitz, September 2, 2017, “Hybrid & Electric Cars: Electromagnetic Radiation Risks”, Electromagnetic Radiation Safety, http://www.saferemr.com/2014/07/shouldnt-hybrid-and-electric-cars-be-re.html, Accessed: February 16, 2018.

[xxviii] Kristen V. Brown, July 4, 2017, “Why People With Brain Implants Are Afraid To Go Through Automatic Doors”, Gizmodo: Australia, https://www.gizmodo.com.au/2017/07/why-people-with-brain-implants-are-afraid-to-go-through-automatic-doors/, Accessed: September 15, 2017.

[xxix] National Institutes of Health, January 10-12, 2000, “Improving Medical Implant Performance Through Retrieval Information: Challenges and Opportunities”,  U.S. Department of Health and Human Services, https://consensus.nih.gov/2000/2000MedicalImplantsta019html.htm, Accessed: February 16, 2018.

[xxx] Dan Goodin, October 27, 2011, “Insulin pump hack delivers fatal dosage over the air”, The Registerhttps://www.theregister.co.uk/2011/10/27/fatal_insulin_pump_attack/, Accessed: September 15, 2017.

[xxxi] BBC Staff, October 4, 2016, “Johnson & Johnson says insulin pump 'could be hacked'”, BBC News, http://www.bbc.com/news/business-37551633#, Accessed: September 15, 2017.

[xxxii] Office of Public Affairs, December 12, 2011, “Minnesota-Based Medtronic Inc. Pays US $23.5 Million to Settle Claims That Company Paid Kickbacks to Physicians”, Department of Justice, https://www.justice.gov/opa/pr/minnesota-based-medtronic-inc-pays-us-235-million-settle-claims-company-paid-kickbacks, Accessed: February 19, 2018.

[xxxiii] Hugo Campos, January 19, 2012, “Fighting for the Right to Open his Heart Data: Hugo Campos”, TEDxCambridge 2011, https://www.youtube.com/watch?v=oro19-l5M8k, Accessed: September 15, 2017.

[xxxiv] Hugo Campos, July 15, 2012, “Stanford Medicine X ePatient: On ICDs and Access to Patient Device Data”, Stanford Medicine X, https://www.youtube.com/watch?v=K35enPVJki4, Accessed: September 15, 2017.

[xxxv] Emily Singer, “Getting Health Data from Inside Your Body”, MIT Technology Review, https://www.technologyreview.com/s/426171/getting-health-data-from-inside-your-body/, Accessed: September 15, 2017.

[xxxvi] Hugo Silva, June 22, 2015, “How to build a DIY heart and activity tracking device”, OpenSource.com, https://opensource.com/life/15/6/how-build-diy-activity-tracking-device, Accessed: September 15, 2017.

[xxxvii] Hugo Campos, March 24, 2015, “The Heart of the Matter: I can’t access the data generated by my implanted defibrillator. That’s absurd.”, Slate, http://www.slate.com/articles/technology/future_tense/2015/03/patients_should_be_allowed_to_access_data_generated_by_implanted_devices.html, Accessed: September 15, 2017.

[xxxviii] Jody Ranck, 2016, “Rise of e-Patient and Citizen-Centric Public Health”, Ed. Jody Ranck, Disruptive Cooperation in Digital Health, Springer, Switzerland, pp. 49-51.

[xxxix] Haran Burri and David Senouf, 2009, “Remote monitoring and follow-up of pacemakers and implantable cardioverter defibrillators”, Europace. Jun, Vol. 11, No. 6, pp. 701–709, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2686319/, Accessed: December 6, 2017.

[xl] Mike Miliard, November 20, 2015, “Medtronic enables pacemaker monitoring by smartphone”, Healthcare IT News, http://www.healthcareitnews.com/news/medtronic-enables-pacemaker-monitoring-smartphone, Accessed: September 15, 2017.

[xli] Staff. “How can we help?”, Medtronic, http://professional.medtronic.com/customer-support/contact-us/index.htm#.Wbua5dWCzIU, Accessed: September 15, 2017.

[xlii] Gary Olhoeft, December 12, 2009, “Gary Olhoeft #1 Electromagnetic Interference and Medical Implants.mov”, Youtube: EMRPolicyInstitutehttps://www.youtube.com/watch?v=SymnXTNh8Ms, Accessed: February 16, 2018.

[xliii] Gary Olhoeft, December 12, 2009, “Gary Olhoeft #2 Electromagnetic Interference and Medical Implants.mov”, Youtube: EMRPolicyInstitute, https://www.youtube.com/watch?v=XrETLgwPljQ, Accessed: February 16, 2018.

[xliv] Tim Pool, August 2, 2017, “When Companies Start Implanting People: An Interview with Amber Case on the Ethics of Biohacking”, TimCast, Episode 139, https://www.youtube.com/watch?v=UC8iQzjKQoU, Accessed: September 15, 2017.

[xlv] Administrators. Qi (Standard), Wikipedia, https://en.wikipedia.org/wiki/Qi_(standard), Accessed: September 15, 2017.

[xlvi] WPC, 2017, Wireless Power Consortium, https://www.wirelesspowerconsortium.com/, Accessed: September 15, 2017.

[xlvii] Amy Nordrum, August 16, 2016, “Popular Internet of Things Forecast of 50 Billion Devices by 2020 Is Outdated”, IEEE Spectrum, https://spectrum.ieee.org/tech-talk/telecom/internet/popular-internet-of-things-forecast-of-50-billion-devices-by-2020-is-outdated, Accessed: September 16, 2017.

[xlviii] Grant Hernandez, Orlando Arias, Daniel Buentello, Yier Jin, 2014, “Smart Nest Thermostat: A Smart Spy in Your Home”, Blackhat.com, https://www.blackhat.com/docs/us-14/materials/us-14-Jin-Smart-Nest-Thermostat-A-Smart-Spy-In-Your-Home-WP.pdf, Accessed: September 16, 2017.

[xlix] Mario Ballano Barcena, Candid Wueest, 2015, “Insecurity in the Internet of Things”, Symantec, https://www.symantec.com/content/en/us/enterprise/iot/b-insecurity-in-the-internet-of-things_21349619.pdf, Accessed: September 16, 2017.

[l] Katina Michael and Roger Clarke, 2013, “Location and tracking of mobile devices: Überveillance stalks the streets”, Computer Law and Security Review: the International Journal of Technology Law and Practice, Vol. 29, No. 3, pp. 216-228.

[li] Katina Michael, M.G. Michael, 2011, “The social and behavioural implications of location-based services”, Journal of Location Based Services, Vol. 5, Iss. 3-4, http://www.tandfonline.com/doi/full/10.1080/17489725.2011.642820?src=recsys, Accessed: September 16, 2017.

[lii] Andy Greenberg, July 21, 2015, “Hackers remotely kill a Jeep on the Highway- with me in it”, Wired, https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/, Accessed: September 16, 2017.

[liii] Andy Greenberg, August 1, 2016, “The Jeep Hackers are back to prove car hacking can get much worse”, Wired, https://www.wired.com/2016/08/jeep-hackers-return-high-speed-steering-acceleration-hacks/, Accessed: September 16, 2017.

[liv] Markus Waibel, February 17, 2011, “BrainDriver: A Mind Controlled Car”, IEEE Spectrum, https://spectrum.ieee.org/automaton/transportation/human-factors/braindriver-a-mind-controlled-car, Accessed: September 16, 2017.

[lv] Oliver Balch, November 17, 2016, “Brave new world: implantables, the future of healthcare and the risk to privacy”, The Guardian, https://www.theguardian.com/sustainable-business/2016/nov/17/brave-new-world-implantables-the-future-of-healthcare-and-the-risk-to-privacy, Accessed: February 19, 2018.

[lvi] Katina Michael, 2015, “Mental Health, Implantables, and Side Effects”, IEEE Technology and Society Magazine, Vol. 34, No. 2, June, pp. 5-7, 17, http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7128830, Accessed: September 16, 2017.

[lvii] Staff. August 30, 2017, “Cyber-flaw affects 745,000 pacemakers”, BBC News, http://www.bbc.com/news/technology-41099867, Accessed: September 16, 2017.

[lviii] Carmen Camara, Pedro Peris-Lopez, Juan E. Tapiador, 2015, “Security and privacy issues in implantable medical devices: A comprehensive survey”, Journal of Biomedical Informatics, Vol. 55, June 2015, pp. 272-289, https://www.sciencedirect.com/science/article/pii/S153204641500074X, Accessed: February 19, 2018.

 

Citation: Excerpt from Gary Olhoeft and Katina Michael (2018), Product Lifecycle Management for Brain Pacemakers: Risks, Issues and Challenges, Technology and Society (Vol. 2), University of Wollongong (Faculty of Engineering and Information Sciences), ISBN: 978-1-74128-270-2.

Kallistos Ware on Religion, Science & Technology

This interview with Kallistos Ware took place in Oxford, England, on October 20, 2014. The interview was transcribed by Katina Michael and adapted again in Oxford, on October 18, 2016, by Metropolitan Kallistos in preparation for it to appear in print. MG Michael predominantly prepared the questions that framed the interview.

Biography

Born Timothy Ware in Bath, Somerset, England, Metropolitan Kallistos was educated at Westminster School (to which he had won a scholarship) and Magdalen College, Oxford, where he took a Double First in Classics as well as reading Theology. In 1966 Kallistos became a lecturer at the University of Oxford, teaching Eastern Orthodox Studies, a position he held for 35 years until his retirement. In 1979, he was appointed to a Fellowship at Pembroke College, Oxford.

Do You Differentiate between the Terms Science and Technology? And is there a Difference between the Terms in Your Eyes?

Science, as I understand it, is the attempt systematically to examine reality. So in that way, you can have many different kinds of science. Physical science is involved in studying the physical structure of the universe. Human science is examining human beings. Thus the aim of science, as I understand it, is truth. Indeed, the Latin term scientia means knowledge. So, then, science is an attempt through the use of our reasoning brain to understand the world in which we live, and the world that exists within us. Technology, as I interpret it, means applying science in practical ways, producing particular kinds of machines or gadgets that people can use. So science provides the basis for technology.

What Does Religion have to Say on Matters of Science and Technology?

I would not call religion a science, though some people do, because religion relies not simply on the use of our reasoning brain but it depends also on God's revelation. So religion is based usually on a sacred book of some kind. If you are a Christian that means the Bible, the Old and New Testaments. If you are a Muslim, then the Old Testament and the Quran.

So science as such does not appeal to any outside revelation; it is an examination of the empirical facts before us. But in the case of religion, we do not rely solely on our reasoning brain but on what God has revealed to us, through Scripture and, in the case of an Orthodox Christian, through Scripture and Tradition. Technology is something we would wish to judge in the light of our religious beliefs. Not all of the things that are possible for us to do applying our scientific knowledge are necessarily good. Technology by itself cannot supply us with the ethical standards that we wish to apply. So then religion is something by which we assess the value or otherwise of technology.

Could We Go Insofar as Saying that Science and Religion Could be in Conflict? Or at Least is there a Point Where they Might Become Incompatible One with the Other?

I do not believe that there is a fundamental conflict between science and religion. God has given us a reasoning brain, he has given us faculties by which we can collect and organize evidence. Therefore, fundamentally all truth is from God. But there might be particular ways of using science which on religious grounds we would think wrong. So there is not a fundamental conflict, but perhaps in practice a certain clash. Problems can arise when, from the point of view of religion, we try to answer questions which are not strictly scientific. It can arise when scientists go beyond the examination of evidence and form value judgements which perhaps could conflict with religion. I would see conflict arising, not so much from science as such in the pursuit of truth, but from scientism, by which I mean the view that methods of scientific enquiry necessarily answer all the questions that we may wish to raise. There may be areas where science cannot give us the answer. For example, do we survive death? Is there a future life? That is to me a religious question. And I do not think that our faith in a future life can be proved from science, nor do I think it can be disproved by science. Equally, if we say God created the world, we are making a religious statement that in my view cannot be proved or disproved by science. So religion and science are both pursuing truth but on different levels and by different methods.

Are there Any Principles or Examples in the Judeo-Christian Tradition Which Point to the Uses and Abuses of Technology?

One precious element in the Judeo-Christian tradition is respect for the human person. We believe as Christians that every person is of infinite value in God's sight. Each person is unique. God expects from each one of us something that he doesn't expect from anyone else. We are not just repetitive stereotypes. We are each made in the image and likeness of God, and we realize that likeness and image, each in our own way. Humans are unique basically because we possess freedom. Therefore we make choices. And these choices which are personal to each one of us determine what kind of person we are. Now, any technology which diminishes our personhood, which degrades us as humans, this I see as wrong. For example, to interfere with people's brains by medical experimentation, I would see as wrong. Medicine that aims to enable our bodies and our minds to function correctly, that clearly I would see as good. But experiments that have been done by different governments in the 20th century, whether by Communism or in Nazi Germany, that I would see as an abuse of technology because it does not show proper respect for the integrity of the human person. So this would be my great test - how far technology is undermining our personhood? Clearly our freedom has to be limited because we have to respect the freedom of other people. And therefore, much of politics consists of a delicate balancing of one freedom against another. But technology should be used always to enhance our freedom, not to obliterate it.

How did the Ancient World Generally Understand and Practice Technology?

Interpreting technology in the broadest possible sense, I would consider that you cannot have a civilized human life without some form of technology. If you choose to live in a house that you have built yourself or somebody else has built for you, instead of living in a cave, already that implies a use of technology. If you wear clothes woven of linen, instead of sheepskins or goatskins, that again is a use of technology. In that sense, technology is not something modern, it came into existence as soon as people began using fire and cooking meals for themselves, for example. Clearly, the amount of technology that existed in the ancient world was far less than what we have today. And most of the technological changes have come, I suppose, in the last 200 years: the ability to travel by railway, by car, and then by plane; the ability to use telephones and now to communicate through the Internet. All of this is a modern development. Therefore we have an elaboration of technology, far greater than ever existed in the ancient world. That brings both advantages and risks. We can travel easily and communicate by all kinds of new means. This in itself gives us the opportunity to do far more, but the advantages are not automatic. Always it is a question of how we use technology. Why do we travel quickly from place to place? What is our aim? When we communicate with the Internet, what is it that we are wishing to communicate to one another? So value judgements come in as to how we use technology. That we should use it seems to me fully in accordance with Christian tradition. But the more complex technology becomes, the more we can do through technology, the more questions are raised whether it is right to do these things. So we have a greater responsibility than ever people had in the ancient world, and we are seeing the dangers of misuse of our technology, in for example the pollution of the environment. For the most part the ecological crisis is due to the wrong use of our technological skills. We should not give up using those skills, but we do need to think much more carefully how and why we are using them.

In What Ways has Technology Impacted Upon Our Practice of Religion? Is there Anything Positive Coming from this?

One positive gain from technology is clearly the greater ease by which we can communicate. We can share our ideas far more readily. A huge advance came in the fifteenth century with the invention of printing. You no longer had to write everything out by hand; you could produce things in thousands of copies. And now of course a whole revolution has come in through the use of computers, which again renders communication far easier. But once more we are challenged: we are given greater power through these technological advances, but how are we going to use this power? We possess today a knowledge that earlier generations did not possess: quantitative information, technological and scientific facts that earlier ages did not have. But though we have greater knowledge today, it is a question whether we have greater wisdom. Wisdom goes beyond knowledge, and the right use of knowledge has become much more difficult. To give an example from bioethics: We can now interfere in the processes of birth in a way that was not possible in the past. I am by no means an expert here, but I am told that it is possible or soon will be for parents to choose the sex of their children. But we have to ask: Is it desirable? Is it right, from a Christian point of view, that we should interfere in the mystery of birth in this way? My answer is that parents should not be allowed to choose the sex of their child. This is going beyond our proper human responsibility. This is something that we should leave in the hands of God, and I fear that there could be grave social problems if we started choosing whether we would have sons or daughters. There are societies where girls are regarded as inferior, and in due course there might arise a grave imbalance between the sexes. That is just one illustration of how technology makes things possible, but we as Christians on the basis of the teaching of the church have certain moral standards, which say this is possible but is not the right thing to do. Technology in itself, indeed science in itself, cannot tell us what is right or wrong. We go beyond technology, and beyond the strict methods of science, when we begin to express value judgements. And where do our values come from? They come from our religious belief.

How are We to Understand the Idea of Being Created in the “Image and Likeness” of God in the Pursuit of the Highest Levels and Trajectories of Technology?

There is no single interpretation in the Christian tradition of what is meant by the creation of the human person according to the image and likeness of God. But a very widespread approach, found for example among many of the Greek fathers, is to make a distinction between these two terms. Image on this approach denotes the basic human faculties that we are given; those things which make us to be human beings, the capacities that are conferred on every human. The image is as it were, our starting point, the initial equipment that we are all of us given. The likeness is seen as the end point. The likeness means the human person in communion with God, living a life of holiness. Likeness means sanctity. The true human being on this approach is the saint. We humans, then, are travellers, pilgrims, on a journey from the image to the likeness. We should think of the human nature in dynamic terms. Fundamental to our personhood is the element of growth. Now, the image then means that we possess the power of rational thinking, the power of speech, articulate language with which we can communicate with others; it means therefore reason in the broadest sense. More fundamentally, it means that we humans have a conscience, a sense of right or wrong, that we make moral decisions. Most fundamentally of all, the image means that we humans have God-awareness, the possibility to relate to God, to enter into communion with him through prayer. And this to me is the basic meaning of the image, that we humans are created to relate to God. There is a direction, an orientation in our humanness. We are not simply autonomous. The human being considered without any relationship to God is not truly human. Without God we are only subhuman. So the image gives us the potentiality to be in communion with God, and that is our true nature. We are created to live in fellowship and in communion with God the Creator. So the image means you cannot consider human beings simply in isolation, as self-contained and self-dependent but you have to look at our relationship with God. Only then will you understand what it is to be human.

At What Point Would Theologians or Ethicists Reckon we Have Crossed the Line from Responsible Innovation and Scientific Enquiry over into “Hubris”?

As a Christian theologian, I would not wish to impose, as if from a higher authority, limits on scientific enquiry. As I said earlier, God has given us the power to understand the world around us. All truth comes from him. Christ is present in scientific enquiry, even if his name is not mentioned. Therefore, I do not seek in a theoretical way to say to the scientist: Thus far and no further. The scientist, using the methods of enquiry that he has developed, should continue his work unimpeded. One cannot say that any subject is forbidden for us to look at. But there is then the question: how do we apply our scientific knowledge? Hubris comes in when scientists go beyond their proper discipline and try to dictate how we are to live our lives. Morality does not depend solely on scientific facts. We get our values, if we are Christians, from our faith. Modern science is an honest enquiry into the truth. So long as that is the case, we should say to the scientist: please continue with your work. You are not talking about God, but God is present in what you are doing, whether you recognize that or not. Hubris comes in when the scientist thinks he can answer all the questions about human life. Hubris comes in when we think we can simply develop our technology without enquiring: is this a good or bad application of science?

Is that Well-Known Story of the Tower of Babel from the Book of Genesis 11:1-9 at all Relevant with its Dual Reference to “Hubris” and “Engineering”?

Yes, that is an interesting way of looking at the story of the Tower of Babel. The story of the Tower of Babel is basically a way of trying to understand why it is that we humans speak so many different languages and find such difficulty in communicating with one another. But underlying the story of Babel exactly is an overconfidence in our human powers. In the story of the Tower of Babel, the people think that they can build a tower that will reach from earth to heaven. By the power of engineering they think they can bridge the gap between the human and the divine. And this exactly would be attributing to technology, to our faculty for engineering, something that lies beyond technology and beyond engineering. Once you are moving from the realm of factual reality to the realm of heaven, then you are moving into a different realm where we no longer depend simply on our own powers of enquiry and our own ability to apply science. So exactly, the story of the Tower of Babel is a story of humans thinking they have unlimited power, and particularly an unlimited power to unite the earthly with the heavenly, whereas such unity can only come through a recognition of our dependence on God.

Why Cannot or Should We Not Explore and Innovate, and Go as Far as is Humanly Possible with Respect to Innovation, if We Carry the Seed of God's Creative Genius within Us?

Yes, we carry the seed of God's creative genius within us, but on the Christian world view we humans are fallen beings and we live in a fallen world. Now, how the fall is interpreted in Christian tradition can vary, but underlying all understandings of the fall is the idea that the world that we live in has in some way or another gone wrong. There is a tragic discrepancy between God's purpose and our present situation. As fallen human beings, therefore, we have to submit our projects to the judgement of God. We have to ask, not only whether this is possible but whether this is in accordance with the will of God. That obviously is not a scientific or technological question. It is not a question of what is possible but of what is right. Of course, it is true that many people do not believe in God, and therefore would not accept what I just said about this being a fallen world. Nevertheless they too, even those who have no belief in God, have to apply a moral understanding to science and technology. I hope they would do this by reflecting on the meaning of what it is to be human, on the value of personhood. And I believe that in this field it is possible, for Christians and non-Christians, for believers and unbelievers, to find a large measure of common ground. At the same time, we cannot fully understand our limitations as fallen human beings without reference to our faith. So the cooperation with the non-believer only extends to a certain limited degree.

Can a Particular Technology, for Instance Hardware or Software, be Viewed as Being “Immoral”?

One answer might be to say technology is not in itself moral or immoral. Technology simply tells us what is possible for us to do. Therefore, it is the use we make of technology that brings us to the question of whether a thing is moral or immoral. On the other hand, I would want to go further than that, to say that certain forms of technology might in themselves involve a misuse of humans or animals. I have grave reservations, for example, about experiments on animals by dissection. Many of the things that are done in this field fail to show a proper respect for the animals as God's creation. So, it is not perhaps just the application of technology that can be wrong but the actual technology itself, if it involves a wrong use of living creatures, humans or animals. Again, a technology that involves widespread destruction of natural resources, that pollutes the world round us, that too, I would say in itself is wrong, regardless of what this technology is being used for. Often it must be a question of balancing one thing against another. All technology is going to affect people, one way or another. But there comes a point where the effect is unacceptable because it is making this world more difficult for other humans to live in. It is making the world unsuitable for future generations to survive within. Thus, one cannot make a sharp distinction between the technology in itself and how we apply it. Perhaps the technology itself may involve a wrongful use of humans, animals, or natural things; wrongful because it makes the world somehow less pleasant and less healthy for us to live in.

Is Religious Faith in Any Way Threatened by Technology?

If we adopt a scientific approach that assumes humans are simply elaborate machines, and if we develop technologies which work on that basis, I do think that is a threat to our religious faith, because of my belief in the dignity and value of the human person. We are not simply machines. We have been given free will. We have the possibility to communicate with God. So in assuming that the human being is merely a machine, we are going far beyond the actual facts of science, far beyond the empirical application of technology, since this is an assumption with deep religious implications. Thus there can be conflict when science and technology go beyond their proper limits, and when they do not show respect for our personhood.

Can Technology Itself become the New Religion in its Quest for Singularitarianism - the Belief in a Technological Singularity, where we are Ultimately Becoming Machines?

Yes. If we assume that science and technology, taken together, can answer every question and solve every problem, that would be making them into a new religion, and a religion that I reject. But science and technology do not have to take that path. As before, I would emphasize we have to respect certain limits, and these limits do not come simply from science or technology. We have, that is to say, to respect certain limits on our human action. We can, for example, by technology, bring people's lives to an end. Indeed, today increasingly we hear arguments to justify euthanasia. I am not at all happy about that as a Christian. I believe that our life is in God's hands and we should not decide when to end it, still less should we decide when to end other people's lives. Here, then, is a very obvious use of technology, of medical knowledge, where I feel we are overstepping the proper limits because we are taking into our hands that which essentially belongs to God.

Can You Comment on the Modern Day Quest toward Transhumanism or what is Now Referred to as Posthumanism?

I do not know exactly what is meant by posthumanism. I see the human person as the crown and fulfilment of God's creation. Humans have uniqueness because they alone are made in the image and likeness of God. Could there be a further development in the process of evolution, whereby some living being would come into existence, that was created but on a higher level than us humans? This is a question that we cannot really answer. But from the religious point of view, speaking in terms of my faith as a Christian, I find it difficult to accept the idea that human beings might be transcended by some new kind of living creature. I note that in our Christian tradition we believe that God has become human in Christ Jesus. The second person of the Trinity entered into our human life by taking up the fullness of our human nature into Himself. I see the incarnation as a kind of limit that we cannot surpass and that will not be superseded. And so I do not find it helpful to speculate about anything beyond our human life as we have it now. But we are not omniscient. All I would say is that it will get us nowhere if we try to speculate about something that would transcend human nature. The only way we can transcend human nature is by entering ever more fully into communion with God, but we do not thereby cease to be human. Whether God has further plans of which we know nothing, we cannot say. I can only say that, within the perspective of human life as we know it, I cannot see the possibility of going beyond the incarnation of Christ.

Is Human Enhancement or Bodily Amplification an Acceptable Practice When Considered against Medical Prosthesis?

Human enhancement and bodily amplification are acceptable if their purpose is to enable our human personhood to function in a true and balanced way, but if we use them to make us into something different from what we truly are, then surely they are not. Of course that raises the question of what we truly are. Here the answer, as I have already said, comes not from science but from our religious faith.

What if Consciousness Could Ever be Downloaded through Concepts Such as “Brain in a Vat”?

[Sigh]. I become deeply uneasy when such things are suggested, basically because it undermines the fullness of our personhood. Anything that degrades living persons into impersonal machines is surely to be rejected and opposed.

In the Opposite Vein, What if Machines Were to Achieve Fully Fledged Artificial Intelligence through Advancement?

When I spoke of what it means for humans to be created in God's image, I mentioned as the deepest aspect of this that we have God-awareness. There is as it were in our human nature a God-shaped hole which only He can fill. Now perhaps robots, automatic machines, can solve intellectual problems, can develop methods of rational thought, but do such machines have a sense of right or wrong? Still more, do such machines have an awareness of God? I think not.

What is so Unique about Our Spirit Which We Cannot Imbue Or Suggest into Future Humanoid Machines?

The uniqueness of the human person for me is closely linked with our possession of a sense of awe and wonder, a sense of the sacred, a sense of the divine presence. As human beings we have an impulse within us that leads us to pray. Indeed, prayer is our true nature as humans. Only in prayer do we become fully ourselves. And to the qualities that I just mentioned, awe, wonder, a sense of the sacred, I would add a sense of love. Through loving other humans, through loving the animals, and loving God, we become ourselves, we become truly human. Without love we are not human. Now, a machine however subtle does not feel love, does not pray, does not have a sense of the sacred, a sense of awe and wonder. To me these are human qualities that no machine, however elaborate, would be able to reproduce. You may love your computer but your computer does not love you.

Where Does Christianity Stand on Organ Donation and Matters Related to Human Transplantation? Are there Any Guidelines in the Bioethics Domain?

In assessing such questions as organ donations, heart transplants, and the like, my criterion is: do these interventions help the human person in question to lead a full and balanced human life? If organ transplants and the like enhance our life, enable us to be more fully ourselves, to function properly as human beings, then I consider that these interventions are justified. So, the question basically is: is the intervention life enhancing?

That would bring me to another point. As Christians we see this life as a preparation for the life beyond death. We believe that the life after death will be far fuller and far more wonderful than our life is at present. We believe that all that is best in our human experience, as we now know it, will be reaffirmed on a far higher level after our death. Since the present life is in this way a preparation for a life that is fuller and more authentic, then our aim as Christians is not simply to prolong life as long as we can.

Can You Comment on One's Choice to Sustain Life through the Use of Modern Medical Processes?

The question therefore arises about the quality of life that we secure through these medical processes. For example, I recall when my grandmother was 96, the doctors suggested that various things could be done to continue to keep her alive. I asked how much longer will they keep her alive and the answer was, well perhaps a few months, perhaps a year. And when I discovered that this meant that she would always have various machines inserted into her that would be pumping things into her, I felt this is not the quality of life that I wish her to have. She had lived for 96 years. She had lived a full and active life. I felt, should she not be allowed to die in peace without all this machinery interfering in her. If on the other hand, it were a question of an organ transplant, that I could give to somebody who was half her age, and if that afforded a prospect that they might live for many years to come, with a full and active existence, then that would be very different. So my question would always be, not just the prolonging of life but the quality of the life that would be so prolonged. I do not, however, see any basic religious objection to organ transplants, even to heart transplants. As long as the personality is not being basically tampered with, I see a place for these operations. Do we wish to accept such transplants? That is a personal decision which each one is entitled to make.

ACKNOWLEDGMENT

This interview transcript has previously been published by the University of Wollongong, Australia.

IEEE Keywords: Interviews, Cognition, Education, Standards, Internet, Technological innovation, Ethics

Citation: M.G. Michael, Katina Michael, "Religion, Science, and Technology: An Interview with Metropolitan Kallistos Ware", IEEE Technology and Society Magazine, 2017, Volume: 36, Issue: 1, pp. 20 - 26, DOI: 10.1109/MTS.2017.2654283.

Matt Beard of Ethics Centre Interviews Katina Michael

Matthew Beard, Fellow of the Ethics Centre

Dr Matt Beard is a moral philosopher with an academic background in both applied and military ethics. Matt is a Fellow at The Ethics Centre, undertaking research into ethical principles for technology. Matt has taught philosophy and ethics at university level for several years, during which time he has also published widely in academic journals and book chapters, and spoken at a number of international conferences. His work has mainly focussed on military ethics, a topic on which he has advised the Australian Army. He has also published academic pieces on torture, cyberwar, medical ethics, weaponising space, sacrifice and the psychological impacts of war on veterans. In 2016, Matt won the Australasian Association of Philosophy prize for media engagement, recognising his “prolific contribution to public philosophy”. He regularly appears to discuss a range of ethical issues on television, radio, online and in print. He is also a columnist with New Philosopher magazine and a podcaster on the ABC’s Short & Curly, an award-winning children’s podcast aimed at getting families to engage with ethics in a fun and accessible way. Matt is an experienced speaker, writer and educator who brings enthusiasm, rigour and accessibility to his research, speaking and media engagements.

A written questionnaire was answered by Katina Michael on September 28, 2016.

Q&A With Ethics Centre: Fitness Trackers

1. Can you envision any issues associated with health insurers offering wearable technology and the possibility of lower premiums to their customers?

  • Health insurance is a big business. High-tech companies like Samsung have already diversified into this vertical market, making them one of the world’s leading health insurers. Essentially, consumers who opt into a health program using smartphones or wearables like this are ushering in so-named “surveillance for care”, which still has “surveillance for control” as an underlying element. In essence these consumers are trading some of their fundamental freedoms for lower premiums. Is this the new “price of health insurance”? A major chunk of your personal information?
  • Wearable technologies are also transferable. There is no telling for certain who is wearing the monitoring device, although over time, even over a space of two weeks, behavioural biometrics can determine who the actual wearer is at any given time from heart rates, pulse rates, stress levels and more (a minimal sketch of this idea appears after this list). In the not too distant future, disputes could only be settled by means of an implantable sensor device that could not be removed and could determine the wearer with some certainty, despite the pitfalls of device cloning and the like.
  • Having witnessed what has happened in the car insurance industry, we can also learn a great deal. In these scenarios, companies like Norwich Union launched services where constraints were identified regarding curfews, for example for male drivers under 25 years of age. These programs incentivise people to do the right thing, reducing the incidence of accidents during late-night driving, but in no way guarantee that the driver is better off in the longer run. The question is what happens with respect to insurance claims made by consumers who fall short of the “lower premium” standard thresholds of usage, be it the “number of steps”, the “time spent” exercising, the “calories burned daily” or even oxygen saturation levels? If you opt for a fitbit quantified-self style program, what happens if you (1) don’t wear the fitbit daily, or (2) have a poor track record of health for personal reasons of any type (e.g. being the primary carer of a child with autism or Down syndrome)? Might this make you less insurable across programs in the future with other health insurance suppliers? It almost takes on a “survival of the fittest” attitude, which discriminates against different members of society at the outset.
  • What is the motivation behind these kinds of programs for health insurers? Surely it is not because they feel good about reducing health premiums for their customers? How will this intimate behavioural data be used: in unrelated events, perhaps even in contradiction to advice provided by doctors? There are many cases where people have posted data about themselves on social media, for instance, that has rendered their medical leave void despite legitimate reasons for that leave.
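
To make the point in the second bullet concrete, the following is a minimal, purely illustrative sketch (in Python) of how even crude windowed heart-rate statistics could separate two wearers of a shared device. The 1 Hz heart-rate streams, the wearer names and the nearest-centroid comparison are all assumptions for illustration; real behavioural biometrics would draw on far richer signals (accelerometry, gait, heart-rate variability) and properly validated models.

```python
# Illustrative sketch only: distinguishing who is actually wearing a shared
# fitness tracker from windowed heart-rate statistics. Real behavioural
# biometrics would use richer signals and far more careful modelling.
import numpy as np

def window_features(heart_rate, window=60):
    """Split a 1 Hz heart-rate stream into windows and summarise each one."""
    n = len(heart_rate) // window
    feats = []
    for i in range(n):
        w = heart_rate[i * window:(i + 1) * window]
        feats.append([w.mean(), w.std(), np.percentile(w, 90) - np.percentile(w, 10)])
    return np.array(feats)

def enrol(profiles):
    """Build a per-person template (centroid of that person's feature windows)."""
    return {name: window_features(hr).mean(axis=0) for name, hr in profiles.items()}

def identify(templates, heart_rate):
    """Guess which enrolled person a new heart-rate segment belongs to."""
    f = window_features(heart_rate).mean(axis=0)
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - f))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "two weeks" of data for two hypothetical wearers with
    # different resting rates and variability.
    alice = rng.normal(62, 4, 14 * 24 * 3600 // 100)  # calmer, lower resting HR
    bob = rng.normal(78, 9, 14 * 24 * 3600 // 100)    # higher, more variable HR
    templates = enrol({"alice": alice, "bob": bob})
    probe = rng.normal(77, 9, 3600)                   # an hour of unlabelled wear
    print(identify(templates, probe))                 # expected: "bob"
```

Even this crude nearest-centroid comparison separates two people with different resting heart rates; the point is simply that “who is wearing the device” is recoverable from the data stream itself.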

2. Do these issues outweigh the advantages? Can you see any advantages?

  • In an ideal world we might deem that the advantages far outweigh concerns over bodily and psychological privacy, surveillance, autonomy and human rights. In the end, most people say they are privacy conscious but will still take a loyalty card if asked at the checkout, so long as a discount ensues.
  • In an ideal world, sure, I can see advantages to getting healthier and fitter, and being more routine-based about calorie intake and calorie burning. There are many people who use their fitbits or other smartphone apps to make sure they are doing the minimum level of exercise each day. That cannot be a bad thing if you are in control of your own data and stats.
  • My grave concern over these types of behavioural biometric apps is that the data gathered is used to further exploit the end user. “On the one hand, here is a nice discount because you are such a great example of health, and on the other hand, now that I know your general behaviours about every day life, I can further onsell other services to you that I know you’ll need and use.”
  • Once you lose your privacy, you have forgone basic freedoms: you lose that decision-making power to say “hey, today it is pelting down rain, I don’t feel like going out for my daily walk”.
  • There are some wearers who will also find themselves misusing the data collected, whether by pushing the boundaries of how many steps they can do in a working day, or by competing and benchmarking themselves against others in like groups.
  • Most people are willing to give away personal information for anything “so-named” that is “free”, but the reality is that nothing is free, and the discount you might get is being paid for some other way, most likely through the minute-to-minute dataset that is onsold to a third party for completely different uses.

3. Would insurers have an ethical obligation to inform users if they detected medically significant anomalies in the data they collect?

  • If health insurers recognise an anomaly in the data set gathered, because they are cross-matching that data with other clinical outcomes or assessments, then yes, they would need to inform their customer (a rough sketch of this kind of automated flagging appears after this list).
  • However, once a medical condition is observed, it will be recorded against a health record, and it is possible that a “predisposition” to x or y may well rule out that sick individual from any future form of health insurance. A number of women in the USA have found themselves in this predicament with the change of health policy during the Obama Administration, and have been left with very limited public health care which hardly helps them to address their diagnosis.
  • Today, in Australia, sufferers have a right to opt out of telling their health insurer that they have a diagnosed “syndrome” as this could affect their long-term health insurance capacity and coverage. Their syndrome would not be detectable by a fitbit style device, but has been medically diagnosed via a DNA test.
  • The other issue that comes to mind is whether or not “big data” will have a role to play in providing “best estimates” of a person carrying a particular type of illness into the future. Data from the fitbit device might be linked to heart rate data showing the potential for stroke and more. For some people, the news that they are “at risk” is more of a trigger for a stroke or heart attack than continuing to lead the happy and carefree lifestyle they already have. I know many people who would become incredibly depressed on being informed by a health insurer that if they don’t change their behaviours they will likely die prematurely by 20 years or so. It’s certainly a complex issue and not as straightforward as some think. These are choices that people have to make.
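
To make the “anomaly” discussion above concrete, the following is a minimal sketch (in Python) of the kind of automated flagging an insurer or device vendor might run over a daily resting-heart-rate feed: each new reading is compared against a rolling personal baseline and flagged when it deviates sharply. The window size, the z-score threshold and the synthetic readings are arbitrary assumptions for illustration, not clinical criteria.

```python
# Illustrative sketch only: flag readings that deviate sharply from a rolling
# personal baseline. Thresholds and window sizes are arbitrary, not clinical.
from collections import deque
from statistics import mean, stdev

def make_anomaly_flagger(window=30, z_threshold=3.0):
    history = deque(maxlen=window)

    def check(reading):
        """Return True if `reading` deviates sharply from the rolling baseline."""
        if len(history) >= 5:  # need a minimal baseline before judging
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(reading - mu) / sigma > z_threshold
        else:
            anomalous = False
        history.append(reading)
        return anomalous

    return check

if __name__ == "__main__":
    check = make_anomaly_flagger()
    readings = [58, 60, 59, 61, 57, 60, 62, 59, 95]  # last value is a sudden spike
    print([check(r) for r in readings])  # only the spike should be flagged
```

Whether a flag raised by such a rule creates an obligation to inform the customer is precisely the ethical question posed above; the code only shows how easily such flags can be generated.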

4. Are there any ethical limits to the ways the collected data could be used?

  • Anything that places an individual in a worse situation than they are in already, whatever that context is, is unethical to begin with.
  • There is a well-known case of a woman who posted her fitbit analysis of a sexual encounter for all to see on various social media channels. The act was not well received by most readers, who called for her to take down the data, saying it was in poor taste, that she had acted in an improper and unethical manner toward her partner in using these personal statistics, and that she had reduced the most sacred of acts down to a “quantified-self” graph.
  • There are some things that should just not be public knowledge outside the self (be it to a health insurer or the general public), and more and more of this personal space is being eroded because of the myriad of sensors being packed into our smart devices. In some ways these smart devices are too smart for their own good, especially when they are internetworked, allowing for benchmarking to take place against control groups or other like groups.
  • There is a lack of transparency and education regarding fitbit capabilities in the general public. In the wrong hands this data could also be used to harm people.
  • I fully understand the issue of collective awareness. The more individual citizens/consumers pool their data together, the more we can try to identify anomalies, outliers, and learn more about the human body. This is an honourable aim. But the realist in me says, that this will disadvantage greatly those of our society who are disabled and live life bound to a wheelchair, suffer from mental illness or depression and simply find it difficult to participate in daily activities, the elderly, and other minority groups who find being “tracked” abhorrent in any shape or form for any reason (e.g. aboriginal communities).
  • I think minors should be exempt from such schemes, though very often, health insurance is a family product, so at what point do we say wearables or smartphones for minors are not required, even if adults opt-in to the specific health program?

5. Is there a limit to the types of health behaviour that should be collected (mood, menstrual cycle, food consumption, pregnancy, sexual activity)?

  • I think people should be allowed to track what they want. It is very important that individuals can choose which kinds of data they want to collect. For example, women who wish to better plan ahead for activities during their menstrual cycle, or in order to fall pregnant, should be able to keep that data in a calendar-style diary. Women in Canada, for instance, have been lobbying for such an option to track their cycles on the iPhone, but to no avail. Oftentimes, professional women require such a reminder to track periods of ovulation and more. This is becoming especially critical as more women are deciding to have children later in life, valuing study and career development opportunities during their early to mid 30s. Fertility specialists request the tracking of fine-grained data when it comes to couples successfully falling pregnant, but most people do not track this information via pen and paper; they might well add to the data collected if prompted by an app or wearable device. The device, fitted with a temperature sensor, might provide that opportunity (a sketch of a local-only approach to such tracking appears after this list).
  • The question that really brings this to the fore is whether or not any sensitive data which is generated by the human body in particular (e.g. mood or menstrual cycle, or sexual activity) should ever be captured and transmitted to a third party, say in the Cloud. At this point I would have to say this data should be limited and accessible only to the customer opting to measure the information for their personal well-being.
  • I can imagine negative scenarios, like a couple seeking fertility treatment rebates from their health insurer, only to be told (1) you haven’t been collecting your data properly on your wearable; and (2) we noted there was no attempt at conceiving a child in January or February, so we cannot pay you for your June IVF treatment.
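
As an illustration of keeping such sensitive data “limited and accessible only to the customer”, the following is a minimal sketch (in Python) of a purely local, on-device log of basal body temperature readings with a very crude ovulation heuristic. The file location, the record fields and the 0.3 °C temperature-rise rule are assumptions for illustration only, not medical guidance, and none of this reflects any actual product’s API.

```python
# Illustrative sketch only: keep sensitive cycle and temperature readings in a
# local file the user controls, rather than transmitting them to a third party.
# File name, record fields and the rise heuristic are hypothetical.
import json
from datetime import date
from pathlib import Path

LOG_PATH = Path.home() / ".cycle_log.json"  # hypothetical local-only store

def load_log():
    return json.loads(LOG_PATH.read_text()) if LOG_PATH.exists() else []

def add_reading(basal_temp_c, note=""):
    """Append today's basal body temperature (and an optional note) locally."""
    log = load_log()
    log.append({"date": date.today().isoformat(),
                "basal_temp_c": basal_temp_c,
                "note": note})
    LOG_PATH.write_text(json.dumps(log, indent=2))

def likely_ovulation_window(log, rise_c=0.3):
    """Very crude heuristic: flag dates where the temperature rose by about
    0.3 °C over the running average of the preceding readings."""
    flagged = []
    for i in range(3, len(log)):
        baseline = sum(r["basal_temp_c"] for r in log[:i]) / i
        if log[i]["basal_temp_c"] - baseline >= rise_c:
            flagged.append(log[i]["date"])
    return flagged

if __name__ == "__main__":
    add_reading(36.4)
    print(likely_ovulation_window(load_log()))
```

The design point is simply that the readings never leave a file the user controls; any sharing with a clinician or an insurer would then be a deliberate, separate act.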

6. Do you think today’s technology serves as a substitute/proxy for human virtue (fit trackers as a substitute for self control and motivation, for instance)? If so, is this a moral problem or an opportunity?

  • The act of self-awareness and reflection is a wonderful opportunity. The ancients would carry a little notebook tied to their garment with some thread and write their thoughts down. There is nothing wrong with recording information down through digital means, save for obvious breaches in security or privacy that may eventuate if the data got into the wrong hands.
  • Yet, the unhealthy trend we fall into is thinking that the technology will solve all our problems. It is well known that the novelty effect wears off after the first few days, or weeks, of usage. People quite often will forget they are wearing a device that is linked somehow to their health insurer, as autonomy begins to override the human condition, even at the cost of a higher insurance premium. Some people, of course, would be more regulated and driven than others with respect to this monetary incentive. But what happens if we record “our true state” of health, which will likely not be perfect and continuous given what life throws our way? Surely the health insurer will need to apply the law of averages to this? And what are the implications of that?
  • The virtue of self-control and motivation, which is really a qualitative aspect of oneself despite its tendency towards frequency, is indeed being quantified. Self-control has its depth in spiritual, philosophical and ideological positions, not in fitbit devices. If we say it is an opportunity for us as humans to tie ourselves to a techno-fied statute of limits, then next we will likely be advised who we should marry, how often we should have sex, whether or not we are worthy to have children (because we carry a, b or c defective gene), and whether we can be employed in certain professions (because we stress too easily, or become anxious). This kind of world was very much observed in This Perfect Day, a novel by Ira Levin, which was later analysed by Jeremy Pitt (ed.) in This Pervasive Day, published by Imperial College Press.

7. Anything else to add?

  • Quantifying things in a digital manner can help us to make real changes in the physical world. However, I would not like personal information to come under the scrutiny of anyone but my own self. I certainly would not wish to equip a health provider with this data, because there will inevitably be secondary uses (mostly retrospective) of that health data to which the customer has not explicitly consented, nor would they be fully aware of the implications of making that data available, beyond the lowering of a premium.
  • My concerns are with inaccurate recordings in the data, which have already been demonstrated against fitbit (due to multiple users on a given device, and limits to sensor accuracy in very fit people), with uncontextualised readings of data, and with the implications for those in lower socio-economic demographic groups especially; all of this would only lead us further down an uberveillant trajectory.
  • Most importantly, there have already been legal cases in which fitbit data was used to establish someone’s innocence or guilt. Quite possibly, these wearable devices might end up contradicting circumstantial evidence and eyewitness accounts. Uberveillance allows for misinformation, misinterpretation, and information manipulation.
  • Fundamentally, strapping devices to our bodies, or implanting ourselves with sensor chips, is a breach of human rights. At present, it is persons on extended supervision orders (ESOs) who need to wear anklet or other bracelet devices. In essence, we are carrying the criminality connotations of such wearers into a health-focused scenario. In a top-down approach, health providers are now asking us to wear devices to give us a health rating (bodily and psychological), and this can only mean a great deal of negative and unintended consequences. What might have started as a great idea to get people moving and healthier, living a longer life with a better quality of life, might end up triggering crises in people as they play to a global theatre (namely their health insurer, and whomever else is watching).