In-depth interviews on the topic of...

Carly Burns of UOW Interviews Katina Michael

1. What are you working on in 2018?

Always working on lots and lots of things at once.

Carly Burns, UOW Research

  • Socio-ethical approaches to robotics and artificial intelligence development and corresponding implications for humans
  • Tangible and intangible risks associated with bio-implantables for the medical and non-medical fields
  • Ongoing two-factor authentication requirements despite the aggressive rollout of biometrics (especially facial recognition and behavioural systems)
  • Wearable cameras and corresponding visual analytics and augmented reality capabilities in law enforcement
  • Blockchain registers and everyday transactional data flows in finance, education, and health
  • Social media pitfalls and technology dependencies, screens and addictions
  • Unauthorised and covert tracking technologies, location information sharing, crowdsourcing and notions of transparency
  • Defining uberveillance at the operational layer with respect to the internet of things and human rights
  • At the heart of my research is the interplay of engineering, law, policy and society.

2. In regard to your field of study or expertise, what are some of the most innovative or exciting things emerging over the next few years?

  • Exoskeletons in humans, transputation (humans opting for non-human parts), and the ability to do things that were once considered ‘superhuman’ (e.g. carrying 2-3 times one’s body weight, or extending human height through artificial limbs).
  • Brain-to-computer interfaces to help the disabled with basic accessibility of communications and everyday fundamental necessities (e.g. feeding oneself). However, breakthroughs in this space will quickly be adopted by industry for applications in a variety of areas, with the primary focus being entertainment and search services.
  • Smart drug delivery (embedded, swallowable or injectable) pill and chip solutions that allow remote health systems to monitor your drug-taking behaviours, daily exercise routines, and wander/fall-down alerts.
  • An electronic pacemaker the size of an AAA battery (or smaller) acting as the hub for body area networks, akin to a CPU in a computer, allowing for precision medicine and read-write rights to a world built on the Internet of Things ideology.
  • Personal AI services: consider this the rise of a new kind of personal Internet. Services that will be able to gather content and provide thought-specific data when you need it. Your life as one long reality-TV episode: captured, ready for playback in visual or audio, adhering to private-public space differentials. Captured memories and spoken words will be admissible evidence in a future e-court, but also available for new job opportunities. The tacit becomes capturable and can help you get your next job.

3. In regard to your field of study or expertise, what are some of the things readers should be cautious/wary of over the next few years?

  • The technology we are being promised will get very personal and trespass on privacy rights. Whereas in Orwell’s 1984 we were assured that at least the contents of our brains were private, today behavioural biometrics, alongside detailed transactional data, can provide a proactive profile of everyday consumers. Retaining anonymity is difficult, some would say near impossible. We have surveillance cameras, smartphones and watches that track our every movement, smart TVs that watch us in our homes, IoT devices that do motion detection and human activity monitoring in private spaces, and social media with the capacity to store instantaneous thoughts, images and multimedia across contexts. This loss of privacy will have psychological impacts and fallout, whether in increasing rates of mental illness, or in the room we require to develop as human beings, the right to learn from our mistakes and reflect on them in private. Humans are increasingly becoming decorporealised. We are fast becoming bits and bytes. Companies no longer see us as holistic customers, but as pieces of transactional data, as we are socially sorted based on our capacity to part with our dollar and the influence we have on our peer groups.
  • The paperless/cashless paradigm is gathering momentum. It has many benefits for organisations and government, and especially for our environment. But it has major implications for auditability, so-called transparency, the potential for corrupt practices instituted by skilled security hackers, and the need for traceability. Organisational workflows that go paperless will place increasing pressure on staff and administration, triggering a workplace of mere compliance (and box-ticking) as opposed to real thinking and assurance. The cashless part will lead to implicit controls on how money is spent by minority groups (e.g. the disabled, pensioners, the unemployed). This will no doubt impact human freedom, and fundamentally the right to choose.
  • Over-reliance on wearable and implantable technologies for a host of convenience, care and control solutions. Technology will provide a false sense of security and impact fundamental values of trust in human relationships. More technology does not mean a better life; for some it will mean a dysfunctional life as they wrestle with what it means to be human.
  • It is questionable whether living longer means we age better. Because we are living longer, illnesses like dementia and cancer are increasing at an increasing rate. How do we cope with this burden when it comes to aged care? Send in the robots?
  • We have already seen a robot (e.g. Sophia) recognised as a citizen of Saudi Arabia before fundamental women’s rights have been conclusively recognised in the same state. Robots and their proposed so-called sentience will likely receive special benefits that humans do not possess. Learning to live with these emerging paradigms will take some getting used to: new laws, new policies, new business models. Do robots have rights? And if so, do they supersede human rights? What will happen when “machines start to think” and make decisions (e.g. driverless cars)?

4. Where do you believe major opportunities lie for youth thinking about future career options?

  • This is pretty simple, although I am biased: it is “all things digital”. If I were doing a degree today, I would be heading into biomedical engineering, neuroethics and cybersecurity. On the flip side of this, I see the huge importance of young people thinking about social services in the very “human” sense. While we are experimenting with brain implants for a variety of illnesses, including the treatment of major depressive disorder, and DNA and brain scanning technologies for early detection, I would say the need for counsellors (e.g. genetic) and social workers will only continue to increase. We need health professionals and psychologists and psychiatrists who get “digital” problems: a sense of feeling overwhelmed with workloads, with the speed that data travels (instantaneous communications), and so on. Humans are analog, computers are digital. This crossroad will cause individuals great anxiety. It is a paradox: we have never had it so good in terms of working conditions, and yet we seem to have no end of social welfare and mental health problems in our society.
  • At the same time as the world is advancing in communications, and life expectancy continues to grow in most economic systems, an emphasis on food security, on renewable energy sources that do not create more problems than they solve, and on biodiversity and climate change is much needed. What good is the most advanced and super-networked world if population pressures and food security are not being addressed, alongside rising sea levels that cause significant losses? We should not only be using our computing power to model and predict changes that are inevitable to the geophysical properties of the earth, but to implement longer-term solutions.

5. In regard to your field of expertise, what is the best piece of advice you could offer to our readers?

  • The future is what we make of it. While computers are helping us to translate better and to advance once-remote villages, I advocate for the preservation of culture and language, music and dance and belief systems. In diversity there is richness. Some might feel the things I have spoken about above are hype, others might advocate them as hope, and still others might say this will be their future if they have anything to do with it. Industry and government will dictate continual innovation as being in the best interest of any economy, and I don’t disagree with this basic premise. But innovation for what, and for whom? We seem to be sold the promise of perpetual upgrades on our smartphones, and likely soon our own brains through memory enhancement options. It will be up to consumers to opt out of the latest high-tech gadgetry, and opt in to a sustainable future. We should not be distracted by the development of our own creations, but rather use them to ensure the preservation of our environment and healthier living. Many are calling for a re-evaluation of how we go about our daily lives. Is the goal to live forever on earth? Or is it to live the good life in all its facets? And this has to do with our human values, both collectively and individually.

Patient Feedback in Product Lifecycle Management of Brain Pacemakers

The Need for Patient Feedback in the Product Lifecycle Management of Deep Brain Stimulation Devices

Katina Michael interviews Gary Olhoeft


Professor Emeritus Gary Olhoeft of the Colorado School of Mines

This interview was conducted by Katina Michael with Gary Olhoeft, a deep brain stimulation (DBS) device recipient, on September 8, 2017. Katina is a Professor at the University of Wollongong who has been researching the social implications of implantable devices for the last 20 years, and Gary is a retired Emeritus Professor of Geophysics at the Colorado School of Mines. Gary has previously taught a number of subjects related to Advanced Electrical and Electromagnetic Methods, Antennas, Near Surface Field Methods, Ground Penetrating Radar, and Complex Resistivity. Gary had a deep brain stimulator installed in 2009 to help him combat his Parkinson’s disease. This interview is a participant observer’s first-hand journey into a life dependent on a deep brain stimulator. Of particular interest is the qualified nature of the participant in the field of electromagnetics with respect to his first-hand experience of the benefits, risks and challenges surrounding the device that has been implanted in his body. Katina first came to know of Gary’s work through his open comments in a Gizmodo article in 2017 [i] while looking into the risks associated with biomedical devices in general. Gary has also delivered numerous presentations to the EMR Policy Institute on “Electromagnetic Interference and Medical Implants”, dating back to 2009 [ii]. The interview is broken into two parts.

KATINA MICHAEL: Gary, thank you for your time. We have previously corresponded through two full-length written questionnaires, and now this structured Skype interview. I think within my capacity in the Society on Social Implications of Technology in the IEEE, we might be able to take some of the issues you raise forward. I think, as more people come on board with various brain implants, heart pacemakers and internal diagnostic devices, the Federal Communications Commission (FCC), the Food and Drug Administration (FDA) and the health insurance industry more generally will have to engage with at least some of the issues that you and other biomedical device recipients have identified from your experience.

GARY OLHOEFT: Thank you for the opportunity.

KATINA MICHAEL: So many people who are designing biomedical devices do not actually realise that patients are awake during some of the DBS procedure. I have found, on the engineering side of the design, that many engineers have never witnessed a DBS going into someone’s brain, or at least do not understand the actual process of implantation. I have spoken to biomedical engineers at key academic institutions with major funded brain implant projects who have challenged me about whether or not the patient is actually awake during the process. I do find it bewildering at times that some engineers have never spoken to patients or are so withdrawn from the practical side of biomedical device deployment. Engineers tasked with complex problems sometimes look at solving only a single part of the end-to-end design, without understanding how all the componentry works together.

GARY OLHOEFT: That’s amazing.


GARY OLHOEFT: I was also amazed to talk to the Chief Engineer at Medtronic about the DBS once. He told me the whole thing was entirely built out of discrete components with no integrated circuits because the FDA has not approved any integrated circuits yet.

KATINA MICHAEL: What do you make of this? That the regulations and the regulatory body responsible is holding things up? What is your personal position?

GARY OLHOEFT: Well, I definitely think that the regulatory body is holding things up. Just look at when the first DBS was installed in France in 1987 [iii]. It was something like 14 years before it was made available in the USA in about 2001 with FDA approval. I got mine in 2009, and they had already sold hundreds of them at that point in America.

KATINA MICHAEL: And for you at that time, there was no other alternative? I assume that if you had not adopted it, your quality of life would have diminished quickly?

GARY OLHOEFT: That’s right. I would have continued shaking and not been able to write, and I would have avoided reading, walking or talking. Something I think I haven’t told you yet is that my device is also an interleaved device that has two settings that alternate: one is set for me to walk, and the other is set for me to talk. You used to have to choose between the two, but now they can alternate because they are interleaved, so that I can do both at the same time.

KATINA MICHAEL: For me, Gary, what they are doing is nothing short of miraculous.


KATINA MICHAEL: And I marvel at these things. Was the FDA correct in waiting those 15 years or so before approval, or should they have approved earlier so other people could have had an improved quality of life in the United States? What do you think about the length of time it took to get approval? Are you critical of it?

GARY OLHOEFT: It depends on what they are talking about. Some of the things they are talking about with genetic modification implants, with virally induced genetic modifications and stem cells: these things are going too fast. A doctor once told me that when they go to the FDA for approval they have to go through trials. The first trial involves a few people. The next trial involves a few tens of people. And then at the approval point there are hundreds of people, but when it is approved, possibly hundreds, or thousands, or millions of people will get it, and then all kinds of things can go wrong that they did not anticipate. So you have to be very careful about this stuff. However, the FDA seems to reinvent the wheel, requiring their own testing when adequate testing has already been done in other countries.

KATINA MICHAEL: I agree with you. It is the brain we are talking about after all.

GARY OLHOEFT: The thing that bothers me most is that Apple footage you sent me. You know, that clip with Steve Jobs and the Wi-Fi problem? [iv]


GARY OLHOEFT: I would not have liked to have been in that room with a DBS.

KATINA MICHAEL: Yes. Interestingly, I was researching that for a talk on business ethics, AI and the future, and then we had this correspondence, and I just connected the two things together [v]. And if he could not run an iPod demo with that EMC (electromagnetic compatibility) interference problem, when we know he would do exhaustive user testing at launches [vi], then what are we going to do, Gary, when we have more and more people getting implants and even more potential electromagnetic interference? I am trying to figure out what kind of design solution could tackle this.

GARY OLHOEFT: And there’s a whole bunch of other things that bother me, like the electromagnetic pulse to stop cars on freeways and the devices they have to shock people.


GARY OLHOEFT: What about all those people who have implants like mine, or other kinds of implants? In one of those fictitious mystery shows, someone was depicted as being killed during a bank robbery by an electromagnetic pulse. So we can see these kinds of scenarios are making it into the public eye through the visual press.

KATINA MICHAEL: And that is a fictional account, right?

GARY OLHOEFT: It’s a fictional scenario but it is certainly possible [vii].

KATINA MICHAEL: Yes, it sure is. Exactly. I am talking at the annual conference of the Australian Communications and Media Authority (ACMA) next month, and I will be using our discussion today as a single case study to raise awareness here. I am talking on implantables for non-medical applications, and there is presently a great deal of pressure from the biohacking community [viii]. A lot of these guys are my friends, given my research area, but they are doing some very strange things. Presently some of them are talking about hacking the brain, and I am telling them they really should not be doing that without medical expertise, even if it is in the name of “citizen science”. Some of them are amateur engineers and others are fully-fledged qualified engineers, but not medical people. And I personally feel the brain is not to be experimented with like this. It is reminiscent of what I would call ‘backyard lobotomies’.

GARY OLHOEFT: It is like DARPA. They have a call out at the moment for a million electrodes inside the brain so they can communicate, not for therapeutic value like mine [ix], [x].

KATINA MICHAEL: You are likely familiar with the DARPA project from 2012 for a brain implantable device that could be used to aid former servicemen and women suffering from post-traumatic stress disorder, depression and anxiety [xi]. We did a special issue on this in IEEE Technology and Society Magazine last year [xii]. They have also claimed this device solution could be used for memory enhancement. It sounds like the cyborgisation of our forces.

GARY OLHOEFT: That’s like what I have. The latest one is more like when you want to remote-control a vehicle or something. The September 2017 IEEE Spectrum had an article about “Brain Racers”, using brain-controlled avatars to compete in a cyborg Olympics [xiii].

KATINA MICHAEL: Exactly. And we did raise issues in that special issue, which I will send to you. I held a national security workshop on brain implants in the military in 2016 [xiv] at the University of Melbourne, where they are doing research on stentrodes. The University of Melbourne is considered to have some leading academics in this space, receiving partial funding, I believe, from DARPA [xv]. I invited some biomedical engineers in the DBS space from the University of Melbourne to participate in the workshop, like Thomas Oxley, but all were unavailable to make it. Thomas, incidentally, was undergoing training in the USA related to DBS and stentrodes [xvi].


KATINA MICHAEL: There are so many things going on at present, like implantables in your jaw that are so close to the ear that they allow you to communicate wirelessly, so you can hear via your teeth [xvii]. We were looking at these kinds of implants and their implications at various workshops, including at the University of Wollongong where we have a Centre of Excellence [xviii].

GARY OLHOEFT: It’s not surprising. In the old days, when we had silver amalgam fillings in teeth, there were people who used to listen to the radio through their teeth.

KATINA MICHAEL: Yes. There’s a well-known episode of the Partridge Family where Laurie gets braces and starts picking up radio signals through them, which interferes with her ability to sing when a film crew comes to record music in the family home [xix]. So yes, teeth are amazing; the auditory links there have been well known for decades and are just being rediscovered by the younger generation.


KATINA MICHAEL: And then there are the communications for autonomous weapons, or override capability. Can a human be autonomous, for instance? Last week we were discussing some of the ethics behind overriding someone's decision not to fire or strike at a target [xx]. Or imagine the ability to remotely control a drone just by using your thoughts, versus someone in a remote location intercepting that communication stream and executing the fire or strike commands without being in situ. Imagine the potential to intercept a person’s thoughts and to make them physically do something. This is where, for me, the waters get muddied. I do not mind the advancements in the cochlear implant space, for instance, where deaf persons have the ability to hear music and entertainment through an embedded technological device [xxi]. I think that is another marvel, really. But I’d be interested to hear your opinion about the crossover between the medical and non-medical spaces. Do you think that is just life, that is just how innovation is? That we need to get used to this, or do you believe prosthetics are the only reason we should be implanting people in the brain?

GARY OLHOEFT: I think the only reason we should be implanting people is for therapeutic reasons. For instance, I have a deep brain stimulator for a specific disease; others might have a particular problem, or maybe it is to replace a part of the brain that has been damaged physically. Because the question becomes: when are we no longer human if we go beyond prosthetic purposes?


GARY OLHOEFT: We have problems with driverless cars, and people are talking about mirrored systems and all sorts of electronics in them that interfere with DBS. There was a paper published where researchers took about 10 cars at different times, and they discovered the ones that were diesel powered did not interfere because they didn’t have any ignition system [xxii]. Conventionally powered cars, which had an electronic ignition system, caused some interference. But electric and hybrid engines caused problems for people with implants [xxiii], [xxiv], [xxv], [xxvi].

KATINA MICHAEL: So do you fear getting in a vehicle at any time? Or is that not the issue, rather it is if you are driving or physically touching parts of the car?

GARY OLHOEFT: No, it’s probably if you are just in the vehicle itself, because of the way they have the wiring in some vehicles. A Prius has 8 computers inside it, Wi-Fi and Bluetooth, and the way they run the wiring from the batteries to the front, it is not twisted wiring, it is just a straight pair. If it were twisted pair there would be a lot less magnetic noise inside the car body.

KATINA MICHAEL: So that’s the car company trying to save money, right?

GARY OLHOEFT: I really don’t know. We have a Prius as well. I’ve tested our car. We have two sets of batteries. The front and right passenger seat are okay, but the driver’s position is very noisy. There’s a woman we know whose deep brain stimulator turns off when she drives her Prius and the car goes into charging mode (while braking) [xxvii].

KATINA MICHAEL: Oh dear, this is a major problem.

GARY OLHOEFT: That’s why I don’t drive.

KATINA MICHAEL: These issues must get more visibility. They can no longer be ignored. This is where consumer electronics come head-to-head with biomedical devices.

GARY OLHOEFT: I’ve also sent you documents that I’ve sent to the FCC and FDA.

KATINA MICHAEL: I read these.

GARY OLHOEFT: I’ve not received any response to these.

KATINA MICHAEL: This is truly an important research area. This topic crosses over engineering, policy and society. It is really about the importance of including the end-user (or patient in this case) in the product lifecycle management (PLM) process.


KATINA MICHAEL: You are the first person I have engaged with who has convinced me to go further with this particular research endeavour. Save for some sporadic papers in the press, and random articles in journal publications about electromagnetic interference issues, it was the Gizmodo article citing you [xxviii], which my husband stumbled across, that validated our present conversation. It is time to take this very seriously now. We now have so many pacemakers; it is not just the heart, it is the brain as well. And I cannot even get a good figure for how many there are out there. I keep being asked, but different sources state different things.

GARY OLHOEFT: They don’t know, because they don’t track them [see the introduction in [xxix]].

KATINA MICHAEL: That is right, but it is somewhat shocking to me, because surely these numbers exist somewhere. And we have to track them. And I do not mean track the names of people. I do not really want people to be in a database of some sort because they bear an implant. I worry about potential hackers getting access to that, not from the privacy perspective alone, but because I do not wish to tip off potential predatory hackers to “switching people off”, so to speak, in the future.


KATINA MICHAEL: My concern is that the more of us who bear these implantables for non-medical reasons in the future, the greater the risks.

GARY OLHOEFT: There is a well-known story of someone demonstrating that an internal insulin pump could be hacked, and an insulin dose changed so that it could kill [xxx], [xxxi].

KATINA MICHAEL: I do wonder, Gary, if this all has to do with liability issues [xxxii]. There is simply no reason that companies like Medtronic should not be engaging the public on these matters. In fact, it is in their best interest to receive customer feedback.

GARY OLHOEFT: It’s definitely a problem and I don’t know what to do about that….

KATINA MICHAEL: So we need some hard evidence that someone’s implantable has previously been tampered with?

GARY OLHOEFT: I’ve already raised the issue several times, and Medtronic, my brain implant manufacturer, just sent me the programmer’s manual for the patient. The original one I got had just a couple of pages that had to do with interference. The latest version runs 16-18 pages on interference. And that is because of the questions I raised about interference and the evidence I showed them.


GARY OLHOEFT: They still won’t admit that their device was defective in one case where I could prove it. My doctor believed me because I showed him the evidence, so he had them replace it at no charge.

KATINA MICHAEL: Okay. I have a question about this. Thank you for the information you sent me regarding your EEG being logged by your DBS implant.

GARY OLHOEFT: It is not the EEG that I sent you, it is the measurement of the magnetic field from induced DBS current.

KATINA MICHAEL: It is the pulse?

GARY OLHOEFT: Right. The pulse height and the pulse frequency.

KATINA MICHAEL: Ok. So I saw the graph which indicated that every second pulse was being skipped.


KATINA MICHAEL: So the question I have is whether you have access to your EEG information. There was a well-known case of Hugo Campos, who wanted access to his ECG information, and last I heard he had taken the manufacturer, who claimed they had the right to withhold this data, to court [xxxiii]. He is more interested in data advocacy than anything else [xxxiv]. He was claiming it was “his” heart rate, and his personal biometrics, and that he had every right to have access to that information [xxxv].

GARY OLHOEFT: There are devices on the web that show you how to build something to get an ECG [xxxvi].

KATINA MICHAEL: Exactly. Hugo even enrolled himself in courses meant for biomedical engineers so he could do this himself, that is, hack into his own heartbeat information, with the pacemaker device residing in his own body [xxxvii]. So he has been at the forefront of that. But the manufacturer is claiming that “they” own the data [xxxviii]. And my question to you is: is your DBS data uploaded through some mechanism, like heart pacemaker data is uploaded on a nightly basis and sent back to base [xxxix]?

GARY OLHOEFT: No. The only time they have access to it is when I visit the doctor’s office. Only when I go to the doctor.

KATINA MICHAEL: You mean to download information or to check its operation?

GARY OLHOEFT: Download information from the pack in my chest. Actually, they store it in there. They print it out in hardcopy because they are afraid of people hacking their computers.


GARY OLHOEFT: And this is the University of Colorado Hospital.

KATINA MICHAEL: Yes, I totally understand this from my background reading. I’ve seen similar evidence where hardcopies are provided to the patient, but a lot of patients like Hugo Campos are saying hardcopies are not enough: “I should be able to have access at any time, and I should be able to tell someone my device is playing up, or look, something is wrong” [xl].

GARY OLHOEFT: Right. Remember how I told you about the interleave function? Well, when they set it to the interleave setting for the first time, they didn’t do it right. And I woke up the next morning feeling like I had had 40 cups of coffee. It turns out it was running at twice the frequency it should have been, and I could show that. So I called them up and said, you’ve got a problem here, and they fixed it right away. I figured I could measure it independently of the Medtronic device. That’s why I built my own AC wirewound ferrite-core magnetometer to monitor my own DBS.


GARY OLHOEFT: But all the Doctor had was a program that told him whatever Medtronic wanted to tell him. I wanted more information than that, I wanted to actually see it so I built my own.
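A brief aside for technically minded readers: the independent check Gary describes, sampling the magnetic field near the lead and estimating the pulse repetition frequency, can be sketched in a few lines. The following is a minimal illustration using synthetic data and a hypothetical 130 Hz programmed setting; it is not the code behind Gary's magnetometer or any Medtronic tool.

```python
# Minimal sketch: estimate a DBS pulse repetition frequency from magnetometer
# samples via an FFT. Synthetic data; the 130 Hz "programmed" rate and 5 kHz
# sample rate are illustrative assumptions, not clinical or device parameters.
import numpy as np

def estimate_pulse_rate(samples: np.ndarray, sample_rate_hz: float) -> float:
    """Return the dominant frequency (Hz) in a sampled waveform."""
    samples = samples - samples.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[spectrum[1:].argmax() + 1]       # skip the zero-frequency bin

fs, programmed_hz = 5000.0, 130.0
t = np.arange(0.0, 2.0, 1.0 / fs)
waveform = np.sign(np.sin(2 * np.pi * 260.0 * t))  # device wrongly at twice the rate

measured_hz = estimate_pulse_rate(waveform, fs)
if abs(measured_hz - programmed_hz) > 0.05 * programmed_hz:
    print(f"warning: measured {measured_hz:.1f} Hz, programmed {programmed_hz:.1f} Hz")
```

Run against the synthetic trace above, the check flags 260 Hz against the programmed 130 Hz, the same doubling Gary caught after the botched interleave setting.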

KATINA MICHAEL: So I saw from the information you sent that you would have had a product on the market to help others, but the iPhone keeps upgrading, so I get that.

GARY OLHOEFT: It keeps changing faster than I can keep up with it.

KATINA MICHAEL: So I am going to argue that it is their responsibility, the manufacturer’s responsibility, to provide this capability.

GARY OLHOEFT: I see no reason why they couldn’t, but I like the idea of a third party providing an independent measurement of whether the implant is working and measuring the parameters directly (pulse height, pulse width, pulse repetition frequency, etc.).

KATINA MICHAEL: So I am concerned on a number of fronts, and have been for some time. This in particular is not a huge ask if they are cooperative in the process of an incremental innovation. For example, imagine if Apple collaborated with Medtronic or other providers such as Stryker, the way Cochlear has collaborated with Apple. I think biomedical device manufacturers have to offer this as a service, in layman’s terms for non-engineers. And it must be free and not cost the recipient anything. It is the only way to empower recipients of pacemakers and for them to feel at ease, without having to go for a visit to a cardiac specialist.

GARY OLHOEFT: I told you about the experience of walking into a Best Buy and having their automated inventory control system turn off my DBS?


GARY OLHOEFT: Then I used my device to see what frequency it was operating at, and then asked my doctor to change my DBS to a different frequency so that I could walk in and out of Best Buy. So the frequency range means the operator needs to keep such things in mind. These inventory control devices are built into walls in stores and malls, so you no longer know that they are even there, and you have no warning. But they are there.

KATINA MICHAEL: I know, they are unobtrusive.

GARY OLHOEFT: So there need to be warning signs or other things like that. They seem to be beginning to appear in hospitals and imaging centres, where they say “MRI in use: if you have a cardiac pacemaker or brain pacemaker device, do not enter this room”. But it is still a rare thing. I remember 20 years or so ago when they had “danger: microwave oven in use”.

KATINA MICHAEL: Yes, I remember that.

GARY OLHOEFT: It is like we need a more generic warning than that.

KATINA MICHAEL: What is your feeling with respect to radio frequency identification (RFID)? Or the new payment systems using near field communications? Are they affecting pacemakers? Or is it way too low in terms of emissions?

GARY OLHOEFT: Well, no. There are wireless devices that are low-level and don’t bother me. For example, I have a computer with Wi-Fi, and that doesn’t bother me. That is because I’ve measured it and I know what it is. It is a dosage thing. If I stay near it, the dosage begins to build up, and eventually it can get to a point where it could be a problem, not necessarily for my DBS but for other things. Heart pacemakers are much closer to the heart, so there is less of a problem. And the length of wiring is much shorter in heart pacemakers. I have a piece of wire that runs from my chest, up my neck, up over the top of my head, and back behind my eyes, and it is almost 18 inches long. That is part of the problem. They could have made that a twisted pair with shielding, like CAT6 wiring, but they didn’t, and the Medtronic people need to fix that one.
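Gary's point about lead geometry follows directly from Faraday's law: the voltage induced in a wiring loop scales with the loop's enclosed area, and twisting a pair of conductors cancels successive half-loops so the net pickup area collapses. A back-of-the-envelope sketch, with every number an illustrative assumption rather than a measured value for any actual DBS lead:

```python
# Faraday's-law estimate of powerline pickup in an implant lead.
# All values are illustrative assumptions: an ~18 inch (0.46 m) lead with
# ~1 cm effective conductor spacing, a 1 microtesla ambient 60 Hz field,
# and an assumed 100x area reduction from twisting the pair.
import math

def induced_voltage(freq_hz: float, b_tesla: float, area_m2: float) -> float:
    """Peak EMF for a sinusoidal field through a loop: V = 2*pi*f*B*A."""
    return 2 * math.pi * freq_hz * b_tesla * area_m2

freq_hz = 60.0                      # US powerline frequency
b_ambient = 1e-6                    # 1 uT, plausible near household wiring
area_straight = 0.46 * 0.01        # untwisted lead acting as a pickup loop
area_twisted = area_straight / 100  # assumed cancellation from twisting

print(f"straight lead: {induced_voltage(freq_hz, b_ambient, area_straight) * 1e6:.2f} uV")
print(f"twisted pair:  {induced_voltage(freq_hz, b_ambient, area_twisted) * 1e6:.2f} uV")
```

Under these assumed numbers the untwisted run picks up roughly a hundred times the voltage of a twisted equivalent, which is the intuition behind Gary's CAT6 comparison.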

KATINA MICHAEL: And Gary, I spoke to some researchers last year in June who were talking about not placing the battery pack so low in the body, making it smaller and locating it closer to the brain. Do you think the problem would dissipate somewhat if the battery pack was closer to the brain?

GARY OLHOEFT: Yes. But then the battery won’t last as long because it’s smaller.

KATINA MICHAEL: Yes, I know that is the issue.

GARY OLHOEFT: They are already trying rechargeable batteries, but you spend all day at the charger: you get 9 hours of charging for only 1 hour of use.

KATINA MICHAEL: No, that is definitely not feasible.

GARY OLHOEFT: So my doctor told me that, and he recommended against that for me.

KATINA MICHAEL: Now here is another question, a difficult one for you, I think. Do you find a conflict in your heart sometimes? You are trying to help the manufacturer make a better product, and you are trying to raise awareness of the important issues that patients face, and yet you are relying on the very product that you are trying to get some response about. Have you ever written to Medtronic and said: “This is my name and this is my story. Would you allow me to advise you, that is, provide feedback to your design team?” [xli]

GARY OLHOEFT: There is a Vice President that is responsible for R&D who is both a medical doctor and an engineer… I wrote to him several times and never got an answer.


GARY OLHOEFT: But I have told Medtronic’s Chief Engineer that my device is misbehaving, you know, with all those missing pulses. He was quite open about it. I also told him about the interleave problem at that time, that it felt like I had had 40 cups of coffee, and he said that was outside his area of expertise, because he built the hardware but someone else programmed it. And you can find there are books out there that might tell you how to program these things. I’ve looked at them, but I don’t agree with the approach they take. They never talk about interference. The default programming for this is 180 repetitions per second … it’s still the wrong place to start, because in the US, 180 Hz is a multiple (harmonic) of the 60 Hz powerline frequency and close to the frequency used by many security systems and inventory control systems. See my talks on YouTube [xlii], [xliii].
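Gary's arithmetic is easy to verify: 180 Hz is exactly the third harmonic of the 60 Hz US powerline (3 × 60 = 180). Below is a hedged sketch of the kind of sanity check one might run over candidate stimulation rates; the candidates and tolerance are assumptions for illustration, not clinical programming advice:

```python
# Flag stimulation rates that land on (or near) a harmonic of the mains
# frequency. Candidate rates and the 2 Hz tolerance are illustrative only.
US_MAINS_HZ = 60.0

def near_mains_harmonic(rate_hz: float, mains_hz: float = US_MAINS_HZ,
                        tolerance_hz: float = 2.0) -> bool:
    """True if rate_hz falls within tolerance of an integer mains harmonic."""
    nearest_harmonic = round(rate_hz / mains_hz) * mains_hz
    return abs(rate_hz - nearest_harmonic) <= tolerance_hz

for candidate in (130.0, 145.0, 180.0):
    verdict = "collides with a mains harmonic" if near_mains_harmonic(candidate) else "clear"
    print(f"{candidate:6.1f} Hz: {verdict}")   # 180.0 Hz collides (3 x 60 Hz)
```

The same idea extends to checking candidate rates against any list of known local emitters, which is effectively what Gary did when he measured the Best Buy system and asked his doctor to move his device off its frequency.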

KATINA MICHAEL: I am so concerned about what I am hearing. Concerned that the company is not taking any action. Concerned that we are not teaching our up-and-coming engineers about these problems, because they have to know if they are not to fall into the same pitfalls down the track as devices get even more sophisticated. I am also concerned that recipients of these brain pacemakers are not given the opportunity to provide proper feedback to design teams directly, and that there is no indirect path by which to do this. A web page does not cut it. There are people like yourself, Gary, who are willing and have relevant research expertise, whom these companies should be welcoming onto their payroll to improve the robustness of their technologies. And I’ve already raised issues like those you are stating with collaborators at the Consortium for Science, Policy & Outcomes at Arizona State University.

GARY OLHOEFT: Well, I’ve tried writing to various organisations and agencies and, when possible, giving testimony in response to FDA, FCC and other agency requests for information.

KATINA MICHAEL: I think it is important to create a safe space where manufacturers, medical practitioners, patients, and policymakers come together to discuss these matters openly. I know there are user groups where patients go to discuss issues, but that serves quite a different function, more of a support group. Until there is some level of openness, it is likely that these issues will continue to cloud future developments. Gary, we need more people like yourself who have real stories to share, that are documented, together with peer-reviewed published research in the domain of interference and DBS. We should continue to write to them and also invite them to workshops and roundtable meetings, and invite representatives from the FDA and FCC. What do you think about this approach?

GARY OLHOEFT: Yes, you can put me down for that. I’ll be involved.


GARY OLHOEFT: Part of the problem is that the FCC authorisation covers 9 kHz up to 300 GHz, and these devices operate below 200 Hz. So the FCC has no regulatory authority over them, except as Part 15 devices. The FDA has no limit: from lasers down to direct current (DC). The FCC has nothing to do with it, so we need to get involved with the FDA. We need to get them to document things, at any rate.

KATINA MICHAEL: I also have a question about the length of time that battery packs last in implantable biomedical devices. Could they last longer? One researcher, known as a Cyborg Anthropologist, was speaking on a plane to someone from one of these biomedical companies, who said to her that the devices are replaced every 4-5 years so that the companies can make more money, like 40,000 dollars for each new device. What do you make of this?

GARY OLHOEFT: Possibly the case. But really, you don’t want to be making bigger battery packs, right? You just want to be able to make better battery technology. For example, do you really want a lithium-ion battery in your body because it lasts longer?

KATINA MICHAEL: Yes, you have a point there. What kind of battery do you have?

GARY OLHOEFT: I don’t know what kind it is. I do have the dimensions for how big it is.

KATINA MICHAEL: Yes, I saw the information you sent, 6x6x2 cm.

GARY OLHOEFT: It’s already presentable “looks-wise”, so I wouldn’t risk it.

KATINA MICHAEL: Ok, I agree with your concerns here. I was just worried about this remark because I have heard it before, replacement biomedical devices being a money generator for the industry [xliv].

GARY OLHOEFT: He must’ve been a marketing type.

KATINA MICHAEL: Yes, he was in sales engineering.

GARY OLHOEFT: Do you know how they have those wireless power transmitters now? The Qi system is the only one I have been able to test, because I’ve been able to get through to them, as there is a potential there for interference [xlv]. So they have given me a device to actually play with.

KATINA MICHAEL: That is great. And a very good example of what we are talking about should be happening.

GARY OLHOEFT: There is a Wireless Power Consortium of other people who work at different frequencies [xlvi]. They are the only ones that give me no response to my letters. So the wireless power transmission people need to be brought into the scope of this somehow.

KATINA MICHAEL: Could you elaborate?

GARY OLHOEFT: These are the people who create devices to recharge batteries for devices that require power transmission.

KATINA MICHAEL: Yes, mobility types of technology devices. And there are lots of those coming, most of them with little testing in the security space. I mean, the Internet of Things is promising so much in this market space. I think the last statistic I read that the media caught wind of was 20 billion devices by 2020 [xlvii].

GARY OLHOEFT: You are looking at a house that could have every lightbulb, every appliance, every device in it on the Internet.

KATINA MICHAEL: Yes indeed, we just have to look at the advent of Nest.

GARY OLHOEFT: And yet they are wirelessly transmitting. It would be much better if they were hooked up using fibre optics.

KATINA MICHAEL: Agreed… I mean, for me it is also a privacy concern, with everything in the house hooked up to the Internet [xlviii]. Last year somebody demonstrated they could set a toaster alight in the IoT scenario [xlix].

GARY OLHOEFT: You know how Google has these cars driving around taking pictures everywhere?

KATINA MICHAEL: Yes, that is part of my research [l].

GARY OLHOEFT: And they also record whatever wireless systems they can get into that are not password-protected, and then they can record anything that is in them. They lost a lawsuit over that.

KATINA MICHAEL: Yes. There is one fine handed down in Europe recently that ran into the billions. But over the last several years they have been fined very different amounts in different markets. It was quite ridiculous that they were fined only a few thousand US dollars in, for example, South Korea [li]!

GARY OLHOEFT: That is a joke.

KATINA MICHAEL: At the IEEE Sections Congress last month I spoke to several young people involved with driverless cars. And, I don’t know, they were very much discounting the privacy and security issues that will arise. One delegate told me: “it’s all under control”. But I do not think they quite get it, Gary. I said to one of them: “but what about the security issues?” and he replied: “what issues? We’ve got them all under control. I am not in the slightest concerned about this because we are going to have protocols.” And I pointed to the Jeep Cherokee that some hackers stopped in its tracks on a highway in the United States [lii], [liii]. One of my concerns with these driverless cars is that people will die, sizzling in a hot vehicle, where they have been accidentally locked inside by the “car”. And they don’t even have to have pacemakers; it is an issue of simply having a vehicle unlock its doors for a client to exit.

GARY OLHOEFT: There was the case of the hybrid vehicle that was successfully stopped and demonstrated on TV.

KATINA MICHAEL: Yes. And there was also someone wearing an Emotiv device who was steering their vehicle with their thoughts [liv]. I was giving a talk at Wollongong’s innovation hub, iAccelerate, last week, and I put this very scenario to them. What if I hacked into the driver’s thoughts and steered the car off a cliff?

GARY OLHOEFT: So what will they do between vehicles when the devices start to interfere with one another? 

KATINA MICHAEL: Yes, exactly! And when devices begin interfering with one another more frequently for who knows what reason?

GARY OLHOEFT: We have had situations in which cell phones have stopped working because the network is simply overloaded on highways, or blocked by landslides or just traffic congestion. The Broncos Football Stadium here is undergoing a six-million-dollar upgrade just so they can get the Wi-Fi working, and now they are building it into every seat. So they now have security systems like airports do, and so I cannot go into the stadium anymore because of my DBS. I couldn’t sit in a light rail train either.

KATINA MICHAEL: So here is a more metaphysical and existential question. I am so fortunate to be speaking to you! You are alive, you are well in terms of being able to talk and communicate, and yet somehow this sophisticated tech also means that you have had to scale down your access to certain places, almost living off the grid to some degree. So all of this complex tech actually means you are living more simply, perhaps. What does that feel like? It really is a paradox. You are being careful, testing your devices, testing the Wi-Fi, and learning by trial and error on the fly, it seems.

GARY OLHOEFT: Well I have a landline phone against my head right now because I know it doesn’t bother me. I cannot hold a cellular phone within 20 inches of my head.


GARY OLHOEFT: So you are right. I mean, there are a lot of places I cannot go to, like the school library or the public library, because of their system for keeping track of books. It has a very powerful electromagnetic pulse. So when I go to the library, I go remotely via Virtual Private Network (VPN) on the Internet, and fortunately I have access to that. I can also call the librarian, who lets me in via the back door.

KATINA MICHAEL: So for me, in one sense you are very free, and in another, somewhat not free at all. I really do not know how to express it any other way.

GARY OLHOEFT: I see what you are trying to say, but I would be less free without the device: it dramatically improves my functionality and quality of life, but it also limits where I can go.

KATINA MICHAEL: I know. I know. I am ever so thankful that you have it and that we are able to talk so freely. I am not one to slow down progress, but I am looking at future social implications. One of the things I have been pondering is the potential to use these brain stimulators in a jail-like way. I am not referring here to torturous uses of brain stimulators, but, for example, to the possibility of using brain stimulators for repeat offenders in paedophilia cases, or extreme crimes; whether we would ever get to the point where an implantable would be used for boundary control. Perhaps I am referring here to electronic jails of the future.

GARY OLHOEFT: That gets to be worrisome in a different way. How far away are you then from controlling people? 1984 and all that [lv]. These DBS devices are being used now to help with obsessive compulsive disorder (OCD) and neural pain management.

KATINA MICHAEL: Yes, that is what we have pondered in the research MG Michael and I have conducted on uberveillance. If we can fix the brain with an implantable, then we can also do damage to it [lvi]. It is a bit like the internal insulin pump: if we can help someone receive the right amount of insulin, we can also reverse the process and give an individual the wrong amount to worsen the problem. Predatory hacking is something that will happen, if it is not happening already. That is just the human condition, that people will dabble with that kind of stuff. It is very difficult to talk about this in public, because you do not wish to scare or alarm brain pacemaker or any pacemaker recipients, but we do need to raise awareness about this.

GARY OLHOEFT: That would be good because we don’t have enough people talking about these issues.

KATINA MICHAEL: I know. According to the NIH, there are 25 million people who have pacemakers and are vulnerable to cybersecurity hacks [lvii], [lviii]. That is a huge number. And it was you who also told me that 8% of Americans have some form of implant.

GARY OLHOEFT: Well it was 25 million in the year 2000.

KATINA MICHAEL: And the biggest thing? They must never, ever link biomedical devices to the Internet of Things. Never. That is probably my biggest worry for the pacemaker community: that the companies will not think about this properly, and that they will be thinking of the ease of firmware updates and monitoring rather than the safety of the individual. I envisage it will require a community of people, and I am not short-sighted: it will mean a five-year engagement to make a difference to policy internal to organisations, and for government agencies to listen to the growing needs of biomedical patients. But this too is an educational process, and a highly iterative one. This is not like going down to your local mechanic and getting your car serviced; this is about the potential for things to go wrong, minimising exposure, and ensuring they stay right.


KATINA MICHAEL: Thank you Gary for your time.


Key Terms

Biomedical device: the integration of a medical device and an information system that facilitates life-sustaining care to a patient in need of a prosthetic function. Biomedical devices monitor physiological characteristics through mechanical parts small enough to embed in the human body. Popular biomedical devices include heart pacemakers and defibrillators, brain stimulators and vagus nerve stimulators, and cochlear and retinal implants, among others. The biomedical device takes what was once a manual function in the human body and replaces it with an automatic function, for example, helping to pump blood through the heart to sustain circulation.

Biomedical Co-creation: co-creation is a term popularised in the Harvard Business Review in 2000. Biomedical co-creation is a management design strategy bringing together a company that manufactures a biomedical device and recipients of that device (i.e. patients) in order to jointly produce a mutually valued outcome. Customer perspectives, experiences and views in this instance are vital for the long-term success of biomedical devices.

Deep Brain Stimulation: also known as DBS, is a neurosurgical procedure involving the implantation of a biomedical device called a neurostimulator (also known as a brain pacemaker), which sends electrical impulses, through implanted electrodes, to specific targets in the brain for the treatment of movement and neuropsychiatric disorders. DBS has provided therapeutic uses in otherwise treatment-resistant illnesses like Parkinson's disease, Tourette’s Syndrome, dystonia, chronic pain, major depressive disorder (MDD), and obsessive compulsive disorder (OCD). It is also being considered in the fields of autism and even anxiety-related disorders. The technique is still in its infancy and at the experimental stages with inconclusive evidence in treating MDD or OCD.

Federal Communications Commission: The Federal Communications Commission is an independent agency of the United States government created by statute to regulate interstate communications by radio, television, wire, satellite, and cable. Biomedical devices are not under the regulation of the FCC.

Food and Drug Administration: The Food and Drug Administration (FDA or USFDA) is a federal agency of the United States Department of Health and Human Services, one of the United States federal executive departments. The FDA is responsible for protecting and promoting public health through the control and supervision of a number of domains, among them those relevant to the biomedical device industry including electromagnetic radiation emitting devices (ERED).

Cybersecurity issues: those that affect biomedical device recipients and place patients at risk of an unauthorised intervention. Hackers can attempt to hijack a device and administer incorrect dosage levels to a recipient by penetrating proprietary code. These hackers are known as predatory hackers, given the harm they can cause persons who rely on life-sustaining technology.

Implantables: technologies that sense parameters of various diseases and can either transfer data to a remote centre, direct the patient to take a specific action, or automatically perform a function based on what the sensors are reading. There are implantables with sensors that monitor, those that facilitate direct drug delivery, and those that do both.

Participatory Design: synonymous with a co-design strategy for the development of biomedical devices. It is an approach that tries to incorporate various stakeholders in the process of design, such as engineers, medical practitioners, partners, manufacturers, surgeons, patients, ethics and privacy-related NGOs, and end-users, to ensure that the resultant needs are met.

Product Lifecycle Management: the process of managing the entire lifecycle of a biomedical device from inception, through engineering design and manufacture, to service and disposal of manufactured products. Importantly, PLM is being extended to the ongoing monitoring of the embedded biomedical device in the patient, remotely using wireless capabilities.



[i] Kristen V. Brown, March 7, 2017, “Why People with Brain Implants are Afraid to Go Through Automatic Doors”, Gizmodo, Accessed: February 19, 2018.

[ii] Gary Olhoeft, December 7, 2009, “Electromagnetic interference and medical implants”, The EMR Policy Institute.

[iii] Rosie Spinks, June 13, 2016, “Meet the French neurosurgeon who accidentally invented the “brain pacemaker””, Quartz, Accessed: September 16, 2017.

[iv] Staff, “Steve Jobs Most Pissed Off Moments (1997-2010)”, Apple, Accessed: September 15, 2017.

[v] Katina Michael, “The Creative Genius in Us- the Sky’s the Limit or Is It?”, iAccelerate Series, Accessed: October 18, 2017.

[vi] Fred Vogelstein, October 4, 2013, “And Then Steve Said, ‘Let There Be an iPhone’”, The New York Times Magazine, Accessed: September 15, 2017.

[vii] Alexandra Ossola, November 17, 2015, “Tasers May Be Deadly, Study Finds”, Popular Science, Accessed: February 17, 2018.

[viii] Katina Michael, February 2, 2018, “The Internet of Us”, RADCOMM2017, Accessed: February 19, 2018.

[ix] Emily Waltz, April 26, 2017, “DARPA to Use Electrical Stimulation to Enhance Military Training”, IEEE Spectrum, Accessed: September 15, 2017.

[x] Kristen V. Brown, July 11, 2017, “DARPA Is Funding Brain-Computer Interfaces To Treat Blindness, Paralysis And Speech Disorders”, Gizmodo Australia, Accessed: September 15, 2017.

[xi] Robbin A. Miranda, William D. Casebeer, Amy M. Hein, Jack W. Judy et al., 2015, “DARPA-funded efforts in the development of novel brain–computer interface technologies”, Journal of Neuroscience Methods, Vol. 244, pp. 52-67, Accessed: September 15, 2017.

[xii] Katina Michael, M.G. Michael, Jai C. Galliot, Rob Nicholls, 2017, “Socio-Ethical Implications of Implantable Technologies in the Military Sector”, IEEE Technology and Society Magazine, Vol. 36, No. 1, March 2017, pp. 7-9, Accessed: September 15, 2017.

[xiii] Serafeim Perdikis, Luca Tonin, Jose del R. Millan, 2017, “Brain Racers”, IEEE Spectrum, Vol. 54, No. 9, September 2017, pp. 44-51, Accessed: February 16, 2018.

[xiv] Katina Michael, M.G. Michael, Jai C. Galliot, Rob Nicholls, 2016, “The Socio-Ethical Implications of Implantable Technologies in the Military Sector”, 9th Workshop on the Social Implications of National Security, University of Melbourne, Australia, July 12, 2016, Accessed: September 15, 2017.

[xv] Jane Gardner, February 9, 2016, “Moving with the Power of Thought”, Pursuit, Accessed: September 15, 2017.

[xvi] Katina Michael, May 8, 2016, “Invitation to speak at the 9th Workshop on the Social Implications of National Security”, Personal Communications with Thomas Oxley.

[xvii] MF+ Staff, 2016, “SoundBite: Hearing Aid on your Teeth”, Sonitus Medical, Accessed: September 15, 2017.

[xviii] Gordon Wallace, Joseph Wang, Katina Michael, 2016, “Public Information Session – Wearable Sensing Technologies: What we have and where we are going!”, Wearables and Implantables Workshop, University of Wollongong, Australia, Innovation Campus, August 19, 2016, Accessed: September 15, 2017.

[xix] Herbert Kenwith and James S. Henerson, 1971, “Laurie Gets Braces”, Partridge Family: Season 1, Accessed: September 15, 2017.

[xx] Katina Michael, August 31, 2017, “The Creative Genius in Us- the Sky’s the Limit or Is It?”, iAccelerate: Illawarra’s Business Incubator, Accessed: September 15, 2017.

[xxi] Emma Hinchcliffe, July 27, 2017, “This made-for-iPhone cochlear implant is a big deal for the deaf community”, Mashable, Accessed: September 15, 2017.

[xxii] Ronen Hareuveny, Madhuri Sudan, Malka N. Halgamuge, Yoav Yaffe, Yuval Tzabari, Daniel Namir, Leeka Kheifets, 2015, “Characterization of Extremely Low Frequency Magnetic Fields from Diesel, Gasoline and Hybrid Cars under Controlled Conditions”, International Journal of Environmental Research and Public Health, Vol. 12, No. 2, pp. 1651–1666.

[xxiii] Nicole Lou, February 27, 2017, “Everyday Exposure to EM Fields Can Disrupt Pacemakers”, MedPage Today, Accessed: February 17, 2018.

[xxiv] Oxana S. Pantchenko, Seth J. Seidman, Joshua W. Guag, 2011, “Analysis of induced electrical currents from magnetic field coupling inside implantable neurostimulator leads”, BioMedical Engineering OnLine, Vol. 10, No. 1, p. 94, Accessed: February 17, 2018.

[xxv] Oxana S. Pantchenko, Seth J. Seidman, Joshua W. Guag, Donald M. Witters Jr., Curt L. Sponberg, 2011, “Electromagnetic compatibility of implantable neurostimulators to RFID emitters”, BioMedical Engineering OnLine, Vol. 10, No. 1, p. 50, Accessed: February 17, 2018.

[xxvi] Kelly Dustin, 2008, “Evaluation of Electromagnetic Incompatibility Concerns for Deep Brain Stimulators”, Journal of Neuroscience Nursing, Vol. 40, No. 5, pp. 299-303, Accessed: February 19, 2018.

[xxvii] Joel M. Moskowitz, September 2, 2017, “Hybrid & Electric Cars: Electromagnetic Radiation Risks”, Electromagnetic Radiation Safety, Accessed: February 16, 2018.

[xxviii] Kristen V. Brown, July 4, 2017, “Why People With Brain Implants Are Afraid To Go Through Automatic Doors”, Gizmodo: Australia, Accessed: September 15, 2017.

[xxix] National Institutes of Health, January 10-12, 2000, “Improving Medical Implant Performance Through Retrieval Information: Challenges and Opportunities”, U.S. Department of Health and Human Services, Accessed: February 16, 2018.

[xxx] Dan Goodin, October 27, 2011, “Insulin pump hack delivers fatal dosage over the air”, The Register, Accessed: September 15, 2017.

[xxxi] BBC Staff, October 4, 2016, “Johnson & Johnson says insulin pump 'could be hacked'”, BBC News, Accessed: September 15, 2017.

[xxxii] Office of Public Affairs, December 12, 2011, “Minnesota-Based Medtronic Inc. Pays US $23.5 Million to Settle Claims That Company Paid Kickbacks to Physicians”, Department of Justice, Accessed: February 19, 2018.

[xxxiii] Hugo Campos, January 19, 2012, “Fighting for the Right to Open his Heart Data: Hugo Campos”, TEDxCambridge 2011, Accessed: September 15, 2017.

[xxxiv] Hugo Campos, July 15, 2012, “Stanford Medicine X ePatient: On ICDs and Access to Patient Device Data”, Stanford Medicine X, Accessed: September 15, 2017.

[xxxv] Emily Singer, “Getting Health Data from Inside Your Body”, MIT Technology Review, Accessed: September 15, 2017.

[xxxvi] Hugo Silva, June 22, 2015, “How to build a DIY heart and activity tracking device”, Accessed: September 15, 2017.

[xxxvii] Hugo Campos, March 24, 2015, “The Heart of the Matter: I can’t access the data generated by my implanted defibrillator. That’s absurd.”, Slate, Accessed: September 15, 2017.

[xxxviii] Jody Ranck, 2016, “Rise of e-Patient and Citizen-Centric Public Health”, Ed. Jody Ranck, Disruptive Cooperation in Digital Health, Springer, Switzerland, pp. 49-51.

[xxxix] Haran Burri and David Senouf, 2009, “Remote monitoring and follow-up of pacemakers and implantable cardioverter defibrillators”, Europace. Jun, Vol. 11, No. 6, pp. 701–709,, Accessed: December 6, 2017.

[xl] Mike Miliard, November 20, 2015, “Medtronic enables pacemaker monitoring by smartphone”, Healthcare IT News,, Accessed: September 15, 2017.

[xli] Staff. “How can we help?”, Medtronic,, Accessed: September 15, 2017.

[xlii] Gary Olhoeft, December 12, 2009, “Gary Olhoeft #1 Electromagnetic Interference and Medical”, Youtube: EMRPolicyInstitute, Accessed: February 16, 2018.

[xliii] Gary Olhoeft, December 12, 2009, “Gary Olhoeft #2 Electromagnetic Interference and Medical”, Youtube: EMRPolicyInstitute,, Accessed: February 16, 2018.

[xliv] Tim Pool, August 2, 2017, “When Companies Start Implanting People: An Interview with Amber Case on the Ethics of Biohacking”, TimCast, Episode 139,, Accessed: September 15, 2017.

[xlv] Administrators. Qi (Standard), Wikipedia,, Accessed: September 15, 2017.

[xlvi] WPC, 2017, Wireless Power Consortium,, Accessed: September 15, 2017.

[xlvii] Amy Nordrum, August 16, 2016, “Popular Internet of Things Forecast of 50 Billion Devices by 2020 Is Outdated”, IEEE Spectrum,, Accessed: September 16, 2017.

[xlviii] Grant Hernandez, Orlando Arias, Daniel Buentello, Yier Jin, 2014, “Smart Nest Thermostat: A Smart Spy in Your Home”,,, Accessed: September 16, 2017.

[xlix] Mario Ballano Barcena, Candid Wueest, 2015, “Insecurity in the Internet of Things”, Symantec,, Accessed: September 16, 2017.

[l] Katina Michael and Roger Clarke, 2013, “Location and tracking of mobile devices: Überveillance stalks the streets”, Computer Law and Security Review: the International Journal of Technology Law and Practice, Vol. 29, No. 3, pp. 216-228.

[li] Katina Michael, M.G. Michael, 2011, “The social and behavioural implications of location-based services”, Journal of Location Based Services, Vol. 5, Iss. 3-4,, Accessed: September 16, 2017.

[lii] Andy Greenberg, July 21, 2015, “Hackers remotely kill a Jeep on the Highway- with me in it”, Wired,, Accessed: September 16, 2017.

[liii] Andy Greenberg, August 1, 2016, “The Jeep Hackers are back to prove car hacking can get much worse”, Wired,, Accessed: September 16, 2017.

[liv] Markus Waibel, February 17, 2011, “BrainDriver: A Mind Controlled Car”, IEEE Spectrum,, Accessed: September 16, 2017.

[lv] Oliver Balch, November 17, 2016, “Brave new world: implantables, the future of healthcare and the risk to privacy”, The Guardian,, Accessed: February 19, 2018.

[lvi] Katina Michael, 2015, “Mental Health, Implantables, and Side Effects”, IEEE Technology and Society Magazine, Vol. 34, No. 2, June, pp. 5-7, 17,, Accessed: September 16, 2017.

[lvii] Staff. August 30, 2017, “Cyber-flaw affects 745,000 pacemakers”, BBC News,, Accessed: September 16, 2017.

[lviii] Carmen Camara, Pedro Peris-Lopez, Juan E. Tapiador, 2015, “Security and privacy issues in implantable medical devices: A comprehensive survey”, Journal of Biomedical Informatics, Vol. 55, June 2015, pp. 272-289,, Accessed: February 19, 2018.


Citation: Excerpt from Gary Olhoeft and Katina Michael (2018), Product Lifecycle Management for Brain Pacemakers: Risks, Issues and Challenges, Technology and Society (Vol. 2), University of Wollongong (Faculty of Engineering and Information Sciences), ISBN: 978-1-74128-270-2.

Kallistos Ware on Religion, Science & Technology

This interview with Kallistos Ware took place in Oxford, England, on October 20, 2014. The interview was transcribed by Katina Michael and adapted again in Oxford, on October 18, 2016, by Metropolitan Kallistos in preparation for its appearance in print. M.G. Michael predominantly prepared the questions that framed the interview.


Born Timothy Ware in Bath, Somerset, England, Metropolitan Kallistos was educated at Westminster School (to which he had won a scholarship) and Magdalen College, Oxford, where he took a Double First in Classics as well as reading Theology. In 1966 Kallistos became a lecturer at the University of Oxford, teaching Eastern Orthodox Studies, a position he held for 35 years until his retirement. In 1979, he was appointed to a Fellowship at Pembroke College, Oxford.

Do You Differentiate between the Terms Science and Technology? And is there a Difference between the Terms in Your Eyes?

Science, as I understand it, is the attempt systematically to examine reality. So in that way, you can have many different kinds of science. Physical science is involved in studying the physical structure of the universe. Human science is examining human beings. Thus the aim of science, as I understand it, is truth. Indeed, the Latin term scientia means knowledge. So, then, science is an attempt through the use of our reasoning brain to understand the world in which we live, and the world that exists within us. Technology, as I interpret it, means applying science in practical ways, producing particular kinds of machines or gadgets that people can use. So science provides the basis for technology.

What Does Religion have to Say on Matters of Science and Technology?

I would not call religion a science, though some people do, because religion relies not simply on the use of our reasoning brain but it depends also on God's revelation. So religion is based usually on a sacred book of some kind. If you are a Christian that means the Bible, the Old and New Testaments. If you are a Muslim, then the Old Testament and the Quran.

So science as such does not appeal to any outside revelation; it is an examination of the empirical facts before us. But in the case of religion, we do not rely solely on our reasoning brain but on what God has revealed to us: through Scripture and, in the case of an Orthodox Christian, through Scripture and Tradition. Technology is something we would wish to judge in the light of our religious beliefs. Not all of the things that are possible for us to do applying our scientific knowledge are necessarily good. Technology by itself cannot supply us with the ethical standards that we wish to apply. So then religion is something by which we assess the value or otherwise of technology.

Could We Go So Far as to Say that Science and Religion Could be in Conflict? Or at Least is there a Point Where they Might Become Incompatible One with the Other?

I do not believe that there is a fundamental conflict between science and religion. God has given us a reasoning brain, he has given us faculties by which we can collect and organize evidence. Therefore, fundamentally all truth is from God. But there might be particular ways of using science which on religious grounds we would think wrong. So there is not a fundamental conflict, but perhaps in practice a certain clash. Problems can arise when, from the point of view of religion, we try to answer questions which are not strictly scientific. It can arise when scientists go beyond the examination of evidence and form value judgements which perhaps could conflict with religion. I would see conflict arising, not so much from science as such in the pursuit of truth, but from scientism, by which I mean the view that methods of scientific enquiry necessarily answer all the questions that we may wish to raise. There may be areas where science cannot give us the answer. For example, do we survive death? Is there a future life? That is to me a religious question. And I do not think that our faith in a future life can be proved from science, nor do I think it can be disproved by science. Equally, if we say God created the world, we are making a religious statement that in my view cannot be proved or disproved by science. So religion and science are both pursuing truth but on different levels and by different methods.

Are there Any Principles or Examples in the Judeo-Christian Tradition Which Point to the Uses and Abuses of Technology?

One precious element in the Judeo-Christian tradition is respect for the human person. We believe as Christians that every person is of infinite value in God's sight. Each person is unique. God expects from each one of us something that he doesn't expect from anyone else. We are not just repetitive stereotypes. We are each made in the image and likeness of God, and we realize that likeness and image, each in our own way. Humans are unique basically because we possess freedom. Therefore we make choices. And these choices which are personal to each one of us determine what kind of person we are. Now, any technology which diminishes our personhood, which degrades us as humans, this I see as wrong. For example, to interfere with people's brains by medical experimentation, I would see as wrong. Medicine that aims to enable our bodies and our minds to function correctly, that clearly I would see as good. But experiments that have been done by different governments in the 20th century, whether by Communism or in Nazi Germany, that I would see as an abuse of technology because it does not show proper respect for the integrity of the human person. So this would be my great test - how far technology is undermining our personhood? Clearly our freedom has to be limited because we have to respect the freedom of other people. And therefore, much of politics consists of a delicate balancing of one freedom against another. But technology should be used always to enhance our freedom, not to obliterate it.

How did the Ancient World Generally Understand and Practice Technology?

Interpreting technology in the broadest possible sense, I would consider that you cannot have a civilized human life without some form of technology. If you choose to live in a house that you have built yourself or somebody else has built for you, instead of living in a cave, already that implies a use of technology. If you wear clothes woven of linen, instead of sheepskins or goatskins, that again is a use of technology. In that sense, technology is not something modern, it came into existence as soon as people began using fire and cooking meals for themselves, for example. Clearly, the amount of technology that existed in the ancient world was far less than what we have today. And most of the technological changes have come, I suppose, in the last 200 years: the ability to travel by railway, by car, and then by plane; the ability to use telephones and now to communicate through the Internet. All of this is a modern development. Therefore we have an elaboration of technology, far greater than ever existed in the ancient world. That brings both advantages and risks. We can travel easily and communicate by all kinds of new means. This in itself gives us the opportunity to do far more, but the advantages are not automatic. Always it is a question of how we use technology. Why do we travel quickly from place to place? What is our aim? When we communicate with the Internet, what is it that we are wishing to communicate to one another? So value judgements come in as to how we use technology. That we should use it seems to me fully in accordance with Christian tradition. But the more complex technology becomes, the more we can do through technology, the more questions are raised whether it is right to do these things. So we have a greater responsibility than ever people had in the ancient world, and we are seeing the dangers of misuse of our technology, in for example the pollution of the environment. For the most part the ecological crisis is due to the wrong use of our technological skills. We should not give up using those skills, but we do need to think much more carefully how and why we are using them.

In What Ways has Technology Impacted Upon Our Practice of Religion? Is there Anything Positive Coming from this?

One positive gain from technology is clearly the greater ease by which we can communicate. We can share our ideas far more readily. A huge advance came in the fifteenth century with the invention of printing. You no longer had to write everything out by hand; you could produce things in thousands of copies. And now of course a whole revolution has come in through the use of computers, which again renders communication far easier. But once more we are challenged: we are given greater power through these technological advances, but how are we going to use this power? We possess today a knowledge that earlier generations did not possess: quantitative information, and technological and scientific facts that earlier ages did not have. But though we have greater knowledge today, it is a question whether we have greater wisdom. Wisdom goes beyond knowledge, and the right use of knowledge has become much more difficult. To give an example from bioethics: We can now interfere in the processes of birth in a way that was not possible in the past. I am by no means an expert here, but I am told that it is possible or soon will be for parents to choose the sex of their children. But we have to ask: Is it desirable? Is it right, from a Christian point of view, that we should interfere in the mystery of birth in this way? My answer is that parents should not be allowed to choose the sex of their child. This is going beyond our proper human responsibility. This is something that we should leave in the hands of God, and I fear that there could be grave social problems if we started choosing whether we would have sons or daughters. There are societies where girls are regarded as inferior, and in due course there might arise a grave imbalance between the sexes. That is just one illustration of how technology makes things possible, but we as Christians on the basis of the teaching of the church have certain moral standards, which say this is possible but is not the right thing to do. Technology in itself, indeed science in itself, cannot tell us what is right or wrong. We go beyond technology, and beyond the strict methods of science, when we begin to express value judgements. And where do our values come from? They come from our religious belief.

How are We to Understand the Idea of Being Created in the “Image and Likeness” of God in the Pursuit of the Highest Levels and Trajectories of Technology?

There is no single interpretation in the Christian tradition of what is meant by the creation of the human person according to the image and likeness of God. But a very widespread approach, found for example among many of the Greek fathers, is to make a distinction between these two terms. Image on this approach denotes the basic human faculties that we are given; those things which make us to be human beings, the capacities that are conferred on every human. The image is as it were, our starting point, the initial equipment that we are all of us given. The likeness is seen as the end point. The likeness means the human person in communion with God, living a life of holiness. Likeness means sanctity. The true human being on this approach is the saint. We humans, then, are travellers, pilgrims, on a journey from the image to the likeness. We should think of the human nature in dynamic terms. Fundamental to our personhood is the element of growth. Now, the image then means that we possess the power of rational thinking, the power of speech, articulate language with which we can communicate with others; it means therefore reason in the broadest sense. More fundamentally, it means that we humans have a conscience, a sense of right or wrong, that we make moral decisions. Most fundamentally of all, the image means that we humans have God-awareness, the possibility to relate to God, to enter into communion with him through prayer. And this to me is the basic meaning of the image, that we humans are created to relate to God. There is a direction, an orientation in our humanness. We are not simply autonomous. The human being considered without any relationship to God is not truly human. Without God we are only subhuman. So the image gives us the potentiality to be in communion with God, and that is our true nature. We are created to live in fellowship and in communion with God the Creator. So the image means you cannot consider human beings simply in isolation, as self-contained and self-dependent but you have to look at our relationship with God. Only then will you understand what it is to be human.

At What Point Would Theologians or Ethicists Reckon we Have Crossed the Line from Responsible Innovation and Scientific Enquiry over into “Hubris”?

As a Christian theologian, I would not wish to impose, as if from a higher authority, limits on scientific enquiry. As I said earlier, God has given us the power to understand the world around us. All truth comes from him. Christ is present in scientific enquiry, even if his name is not mentioned. Therefore, I do not seek in a theoretical way to say to the scientist: Thus far and no further. The scientist, using the methods of enquiry that he has developed, should continue his work unimpeded. One cannot say that any subject is forbidden for us to look at. But there is then the question: how do we apply our scientific knowledge? Hubris comes in when scientists go beyond their proper discipline and try to dictate how we are to live our lives. Morality does not depend solely on scientific facts. We get our values, if we are Christians, from our faith. Modern science is an honest enquiry into the truth. So long as that is the case, we should say to the scientist: please continue with your work. You are not talking about God, but God is present in what you are doing, whether you recognize that or not. Hubris comes in when the scientist thinks he can answer all the questions about human life. Hubris comes in when we think we can simply develop our technology without enquiring: is this a good or bad application of science?

Is that Well-Known Story of the Tower of Babel from the Book of Genesis 11:1-9 at all Relevant with its Dual Reference to “Hubris” and “Engineering”?

Yes, that is an interesting way of looking at the story of the Tower of Babel. The story of the Tower of Babel is basically a way of trying to understand why it is that we humans speak so many different languages and find such difficulty in communicating with one another. But underlying the story of Babel exactly is an overconfidence in our human powers. In the story of the Tower of Babel, the people think that they can build a tower that will reach from earth to heaven. By the power of engineering they think they can bridge the gap between the human and the divine. And this exactly would be attributing to technology, to our faculty for engineering, something that lies beyond technology and beyond engineering. Once you are moving from the realm of factual reality to the realm of heaven, then you are moving into a different realm where we no longer depend simply on our own powers of enquiry and our own ability to apply science. So exactly, the story of the Tower of Babel is a story of humans thinking they have unlimited power, and particularly an unlimited power to unite the earthly with the heavenly, whereas such unity can only come through a recognition of our dependence on God.

Why Can We Not, or Should We Not, Explore and Innovate, and Go as Far as is Humanly Possible with Respect to Innovation, if We Carry the Seed of God's Creative Genius within Us?

Yes, we carry the seed of God's creative genius within us, but on the Christian world view we humans are fallen beings and we live in a fallen world. Now, how the fall is interpreted in Christian tradition can vary, but underlying all understandings of the fall is the idea that the world that we live in has in some way or another gone wrong. There is a tragic discrepancy between God's purpose and our present situation. As fallen human beings, therefore, we have to submit our projects to the judgement of God. We have to ask, not only whether this is possible but whether this is in accordance with the will of God. That obviously is not a scientific or technological question. It is not a question of what is possible but of what is right. Of course, it is true that many people do not believe in God, and therefore would not accept what I just said about this being a fallen world. Nevertheless they too, even those who have no belief in God, have to apply a moral understanding to science and technology. I hope they would do this by reflecting on the meaning of what it is to be human, on the value of personhood. And I believe that in this field it is possible, for Christians and non-Christians, for believers and unbelievers, to find a large measure of common ground. At the same time, we cannot fully understand our limitations as fallen human beings without reference to our faith. So the cooperation with the non-believer only extends to a certain limited degree.

Can a Particular Technology, for Instance Hardware or Software, be Viewed as Being “Immoral”?

One answer might be to say technology is not in itself moral or immoral. Technology simply tells us what is possible for us to do. Therefore, it is the use we make of technology that brings us to the question of whether a thing is moral or immoral. On the other hand, I would want to go further than that, to say that certain forms of technology might in themselves involve a misuse of humans or animals. I have grave reservations, for example, about experiments on animals by dissection. Many of the things that are done in this field fail to show a proper respect for the animals as God's creation. So, it is not perhaps just the application of technology that can be wrong but the actual technology itself, if it involves a wrong use of living creatures, humans or animals. Again, a technology that involves widespread destruction of natural resources, that pollutes the world round us, that too, I would say in itself is wrong, regardless of what this technology is being used for. Often it must be a question of balancing one thing against another. All technology is going to affect people, one way or another. But there comes a point where the effect is unacceptable because it is making this world more difficult for other humans to live in. It is making the world unsuitable for future generations to survive within. Thus, one cannot make a sharp distinction between the technology in itself and how we apply it. Perhaps the technology itself may involve a wrongful use of humans, animals, or natural things; wrongful because it makes the world somehow less pleasant and less healthy for us to live in.

Is Religious Faith in Any Way Threatened by Technology?

If we adopt a scientific approach that assumes humans are simply elaborate machines, and if we develop technologies which work on that basis, then I do think that is a threat to our religious faith, because of my belief in the dignity and value of the human person. We are not simply machines. We have been given free will. We have the possibility to communicate with God. So in assuming that the human being is merely a machine, we are going far beyond the actual facts of science, far beyond the empirical application of technology, since this is an assumption with deep religious implications. Thus there can be conflict when science and technology go beyond their proper limits, and when they do not show respect for our personhood.

Can Technology Itself become the New Religion in its Quest for Singularitarianism - the Belief in a Technological Singularity, where we are Ultimately Becoming Machines?

Yes. If we assume that science and technology, taken together, can answer every question and solve every problem, that would be making them into a new religion, and a religion that I reject. But science and technology do not have to take that path. As before, I would emphasize we have to respect certain limits, and these limits do not come simply from science or technology. We have, that is to say, to respect certain limits on our human action. We can, for example, by technology, bring people's lives to an end. Indeed, today increasingly we hear arguments to justify euthanasia. I am not at all happy about that as a Christian. I believe that our life is in God's hands and we should not decide when to end it, still less should we decide when to end other people's lives. Here, then, is a very obvious use of technology, of medical knowledge, where I feel we are overstepping the proper limits because we are taking into our hands that which essentially belongs to God.

Can You Comment on the Modern Day Quest toward Transhumanism or what is Now Referred to as Posthumanism?

I do not know exactly what is meant by posthumanism. I see the human person as the crown and fulfilment of God's creation. Humans have uniqueness because they alone are made in the image and likeness of God. Could there be a further development in the process of evolution, whereby some living being would come into existence, that was created but on a higher level than us humans? This is a question that we cannot really answer. But from the religious point of view, speaking in terms of my faith as a Christian, I find it difficult to accept the idea that human beings might be transcended by some new kind of living creature. I note that in our Christian tradition we believe that God has become human in Christ Jesus. The second person of the Trinity entered into our human life by taking up the fullness of our human nature into Himself. I see the incarnation as a kind of limit that we cannot surpass and that will not be superseded. And so I do not find it helpful to speculate about anything beyond our human life as we have it now. But we are not omniscient. All I would say is that it will get us nowhere if we try to speculate about something that would transcend human nature. The only way we can transcend human nature is by entering ever more fully into communion with God, but we do not thereby cease to be human. Whether God has further plans of which we know nothing, we cannot say. I can only say that, within the perspective of human life as we know it, I cannot see the possibility of going beyond the incarnation of Christ.

Is Human Enhancement or Bodily Amplification an Acceptable Practice When Considered against Medical Prosthesis?

Human enhancement and bodily amplification are acceptable if their purpose is to enable our human personhood to function in a true and balanced way, but if we use them to make us into something different from what we truly are, then surely they are not. Of course that raises the question of what is acceptable, and of what we truly are. Here the answer, as I have already said, comes not from science but from our religious faith.

What if Consciousness Could Ever be Downloaded through Concepts Such as “Brain in a Vat”?

[Sigh]. I become deeply uneasy when such things are suggested, basically because they undermine the fullness of our personhood. Anything that degrades living persons into impersonal machines is surely to be rejected and opposed.

In the Opposite Vein, What if Machines Were to Achieve Fully Fledged Artificial Intelligence through Advancement?

When I spoke of what it means for humans to be created in God's image, I mentioned as the deepest aspect of this that we have God-awareness. There is as it were in our human nature a God-shaped hole which only He can fill. Now perhaps robots, automatic machines, can solve intellectual problems, can develop methods of rational thought, but do such machines have a sense of right or wrong? Still more, do such machines have an awareness of God? I think not.

What is so Unique about Our Spirit Which We Cannot Imbue Or Suggest into Future Humanoid Machines?

The uniqueness of the human person for me is closely linked with our possession of a sense of awe and wonder, a sense of the sacred, a sense of the divine presence. As human beings we have an impulse within us that leads us to pray. Indeed, prayer is our true nature as humans. Only in prayer do we become fully ourselves. And to the qualities that I just mentioned, awe, wonder, a sense of the sacred, I would add a sense of love. Through loving other humans, through loving the animals, and loving God, we become ourselves, we become truly human. Without love we are not human. Now, a machine however subtle does not feel love, does not pray, does not have a sense of the sacred, a sense of awe and wonder. To me these are human qualities that no machine, however elaborate, would be able to reproduce. You may love your computer but your computer does not love you.

Where Does Christianity Stand on Organ Donation and Matters Related to Human Transplantation? Are there Any Guidelines in the Bioethics Domain?

In assessing such questions as organ donations, heart transplants, and the like, my criterion is: do these interventions help the human person in question to lead a full and balanced human life? If organ transplants and the like enhance our life, enable us to be more fully ourselves, to function properly as human beings, then I consider that these interventions are justified. So, the question basically is: is the intervention life enhancing?

That would bring me to another point. As Christians we see this life as a preparation for the life beyond death. We believe that the life after death will be far fuller and far more wonderful than our life is at present. We believe that all that is best in our human experience, as we now know it, will be reaffirmed on a far higher level after our death. Since the present life is in this way a preparation for a life that is fuller and more authentic, then our aim as Christians is not simply to prolong life as long as we can.

Can You Comment on One's Choice to Sustain Life through the Use of Modern Medical Processes?

The question therefore arises about the quality of life that we secure through these medical processes. For example, I recall when my grandmother was 96, the doctors suggested that various things could be done to continue to keep her alive. I asked how much longer they would keep her alive, and the answer was: well, perhaps a few months, perhaps a year. And when I discovered that this meant that she would always have various machines inserted into her that would be pumping things into her, I felt this is not the quality of life that I wish her to have. She had lived for 96 years. She had lived a full and active life. I felt: should she not be allowed to die in peace without all this machinery interfering with her? If, on the other hand, it were a question of an organ transplant that I could give to somebody who was half her age, and if that afforded a prospect that they might live for many years to come, with a full and active existence, then that would be very different. So my question would always be, not just the prolonging of life but the quality of the life that would be so prolonged. I do not, however, see any basic religious objection to organ transplants, even to heart transplants. As long as the personality is not being basically tampered with, I see a place for these operations. Do we wish to accept such transplants? That is a personal decision which each one is entitled to make.


This interview transcript has previously been published by the University of Wollongong, Australia.

IEEE Keywords: Interviews, Cognition, Education, Standards, Internet, Technological innovation, Ethics

Citation: M.G. Michael, Katina Michael, "Religion, Science, and Technology: An Interview with Metropolitan Kallistos Ware", IEEE Technology and Society Magazine, 2017, Volume: 36, Issue: 1, pp. 20 - 26, DOI: 10.1109/MTS.2017.2654283.

The Screen Bubble - Jordan Brown interviews Katina Michael

So what do I see? I see little tiny cameras in everyday objects, we’ve already been speaking about the Internet of Things—the web of things and people—and these individual objects will come alive once they have a place via IP on the Internet. So you will be able to speak to your fridge; know when there is energy being used in your home; your TV will automatically shut off when you leave the room. So all of these appliances will not only be talking with you, but also with the suppliers, the organisations that you bought these devices from. So you won’t have to worry about warranty cards; the physical lifetime of your device will alert you to the fact that you’ve had this washing machine for two years, it requires service. So our everyday objects will become smart and alive, and we will be interacting with them. So it’s no longer people-to-people communications or people-to-machine, but actually the juxtaposition of this where machines start to talk to people.
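
To make the appliance-to-owner and appliance-to-supplier conversation concrete, here is a minimal sketch in Python of the kind of self-report such a device might send. Everything specific in it is an assumption for illustration only: the endpoint api.supplier.example, the JSON field names, and the two-year service rule are hypothetical, not any vendor's actual API.

```python
import json
import time
import urllib.request

# Hypothetical service rule, echoing the washing-machine example above:
# alert once the appliance has been in use for two years.
SERVICE_INTERVAL_SECONDS = 2 * 365 * 24 * 3600
SUPPLIER_ENDPOINT = "https://api.supplier.example/telemetry"  # assumed URL


def build_report(device_id: str, installed_at: float, watts_now: float) -> dict:
    """Assemble the appliance's self-description: identity, current
    energy draw, and whether it is due for service."""
    age = time.time() - installed_at
    return {
        "device_id": device_id,
        "energy_watts": watts_now,
        "age_seconds": int(age),
        "service_due": age >= SERVICE_INTERVAL_SECONDS,
    }


def send_to_supplier(report: dict) -> int:
    """POST the report to the supplier, standing in for the warranty card."""
    req = urllib.request.Request(
        SUPPLIER_ENDPOINT,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # A washer installed roughly two years ago (about 63.5 million seconds).
    report = build_report("washer-0042",
                          installed_at=time.time() - 63_500_000,
                          watts_now=412.0)
    print(report)                 # the machine "talking" to its owner
    # send_to_supplier(report)    # and to its (hypothetical) supplier
```

The detail worth noticing is the direction of the exchange: the appliance initiates contact with both the owner and the supplier, which is exactly the machine-to-person reversal described above.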


Big Data's Big Unintended Consequences


Marcus Wigan

Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. The Web extra is a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences".

Citation: "Big Data's Big Unintended Consequences", Computer (Volume: 46, Issue: 6, June 2013), pp. 46 - 53, 07 June 2013, DOI: 10.1109/MC.2013.195

Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost


Prof Paul Tallon

Finding data governance practices that maintain a balance between value creation and risk exposure is the new organizational imperative for unlocking competitive advantage and maximizing value from the application of big data. The first Web extra is a video in which author Paul Tallon expands on his article "Corporate Governance of Big Data: Perspectives on Value, Risk, and Cost". The second Web extra is a video in which he discusses the article's supplementary material and how projection models can help individuals responsible for data handling plan for and understand big data storage issues.

Citation: Paul Tallon, Computer (Volume: 46, Issue: 6, June 2013), pp. 32 - 38, Date of Publication: 23 May 2013, DOI: 10.1109/MC.2013.155.
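
The projection models mentioned in the second Web extra are not spelled out in the blurb, so the following is only a sketch of the underlying arithmetic, not Tallon's actual model: a compound-growth projection of storage demand and spend, with made-up growth and cost-decline rates standing in for real planning inputs.

```python
# Illustrative storage projection: data volume compounds upward each year
# while the unit cost of storage declines. All rates here are assumptions.
def project_storage(current_tb: float, growth_rate: float,
                    cost_per_tb: float, cost_decline: float, years: int):
    rows = []
    for year in range(1, years + 1):
        current_tb *= 1 + growth_rate    # data grows each year
        cost_per_tb *= 1 - cost_decline  # hardware gets cheaper each year
        rows.append((year, round(current_tb, 1),
                     round(current_tb * cost_per_tb, 2)))
    return rows


# Example: 100 TB today, 40% annual data growth, $50/TB falling 10% a year.
for year, tb, spend in project_storage(current_tb=100, growth_rate=0.4,
                                       cost_per_tb=50.0, cost_decline=0.1,
                                       years=5):
    print(f"year {year}: {tb} TB, ${spend} storage spend")
```

Even with falling unit costs, compounding data growth outruns the savings in this toy run, which is the value-versus-cost tension the article describes.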

Public Policy Considerations for Data-Driven Innovation


Jess Hemerly from Google, Inc.

To achieve the maximum benefits from data-driven innovation, policymakers must take into account the possibility that regulation could preclude economic and societal benefits. The proposed framework for policy discussions examines three main areas of policy interest: privacy and security, ownership and transfer, and infrastructure and data civics. The Web extra is a video in which author Jess Hemerly expands on her article "Public Policy Considerations for Data-Driven Innovation".

Citation: Jess Hemerly, "Public Policy Considerations for Data-Driven Innovation", Computer (Volume: 46, Issue: 6, June 2013), pp. 25 - 31, Date of Publication: 13 May 2013, DOI: 10.1109/MC.2013.186

Roger Clarke - the Privacy Expert

In 1971, I was working in the (then) computer industry, and undertaking a 'social issues' unit towards my degree.  A couple of chemical engineering students made wild claims about the harm that computers would do to society.  After spending time debunking most of what they said, I was left with a couple of points that they'd made about the impact of computers on privacy that were both realistic and serious.  I've been involved throughout the four decades since then, as consultant, as researcher and as advocate.


Big Data in Neonatal Intensive Care

Carolyn McGregor discussing neonatal issues with a doctor at UOIT.

The effective use of big data within neonatal intensive care units has great potential to support a new wave of clinical discovery, leading to earlier detection and prevention of a wide range of deadly medical conditions. The Web extra is a video in which author Carolyn McGregor expands on her article "Big Data in Neonatal Intensive Care".

Citation: Carolyn McGregor, Computer (Volume: 46, Issue: 6, June 2013), pp. 54 - 59, Date of Publication: 03 May 2013, Print ISSN: 0018-9162, DOI: 10.1109/MC.2013.157


Transforming Big Data into Collective Awareness


Professor Jeremy Pitt

Integrating social and sensor networks can transform big data, if treated as a knowledge commons, into a higher form of collective awareness that can motivate users to self-organize and create innovative solutions to various socioeconomic problems. The Web extra is a video in which author Jeremy Pitt expands on his article "Transforming Big Data into Collective Awareness".

Citation: Jeremy Pitt, "Transforming Big Data into Collective Awareness", Computer (Volume: 46, Issue: 6, June 2013), pp. 40 - 45, 29 April 2013, DOI: 10.1109/MC.2013.153

Wendy Syfret of VICE Australia interviews Katina Michael

Wendy Syfret now Head of Verticals at VICE Media

WS: The upsides of these technologies are clear and are shown to us every day; what are some of the downsides that people may not be considering fully?

  • Cybercrime, illegal material gathering
  • Trust, relationships
  • Privacy, secrecy
  • Covert surveillance, human rights
  • Uberveillance, information manipulation, misrepresentation and misinterpretation of data where context is missing
  • Much more...

WS: Why is it important for people to be aware of their relationship with and dependence on technology?

KM: There comes a point where one needs to question whether they are being enslaved by technology or liberated by it. Human autonomy is a quality that makes our life free and grants us the ability to make decisions for ourselves. Some people take breaks away from technology to "live off the grid" by consciously turning off their mobile phones or not taking their laptop with them when they travel for leisure. There are unforeseen consequences when we strap technology to our bodies, and here I am not simply referring to belt-buckle smartphone clip-ons, but to full-blown wearable technologies, some of these even head-mounted. What happens when we forget technology is even "there" and actively recording the space around us? We may not be impacting our own self, but the camera may be encroaching on the human rights of others. We can argue that this is how CCTV works, that most of us forget it is even on and present, but wearables are overseen by their wearer, by individuals who may choose to do what they wish with the captured footage and are not regulated by acceptable use policies or procedures.

WS: What are some of the fears that surround physically embracing new technologies and allowing them to join with us?

KM: I wouldn't so much call them fears as unintended consequences. When we put on these technologies, do we become a piece of technology ourselves? What does that mean for life-long dependencies? Do we lose our freedom? Are we subjugating ourselves to a life of upgrades? What happens when we wish to take off the camera but feel we cannot because of health repercussions, like the focus of our eyes (or even mind) with respect to digital glass? Do we become so enthralled in the online world that we forget about offline functions, like eating and going to the toilet? What kinds of addictions might this new technology ignite? We won't know the answers to some of these questions for some time, but we can definitely anticipate some of the major concerns that might eventuate through scenario-based planning.

WS: What are some of the risks if these technologies are misused?

KM: All technologies can be misused. Some technologies, however, come endowed with intrinsic functionality that lends itself to greater human risks than others. A table is a piece of technology I use to write letters on; it can be hurled at someone to cause physical harm, but that is perhaps the limit of its utility. New technologies that are more complex pieces of innovation are not just products but are embedded in processes. The more advanced the technologies, the more the harm might be psychological and not just physical. We can learn a great deal from case law: just last month there was an Australian case in which a GPS device was strapped onto a victim's vehicle by a stranger to stalk them and learn their precise movements. That heinous crime is a matter of public record. Now I do not wish to extend the analogy here at all, but crimes against the person proliferate, and these technologies might be misused in any number of ways. I have gone on record previously as stating the issues as follows: "one can quickly imagine this new technology being misused by cybercriminals, namely for crimes against the person. In effect, we are providing a potential capability to share visual surveillance in real-time with people in underground networks of all sorts: for the distribution of child pornography, for grooming, cyberstalking, voyeurism and even for corporate fraud where "the computer" is the ultimate target." Before too long, captured direct visual evidence might even be used to render an insurance policy void, whether it has to do with rehabilitation, life insurance or any other aspect of life.

WS: Is there any work or proposed work at the moment in these fields that worries you?

KM: At the moment my primary concern has to do with how people might react to wearers of this technology after the novelty effect wears off.

See "veillance"  community G+ group.

“I am getting not so excited (uncomfortable) looks in public toilets when I pony up to the urinal wearing Google Glass.”
— Brandon Allgood

KM: This reminds me of the alleged repercussions of Mann wearing his camera at a McDonald's store in France.

Please also read about the issue of people accepting "video evidence" in place of eyewitness accounts or otherwise. Supposedly direct evidence cannot lie, but in my opinion the wearer is in control of their point of eye and of what they choose to record or not to record. See my earlier blogpost on this.

I also hold grave concerns about how Google Glass will affect minors, especially young children.

The abuses of Digital Glass, at least in the first few months of use by trial participants, won't show the ugly side of wearable recording. No one exploring the ugly side of Glass is going to post up their video of a heinous application; for the present, those uses will go undetected.

Aside: the simple difference between a handheld recording device and a body-worn recording device is that in the latter case you have both hands free.

WS: If you allow me to be dramatic, what's the worst thing that could happen if people totally embrace the melding between humans as we know it and these technologies?

KM: We become something other than human and we lose our ability to differentiate between reality and augmediated reality. In essence, we lose control over our decision-making processes, either because we cannot distinguish what is real and what is not, or because we cannot transition between the online and offline worlds. We become like a vinyl record that has been scratched, unable to move on.

It is cute when people paint a picture of Digital Glass as being able to help us recollect memories and the like, but in the real world, why would people wish to make records of the ultra-painful moments in their life? Is it healthy to replay moments of suffering, wrongdoings and the like? Does this positively propel the development of an individual human being (either the offender or the sufferer)?

Perhaps there is a potentially ugly side to POV and the glamour of capturing “every moment of your life”... While going through a bitter divorce, most people would naturally be inclined to try to move on by deleting or removing images and video footage from sight when, for a variety of reasons, things just don’t work out. What then of POV footage if it is treated in the same way as a reality TV show?

This is true of any relationship- not just marriages... the same can pertain to partnerships, friendships and the like.

There are some who would discount that there is an ugly side to real-time POV... but what next? A break-up video? How I caught you on camera with someone else? The swearing and the shouting captured while the children are crying? The tears that follow and the anguish?

The point I am trying to make is that there is an occasion for all things. A video invitation is a great idea for the happy couple who want a “time capsule” to remember perhaps the most carefree time of their life... something that can be handed down to children as a long-lasting representation of love in the immediate family. But those who tout real-time POV, all the time for every occasion, have to rethink what “always on” REALLY means and the consequences of such an existence.
— Katina Michael

WS: In your opinion, what should people be worried about? Or maybe, looking out for?

KM: People need to think about what it means to record others without their permission, whether in a public or a private space. Checking in at a location might mean revealing someone's personal information without their permission, for instance. We also need to think about the convergence of Digital Glass with social media and other apps out there. We must not be naive about the uses; history has proven time and time again that early adopters of new technologies will exploit them in ways that were never intended and that are not beneficial to society. The problem with unleashing a technology that has no real obvious utility is that we are letting the imagination stretch. That might be great for app building and the creative industries, but it might also be ugly with respect to negative uses. We like to read about novel applications and "benefits to humanity" stories but don't like to venture into stories of abuse.

We often forget about the asymmetry that comes with new innovations, or belittle the side effects as applicable only to the unlucky few, mere teething problems of a prototype. Tell that to the mother of a teenager who has committed suicide after her partner uploaded compromising video and images to the Internet that subsequently went viral. It is just one of many tragic cases. The point here is not that the attackers would not have done what they did without a smartphone to take pictures of the attack, but that in the future we will simply see more explicit evidence. Our acts might be seen as part of a reality TV show, but wearers might not realise the repercussions of their actions in the physical world.

We need to introduce adequate policies covering, for instance, the educational use of digital glass and the workplace use of digital glass, and we need to educate consumers, using scenarios, about when it might or might not be appropriate to turn on glass. The other issue has to do with legislation. Wearers might find themselves in conflict with the law, and they need to know their rights, but also when they are breaking the law by their actions. In this case, one size does NOT fit all.

WS: You mentioned Francis Fukuyama calling these some of the world's most dangerous ideas; what does that mean?

KM: It has to do with the nature of control and surveillance. Fukuyama looks at the impact of drones and their consequences. You will find his work quoted in many places- he is a political scientist at Stanford University.

WS: Could these technologies fall into the "wrong hands"?

KM: Sure they can. Imagine a crowd full of people wearing Glass and recording; now imagine trying to capture someone who is conducting covert surveillance. A bit of an oxymoron. This leads to the privatisation of intelligence gathering (by spy agencies not of that given State). I have written a blogpost about human drones: wearers of cameras that act like drones, potentially being paid to gather first-person video up and down public streets, for applications in retail among many others.

WS: What could the consequences of those be?

KM: We lose our trust in social structures where we have previously felt safe. This breaks down the very fibres that make society work. There is an immediate chilling effect: people, especially those suffering from mental illness, will find it difficult to venture out into "safe" zones for fear of being recorded or otherwise.

WS: Are people ignorant to the changing world around them?

KM: I think for the greater part people are aware of the rapid changes happening via new technologies but feel powerless as to what to do about it. They also do not have time to sit and think about the implications of the policies they have agreed to, because things move at webspeed, and no sooner have they adopted one technology than they are barraged with even newer technologies to "try and buy". It is an endless spiral: we have to have the latest gadget these days, or be on board the latest social media app making waves, or we simply aren't with it. Ask most technology developers/providers these days and they will sell you the story that new technologies will enable you to be empowered. Yes, I agree, if used the right way you can certainly apply new technology for good: to help in time management, for reflection, for knowledge discovery and knowledge sharing. But these new technologies are also changing the dynamics of how people communicate, engage one another, and belong to a group or community, at times detrimentally, lending themselves to anti-social behaviours either deliberately or through negligence.

Where will all this data be stored? Who will have access to it? What are individual privacy rights? Intellectual property rights? Do we "YouTubify" our life? How does that profit us? What are the risks of the new PersonView world? What next? Implantable cameras? (Steve Mann filed a patent on the implantable camera in 2000.)

Are we thus beckoning forth an uberveillance society? Always-on implantables? Big Brother on the inside looking out?

Dan DeFilippi - Credit Card Fraud: Behind the Scenes

Katina Michael: Dan, let’s start at the end of your story which was the beginning of your reformation. What happened the day you got caught for credit card fraud?

Dan DeFilippi: It was December 2004 in Rochester, New York. I was sitting in my windowless office getting work done, and all of a sudden the door burst open, and this rush of people came flying in. “Get down under your desks. Show your hands. Hands where I can see them.” And before I could tell what was going on, my hands were cuffed behind my back and it was over. That was the end of that chapter of my life.

Katina Michael: Can you tell us what cybercrimes you committed and for how long?

Dan DeFilippi: I had been running credit card fraud, identity theft, document forgery pretty much as my fulltime job for about three years, and before that I had been a hacker.

Katina Michael: Why fraud? What led you into that life?

Dan DeFilippi: Everybody has failures. Not everybody makes great decisions in life. So why fraud? What led me to this? I mean, I had great parents, a great upbringing, a great family life. I did okay in school, and you know, not to stroke my ego too much, but I know I am intelligent and I could succeed at whatever I chose to do. But when I was growing up, one of the things that I’m really thankful for is my parents taught me to think for myself. They didn’t just focus on remembering knowledge. They taught me to learn, to think, to understand. And this is really what the hacker mentality is all about. And when I say hacker, I mean it in the traditional sense. I don’t mean it as somebody in there stealing from your company. I mean it as somebody out there seeking knowledge, testing the edges, testing the boundaries, pushing the limits, and seeing how things work. So growing up, I disassembled little broken electronics and things like that, and as time went on this slowly progressed until I became, you know, a so-called hacker.

Katina Michael: Do you remember when you actually earned your first dollar by conducting cybercrime?

Dan DeFilippi: My first experience with money in this field was towards the end of my high school. And I realized that my electronics skills could be put to use to do something beyond work. I got involved with a small group of hackers that were trying to cheat advertising systems out of money, and I didn’t even make that much. I made a couple of hundred dollars over, like, a year or something. It was pretty much insignificant. But it was that experience, that first step, that kind of showed me that there was something else out there. And at that time I knew theft and fraud was wrong. I mean, I thought it was stealing. I knew it was stealing. But it spiraled downwards after that point.

Katina Michael: Can you elaborate on how your thinking developed towards earn­ing money through cybercrime?

Dan DeFilippi: I started out with these little things and they slowly, slowly built up and built up and built up, and it was this easy money. So this initial taste of being able to make small amounts, and eventually large amounts of money with almost no work, and doing things that I really enjoyed doing was what did it for me. So from there, I went to college and I didn’t get involved with credit card fraud right away. What I did was, I tried to find a market. And I’ve always been an entrepreneur and very business-minded, and I was at school and I said, “What do people here need? ... I need money, I don’t really want to work for somebody else, I don’t like that.” I realized people needed fake IDs. So I started selling fake IDs to college students. And that again was a taste of easy money. It was work but it wasn’t hard work. And from there, there’s a cross-over here between forged documents and fraud. So that cross-over is what drew me in. I saw these other people doing credit card fraud and making money. I mean, we’re talking about serious money. We’re talking about thousands of dollars a day and up, with only a few hours of work.

Katina Michael: You strike me as someone who is very ethical. I almost cannot imagine you committing fraud. I’m trying to understand what went wrong?

Dan DeFilippi: And where were my ethics and morals? Well, the problem is when you do something like this, you need to rationalize it, okay? You can’t worry about it. You have to rationalize it to yourself. So everybody out there committing fraud rationalizes what they’re doing. They justify it. And that’s just how our brains work. Okay? And this is something that comes up a lot on these online fraud forums where people discuss this stuff openly. And the question is posed: “Well, why do you do this? What motivates you? Why is this fine with you? Why are you not, you know, opposed to this?” And the biggest thing I see is the Robin Hood scenario: “I’m just stealing from a faceless corporation. It’s victimless.” Of course, all of us know that’s just not true. It impacts the consumers. But everybody comes up with their own reason. Everybody comes up with an explanation for why they’re doing it, and how it’s okay with them, and how they can actually get away with doing it.

Katina Michael: But how does a sensitive young man like you just not realize the impact you were having on others during the time of committing the crimes?

Dan DeFilippi: I’ve never really talked about that too much before... Look, the average person, when they know they’ve acted against their morals, feels they have done wrong; it’s an emotional connection with their failure and emotionally it feels negative. You feel that you did something wrong; no one has to tell you the crime type, you just know it is bad. Well, when you start doing these kinds of crimes, you lose that discerning voice in your head. I was completely disconnected from my emotions when it came to these types of fraud. I knew that they were ethically wrong, morally wrong, and you know, I have no interest in committing them ever again, but I did not have that visceral reaction to this type of crime. I did not have that guilty feeling of actually stealing something. I would just rationalize it.

Katina Michael: Ok. Could I ask you whether the process of rationalization has much to do with making money? And perhaps, how much money did you actually make in conducting these crimes?

Dan DeFilippi: This is a pretty common question and honestly I don’t have an answer. I can tell you how much I owe the government and that’s ... well, I suppose I owe Discover Card ... I owed $209,000 to Discover Card Credit Card Company in the US. Beyond that, I mean, I didn’t keep track. One of the things I did was, and this is kind of why I got away with it for so long, is I didn’t go crazy. I wasn’t out there every day buying ten laptops. I could have but chose not to. I could’ve worked myself to the bone and made millions of dollars, but I knew if I did that the risk would be significantly higher. So I took it easy. I was going out and doing this stuff one or two days a week, and just living comfortably but not really in major luxury. So honestly, I don’t have a real figure for that. I can just tell you what the government said.

Katina Michael: There is a perception among the community that credit card fraud is sort of a non-violent crime because the “actor” being defrauded is not a person but an organization. Is this why so many people lie to the tax office, for instance?

Dan DeFilippi: Yeah, I do think that’s absolutely true. If we are honest about it, everyone has lied about something in their lifetime. And people... you’re right, you’re absolutely right, that people observe this, and they don’t see it in the big picture. They think of it on the individual level, like I said, and people see this as a faceless corporation, “Oh, they can afford it.” You know, “no big deal”. You know, “Whatever, they’re ripping off the little guy.” You know. People see it that way, and they explain it away much easier than, you know, somebody going off and punching someone in the face and then proceeding to steal their wallet. Even if the dollar figure of the financial fraud is much higher, people are generally less concerned. And I think that’s a real problem because it might entice some people into committing these crimes because they are considered “soft”. And if you’re willing to do small things, it’s going to, as in my case, eventually spiral you downwards. I started with very small fraud, and then got larger. Not that everybody would do that. Not that the police officer taking the burger for free from Burger King is going to step up to, you know, to extortion or something, but certainly it could, could definitely snowball and lead to something.

Katina Michael: It has been about six years since you were arrested. Has much changed in the banking sector regarding triggers or detection of cybercriminal acts?

Dan DeFilippi: Yeah. What credit card companies are doing now is pattern matching, using software to find and root out these kinds of things. I think that’s really key. You know, they recognize patterns of fraud and they flag it and they bring it out. I think using technology to your advantage to identify these patterns of fraud, and to investigate, report and root them out, is probably, you know, one of the best techniques for dollar returns.
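
To make the pattern-matching idea concrete, here is a minimal sketch of one statistical rule an issuer might run per cardholder. The feature (deviation from average spend) and the threshold are illustrative assumptions only, not any bank’s actual detection logic:

```python
from statistics import mean, stdev

def flag_transaction(past_amounts, new_amount, z_threshold=3.0):
    """Flag a charge that sits far outside the cardholder's
    historical spending pattern (illustrative rule only)."""
    if len(past_amounts) < 2:
        return False  # too little history to establish a pattern
    mu, sigma = mean(past_amounts), stdev(past_amounts)
    if sigma == 0:
        return new_amount != mu
    return (new_amount - mu) / sigma > z_threshold

# Example: a $2,400 laptop on a card that usually sees small charges.
history = [12.50, 48.00, 9.99, 31.25, 22.10, 55.00]
print(flag_transaction(history, 2400.00))  # True -> route to review/CFA
```

Production systems combine many such signals (geography, merchant category, transaction velocity) in statistical or machine-learned models, but the principle of flagging deviations from an established pattern is the same.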

Katina Michael: How long were you actually working for the US Secret Service, as a matter of interest? Was it the length of your alleged, or so-called prison term, or how did that work?

Dan DeFilippi: No. So I was arrested early December 2004. I started working with the Secret Service in April 2005, so about six months later. And I worked with them full-time for almost two years. I cut back on the hours a little bit towards the end, because I went back to university. But it was almost exactly two years, and most of it was full-time.

Katina Michael: I’ve heard that the US is tougher on cybercrime relative to other crimes. Is this true?

Dan DeFilippi: The punishment for credit card fraud is eight-and-a-half years in the US.

Katina Michael: Do these sentences reduce the likelihood that someone might get caught up in this kind of fraud?

Dan DeFilippi: It’s a contested topic that’s been hotly debated for a long time. And also in ethics, you know, it’s certainly an interesting topic as well. But I think it depends on the type of person. I wasn’t a hardened criminal, I wasn’t the fella down on the street, I was just a kid playing around at first who just got more serious and serious as time went on. You know, I had a great upbringing, I had good morals. And I think to that type of person, it does have an impact. I think that somebody who has a bright future, or could have a bright future, and could throw it all away for a couple of hundred thousand dollars, or whatever, they recognize that, I think. At least the more intelligent people recognize it in that ... you know, “This is going to ruin my life or potentially ruin a large portion of my life.” So, I think it’s obviously not the only deterrent but it can certainly be useful.

Katina Michael: You note that you worked alone. Was this always the case? Did you recruit people to assist you with the fraud and where did you go to find these people?

Dan DeFilippi: Okay. So I mainly worked alone but I did also work with other people, like I said. I was very careful to protect myself. I knew that if I had partners that I worked with regularly, it was high risk. So what I did was, on these discussion forums, I often chatted with people beyond just doing the credit card fraud; I did other things as well. I sold fake IDs online. I sold the printed cards online. And because I was doing this, I networked with people, and there were a few cases where I worked with other people. For example, I met somebody online. Could have been law enforcement, I don’t know. I would print them a card, send it to them, they would buy something in the store, they would mail back the item, the thing they bought, and then I would sell it online and we would split the money 50/50.

Katina Michael: Was this the manner you engaged others? An equal split?

Dan DeFilippi: Yes, actually, it was exactly the same deal, for instance, with the person I was working with in person, and I met that person through my fake IDs. When I had been selling the fake IDs, I had a network of people that resold for me at the schools. He was one of the people that had been doing that. And then when he found out that I was going to stop selling IDs, I sort of sold him my equipment and he kind of took over. And then he realized I must have something else going on, because why would I stop doing it, it must be pretty lucrative. So when he knew that, you know, he kept pushing me. “What are you doing? Hey, I want to get involved.” And this and that. So it was that person, whom I happened to meet in person, that in the end was my downfall, so to speak.

Katina Michael: Did anyone, say a close family or friend, know what you were doing?

Dan DeFilippi: Absolutely not. No. And I made it a point to not let anyone know what I was doing. I almost made it a game, because I just didn’t tell anybody anything. Well, I told my family I had a job, you know, but they didn’t know... and all my friends, I just told them nothing. They would always ask me, you know, “Where do you get your money? Where do you get all this stuff?” and I would just say, “Well, you know, doing stuff.” So it was a mystery. And I kind of enjoyed having this mysterious aura about me. You know. What does this guy do? And nobody ever thought it would be anything illegitimate. Everybody thought I was doing something, you know, my own websites, or maybe thought I was doing something like pornography or something. I don’t know. But yeah, I definitely did not tell anybody else. I didn’t want anybody to know.

Katina Michael: What was the most outrageous thing you bought with the money you earned from stolen credit cards?

Dan DeFilippi: More than the money, it’s probably the outrageous things that I did with the cards that matter. In my case the main motivation was not the money alone; the money was almost valueless to a degree. Anything that anyone could buy with a card in a store, I could get for free. So, this is a mind-set change a fraudster goes through that I didn’t really highlight yet. But money had very little value to me, directly, just because there was so much I could just go out and get for free. So I would just buy stupid random things with these stolen cards. You know, for example, in the case that actually ended up leading to my arrest, we had gone out and purchased a laptop before the one that failed, and we bought pizza. You know? So you know, a $10 charge on a stolen credit card for pizza, risking arrest, you know, for a pizza. And I would buy stupid stuff like that all the time. And just because I knew it, I had that experience, I could just get away with it mostly.

Katina Michael: You’ve been pretty open with interviews you’ve given. Why?

Dan DeFilippi: It helped me move on and not to keep secrets.

Katina Michael: And on that line of thinking, had you ever met one of your victims? And I don’t mean the credit card company. I actually mean the individual whose credit card you defrauded?

Dan DeFilippi: So I haven’t personally met anyone but I have read statements. So as part of sentencing, the prosecutor solicited statements from victims. And the mind-set is always, “Big faceless corporation, you know, you just call your bank and they just, you know, reverse the charges and no big deal. It takes a little bit of time, but you know, whatever.” And the prosecutor ended up getting three or four statements from individuals who actually were impacted by this, and honestly, you know, I felt very upset after reading them. And I do, I still go back and I read them every once in a while. I get this great sinking feeling, that these people were affected by it. So I haven’t actually personally met anyone but just those statements.

Katina Michael: How much of hacking do you think is acting? To me traditional hacking is someone sort of hacking into a website and perhaps downloading some data. However, in your case, there was a physical presence, you walked into the store and confronted real people. It wasn’t all card-not-present fraud where you could be completely anonymous in appearance.

Dan DeFilippi: It was absolutely acting. You know, I haven’t gone into great detail in this interview, but I did hack credit card information and stuff; that’s where I got some of my info. And I did online fraud too. I mean, I would order stuff off websites and things like that. But yeah, being in the store and playing that role, it was totally acting. It was, like I mentioned, you are playing the part of a normal person. And that normal person can be anybody. You know. You could be a high-roller, or you could just be some college student going to buy a laptop. So it was pure acting. And I like to think that I got reasonably good at it. And I would come up with scenarios, you know, ahead of time. I would think of scenarios, and answers to situations. I came up with techniques that I thought worked pretty well to talk my way out of bad situations. For example, if I was going to go up and purchase something, I might say to the cashier, before they swiped the card, “Oh, that came to a lot more than I thought it would be. I hope my card works.” That way, if something happened where the card was declined or came up as a call for authorization, I could say, “Oh yeah, I must not have gotten my payment” or something like that. So, yeah, it was definitely acting.

Katina Michael: You’ve mentioned this idea of downward spiraling. Could you elaborate?

Dan DeFilippi: I think this is partially something that happens if you’re in this and do this too much. So catching people early on, before this takes effect, is important. Now, when you’re trying to catch people involved in this, you have to really think about these kinds of things. Like, why are they doing this? What motivates them? And the thought process, like I was saying, is definitely very different. In my case, because I had this hacker background, and I wasn’t, you know, like some street thug who just found a computer, I did it for more than just the money. I mean, it was certainly because of the challenge. It was because I was doing things I knew other people weren’t doing. I was kind of this rogue figure, this rebel. And I was learning at the edge. And especially, if I could learn something, or discover something, some technique, that I thought nobody else was using or very few people were using, to me that was a rush. I mean, it’s almost like a drug. Except with a drug, an addict is chasing that “first high” and can’t get back to it, whereas with credit card fraud, your “high” is always going up. The more money you make, the better it feels. The more challenges you complete, the better you feel.

Katina Michael: You make it sound so easy. That anyone could get into cybercrime. What makes it so easy?

Dan DeFilippi: So really, you’ve got to fill the holes in the systems so they can’t be exploited. What happens is crackers, i.e. criminal hackers, and fraudsters look for easy access. If there are ten companies that they can target, and your company has weak security while the other nine have strong security, they’re going after you. Okay? And also the reverse: if your company has strong security and nine others have weak security, well, they’re going to have a field day with the others and they’re just going to walk past you. You know, they’re just going to skip you and move on to the next target. So you need to patch the holes in your technology and in your organization. I don’t know if you’ve noticed recently, but there’s been all kinds of hacking in the news. The PlayStation Network was hacked, and a lot of US targets. These are basic things that would have been discovered had they had proper controls in place, or proper security auditing happening.

Katina Michael: Okay, so there is the systems focus of weaknesses. But what about human factor issues?

Dan DeFilippi: So another step, on the personnel side, is training. Training really is key. And I’m going to give you two stories, very similar but with totally different outcomes, that happened to me. So a little bit more about what I used to do frequently. I would mainly print fake credit cards, put stolen data on those cards and use them in store to purchase items. Electronics, and things like that, to go and re-sell them. So ... and in these two stories, I was at a well-known big-box electronics retailer, with a card with a matching fake ID. I also made the driver’s licenses to go along with the credit cards. And I was at this first location to purchase a laptop. So you pick up your laptop and then go through the standard process. And when committing this type of crime you have to have a certain mindset. So you have to think, “I am not committing a crime. I am not stealing here. I am just a normal consumer purchasing things. So I am just buying a laptop, just like any other person would go into the store and buy a laptop.” So in this first story, I’m in the store, purchasing a laptop. Picked it out, you know, went through the standard process, they went and swiped my card. And it came up with a ‘CFA’ – call for authorization. Now, a call for authorization is a case where the transaction is flagged on the computer and the store actually has to call in and talk to an operator who will then verify additional information to make sure it’s not fraud. If you’re trying to commit fraud, it’s a bad thing. You can’t verify this, right? So this is a case where it’s very possible that you could get caught, so you try to talk your way out of the situation. You try to walk away, you try to get out of it. Well, in this case, I was unable to escape. I was unable to talk my way out of it, and they did the call for authorization. They called in. We had to go up to the front of the store, there was a customer service desk, and they had somebody up there call it in and discuss this with them. And I didn’t overhear what they were saying. I had to stand to the side. About five or ten minutes later, I don’t know, I pretty much lost track of time at that point, they came back to me and they said, “I’m sorry, we can’t complete this transaction because your information doesn’t match the information on the credit card account.” That should have raised red flags. That should have set off the worst alarm bells possible.

Katina Michael: Indeed.

Dan DeFilippi: There should have been security coming up to me immediately. They should have notified higher people in the organization to look into the matter. But rather than doing that, they just came up to me, handed me back my cards and apologized. Poor training. So just like a normal consumer, I acted surprised and alarmed and amused. You know, and I kind of talked my way out of this too: “What are you talking about? I have my ID and here’s my card. Obviously this is the real information.” Whatever. They just let me walk out of the store. And I got out of there as quickly as possible. And you know, basically walked away and drove away. Poor training. Had that person had the proper training to understand what was going on and what the situation was, I probably would have been arrested that day. At the very least, there would have been a foot-chase.

Katina Michael: Unbelievable. That was very poor on the side of the cashier. And the other story you were going to share?

Dan DeFilippi: The second story was the opposite experience. The personnel had proper training. Same situation, different store. Same big-box electronics chain, at a different place. Go in. And this time I was actually with somebody else, who was working with me at the time. We go in together. I was posing as his friend and he was just purchasing a computer. And this time we didn’t really approach it like we normally did. We kind of rushed because we’d been out for a while and we just wanted to leave, so we kind of rushed it faster than a normal person would purchase a computer. Which was unusual, but not a big deal. The person handling the transaction tried to upsell some things, warranties, accessories, software, and all that stuff, and we just said, “No, no, no, we don’t ... we just want to, you know, kind of rush it through.” Which is kind of weird, but okay, it happens.

Katina Michael: I’m sure this would have raised even a little suspicion however.

Dan DeFilippi: So when he went to process the transaction, he asked for the ID with the credit card, which happens at times. But at this point the person I was with started getting a little nervous. He wasn’t as used to it as I was. My biggest thing was I never panicked, no matter what the situation. I always tried not to show nervousness. And so he’s getting nervous. The guy’s checking his ID, swipes the card, okay, finally going to go through this, and call for authorization. Same situation. Except for this time, you have somebody here who’s trying to do the transaction and he is really, really getting nervous. He’s shifting back and forth. He’s in a cold sweat. He’s fidgeting. Something’s clearly wrong with this transaction. Now, the person who was handling this transaction, the person who was trying to take the card payment and everything, happened to be the manager of this department store. He happened to be well-trained. He happened to know and realize that something was very wrong here. Something was not right with this transaction. So the call for authorization came up. Now, again, he had to go to the front of the store. He never let that credit card and fake ID out of his hands. He held on to them tight the whole time. There was no way we could have gotten them back. So he goes up to the front and he says, “All right, well, we’re going to do this.” And we said, “Okay, well, we’ll go and look at the stock while you’re doing it.” You know, I just sort of tried to play it off, and as soon as he walked away, I said, “We need to get out of here.” And we left, leaving behind the ID and card. Some may not realize it as I am retelling the story, but this is what ended up leading to my arrest. They ran his photo off his ID on the local news network, somebody recognized him, turned him in, and he turned me in. So this was an obvious case of good, proper training. This guy knew how to handle the situation, and he not only prevented that fraud from happening, he prevented that laptop from leaving the store. But he also helped to catch me, and somebody else, and shut down what I was doing. So clearly, you know, failing to train people leads to failure. Okay? You need to have proper training. And you need to be able to handle the situation.

Katina Michael: What did you learn from your time at the Secret Service?

Dan DeFilippi: So a little bit more in-depth on what I observed of cybercriminals when I was working with the Secret Service. Now, this is going to be a little aside here, but it’s relevant. So people are arrogant. You have to be arrogant to commit a crime, at some level. You have to think you can get away with it. You’re not going to do it if you think you’re going to get caught. So there’s arrogance there. And this same arrogance can be used against them. Up until the point where I got caught in the story I just told you, the one that led to my arrest, I was arrogant. I actually wasn’t protecting myself as well as I should have been. Had I been investigated closer, had law enforcement been monitoring me, they could have caught me a lot earlier. I left traces back to my office. I wasn’t very careful with protecting my office, and they could have come back and found me. So you can play off arrogance, but also ignorance, obviously. They go hand-in-hand. So the more arrogant somebody is, the more risk they’re willing to take. One of the things we found frequently worked to catch people was email. Most people don’t realize that email actually contains the IP address of your computer, the identifier on the Internet that distinguishes who you are. Even a lot of criminals who are very intelligent, who are involved in this stuff, do not realize that email shows this. And it’s very easy. You just look at the source of the email and boom, there you go. You’ve got somebody’s location. This was used countless times, over and over, to catch people. Now, obviously the real big fish, the people who are really intelligent and really into this, take steps to protect themselves with that, but then those are the people who are supremely arrogant.
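
The email technique DeFilippi describes is easy to demonstrate. Below is a rough sketch, using only Python’s standard library, that pulls IPv4 addresses out of a message’s Received: headers; a real investigation would also have to account for relays, proxies, and webmail providers that strip the originating address:

```python
import email
import re

# Matches an IPv4 address in square brackets, as mail servers
# conventionally record it in Received: headers.
IPV4 = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def received_ips(raw_message):
    """Return IPv4 addresses from an email's Received: headers.
    Headers are stacked newest-first, so the originating hop is
    usually the last entry."""
    msg = email.message_from_string(raw_message)
    ips = []
    for hop in msg.get_all("Received") or []:
        ips.extend(IPV4.findall(hop))
    return ips

raw = ("Received: from mail.example.org (mail.example.org [203.0.113.7])\n"
       "\tby mx.example.net with SMTP; Mon, 6 Dec 2004 10:00:00 -0500\n"
       "Subject: hello\n\nbody\n")
print(received_ips(raw))  # ['203.0.113.7']
```

This is also why, as he notes, the “big fish” route their traffic through intermediaries: the header then records the relay’s address rather than their own.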

Katina Michael: Can you give us a specific example?

Dan DeFilippi: One case that happened a few years ago, let’s call the individual “Ted”. He actually ran a number of these online forums. These are “carding” forums, online discussion boards, where people commit these crimes. And he was extremely arrogant. He was extremely, let’s say, egotistical as well. He was very good at what he did. He was a good cracker, though he got caught multiple times. So he actually ran one of these sites, and it was a large site, and in the process, he even hacked law enforcement computers and found out information about some of these other operations that were going on. Actually outed some informants, but a lot of people didn’t believe him. And his arrogance is really what led to his downfall. Because he was so arrogant he thought that he could get away with everything. He thought that he was protecting himself. And the fact of the matter was, law enforcement knew who he was almost the whole time. They tracked him back using basic techniques, just like using email. Actually, email was used as part of the evidence, but they found him before that. And it was his arrogance that really led to his getting arrested again, because he just didn’t protect himself well enough. And I really cannot emphasize it enough: this can really be used against people.

Katina Michael: Do you think that cybercrimes will increase in size and number and impact?

Dan DeFilippi: Financial crime is going up and up. And everybody knows this. The reality is that technology works for criminals as much as it works for businesses. Large organizations just can’t evolve fast enough. They’re slow in comparison to cybercriminals.

Katina Michael: How so?

Dan DeFilippi: A criminal’s going to use any tools they can to commit their crimes. They’re going to stay on top of their game. They’re going to be at the forefront of technology. They’re going to be the ones out there pioneering new techniques, finding the holes in new systems before anybody else, to get access to your data. And combine that with the availability of information. When I started hacking back in the ‘90s, it was not easy to learn. You really pretty much had to go into these chat-rooms and become kind of like an apprentice. You had to have people teach you.

Katina Michael: And today?

Dan DeFilippi: Well after the 2000s, when I started doing the identification stuff, there was easier access to data. There were more discussion boards, places where you could learn about these things, and then today it’s super easy to find any of this information. Myself, I actually wrote some tutorials on how to conduct credit card fraud. I wrote, like, a guide to in-store carding. I included how to go about it, what equipment to use, what to purchase, and it’s all out there in the public domain. You don’t even have to understand any of this. You know, you could know nothing about technology, spend a few hours online searching for this stuff, learn how to do it, and order the stuff overnight and the next day you could be out there going and doing this stuff. That’s how easy it is. And that’s why it’s really going up, in my opinion.

Katina Michael: Do you think credit card fraudsters realize the negative conse­quences of their actions?

Dan DeFilippi: People don’t realize that there is a real negative consequence to this nowadays. I’m not sure what the laws are in Australia about identity theft and credit card fraud, but in the United States, it used to be very, very easy to get away with. If you were caught, it would be a slap on the wrist. Almost nothing would happen to you. It was more like give the money back, and possibly serve jail time if it was a repeat offence, but really that was no deterrent. Then it exploded after the dot-com crash, and a few years ago a new law was passed making it a mandatory two years in prison if you commit identity theft. And credit card fraud is considered identity theft in the United States. So you’re guaranteed some time in jail if caught.

Katina Michael: Do you think people are aware of the penalties?

Dan DeFilippi: People don’t realize it. And they think, “Oh, it’s nothing, you know, a slap on the wrist.” There is a need for more awareness, and campaigning on this matter. People need to be aware of the consequences of their actions. Had I realized how much time I could serve for this kind of crime, I probably would have stopped sooner. Long story short, because I worked with the Secret Service and trained them for a few years, I managed to keep myself out of prison. Had I not done that, I would have actually been facing eight-and-a-half years. That’s serious, especially for somebody in their early 20s. And really, had that happened, my future would have been ruined, I think. I probably would have become a lifelong criminal, because prisons are basically teaching institutions for crime. So really, had I known, had I realized it, I wouldn’t have done it. And I think especially younger people, if they realized that there are major consequences to these actions, that they can be caught nowadays, that there are people out there looking to catch them, that really would help cut back on this. Also catching people earlier, of course, is more ideal. Had I been caught early on, before my mind-set had changed and the emotional ties had been broken, I think I would have definitely stopped before it got this far. It would have made a much bigger impact on me. And that’s it.