Dr Katina Michael
Expertise: Privacy and Cybersecurity
School of Computer and Information Technology
Professor Michael comments regularly on the social implications of emerging technologies with an emphasis on privacy and national security.
The topics she is best versed in are cybersecurity, privacy, technology, ethics, social media, wearables and biotechnology.
She researches the social and ethical implications of emerging technologies.
She has engaged in debates on hot-button issues: smartphone addiction; Facebook's privacy breaches; whether humans are being enslaved or empowered by technology; when citizen rights are violated by tech companies or governments; and the possibilities and limitations of mechanical upgrades to the human body.
She has also researched the regulatory environment surrounding the tracking and monitoring of people using commercial global positioning system (GPS) applications, focusing on people with dementia, mental illnesses, parolees, and minors.
Since 1996 Dr Michael has been studying the impact of microcircuitry and nanotechnology devices in humans.
Her research on location intelligence and the behaviours it reveals was a precursor to wearable devices like the Fitbit.
She delivered a TEDx talk on the future prospects of microchipping people and more recently on the future prospects of brain pacemakers.
She understands the history of computers and the key innovations in their design since they were first developed.
She is deeply involved in the Public Interest Technology movement, and in technology for good with respect to the Sustainable Development Goals.
Dr Michael can talk in depth about automatic identification technologies including bar codes, magnetic stripe cards, smart cards, biometrics, radio-frequency identification tags and transponders.
She can provide an informed opinion on location-based services including Global Positioning Systems, UHF and A-GPS, Wireless Local Area Networks, Cellular and 3G Mobile and IP Location Services.
On computing, her knowledge covers context-aware applications, mobile media, wearable computing, chip implants and nanotechnology.
She has a strong interest in national security including homeland defence, national identification schemes, counter-terrorism strategies, natural disaster prevention and response, pandemics and government readiness.
On privacy and surveillance, she can discuss dataveillance, sousveillance and uberveillance.
On public policy her expertise covers the Telecommunications (Interception and Access) Act, anti-terrorism laws, standards and guidelines.
Dr Michael works between Australia and the US. While she's in the US, media can reach her via +14804941149.
Source: Katina Michael, June 12, 2019, “Expertise: Privacy and Cybersecurity (Katina Michael)”, UOW Media, https://www.uow.edu.au/media/find-an-expert/katina-michael/
Citation: Katina Michael with Gemma Veness, June 2, 2019, “United States visa applicants now required to hand over social media usernames”, ABC24hour News: Afternoons, https://www.abc.net.au/news/newschannel/
For an alternate perspective: https://www.abc.net.au/news/2019-06-04/us-visa-rules-social-media-accounts/11174262
For just $US60, a company registered in New York state is selling the data of over 2,000 Australian women who have signed up for online dating services.
Information from dating profiles of more than 2,000 Australians is sold to the ABC for approximately $86.55
Between 2,500 and 4,000 companies in the United States buy, sell and share personal data
Services like PayPal, now used by more than 7 million Australians, share user data with over 600 different third parties
For one woman, 'Rosie' — who wished to remain anonymous — her file included her age, contact details, place of employment and photographs.
The file also noted that while she did not have children, she would like some in the future.
Rosie's mother told the ABC her daughter was "quite shocked" to learn how intimate details of her life and her future hopes were being sold online for a profit.
"I feel like it's more than one website that this information has come from," she said.
The company that sold the information obtained it from dating apps and websites, but would not respond to questions about exactly how it got the data.
PHOTO: It's unclear which apps and services shared the data. (ABC: Tara Cassidy)
The ABC's PM program bought the data as part of an investigation into data privacy.
Sarah, a 27-year-old woman whose data was also included in the purchase, said she was concerned about her safety after learning her data was available for sale.
"It would be pretty easy to track me down even from just my name and profession," she said.
Sarah had previously been doxed online, with contact details and photos of her posted maliciously to the website 4chan.
Listen to Privacy Unravelled, an investigation by PM:
Episode 1: What do Facebook and Google know about us?
Episode 2: The explosion of stored data on us all
Episode 3: How much does the government know about us?
Episode 4: The future of data surveillance
"It's pretty gross to learn that your identity is getting treated like a commodity that's for sale," she said.
"It makes you feel a bit small and powerless."
Dating sites often include the right to share or on-sell client data as part of the terms and conditions of starting an account.
Gathering data points
This case is a classic example of how our data is being sold around the world without our knowledge, according to Katina Michael, a professor in computing and information technology at the University of Wollongong.
"There are companies that are scraping people's data of all types — dating is quite intrusive — and consumers do not understand what is possible with sophisticated data-scraping algorithms," Professor Michael said.
PHOTO: Consumers cannot possibly know how much data about them is out there, Professor Michael says. (Supplied: Katina Michael)
The companies that accumulate and combine this information are known as data brokers. The US Federal Trade Commission found that one data broker alone had 3,000 pieces of data on nearly every person in the United States.
It is difficult to know exactly how many companies are selling and trading data in this way, but credible estimates put the number of data brokers in the United States alone at between 2,500 and 4,000 companies.
University of Maryland law professor Frank Pasquale said brokers would use data to classify people into certain categories that could be discriminatory.
He gave the example of grouping consumers as "elderly and gullible" and then selling their information to gambling marketers.
"People have no idea, nobody has any idea of the vulnerabilities it entails," Professor Pasquale said.
"There's all kinds of data in there that can be used against us — by insurers, by employers — and we just sort of have to hope that the laws keep those bad uses at bay."
For Australians, relying on the law to protect our data is difficult as much of it is stored outside Australia's jurisdiction.
"The mere fact that we're using international platforms to begin with means our data was already residing in America," explained Professor Michael.
"For instance, you may have a Gmail account — it may look like you're in Australia but your information is being stored in America."
Services like PayPal, which is now used by more than 7 million Australians, share user data with over 600 different third parties.
While data brokers are well-established in the US, they are becoming increasingly involved in new international markets like Australia.
"There's global data brokers that are saying, 'We can use the same algorithms as in the United States and we can apply them to other countries'," Professor Michael said.
Data gets linked to social media
Siva Vaidhyanathan, a professor in Media Studies at the University of Virginia, said when it comes to building these multiple data points into a complete picture of us, no one does it better than tech giants like Facebook and Google.
"Facebook for years purchased commercial databases and government databases, so they could cross-list all that data with the data that they had gathered from you," he said.
For example, if you use one of Facebook's apps on your phone, such as Facebook Messenger, WhatsApp or Instagram, then the tech giant can record your location.
"If you walk through a shopping centre, Facebook keeps track of the shops that you enter and cross-references that with any commercial activity that it has followed," he said, adding that other tech giants like Google collect similar information.
"You've told Facebook who your closest friends are; who your closest family are.
"You have also told Facebook what your political interests are, what music you like and what books you read.
"I couldn't imagine a richer picture of each of us. Facebook essentially has a doppelganger of us in its servers — our expressions and desires."
Professor Michael said her greatest concern was the judgements that would be made about consumers by algorithms based on all of this data.
"It is basically creating classes of people and it's creating segregation," she said.
"When we leave things to algorithms we get things wrong and I'm worried that in the next 10 years we'll see algorithms go out of control.
"Judgements are being made about me that I couldn't even conceive of."
Citation: Flint Duxfield and Scott Mitchell, May 30, 2019, “Personal data of thousands of Australians sold for just $US60”, ABC NEWS, https://www.abc.net.au/news/2019-05-31/online-privacy-personal-data-purchased-for-$us60-warning-experts/11157092?section=business
Katina Michael, professor of computing and IT, Wollongong University
Joseph Turow, professor of communications, University of Pennsylvania
Frank Pasquale, professor of law, University of Maryland
LINDA MOTTRAM: Data on Australians is being gathered — sometimes bought and sold around the world as part of a growing, opaque market in sometimes very personal information about you.
The information that interests these so-called "data brokers" can be anything from your address and where you work to more sensitive information like particular health conditions you might have.
Well, today in part two of our series "Privacy Unravelled" that is running all week on PM, we take a deep dive into how this is happening.
Flint Duxfield is the reporter.
(Telephone ring tone)
FLINT DUXFIELD: Last week I got a call from someone I'm going to call Catherine.
Hello, this is Flint.
FLINT DUXFIELD: Hi, how are you doing?
CATHERINE: Good, thank you.
FLINT DUXFIELD: The reason I'm not telling you her real name is because Catherine is pretty concerned about some of the things I was able to find out about her daughter.
CATHERINE: She was quite shocked with what information it did have on there. You just don't think it is going to be available to anybody.
FLINT DUXFIELD: Even though we've never met, I know the full name of Catherine's daughter, how old she is, her email, where she works, what she does for a living and what she looks like and the fact that she doesn't have kids yet but would probably like some.
And I know all this because I found this information and more for sale on a website in the US selling the dating profiles of Australians.
CATHERINE: It was on Tinder.
FLINT DUXFIELD: Right.
CATHERINE: But, I feel like it is more than one site that all that information has come from.
FLINT DUXFIELD: You don't think that it just came from the one dating site that she was on?
FLINT DUXFIELD: The company selling this information wouldn't say how they'd gotten their hands on it and experts say it's a classic example of the way our data is collected and sold without us having any idea.
KATINA MICHAEL: There are companies scraping people's data of all types. Dating is quite intrusive and consumers don't realise what is possible with the digital web with sophisticated data mining and data scraping algorithms.
FLINT DUXFIELD: Katina Michael is a professor of computing and IT at Wollongong University and she says even if you're careful not to share your personal information on things like dating sites, data about you is still being soaked up by all sorts of private companies every single day.
KATINA MICHAEL: Every time a plastic card touches a machine is a data point. Every time you have transacted at a supermarket store is a data point. Every time you've used your loyalty card scheme to have Frequent Flyers and get to a destination through points, these are all accumulations of you.
We basically can constitute you in ones and zeroes in digital data points.
FLINT DUXFIELD: Just about every company you can think of is looking to collect as much information about you as they can: online sites like Amazon and eBay, retailers like Target and Woolworths, music and entertainment sites like Spotify and Netflix, and even your credit cards. This information allows those companies to work out a whole raft of things about us.
JOSEPH TUROW: Data are, as some people say, the new oil. Data are the new ways in which to understand customers.
FLINT DUXFIELD: That is Professor Joseph Turow from the University of Pennsylvania who says the kinds of things that can be worked out about us from our data are pretty specific.
JOSEPH TUROW: In today's world everything is sensitive. In an AI world where you can use deep learning to figure out so many different inferences about people, the most benign categories can yield inferences about an individual that that person would shudder to think that people think of that person.
FLINT DUXFIELD: What sorts of things?
JOSEPH TUROW: Well, for example, you can decide based upon a person's shopping habits what personality they have, what diseases they may get, how long they are likely to live.
Do you want that kind of material to be inferred about you? It is quite possible to do this.
FLINT DUXFIELD: Now the ability to work out that kind of thing might seem creepy but the thing that really concerns a lot of privacy experts is that often this data doesn't just stay with the companies that collect it.
It gets bought, sold and shared in ways most of us have no idea about.
Katina Michael again.
KATINA MICHAEL: Where it becomes a little bit manipulative to me, and exploitative, is when third parties that don't have customers begin to say to companies like Woolworths and Coles, for argument's sake, 'I've got some data that you might really like and here's the price,' and it is no longer personal information that is of a subscriber base.
It is third party information that is then being used to further manipulate consumers.
I think when we cross boundaries in corporations and the data being handed over is no longer your own customer data, then that becomes quite intrusive.
FLINT DUXFIELD: Even services like Paypal, which is now used by more than seven million Australians, shares data about its users with over 600 different third parties.
But the companies at the heart of buying and selling data are called data brokers and they're some of the biggest firms you've probably never heard of — companies like Experian, Acxiom and Quantium.
KATINA MICHAEL: When the company washes their hands of your personal information and says 'Well, I used it appropriately. I de-identified it, and then I decided to sell it'.
And they say 'We don't care down the value chain how that data is used because we did the right thing'. But as it goes down the value chain, it is being misused and abused in ways that people could never have imagined.
FLINT DUXFIELD: And Katina says that one of the reasons for this is that while companies will often say they only pass on data about you anonymously, what's known as 'de-identified data', several studies have shown that de-identification is, at best, very difficult and at worst, well, a bit of a con.
KATINA MICHAEL: So we have this set of de-identified data that algorithms are now, with some precision, allowing us to re-identify particular customers as they transact and go about their business.
So this notion of selling de-identified data is really bogus.
FLINT DUXFIELD: And the ways this allows data brokers to classify us are kind of scary.
Frank Pasquale is a professor of law at the University of Maryland and he gave me just a couple of examples.
FRANK PASQUALE: AIDS and HIV sufferers, gullible households, always elderly and gullible and that was to be sold to gambling marketers.
FLINT DUXFIELD: To what extent do you think people realise that these companies are amassing all this data and trading it in this way?
FRANK PASQUALE: People have no idea, nobody knows. Nobody knows and nobody really has a sense of the vulnerabilities it entails.
It is like- Paul Ohm calls it the 'Database of Ruin', you know, that these companies are creating about every one of us.
There is all sorts of information in there that eventually can be used against us, by insurers, by employers, etc and you know, we just have to hope that the laws keep those bad uses at bay.
FLINT DUXFIELD: And the amount of information these data brokers have is truly staggering.
The US Federal Trade Commission found that one data broker alone had 3,000 pieces of data on nearly every person in the United States.
And Katina Michael says Australia's data broking industry is rapidly heading down the same path.
KATINA MICHAEL: The way that the Australian operators work is that they've grown in size and they've also been bought out and these global data brokers now are saying 'Ah uh, we can use the same algorithms that we built in the US and we can apply them to other states and we can make more money'.
FLINT DUXFIELD: And so would you expect to see the kind of categories and the kind of uses of data that we see in the US being increasingly used in Australia then?
KATINA MICHAEL: Definitely. I think the mere fact that we are using international platforms to begin with means that our data was already residing in America.
For instance, people don't realise, you may have a Gmail account. Yes, it looks like you're in Australia but that information is being stored in America.
FLINT DUXFIELD: Of course, one of the big risks of storing all this information is the potential for it to get into the wrong hands.
In the last 10 years major data brokers like Equifax, Experian and Lexis Nexis have seen data about hundreds of millions of people get hacked or breached and Frank Pasquale says, that means it's now available for purchase to the highest bidder.
FRANK PASQUALE: There is some news about this, about Chinese data markets where a journalist just went to one of these random data markets and found out all the hotels someone had visited and their credit card charges, something like that.
So, that I think, is a dystopian possibility, right?
And if we sort of shrug our shoulders at every data breach and say, well, who's going to use that data? Oh, I guess all the hospital records get breached, but who's really going to put that back together on me, etc.
Eventually, I think it is quite possible that you're going to see enough of this sort of escape.
FLINT DUXFIELD: But in some cases the data doesn't even need to be hacked to get into the wrong hands.
In 2011 an investigation by security researcher Brian Krebs found that an identity theft operation in the US was simply buying data about people from Experian, a major data broker, and this is a company which sells, as one of its products, identity theft protection.
But for those who study data brokers, like Katina Michael, even the legal use of this data raises serious concerns — the most obvious of which is lumping together consumer profiles that are then sold on to marketers and used for advertising.
KATINA MICHAEL: This is looking at between 500 and 5,000 data points. They may be micro-analysed to target you in ways that you are oblivious to.
So you see a message maybe in your wall feed on Facebook or you see it popping up on the right hand side perhaps on your Gmail oblivious to the fact that these are calculated, targeted messages that people are sending you, or algorithms are sending you in the hope that they will maximise sales.
FLINT DUXFIELD: Now sometimes getting ads you're interested in is a good thing but there are more questionable uses for targeted ads, like this company which allows you to secretly send ads to your parents, your partner, your friends or anyone really without them having any idea.
EXTRACT FROM ADVERTISEMENT: That person, the target, is exposed to hundreds of items strategically placed as editorial content whether it is proposed marriage, quit smoking, initiate sex or stop riding motorcycles.
FLINT DUXFIELD: That's right, for around 45 bucks they claim you can surreptitiously put articles in someone's social media feeds or on the websites they visit to convince them to do something you want them to do.
EXTRACT FROM ADVERTISEMENT: The most requested tailormade campaign is settle outside of court and get back with your ex.
FLINT DUXFIELD: And it's exactly that kind of surreptitious, targeted advertising which privacy researchers like Frank Pasquale say is being used for some pretty questionable things.
FRANK PASQUALE: One of the things that I think is a common thread in both the US and Australia is like these shady vocational schools, right.
They set up and they are supposed to be higher education but they are just a big waste of money, and there is a lot of evidence that the people that are shown these sorts of ads are people that have been classified via data in a certain social class or being in the sort of gullible type or something like that.
You know, you're being trapped toward either like bad schooling opportunities, bad credit opportunities like payday loans, very high interest loans.
Those are examples of this sort of data playing into marketing schemes to draw people into things that are bad for them or things that are really bad business opportunities.
FLINT DUXFIELD: The way that companies can easily track so many things about us also makes it easier for them to work out how much we're willing to pay and in some cases charge people different prices depending on what's known about them — something called price discrimination.
FRANK PASQUALE: The price discrimination is definitely happening. We're seeing that.
We actually had the Department of Financial Services in New York recently issue guidance for insurers who want to use social media to write life insurance. So they want to parse everything you've been doing on social media to decide how much life insurance to give you.
There's a whole array of companies in that space and I've also given testimony before the US Senate that mentions, globally, data that is used against people.
For example in India, people that have evidence of being involved in politics were denied loans because they felt that if you are getting into politics, you may be not a reliable credit risk.
So these are all examples of, I think, things where you wouldn't think that you know, oh, I looked at the hang gliding website three times and now my life insurance costs 10 per cent more, right?
But that's exactly what could be happening.
FLINT DUXFIELD: The more these companies know about us, the easier it is for them to decide whether to service us at all.
Several health and car insurance companies, for instance, now offer discounts if you agree to let them track your car while you're driving or record your health data with a smartwatch — which sounds like it might be a good idea, right? If you're healthy and you look after yourself, you save money.
But that's not the case for everyone, as Professor David Watts from La Trobe University explains.
DAVID WATTS: It means that yes, you can be offered particular targeted services that may be beneficial for you. That's the good side of it.
The bad side of it is that it could be used to discriminate against you. It could be sold on to someone else, it could be sold to your employer who may decide that they don't want to employ you because you've engaged in activity and sought medical treatment for it that it morally disapproves of.
And it also means that you actually might not get offered health insurance or your premium might climb through the roof but it can be used and has been used in that American context to deny people insurance and often they are the most underprivileged.
That's a societal problem, you know: does it mean that those who are in the greatest need of healthcare are denied it?
FLINT DUXFIELD: And Katina Michael says as the amount of data increases, so too does the potential for companies to analyse us in more and more calculated ways.
KATINA MICHAEL: When we systematize and leave things to algorithms, we get things wrong and I'm worried that in the next 10 years we will see algorithms go out of control and I'm concerned that there is a loss of human dignity, there is a loss of freedom in these practices and there is a loss of autonomy when judgements are being made about me in ways that I had never thought possible. I just couldn't even conceive it.
LINDA MOTTRAM: Katina Michael, she's professor of computing and IT at Wollongong University.
Flint Duxfield reporting, Privacy Unravelled series — running all this week on PM.
Now you can find this episode and last night's at the PM website if you search ABC Privacy Unravelled, or just search ABC PM.
And make a date tomorrow to join us at the same time for part three of Privacy Unravelled. Flint will look specifically at what government knows about us — it is quite a bit — and what can happen when this data gets into the wrong hands.
That's PM at the same time tomorrow evening, our series Privacy Unravelled.
A short radio interview on "Smartphone Addiction" with Amelia Wood, a second-year journalism student at the University of Wollongong.
Citation: Katina Michael with Amelia Wood, “Smartphone Addiction”, University of Wollongong, Journalism Assignment, https://soundcloud.com/amelia-wood-105258743/jrnl203-assignment-3-smartphone-addiction
Companies are applying artificial intelligence systems to store consumer data and steer their purchasing choices without their knowledge
The retail giant Wal-Mart, the world's largest retailer, holds a patent that detects its customers' moods simply by studying their faces. In this way, it aims to identify unhappy shoppers so it can give them special attention. The Australian bank Westpac has a similar system, though focused on its staff, so that managers can step in if an employee needs it. Some consumers and employees may view these technologies favourably because, thanks to them, their problems will supposedly be resolved sooner.
However, experts such as Dr Monique Mann of the Queensland University of Technology (Australia) warn that, even in these cases, an "outdated concept of consent" is at work. She is blunt: "The law and regulatory frameworks have fallen behind technological advances. This brings serious problems for privacy." In this context, her colleagues Katina Michael and M. G. Michael coined the term "uberveillance", with the following definition: "pervasive surveillance systems, technology-enabled and embedded in society, in electronic devices and even in the human body".
The US Congress is debating how technology intervenes in retail. Companies have digital innovations that let them target their business more precisely. However, thousands of individuals, associations and political parties consider these tools a threat to privacy. They point to shop windows that record the people who stop in front of them; to mirrors equipped with artificial intelligence to advise shoppers on the clothes they are trying on; to cameras scattered everywhere that study customers' faces, bodies and shopping bags in order to classify them.
In these cases, it is not a matter of giving away personal data more or less voluntarily, as happens with many online purchases. Rather, those affected may feel violated, since they are being spied on, studied and manipulated without being asked for permission or told what is being done with their data. Moreover, as if the consumers' mere presence were not enough to set these tracking methods in motion, smartphones make identification even easier: when individuals connect to Wi-Fi, when they switch on Bluetooth, and so on. Amazon's physical stores and the perfumery chain Sephora, among others, make use of some of these mechanisms.
The examples grow by the day. Information panels like those installed in the new international financial centre in Seoul (South Korea) also serve to monitor customers and analyse their movements. On the surface, the main job of these machines is to help those who need it, but the centre's managers use them to know at all times what their visitors are doing. And why they are doing it. Retailers in the United Arab Emirates are also moving quickly in this direction. Many of them use devices of this kind to count and identify people.
"The most sought-after programs anticipate the direction consumers will take. That is why a major business owner has just bought 250 of these systems," reveals Peter Biltsted, director for the Middle East and Africa at the multinational Milestone Systems. The trend will not stop, as another authoritative voice, Marwan Khoury, marketing manager at the specialist firm Axis Communications, points out. He notes that deep learning has already been used in Japan to adapt the advertising displayed along a road according to the type of vehicle driving past it.
The market for cutting-edge retail technology will reach 1.5 billion euros in 2020, according to estimates by the consultancy Deloitte. The same developments that exploit biometric details to provide security, in preventing attacks, at customs control and so on, are now being applied to commerce. But if the first of these uses has triggered an ethical debate, how could the second fail to do so? Tony Smith, a former head of the UK Border Force, stressed at a recent BBC forum that governments should legislate to prevent inappropriate practices with these data.
Like many others, he worries that a journey like the following is already a reality. On the way to a department store, a driver pulls into a petrol station to fill up. While filling the tank, he watches the adverts on the pump's screen. At that moment, the artificial intelligence system behind the monitor is cataloguing him: age, sex... Does he wear glasses? A beard, perhaps? These factors help the machine assign him a demographic profile, which will be passed to advertisers and will follow him to the shops, and even home, without his knowledge. The citizen will simply think he saw a few ads at the petrol station.
Josep Lluís Micó, April 27, 2019, "Nadie se puede librar de la 'omnivigilancia'" ["No one can escape 'uberveillance'"], La Vanguardia, https://www.lavanguardia.com/tecnologia/20190427/461874572551/big-data-inteligencia-artificial-mercado-consumidores-publicidad.html
Citation: Jeffrey Duggan, April 14, 2019, “RFID Journal Live 2019”, ReelyActive Blog, https://reelyactive.com/blog/archives/tag/rfid-2-0
Oh the irony of human-entered data at an RFID conference. Ten years ago, Kevin Ashton, who coined the term “Internet of Things”, explained in RFID Journal:
We need to empower computers with their own means of gathering information […] without the limitations of human-entered data.
Case in point, the badge: the surname and given name are reversed, with the latter misspelled as a result of human data entry from a paper-and-pencil form during onsite registration. Nonetheless, this is an excellent example for emphasising the potential of RFID and the IoT!
Indeed, at the co-hosted IEEE RFID event, I, Jeffrey, presented a workshop entitled Co-located RFID Systems Unite!, focused on this potential now that there are nearly 20 billion RAIN (passive) and BLE (active) units shipping annually. An open architecture for collecting, contextualising and distributing the resulting data is becoming critical, and I was pleased to hear this sentiment echoed on the RFID Journal side by Richard Haig of Herman Kay and Joachim Wilkens of C&A.
Also heard echoed was the prevalence of BLE (active RFID) throughout the conference. Literally.
This contraption, which converts radio decodings into musical notes, may seem odd at first, but over the past year we’ve learned that art is a powerful tool for conveying to a non-technical audience the prevalence and potential of RFID and the IoT in our daily lives. A few attendees were invited to listen with headphones and walk around until they found a silent spot. None were successful.
And we can only expect such prevalence to increase with energy harvesting technology maturing. We were pleased to see Wiliot’s live demo of an energy harvesting BLE tag, making good on their objectives from last year’s conference. Inexpensive battery-free BLE will be key to RFID proliferating to all the physical spaces in which we live, work and play—the BLE receiver infrastructure is often already there.
Which came first: the RFID or the Digital Twin?
The concept of the Digital Twin has also taken off over the past year, and we were pleased to have the opportunity to ask Jürgen Hartmann which came first in the Mercedes-Benz car factory example he presented. His answer was clear:
“Without RFID, for us there is no Digital Twin.”
Ironically, our April Fool’s post from two days earlier was about Digital Conjoined Twins, where we joked that the digital twin resides in the optimal location: adjacent to the physical entity it represents. Perhaps not so silly in the context of industrial applications that are highly sensitive to latency?
RFID projects championed by the organisation’s finance department?
That is exactly what Joachim Wilkens of C&A argued. The success of their retail RFID deployment was a direct consequence of the C-level being on board, but more importantly of having a business case championed by the finance department:
“This is not an IT project, this is a business project.”
While we’ve observed our fair share of tech-driven deployments over the past few years, we’re increasingly seeing measurable business outcomes. For instance, a recent workplace occupancy deployment delivered, within months, a 15% saving in real estate. That is a business project, and one the finance department would love to repeat!
IoT: the next generation
What will we discuss in our RFID Journal Live 2029 blog post when the IoT celebrates its third decade? That may well be in the hands of the next generation. Since we began attending the co-hosted IEEE RFID and RFID Journal Live in 2013, we’ve observed a slow but steady shift in demographics. A younger generation—one which grew up with the Internet—is succeeding the generation instrumental in the development and commercialisation of RFID. On the showroom floor, we’re talking about the Web and APIs. At the IEEE dinner we’re discussing industry-academia collaboration to teach students about applications and ethics. And in the IEEE workshops, ASU Prof. Katina Michael took the initiative to invite one of her undergraduate students to argue the (highly controversial) case for implantables, effectively ceding centre stage to the next generation.
Joseph Cox and Jason Koebler, March 27, 2019, “Facebook Bans White Nationalism and White Separatism”, Motherboard, https://motherboard.vice.com/en_us/article/nexpbx/facebook-bans-white-nationalism-and-white-separatism
Joseph Cox and Jason Koebler, August 23 2018, “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People”, https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
Sasha Ingber, March 27, 2019, “Facebook Bans White Nationalism And Separatism Content From Its Platforms”, NPR, https://www.npr.org/2019/03/27/707258353/facebook-bans-white-nationalism-and-separatism-content-from-its-platforms
Brain Implants: Hope or Hype — Katina Michael:
From therapy to entertainment, to the technological singularity — a recent hire to the School for the Future of Innovation in Society opens a window into the endless possibilities, dangers and uncertainty of brain implants at ASU’s TEDx event.
“I'm looking at how they are presently being used and why they are being used,” Michael said. “For example, a person suffering from Parkinson's disease, Tourette syndrome, or dystonia — when they become resistant to drugs and pharmaceuticals — will opt for this procedure as a last resort because it’s the only hope they have of having some quality of life."
Michael will also explore a future where implants are used beyond therapeutics for entertainment, and eventually, the possibility of uploading our own consciousness to technology.
“We don't know the long term effects of being disembodied," Michael said. "But I'm arguing at the end of my talk that to some degree we have already been disembodied by the technological interventions we are using, whether it's smartphones or, whether it's our social media, we have less physical contact with those we love and we have more contact with inanimate objects even if we're using them as vehicles of communication. So this decrease in face-to-face is going to cause problems.”
Source: Isaac Windes, 22 March 2019, “ASU students and faculty look to the future at TEDx NextGen”, StatePress, http://www.statepress.com/article/2019/03/spscience-asu-students-look-to-the-future-at-tedx-nextgen
Fewer than 200 people watched the original live video of the Christchurch massacre, Facebook has said.
None of them reported it immediately to Facebook during the attack, and it took half an hour after the killer started his live video for anyone to report it using Facebook's reporting tools, the company said.
However, this has been challenged. Jared Holt, a reporter for Right Wing Watch, said he was alerted to the livestream and reported it during the attack.
Police carry flowers left by well wishers to the Al Noor Mosque in Christchurch. Fifty people died in the shootings on Friday.
"I was sent a link to the 8chan post by someone who was scared shortly after it was posted. I followed the Facebook link shared in the post. It was mid-attack and it was horrifying. I reported it," Holt tweeted.
* Christchurch mosque shooting accused not allowed TV or newspapers in prison
* 'You just think about what those people went through' - top Facebook executive
* Christchurch shooting demonstrates how social media is used to spread violence
"Either Facebook is lying or their system wasn't functioning properly."
Holt then checked and could find no record of his report on Facebook's internal tool for listing the reports users send off.
"I definitely remember reporting this but there's no record of it in Facebook. It's very frustrating," Holt told Business Insider.
"I don't know that I believe Facebook would lie about this, especially given the fact law enforcement is likely asking them for info, but I'm so confused as to why the system appears not to have processed my flag."
Facebook declined to comment when contacted by Business Insider.
Facebook vice president Chris Sonderby said the social media giant is working around the clock to prevent the video from being shared again.
"The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast," Sonderby said in a statement.
"Including the views during the live broadcast, the video was viewed about 4000 times in total before being removed from Facebook.
"The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended."
The link to the live-stream was posted on anonymous message board 8chan, and shortly after the 17-minute video ended, a download link for it was also posted on the site.
Facebook removed the video and "hashed" it to automatically prevent it being uploaded again, but some users added watermarks or edited the video in order to slip it past the detection algorithms.
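Why “hashing” a video fails against watermarked or re-edited copies can be illustrated with a toy sketch. This is not Facebook’s actual matching system (which is proprietary); it contrasts a hypothetical 3×3 grayscale “frame” under a cryptographic hash, which any single-pixel edit defeats, with a simple perceptual average hash, which tolerates small edits:

```python
import hashlib

def average_hash(frame):
    """frame: 2D list of grayscale pixel values (0-255).
    Returns a bit string with '1' wherever a pixel exceeds the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def exact_hash(frame):
    """Cryptographic hash: changes completely on any edit, however small."""
    return hashlib.sha256(bytes(p for row in frame for p in row)).hexdigest()

def hamming(a, b):
    """Number of differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 200],
            [30, 220, 210],
            [40, 50, 230]]

# The same frame with a small "watermark"-style edit in one corner.
edited = [row[:] for row in original]
edited[0][0] = 120

print(exact_hash(original) == exact_hash(edited))             # False: exact matching is defeated
print(hamming(average_hash(original), average_hash(edited)))  # 0: the perceptual hash still matches
```

This is why detection systems relying on exact fingerprints had to chase 800-plus edited variants: robust matching needs perceptual hashes compared by distance, not byte-for-byte equality.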
In the first 24 hours after the shooting, Facebook removed about 1.5 million versions of the attack video.
"More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services," Sonderby said.
"We have been working directly with the New Zealand Police to respond to the attack and support their investigation."
Prime Minister Jacinda Ardern has spoken to Facebook chief operating officer Sheryl Sandberg since the attack.
The Government's Cabinet meeting on Monday is expected to be mostly focused on gun law but it is understood the Government is also keen to call on social networks to do more to fight radicalisation in the wake of the mosque shootings. This could include a call to share more data directly with intelligence agencies.
The Global Internet Forum to Counter Terrorism - a consortium of global technology firms including Facebook, Google and Twitter - said it shared the digital "fingerprints" of more than 800 edited versions of the video.
Neal Mohan, YouTube's chief product officer, told The Washington Post that his platform also struggled to successfully moderate the video.
His team finally took unprecedented steps - including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company's detection systems, he said.
Despite such efforts, concerns have been raised by a professor of engineering and information sciences about social media's failure to implement preventative measures.
Professor Katina Michael of the University of Wollongong said algorithms can only do so much to prevent certain content being uploaded, and human moderators are already forced to wade through reams of questionable content.
"The best algorithms couldn't have stopped this. Having said that, if you [Facebook] can't stop it, don't offer it. If you want to provide the service, perhaps you have to vet the users."
Michael said the current algorithms were set up based on a corporate model that was centred around generating revenue, not looking for controversial content. "It is the failure of not only the algorithms, but human moderators."
Australian prime minister Scott Morrison has asked G20 members to consider practical ways to force companies like Facebook and Google to stop broadcasting atrocities and violent crimes.
Sonderby said Facebook is committed to working with leaders in New Zealand and other governments to help counter hate speech and the threat of terrorism.
Meanwhile, police probing the online presence of the terror suspect and his involvement in far-right chat boards and other internet activity have met with some resistance.
In one email exchange, New Zealand police requested an American-based website preserve the emails and IP addresses linked to a number of posts about the attack, but were met with an expletive-filled reply.
- Stuff with AAP and BusinessInsider.com.au
Katina Michael in Matthew Rosenberg, March 20, 2019, “Alarm raised about Facebook livestream mid-attack in Christchurch, man claims”, stuff.nz, https://www.stuff.co.nz/national/christchurch-shooting/111412396/fewer-than-200-people-watched-shooters-christchurch-massacre-live-video-facebook-says
Disclaimer: The way I was quoted seems to imply that the content moderators at Facebook were partially to blame. This is not what I said in the interview with Matthew. Moderators are not paid to catch this kind of content; they are paid to investigate copyright and controversial content. Humans are at the mercy of the machine on this occasion. It can be likened to 100 people trying to stop leaks in 200,000 buckets. It just cannot happen. What could have stopped this footage from spreading? More predictive AI algorithms, plus total information surveillance of everything coming through the servers, and even then it is not foolproof.
However, this claim has been challenged. Jared Holt, a reporter for Right Wing Watch, said he learned of the livestream while the attack was still underway and immediately reported it to Facebook.
The Global Internet Forum to Counter Terrorism, a consortium of global technology firms including Facebook, Google and Twitter, said it had shared the digital “fingerprints” of more than 800 edited versions of the video.
Professor Katina Michael of the University of Wollongong said algorithms can only do so much to prevent certain content from being uploaded, and human moderators are already struggling to keep up with the flood of questionable content.
Katina Michael with Lindsay McDougall, March 19, 2019, “Facebook Livestreaming and the NZ Massacre: A question of corporate responsibility”, ABC Illawarra Radio: Drive, https://radioinfo.com.au/news/lindsay-doctor-mcdougall-loves-illawarra-hosts-new-abc-drive-program.
Thanks also to ABC Producers Rory McDonald and Jake Cupitt.
© 2019 News Limited. All rights reserved.
Commuters are willing to use facial recognition technology that charges them each time they get on and off their mode of transport.
The technology has already become part of everyday life in China — in some cities it is used to verify commuters’ identities through camera technology installed at train stations.
Visa and Stanford University’s new report, Future of Transportation: Mobility in the Age of the Megacity, quizzed more than 20,000 people in 19 countries, including Australia, and found 54 per cent of Australians would be willing to try facial recognition or Bluetooth technology when commuting.
Visa’s head of product for Australia, New Zealand and the South Pacific, Axel Boye-Moller, said that while this type of technology is yet to be used by Australian commuters, this could change.
“We have rolled it (biometric) out through tap and pay and using your mobile, biometrical authentication is an important part of that customer experience,” he said.
“There’s a lot of work going on including some providers that are looking at biometric authentication in a transit environment.”
Mr Boye-Moller said this would allow commuters to do away with using a transport or bank card to pay for their commute.
Instead customers could rely on facial recognition to pay by linking it up to an app that contains their bank information.
Transport NSW has already gone live on ferries, light rail and trains allowing customers to tap and pay with their bank card and/or compatible smartphone or smartwatch when they get on and off transport.
And in Melbourne similar technology is still in the trial phase for commuters using trains, buses and trams.
University of Wollongong professor Katina Michael, an expert in biometrics, said commuters would be open to using biometrics to pay because “they think they are being monitored anyway”.
“We’ve got cameras everywhere and if they are going to be adopted to connect with people’s facial images then people will become complacent,” she said.
“They think they don’t have privacy anyway so it will help them get on and off public transport.”
The report also showed Australians’ reliance on using their own vehicle is the highest in the world — 72 per cent use this mode of transport to get to and from work each day.
It also showed 64 per cent would be willing to pay with a debit or credit card for all modes of public transport.
Citation: Sophie Elsworth, “Travellers want to try facial recognition technology to pay”, The Cairns Post (Sun Herald), 27 February 2019.
I cast doubt on the survey findings of the Stanford University study. As I suspected, it was an online survey, which the reporter confirmed. Over 50 per cent of commuters in 19 countries willing to use their biometrics to pay for transportation costs?
Online surveys attract individuals with very small incentives to fill them out. These are people who are already ‘in sync’ with the gig economy. Postal surveys would yield different results because you are asking a different demographic what they think.
I described consumer responses toward biometrics as complacency in the face of the number of surveillance cameras already tracking people. Have individuals lost faith in privacy?
So many government agencies now demand a passport-style or driver’s-licence-style facial image registration that people are likely sick of enrolling in such systems. The Capability has changed everything, as the government amasses all photographic ID into one big system.
The fact that it will soon no longer be illegal in Australia to drive without a physical driver’s licence card has opened the floodgates to other services being accessed via unique biometrics.
I identified major problems with false positives and false negatives: lost revenue, commuters mistakenly “blocked out of travel” by computer error, and identical twins who could rort the system.
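The scale of the false-positive/false-negative problem can be made concrete with back-of-envelope arithmetic. All rates and figures below are hypothetical illustrations, not drawn from the report:

```python
# Hypothetical daily figures for a city-wide face-pay transit system.
daily_trips = 1_000_000
false_reject_rate = 0.001    # legitimate commuter not recognised ("blocked out of travel")
false_accept_rate = 0.0001   # wrong person matched, e.g. an identical twin
average_fare = 4.50          # assumed average fare in dollars

blocked_commuters = daily_trips * false_reject_rate
misbilled_trips = daily_trips * false_accept_rate
revenue_at_risk = misbilled_trips * average_fare

print(f"{blocked_commuters:.0f} commuters wrongly rejected per day")
print(f"{misbilled_trips:.0f} trips billed to the wrong account, ${revenue_at_risk:.2f} at risk daily")
```

Even error rates that sound tiny on paper translate into roughly a thousand stranded commuters and hundreds of dollars misbilled every day under these assumptions, which is why matching thresholds and fallback payment options matter.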
I had the great pleasure of being interviewed today by Ms Anja Taylor in Los Angeles. Anja works for WildBear Entertainment, which does co-productions with all the major television channels in Australia. She was formerly a researcher and presenter on Catalyst. This interview will form part of the documentary series “Searching for the Super Human”, which will air on ABC in Australia later this year.
Here are some of the topics Anja and I talked about:
Brief discussion about the internet of things and the emergence of big data. What effect / impact this is having on society.
You have mentioned “ambient intelligence” in your articles - what is it?
What are “insertable chips” and what is their brief history? What types of new insertable chips are starting to emerge?
Recently we have seen trials for insertable chips which can be used to open doors or pay for public transport, the trials found largely that people found them useful and painless – do you have concerns with these?
What smart chips are you most concerned with?
We are already being tracked with our smartphones – is this different?
Our pets are now chipped as a matter of course – do you see this happening with humans? What are the implications?
Can we not just opt out? Can it be done responsibly?
A special thank you to Luke for filming.
Government leaders and law enforcement are trying to force tech companies to put backdoors in encryption in the name of public safety. There are 750,000 law enforcement employees and half a million US intelligence community employees who may use those backdoors, and likely many others worldwide. Strong encryption is available throughout the world. If businesses and the general public are forced to use encryption with backdoors, will cybercrooks be the only ones using strong encryption: the very people the backdoors were intended for in the first place? How will Australia’s new law requiring encryption backdoors impact data security and privacy? Who has oversight of that law? How will it impact other countries? Does any evidence show that encryption backdoors have improved safety or security? Rebecca discusses these and related issues with Dr. Katina Michael, director of the Centre for Engineering, Policy and Society at Arizona State University. Katina is also a privacy and uberveillance pioneer.
Source: Katina Michael with Rebecca Herold, 5 February 2019, “Will Australia’s Encryption Law Kill Privacy in the Name of Safety?”, Data Security and Privacy: Voice of America, https://www.voiceamerica.com/show/episode/112884
As the Federal Government today pushes the button to create My Health Records for every Australian who wants one, the industry has spoken out, asking for more transparency around security and secondary use of the records to enable people to make more informed decisions.
The industry has also voiced concerns about data de-identification and re-identification, a global approach to cybersecurity issues as healthcare digitises, the information security requirements of the future, and blockchain as a way to alleviate some of the challenges associated with the My Health Record system.
On 26 November 2018, the Federal Parliament passed legislation to strengthen privacy protections in the My Health Records Act 2012 without debate or division.
The new legislation means that Australians can opt in or opt out of My Health Record at any time in their lives. Records will be created for every Australian who wants one after 31 January, and from then on they can choose to permanently delete their record at any time.
The date of 31 January follows much deliberation from the Federal Government to extend the opt-out date. Australians initially had until 15 October 2018 to opt out of the national health database, or a My Health Record was to be created for them by the end of that year.
But following the opposition calling for an extension to the opt-out period, the public outcry against the potential for the data to be shared with police and other government agencies, a leaked government document detailing the Australian Digital Health Agency’s response to concerns and a raft of changes recommended by the Senate Inquiry into My Health Record, the Federal Government pushed this date back and relaxed its stance on when Australians can opt in or opt out of the system.
Australian Academy of Technology and Engineering (ATSE) President Professor Hugh Bradlow said the collection of health data across the population will result in better health outcomes as it not only shows how effective interventions are, but also allows treatments to be personalised based on the experience of thousands of other patients.
“New forms of measurement (based on artificial intelligence) will also give patients far more significant information about institutional performance, practitioner performance, the outcomes of specific interventions, etc.” he said.
The Society of Hospital Pharmacists of Australia (SHPA) Chief Executive Kristin Michaels said the My Health Record debate highlighted the need for an integrated ehealth system, accessible only to health professionals and set up at the request of health organisations, for the benefit of all Australians.
"All Australians, regardless of any illness or condition, deserve to get the highest-quality care,” Michaels said.
“More often than many would think, patients are unable to explain the medicines they are already taking and for what conditions they are already being treated, particularly after a seizure or if unconscious. Many of these patients are unaccompanied. Sometimes this lack of information leads to errors that have serious impacts on people’s lives.
“[Hence] hospital pharmacists have long called for a shared, electronic patient data system that links up a fragmented health system and empowers patients in their own care."
The issue of security
However, University of Melbourne Department of Computing and Information Systems Cybersecurity Senior Lecturer Associate Professor Vanessa Teague expressed her concerns around the privacy implications of secondary uses of My Health Records not being accurately explained.
“Both doctors and patients can be easily and confidently identified in a dataset… In the case of patients, this means that a few points of information, such as the patient's age and dates of surgeries or childbirths, is enough to identify the person and thus, retrieve all their Medicare bills and PBS [Pharmaceutical Benefits Scheme] prescriptions for many years.
“Easy and confident re-identification has been demonstrated on numerous other datasets that were shared in the mistaken belief that they were de-identified. It is probably not possible to securely de-identify detailed individual records like My Health Records without altering the data so much that its scientific value is substantially reduced.”
Teague said patients may choose to opt out of secondary uses of their data but are unable to make a “genuinely informed decision” if they are inaccurately told that their detailed record cannot be identified.
“Even more importantly, those whose identifiable MBS [Medicare Benefits Schedule]-PBS records were already published in 2016 should be notified, because the earlier release could make re-identification of their My Health Records much easier,” she said.
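Teague’s point about re-identification can be sketched with a toy example (the records below are hypothetical, not actual MBS-PBS data): a couple of quasi-identifiers known from elsewhere, such as an age and a surgery date, can be enough to single out one record in an otherwise “de-identified” dataset.

```python
# Hypothetical "de-identified" records: names removed, but quasi-identifiers remain.
records = [
    {"age": 34, "surgery_date": "2016-03-02", "diagnosis": "condition A"},
    {"age": 34, "surgery_date": "2015-11-20", "diagnosis": "condition B"},
    {"age": 57, "surgery_date": "2016-03-02", "diagnosis": "condition C"},
    {"age": 42, "surgery_date": "2017-07-09", "diagnosis": "condition D"},
]

def matching_records(dataset, **quasi_identifiers):
    """Return every record consistent with the known quasi-identifiers."""
    return [r for r in dataset
            if all(r[key] == value for key, value in quasi_identifiers.items())]

# An attacker who knows only a target's age and one surgery date
# (e.g. from social media) narrows the dataset to a single record:
hits = matching_records(records, age=34, surgery_date="2016-03-02")
print(len(hits))             # 1: the target is uniquely identified
print(hits[0]["diagnosis"])  # and their sensitive details are exposed
```

At realistic scale the same logic applies: each additional known attribute shrinks the set of matching records geometrically, which is why removing names alone does not de-identify detailed longitudinal health data.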
Harvard Medical School International Healthcare Innovation Professor Dr John Halamka also previously criticised the system for relying on outdated technology, saying that the $2 billion My Health Record was nothing more than “digitised paper” as it uses such “out-of-date” technology that crucial patient information on test results and diseases are unable to be read or shared by computers.
University of Wollongong School of Computing and Information Technology Professor Katina Michael said health data breaches, for some, could have a huge impact.
She used the recent example from Singapore, where 1.5 million Singapore health records were breached in a highly targeted effort on SingHealth. Among the breached health records was Singapore Prime Minister Lee Hsien Loong's personal records.
“What does this tell us when one of the world's most advanced cybersecurity nations suffers such a large-scale attack? Plainly, that no one's personal information is safe, no matter the measures in place,” she said.
"If we have learnt anything over the last four months, it is that electronic health records are hackable. We need not have to look too far to see that no system is impenetrable.”
Michael also speculated that blockchain initiatives may be ramped up to beef up My Health Record security.
“We will likely be told in the not too distant future that we wildly underestimated our security requirements and as such, must go one step further and protect our credentials,” she said.
According to Professor Michael, this involves implanting into a person a 16-digit Personal Health Record (PHR) ID number on a device that also reads vital signs while embedded. The technology then alerts first responders to ailments and medications without the person needing to provide any information.
ATSE’s Bradlow said the industry needs to be “realistic” about it as the danger of data leaking due to cyber hacking is as true as hacking any other data system.
“Let’s remember that many [healthcare professionals] have easy access to today’s paper-based health records – an electronic record is actually a step up in privacy. Within My Health Record, we can make it the default to require a patient access code,” he said.
“A well-designed record system which is managed by a professional security organisation and has a clear audit trail, for example, provided by blockchain, can mitigate this risk significantly."
Source: Hafizah Osman, 31 January 2019, “Industry calls for more caution over MHR system”, https://www.healthcareit.com.au/article/industry-calls-more-caution-over-mhr-system
Note: Thank you Hafizah Osman— interestingly I was referring to the VeriChip experiment of the PHR that Dr John Halamka trialled for a short time and wrote about in 2006 here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1656959/
Today is the final day to opt out of the controversial My Health Record. So should you?
Here are the major arguments for and against the national medical database.
In the pro-MyHealth camp is a range of doctors and scientists, including most of the peak health bodies.
They say having a national, digital record of individual health “journeys” will stop medication errors (and doctor shopping), give health specialists a heads-up on a patient when they see them for the first time, and help patients track their own treatment.
They argue the patient will control what goes on file, and stays on file. Parents will have control over young children’s files. There will also be a stream of data that will help researchers get better outcomes.
The Federal Government recently added extra safeguards, including stopping insurers from accessing data, further restricting access by law enforcement and government agencies, and doling out bigger penalties for improper use. A person can now permanently delete their record.
Individual doctors have concerns they might be liable for other doctors’ mistakes, but the main industry authorities are all gung-ho.
The Australian Medical Association argues MyHealth has improved. President Tony Bartone says it is now a “far better product” than it was.
“Australians can be assured that it’s as good as possible,” he says. “It is going to aid in the clinical outcomes of a vast number of Australians, and prevent unnecessary medication errors, unnecessary hospital readmissions.
“It’s going to help with mapping out the journey that is very complex through the whole health system, and hopefully become that backbone that improves the communications and connectivity that is sadly lacking in our health system at the moment.”
Pharmacists say often patients don’t know the medicines they’re on – or they might be unconscious – and this can lead to errors.
The Society of Hospital Pharmacists of Australia says it’s critical to make sure patients get the right drugs, particularly in emergencies.
“All Australians, regardless of any illness or condition, deserve to get the highest-quality care,” chief executive Kristin Michaels says.
Professor Hugh Bradlow, president of the Australian Academy of Technology and Engineering, says plenty of people already have access to paper records, so MyHealth is actually a “step up” in privacy.
The Federal Government says the data will be secure and will only be accessed by healthcare workers who legitimately need it.
People can also elect to be notified when an organisation accesses their record and see a log of every access.
Greg Hunt guarantees My Health security
In the anti-MyHealth camp are many digital privacy experts who say the system is flawed and that any system that collects that much data on people is a tantalising prospect for hackers. Some groups are worried it could be misused to stalk someone.
Unless people take action, from today records containing vital health information, including any records of sexually transmitted diseases or mental health issues, will be kept.
Computing expert Katina Michael, from the University of Wollongong, pointed to a large-scale attack on Singapore’s health records, where 1.5 million records were breached.
“If we have learnt anything over the last four months, it is that electronic health records are hackable,” she said.
“Plainly … no one’s personal information is safe, no matter the measures in place.”
As well as the possibility of someone illicitly accessing your records, some experts worry about secondary uses of data. Information that has theoretically been “de-identified” – had the personal details removed – will be given to researchers.
Cybersecurity senior lecturer Associate Professor Vanessa Teague, from the University of Melbourne, said “de-identifying” the data had been shown not to work because other information such as surgery dates could be used to “re-identify” the person.
“It is probably not possible to securely de-identify detailed individual records like MyHealth records without altering the data so much that its scientific value is substantially reduced,” she said.
The Federal Opposition is not entirely against MyHealth – it established the electronic health record system that preceded it. But they’re taking the opportunity to have a crack at the Government. Yesterday Opposition health spokeswoman Catherine King said the already extended deadline should be extended further.
Labor also wants an independent Privacy Commissioner review and has pledged to start one if it wins this year’s election.
Meanwhile, The Advertiser has revealed today that ambulance paramedics can’t access the record – and they’re the people most likely to need to know if you have a pre-existing issue or deadly allergy.
The Australian Digital Health Agency, which runs MyHealth, says that access has not been activated yet. On top of that, there are complaints that bad IT setups in public hospitals mean doctors in emergency departments can’t access the records anyway.
In the end – as Health Minister Greg Hunt said yesterday – “it’s every Australian’s choice”.
Source: Tory Shepherd, January 31, 2019, “My Health Record: To opt in or out? The case for both sides”, news.com.au The Advertiser, https://www.news.com.au/national/south-australia/my-health-record-to-opt-in-or-out-the-case-for-both-sides/news-story/a5a4ac4b6d1999eea9dcf057de1d04e9
Australian Print: (1)
Local Radio: (3)
Zinc 666 (+2)
Australia Online: (27)
Healthcare IT Australia
The Daily Telegraph
The Sydney Morning Herald
SBS - Greek
Gold Coast Bulletin
The Canberra Times
MSN - Australia
Australian Health Information Technology
SBS Turkish (EN)
Overseas Online: (2)
Over 33 million instances of reach related to eHealth were recorded for the Australian Science Media Centre campaign.
EXPERT REACTION: Chinese government report on rogue scientist behind GM babies
Publicly released: Tue 22 Jan 2019 at 1107 AEDT | 1307 NZDT
It has just been reported the Chinese scientist who claimed to have used gene-editing to create the world’s first genetically modified babies “deliberately evaded oversight” in a quest for fame and fortune, according to a Chinese government report. An investigation set up by the Health Commission of China also found the scientist raised funds himself, privately organised a team of people and forged ethical papers to enlist volunteers for the procedure. You can read the full Reuters report below.
Organisation/s: Australian Science Media Centre
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Professor Julian Savulescu is Director at the Oxford Uehiro Centre for Practical Ethics, University of Oxford, and Visiting Professorial Fellow at the Murdoch Children's Research Institute
"The response to reckless human experimentation has to go way beyond Dr He's dismissal. This is not merely a failure of compliance; Dr He failed to grasp the ethical principles and concepts he was vigorously espousing. There will undoubtedly be more guidelines and laws on gene editing, but we also need basic education of the next generation of scientists in what ethics is and why this kind of behaviour is wrong. This was not a failure of science, or even regulation, but ethics.
More important than He's fate is the future for those victims affected. The couples and babies will need world class medical management and counselling. The second couple carrying a gene edited pregnancy should have already been fully informed of and understood the risks to their fetus and given the free choice to continue or terminate their pregnancy."
Last updated: 23 Jan 2019 11:10am
Professor Simon Foote is Director of The John Curtin School of Medical Research at the Australian National University (ANU)
"This is a straightforward criminal matter and should be treated as such.
The only problem is that the consequences of this act will be inherited by the children of those whose DNA was altered and their children. It is even possible that these mutated genes will form part of our contribution to the genetic makeup of generations to come.
The consequences may be catastrophic. The only way to stop this is to monitor the future fertility of those treated.
The other issue is that there may be health consequences for the children thus treated. They need to be closely monitored."
Professor Katina Michael is from the Faculty of Engineering and Information Sciences at the University of Wollongong
"Critical to this investigation has been the deliberate and calculated action by the recognised scientist to avoid scrutiny of a research ethics board that has oversight over genetic and health and medical research. The evidence against the scientist demonstrates a premeditated act to circumvent due process in an academic institution.
A secondary concern has to do with the act of applying CRISPR technology in private to gene edit a baby without recognising the socioethical and legal implications for the public. This lone action will only encourage others to follow suit, especially in private practice, ignoring processes that have withstood the test of time since the Universal Declaration of Human Rights.
We are at an inflection point in history: to forge on with limitless genetic discovery with unforeseen consequences, or to comprehend what it means to be human and humane. We need commensurate safeguards in industry to ensure ethics boards hold research that is privately funded to the same scrutiny as university research. And this is a much more difficult objective to attain."
Dr Darren Saunders is a cancer biologist and senior lecturer in pathology at the University of New South Wales
"The use of gene editing to alter the human germline without a clear medical need or careful weighing of the significant risks involved was a deeply disturbing application of powerful and promising technology, apparently done largely for the fame and fortune of the scientist(s) involved. They have not only placed the health of these babies at risk without a clear medical need, but also placed a cloud over this technology that will likely hinder its justifiable, ethical and responsible use in many other areas of real need, where it holds significant promise.
The preliminary findings reported here indicate that the scientists were deliberately operating outside ethical and regulatory boundaries, which most responsible scientists will find deeply troubling. It is really encouraging to see swift action by Chinese authorities; it sends a clear statement of zero tolerance for this reckless and dangerous behaviour."
Dr Dimitri Perrin is a Senior Lecturer at Queensland University of Technology.
"Gene editing has great potential, but as soon as this story broke last November, the experiment appeared to be a poorly designed and regrettable effort to win a 'race' and grab attention. This latest report confirms what was feared.
Editing human embryos is premature: the long-term effects are still unclear. This experiment should not have taken place, and must not open the door to other similar studies at this stage.
However, there is a place for responsible gene editing, in health and in other domains such as agriculture. It is crucial that we openly engage with the general public so that everyone can understand more about the technology and contribute to discussions on which applications would be acceptable and which ones should stay off limits."
Dr Clovis Palmer is a Senior Monash University Fellow and head of the Immunometabolism and Inflammation Laboratory at the Burnet Institute
"Ethical guidelines are put in place to protect the public, prevent exploitation of vulnerable populations, and maintain the integrity of the scientific community and institutions. We do need to push scientific boundaries to advance humanity, but within the boundaries of ethical guidelines.
We still don’t know how safe the gene-editing technology CRISPR-Cas9 is. The presumption that this technology can be used to fight against HIV demonstrates a lack of understanding of the challenges and tools we need to combat the epidemic. The Tuskegee syphilis experiment, between 1932 and 1972, shows us what can go horribly wrong when science evades rigorous ethical oversight."
Citation: Katina Michael, January 22, 2019, “Chinese government report on rogue scientist behind GM babies”, Scimex, https://www.scimex.org/newsfeed/expert-reaction-chinese-government-report-on-rogue-gm-baby-scientist