What might MyHR mean for workers in Australia?

Unions claim that employers could gain access to the record through third parties under the default clause, while the government says the section below overrides the default clause.


Pilots and GPs are just two occupations in which employers may seek access to health records, under the guise of "duty of care" or "due diligence", using the "third parties" clause.

We are living in a society where people are being routinely socially sorted into "at risk" categories based on various digital and physical chronicles. Should a pilot who seeks help to manage stress levels be stood down? Should a GP who has gone through relationship problems due to long work hours and is mildly depressed have their license to practice suspended?

The government is backpedalling on its claim that third parties will not have access to health records, in the face of seemingly contradictory legislation (section 70).

Fundamentally, what is new here? Law enforcement has always had the right to override someone's privacy under the proportionality principle, from the very enactment of Australia's Privacy Act; letting third parties access sensitive information (which health information is) is a completely different proposition. The fact that the legislation is seemingly contradictory leaves Australian workers second-guessing whether their individual case will be dealt with differently based on their employer's interpretation of the law.

It is one thing for an existing employee to have their license to practice revoked based on their MyHealthRecord, and quite another for a candidate not to be hired for a job because of it. How would they ever know? It used to be that social media profiles of potential employees were screened for "best fit", but the future might be: "show us how mentally and physically healthy you are, and we will tell you how likely we are to hire you".

Many GPs have deleted their own electronic health records to ensure they don't fall victim to such retrospective uses of the MyHealthRecord.

Section 14(2) of the Healthcare Identifiers Act 2010:

(2) This section does not authorise the collection, use or disclosure of the healthcare identifier of a healthcare recipient for the purpose of communicating or managing health information as part of:
(a) underwriting a contract of insurance that covers the healthcare recipient; or
(b) determining whether to enter into a contract of insurance that covers the healthcare recipient (whether alone or as a member of a class); or
(c) determining whether a contract of insurance covers the healthcare recipient in relation to a particular event; or
(d) employing the healthcare recipient.

MY HEALTH RECORDS ACT 2012 - SECT 61 Collection, use and disclosure for providing healthcare
(1) A participant in the My Health Record system is authorised to collect, use and disclose health information included in a registered healthcare recipient's My Health Record if the collection, use or disclosure of the health information is:

(a) for the purpose of providing healthcare to the registered healthcare recipient; and
(b) in accordance with:
(i) the access controls set by the registered healthcare recipient; or
(ii) if the registered healthcare recipient has not set access controls--the default access controls specified by the My Health Records Rules or, if the My Health Records Rules do not specify default access controls, by the System Operator.

MY HEALTH RECORDS ACT 2012 - SECT 5 Definitions
"healthcare" means health service within the meaning of subsection 6(1) of the Privacy Act 1988.

PRIVACY ACT 1988 - SECT 6FB Meaning of health service
Meaning of health service
(1) An activity performed in relation to an individual is a health service if the activity is intended or claimed (expressly or otherwise) by the individual or the person performing it:

(a) to assess, maintain or improve the individual's health; or
(b) where the individual's health cannot be maintained or improved--to manage the individual's health; or
(c) to diagnose the individual's illness, disability or injury; or
(d) to treat the individual's illness, disability or injury or suspected illness, disability or injury; or
(e) to record the individual's health for the purposes of assessing, maintaining, improving or managing the individual's health.

So a provider can assess, diagnose and record information subject to the “access controls” set by the user. This is where the issue of default settings comes into play.

Default Settings of My Health Record
How is consent managed in the My Health Record system?
By default, when an individual registers for a My Health Record they give standing consent for all registered healthcare provider organisations to access and upload information to their My Health Record. Healthcare professionals working in healthcare provider organisations can:
Access the individual's My Health Record during, or in regard to, a consultation or clinical event involving the individual; and
View all documents in the My Health Record system and upload documents to the My Health Record, unless the individual specifically requests the healthcare professional not to upload the document.

Facial recognition, law enforcement and the risks for and against

Katrina Dunn of Ideapod interviews Katina Michael of UOW.

ANALYSIS: Human Microchipping Poses Dangers to Health, Privacy

WASHINGTON, April 30 (RIA Novosti), Lyudmila Chernova – Although hardly a
novel idea, microchipping humans arouses justified concerns about risks to health and
privacy, experts told RIA Novosti Wednesday.

“Along with the potential risks to health, there is a real risk to freedom and privacy, one of the key purposes of RFID is the tracking technology. Besides, numbering people is very dehumanizing. It turns you into a barcode on a package of meat that gets tracked like inventory,” said Dr. Katherine Albrecht, an RFID microchip and consumer privacy expert.

Katina Michael, an associate professor at the University of Wollongong, echoed the
opinion, stating that implanting automatic identification technology for non-medical
purposes could entail the total loss of the right to privacy.

“There is a grave danger in it, as someone who gets an implant does not have control
over bodily privacy. They cannot remove the implant on their own accord. They do not
know when someone is attempting to hack into their device, no matter how proprietary
the code that is stored on the device, and no matter whether the implant has built-in
encryption,” Michael told RIA Novosti.

In 2007 Albrecht and Associated Press reporter Todd Lewan revealed to the public studies showing that microchips caused cancer when implanted into laboratory animals. The finding led to the suspension of the VeriChip company’s work.
“In our research we found that between one and ten percent of laboratory animals
implanted with radio frequency microchips developed cancer adjacent to and even
surrounding the microchips,” Albrecht said.

“Pacemakers can also cause cancer, but in a case of a pacemaker where the alternative
is literally dying, it is worth the risk. However, in a case of something like an
identification microchip or dosages of drugs being delivered to the body, that does not
make any sense. Most people would prefer to simply take those drugs themselves than
run the risk of an implant,” she added.

Dr. Michael also explained that implanting microchips is not new in the health industry,
as society has already adopted implantables for a variety of uses. However, implantables
for medical applications or for the identification of animals have a number of
documented health side effects in line with Dr. Albrecht’s opinion.

“People with microstimulators have described … varying levels of neurological response that were not as prescribed, … or health implications such as infection, or even ongoing stress,” said Michael, adding that there is a whole gamut of health issues that no one is really studying properly.

The expert claimed that these kinds of technologies are being tested already, but have
not yet been approved by the FDA for use as medical devices.

However, Albrecht said that the FDA appears to have never looked at the studies
pointing to the dangers.

“One of the things I learned is that the FDA relies on the company that’s looking for the
approval to provide the evidence of the safety and of the danger of the product. They
don’t do independent research, and I think there is a very serious potential to having the
companies be the ones that determine the safety of their own product,” she said.

The VeriChip Corporation implanted identification microchips into diabetic and
Alzheimer's patients as a trial with Blue Cross Blue Shield in 2007. The trial was stopped
due to cancer risks.

In recent years, advocates of the technology have promised neural implants that could stimulate the brain to help people with depression, and implants that would deliver measured doses of medication, possibly under remote control. The technologies involved are not new, and neither is the argument over their appropriateness.

Tags: microchipping, privacy, technology

Lyudmila Chernova, April 30, 2014, "ANALYSIS: Human Microchipping Poses Dangers to Health, Privacy", Ria Novosti [РИА Новости], http://en.ria.ru/business/20140430/189481760/ANALYSIS-Human-Microchipping-Poses-Dangers-to-Health-Privacy.html

Are disaster early warnings effective?



Kerri Worthington, SBS Radio
Katina Michael, University of Wollongong
Peter Johnson, ARUP
Paul Barnes, Queensland University of Technology


Details can be found here: http://www.sbs.com.au/podcasts/Podcasts/radionews/episode/251657/Are-disaster-early-warnings-effective



Australia's summer is traditionally a time of heightened preparation for natural disasters, with cyclones and floods menacing the north and bushfires a constant threat in the south. And the prospect of more frequent, and more intense, disasters thanks to climate change has brought the need for an effective early warning system to the forefront of policy-making. Technological advances and improved telecommunication systems have raised expectations that warning of disasters will come early enough to keep people safe. But are those expectations too high? Kerri Worthington reports.

Increasingly, the world's governments -- and their citizens -- rely on technology-based early warning systems to give sufficient notice to prepare for disaster. The 2004 Indian Ocean tsunami that killed around a quarter of a million people led to the establishment of an early warning system for countries bordering the ocean. Last year, Indonesian president Susilo Bambang Yudhoyono praised the system for warning people to prepare for a possible tsunami after an 8.6 magnitude quake beneath the ocean floor northwest of the country. Japan's years of preparedness are also credited with saving lives in the 2011 earthquake and subsequent tsunami.

In Australia, the Federal Government has instigated an 'all-hazards' approach to early warnings, including terrorist acts as well as natural disasters, in the wake of a number of international terrorist attacks that affected Australians. Professor Katina Michael of the University of Wollongong specialises in technologies used for national security. Professor Michael has praised Australia's location-based national emergency warning system which allows service providers to reach people in hazardous or disaster areas, locating them through their mobile devices. "And that's a real new innovation for the Australian capability which I think is among the first in the world to actually venture into that mandated approach to location warning of individuals. And this allows people who are visiting a location, maybe working in a location they're not residing in, or maybe enjoying recreation activities in a location to be warned about a hazard." But there are concerns those systems can breed complacency.
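At its simplest, the location-based warning capability Professor Michael describes is a geofence test: find the devices whose last-known position falls inside a hazard zone and push an alert to them. The sketch below illustrates only that distance test, using hypothetical device data and function names; the real Emergency Alert system works through carriers and is far more involved.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = rlat2 - rlat1, rlon2 - rlon1
    a = sin(dlat / 2) ** 2 + cos(rlat1) * cos(rlat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def recipients(devices, hazard_lat, hazard_lon, radius_km):
    """Return IDs of devices whose position lies inside the hazard circle."""
    return [dev_id for dev_id, (lat, lon) in devices.items()
            if haversine_km(lat, lon, hazard_lat, hazard_lon) <= radius_km]

# Hypothetical last-known device positions.
devices = {
    "phone-a": (-34.425, 150.893),   # Wollongong CBD
    "phone-b": (-34.427, 150.891),   # a few hundred metres away
    "phone-c": (-33.868, 151.207),   # Sydney, well outside a 10 km zone
}

# Who should be alerted about a hazard centred on Wollongong, 10 km radius?
print(recipients(devices, -34.425, 150.893, 10.0))  # → ['phone-a', 'phone-b']
```

The point of the mandated, location-based approach is visible even in this toy: visitors and commuters are selected by where their device is now, not by where they live.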

Peter Johnson is a fellow at Arup, a global firm of designers, planners, engineers and technical specialists. "There is a concern about people in communities being too reliant just on official warnings to trigger actions. There's people in the community who think 'well I don't need to do anything, I just have to wait and someone will tell me what to do' and ignore the personal responsibility for their response and actions, so that's an issue. There's another issue about official warnings in some cases may come too late in flash floods or days of very high fire danger and rapid spread."

Mr Johnson says warnings need to be timely and relevant, with minimal false alarms to avoid 'warning fatigue', where people ignore alerts. That's an issue Victoria's Country Fire Authority is currently grappling with. It has come under criticism after hundreds of people reported that its FireReady mobile app, which gives the location of fires and fire conditions, is unreliable. Many Victorians are anxious about early warning of impending fires, after many were taken by surprise -- with some fatal consequences -- in the Black Saturday fires of 2009. Fire experts say it's important not to rely on only one source of information for disaster warnings. And Peter Johnson says government bodies need to set warnings within an overall emergency management context. "We need the risk knowledge, we need the planning, the pre-event information and the broad season warnings and alerting us to days of flooding or total fire ban. Equally we need to understand, and probably better understand, the response of people and communities to those warnings and what actions are taking place."

Paul Barnes, the coordinator of the Risk and Crisis Management Research Domain at the Queensland University of Technology, agrees early warning policies need to be part of a broader risk and hazard communication capability. "When we have natural and socio-technical disasters often we start with the natural phenomena, the natural threat. We had seismic activity, earthquakes in Japan, bushfires, flooding in Australia. But very quickly the impacts from that initial source impact on technical hazards, technical issues, so we lose infrastructure systems, we lose telephony. We also therefore have, in some cases, biological problems in terms of water supply being contaminated." Dr Barnes says often what starts out to be one type of problem quickly cascades into others, and information about ongoing issues needs to be communicated to the public. "Once the initial event occurs, there will be an ongoing need to have continuing types of information flow to the public about cascading elements and the connective elements of these sorts of impacts as they go through time. So the basic principle of the complexity of the situation and matching the sophistication and adaptability of information that needs to go to the public, and also those not affected -- emergency responders, government officials, etc -- is a very complex situation that requires some very sophisticated application of thinking."

Suggested Citation

Kerri Worthington, Katina Michael, Peter Johnson, and Paul Barnes. "Are disaster early warnings effective?" SBS Radio: World News Jan. 2013. Available at: http://works.bepress.com/kmichael/318

Review: “Control, Trust, Privacy, and Security: Evaluating Location-Based Services”


The Navtrak website proudly tells businesses that “with the Navtrak GPS vehicle tracker, your [fleet] insurance risks decrease dramatically....” In October 2003, Wired reported that “The Georgia Institute of Technology is sponsoring a study using global positioning systems to track the movements of cars and monitor the motoring habits of their drivers.”

A common complaint among those who like to imagine vast government conspiracies and alien abductions is that of the feared “implant,” essentially a radio frequency ID (RFID) chip, used to track the recipient’s movements.

The following is the fourth and last review of articles about ethical and philosophical considerations for security and privacy in technology from the Spring 2007 (vol. 26, #1) issue of IEEE Technology and Society Magazine.

“Control, Trust, Privacy, and Security: Evaluating Location-Based Services” by Laura Perusco and Katina Michael

The use of location based services (LBS) has long been a figure in popular science fiction. Practical tracking of individuals for the benefit of society is not a new possibility, as the Wired quote indicates. Only recently has technology, cost, and desire merged to create the necessary atmosphere. Today, such an ability is even bragged about as a way for a business to save money.

Ms. Perusco and Ms. Michael use LBS in their article as a concrete example of technology’s ethical ambiguity. Generally, an LBS is any service that uses the position of something for a specific purpose; GPS and RFID are among the technologies used to deliver such services.

The use of LBS creates special ethical and legal questions. Who has accountability for the accuracy and availability of location information? Under what circumstances can a user opt-in or -out of LBS? What are the rights of caregivers and guardians to the location information of their charges? How long is location information stored?

The authors use five short stories, which they call scenarios, to set up the discussion of these issues. Because the authors are from the University of Wollongong in Australia, they conduct their analyses from an Australian social and legal perspective.

There exists a serious disparity between technological progress and its implications for the future, especially in terms of security and privacy. This, the authors argue, requires increased scrutiny. Their article is one attempt.

The first scenario, “Control Unwired,” explores vulnerability. Kate, working late in the big city, comes close to mortal peril as she struggles to use her PDA to locate and call a cab.

The second scenario, “The Husband and His Wife,” highlights the threat to personal autonomy. Unhappy Colin wears an RFID chip in his shirts so his wife Helen can keep track of his movements. She worries about his health after a scare with angina.

Next, “The Friends and Colleagues” examines group control. Scott and his girlfriend Janet debate the government’s increased use of RFID chips implanted into parolees. As a parole officer, Scott argues the benefits to society. Janet, though, worries that the government could expand tracking further into the general population.

The fourth and fifth scenarios combine to show the dangers of misplaced trust in technology. At a routine visit to a parolee, Scott checks that Doug’s RFID is functioning properly. After Scott leaves, we see that Doug has spoofed the system. He can leave the chip at home while he goes out for his own particular kind of fun.

Together, these scenarios present a bleak picture of people who have lost control over their autonomy. For example, we see Kate who cannot get a cab without the aid of her PDA, putting her safety in jeopardy. Then there’s poor Colin, whose movements are monitored and restricted by his well-meaning wife.

Additionally, we have examples of false security from LBS. Colin gets the better of his wife when the battery dies while she’s on a plane. Doug can go on the prowl after he cuts out his RFID.

In the real world, the situation is no better. The authors report that following the July 2005 London subway bombings, the Australian government passed laws allowing people merely suspected of terrorist activities to be tracked with wearable devices.

The scenarios prompt many questions, none of which have obvious answers. When can mere suspicion justify the ultimate invasion of privacy–our bodies? Who decides when intensive monitoring is for “our own good?”

A long running debate centers on whether technology is neutral or has an inherent social impact. “Guns don’t kill people; people kill people.”

Ms. Perusco and Ms. Michael state that “[t]hese situations [the stories] imply that LBS is not neutral, and that the technology is designed to enhance control in various forms.” (p. 11) In this case, though, they fail to mention that LBS are primarily used to monitor and control inventory, which most would consider neutral.

Technological determinism is the theory that technical developments drive the way we live. The authors counter that technologies which cannot find a market never develop enough to change society. For example, electronic tracking requires LBS. The use of LBS on people requires a society strongly concerned with security. Social needs and technology mesh.

Society must also be wary of the consequences of relying heavily on any technology. “If we become as reliant on LBS as we have become on other technologies like electricity, motor vehicles, and computers, we must be prepared for the consequences when (not if) the technology fails” (p. 12), write the authors.

As in the previous three articles from IEEE Technology and Society Magazine summarized here, “[t]he principal question is: how much privacy are we willing to trade in order to increase security?” (p. 13)

The authors ask whether the widespread use of LBS will have a long-term positive or negative effect on society and individuals. “[N]ot all secondary effects can be foreseen. However, this does not mean that deliberating on the possible consequences is without some genuine worth.”

Read all the articles in this series:

·      “Review: Privacy and Security as Ideology“

·      “Review: Designing Ethical Phishing Experiments“

·      “Review: Good Neighbors Can Make Good Fences“

·      “Review: Privacy and Security a Synthesis“

06.18.2007. | Categories: Literature Review


Citation: JML Research, Review: "Control, Trust, Privacy, and Security: Evaluating Location-Based Services" by Perusco & Michael (2007), November 23, 2008.