The disrupted digital frontier: How emerging technology of today will shape who we become tomorrow.
You are invited to join us in Sydney to hear our expert panel discuss disruptive technology and what it will mean for you.
Technology has created a state of perpetual revolution and is already disrupting traditional markets and social structures, changing the way we interact with the world around us.
Digital disruption will eventually affect every corner of Australian business and society. It will rewrite economics, scramble supply chains, blur category boundaries and make us question our ethics. How will your business be impacted and how will you respond to become a digital survivor?
Join our expert panel of University of Wollongong alumni and academics to explore the technological, social and economic impacts of these emerging technologies.
Meet our expert panel:
Professor Katina Michael, UOW alumna and Professor, Faculty of Engineering and Information Sciences, UOW
Prof Katina Michael’s contribution to the future of emerging technologies is vast, and includes her editorship of the award-winning Institute of Electrical and Electronics Engineers (IEEE) Technology & Society Magazine from 2012 to 2017.
She is a Professor in the School of Computing and Information Technology at UOW and until recently she was the Associate Dean – International in the Faculty of Engineering and Information Sciences.
Since 2008 she has been a board member of the Australian Privacy Foundation, and was formerly its Vice-Chair. Prof Michael researches the socio-ethical implications of emerging technologies. She has written and edited six books, and has guest edited numerous journal special issues on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance. In 2017, she was awarded the prestigious Brian M. O'Connell Award for Distinguished Service to the IEEE Society on the Social Implications of Technology (IEEE SSIT).
Dr Michael holds a PhD in Information & Communication Technology from the University of Wollongong (2003) and a Master's in Transnational Crime Prevention (2009).
Dr Shahriar Akter, Associate Professor of Digital Marketing, Analytics & Innovation at the Sydney Business School, UOW.
Dr Shahriar Akter was awarded his PhD by the UNSW Business School, Australia, with a doctoral fellowship in research methods from the University of Oxford.
He has published in leading business and management journals with a Google Scholar h-Index of 20 and more than 1800 citations since 2013. He received the UOW Vice Chancellor's award for teaching, a nomination for excellent research supervision and several prestigious awards for research. He has won various internal and external grants, including more than $100,000 in 2017, mostly for his research on business analytics of big data.
He was awarded the 2018 Paper of the Year Award by the journal Electronic Markets for his research on big data analytics. Dr Akter is an advisory board member of WebHawks IT and is also the Chief Advisor of Digital Marketing Next, which investigates digital, social and analytics applications. He is also a member of the Australian Direct Marketing Association (ADMA) and the Institute of Analytics Professionals of Australia (IAPA).
Dr Alex Badran, UOW alumnus and co-founder, Spriggy
Dr Alex Badran left his job at Citigroup to co-found Spriggy - a mobile app allowing kids to manage their pocket money with the help of their parents. Spriggy launched in 2016 and now has over 100,000 members. The co-founders met while working as derivatives traders at Citigroup, and connected over the belief that financial institutions should do more to help their users live happier financial lives.
Spriggy was one of 10 start-ups selected in August 2017 from a highly competitive pool for the Austrade Landing Pad in Tel Aviv (a 10-day boot camp). Dr Badran was awarded Most Innovative Team in the 2017 Finder Awards and Best Banking Innovation, beating Macquarie Bank and AMP Capital in the category. He was recognised by his peers as an elected Non-Executive Director of Stone and Chalk (from November 2015-November 2017). In 2017, Spriggy raised a further $2.5 million of funding to grow its business, and currently employs 15 people, making Spriggy one of the most successful early-stage start-ups in Australia.
Dr Badran holds Bachelor of Mathematics Advanced and Bachelor of Mathematics Advanced (Honours) from the University of Wollongong.
Dr Thomas Birtchnell, Senior Lecturer, Faculty of Social Sciences, University of Wollongong
Social scientist Dr Thomas Birtchnell says mavericks are small groups of technology users who are early adopters and tend to take risks. Somewhat like beta software testers, they will poke and prod to find the limits of use and in many ways, lay the groundwork for how people end up using the product or service.
Dr Birtchnell says the problem with technology is that despite the grand pronouncements made by entrepreneurs and those who have a vested interest in the mass uptake of a technology, no one really has a clue how it will turn out. “Technology does not determine human actions; humans determine the application of technology. Social and cultural forces are just as important in the development of technology as economic or technical ones.” Technology doesn’t follow a linear pathway. Innovations are most often a combination of different things used in a new way, but those combinations are unknown and unpredictable.
He is an associate member of UOW’s Institute for Social Transformation Research: expanding our capacity to understand and engage with our social, cultural and geo‐political environment.
Kylie Cameron, UOW alumna and Senior Managing Consultant, IBM
Kylie Cameron is a digital strategist, facilitator and leader and an Associate Partner within IBM's Global Business Services Digital Team. She has over eight years of consulting experience working across industry sectors including retail, financial services, telecommunications, pharmaceuticals, utilities, government, manufacturing, mining, and oil & gas.
Kylie's role requires her to work with stakeholders including media outlets to source relevant information; legal representatives to work through IP concerns; and with client analysts to define and implement a solution that supports their operations.
Intelligence agencies use cognitive technology in conjunction with other IT systems to increase the speed and efficiency of investigations. Cognitive technologies such as machine learning, pattern recognition and natural language processing tap into the explosion of unstructured data that can hold the key to breaking a case. Cognitive technology differs from traditional IT in how it’s set up and maintained as well as how users interact with it. As intelligence agencies implement cognitive solutions, they quickly realise the implications of these differences for their personnel, workflow and culture. But perhaps most significantly, cognitive technology affects the way users think. Analysts not only have more time to think because the technology helps them collect intelligence, but the technology also makes them think differently about how they do research and intelligence discovery.
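At its core, the cognitive workflow described above automates the "read everything, extract the entities, count the signals" loop that analysts once did by hand over unstructured text. The toy Python sketch below shows that loop in miniature; all notes and patterns are invented for illustration, and real cognitive platforms use trained NLP models rather than regular expressions:

```python
import re
from collections import Counter

def extract_mentions(documents, patterns):
    """Tally pattern hits across unstructured case notes.

    A deliberately simple stand-in for the entity-extraction step that
    cognitive tools perform at far greater scale and sophistication.
    """
    counts = Counter()
    for doc in documents:
        for label, pattern in patterns.items():
            counts[label] += len(re.findall(pattern, doc, re.IGNORECASE))
    return counts

# Hypothetical case notes and patterns, for illustration only.
notes = [
    "Suspect phoned +61 2 5550 0198 twice before the meeting.",
    "Wire transfer of $12,000 flagged; suspect phoned again.",
]
patterns = {
    "phone": r"\+\d[\d .]{7,}\d",  # rough international phone-number shape
    "money": r"\$\d[\d,]*",        # dollar amounts
}
print(extract_mentions(notes, patterns))
```

The point of the sketch is the structure, not the patterns: the machine does the exhaustive reading, freeing the analyst to interpret what the counts mean.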
Kylie holds a Bachelor of Information Technology from the University of Wollongong.
Dane Sharp, UOW alumnus and Digital Experience Manager, McDonald's Corporation
Dane Sharp is a successful, award-winning and highly skilled marketing, media, brand, product and digital manager. He has had the opportunity to experience many facets of business both locally in Australia and internationally and is currently the Digital Experience Manager for the McDonald’s Corporation.
Prior to this role, Dane held senior positions with Rip Curl, Under Armour and eBay. He has also had the opportunity to work closely with a partnership portfolio that includes Coca-Cola, Google, Apple, Telstra, Facebook, MySpace, Samsung, Woolworths, Target, Officeworks, Rebel Sport, Firefox, AFL, ASP/WSL, Tough Mudder, VML and DDB.
At McDonald’s, he leads a team that drives digital transformation for the business by developing and introducing innovation throughout the customer journey, identifying the most meaningful initiatives for customers and operators, and developing capabilities to bring them to life.
He holds a Bachelor of Arts degree from the University of Wollongong majoring in Communications, Cultural Studies and Journalism.
The UOW Knowledge Series showcases University of Wollongong thought leaders in various locations, discussing a range of engaging topics. Previous knowledge series lectures can be viewed here.
Innovations in Health Technology
Moderator: Jason Robert, Lincoln Center for Applied Ethics, ASU
Making Precision Medicine A Reality: Molecular Diagnostics, Remote Health Status Monitoring and the Big Data Challenge
George Poste, Center for Complex Adaptive Systems, ASU
Your body and Your Brain “At Risk” – The Business of Recalling Biomedical Implants
Katina Michael, School of Computing and Information Technology, U. of Wollongong
Jane Bambauer, James E. Rogers College of Law, University of Arizona
A case study: Development of a Novel Prosthetic Heart Valve
Geoff D. Moggridge, Cambridge University
Consumer electronics are “wants” bought by people who have purchasing power. These might range from human aids like calculators and robot vacuum cleaners, to entertainment-driven electronics like smart TVs and tablets, to personal assistants like smart watches and fitness trackers. While most do not consider biomedical implants like heart pacemakers and brain pacemakers to be “consumer electronics”, by definition they are “a good bought for personal rather than commercial use”. The only paradox in this instance is that these biomedical implantables are really “needs” as opposed to “wants”. Patients have a choice whether or not to adopt this emerging technology, but most say that opting in is the only real option for maintaining their quality of life and longer-term wellbeing.
In the general consumer market, taking back a faulty product simply requires an original proof of purchase so an item can be validated as still being under warranty. In the case of biomedical implantables, a recipient simply cannot take back an implant for repair if it malfunctions. Biomedical implantables are willingly embedded in the body of a consumer by a surgical team, and require special expertise for removal, replacement or maintenance (i.e. upgrade). The manufacturer, for example, cannot conduct the removal process, but a surgeon with the right equipment and human resource support (e.g. nurses) can. In 2010, one supplier of pacemakers, Medtronic Inc., had to pay $268 million to settle thousands of lawsuits that patients filed after a 2007 recall of a faulty heart defibrillator wire that caused at least 13 deaths. In other cases, battery packs have failed causing disruption to consumer implants, and more recently we have witnessed software code security vulnerabilities in heart pacemakers which have meant that recipients had to undergo a firmware upgrade in a doctor’s office, a procedure that takes up to 5 minutes and is non-invasive.
On the one hand, these pacemakers are life-sustaining and life-enhancing to their recipients, on the other hand they place voluntary human implantees at some level of risk. The various types of risks will be considered in this presentation as will the impact of “recalls” on consumer implantees.
This Medtronic YouTube Video is shown in the context of this educational presentation under "fair use" rights. Gary's story demonstrates the positive and life-changing impact a DBS can have on one's life if they are suffering from Parkinson's Disease.
Now read about another Gary here. A two-part interview will appear shortly in IEEE Consumer Electronics Magazine.
Warning: The contents of this video are disturbing.
A one-day expert workshop on the IoT, focusing on the role of "soft law" in IoT governance. Attendance is limited to 30 people. I will be presenting a 10-minute talk, "Why Privacy Experts are Concerned about the IoT", and participating in the roundtable.
Organiser: Professor Gary Marchant
Gary is Distinguished Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Regents' Professor and Lincoln Professor of Emerging Technologies, Law and Ethics, Sandra Day O'Connor College of Law; Executive Director and Faculty Fellow, Center for the Study of Law, Science and Innovation.
Presentation delivered on Sunday April 15, 2018, 2.30pm.
Poster presentation at the 9th Annual International Conference on Ethics in Biology, Engineering, & Medicine in Miami, Florida.
Sunday April 15, 8:30am Breakfast and Registration Announcements/ Welcome
BIOENGINEERING ETHICS EDUCATION Session Chairs: Dr. Katina Michael & Dr. Subrata Saha
What a wonderful opportunity to be at the 9th International Conference on Ethics in Biology, Engineering & Medicine since I was travelling to Florida for RFID2018. I have been trying to get to this conference for a number of years without success. Professor Subrata Saha began the conference series, and this year the host university was Florida International University. FIU's main campus is situated in Miami and this year the conference chairs were Subrata Saha and Zachary Danziger.
Keynote: Jonathan Moreno University of Pennsylvania
There are some people who can entertain and inform all in the same breath! Professor Jonathan Moreno is one of those people. A self-confessed historian of science rather than ethicist, Moreno breaks down decades of research in minutes with his knowledge of history, science and technology, and national security. He officially holds the title of David and Lyn Silfen University Professor of Ethics and is a professor in the Department of Medical Ethics and Health Policy.
The following notes are from Jonathan Moreno's talks. I took the notes as accurately as possible, but surely I have made errors of transcription, sentiment or context on occasion. For this I take full responsibility. I did manage to capture some of the audio of the presentation. Of greatest value to me is the reassurance that my own research in this space is richer for its multidisciplinarity, and that, while some may not see its relevance yet, the time is fast approaching (almost here now) when we will become preoccupied by ethics first and foremost in the development process of any kind of scientific or technological endeavour.
Mind Wars – Jonathan D. Moreno
Brain science and the military in the 21st century
- National security and the brain before neuroscience
- Non-invasive imaging
- Brain-machine interface
- International law
"The Silence of the Neuroengineers", Nature, 2004
- Nasty attack on DARPA
- Given up moral standing—what will be done with your work
- Cheap shot at DARPA—figuring out to get prosthetic arms
- Dual use and multiuse
- Decide what purposes these devices should be put toward
- Multiuse not just single purpose (consider importance of prosthetics)
The Era of “Big Neuroscience”
- Simulate the human brain
- Map activity of neurons
The Human Brain Project (Henry Markram: simulate the human brain)
- We’ll believe that when we see it!
- Concerned about PTSD, TBI (traumatic brain injury) and dementia in general
- Growth of fMRI
- Dozens of labs and postdocs, grants, publications
In Pharma world there is not a whole lot that people have to offer for these medical illnesses
Diagnostic based approach (DSM) or neuroscience-based?
- Epistemological debate
- Not much in pipeline (severe depression, schizophrenia) with standard drugs
- Pharma buy up small companies that specialise in one or two drugs
- Centre for Neuroscience
- Neuroscience bootcamp—teach about the brain
- Growth in publication in 1990s was explosive and now continues
- Courtroom, law, Daubert test for scientific evidence
Cognitive Neuroscience Funding (US Defense, 2011)
- Army $55 million
- Navy $34 million
- Air Force $24 million
- DARPA >$240M
Margaret Kosal, Georgia Tech (5/12/11)
White House BRAIN INITIATIVE FY 2014 (soft side of neuroethics)
Examples of DOD Research Programs, 2016
- Human, Social, Cultural, Behavioral Modeling (HSCB)
- Army Research Office, Life Science
- Direct neural interfacing
The US Third Offset Strategy $18 Billion FY 2017
- Wonky defense department strategist term
- 1st offset is Atomic Bomb
- 2nd offset was Guided Missiles (First gulf war)
- 3rd offset a grab bag, term of convenience, computational neuroscience, what are they doing in general
- E.g. robotics, systems autonomy, miniaturisation, big data, advanced manufacturing
- Partner with innovative private sector companies
- Manchurian Candidate (1962)
History of brain science loosely understood and counterintelligence
- Sinatra bought copyright… now 1970s cult film
- Rumours… American prisoners of war were brainwashed (South Florida journalist invented term)… hypnotised… one of them would be turned into assassin… and then VP candidate was Manchurian Candidate
- Fairly new drug called LSD
- Stumbled on by a chemist in Basel, Switzerland, Albert Hofmann
- Put it on shelf, and then he had visual hallucinations
- Passed away at age 102
- POWs had made false confessions to committing crimes against Koreans
1953, experiments on LSD
- Make a discreet man indiscreet
- Sex, alcohol—old ways are best… get information from people
- Worried about putting LSD into water supply
- This is not new
- LSD in trials
- Cardinal Mindszenty (1949 trial) interrogated
- CIA infiltrated 17 area groups and gave out LSD
Operation Midnight Climax (two-way window)
- Report to the President by the Commission on CIA Activities within the US
- Hottest new tech to put into defense and offence
1950s big thing was hallucinogenics
- Iconic drug of the 1960s… but in the 1950s it was a national security concern
Operation Moneybags, 1964
- 25-50 min after the drug had taken effect; 1 person was taken away 20 min after
- Using drug to modify behaviour to see if you can find some defences against it
- UK… joint US – Canadian operation
- Letter to Parliament from Secretary of State for Defence, 18 July 1995
- Veterans were upset
- Couldn’t have asked for consent because it would have screwed up the experimental design
- Usually it is US get (FIA, second amendment, celebrities get excited about it)
- Jennifer Lawrence to direct chemical warfare film
- Neuroengineering with drugs, 1950s and 1960s
In Florida and California in 1970s…
- Loose… sun, ocean, open lifestyle in Florida…
- New culture developing
- Let’s not do drugs… let’s find natural ways to live on beach and be hippies, and we can learn from war fighters… people talking to dolphins… we can learn how to be Buddhists and warrior monks…
- Army picked it up in the 1980s…
1988 The Mind Race “Enhancing Human Performance”
Committee on Techniques for the Enhancement of Human Performance, The National Research Council, 1988
- Warrior monks
- Levitating and second healing and walking through walls
- NRC advised them NOT to continue with this project
- The men who stare at goats
- supplement or replacement for amphetamines
- Anti-sleep pill (approved for use by air force pilots in 2004)
- 60-80 hours
- Speed? But not exactly speed
- Cognitive enhancers… decent meal and exercise…
The trust drug- oxytocin
- Natural production is associated with trust behaviour
- Cuddle drug…
- May be artificially administered in a spray to encourage cooperation
- Use in interrogations?
- Claremont: subjects given oxytocin would be more cooperative and more trusting
- See Paul Zak's experiment and TED talk
- In counter-intelligence, could you do this with a suspect in a terrorist plot?
- Violation of chemical weapons convention?
- Lawyer: they fight for ISIS, they don’t fight under the Geneva Convention, just give it to them
The Anti-Conscience Pill—beta blockers, depressed … experiences but did not get happy or sad
- In 1990s… give to people before warfighting/combat…
- Beta blockers can be used to treat stress, prevent PTSD
- Suppress release of hormones like norepinephrine that help encode memory
- Could they reduce guilt feelings?
- Would we want something like this?
- 1990 brain fingerprinter P300 (recognise a picture) true or false
o FBI have watched some of these
- Visualising memory
o Episodic memory retrieval
- A private company: within 7-8 years, using a certain device with some electrodes, they can restimulate your memory… recall of words/events
- Memory retrieval is harder than they thought- these are very distributed
o Alan S. Cowen et al TMS
- Neural portraits
- Functional MRI
- Clip reconstructed from brain activity
- Presented clip in MRI—then it can be reconstructed
- Can you do this with dream images? Reconstruct dream images?
- T. Horikawa, M. Tamaki et al., Science, April 4, 2013
- Neural decoding of visual imagery during sleep
- In theory you want to watch your dreams…
- Visual cortex… is big.. .back… a lot of stuff… cheating…
- Reconstructing speech from human auditory cortex
- DARPA project
- tapping into the rat (whiskers—right and left)
- it does have a choice… but if it does right thing, he gets pleasure centre hit to right or left
- DARPA funded story… how the brain processes these signals
- Seattle brain-to-brain interface, reported by Doree Armstrong and Michelle Ma…
TMS (Transcranial Magnetic Stimulation)
- Transcranial Direct Current Stimulation (tDCS)
- Replace shock therapy
- B Zwissler—make you see something that you didn’t see
- J of Neuroscience, neural modification
- DIY tDCS
o Transcranial experimentation
- Beckman institute in Uni of Illinois
- MIT undergraduate… TMS
- An experiment in the neuroscience of ethics
- That’s terrible… won’t let my girlfriend cross
- After TMS… just asking if the girlfriend is ok
- Neurons are genetically engineered to carry an adapted protein
- The very hungry mouse (chronically implanted, on or off)… getting into the hypothalamus
- What if you could link people’s brains together
- Talk to them without pointing, without using a radio
- Computing arm movements with a monkey brainet (linking brains together) at UniPenn
- Arjun 2015 (with Duke Uni previously)
- Fooling cognitive resources
- Pooling electro-resources
- Effective communications… movies better… or hook each other up… like monkeys…juice…
- Beyond better standard array (only in the lab it works, Brown University)
- DARPA: bridging the bio-electronic divide (100,000-microelectrode array)… quantum
- Send signals intentionally
- Korea… robosoldier… 10 years… in DMZ… heavily armed… make decisions on battle field…
- Wall Street Journal…
- Needs a “person in the loop”
- Autonomous weapons already been used… moving to offensive world… autonomous lethal weapons…
- Face recognition algorithm finally beats humans… by 1% machines are better…
- UNOG Disarmament conventions
- Are the human experiment rules adequate?
- How can we assess risks and benefits?
- Human Rights
New Ethical Principles?
NRC, 2008, 2009
- Neuroscience for Army
- Emerging and Readily available tech and national security
- Framework for ethics
The Royal Society report on Neuroscience, National Security
Italy even joining
- Getting human cells… neural organoids into rat brains to see if they can get a smarter rat?
- What about the AI world—IBM has a computer beating Go? Why are we not worried with IBM? Why is there NOT a committee?
- Driverless cars? Insurance, risk question at the heart…
- Computers cannot recognise INTENTION
Biggest issues are not consumers but national security
- Wellcome Trust
- The Dana Foundation
- The Greenwall Foundation
- History and Sociology of Science (Penn Arts and Sciences)
- Department of Medical Ethics and Health Policy
- Rockefeller Brothers Fund
- National Institute of Health
Richard L Wilson from Towson University delivers talks at BEC
Big data is being introduced in the insurance industry which brings about increased regulations and restrictions regarding customer privacy
A technological artefact, which happens to be a watch. The user employs it to self-medicate. Then they join a group on Facebook, and the individuals in the group self-medicate. There is no IRB to review the recommendations they are making.
Should insurance companies have access to the personal information, including health information, that is tracked on the wearable technology devices?
What is a wearable technology?
A category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness
Wearable Tech and Your Health
Wearable tech is becoming increasingly popular
Wearable tech can monitor live movements, heart rate, activity levels
Can calculate risk
2016: 275 million devices sold; 2017: about 332 million
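To make concrete how an insurer might "calculate risk" from wearable signals such as heart rate and activity levels, here is a deliberately naive Python sketch. The formula and thresholds are invented for illustration; real actuarial models are proprietary and far more elaborate:

```python
def toy_risk_score(resting_hr, daily_steps):
    """Combine two wearable signals into a 0-100 risk score.

    Illustrative only: the baseline of 50, the 60 bpm reference
    heart rate, the 5,000-step reference and both weights are
    invented values, not any insurer's actual model.
    """
    score = 50.0
    score += (resting_hr - 60) * 0.5       # higher resting HR -> higher risk
    score -= (daily_steps - 5000) / 500.0  # more activity -> lower risk
    return max(0.0, min(100.0, score))     # clamp to 0-100

print(toy_risk_score(80, 3000))  # sedentary, elevated heart rate
print(toy_risk_score(58, 12000)) # active, low resting heart rate
```

Even a toy like this makes the ethical stakes obvious: once such a score exists, the pressure to price premiums on it follows.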
Wearable Tech and Accidents
Wearable tech isn’t limited to things such as watches that record exercise habits
Insurers also use wearable cameras
Cameras mounted on cars
Cameras record events, accident scenes
Good training tools
More accurate accident reports
Health insurance companies
Rights based ethics
Users concerned about lifelogging (family medical records) being accessed which could result in increases or decreases in premiums
Anticipatory: future- what is it going to look like?
Corporations making the devices
Utilitarianism: if you do what we tell you, we will increase happiness and reduce harm/sadness
Created as a way to track users
User agreements are important
3 billion in 5 hours… selling what you have given them!
The info belongs to the company, but data breaches and the changing landscape of big data create new opportunities
Lawmakers and governments
Rights-based ethics: it is within a person's right to refuse to give sensitive data to insurance companies
HIPAA regulations, the rights of wearable users to refuse their health data to private and government insurance
Barriers to Overcome
We need readjustment of ethical principles
Can insurance companies use your data without permission
Someone at work make you track yourself
If insurance companies are to have access to data on wearable technology, the user should sign a consent form
Universal healthcare- do not overcharge healthy people
Government has responsibility to keep people healthy
Tele Surgery and Virtual Reality: Robotic Assisted Surgery Anticipating the Ethical Issues
What is an autonomous surgeon?
- Robot capable of performing multiple surgical procedures without the need for constant human intervention
- Uses feedback devices such as cameras, sensors etc
Over 40K surgeries have been robot aided in the last year
Programmed to minimise doctor’s intervention
Most common is the da Vinci
Costs $1.7 million
Not ready for human trials
Where the field can go
Use teleoperated for operations overseas where doctors are unavailable
Possible that surgical robots can replace surgeons altogether
Program robots to perform surgeries with no assistance
Personalised surgery robots
Patients who have off site surgeries
They are costly
Some surgeries provide NO benefit for using the robots
What if there is a disconnect mid-surgery
If autonomous and something goes wrong no doctor will be present to fix it
Possibility of hacking and/or sabotage
In the Loop- human control
On the Loop- sight of human
Out of the Loop- no human involved
Import cheaper parts
Secondary power source
Isolate robot from global networks
Airgap the system
Could conflict with beliefs
Some Amish, Mennonite etc
An option for the patient to choose whether or not an autonomous surgeon performs their surgery
Publish the trial testing and trial data to the public
E.g. dentist example
Responsibility in the event of an undesirable outcome
Non-sentient machines having control over human morality
Alternative motives by programming (intentional harm or death)
Pre-establish legal liability prior to the start of the surgical process
Maintain human surgeon oversight
Require registration of robots and periodical assessments to ensure programming and function
The Bioethics of Implantable Biohybrid Systems
Andres E Pena, Ranu Jung
What are biohybrid systems?
Life changing neuroprosthetic technologies
There are patterns of neural activity
Therapeutic and Reparative Neurotechnology
Cochlear implants: success story
Neuroprosthetic device. 300K users worldwide
Paving way for artificial limbs
Control, and feel like these are their own
ANS – Neural enabled prosthetic hand system
Restore sense of touch and proprioception
LIFEs (longitudinal intrafascicular electrodes) …
LEAD system in upper arm
Used to stimulate periphery nerves
Designed for comparable … (J. Neural Eng 2017)
Ethics of hybridisation and potential impacts
Sense of self
Perception of reality
Restoration vs enhancement
Loss after failure
Cost and coverage
Access to maintenance
Future of implantable biohybrid systems
Closing the loop
Adaptive Neural Systems Lab
Some of them are limited by their materials (which may not be able to guarantee neural function). The materials of the implants should last for the lifetime of the person. With neural stimulation you don't expect this problem from the electronics but from the stimulation itself: how will that last for a lifetime?
* The graphics from Altered Carbon are placed here in so far as the television series was mentioned by the presenter, simply as futuristic visions.
Reuse of cardiac pacemakers in 3rd world countries… ethical issues…
Inline connector designs
For now just hoping to get sensation back to upper arm for those who have lost it, so they feel that their hand or their prosthetic is actually a part of their body. Using same approach as cochlear implant but for upper arm.
ANS Team at FIU Rocks
The Adaptive Neural Systems Laboratory team headed by Dr Ranu Jung is an exceptional team with a young dynamic group. Looking at the biographies of this team, quite a few of the researchers, including Dr Jung had long standing careers at ASU. Their work is truly inspiring and groundbreaking, and students/ grad students/ and staff have a deep awareness of ethical issues. It is exciting to meet teams like this at conferences who take philosophy and ethics so seriously.
Navigating Ethics in a STEM Training Grant – El Paso
Building scholars for the future of biomedical research
Responsible conduct in research
Identifying an impact
Signs of a sound scientist
Goal: Train biomedical researchers through a multi-institution consortium
Partner Schools: ASU, Baylor, Clemson etc
96 students (60 female, 36 male)
Responsible conduct in research
Dangers of Research Fraud
Goal: provide an experience of research ethics… students take ownership of their research
Participation and activities other than watching, listening and taking notes
Scale Up Space
Students sit in a circle within their groups
Sudden introduction into scientific research realm
Bootcamp for incoming freshmen: a 3-week course in summer
What is ethics? What are values? What are your values? Why are they important?
Scenarios, situations, put themselves in position of principal investigator
How do I feel about it?
Boot camp, research foundations course, research development course
Take the CITI module – Collaborative Institutional Training Initiative
Based on RFC Completion
Gauge impact of course on students
The methods being practised are impactful both for students who are doing the courses as BUILD Scholars and for those who are doing them as “affiliates”.
Similar outcomes for 2015, 2016
Building Student Identity
BUILD Scholars have bootcamp and CITI training
Identified only first semester… is the impact same over time?
Students are more active in identifying potential for research misconduct
Have confidence to bring up to superiors
“We just want to do research why do we have to do ethics training?”
NIH video called “The Lab” – dynamic situation… if not following method then there are issues
Biomedical research intro
Safety training, blood/pathogen, misuse of laboratory equipment
Computer Programming Literacy for Medical Professionals
Teshaun J Francis
Should doctors and nurses learn to code?
Jeff Atwood: “Please Don’t Learn to Code” because of specialisation
Programming literacy should be a requirement but not learning to program
Should medical professionals learn to code?
Computer code is a tool to solve problems
Organisational tool. Neuromind.cc
TED Talk: clinical decision supporting systems
Data input, manipulation, data output
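The "data input, manipulation, data output" pattern is the core of what programming literacy means here. A minimal sketch (my illustration, not from the talk; the patient records and threshold are hypothetical):

```python
# Input: a small batch of (hypothetical) patient readings.
records = [
    {"patient": "A", "systolic_bp": 118},
    {"patient": "B", "systolic_bp": 151},
    {"patient": "C", "systolic_bp": 143},
]

HYPERTENSION_THRESHOLD = 140  # assumed cut-off for the example

# Manipulation: filter the input down to patients needing follow-up.
flagged = [r["patient"] for r in records
           if r["systolic_bp"] >= HYPERTENSION_THRESHOLD]

# Output: a simple report a clinician could act on.
print("Follow-up needed:", ", ".join(flagged))  # Follow-up needed: B, C
```

The point is not that clinicians should write production software, but that recognising this input–transform–output shape lets them reason about what a clinical decision support tool is doing with their data.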
How technology is being used to teach medical students?
Next generation quiz cards
Not a replacement for cadavers… but there to supplement
Microsoft HoloLens teaching anatomy
IBM Watson Health
“You want doctors to teach anatomy?” Not IBM, unless they utilised doctors to do this
IBM Watson is an attempt at taking on the diagnostic role of a doctor
75-98% agreement with a doctor
It is a tool… not a doctor…
Watson agrees with physicians…
Teaching Ethics in an Advanced Education in General Residency Program: general dentists UCSF
AEGD. After 4 years of dental school, graduates can specialise, go into practice, or do 1 year of clinical training or research
Teaching ethics to dental students is really difficult: residents are disengaged, find it irrelevant, etc.
Hours of ethics training for dental students are decreasing…
Basic training but in postgrad, it is little. The framework is absent.
Residents presented cases every week based on personal experiences while they were in residence!
Mean number of contact hours of ethics instruction is around 26.5 hours- undergrad
In postgraduate only about 15 hours
“Ethics is important, but our curriculum is already crowded and there is no room for ethics teaching”
“observing how seniors do it”
“learnt at home with family, not at school”
What is ethics?
“when in doubt it is probably not ethical”
Five fundamental principles
Patient autonomy: informed consent, what is treatment, risk, benefits, costs, alternatives in layman’s terms
Non-maleficence: do no harm
Beneficence: to do good, patients get treated in time
Justice: duty to provide dental care no matter who it is
Veracity: dentist-patient relationship based on trust and truthfulness
About 36% of schools teach ONLY this
What happens when a patient comes in and has issues with sensitivity in front teeth… teeth issues, decay, cannot pay… anxiety as victim of domestic violence… temporary crown… and then dismiss… what do you do?
“Termination of Care”: cannot terminate midway; cannot withhold records
Resolution: new resident will complete front 6 teeth, and do not take on new responsibility unless she pays up. Need to be stable. Provided care. Did not do harm.
Began Jan 2017 to Dec 2017.
Personal clinical cases: 11 residents, plus students, academics and risk management office
Vertically integrated ethics curriculum is being proposed
Common ethical challenges
Opportunity to discuss ethical dilemmas “on the fly”
Available resource consultations
Practice management implications
Designing a case based didactic program
Goals of care
Understanding informed refusal
California Ethics Law
Informed consent and presumed consent—did they sign?
I was lying down and vulnerable, at the mercy of a dentist—lawyers get in
Truth is that students are serious about the clinical training—they are in the field. Do they block out the Friday morning of residency, where the whole day could be used for integrated learning?
Increase from 26 to 100 hours on ethics…
Research Ethics Training for Rising Researchers – Eman Ghanem, Sigma Xi
Data management The responsible handling of data
Staying professional: collaboration, mentorship, authorship
Disclosing details: confidentiality and conflict of interest
Respecting research subjects: bioethics crash course
Staying cool: understanding research pressures and consequences
Research Ethics Jeopardy
Modules were 30 minutes each
Activities: case study, discussions; game: around the lab, ethical or unethical
Necessity of Animal Research Ethics
Humane treatment is essential in research
Reduction (# of animals), Refinement (improve techniques for harm/stress), Replacement (simulation, computational models)
Involve other collaborations—like vet courses—to sacrifice fewer animals
Idea development, protocol development, grant app, review, experimentation, data analysis, result interpretation
Greatest contribution when anatomy and physiology is similar to human
How to establish link?
Animal suffering and distress can be reduced by collaboration
Data Ethics and Computational Bioscience
Kenneth W. Goodman (University of Miami)
Many sources of data collection and sharing that are morally obligatory
Population health science has always been able to presume the consent of its beneficiaries
Technology, scientific advances, beliefs about how the world is?
Let’s try that?
History of pharmacology
Raising questions for applied prescriptions in x or y
Blood letting questions?
We try to organise this through scientific theory
Blood, phlegm, yellow bile, black bile—the four humours
George Washington was famously killed by his surgeons through excessive bloodletting
Surgeons over physicians…
If you don’t know what is going on inside DO NOT cut it out.
Film called THE PHYSICIAN with Ben Kingsely
Organ transplantation, autopsy, necromancy, mutilation
What is the role of religion?
What about keeping dying people alive for a transplant…
Destination therapy and not “bridges”
Is that an appropriate use of a technology?
Resuscitate every dying patient?
End of life care
Probability of futility AND amount of suffering the child would undergo
Dying with more suffering…
Texas Advance Directives Act
Cleveland Clinic Institution—but not in Florida… it is too weird here…
Broader application of data
Precision medicine, personalised medicine, ECMO, ASV devices, end of life, duty of care etc
If you just used population health to help people, it would be major: CLEAN DRINKING WATER alone would save many people across the globe
“Is it selfishness? Not love…”
Secondary gain: “if she dies, the social security $ will stop”
Population health—need more information
(Paracelsus)… pattern in the data… respiratory illness when they were in mines… correlation
John Snow… dot on a map… data collection without consent… did he stop the epidemic?
Removed the pump handle… shoe-leather epidemiology
Epidemiology in the era of big data. Epidemiology 2015
More human lives will be touched by technology than anything else…
Privacy vs science
Privacy and confidentiality were never seriously considered to be hard barriers to sharing and analysis
Reductio issue (left toe example)… can you help others with same problem?
Biomedical research has long relied on the work of trusted entities to collect health information
Security, de-identification, anonymization, pseudonymisation
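These terms are often conflated; a toy sketch of pseudonymisation (my illustration, not the speaker's method; the key and identifiers are hypothetical) shows its defining property — records remain linkable, but only by whoever holds the key, unlike full anonymisation:

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-trusted-entity"  # hypothetical key, kept server-side

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# The same patient always maps to the same pseudonym, so linkage survives...
assert pseudonymise("MRN-12345") == pseudonymise("MRN-12345")
# ...while different patients remain distinguishable without being named.
assert pseudonymise("MRN-12345") != pseudonymise("MRN-67890")
```

Without `SECRET_KEY`, an attacker cannot enumerate identifiers to reverse the mapping, which is exactly the trust placed in the "trusted entities" mentioned above.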
Cloud is a metaphor
Data is used sloppily—knowledge, information etc
Ethical concerns should focus on decision support- given variable data and database quality, uncertainty
Appropriate user and users
Data sharing and interoperability
5mb computer in 1956 pic.
Privacy is not an absolute right
It must therefore be balanced against other rights, including a right to benefit from science
Infrastructure to support refusers
Health and privacy
Smart laws and policies
Recognition of duties to collectives
Learning healthcare systems
Public health analogue “duty to treat”
Role of Ethics
Illuminate the force, scope and limitation of rights
Identify and balance conflicting rights, and rights and duties
Identify and justify duties
Management and governance
Balance health, data, privacy
Identify best practices
Develop revised IRB-like review entities
Consultation capacity for risk communications, decisions under uncertainty
Agreement and civil society
DNA, epigenome, life going through illness, and exposure to all things in the world…
Social determinants of disease…
Scandinavia—clinical care / research? Health insurance provision?
Health care provided for everyone? They affect the health of populations. Buy insurance?
Understanding cases within profession (Wade Robison)
Edward Tufte’s compelling but mistaken reading of what went wrong with the Challenger.
‘O-ring’ safety in the Challenger: the astronauts were killed by impact, not by the explosion
Tufte blamed the engineers: if they had done a scatterplot they could have worked it out
You can see increasing risk of damage
The ascending curve of risk
If proper scatterplot done, no one would have risked to launch the Challenger in such cold weather
“They didn’t know what they were doing, but they were doing a lot of it”
Big thing is that NASA launched BELOW 50°F, at about 40°F.
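Tufte's scatterplot argument can be sketched numerically: plot (or regress) O-ring distress against launch temperature for all prior flights and the trend appears. The flight data below are illustrative stand-ins, not the actual NASA record:

```python
# (launch_temperature_F, o_ring_distress_incidents) -- hypothetical values
flights = [(53, 3), (57, 1), (63, 1), (66, 0), (70, 1), (70, 0),
           (72, 0), (75, 0), (76, 0), (79, 0), (81, 0)]

n = len(flights)
mean_t = sum(t for t, _ in flights) / n
mean_i = sum(i for _, i in flights) / n

# Least-squares slope of distress incidents vs temperature.
slope = (sum((t - mean_t) * (i - mean_i) for t, i in flights)
         / sum((t - mean_t) ** 2 for t, _ in flights))

# A negative slope is the "ascending curve of risk" as temperature drops.
print(f"slope = {slope:.3f} incidents per degree F")
```

Even this crude fit makes the point: with all flights plotted (not just the ones with damage), the cold-weather risk is hard to miss.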
Research Misconduct in FDA Regulated Clinical Trials: What Not to Do
What is research misconduct?
Falsification of data or results in recording and reporting
Deviation from established protocol—data doesn’t reflect what you were studying
Violation of human subjects’ rights
How did definition happen?
World Med Assoc Declaration of Helsinki
Tuskegee Syphilis Study and Belmont Report
US Government Oversight
DHHS (office of human research protection)
How did FDA get authority?
Thalidomide incident in the 1950s (Frances Kelsey)
1962 Amendments (Kefauver-Harris Amendment)
What does FDA do to ensure research misconduct does not occur?
What is FDA looking for in regards to ethical standards and regulatory requirements
Required reports by IRB
FDA Research Observations
FDA investigators cite observed deviations from regulatory requirements
What happens after observations are reported?
Observations by FDA investigators are passed through multiple levels of review before a final classification
FDA Clinical Investigator Action
Warning letter and NIDPOE (Notice of Initiation of Disqualification Proceedings and Opportunity to Explain)
Getting in trouble here is a big issue
University of Pennsylvania Gene Therapy Trial
Objective: treatment for OTC deficiency
Prevents proper elimination of ammonia
Death in the trial within 96 hours
1999 Nov. 1.5 month Clinical Investigator James Wilson
Based on all the citations that happen
Did not follow protocol
Should have STOPPED clinical trial… 5 increases of therapy before subject died in cohort 6!
They did not tell the patient ALL the information.
University College London Medical Device Implantations
Strict regulatory oversight—safe for humans
Results: Deaths of guinea pig subjects
Termination of researchers’ employment
Regulatory and criminal investigations
Bial-Portela Clinical Trial
The investigators didn’t do anything wrong, this was just a poorly designed trial
Cohorts were overlapping
Real time data wasn’t coming in and subject died, and others ended up with brain lesions
Written to prevent issues that have previously occurred
Human subject rights protection
Assurance of data validity
Inspections and Audits (FDA, OHRP)
Dr Sheldon Krimsky
Monsanto Litigation Documents
Integrity of Research journals
Crisis in Credibility
Contested issue between science and commerce
Can it affect public health
Corporations have a different view of science- one of many inputs to production
Science just one of inputs into production function …. Lead, asbestos, BPA, tobacco (lobby against)
IARC, WHO Report 2000
DSM-IV: panel members had conflicts of interest when they produced the categories
100% of the panel on mood disorders, schizophrenia, etc.
“On the Take”
Conflict of interest and scientific journals
Can you believe what you read?
Funding effect in science
The Monsanto Litigation
The specialised cancer research arm of the WHO reached a determination on the chemical in glyphosate-based herbicides
More than 270 of the cases have been consolidated into multi-district litigation for oversight by one judge in the US District Court in San Francisco
Multi district litigation
Monsanto’s web site on glyphosate—claims it does not have adverse effects on humans, wildlife or the environment (non-Hodgkin’s lymphoma, NHL)
US Right to Know web site
Ghost writing: that is how they do business. EPA referenced; its 2016 determination was that glyphosate was not likely carcinogenic
They hired Intertek Scientific & Regulatory Consultancy
Undue influence on regulatory agencies
Undue influence on scientific journals…
It is unethical to ghost-write journal articles!
It was published by Toxicology articles
William Heydens… disclosed this
Editor in chief got money for this from Monsanto!
JFCT- retract or remove the journal… paper retracted after 1 year
“results were not definitive”… paper retracted but as soon as it was another journal published the article
Whom should a journal editor use as a reviewer—should an employee of Monsanto review a paper about Monsanto?
When vital public health reports are published in refereed journals there is a heightened expectation that they meet professional standards of scientific integrity. Those standards include full disclosure of conflicts of interests…. Sources of funding….
The Lancet--- 3 positive reviews, and one so-so… “not a priority to publish”
Ethical Guidelines for Authorship- Subrata Saha
Gift authorship; ghost writing
Millikan Oil Drop Experiment: https://www.youtube.com/watch?v=XMfYHag7Liw
Fletcher and Millikan shared the work, but each took sole authorship
The Journal of Bone and Joint Surgery
Henry R Cowell 1998 wrote an editorial
Inform patient care
Goal should not be to add to CV but to help
Ideas should be new
Don’t waste space on redundant data… findings should be consolidated, not “salami-sliced”
Prior to the experiments
Informed consent, clinical aspects, cost, statistical method, begun BEFORE we begin
Manuscript written with the same patient care in mind as before
Career advancement, get grants
Rights to copyright
Pressure to publish first
Who was first? Discovery
Tenure and professional standing
Who should be an author?
Advisor and graduate students…
Now many groups collaborate, in group and within nation
All work builds on multiple achievements
Expectation: the author wrote it; the author did the work that was written down; the data is always real; results and claims should be accurate; the author should disclose bias or potential for COI.
Contributions—interpretation, acquisition of data, drafting, critical, final submission, author should know about it and carefully review it
Financial disclosure form
Should not be allowed to publish
Who is not the author?
Someone who provided funding but played NO role in the design or rationale of the study should not be an author
General responsibility for lab—not enough for author
Just because they have the data—not enough for author
Other authorship issues
Still a problem: divvying up work, order of authorship, etc. Avoid conflict by meeting from the outset, before publishing the paper.
What are external regulations?
Self-plagiarism—is this plagiarism?
Biomedical Device Risks and Non-Medical Implantable Risks by Katina Michael
Slides available here.
Audio of my presentation here.
My Disney Experience – Frequently Asked Questions
Q. What information should I know about my privacy and MyMagic+ at Walt Disney World Resort?
A. MyMagic+ is designed to make your visit to the Walt Disney World Resort (the “Resort”), and other participating operations within the Walt Disney Parks and Resorts business segment, even more immersive and personalized. If you choose to participate in MyMagic+, we will collect information from you online and when you visit the Resort. Because many features of MyMagic+ are new, you might have questions about the information we collect and how we use it to deliver your experience. Below are answers to questions you may have. We value our guests and are dedicated to treating the information you share with us with care and respect. As we continue to enhance the park experience and develop new ways of interacting with you, we will update these FAQs.
Information We Collect from You Online for My Disney Experience
Information We Collect From You When You Visit the Resort
We also collect certain information from you while you are at select locations throughout the Resort through means other than the My Disney Experience website or mobile app. The following Privacy FAQs address aspects of MyMagic+ that may be of interest to you.
What are the RF Devices used for MyMagic+?
The benefits of MyMagic+ are delivered using either radio frequency (RF) technology-enabled MagicBands or RF cards ("RF Devices"). The RF Device you receive is unique to you and allows us to authenticate you and the benefits associated with you.
You can use your RF Device to enter the theme parks and water parks and to redeem FastPass+ selections if park admission entitlements are associated to your RF Device, to access your room if you are staying at a Disney Resort hotel, and to make purchases at select Resort locations. And, if you choose to use a MagicBand, it can add a touch of magic to your vacation by unlocking special surprises, personalized just for you, throughout the Walt Disney World Resort.
How do the RF Devices work?
Each MagicBand contains an HF radio frequency device and a transmitter which sends and receives RF signals through a small antenna inside the MagicBand. Each RF Card contains a passive HF radio frequency chip. Some of your benefits will be unlocked by "touching" your MagicBand or card to short-range reader touch points located at the Resort, including access to your Disney Resort hotel room, entry to a Disney theme park or making purchases at select Resort locations. The MagicBands can also be read by long-range readers placed in select locations throughout the Resort used to deliver personalized experiences and photos, as well as provide information that helps us improve the overall experience in our parks.
Guests can participate in MyMagic+ and visit the Resort without using the MagicBand by choosing a card, which cannot be detected by the long-range readers; however, certain features of MyMagic+ are dependent upon long-range readers, including automatic delivery of certain attraction photos and videos and some personalized offerings, which are only available to guests using a MagicBand.
The RF Devices are not GPS-based and do not enable collection of continuous location signals. Instead, MyMagic+ uses both short- and long-range readers located within the Resort to deliver the benefits of MyMagic+.
The RF Devices themselves do not store your personal information. Rather, your RF Device contains only a randomly assigned code that securely links to an encrypted database. This allows us to associate your RF Device with the benefits you have purchased and to collect information regarding your interactions with the various RF Device readers located at the Resort.
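The token-plus-lookup design the FAQ describes can be sketched in a few lines (my illustration only; the names, fields and storage are hypothetical, not Disney's implementation):

```python
import secrets

guest_db = {}  # stands in for the encrypted back-end database

def issue_band(guest_profile):
    """Put only a random code on the band; keep the profile server-side."""
    token = secrets.token_hex(8)     # the randomly assigned code on the RF Device
    guest_db[token] = guest_profile  # the link exists only in the database
    return token

def reader_touch(token):
    # A reader sees only the token; entitlements are resolved by lookup.
    return guest_db[token]

band = issue_band({"name": "Guest", "benefits": ["park admission", "FastPass+"]})
assert "FastPass+" in reader_touch(band)["benefits"]
```

The privacy claim rests on this separation: losing the band discloses only an opaque code, and any meaning attached to it lives behind the operator's access controls.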
What information is collected through your use of the RF Devices and how is it used?
We use the information collected in connection with MyMagic+ to deliver the best possible guest experience. For example:
When you use your RF Device at touch points (e.g., for Disney Resort room entry, park admission, FastPass+, and purchases at select Resort locations), we are able to record your transaction and, when necessary, make the appropriate adjustment to your account.
The long-range readers will be in specific locations to enable the delivery of attraction photos and videos and personalized offerings.
Your interactions provide us with information about the products and services you experience in the Parks; your wait time for rides, restaurants and other attractions; and similar types of information.
If you sign up on the My Disney Experience website or mobile app to receive special email and/or text message alerts during your Resort visit, we may use information collected through your use of the RF Device to fulfill your request.
We may also use the information we collect through your use of the RF Device to send you information about products and services that we think may be of interest to you, but you can always choose not to receive marketing information from us. We will not use information collected in connection with MyMagic+ to personalize or target advertising to children under age 13.
Aggregate information can be used to better understand guest behavior and make improvements to the guest experience (e.g., managing wait times and improving traffic flow).
How is information collected through your use of the RF Devices shared?
We may share information about your experiences at the Resort with other members of the Walt Disney Family of Companies, but you can always choose not to receive marketing from us.
However, information about your specific park experience collected automatically when your MagicBand is read by long-range readers will not be shared with other members of the Walt Disney Family of Companies to use for marketing purposes unless you elect that we do so.
We will only share information about you that is collected automatically when your MagicBand is read by long-range readers with third parties for their marketing uses if you elect that we do so.
What choices are available regarding information collected through your use of the RF Devices?
Guests participating in MyMagic+ have a choice about how their information is used for marketing and can choose not to receive marketing information from us by opting out of receiving such communications. MyMagic+ also allows guests to receive offers and tips during their visit to Walt Disney World, but only if they request them from us.
We will not use information collected in connection with MyMagic+ to personalize or target advertising to children under age 13.
Are all guests required to participate in MyMagic+?
Guests can enjoy admission to the parks without having to participate in MyMagic+, register on the My Disney Experience website or provide any personal information. Guests who choose to participate in MyMagic+ and register on My Disney Experience will be able to use the FastPass+ service, make dining reservations online, and enjoy the convenience of having their tickets, FastPass+ selections, Resort room access and other enhanced features all in one place. Guests participating in MyMagic+ always have the option to use a card instead of a MagicBand. Guests can also choose whether to use their RF Device to make purchases at select Resort locations.
Can I use my RF Device after my vacation?
Guests can keep their MagicBand or RF Card after their vacation and may use it on a return visit to the Resort. In addition, from time to time, members of The Walt Disney Family of Companies may introduce products or services that can recognize the RF Devices and deliver experiences or value outside the Resort if the guest chooses to use their MagicBand or RF Card for those purposes. Information will be collected in connection with such use of the RF Devices only if the guest chooses to avail themselves of such products and services. RF Devices are intended for use in the United States only.
What is the Ticket Tag service and what information is collected through it?
We offer the convenience of Ticket Tag at the entrance of many of our theme parks and water parks. Ticket Tag helps to facilitate ease of re-entry into our parks and helps prevent fraud.
In order to use Ticket Tag, you simply place your finger on a reader. The system, which utilizes the technology of biometrics, takes an image of your finger, converts the image into a unique numerical value and immediately discards the image. The numerical value is recalled when you use Ticket Tag with the same ticket to re-enter or visit another Park. Ticket Tag does not store fingerprints.
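The enrol-and-compare flow described above can be sketched as follows. This is a toy illustration using an exact hash; a real biometric template tolerates scan-to-scan variation, which an exact hash does not, and the actual conversion is proprietary:

```python
import hashlib

def enroll(finger_image: bytes) -> int:
    """Convert the scan into a numerical value; the image itself is not kept."""
    return int.from_bytes(hashlib.sha256(finger_image).digest()[:8], "big")

def reentry_matches(stored_value: int, finger_image: bytes) -> bool:
    """On re-entry, only the recomputed value is compared with the stored one."""
    return enroll(finger_image) == stored_value

scan = b"stand-in-for-a-finger-scan"  # hypothetical payload
stored = enroll(scan)                 # the number kept with the ticket
assert reentry_matches(stored, scan)
assert not reentry_matches(stored, b"a-different-finger")
```

The key property the FAQ claims is captured here: what persists is a derived number, not a fingerprint image.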
Are all guests required to use Ticket Tag?
If you don't want to use Ticket Tag, you can simply carry and show a photo ID that matches the name identified with your ticket.
How do we keep the information we collect secure?
The security, integrity and confidentiality of your information are extremely important to us. We have implemented technical, administrative and physical security measures that are designed to protect guest information from unauthorized access, disclosure, use and modification. From time to time, we review our security procedures to consider appropriate new technology and methods. Please be aware that, despite our best efforts, no security measures are perfect or impenetrable.
Where can I get more information?
You can learn more about our privacy and data collection policies by visiting MyDisneyExperience.com/privacy. You may also contact Guest Services at 407-WDISNEY (407-934-7639).
Keynote by Sanjay Sharma at IEEE RFID2018
Primary carers of people who wander have a substantial onus to keep their loved ones and clients safe. Though patterns of wandering differ between various stakeholder types in various contexts, the two main design points include:
- Ensuring an individual does not go beyond the perimeters of a home (in-building) or a facility (on-campus)
- Ensuring an individual who has wandered can be found quickly (usually traversing a public space).
Wandering about a public space is one of the freedoms people enjoy about being alive. Whether it is a brisk walk to the local park, a bus or train trip to the beach, or aeroplane travel to various parts of the world, we can all enjoy the world around us. Walking does not require any token, travel often requires a ticket such as a TravelPass, and flying a passport with an appropriate visa. People who wander usually do so on foot or by public transport. This session tries to narrow in on the potential for using RFID/NFC, facial recognition, and GPS to trigger mobile alerts when someone has wandered outside a minimum bounded area.
Children with autism, for example, have often escaped their homes, only to find themselves in danger, either from oncoming traffic or from deep waters. Those suffering from varying levels of dementia have found themselves on public transport or disoriented at the wheel. Quite often wanderers frequent paths they know well. Wanderers in urban centres can have a very different experience to those in regional or rural settings. Context awareness is paramount for a carer. Is there a lake nearby? Is there busy traffic outside the family home? Is the wanderer known to people in the local community like café owners or train station attendants?
Since the early 2000s, various kinds of technological solutions have attempted to help those in need in various markets. Though we are making major inroads into what we have termed hierarchical positioning systems, most systems seem to fall short and so we still have many reports of wanderers falling to their deaths, or drowning, or suffering some other plight. The anguish for carers is significant. There is no respite for them, and the responsibility takes a grave toll on individuals.
This session will explore how technologies could be utilized to monitor people in need within the family home or institutional facility (e.g. wearing RFID/NFC tags) and furthermore how once traversing a public space the wanderer can be located. A number of factors can impact findability: morphological conditions, the individual’s agreement to wear a device, how to respond to mobile alerts once a trigger has been executed.
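The bounded-area trigger described above reduces, in its simplest form, to a geofence check. A minimal sketch (my illustration, not any product's algorithm; the coordinates and 200 m radius are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(home, fix, radius_m=200):
    """Trigger a carer alert when a GPS fix falls outside the safe radius."""
    return haversine_m(home[0], home[1], fix[0], fix[1]) > radius_m

home = (-34.4278, 150.8931)                       # hypothetical home coordinates
print(should_alert(home, home))                   # False: still inside the perimeter
print(should_alert(home, (-34.4178, 150.8931)))   # True: roughly 1.1 km away
```

Real systems layer much more on top — GPS fix quality, indoor positioning handover, and alert fatigue management — but every perimeter design ultimately makes this inside/outside decision.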
Participants will learn about:
- Individual wearer responses to wearable medic alert bracelets and tag technology
- In-building and on-campus solutions offered by BLE and UWB
- Advances in satellite-to-base chips (GPS sensors) used by the military
- The role of visual analytics in near real-time analysis
- Informed consent issues, duty of care, and getting privacy right
- Patterns of analysis in human activity monitoring and what that can tell us
- The importance of affordable solutions for primary carers, who usually cannot hold a full-time job while they are caring for loved ones who wander
- Coordination with emergency services for assistance in finding a missing person
Presenter: Professor Katina Michael, Faculty of Engineering and Information Sciences, University of Wollongong
Collaborator: Dr Roba Abbas, honorary fellow, School of Computing and Information Technology, University of Wollongong
Tragic stories involving wanderers
It's a devastating occurrence, but it's not rare.
About 20 times a month, a child with autism wanders off, according to national statistics tracked by the nonprofit Autism Speaks.
Two or three of those children die each month in the United States, the group’s numbers from 2017 show.
The most common cause of death will not surprise anyone who followed last weekend's disappearance of 4-year-old Chelsea Noel.
"Autistic children aged 14 years and younger are 40 times more likely to die from injury than the general pediatric population," Li said. Specifically, drowning accounts for 46% of all injury deaths among children with autism, which translates to 160 times the chance of dying from drowning compared to other children.
"The risk of drowning in autistic children peaks at age 5 to 7 years," Li said.
He explained that children with the disorder often feel anxiety, and wandering, especially toward water, is one way they seek relief. With 100,000 children newly diagnosed with autism spectrum disorders each year in the US, he added, "the first concrete step parents and caregivers could take to reduce the exceptionally high risk of accidental drowning is to enroll these children in swimming classes."
2. WANDERING AND SAFETY MAJOR CONCERNS
People with autism are seven times more likely to encounter law enforcement; communication difficulties and atypical behavior can result in serious misunderstandings; people with autism are more likely to be victims of crimes; nearly 50 percent of people with autism wander or elope from safety; and accidental drowning accounts for approximately 90% of lethal wandering outcomes. To best address the safety needs of the community, Autism Speaks facilitates a two-pronged approach including Family Safety Fairs and Autism Awareness Training for First Responders.
A recent study showed 49 percent of parents with a child on the autism spectrum had reported that child had wandered away. Of those, 65 percent were in danger of a traffic accident, and 24 percent were in danger of drowning.1
"You can put the security systems, you can put locks on the door - not just where the doorknob is, but also up high and down low. You have security cameras, motion detectors in the middle of the night," Laxton said.
"The national program partners with local first responders to connect caregivers of children and adults with cognitive disorders with wearable trackers.
Each device emits a unique frequency and a tracker can pick up a signal from it one to two miles away, which drastically reduces the time of a more “blind” search.
“For us, as a community, to have something like this, it truly is, has the capability of saving lives and being able to find people before it’s too late,” said assistant chief of police Brian Nugent.
In Hendricks County, the devices, which have a $350 up-front cost, are free for qualifying children and adults who tend to wander.
“Our goal is simply to reach out with these families, let them know this program is available and do everything in our power to facilitate that at absolutely no hassle or cost to them,” said Nugent.
Hendricks County first responders handle battery changes every 60 days and replacing cases and wristbands for free too.
The wearable isn’t just for when the wearer is in Hendricks County. The radio frequency transmitters can be picked up in another county too, as long as a nearby first responder has a tracker.
When families enrolled in the program leave the county for an extended period of time, the Project Lifesaver coordinator, Karen Hendershot, calls ahead to give the visiting county a proactive call with the wearer’s frequency information before they arrive.
No matter where the wearer is, the peace of mind gained by the caregiver relieves some of the burden of caring 24/7 for a person with a cognitive disorder, who tends to wander and is attracted to water.
“You could answer the phone or use the restroom,” said Denoon. “No matter what you try, there’s a possibility they could escape if they really want to. You can’t be with them 24/7 no matter how much you try.”
Other counties also offer the Project Lifesaver program, but many require parents or caregivers to pay at least part.
It’s only available in Hendricks County for free because of grants and donations."
Top 10 Solutions for Adults with Dementia who wander as listed by Alzheimer's.net here.
AngelSense provides caregivers a comprehensive view of their loved one’s activities, comings and goings. The device attaches to a loved one’s clothing and can only be removed by the caregiver. It provides a daily timeline of locations, routes and transit speed and sends an instant alert to caregivers if their loved one is in an unfamiliar place. Caregivers can listen in to hear what is happening around their loved one, can receive an alert if their loved one has not left for an appointment on time, allows caregivers to communicate with their loved one, and sends an alarm to locate your loved one – wherever they are.
Similar to the GPS Shoe and from the same designers, the GPS Smart Sole fits into most shoes and allows caregivers to track their loved one from any smartphone, tablet or web browser. The shoe insert is enabled with GPS technology and allows real-time syncing, a detailed report of location history, and allows users to set up a safe radius for their loved one.
iTraq is a tracking device that can be used to track almost anything – from loved ones to luggage. The tracker pairs with an app on a smartphone to find anyone and anything. For seniors, the device includes a motion or fall sensor and will send an alert if a fall is detected. It also has a temperature sensor. Their newest device, the iTraq Nano, is marketed as the world’s smallest all-in-one tracking device: it offers global tracking, has two months of battery life, is water and dust resistant, and can be charged wirelessly. The device also has an SOS button that will send an instant alert to friends and family, notifying them of their loved one’s precise location.
This device was originally created to help emergency responders treat patients who could not speak for themselves. Today, the device also helps people with dementia who wander. The device is worn as a bracelet and when a loved one goes missing, caregivers can call the police and have the police call the 24-hour hotline to get the location of the missing person. Caregivers can also call the hotline themselves to get information. In addition to a tracking device, the bracelet has important medical information engraved upon it.
Mindme offers two lifesaving devices, one is a location device, the other is an alarm. The alarm allows the user to alert a Mindme response center in case of a fall or other emergency. The locator device is specifically designed for people with dementia or other cognitive disabilities. The simple device works as a pendant that can be put in a bag or pocket and allows caregivers to track the user online at any time. Caregivers can also set a radius for the user and will be alerted if the person travels outside that zone.
PocketFinder was founded in 2005 by a single parent who wanted to know the whereabouts of his young son, especially when he wasn’t there. Their slogan, “If you love it, locate it!” sums up their philosophy and service offerings. Tracking everything from luggage to pets to children to seniors, the company offers a wide range of emerging technological products. PocketFinder is designed to be the smallest tracker on the market and the device can fit in the palm of your hand. It has a battery life up to one week and allows caregivers to track wearers through a user-friendly app. The device was updated in January 2017 and now includes three location technologies including GPS, Cell ID and Google Wi-Fi Touch. It also now has an SOS button.
The mission of Project Lifesaver is “to provide timely response to save lives and reduce potential injury for adults and children who wander due to Alzheimer’s, autism and other related conditions or disorders.” Seniors who are enrolled in Project Lifesaver are given a personal transmitter that they wear around their ankle. If they wander, the caregiver calls a local Project Lifesaver agency and a trained team will respond. Recovery times average 30 minutes, and many who wander are found within a few miles of their home. In addition to the location device, Project Lifesaver works with public safety agencies to train them on the risks associated with wandering.
Revolutionary Tracker has location-based systems to keep tabs on seniors who may wander. The company strives to “bring an unparalleled level of functionality, capability, ease of use and relevant presentation of information to give people the ability to extend communication, knowledge, protection and care for their loved ones.” Their GPS enabled personal tracker features an SOS button for emergencies and offers real-time tracking. This device allows multiple seniors to be tracked at the same time and syncs directly to a caregiver’s smart phone or computer.
9. Safe Link
Safe Link is another GPS tracking system available for people with Alzheimer’s or dementia. The product promises to “increase safety for the elderly, promote independent living and ultimately lead to a healthier lifestyle.” Safe Link is a small device carried by the person who may wander. The device periodically sends its geographic coordinates to central servers, and family members and caregivers can view the wearer’s location via a website. The device needs to be charged and worn at all times. All devices have an SOS button for emergencies.
Trax is touted as the world’s smallest and lightest live GPS tracker. The device sends position, speed and direction through the cellular network directly to an app on a smartphone. Trax comes with a clip that is easy to attach to a loved one. The app allows caregivers to set “Geofences” and will send an alert if a loved one enters or leaves a predetermined area. Trax Geofences have no size limit; caregivers can create as many fence areas as needed and can schedule when those virtual fences are in effect.
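Under the hood, a geofence alert of the kind Trax and Mindme describe reduces to a simple check: is the latest GPS fix inside the zone, and did that answer just change? The sketch below is a minimal illustration only, assuming a circular zone and a haversine distance; the function names are hypothetical and not any vendor's actual API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev_inside, lat, lon, fence):
    """Return ('enter'|'leave'|None, now_inside) given the previous state
    and a new GPS fix. `fence` is a (centre_lat, centre_lon, radius_m) tuple."""
    inside = haversine_m(lat, lon, fence[0], fence[1]) <= fence[2]
    if inside and not prev_inside:
        return "enter", inside
    if prev_inside and not inside:
        return "leave", inside
    return None, inside

# Example: a 500 m fence around a home location (hypothetical coordinates)
fence = (-34.4278, 150.8931, 500)
event, state = geofence_event(False, -34.4279, 150.8932, fence)  # fix near centre
```

A real tracker would also debounce noisy fixes near the boundary (for instance, by requiring two consecutive out-of-zone fixes) before raising an alert.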
Radio-frequency identification (RFID) has been deployed in government-mandated livestock identification schemes across the world since the 1990s. RFID in its basic function can help authorities identify animals, especially when traceability becomes paramount during disease outbreaks across regions. This session provides a view of how an RFID-enabled dairy farm can leverage mobile network infrastructure towards achieving total farm management. The data for the study was collected from two case studies, both NLIS (National Livestock Identification System) compliant dairy farms on the South Coast of New South Wales in Australia, soon after the NLIS was mandated. The Cochrane and Strong Farms were used as models to illustrate the core and auxiliary technology components of an RFID-enabled dairy farm. Beyond satisfying the regulations of government agencies for livestock to be part of a national identification system for tracking purposes, farmers are now venturing beyond mere basic compliance systems. Once installed, farmers have begun to realize that their initial capital investment in an RFID system holds great strategic potential. The initial outlay, while substantial, is a once-only cost that, with a few more application-centric uses, can yield a manifold return on investment. This workshop session provides an end-to-end view of the infrastructure and processes required to achieve an advanced, state-of-the-art RFID-enabled dairy farm.
Participants will learn about:
- Regulatory changes in the livestock industry: identification, traceability
- Mandatory components for RFID-enabled dairy farms
- RFID tags and boluses, herd management software, fixed RFID reader, digital network
- Auxiliary components for RFID-enabled dairy farms
- Portable readers, weight scales, automated feed-dropping controllers, milk meters, milking controller units, drafting gates, temperature monitoring, tracking, calf-feeding machines
- Benefits of total farm management
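To make the core and auxiliary components above concrete, here is a minimal sketch of how a herd-management layer might tie a fixed RFID reader at the milking bail to a milk meter, keyed on the tag's electronic ID. All class and field names here are hypothetical; commercial NLIS-compliant herd software differs by vendor.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Cow:
    tag_id: str          # electronic ID from the RFID ear tag or rumen bolus
    name: str
    milk_yields: list = field(default_factory=list)

class HerdRegistry:
    def __init__(self):
        self._by_tag = {}

    def register(self, cow):
        self._by_tag[cow.tag_id] = cow

    def on_read(self, tag_id, milk_kg, when=None):
        """Called when the fixed reader reports a tag at milking time.

        Looks up the animal and appends the milk-meter reading, building the
        per-animal history that supports total-farm-management decisions
        (feed dropping, drafting, health monitoring)."""
        cow = self._by_tag.get(tag_id)
        if cow is None:
            return None  # unknown tag: flag for traceability follow-up
        cow.milk_yields.append((when or datetime.now(), milk_kg))
        return cow

registry = HerdRegistry()
registry.register(Cow("982 000123456789", "Daisy"))
cow = registry.on_read("982 000123456789", milk_kg=14.2)
```

The same lookup-then-log pattern extends to the auxiliary components: a weight scale or calf-feeding machine simply appends a different kind of reading to the same animal record.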
Presenter: Professor Katina Michael, Faculty of Engineering and Information Sciences, University of Wollongong
Collaborator: Mr Adam Trevarthen, alumnus of the University of Wollongong (for identification purposes only)
NSW Health Promotion Annual Forum 2018:
Heather Nesbitt - Greater Sydney Commission
Julian Bolleter - University of Western Australia
Katina Michael - University of Wollongong
Michael J Gadiel - Treasury NSW
1.a) External Panel – the world in 20 years
This session will be a panel of experts discussing how the world, and NSW in particular, will change over the next 20 years. This will include, but not be limited to, the following areas:
· Demographics and population change
· Economic landscape
· Artificial intelligence
· Transport and urbanisation
Biography: Dr Katina Michael is with the Faculty of Engineering and Information Sciences at the University of Wollongong. She researches emerging technologies and their societal implications. One of her areas of interest is screen time and the youth obesity epidemic. Today, Katina makes use of simple AI-based productivity tools to ensure work can be done while on the move. Katina is a senior member of the IEEE and has held several editorial roles, among them chief editor of IEEE Technology & Society Magazine and IEEE Consumer Electronics Magazine.
Comments from the Audience
I have uploaded the slides of my presentation above. A whole workshop on this topic would have been possible, but I somehow had to condense it into one hour.
Before anyone writes to me telling me that I do not understand the value of biomedical devices to patients, a disclaimer that I am ALL for implantables that markedly improve the quality of life of recipients. It is hard to argue that someone who has lived daily with severe dystonia should not at least try deep brain stimulation as a way to have some semblance of a normal life. The same can be said for those who are battling Parkinson's Disease and Tourette Syndrome. I am a little more skeptical when it comes to brain pacemakers for Major Depressive Disorder, but there are signs that the technique is working for at least some patients. It is important to add that brain pacemakers are not a cure for these diseases, physical or mental; they merely keep at bay the symptoms of living with the disease.
So, yes, I accept prostheses under obvious circumstances. Prostheses by choice, that is, when there is no restorative functional requirement, I am not a fan of. I really do not believe we should be fiddling with the human body. I do not rule out body pacemakers for health purposes, but even there I would caution that we are nowhere near a clinically proven solution, despite some manufacturers having made their intention clear that they want all of us to be bearing implants for our own good.
When it comes to implantables for business applications, or indeed personal use (e.g. biohacking or citizen science), I become even more skeptical. Who would want an implant owned and operated by an employer or service provider? And furthermore, who would want an implant that is simply for personal use in a home location? In the case of the latter, it seems a lot of people want to be 'advanced' cyborgs using old tech, just so as not to have to worry about physical access control mechanisms and carry keys, for instance. As I have read quite often: "if you can wear it, why bear it?" Yes, I understand the issues of transferability, leaving it behind, or even losing it. But these issues can arise if the device is embedded as well.
For the time being the risks we are faced with differ, because only a small number of people in the world have implantables for convenience. Interestingly, however, about 10% of Americans now bear some type of biomedical implant (joint, ear, heart, brain etc.). As that percentage rises in the next decade, due to affordability and an ageing population, there may well be more like 15-20% of citizens residing in more developed nations who bear an implant. Risk therefore becomes a huge topic of consideration.
In this presentation I am working from the angle that we have to know the risks that we are faced with today, and educate engineers and physicians and non-engineers (e.g. citizens) about why getting an implant is not a straightforward decision. When we look at the Engineers Australia Code of Ethics, there really is some excellent advice on ways forward:
- Demonstrate integrity - respect the dignity of all persons
- Practice competently - act on the basis of adequate knowledge
- Exercise leadership - communicate honestly
- Promote sustainability - practise engineering to foster the health, safety and wellbeing of the community
Audio of Event
There are three audio MP3s that accompany the presentation above. Please listen to each of these:
- Katina Michael's opening introductory comments (30 min)
- Andreas Sjöström of Sogeti (a division of CapGemini) single case study: 'What did I learn by implanting a chip in my body?' (20 min)
- Katina Michael's closing comments and Q&A (15 min)
Joint Institution Lecture Series: Microchipping People - The Risks
There is nothing new about placing materials into the human body for prosthetic purposes. Since 1959 when an internal pacemaker was implanted into a patient, we have seen a proliferation of biomedical devices made from different chemical compositions (e.g. chromium, nickel, cobalt, titanium). Over this time, the implants have become much smaller in size, some manufacturers are even calling for their insertion into every human for personalised medicine.
We hear that implants are now not only surgically placed in the heart or joints or ears, but since 1987 have also made their debut in the brain and retina. There is now a diverse range of use cases for passive implantable devices in the form of RF identification tags, marketed for multiple applications like identity tokens and physical access controllers. While we have a grasp of the known risks associated with biomedical devices, the risks associated with the open market of embedding microchips in voluntary participants are less understood.
Most do-it-yourselfer implantees will say: “if it’s good enough for my dog or cat, then it’s good enough for me”. Are the risks surrounding implantables (medical and non-medical) exaggerated or do we need further research to ascertain their short-term and long-term effects on the human body?
This presentation will discuss the risks associated with microchipping people for any reason, and will consider what the normalisation of biomedical devices for non-medical applications might mean in society at large in terms of risk.
About the Speaker: Professor Katina Michael, School of Computing & IT, University of Wollongong:
Joined UOW in 2002. Prior to joining UOW, Katina spent 5 years working at Nortel Networks as a senior engineer. She researches emerging technologies, societal implications, and national security. Katina is a senior member of the IEEE and has held several editorial roles, among them chief editor of IEEE Technology & Society Magazine, and IEEE Consumer Electronics Magazine. PhD, MTransCrimPrev, BIT. www.katinamichael.com.
Videos from the Presentation
There are more than 413,106 Australians living with dementia. Australia's population is 24.13 million. Without a medical breakthrough, the number of people with dementia is expected to reach 1,100,890 by 2056. Currently around 244 people each day are joining the population with dementia. Dementia is the second leading cause of death of Australians contributing to 5.4% of all deaths in males and 10.6% of all deaths in females each year.
As we live longer due to medical breakthroughs, as demonstrated by average life expectancy (82.45 years in Australia, compared with 83.84 in Japan and 78.74 in the USA), and are able to see more of life, our quality of life seems to be diminishing in other respects. Futurists like Ray Kurzweil describe notions of the Singularity, and yet families living with dementia face complexities every day. Is there a solution to this growing problem? Transhumanists will say, yes!
In 2008, I had an article in the Illawarra Mercury that caught the attention of a gentle man, Kenneth Lea. Kenneth lived in Thirroul and we spent some time together discussing how location technologies might help carers with loved ones suffering from dementia. I visited Diggers in Corrimal with Kenneth to meet his beautiful wife. Kenneth had done everything to help his wife enjoy the comforts of home before the disease progressed and it was no longer safe for her to be there. I was heavily pregnant with my second child that year, but with Kenneth's handwritten letters I was moved to learn more about his story. With his patience, I was catapulted into what seemed a foreign world. I got to meet other carers also. They helped to formulate the opinions I have today with respect to how technology can aid sufferers and carers alike. There is also the wonderful work of Lyn Phillipson and her team at the Centre for Health Initiatives (CHI) at the University of Wollongong that I have always respected.
Some months ago I had the immense joy of meeting Suzi Jowsey Fetherstone. Her mother Patricia's story of Alzheimer's Disease (a form of dementia) was documented by AttitudeLive in 2014. I watched this episode last week for the first time. I was moved by many things. This is what I want to share with you when I see you at U3A. After watching this documentary aptly titled "Together Apart" you will understand the title of my presentation "Dealing with Dementia Gracefully". Perhaps, there is nothing graceful about dementia as a 'disease'. But how we honour, understand and respect our loved ones when they regress at their end of life stage, if they fall victim to dementia or Alzheimer's Disease can be graceful. Suzi Jowsey, and her father Victor, tell their intimate story. It is a celebration of Patricia's life, then and now. At my U3A presentation we will watch this documentary together and then have an open discussion about what we learnt from it.
Abstract: It is estimated in the USA alone that 10% of the population have implantable devices for therapeutic purposes. While we do not know the exact figure for the number of people who bear prosthetic implantables worldwide (as these are not monitored by authorities), we can speculate that it is in the hundreds of millions, given individual reports of various types, including biomedical devices for the heart, brain, ears, eyes, throat, hip, knee, drug delivery etc. While the market is dominated by a few biomedical industry manufacturers specialising in various implantable devices, some segments can still be considered to be in their early phases of exploration, especially with respect to brain pacemakers. Over time, these devices will proliferate not just for therapeutics, but will be repurposed for everyday applications in the name of convenience. Just one example of this is the offering of music services to those with cochlear implants through an iPhone app.
Biomedical companies who help the deaf to hear, stabilise the effects of Parkinson’s disease or Tourette’s syndrome, or provide limited mobility to those who are either wheelchair-bound or have lost a limb are looked upon as providing near-miraculous services. But speak to any recipient of an internal biomedical device, and you will soon realise the complexity of surgery, rehabilitation, recovery and, most importantly, ongoing maintenance for those bearing more complex devices. Companies until now have acknowledged the risks of undergoing invasive procedures (e.g. infection), have recalled products that have been defective, and have even acknowledged device-level misprogramming. More recently, after years of speculation, cybersecurity threats directed at implantables have also been acknowledged by some biomedical suppliers, and measures put in place to secure those devices that were vulnerable to cyberattack. “Engagement”, at multiple levels, has never been more critical for patients suffering from critical and chronic conditions.
And yet, we are still in the nascent stages of developing feedback loops direct from recipients of biomedical devices back into product lifecycle management processes (including product improvement). If anyone knows the true impact of biomedical implantables for therapeutics, it is the patient. For now at least, there are limited ways in which patients can communicate with the manufacturers who have supplied these life-sustaining devices. A patient’s first port of call is usually their medical specialist, then a letter to the company itself describing the issues at hand (usually submitted via a company web page), possibly communications, in the case of the USA, with the Food and Drug Administration (e.g. the MAUDE database) and the Federal Communications Commission, and with other non-government agencies. While there is confirmation that patient documentation has been received by the biomedical company, there is little evidence to suggest that anything has been done to address the significant concerns that patients have encountered.
Patients are caught in a difficult situation which has them weighing up life sustaining technologies that could possibly be faulty or not work appropriately, versus medical conditions that are near impossible to live with. The attitude from biomedical companies for now seems to be: we utterly care about your well-being but we’ve given you something that has bettered your life, so do not criticize or complain about the very product that is keeping you alive. This is not patient engagement, and this is unacceptable if we continue down the path that encourages implantables for diverse medical conditions not yet even addressed. I predict that as a greater number of players enter the market, biomedical companies will have to address patient concerns in a more rigorous and public manner, thinking deeply about participatory healthcare.
One of the ways in which companies are considering gaining quantitative product feedback from patients is by connecting them to an Internet of Medical Things (IoMT) infrastructure. This feedback is auto-generated by machine-to-machine protocols that exclude direct input from the patient, sending back to base information about the availability of a device, misbehaving devices and the like. IoMT detracts from the qualitative, person-related feedback provided by a patient, which, though time-consuming to gather, allows evidence-based input into the product improvement process. Apart from the serious potential for cyberattacks on Internet of Things devices, most patients do not want their implantable devices connected to the Internet in any shape or form, especially if they are related to the brain. We need to get serious about product lifecycle management, and the richest source of feedback has always been the end user; it just so happens that this type of end user has an embedded device on which their life depends.
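To illustrate the gap being described, the sketch below contrasts a machine-generated IoMT status report with the patient-reported channel it omits. The field names and report shape are purely hypothetical, not any manufacturer's actual telemetry format.

```python
import json
from datetime import datetime, timezone

def device_status_report(device_id, battery_pct, fault_codes):
    """Machine-generated status report -- the kind of feedback IoMT automates."""
    return {
        "device_id": device_id,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "battery_pct": battery_pct,
        "fault_codes": fault_codes,   # e.g. lead impedance out of range
        "patient_report": None,       # the qualitative channel M2M telemetry omits
    }

def attach_patient_report(report, text):
    """Reattach the patient's own account -- the evidence-based, qualitative
    input that the abstract argues should feed product improvement."""
    report = dict(report)
    report["patient_report"] = text
    return report

report = device_status_report("DBS-001", battery_pct=78, fault_codes=[])
report = attach_patient_report(
    report, "Tingling near the electrode site since last adjustment.")
payload = json.dumps(report)  # what might be sent back into lifecycle management
```

The point of the contrast: the first function can be generated with no patient involvement at all, while the second requires a deliberate, human feedback channel.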
Katina Michael is a professor in the Faculty of Engineering and Information Sciences at the University of Wollongong. She is Editor in Chief of IEEE Technology and Society Magazine, and Senior Editor of IEEE Consumer Electronics Magazine. Katina has previously served as a representative of Consumers Federation of Australia between 2010 and 2016. She has been researching the socio-ethical implications of biomedical devices over the last 20 years.
This will be a keynote delivery at the IEEE Life Sciences Conference in Sydney, NSW.
Photos from Day Two (December 14, 2017)
IEEE Life Sciences Conference
Title of Panel: From Wearables to Implantables that Measure and Enhance Human Behaviour: What can we do already? And where are we headed?
Estimated Time: 1 hour
Structure: Each panellist will have 10 minutes to present their case. The moderator will then spend 20 minutes in discussion. Finally, the audience will be invited to ask questions of each panellist for 10 minutes.
11.30am-12.30pm Thursday (14 Dec)
Moderator: Katina Michael
Biography: Katina Michael is a professor in the Faculty of Engineering and Information Sciences at the University of Wollongong. She is Editor in Chief of IEEE Technology and Society Magazine, and Senior Editor of IEEE Consumer Electronics Magazine. Katina has previously served as a representative of Consumers Federation of Australia between 2010 and 2016. She has been researching the socio-ethical implications of biomedical devices over the last 20 years.
Panelist 1: Ms Shanti Korporaal
Shanti Korporaal is a Futurist, Serial Entrepreneur, Speaker, Facilitator, Whisky Chick and, most of all, lives for Lightbulb moments. With her husband, Skeeve Stevens, she runs eight businesses with offices in two countries, Australia and Cambodia. In life and in business they make a great team: Skeeve is the visionary and ideas person, and Shanti is the practical, tactical implementer. She is co-founder and Director of Future Sumo, VR the World, Chip My Life, Niisch, eintellego Networks, eintellego Networks (Cambodia) and Elastic Venues (Cambodia). All of her companies are about empowering her clients to grow and flourish in their own businesses or departments.
Panelist 2: Mr Meow Meow
Meow is the founder of BioFoundry Inc Australia. He is a citizen scientist whose lab dabbles in wearable and implantable technology, among other biohacking applications. His website is http://foundry.bio/. He has been featured in Bloomberg’s Hello World documentary in 2016. He was also the first person to implant an Opal card NFC device into his hand. He is a molecular biologist by qualification and training.
Panelist 3: Rebecca Herold
Rebecca has 25+ years of systems engineering, information security, privacy and compliance experience. She is CEO of The Privacy Professor® consultancy, which she founded in 2004, and President of SIMBUS, LLC, an information security, privacy, technology and compliance cloud service she founded in 2014. Rebecca engineered the SIMBUS architecture, including risk assessments, an LMS, and a breach calculator and management system, among others. Rebecca has authored 19 books, contributed to dozens of other books, and written hundreds of articles. Rebecca led the NIST Smart Grid Privacy Subgroup for 7 years, was a co-founder/officer of the IEEE P1912 Privacy and Security Architecture for Consumer Wireless Devices Working Group, and is on many advisory boards. Rebecca was an Adjunct Professor for the Norwich University MSISA program for 9 years, has received numerous awards, and has provided keynotes on five continents. Rebecca appears regularly on the KCWI23 television show and is quoted in diverse publications. Rebecca is based in Des Moines, Iowa, USA.
Conference Link: http://lsc.ieee.org/2017/
Photos from the Panel
Participants in each session will be divided into four specific topics of increasing importance as we move into the next 15+ years, namely Demographics & Population Change, AI and Automation, The Economic Landscape and Urbanisation.
In particular, the participants will focus on three main questions:
- What are the big shifts or trends we anticipate in the next 15+ years?
- How do we see these impacting the public service?
- What should we as public servants be doing to prepare?
Four Thematic Pillars (Examples Only)
1. Automation, Warehouses, Fulfillment
2. Health, Medical and Well-Being
3. Law Enforcement, Surveillance and Monitoring, Social Media, Body Wearables, Audio-Visual Analytics
4. Personal Communications, Translation, Digital Chronicling, Personas
My contribution will be under the topic – AI and Automation with respect to Demographics & Population Changes.
Biography: Katina Michael is a Professor in the Faculty of Engineering and Information Sciences at the University of Wollongong, Australia. She is the editor-in-chief of IEEE Technology and Society Magazine and Senior Editor of IEEE Consumer Electronics Magazine. She has been a recipient of telecommunications-centric grants on location services policy, and has been the convener of the annual social implications of national security workshop since the beginning of the series in 2006 with the Research Network for a Secure Australia (RNSA). Katina has consulted for government agencies and defence organisations, such as Prime Minister and Cabinet, Booz Allen, the Commissioner for Law Enforcement Data Security and the Defence Science and Technology Organisation (DSTO). She is a board member of the Australian Privacy Foundation (APF) and has previously been an active member of the Consumers Federation of Australia (CFA). Previously Katina worked for one of the world’s largest telecommunications vendors, with secondments throughout Asia and North America. Katina’s PhD was in automatic identification; she has a Master of Transnational Crime Prevention from the Faculty of Law at UOW, and she completed her Bachelor of IT in the cooperative program at the University of Technology, Sydney, with semester employment at Andersen Consulting and United Technologies.
- 1943: WW2 triggers fresh thinking. World War Two brought together scientists from many disciplines, including the emerging fields of neuroscience and computing. In Britain, mathematician Alan Turing and neurologist Grey Walter were two of the bright minds who tackled the challenges of intelligent machines.
- 1950: Science fiction steers the conversation. In 1950, I, Robot was published – a collection of short stories by science fiction writer Isaac Asimov, who devised the Three Laws of Robotics to prevent intelligent machines from turning evil.
- 1956: A top-down approach. The term 'artificial intelligence' was coined for a summer conference at Dartmouth College, organised by a young computer scientist, John McCarthy. Top scientists debated how to tackle AI. Some, like influential academic Marvin Minsky, favoured a top-down approach: pre-programming a computer with the rules that govern human behaviour. Others preferred a bottom-up approach, such as neural networks that simulated brain cells and learned new behaviours. Over time Minsky's views dominated, and together with McCarthy he won substantial funding from the US government, which hoped AI might give it the upper hand in the Cold War.
- 1968: 2001: A Space Odyssey – imagining where AI could lead. Minsky influenced science fiction too. He advised Stanley Kubrick on the film 2001: A Space Odyssey, featuring an intelligent computer, HAL 9000.
- 1969: A tough problem to crack. AI was lagging far behind the lofty predictions made by advocates like Minsky – something made apparent by Shakey the Robot. Shakey was the first general-purpose mobile robot able to make decisions about its own actions by reasoning about its surroundings. It built a spatial map of what it saw, before moving. But it was painfully slow, even in an area with few obstacles. Each time it nudged forward, Shakey would have to update its map. A moving object in its field of view could easily bewilder it, sometimes stopping it in its tracks for an hour while it planned its next move.
- 1973: The AI Winter. By the early 1970s AI was in trouble. Millions had been spent, with little to show for it. There was strong criticism from the US Congress and, in 1973, leading mathematician Professor Sir James Lighthill gave a damning health report on the state of AI in the UK. His view was that machines would only ever be capable of an "experienced amateur" level of chess. Common sense reasoning and supposedly simple tasks like face recognition would always be beyond their capability. Funding for the industry was slashed, ushering in what became known as the AI winter.
- 1981: A solution for big business. The moment that historians pinpoint as the end of the AI winter was when AI's commercial value started to be realised, attracting new investment. The new commercial systems were far less ambitious than early AI. Instead of trying to create a general intelligence, these ‘expert systems’ focused on much narrower tasks. That meant they only needed to be programmed with the rules of a very particular problem. The first successful commercial expert system, known as R1, began operation at the Digital Equipment Corporation, helping to configure orders for new computer systems. By 1986 it was saving the company an estimated $40m a year.
- 1990: Back to nature for bottom-up inspiration. Expert systems couldn't crack the problem of imitating biology. Then AI scientist Rodney Brooks published a new paper: Elephants Don’t Play Chess. Brooks was inspired by advances in neuroscience, which had started to explain the mysteries of human cognition. Vision, for example, needed different 'modules' in the brain to work together to recognise patterns, with no central control. Brooks argued that the top-down approach of pre-programming a computer with the rules of intelligent behaviour was wrong. He helped drive a revival of the bottom-up approach to AI, including the long unfashionable field of neural networks.
- 1997: Man vs Machine. Supporters of top-down AI still had their champions: supercomputers like Deep Blue, which in 1997 took on world chess champion Garry Kasparov. The IBM-built machine was, on paper, far superior to Kasparov - capable of evaluating up to 200 million positions a second. But could it think strategically? The answer was a resounding yes. The supercomputer won the contest, dubbed 'the brain's last stand', with such flair that Kasparov believed a human being had to be behind the controls. Some hailed this as the moment that AI came of age. But for others, this simply showed brute force at work on a highly specialised problem with clear rules.
- 2002: The First Robot for the Home. Rodney Brooks' spin-off company, iRobot, created the first commercially successful robot for the home – an autonomous vacuum cleaner called Roomba. Cleaning the carpet was a far cry from the early AI pioneers' ambitions. But Roomba was a big achievement. Its few layers of behaviour-generating systems were far simpler than Shakey the Robot's algorithms, and were more like Grey Walter’s robots over half a century before. Despite relatively simple sensors and minimal processing power, the device had enough intelligence to reliably and efficiently clean a home. Roomba ushered in a new era of autonomous robots, focused on specific tasks.
- 2005: War Machines. Having seen their dreams of AI in the Cold War come to nothing, the US military was now getting back on board with this new approach. They began to invest in autonomous robots. BigDog, made by Boston Dynamics, was one of the first. Built to serve as a robotic pack animal in terrain too rough for conventional vehicles, it has never actually seen active service. iRobot also became a big player in this field. Their bomb disposal robot, PackBot, marries user control with intelligent capabilities such as explosives sniffing. Over 2000 PackBots have been deployed in Iraq and Afghanistan.
- 2008: Starting to crack the big problems. In November 2008, a small feature appeared on the new Apple iPhone – a Google app with speech recognition. It seemed simple. But this heralded a major breakthrough. Despite speech recognition being one of AI's key goals, decades of investment had never lifted it above 80% accuracy. Google pioneered a new approach: thousands of powerful computers, running parallel neural networks, learning to spot patterns in the vast volumes of data streaming in from Google's many users. At first it was still fairly inaccurate but, after years of learning and improvements, Google now claims it is 92% accurate.
- 2010: Dance Bots. At the same time as massive mainframes were changing the way AI was done, new technology meant smaller computers could also pack a bigger punch. These new computers enabled humanoid robots, like the NAO robot, which could do things predecessors like Shakey had found almost impossible.
- 2011: Man vs Machine (Watson). In 2011, IBM's Watson took on the human brain on US quiz show Jeopardy. This was a far greater challenge for the machine than chess. Watson had to answer riddles and complex questions. Its makers used a myriad of AI techniques, including neural networks, and trained the machine for more than three years to recognise patterns in questions and answers. Watson trounced its opposition – the two best performers of all time on the show. The victory went viral and was hailed as a triumph for AI.
- 2014: Are machines intelligent now? Sixty-four years after Turing published his idea of a test that would prove machine intelligence, a chatbot called Eugene Goostman finally passed. But very few AI experts saw this as a watershed moment. Eugene Goostman was seen as 'taught to the test', using tricks to fool the judges. It was other developments in 2014 that really showed how far AI had come in 70 years. From Google's billion dollar investment in driverless cars, to Skype's launch of real-time voice translation, intelligent machines were now becoming an everyday reality that would change all of our lives.
"Computers have been over-sold. Understandably enough as they are very big business indeed. It's common knowledge that some firms bought computers in the expectation of benefits which failed to materialise. My concern tonight however is with the over-selling of the longer term future of computers. The scientific community has a heavy responsibility to put forward the facts to avoid the public being seriously misled. Just as the US National Academy of Sciences did in 1966 when it reported that enormous sums of money had been spent on the aim of language translation by computer with very little useful result, a conclusion not subsequently shaken. Failures continually occurred also in computer recognition of human speech, of handwritten letters and in automatic proving of theorems in higher mathematics." (Professor Sir James Lighthill, 1973)
Roy Amara has said in contrast:
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
"Dave this conversation can serve no purpose anymore. Goodbye." (Voice of HAL9000)
"I can see you're really upset about this. I honestly think you've got to sit down calmly, take a stress pill, and think things over. I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission." (Voice of HAL9000)
It is amazing to watch a professional game of chess. The opening moves happen so quickly that the human eye can barely keep up, let alone 'compute' them mentally. A computer making a move, by metaphorical contrast, plays out every possible future step before moving, within a fraction of a second, far too fast for the naked eye to follow.
Kasparov says: "...you said, is it for better or worse? It's happening, period. The technology is neither good nor bad, it's agnostic. You know you can do many great things with your mobile phone but you can also create a terrorist network. So it's happening, and we just have to adjust."
"[Context: human and computer vs machine]. I can tell you the quality was not very high because there was a limited amount of time and it was so new for us, how to use the machine. Eventually I realized – we had many events, the so-called freestyle events on the internet, that proved, and it could sound quite ironic, but you don't need a very strong player to get the best result of a human-plus-machine combination. It could sound like heresy now, but I would say that you don't want a strong player; you need a good operator and a decent player."
Who is the Partnership on AI? (https://www.partnershiponai.org/)
7 Important Lessons for Making AI Predictions by Rodney Brooks
Overestimating and underestimating
Imagining Magic: See here Arthur C. Clarke's three laws:
- When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
- The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
- Any sufficiently advanced technology is indistinguishable from magic.
Performance vs competence
- People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system.
- Today’s robots and AI systems are incredibly narrow in what they can do. Human-style generalizations do not apply.
- When people hear that machine learning is making great strides in some new domain, they tend to use as a mental model the way in which a person would learn that new domain. However, machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in, making rapid progress in a new domain without having to be surgically altered or purpose-built.
- Likewise, when people hear that a computer can beat the world chess champion (in 1997) or one of the world’s best Go players (in 2016), they tend to think that it is “playing” the game just as a human would. Of course, in reality those programs had no idea what a game actually was, or even that they were playing. They were also much less adaptable. When humans play a game, a small change in rules does not throw them off. Not so for AlphaGo or Deep Blue.
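The brittleness described above can be made concrete with a toy sketch. The tiny nearest-centroid "classifier" below is fit on an invented dataset; it performs sensibly inside its narrow domain but keeps answering confidently on absurd inputs, with no competence or awareness beyond the data it was built from.

```python
# A toy illustration of performance vs competence: a 1-D nearest-centroid
# classifier on invented measurements of two made-up piece types.

def fit_centroids(examples):
    """examples: dict mapping label -> list of feature values."""
    return {label: sum(vals) / len(vals) for label, vals in examples.items()}

def predict(centroids, x):
    """Return the label whose centroid is closest to x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# "Training data": heights (cm) of two invented piece types.
centroids = fit_centroids({"pawn": [2.0, 2.2, 1.9], "rook": [4.0, 4.3, 3.8]})

print(predict(centroids, 2.1))    # in-distribution: behaves sensibly
print(predict(centroids, 400.0))  # absurd input: still answers confidently,
                                  # with no notion that anything is wrong
```

A human who learned to tell pawns from rooks would notice that a 4-metre "piece" is not a chess piece at all; the program cannot, because that competence was never part of the narrow task it was fit to.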
Exponentialism
- When people are suffering from exponentialism, they may think that the exponentials they use to justify an argument are going to continue apace. But Moore’s Law and other seemingly exponential laws can fail because they were not truly exponential in the first place.
Hollywood scenarios
- It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged. They ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences.
Speed of deployment
- A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products.
- Nothing could be further from the truth. Almost all innovations in robotics and AI take far, far, longer to be really widely deployed than people in the field and outside the field imagine.
1. SAFETY CRITICAL AI
Advances in AI have the potential to improve outcomes, enhance quality, and reduce costs in such safety-critical areas as healthcare and transportation. Effective and careful applications of pattern recognition, automated decision making, and robotic systems show promise for enhancing the quality of life and preventing thousands of needless deaths.
However, where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.
We will pursue studies and best practices around the fielding of AI in safety-critical application areas.
2. FAIR, TRANSPARENT, AND ACCOUNTABLE AI
AI has the potential to provide societal value by recognizing patterns and drawing inferences from large amounts of data. Data can be harnessed to develop useful diagnostic systems and recommendation engines, and to support people in making breakthroughs in such areas as biomedicine, public health, safety, criminal justice, education, and sustainability.
While such results promise to provide great value, we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data. This can lead to actions and recommendations that replicate those biases, and suffer from serious blindspots.
Researchers, officials, and the public should be sensitive to these possibilities and we should seek to develop methods that detect and correct those errors and biases, not replicate them. We also need to work to develop systems that can explain the rationale for inferences.
We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.
3. COLLABORATIONS BETWEEN PEOPLE AND AI SYSTEMS
A promising area of AI is the design of systems that augment the perception, cognition, and problem-solving abilities of people. Examples include the use of AI technologies to help physicians make more timely and accurate diagnoses and assistance provided to drivers of cars to help them to avoid dangerous situations and crashes.
Opportunities for R&D and for the development of best practices on AI-human collaboration include methods that provide people with clarity about the understandings and confidence that AI systems have about situations, means for coordinating human and AI contributions to problem solving, and enabling AI systems to work with people to resolve uncertainties about human goals.
4. AI, LABOR, AND THE ECONOMY
AI advances will undoubtedly have multiple influences on the distribution of jobs and nature of work. While advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.
Discussions are rising on the best approaches to minimizing potential disruptions, making sure that the fruits of AI advances are widely shared and competition and innovation is encouraged and not stifled. We seek to study and understand best paths forward, and play a role in this discussion.
5. SOCIAL AND SOCIETAL INFLUENCES OF AI
AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions.
We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.
6. AI AND SOCIAL GOOD
AI offers great potential for promoting the public good, for example in the realms of education, housing, public health, and sustainability. We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society’s most pressing challenges.
Some of these projects may address deep societal challenges and will be moonshots – ambitious big bets that could have far-reaching impacts. Others may be creative ideas that could quickly produce positive results by harnessing AI advances.
Over the next five years, this project will study the impact that genetics, environmental factors, daily habits and the human microbiome have on the cognition of older adults.
This collaborative research initiative will also use artificial intelligence (AI) systems to comb through massive amounts of data with the goal of promoting healthier living. We want caring for the older population to be not just palliative, but preventive. Rather than treating serious cognitive decline, we seek ways to stop it.
Automating a job can result in more of those jobs. Here's the theory explained:
- Lower prices, which make the firm's products more appealing and create increased demand, which may lead to the need for more workers.
- More profit or higher wages, which may lead to increased investment or increased consumption, which can also lead to more production, and thus, more employment.
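The theory above can be made concrete with some stylized arithmetic. All of the numbers below are invented for illustration, including the assumed demand response to the price cut:

```python
# Stylized arithmetic for the automation-and-jobs theory: automation cuts
# labour per unit, the price falls, demand rises, and total labour demand
# can rise even though each unit needs less labour. Numbers are invented.

def workers_needed(units_sold, labour_hours_per_unit, hours_per_worker=2000):
    """Total workers implied by output and labour intensity."""
    return units_sold * labour_hours_per_unit / hours_per_worker

# Before automation: 100,000 units sold, 10 labour-hours per unit.
before = workers_needed(100_000, 10)

# After automation: half the labour per unit, but the lower price is
# assumed to triple demand (an illustrative elasticity, not a forecast).
after = workers_needed(300_000, 5)

print(before, after)  # employment rises despite the labour saving per unit
```

Whether employment actually rises depends entirely on how strongly demand responds to the lower price; with a weaker response, the same arithmetic yields job losses.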
Amazon offers a more modern example of this phenomenon. The company has over the last three years increased the number of robots working in its warehouses from 1,400 to 45,000. Over the same period, the rate at which it hires workers hasn’t changed.
Automation doesn’t necessarily make humans obsolete
In 2013, researchers at Oxford sparked fear of the robot revolution when they estimated that almost half of US occupations were likely to be automated. But three years later, McKinsey arrived at a very different number. After analyzing 830 occupations, it concluded that just 5% of them could be completely automated.
The two studies obviously counted differently. The Oxford researchers assessed the probability that occupations would be fully automated within a decade or two. But automation is more likely to replace part of a job than an entire job. When Amazon installs warehouse robots, they currently don’t replace full workers, but rather, the part of the job that involves fetching products from different shelves. Similarly, when my colleague used artificial intelligence to transcribe an interview, we didn’t fire him; he just worked on the other parts of his job. McKinsey’s researchers’ model didn’t attempt to sort jobs into “replaceable” and “not replaceable,” but rather to place them on a spectrum of automation potential.
In the 1930s, economist John Maynard Keynes famously coined the term “technological unemployment.” Less famous is the argument he was making at the time. His case wasn’t that impending technology doomed society to prolonged massive unemployment, but rather that a reaction to new technology should neither assume the end of the world nor refuse to recognize that the world had changed. From his essay, Economic Possibilities For Our Grandchildren:
The prevailing world depression, the enormous anomaly of unemployment in a world full of wants, the disastrous mistakes we have made, blind us to what is going on under the surface, to the true interpretation of the trend of things. For I predict that both of the two opposed errors of pessimism which now make so much noise in the world will be proved wrong in our own time: the pessimism of the revolutionaries who think that things are so bad that nothing can save us but violent change, and the pessimism of the reactionaries who consider the balance of our economic and social life so precarious that we must risk no experiments.
Caption: So what's it going to be? Freedom or enslavement? Or more of the same old same old?
Important work is being carried out by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. More recent new standards projects in this space can be found here. A comprehensive discussion document on Ethically Aligned Design can be found here.
An exceedingly optimistic view of, and prediction for, AI. The potential bad results from AI are just as great as the potential benefits. It all depends upon how the AI is engineered, implemented, and TESTED before implementation (I capitalized because I've seen a trend in the past 15 years for software and systems to NOT be tested enough before being put into production).
- What ethical computing rules will be required and followed when creating these AI applications, systems and networks? Will ethical computing rules even be considered?
- What types of security (data and physical) controls and rules will be implemented? Will there be any? Will they be appropriate? Adequate?
- What types of privacy controls and rules will be implemented? How will they be determined? Will they truly support privacy?
Will it ever be possible for AI to do wrong the right way?
Siri and other AI programs that attempt to converse with or listen to humans can appear to make more mistakes than we do. Correcting these errors so that AI is flawless is a holy grail for programmers and roboticists. Once AI is able to understand when it has gone wrong, surely it will then be able to exhibit self-reflection and adapt accordingly, just as we humans do? Notwithstanding our many achievements, we humans are also much of the time deeply error-prone. Humans make mistakes, period. Yet a select few of our mistakes are not regretted; rather, they are 'happy accidents'. The term ‘happy accident' is an idiom for serendipitous discovery. Human history abounds with such beneficial errors: the invention of penicillin, stainless steel, velcro; the list goes on. In this discussion we explore how AI will need to be able to profit from its wrongs, learn which accidents could be ‘happy’ ones, and right its wrongs creatively in order to become truly more like humans. By considering creative industries and labour-saving devices, we tackle the complex reflexivity required to be able to do wrong the right way.
Part 1. Listening without Ears: Artificial Intelligence in the Creative Labour of Audio Post-Production (Dr Thomas Birtchnell - 15 mins).
Part 2. Sensemaking with AI: The Future of Intelligent Assistants, Bots, and Deep Learners (Professor Katina Michael - 15 mins).
Part 3. DISCUSSION (30 mins)
Biography: Katina Michael is a professor in the Faculty of Engineering and Information Sciences at the University of Wollongong. She is Editor in Chief of IEEE Technology and Society Magazine, and Senior Editor of IEEE Consumer Electronics Magazine.
2017 conference of the Australasian Association for History, Philosophy and Social Studies of Science (22-24 November)
This event, supported by the Australian Academy of Science (National Committee for History and Philosophy of Science), brings together scientists and social scientists to examine questions of contemporary relevance. We are planning to include three such conversations.
Notes: Three conversations. 1. dating human history (with Bert Roberts, UoW, and Billy Griffith, Deakin), 2. synthetic biology (with colleagues from UNSW, TBC) and, 3. Katina Michael and Thomas Birtchnell.
Each conversation is meant to last 1 hour, with a mixed audience of conference attendees and members of the public.
Venue: University of Wollongong
This year's AAHPSSS conference will be held at the University of Wollongong (UoW). Not only does UoW host the longest-running science and technology studies program in Australia, it now hosts the only remaining STS program in the country.
At a time when the STEM disciplines (i.e. science, technology, engineering and mathematics) are being called upon by government, business and industry to drive the next wave of global innovation and economic growth, it is essential that the critical perspectives of science and technology studies (STS), history and philosophy of science (HPS) and cognate disciplines in the social sciences and humanities are meaningfully and substantively included in the relevant public and policy debates. Encouraging such critical perspectives in public discourse concerning STEM issues is arguably necessary to ensure that the hyper-rationalist hubris and technoscientific excesses of the 20th century are not repeated in the 21st century.
The AAHPSSS executive and many of its long-term members are very conscious of the demise of multiple STS and HPS programs throughout Australia's universities over the last two decades. While most of us deplore these developments, we are convinced that the only way this situation can be halted and hopefully reversed is through the efforts of the relevant scholarly communities to come together, demonstrate solidarity, and forge new alliances over the next several years.
This conference is therefore aimed at rebuilding scholarly networks and bridging some of the divides that have recently emerged between more specialized scholarly communities in Australia and New Zealand. We therefore welcome the participation and attendance at the conference of tenured and casual academics, independent scholars, students, professionals in the public and private sectors, and interested laypeople.
Program Downloads: https://aahpsss.net.au/conference/2017-conference/2017-program-and-sessions/
Stimuli for Katina's Presentation
Key reading material: https://www.theatlantic.com/technology/archive/2017/06/automated-transcription/530973/
After delivering the talk, I came across the following reference almost accidentally (DadBot by @jamesvlahos).
I had the pleasure of attending a two-day conference of The Australasian Association for the History, Philosophy, and Social Studies of Science. A number of papers impressed me, but perhaps none more than Michael Arnold's. I was writing on a small pad of paper and hope I have represented his comments accurately. I take full responsibility for errors and apologise in advance for my poor notetaking.
Michael researches: social studies of technology (technology and death, community informatics, domestic technologies, communications technologies)
Michael presented on his recent research endeavour- robotic surgery. Here are some of the handwritten notes I took from his presentation:
- An auto-ethnographic study
- Post-phenomenological
- “I”, world
- Highly experiential, existential, and self-conscious
o The ‘I’ in this instance is a humanist and idealist
o Thought and belief are one and the same
o All humans have agency
Michael put to us the following:
- I → [Technology-World]
In the context of Michael’s study this now can be represented as:
- Surgeon → [robot-me]
Options presented to the auto-ethnographer were:
- Conservative approach to his cancer
- Radical approach to his cancer
After assessing the options, the ethnographer went for the radical approach, which was the full removal of the organs where the cancer was residing. This included the prostate and bladder.
Michael describes the operating theatre as a vector of assemblage of human-technology hybrids. This might include air conditioning units, beds, medical apparatus, the telerobotic machine and more. He represented this simplistically as:
[da Vinci robot + surgeon]
[da Vinci robot + me]
With respect to the da Vinci controller, a surgeon must “dock” into the robot with their face against the goggles. This is an active state. When the surgeon moves their face away from the goggles, for example to sneeze, operation of the remote arms ceases. The remote robot apparatus, in turn, docks to the body of the patient. The literature depicts the robot at the side of the bed, but really the machine stands close to the body; usually the patient has his or her legs apart and the robot docks close.
Presently the da Vinci robot is used in about 200,000 operations annually worldwide. In the last quarter of 2016, there was a 16% increase in the number of operations done with the assistance of the da Vinci.
The patient had 7 ports, and at any one time 2 of the robot's 4 arms were in use. Surgeons operating the da Vinci can use switches, clutches, and other mechanisms depending on what they want to achieve.
Mediated vision was described by the ethnographer:
- No touch
- No haptic feedback
- Does not sense pressure
- Sight is privileged
Katina’s reflection: The da Vinci is built by Intuitive. The company has faced some issues with respect to deaths at the hands of the machine, with court cases worldwide; some 177 disputes are presently in motion. Reportedly the machine has been responsible for 74 deaths. Is it possible to blame the machine for the deaths? What role or responsibility does the surgeon have, if any? Intuitive was also found to have taken IP to help with procedures. Other competitor machines have haptic feedback, and will likely be able to do more automated steps in the future.
The surgeons have no peripheral vision beyond what they see in the robot goggles. The surgeons report that they do not require haptic feedback and that they feel they have all the vision they need. A paraphrased quote from one of the surgeons: “You are down there. You are in there. No doubt about that.”
It is all quite cybernetic in many ways, as the surgeon is 3m away from the patient. There was an IP dispute about the surgeon being ‘required’ to be in the room with the patient and not in another room.
Mediated action was described by Michael Arnold. What is going on is human-machine correspondence: when the surgeon clasps their hand, the robotic instruments clasp in turn. Instinctive surgeon actions on one side take place on the other. The surgeon reported some soreness from clasping the robotic levers on the console for some time.
The da Vinci is able to operate its arms beyond what human arms can do. The reach and flexibility are greater. Human wrists are like hinges, whereas da Vinci hands have ball joints and can twist, swirl, and turn through 360 degrees.
The da Vinci remote unit and the controller unit calibrate about 1,300 times a second to ensure there is no lag between instrumentation and action. The Intuitive system is built on more than a million lines of code. Human hands tremor, but the tremor is removed through the algorithms in the da Vinci. When you watch the surgeon it looks like they are conducting gross movements, but these are scaled down to the millimetre-to-centimetre-level actions actually made on the body.
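The tremor filtering and motion scaling described above can be sketched in a few lines. The actual da Vinci algorithms are proprietary; this sketch assumes a simple moving-average filter and a fixed scale factor, both chosen purely for illustration.

```python
# A hedged sketch of tremor filtering and motion scaling. The real da
# Vinci algorithms are proprietary; a moving average and a fixed scale
# factor stand in for them here, purely for illustration.

from collections import deque

class MotionFilter:
    def __init__(self, window=5, scale=0.2):
        self.window = deque(maxlen=window)  # recent hand positions (mm)
        self.scale = scale                  # console motion -> tool motion

    def process(self, hand_position_mm):
        """Smooth out hand tremor, then scale down to instrument motion."""
        self.window.append(hand_position_mm)
        smoothed = sum(self.window) / len(self.window)
        return smoothed * self.scale

f = MotionFilter()
# A hand hovering around 10 mm with small tremor maps to a steady
# tool position of about 2 mm: tremor averaged out, motion scaled down.
for hand in [10.1, 9.9, 10.0, 10.2, 9.8]:
    tool = f.process(hand)
print(round(tool, 2))
```

The same structure explains the gross-movement observation: because the scale factor is well below 1, large console motions become fine instrument motions on the body.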
The next phase of da Vinci development might include:
- Bounds and rules regarding types of “allowed” movements in segments of the body
o What is enabled or disabled, for instance
- Vision systems so that arteries, organs and veins are identifiable
o Clamp this, but don’t let an ‘artery’ be cut, otherwise the patient can bleed to death, etc.
Michael also described that the “movement” the surgeons make is different; it is not a simple reproduction of natural movement.
The instrumentation is decentralised:
- There is a vision system
- All at the tips of the arms of the probe.
- Each instrument is disposed of after a certain number of uses and costs on average about $5,000 AUD
What is me?
- Multiple translations
- Multiple ontologies
- Material semiotics
Full personhood → then partial patient → then open tissues (personhood is erased) → simply a data representation → then come out of that in reverse
What is me?
- A data body representation of me. And that data is mobile.
- The personhood is temporarily suppressed.
- Surgeons say: “you forget you are operating on a specific person.”
- But “I am a living body”.
- “Sutures” and “surgeries”
- Huge number of data points to go back and to analyse
- There are angles and pressure applied and so much more.
- Might it mean in the future that the robot can learn from these and conduct its own “surgeries”, and that the doctors will simply point to a place, their function being replaced altogether?
I asked: but what about the lawsuits in progress against the da Vinci robot? What about the alleged deaths? What happens then? Who takes the blame? What about competitors? How is Intuitive responding to these?
How does one become the machine? The trend is not to blame the surgeon.
You can find these in incident reports:
- Big data
- SurgicalWatch group(?)
- Embedded ethics
- Intuitive’s patent means that the patient and the surgeon now need to be in the same room. Operations cannot be done remotely.