An inaugural presentation of the Allens Hub at the University of New South Wales.
I had the good fortune of meeting Mireille Hildebrandt in 2008 while we were both at the London School of Economics presenting at an ID Systems Conference. And while on a recent trip to Sydney I learnt unexpectedly from Lyria Bennett Moses that Mireille would be presenting, so I stayed behind another few days. It was worth listening to her methodical presentation live!
My notes were comprehensive but may not be altogether accurate or organised exactly as presented (I take full responsibility for any inaccuracies). Sadly there were likely another 20 or so slides that were glossed over all too quickly toward the end of the talk, but that only means Mireille will have to come out to Australia again for a part 2. I learnt some very profound things that night, and had several "ka-ching" moments along the way. This is what a brilliant academic can do: take the audience on a long metaphor and then come in with the practice. As a philosopher Mireille has a big advantage over your standard lawyer, and her sources demonstrated her art in her craft: we were spoiled with references to mathematicians, anthropologists, computer scientists, technology lawyers and more. Thank you Mireille! We look forward to the book.
Mireille has a full-time affiliation with Vrije Universiteit Brussel and several other minor affiliations.
Interfacing law and technology
Lawyer and philosopher; part-time chair with CS; lawyers begin to interact with CS
Responses from lawyers regarding legal tech
Some of the responses include:
· AI in the law is nonsense, not feasible, waste of time
o Old logic
· AI in law will democratize the provision of legal services
o Apps. Landlord vs big company
o Find the app and get a prediction of whether you will win a case or not
· AI in law will solve many legal problems caused by text-driven complexity
o Text is naturally ambiguous
o The body of legal text is enormous, especially across international jurisdictions, and the contradictions are too many. Without AI is it impossible to manage?
· AI in law will solve some problems and create new ones
· What AI does in law will depend on how we develop "legal tech", and by whom
o Lawyers, CS, consultants, policymakers?
o And for WHOM?
§ Who is paying for this?
o What are we optimizing for?
· Legal text is about reasoning and meaning, but to a machine legal text is just "data" that you are trying to correlate, and what you are asking the algorithm to do is "optimize". The question is: what are you optimizing for? If you have no answer to this, the machine will not learn anything.
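The point that the machine only "learns" relative to a stated objective can be made concrete with a toy sketch (the data, the score feature and the outcome labels below are all invented for illustration, not drawn from any real legal-tech system): the same historical "cases" yield different "optimal" decision rules depending on what the optimizer is told to minimize.

```python
# Toy illustration: the "learned" rule depends entirely on the objective.
# Each pair is (score, outcome): score stands in for some feature extracted
# from case texts (hypothetical), outcome 1 = claimant won, 0 = lost.
cases = [(0.1, 0), (0.2, 0), (0.35, 0), (0.4, 1), (0.5, 0),
         (0.6, 1), (0.7, 1), (0.8, 0), (0.9, 1), (0.95, 1)]

def cost(threshold, fp_cost=1, fn_cost=1):
    """Total cost of predicting 'win' whenever score >= threshold."""
    total = 0
    for score, outcome in cases:
        pred = 1 if score >= threshold else 0
        if pred == 1 and outcome == 0:
            total += fp_cost   # false positive
        elif pred == 0 and outcome == 1:
            total += fn_cost   # false negative
    return total

thresholds = [i / 100 for i in range(101)]

# Objective 1: plain error count (all mistakes weigh the same).
best_acc = min(thresholds, key=lambda t: cost(t))
# Objective 2: false positives are five times as costly.
best_cautious = min(thresholds, key=lambda t: cost(t, fp_cost=5))

print(best_acc, best_cautious)
```

On this toy data the plain-error objective settles on a cutoff of 0.36, while the cautious objective jumps to 0.81. Nothing in the data changed, only the stated objective, which is exactly the point: "optimize" is meaningless until someone decides what to optimize for.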
Should we understand how the technology of legal tech works?
· We can drive a car without knowing how the engine works?
o Remember Pirsig's "Zen and the Art of Motorcycle Maintenance"
· We act on doctor’s advice though we don’t know how they get their diagnosis?
o Note that doctors generally employ ‘diagnosis by treatment’
o The doctor has a potential diagnosis, or none; he will try something, and then start to figure out what is wrong with you
§ We expect doctor knows and we don’t have to get involved
§ There is trust in a doctor’s knowledge and that the ‘engine is built well’
· Can we trust legal technology?
· We must not believe that
o the "prestidigitator enables him to do things that are not noticed by those whom he is engaged in fooling" (John Dewey, 1939)
· Some people claim a trade-off between interpretability and accuracy:
o The less “ordinary folk” understand AI, the better it functions
§ If we have this legal tech we can predict the case solution
§ Dangerous way of going about things
§ Some people not necessarily within CS, they claim there is an accuracy issue
§ As systems become more difficult to understand, the claim goes, they will become more accurate
§ Don’t try to make them understandable because it downgrades their functionality
§ But "if a then b" does not imply "if b then a"…
§ And if the developers say they do not understand what the AI is doing, that raises the question of what the accuracy actually means. Is this truly accuracy?
· Some people claim that blockchain applications support "trustless" transactions
o Trust is displaced from institutions, and put in technology instead
§ In practice, trust is placed in the miners, and in the people who write the code for smart contracts
· This is the lure of magic:
o We don’t have to drive or trust the doctor, so we don’t have to understand the code to use it
· In anthropology:
o Mistaken attribution of causality (the rain dance)
o Raising fear and inviting subservience (the power of the priest)
o We are now warned of an arms race in AI and asked to submit, e.g. our data
· Such magic is not reserved for “primitive society”
o All types of society are vulnerable to such thinking
o All types of society found ways to fact-check and to call-to-account
§ Calling the priests, the board of governors etc. to account requires the following…
o This requires resilience, patience and a serious effort to understand
§ Lawyers need to understand this tech
o And a well designed system of checks and balances (e.g. re car and doctor)
§ That is why we can afford to say as a user- I don’t need to know more about it.
o We call it Rule of Law: legality, auditability and contestability
Counting as a human being in the era of Computational Law (COHUBICOL)
· However, "not everything that can be counted counts, and not everything that counts can be counted" (attributed to William Bruce Cameron)
· To count, to calculate, to compute
o Incomputability from a CS perspective
§ Maths and CS term
o Gödel, Wolpert, Mitchell
§ Moving from axioms to deductively drawn conclusions
§ The mathematical "no free lunch" theorems
o Inferences from that data to predict new data
§ One thing limits machine learning predictions: the simple fact that you can only train on present and historical data
· You cannot train on future data…
· This is what limits machine learning
· The mathematical assumptions of ML are incorrect but productive
o They can do great work but are incorrect
· To count, to qualify, to matter:
o Incomputability from an anthropological perspective about how people interact
o G.H. Mead, Arendt, Plessner, Ricoeur
o The co-constitution of self, mind and society
o "Imagine that I talk to a small child of 1-1.5 years old who is learning to speak. I tell the child: you are Charlotte and I am Mireille. At first the child will say the opposite… and then, trying again, grasp "you and I": that she is "you" to me. That decentering, looking back at yourself from the perspective of another, is part of being human."
§ We are trying to make computer a different thing
o When I say "I", this concerns the constitution of the self: we are not born with a self. We develop into a self because we are being addressed by others.
§ System of “law” to address others.
· Law co-constitutes us in our expectations
o Descartes: I think therefore I am
§ He did not get it…
§ With "being profiled" it becomes: you think, therefore I am.
§ The constitution of the self
o Legal protection
§ Because we are being profiled, we get better results from search engine
§ It is not because we have computers that we are constituted; we have always been constituted, but not by machines…
§ Currently this happens through human interactions
§ Something changes when the profiling comes from machines and not from other human beings.
· To what extent is being profiled by machines different from being profiled by other people, or by the law? And what changes because of the computational background systems profiling us?
§ Against being “overdetermined”
· By gaze of the other (Mead)
· How state sees us
§ So what happens when something like the law, which is natural language and text, becomes based in legal tech?
Data-driven legal tech:
· Argumentation mining
· Prediction of judgements
· So when I have a particular legal case I can look for things that count in the law
· Most dangerous type of AI is the prediction of judgements
o Code driven legal tech
o Code driven legislation (policy articulated in code), contracts (in fintech, transfer of assets), decisions (public administration), connected with blockchain
o Instead of writing a contract in NL, you translate the content into CODE, and make that code “Self-Executing”
o Can do the same with regulation; could have a policy written into code
o With every move you make, the text is executed autonomically
o What is A/B Testing?
o All the websites you use are live experiments; A/B testing is happening continuously
o A small change is tested:
§ Do you like version A or version B?
§ Software automatically calculates which attracts the most favourable behavior (click-based behavior; or purchasing behavior; preferred?)
§ Which site has better outcome, that is the site that has better implementation
§ Continuous process
§ Changing buttons; +3 days keep up with competitors
o We are continually being nudged by certain things running in the background
o The online environment runs on ad revenue
o Optimizing a website means putting up content that attracts more $
o We are surrounded by online environments that have ad driven content
§ This microtargeting does not really work, because human behavior is far too complex and humans are far too smart
§ From side of CS, there is an urge to think it works
§ What is the statistical relevance?
· Because things move too fast, there is a lure to "p-hack"
o If you find a significance level you like, then you STOP. It is methodologically unsound, but people continue buying into it
§ Procter & Gamble withdrew their digital ad budget, and this year said "it didn't cost us anything"
o Lawyers must not make that same mistake
o We don’t want “Crappy Machine Learning” giving verdicts
o Tom Mitchell:
§ "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" (here T is the prediction of judgments)
o ML is often parasitic on human domain expertise: what CS calls "ground truth"
o The politics are in who gets to determine E, T, and P
o The ethics are in how they are determined
§ Law concerns the contestability
o Are you optimizing to cut out racism, or for banal statistics? This triggered a whole discussion of "fairness" from a statistical point of view
o But law is about contestability. How can we make law this way and still make sense of criticizing it?
o Consider the company that has the proprietary software to do this (and how they handle the statistics): the statistical error rate works to the detriment of black people, and to the benefit of white people.
o But what we have to do is to “compensate” and understand the error rates
o “AI Program able to predict human rights trials with 79% accuracy”
o Is it in use?
o “Assumption: text extracted from published judgements are a proxy for applications lodged with the Court”
§ But this is not accurate. They only used cases in English, and not everything is published. The model does not have all the relevant data.
§ Get to the “low hanging fruit” but why?
· So don’t suggest you have it right for 79% of cases
· Problem: as authors state, facts may be articulated by court to fit the conclusion
o As selected and rendered by courts as they have found in their mind
o Cases held inadmissible or struck out beforehand are not reported, which entails that a text-based predictive analysis of these cases is not possible
o The experience (Mitchell's E) is thus reduced
o Why? Admissible cases = low hanging fruit
o The problem is that the unreported applications would make a difference, but that difference now remains invisible
o Data on cases related to arts. 3, 6 and 8 ECHR (torture, fair trial, privacy)
o Why? Because they provided the most data to be scraped, and sufficient cases for each
o Problem: this frames things as if all cases concern either art. 3, 6 or 8, while the rest remain invisible (e.g. arts. 5, 7, 10, 14). In reality the articles are interlinked, and a violation of one often seems obvious given another; yet they are presented as independent variables.
o Dataset = publicly available
o Need to distinguish it is EXPLORATORY RESEARCH
§ You don’t have the full dataset
§ For each article: all cases (apart from non-English judgements)
§ Equal amount of violation and non-violation cases
§ Text extraction using regular expressions, excluding the operative provisions and stop words ("of", "and", "on", "the")
o Circumstances and topics are best predictors, combined works best
o Law has the lowest performance
§ Discussion: facts are more important than the law
§ Legal formalism vs legal realism: this is taken as evidence that legal realism is realistic
o This is nonsense
§ The facts have been framed in a way that makes them look "determined", but you don't actually have the facts…
§ In a lot of the cases there is no law section, due to an inadmissibility judgement
o So sometimes there is no law section at all
o "I've asked lawyers, and look, the program is better!" NO!
o Only a good lawyer does close reading and bounded rationality
o Integrity of law and logical coherence
o Treating like cases alike
o Legal certainty as contestability
o Ambiguity tells us how text will affect life and opens for contestation and argumentation… this is very different to code law
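The "stop when you see significance" problem mentioned above, which continuous A/B testing invites, can be demonstrated with a short simulation (a generic statistics sketch, not tied to any system named in the talk). Under the null hypothesis there is no real effect (a fair coin): a fixed-sample test flags "significance" about 5% of the time, but peeking repeatedly and declaring an effect at the first significant look inflates the false-positive rate well beyond that.

```python
import random

def z_significant(heads, n, z_crit=1.96):
    """Two-sided z-test for H0: p = 0.5 (significance at the 5% level)."""
    se = (0.25 / n) ** 0.5          # standard error of the sample proportion
    return abs(heads / n - 0.5) / se > z_crit

def run_trial(rng, max_n=200, peek_every=10):
    """Flip a fair coin; return (peek-and-stop verdict, fixed-sample verdict)."""
    heads = 0
    peeked_significant = False
    for i in range(1, max_n + 1):
        heads += rng.random() < 0.5
        # "p-hacking": check significance at every peek and latch the first hit
        if i % peek_every == 0 and z_significant(heads, i):
            peeked_significant = True
    return peeked_significant, z_significant(heads, max_n)

rng = random.Random(42)
trials = 2000
peek_hits = fixed_hits = 0
for _ in range(trials):
    p, f = run_trial(rng)
    peek_hits += p
    fixed_hits += f

print(f"fixed-sample false positives:  {fixed_hits / trials:.3f}")  # around 0.05
print(f"peek-and-stop false positives: {peek_hits / trials:.3f}")   # well above 0.05
```

With twenty looks per experiment, the peek-and-stop rate typically lands around four times the nominal 5%, which is why "I stopped as soon as it was significant" is methodologically unsound.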
Data- and code-driven interpretation:
o These systems do “remote reading”
o With new technologies we can read remotely: instead of close-reading a canonical text, a machine can read everything and draw inferences
o Remote reading based on NLP
o Coherence based on the approximation of a mathematical target function
o Input and output are related by a mathematical function (not reasoning), so machine learning assumes that our world (and our legal world) is ruled by mathematical functions
o But getting hold of the maths function does not make someone a good lawyer!
o Yet if we outsource tasks to these technologies, it rests on the assumption of a mathematical "target function" and "predictive accuracy" (or on blockchain-type legal tech)
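The "target function" assumption can be made concrete with a minimal least-squares sketch in pure Python (the data and the feature/outcome framing are invented for illustration): machine learning treats judgment as an unknown function from inputs to outcomes and approximates it from examples. There is no reasoning step, only curve-fitting.

```python
import random

rng = random.Random(0)

# Invented training data: pretend x is some numeric feature of a case and
# y an "outcome score". The hidden "target function" is y = 2x + 1 plus noise.
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]

# Ordinary least squares for y ≈ a*x + b (closed form, no reasoning involved).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"approximated target function: y = {a:.2f}x + {b:.2f}")
```

The fit recovers the hidden function almost exactly, yet it "knows" nothing about why the outcomes are what they are; that gap between approximation and justification is precisely what the talk flags.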
Text-driven normativity and legal protection
o Human language is ambiguous but not infinitely flexible; lawyers are constrained by legal norms, and a higher court can impose an interpretation on them
o Suspension of judgement; constraints upon personal opinion
o Practical and effective legal remedies, with institutional checks and balances
Data-and-code-driven normativity and legal protection
o Either freezing the future by making predictions based on historical data
o Or by way of deterministic self-executing code
o Contesting statistics and contesting execution of irreversible code
Lawyers will have to get a good grip on statistics in order to contest
Learners and decisional algorithms
o The learner…
o Once the system has learned, you can translate that output into another algorithm
o "Cases with these 4 characteristics will always be judged like that"
§ Then another algorithm is developed from that rule, and this is what causes the violation
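The worry about translating a learner's output into a separate decisional algorithm can be sketched as follows (all feature names, outcomes and "historical decisions" below are hypothetical): once the learned pattern is exported as a fixed rule, every future case "with these 4 characteristics" is decided identically and automatically, with no room for contestation.

```python
from collections import Counter

# Phase 1: the "learner" finds a pattern in (invented) historical decisions.
history = [
    ({"prior_offence": True,  "employed": False, "age_under_25": True,  "urban": True},  "deny"),
    ({"prior_offence": True,  "employed": False, "age_under_25": True,  "urban": True},  "deny"),
    ({"prior_offence": False, "employed": True,  "age_under_25": False, "urban": False}, "grant"),
]

# Naive "learning": memorise the majority outcome per feature combination.
table = {}
for features, outcome in history:
    key = tuple(sorted(features.items()))
    table.setdefault(key, Counter())[outcome] += 1

# Phase 2: the learned output is frozen into a decisional algorithm.
def decide(features):
    """Auto-decides any case matching a seen profile; no appeal, no context."""
    key = tuple(sorted(features.items()))
    if key in table:
        return table[key].most_common(1)[0][0]
    return "refer to human"   # unseen profile

new_case = {"prior_offence": True, "employed": False, "age_under_25": True, "urban": True}
print(decide(new_case))
```

The decisional algorithm is separate from, and dumber than, the learner: it reproduces past patterns forever, which is the "freezing the future" problem noted above.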
Illusion of legal certainty
Legal protection by design vs legal design
o Can you use it to enforce compliance?
o If you reorganize legislation and contracts so that non-compliance is ruled out, then it is technical management and administration
o Legal protection by design
o We cannot think of legal protection in same way as always done
o We have no tools to protect ourselves, as lawyers or as the people we defend
§ If you develop and employ legal technology, then legal protection must be embedded in the design of that technology
§ Democratic legitimization (representation, deliberation, participation)
§ Enabling the contestation into the design of these systems
o Legal protection impact assessment
The legal tech is NOT magic (not like the car or medical science)
It does not deserve blind trust
Law shapes the checks and balances that enable trust in engines and medicine
Integrating the Social and Technical in Engineering Education: Practical Challenges, Conceptual Barriers, Promising Responses
Nov. 27, 2018
Memorial Union: Graham Rm 226
This talk explores a series of experiments in engineering education that span course instruction, educational culture reform, interdisciplinary programming, and department identity building through a lens of “sociotechnical integration” — where social and technical dimensions of engineering are brought together as co-equal components of engineering practice. It considers some of the specific practical challenges that arise around these integration efforts, the conceptual underpinnings explaining those challenges, and reflections on how institutions committed to rethinking their approach to engineering education might respond.
Seating is limited.
Biography: Dean Nieusma is Division Director and Associate Professor of Engineering, Design, & Society at the Colorado School of Mines. His research focuses on integrating social and technical dimensions of engineering in education and practice, with a focus on design and project-based learning. He is also broadly interested in the social and ethical implications of technologies and the application of engineering and design expertise to enduring social and environmental problems.
I was asked by Dr Roba Abbas to give a guest lecture to her Operations Management course. What an honour to be doing so! My slide-deck is below.
About the Seminar
October 09, 2018 3:00pm—7:30pm
With technologies like self-driving vehicles and proposals like tourist trips to the moon, science fiction is rapidly becoming science reality. How should science policy govern this rapidly approaching future? Who should be involved, what tools do we need, and how do we prepare the next generation of leaders?
Exploring Democratic Governance: Solar Geoengineering Research*
3:00 – 4:30 pm
A mini-deliberation on the future of solar geoengineering research. This deliberation draws on our recent project, funded by the Sloan Foundation, to better understand public perspectives on a controversial approach to addressing climate change. How can participatory technology assessment (pTA) help bring public values into difficult and complex science policy decisions?
The Rightful Place of Science: New Tools for Science Policy
5:00 – 6:00 pm
Elizabeth McNie, Ryan Meyer, and Roger Pielke Jr., moderated by Angela Bednarek, will discuss innovative new tools to improve science policy.
True Stories That Matter
6:00 – 6:15 pm
Lee Gutkind will discuss the importance of narrative in thinking about and communicating science policy.
Future of Innovation in Society: Keynote & Reception
6:15 – 7:30 pm
Katina Michael, a professor at ASU’s School for the Future of Innovation in Society, will discuss how SFIS helps students plan and create the future. Her keynote will be followed by a reception.
ASU Barrett & O’Connor Center
1800 I St NW
Washington, DC 20006
Consortium for Science, Policy & Outcomes, “New Tools for Science Policy”, Open House: The Future of Science Policy, https://cspo.org/event/ntsp100918/
Problem: Will there be Netflix on Mars?
Interplanetary skin; Interplanetary Networks of Things; Interplanetary Internet; Extreme Environments; Deep Space Network; Network of Nodes; Network of Networks; Interplanetary Networking (IPN); Challenges; Connectivity; Building a Space Internet; Bundle Protocol; Transmission Delays; Mars Telecommunications Orbiter (MTO); Talking by Laser; DTN Based Communications; User Applications
“The Future of Space travel demands better communications”
“Outer Space Forbids Constant Connectivity”
Network of Nodes
Ever since the first American spacecraft went orbital in 1958, NASA's craft have communicated by radio with mission control on Earth using a group of large antennas known as the Deep Space Network. For a few lonely probes talking to the home planet, that worked fine. In the decades since then, as NASA and other space agencies have accumulated dozens of satellites, probes, and rovers on or around other planets and moons, the Deep Space Network has become increasingly noisy. It now negotiates complex scheduling protocols to communicate with more than 100 spacecraft.
Most rovers (both lunar and Martian) talk to the Deep Space Network in one of two ways: by sending data directly from the rover to Earth or by sending data from the rover to an orbiter, which then relays the data to Earth. Although the latter method is wildly more energy efficient because the orbiters have larger solar arrays and antennas, it can still be error-prone, slow, and expensive to maintain.
The future of space travel demands better communication. The pokey pace at which our current Martian spacecraft exchange data with Earth just isn't enough for future inhabitants who want to talk to their loved ones back home or spend a Saturday binge-watching Netflix. So NASA engineers have begun planning ways to build a better network. The idea is an interplanetary internet in which orbiters and satellites can talk to one another rather than solely relying on a direct link with the Deep Space Network, and scientific data can be transferred back to Earth with vastly improved efficiency and accuracy. In this way, space internet would also enable scientific missions that would be impossible with current communications tech.
Exploratory Learning Session at ASU
Exploratory learning can be defined as an approach to teaching and learning that encourages learners to examine and investigate new material with the purpose of discovering relationships between existing background knowledge and unfamiliar content and concepts.
The direct link to the Interplanetary Initiative’s Exploratory Learning module is here.
Through exploration learning, learners should:
Recognize and be unafraid of unsolved problems,
Be curious about what is known and how we know it,
Be willing to work toward answers in steps over time,
Develop independence and initiative in working toward solutions,
Have patience with ambiguity,
Have patience with dead ends (“failures”) and thus build resilience,
Understand the difference between a problem they have not solved, and a problem no one has solved,
Practice listening and respecting the contributions of teammates and
Experience knowledge creation.
During exploration learning, learners should do one or more of:
Practice asking questions,
Learn how to improve their questions,
Solve problems that require multiple steps and may not have single answers,
Identify and tackle problems whose solution is not known to the team or instructor (knowledge creation),
Obtain and assess the quality of the content they use to reach answers,
Assess the quality of the answers they produce, and
Work in interdisciplinary groups where all voices contribute.
What is a Planetary Skin?
The launch of Planetary Skin by NASA and Cisco, a new platform for measurement, reporting and verification (MRV), is hoped to unlock US$350 billion per year over 2010-2020 for mitigation of, and adaptation to, climate change.
Planetary Skin is a global monitoring system of environmental conditions, intended to support effective decision making with data collected from various sources, including space-based, airborne, maritime, terrestrial and people-based sensor networks. The data is then analyzed, verified and reported over open-standards-based Web 2.0 and 3.0 collaborative spaces.
It is a Cisco and NASA R&D partnership that cuts across institutional, disciplinary and national boundaries, creating a space for flexible pooling of assets and ideas between stakeholder networks.
Planetary Skin Institute is a bridge between organizations like the World Economic Forum, NASA, and the University of Minnesota. It takes in massive amounts of data from space-to-mud-to-ocean sensors. And it uses experts and big data analytics to help emerging market governments know things like where to build infrastructure and where droughts will hit.
Its latest project: Developing virtual weather stations using “exhaust” cell phone data. And helping the government of Brazil create a national monitoring and early warning system for natural disasters–a system few countries have, but all need.
One notable example of these risk management and prevention tools: new virtual weather stations currently being tested by Planetary Skin Institute and their partners.
In a breakthrough in environmental sensing and a new way to use junk data, the team uses “exhaust data” from cell phone towers to predict weather conditions by monitoring the speed of radio waves as they travel through humid air. This allows sensing of local environmental conditions anywhere there are cell towers–places that rarely have weather monitoring right now because traditional weather stations are too costly or the locations are too remote. It sounds simple, but it is critically important. Tracking the weather allows data scientists to connect that with all sorts of other information and, most immediately, to predict things like landslides. And to do so for people who live in the areas nearest the towers–typically shantytowns where populations are at greatest risk.
The project in Brazil has been running successfully for two years. If it continues to work, it is a risk management approach and toolkit that Planetary Skin Institute is planning to bring to the rest of the world–the next step in their evolution.
Planetary skin institute ALERTS: automated land change evaluation, reporting and tracking system by J. D. Stanley of the Planetary Skin Institute, Proceeding COM.Geo '11 Proceedings of the 2nd International Conference on Computing for Geospatial Research & Applications, Article No. 65, Washington, DC, USA — May 23 - 25, 2011.
Planetary Skin: A Global Platform for a New Era of Collaboration, Juan Carlos Castilla-Rubio and Simon Willis, 2009.
Complexity and uncertainty are hallmarks of the early 21st century, as recent developments in the global financial markets demonstrate all too vividly. Responses to the financial crisis have prominently featured demands for global coordination. Our economic woes, however, are dwarfed by the increasing threats of climate change and environmental degradation— and their attendant miseries, such as pandemics and poverty. Unprecedented global coordination and collaboration are the only ways to address these environmental dangers.
Actionable consensus on addressing climate change is now evident in public policy announcements from global leaders, and in the coalescing of private and public opinion that the world needs to address quickly and decisively the varied perils created by man-made climate change. At the World Economic Forum in 2009, public and private sector leaders outlined three basic requirements for mitigating and adapting to changing climate: (1) targets for countries that effectively put a price on carbon; (2) large-scale predictable and sustainable financing for mitigation and adaptation strategies, and, critically (3) the creation of a globally trusted mechanism for measurement, reporting, and verification (MRV).
While measurement is third on the list, it is the essential precondition to any creation of value, or to unlock financial flows. The simple axiom that “you can’t manage what you can’t measure” holds true—especially for the most complex challenges.
NASA and Cisco Systems Inc. are developing "Planetary Skin" -- a marriage of satellites, land sensors and the Internet -- to capture, analyze and interpret global environmental data. Under terms of an agreement announced during a Capitol Hill climate summit today, NASA and Cisco (Nasdaq: CSCO) will develop the online collaborative platform to process data from satellite, airborne and sea- and land-based sensors around the globe.
The goal is to translate the data into information that governments and businesses can use to mitigate and adapt to climate change and manage energy and natural resources more effectively, NASA and Cisco officials explained in interviews.
"There are a lot of data out there, but we have to turn that into information," explained S. Pete Worden, director of NASA's Ames Research Center. "What we are trying to do is use Cisco's expertise in data handling, put our data in there and explain what's really going on in the rainforests."
Indeed, the partners' first project, "Rainforest Skin," will focus on integrating a comprehensive sensor network in rainforests around the world. The project will examine how to capture, analyze and present information about the changes in the level of carbon dioxide -- the main heat-trapping gas -- in the Amazon and other areas. Information will be posted on the project's Web site.
Other projects during the next 18 months will look at changes in land use and water, Worden noted.
"This will begin to give us a sense of, if we pass cap and trade, is it working," he added.
Now about the project's name: "There are many layers of skin, of information, and this will help us understand all of the interconnected data," explained Worden, whose agency provides continuous global observations using satellites and other spacecraft.
Juan Carlos Castilla-Rubio, who directs Cisco's climate change practice, said the information should help companies manage environmental and financial risks in a carbon-constrained world.
"It's providing the support platforms for people to make decisions because today we fly blind," added Castilla-Rubio, whose San Jose, Calif.-based company specializes in Internet Protocol networking.
The Center for Global Development has developed a Web site of its own, called Carma (Carbon Monitoring for Action), which tracks emissions from 50,000 power plants around the world. The Washington, D.C.-based nonprofit research organization is also developing a way to monitor emissions savings from forest conservation.
"These investments in information now are absolutely critical," said Nancy Birdsall, the center's president, who participated in today's summit with Cisco Chairman and CEO John Chambers. "We have to create that information and track it over time if we're going to have any kind of system at a global level that people in this country and other countries can trust."
"We'll have to have ... something akin to independent monitoring," she added.
Of Angels and Uberveillance: The Point of View Continuum in Wim Wenders’ Wings of Desire
By M.G. Michael and Katina Michael
Abstract: Uberveillance is an omnipresent form of 24/7 surveillance of humans based on widespread electronic devices, and especially computer chips embedded in the body. It is akin to a planetary skin that is able to pinpoint any living (or deceased) individual in near real-time anywhere on the earth's surface. In its ultimate form it is big brother on the inside looking out. Uberveillance was once impossible given patchy infrastructure: a world without networks and global positioning systems (GPS), a world without closed-circuit television (CCTV) and smartphones. The integration of innovations such as mobile CCTV and facial recognition, spurred on by Defence and later commercialised, has meant we are living in a "point of view" continuum. The unfeeling gaze never goes to sleep, and has become a subject of ethical, legal and socio-technical research. In this paper, we juxtapose Wim Wenders' Wings of Desire against the domain of "uberveillance" as a way to help further explain the technological trajectory. What is it about the qualities of angels that differ so starkly from the machine-like prowess of pervasive CCTV? How is it that angels can see so clearly and so precisely, with such deep understanding, while technology that is tasked to surveil and deconstruct can get it so wrong? The varied points of view depicted in Wings of Desire (the view from above looking down, the view at street level, and the view inside the private thoughts of a human) typify the spectrum of uberveillant capabilities. While it is deemed natural for angels to fly and be up close as guardians and protectors of human beings, there is something unnatural about the physical world being captured for playback in a virtual realm. What do we hope to achieve by this reality TV-style vision? Do we hope to store it all, every aspect and minutiae of life, every person's eye view, every moment through time, to transcend through screens?
Are we in some way abandoning the reason behind our existence, that is, to grow and to learn through experience? And are we forging ahead with an 'unnatural' path by seeking to explore and to interrogate our lives as bystanders through our creations? Wings of Desire provides a vehicle for discussing the pros and cons of uber views. Significantly, angels in this cinematic masterpiece are mere witnesses, and cannot intervene in the lives of those they observe, no matter what injustices they see, unless they decide to willingly 'incarnate'. Machines, on the other hand, are indiscriminate, or at least subject to some outside input; they can autonomously trigger alerts and force decisions, making judgments about contexts even when they are incorrect.
For more commentary on Wings of Desire visit here.
SUFF and Sydney College of the Arts, the University of Sydney are pleased to announce the inaugural “Inhuman Screens” conference, which aims to open a conversation as to how technology has redefined the human. To this end, the conference examines all aspects of contemporary screen ecologies, including frontier technology, social media, theories of screen culture, contemporary art, and other engagements with digital technology and posthumanism.
Citation: Kate Lemay (facilitator) with Anna Johnston, Samantha Chan, Katina Michael, David Vaile.
A presentation delivered for the Australia and New Zealand School of Government. A lot of the slides are based on Ryan Abbott and Bret Bogenschneider, with a major influence in perspective from Moshe Vardi. Sources were compiled through my own searching.
Amazon currently employs 125,000 people; it recently put its 100,000th robot to work.
Should robots be taxed? Joao Guerreiro, Sergio Rebelo and Pedro Teles (April 2018)
We use a model of automation to show that with the current U.S. tax system, a fall in automation costs could lead to a massive rise in income inequality. This inequality can be reduced by raising marginal income tax rates and taxing robots. But this solution involves a substantial efficiency loss for the reduced level of inequality. A Mirrleesian optimal income tax can reduce inequality at a smaller efficiency cost, but is difficult to implement. An alternative approach is to amend the current tax system to include a lump-sum rebate. In our model, with the rebate in place, it is optimal to tax robots only when there is partial automation.
Should Robots Pay Taxes? Tax Policy in the Age of Automation. Ryan Abbott and Bret Bogenschneider (Harvard Law and Policy Review, 2018)
Existing technologies can already automate most work functions, and the cost of these technologies is decreasing at a time when human labor costs are increasing. This, combined with ongoing advances in computing, artificial intelligence, and robotics, has led experts to predict that automation will lead to significant job losses and worsening income inequality. Policy makers are actively debating how to deal with these problems, with most proposals focusing on investing in education to train workers in new job types, or investing in social benefits to distribute the gains of automation. The importance of tax policy has been neglected in this debate, which is unfortunate because such policies are critically important. The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue—potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers. We argue that existing tax policies must be changed. The system should be at least “neutral” as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an “automation tax” which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.
Robot Tax: https://www.techemergence.com/robot-tax-summary-arguments/. Is it trade or automation that is causing unemployment in advanced economies or developing ones? Multiple angles include:
- Whether a robot tax might even be needed. I.e. how quickly are robots going to replace workers
- How such a robot tax might work.
- What are the arguments for or against it.
When is a Robot a Robot for tax purposes?
Gates simply suggested that “some of it could come on the profits that are generated by the labor-saving efficiency there. Some of it could come directly from some type of robot tax.”
A segment of a proposed motion in the EU Parliament, which was rejected, simply suggested that “levying tax on the work performed by a robot or a fee for using and maintaining a robot should be examined in the context of funding the support and retraining of unemployed workers whose jobs have been reduced or eliminated.”
Levying a fee on each robot sounds much easier in theory than in practice. How would we define “robots” for these tax purposes? What separates a tool from a robot, or a complex computer program from AI? For example, a backhoe replaces human ditch diggers, but few people consider that a “robot.”
Is a vending machine a human-replacing robot? What about a vending machine that can automatically alert its owner when it is out of a product? What about a vending machine the size of a small store that can automatically stock itself and answer complex questions from customers?
If a country did try to place a fee on actual robots, it is likely it would be difficult for some companies to avoid the tax. Manufacturing robots, self-driving cars, delivery drones, and anything that looks like a classic “robot” or clearly replaces a specific category of job would likely be hit.
But there are a lot of marginally robot-like tools that manufacturers would aggressively lobby policy makers to exclude from the tax. Alternatively, manufacturers could modify the tools just enough to exploit some loophole in the definition of a robot.
Approaches to a Robot Tax
For example, South Korea's reduction of tax incentives for investing in automation has been called a robot tax. It is possible to imagine a “robot tax” taking the form of higher taxes on certain types of productivity-enhancing developments, and/or of big tax breaks for companies that keep the size of their workforce stable each year.
A “robot tax” could also take the form of a tax that targets companies with high profits/revenue but small workforces. Something like a worker-to-profit ratio could serve as a proxy for deciding which companies are making heavy use of automation.
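To make the profit-to-workforce proxy concrete, here is a minimal sketch of how such a levy might be computed. All names, thresholds and rates are hypothetical illustrations, not drawn from any actual proposal.

```python
def profit_per_worker(profit: float, workers: int) -> float:
    """Profit per worker: a crude proxy for automation intensity."""
    return profit / max(workers, 1)


def hypothetical_robot_tax(profit: float, workers: int,
                           threshold: float = 500_000.0,
                           rate: float = 0.02) -> float:
    """Levy a surtax only on profit exceeding `threshold` per worker.

    Both `threshold` and `rate` are invented for illustration.
    """
    excess = profit - threshold * workers
    return rate * excess if excess > 0 else 0.0


# A labour-intensive firm: 1,000 workers, $100m profit -> no surtax
print(hypothetical_robot_tax(100e6, 1000))

# A highly automated firm: 10 workers, the same $100m profit -> taxed
print(hypothetical_robot_tax(100e6, 10))
```

The design question the sketch surfaces is exactly the one in the text: the outcome hinges entirely on where the (arbitrary) threshold is set and on how "workers" are counted.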
Why Should We Start to Tax Robots that are taking Human Jobs?
That’s because automation allows firms to avoid wage taxes, which fund social benefit programmes such as Medicare, Medicaid, and Social Security in the US, or National Insurance contributions in the UK. Firms are generally responsible for paying wage taxes to their governments only for human workers.
In the US at least, there is a further incentive to automate: firms can claim accelerated tax deductions for automation equipment, but not for human wages. Wage taxes are generally only deductible as paid. This structure allows firms to generate a significant financial benefit by claiming tax deductions sooner for robots.
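The timing benefit described above can be made concrete with a present-value calculation. The figures below (a $500,000 outlay, a 21% tax rate, a 5% discount rate, and an immediate year-zero deduction for equipment versus wages deducted as paid over five years) are illustrative assumptions only.

```python
def present_value(cashflows, rate=0.05):
    """Discount a list of annual cashflows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))


cost = 500_000.0   # equipment cost, or the equivalent of five years of wages
tax_rate = 0.21

# Robot: the whole cost deducted immediately (accelerated/bonus depreciation)
robot_saving = present_value([cost * tax_rate])

# Wages: the same total deducted only as paid, $100k per year for 5 years
wage_saving = present_value([100_000 * tax_rate] * 5)

print(round(robot_saving), round(wage_saving))
```

Even though the nominal deduction is identical, the earlier deduction is worth more in present-value terms, which is the "financial benefit" the passage refers to.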
Automation also results in indirect tax incentives. For instance, human workers are also consumers who are responsible for paying consumption taxes, such as retail sales tax (RST) in the US or value added tax (VAT) in the UK. Employers are generally thought to bear at least some of the cost of these taxes, as they may have to increase salaries in response to higher taxes on workers. Because robot workers are not consumers, they are not subject to these indirect taxes and so firms can avoid any associated burden.
Perhaps most concerning, these policies result in dramatically reduced tax revenue for the government. That’s because most government revenue comes from wage and consumption taxes. Corporate taxation now represents less than 9% of the overall tax base in the US and is likely to significantly decrease following the recently enacted Tax Cuts and Jobs Act of 2017.
When firms replace people with machines (or elect to automate initially), the government loses the ability to tax workers. This is not compensated for in the form of higher taxes on corporate earnings. In the aggregate, this could amount to hundreds of billions of dollars a year in lost tax revenues if robots replace workers to the extent predicted by many experts.
Why Taxing Robots is Not a Good Idea
Bill Gates argues that today’s robots should be taxed: either their installation, or the profits firms enjoy by saving on the costs of the human labour displaced. The money generated could be used to retrain workers, and perhaps to finance an expansion of health care and education, which provide lots of hard-to-automate jobs in teaching or caring for the old and sick.
A robot is a capital investment, like a blast furnace or a computer. Economists typically advise against taxing such things, which allow an economy to produce more. Taxation that deters investment is thought to make people poorer without raising much money. But Mr Gates seems to suggest that investment in robots is a little like investing in a coal-fired generator: it boosts economic output but also imposes a social cost, what economists call a negative externality. Perhaps rapid automation threatens to dislodge workers from old jobs faster than new sectors can absorb them. That could lead to socially costly long-term unemployment, and potentially to support for destructive government policy. A tax on robots that reduced those costs might well be worth implementing, just as a tax on harmful blast-furnace emissions can discourage pollution and leave society better off.
Yet in an economy already awash with abundant, cheap labour, it may be that firms face too little pressure to invest in labour-saving technologies. Why refit a warehouse when people queue up to do the work at the minimum wage? Mr Gates’s proposal, by increasing the expense of robots relative to human labour, might further delay an already overdue productivity boom.
But as machines displace humans in production, their incomes will face the same pressures that afflict humans. The share of total income paid in wages—the “labour share”—has been falling for decades. Labour abundance is partly to blame; the owners of factors of production in shorter supply—such as land in Silicon Valley or protected intellectual property—are in a better position to bargain. But machines are no less abundant than people. Factories can churn out even complex contraptions; the cost of producing the second or millionth copy of a piece of software is roughly zero. Every lorry driver needs individual instruction; a capable autonomous-driving system can be duplicated endlessly. Abundant machines will prove no more capable of grabbing a fair share of the gains from growth than abundant humans have.
Waves of automation might necessitate sharing the wealth of superstar firms: through distributed share-ownership when they are public, or by taxing their profits when they are not. Robots are a convenient villain, but Mr Gates might reconsider his target; when firms enjoy unassailable market positions, workers and machines alike lose out.
Robots creating a wages and employment 'death spiral' warns IMF
The worst outcome under the IMF modelling is where robots only replace low-skill workers.
"While skilled labour enjoys continuous large gains, the wage for low-skill labour decreases in the short/medium run under conditions much weaker than in the benchmark model [where robots can do any job]," the IMF found.
"Nor is there any assurance that growth eventually raises the low-skill wage. Quite the contrary: there is a strong presumption the real wage decreases more in the long run than in the short run.
"The magnitude of the worsening in inequality is horrific."
Under the research modelling in this case, the skilled wage increases by between 56 and 157 per cent in the long run, while wages paid to low-skill labour drop by between 26 and 56 per cent.
The low-skilled group's share in national income also decreases from roughly a third to as low as 8 per cent.
Even in the scenario where robots only compete for some jobs, and the impacts on wages and growth are reduced, the IMF paper said inequality gets worse.
"Allowing for tasks that complement robots does not help as much as one might think, partly because more and more workers compete for those jobs, driving down the overall wage.
"In addition to the fall in the average wage and the rise in the capital share, unskilled workers suffer large decreases in absolute and relative wages."
Even in areas where robots can't compete, the news isn't great from the IMF team.
"This also does not really help, again because there are only so many of those jobs to go around, and labour chased out of the automatable sector tends to drive down wages."
As for solutions, the IMF broadly targets two possible ways to limit mounting inequality: through education, and tax.
Sadly neither option looks overly promising.
While education can be seen as an investment to convert workers from unskilled to skilled labour, it has its limitations.
"Can it offset the huge real wage cuts unskilled labour suffers and the decrease in labour's overall income share at an acceptable cost? And if the answer is yes, how long will it take for wages to increase for those who remain unskilled?" the economists ask.
As for tax, as governments around the world are already aware, it is not easy to track down and get a fair share of the profits and capital accumulation of big corporations.
- Improved education and access to skills, which may require major changes in the system of education finance and admission
- Reforms of labor market institutions to boost workers’ bargaining power, including a higher minimum wage
- Corporate governance reforms and worker co-determination of the distribution of profits
- Steeply progressive taxation that affects the determination of pay and salaries and the pre-tax distribution of income, particularly at the top end
Katina Michael, When Uber Cars Become Driverless: “They Won’t Need No Driver", IEEE Technology and Society Magazine, https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7484838
Australian "Robo-Debt" Scandal
At least 20,000 Centrelink debts were either wiped or reduced in a nine-month period, newly released figures show. The data shows 7,456 debts were reduced to zero and another 12,524 were partially reduced between July last year and March.
The event is taking place at the Shangri-La Hotel, Sydney. Katina is coming in via Zoom.
Date: Wednesday 5 September
Time: 9:00-10:00 AEST
Site Visit: Google (The Millennial and Technology Story)
Participants: ATO Senior Leadership Program
The disrupted digital frontier: How emerging technology of today will shape who we become tomorrow.
You are invited to join us in Sydney to hear our expert panel discuss disruptive technology and what it will mean for you.
Technology has created a state of perpetual revolution and is already disrupting traditional markets and social structures, changing the way we interact with the world around us.
Digital disruption will eventually affect every corner of Australian business and society. It will rewrite economics, scramble supply chains, blur category boundaries and make us question our ethics. How will your business be impacted and how will you respond to become a digital survivor?
Join our expert panel of University of Wollongong alumni and academics to explore the technological, social and economic impacts that these emerging technologies are having.
Meet our expert panel:
Professor Katina Michael, UOW alumna and Professor, Faculty of Engineering and Information Sciences, UOW
Prof Katina Michael’s contribution to the future of emerging technologies is vast and includes her editorship of the award-winning Institute of Electrical and Electronics Engineers (IEEE) Technology & Society Magazine from 2012-2017.
She is a Professor in the School of Computing and Information Technology at UOW and until recently she was the Associate Dean – International in the Faculty of Engineering and Information Sciences.
Since 2008 she has been a board member of the Australian Privacy Foundation, and formerly served as its Vice-Chair. Prof Michael researches the socio-ethical implications of emerging technologies. She has written and edited six books, and guest edited numerous special issue journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance. In 2017, she was awarded the prestigious Brian M. O'Connell Award for Distinguished Service to the IEEE Society on the Social Implications of Technology (IEEE SSIT).
Dr Michael holds a PhD in Information & Communication Technology from the University of Wollongong (2003) and a Masters in Transnational Crime Prevention (2009).
Dr Shahriar Akter, Associate Professor of Digital Marketing, Analytics & Innovation at the Sydney Business School, UOW.
Dr Shahriar Akter was awarded his PhD from the UNSW Business School Australia, with a doctoral fellowship in research methods from University of Oxford.
He has published in leading business and management journals with a Google Scholar h-Index of 20 and more than 1800 citations since 2013. He received the UOW Vice Chancellor's award for teaching, a nomination for excellent research supervision and several prestigious awards for research. He has won various internal and external grants, including more than $100,000 in 2017, mostly for his research on business analytics of big data.
He was awarded the Paper of the Year Award in 2018 by Electronic Markets Journal for his research on big data analytics. Dr Akter is an advisory board member of WebHawks IT and is also the Chief Advisor of Digital Marketing Next that investigates digital, social and analytics applications. He is also a member of the Australian Direct Marketing Association (ADMA) and Institute of Analytics Professionals of Australia (IAPA).
Dr Alex Badran, UOW alumnus and co-founder, Spriggy
Dr Alex Badran left his job at Citigroup to co-found Spriggy - a mobile app allowing kids to manage their pocket money with the help of their parents. Spriggy launched in 2016 and now has over 100,000 members. The co-founders met while working as derivatives traders at Citigroup, and connected over the belief that financial institutions should do more to help their users live happier financial lives.
Spriggy was one of 10 start-ups selected in August 2017 from a highly competitive pool for the Austrade Landing Pad in Tel Aviv (a 10-day boot camp). Dr Badran was awarded Most Innovative Team in the 2017 Finder Awards and Best Banking Innovation, beating Macquarie Bank and AMP Capital in the category. He was recognised by his peers as an elected Non-Executive Director of Stone and Chalk (from November 2015-November 2017). In 2017, Spriggy raised a further $2.5 million of funding to grow its business, and currently employs 15 people, making Spriggy one of the most successful early-stage start-ups in Australia.
Dr Badran holds Bachelor of Mathematics Advanced and Bachelor of Mathematics Advanced (Honours) from the University of Wollongong.
Dr Thomas Birtchnell, Senior Lecturer, Faculty of Social Sciences, University of Wollongong
Social scientist Dr Thomas Birtchnell says mavericks are small groups of technology users who are early adopters and tend to take risks. Somewhat like beta software testers, they will poke and prod to find the limits of use and in many ways, lay the groundwork for how people end up using the product or service.
Dr Birtchnell says the problem with technology is that despite the grand pronouncements made by entrepreneurs and those who have a vested interest in the mass uptake of a technology, no one really has a clue how it will turn out. “Technology does not determine human actions; humans determine the application of technology. Social and cultural forces are just as important in the development of technology as economic or technical ones.” Technology doesn’t follow a linear pathway. Innovations are most often a combination of different things used in a new way, but those combinations are unknown and unpredictable.
He is an associate member of UOW’s Institute for Social Transformation Research: expanding our capacity to understand and engage with our social, cultural and geo‐political environment.
Kylie Cameron, UOW alumna and Senior Managing Consultant, IBM
Kylie Cameron is a digital strategist, facilitator and leader and an Associate Partner within IBM's Global Business Services Digital Team. She has over eight years of consulting experience working across industry sectors including retail, financial services, telecommunications, pharmaceuticals, utilities, government, manufacturing, mining, and oil & gas.
Kylie's role requires her to work with stakeholders including media outlets to source relevant information; legal representatives to work through IP concerns; and with client analysts to define and implement a solution that supports their operations.
Intelligence agencies use cognitive technology in conjunction with other IT systems to increase the speed and efficiency of investigations. Cognitive technologies such as machine learning, pattern recognition and natural language processing tap into the explosion of unstructured data that can hold the key to breaking a case. Cognitive technology differs from traditional IT in how it’s set up and maintained as well as how users interact with it. As intelligence agencies implement cognitive solutions, they quickly realise the implications of these differences for their personnel, workflow and culture. But perhaps most significantly, cognitive technology affects the way users think. Analysts not only have more time to think because the technology helps them collect intelligence, but the technology also makes them think differently about how they do research and intelligence discovery.
Kylie holds a Bachelor of Information Technology from the University of Wollongong.
Dane Sharp, UOW alumnus and Digital Experience Manager, McDonald's Corporation
Dane Sharp is a successful, award-winning and highly skilled marketing, media, brand, product and digital manager. He has had the opportunity to experience many facets of business both locally in Australia and internationally and is currently the Digital Experience Manager for the McDonald’s Corporation.
Prior to this role, Dane held senior positions with Rip Curl, Under Armour and eBay. He has also had the opportunity to work closely with a partnership portfolio that includes Coca-Cola, Google, Apple, Telstra, Facebook, MySpace, Samsung, Woolworths, Target, Officeworks, Rebel Sport, Firefox, AFL, ASP/WSL, Tough Mudder, VML and DDB.
At McDonald’s, he leads a team that drives digital transformation for the business by developing and introducing innovation throughout the customer journey, identifying the most meaningful initiatives for customers and operators, and developing capabilities to bring them to life.
He holds a Bachelor of Arts degree from the University of Wollongong majoring in Communications, Cultural Studies and Journalism.
The UOW Knowledge Series showcases University of Wollongong thought leaders in various locations, discussing a range of engaging topics. Previous knowledge series lectures can be viewed here.
Innovations in Health Technology
Moderator: Jason Robert, Lincoln Center for Applied Ethics, ASU
Making Precision Medicine A Reality: Molecular Diagnostics, Remote Health Status Monitoring and the Big Data Challenge
George Poste, Center for Complex Adaptive Systems, ASU
Your body and Your Brain “At Risk” – The Business of Recalling Biomedical Implants
Katina Michael, School of Computing and Information Technology, U. of Wollongong
Jane Bambauer, James E. Rogers College of Law, University of Arizona
A case study: Development of a Novel Prosthetic Heart Valve
Geoff D. Moggridge, Cambridge University
Consumer electronics are “wants” bought by people who have purchasing power. These might range from human aids like calculators and robot vacuum cleaners, to entertainment-driven electronics like smart TVs and tablets, to personal assistants like smart watches and fitness trackers. While most do not consider biomedical implants like heart pacemakers and brain pacemakers to be “consumer electronics”, by definition they are “a good bought for personal rather than commercial use”. The only paradox in this instance is that this suite of biomedical implantables are really “needs” as opposed to “wants”. Patients have a choice whether or not to adopt this emerging technology, but most say that opting in is the only real option for maintaining their quality of life and longer-term wellbeing.
In the general consumer market, taking back a faulty product simply requires an original proof of purchase so an item can be validated as still being under warranty. In the case of biomedical implantables, a recipient simply cannot take back an implant for repair if it malfunctions. Biomedical implantables are willingly embedded in the body of a consumer by a surgical team, and require special expertise for removal, replacement or maintenance (i.e. upgrade). The manufacturer, for example, cannot conduct the removal process, but a surgeon with the right equipment and human resource support (e.g. nurses) can. In 2010, one supplier of pacemakers, Medtronic Inc., had to pay $268 million to settle thousands of lawsuits that patients filed after a 2007 recall of a faulty heart defibrillator wire that caused at least 13 deaths. In other cases, battery packs have failed causing disruption to consumer implants, and more recently we have witnessed software code security vulnerabilities in heart pacemakers which have meant that recipients had to undergo a firmware upgrade in a doctor’s office, a procedure that takes up to 5 minutes and is non-invasive.
On the one hand, these pacemakers are life-sustaining and life-enhancing to their recipients, on the other hand they place voluntary human implantees at some level of risk. The various types of risks will be considered in this presentation as will the impact of “recalls” on consumer implantees.
This Medtronic YouTube Video is shown in the context of this educational presentation under "fair use" rights. Gary's story demonstrates the positive and life-changing impact a DBS can have on one's life if they are suffering from Parkinson's Disease.
Now read about another Gary here. Two part interview will appear shortly in IEEE Consumer Electronics Magazine.
Warning: The contents of this video are disturbing.
A one-day expert workshop on the Internet of Things (IoT), focusing on the role of "soft law" in IoT governance. Attendance is limited to 30 people. I will be presenting a 10-minute talk on "Why Privacy Experts are Concerned about IoT" and participating in the roundtable.
Organiser: Professor Gary Marchant
Gary is Distinguished Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Regents' Professor and Lincoln Professor of Emerging Technologies, Law and Ethics, Sandra Day O'Connor College of Law; Executive Director and Faculty Fellow, Center for the Study of Law, Science and Innovation.
Presentation delivered on Sunday April 15, 2018, 2.30pm.
Poster presentation at the 9th Annual International Conference on Ethics in Biology, Engineering, & Medicine in Miami, Florida.
Sunday April 15, 8:30am Breakfast and Registration Announcements/ Welcome
BIOENGINEERING ETHICS EDUCATION Session Chairs: Dr. Katina Michael & Dr. Subrata Saha
What a wonderful opportunity to be at the 9th International Conference on Ethics in Biology, Engineering & Medicine since I was travelling to Florida for RFID2018. I have been trying to get to this conference for a number of years without success. Professor Subrata Saha began the conference series, and this year the host university was Florida International University. FIU's main campus is situated in Miami and this year the conference chairs were Subrata Saha and Zachary Danziger.
Keynote: Jonathan Moreno University of Pennsylvania
There are some people who can entertain and inform all in the same breath! Professor Jonathan Moreno is one of those people. A self-confessed historian of science rather than an ethicist, Moreno breaks down decades of research in minutes with his knowledge of history, science and technology, and national security. He officially holds the David and Lyn Silfen University Professorship of Ethics and is a professor in the Department of Medical Ethics and Health Policy.
The following notes are from Jonathan Moreno's talks. I took the notes as accurately as possible, but surely I have made errors of transcription, sentiment or context on occasion. For this I take full responsibility. I did manage to capture some of the audio of the presentation. The greatest value I received was the reassurance that my own research in this space is richer for its multidisciplinarity, and while some may not see its relevance yet, the time is fast approaching (almost here now) when we will become preoccupied by ethics first and foremost in the development process of any kind of scientific or technological endeavour.
Mind Wars – Jonathan D. Moreno
Brain science and the military in the 21st century
- National security and the brain before neuroscience
- Non-invasive imaging
- Brain-machine interface
- International law
"The Silence of the Neuroengineers", Nature, 2004
- Nasty attack on DARPA
- Given up moral standing—what will be done with your work
- Cheap shot at DARPA—figuring out how to get prosthetic arms
- Dual use and multiuse
- Decide what purposes these devices should be put toward
- Multiuse not just single purpose (consider importance of prosthetics)
The Era of “Big Neuroscience”
- Simulate the human brain
- Map activity of neurons
The Human Brain Project (Henry Markram—simulate human brain)
- We’ll believe that when we see it!
- Concerned about PTSD, TBI (traumatic brain injury) and dementia in general
- Growth of FMRI
- Dozens of labs and postdocs, grants, publications
In Pharma world there is not a whole lot that people have to offer for these medical illnesses
Diagnostic based approach (DSM) or neuroscience-based?
- Epistemological debate
- Not much in pipeline (severe depression, schizophrenia) with standard drugs
- Pharma buy up small companies that specialise in one or two drugs
- Centre for Neuroscience
- Neuroscience bootcamp—teach about the brain
- Growth in publication in 1990s was explosive and now continues
- Courtroom—law—Daubert test for scientific evidence
Cognitive Neuroscience Funding (US Defense, 2011)
- Army $55 million
- Navy $34 million
- Air Force $24 million
- DARPA >$240M
Margaret Kosal, Georgia Tech (5/12/11)
White House BRAIN INITIATIVE FY 2014 (soft side of neuroethics)
Examples of DOD Research Programs, 2016
- Human, Social, Cultural, Behavioral Modeling (HSCB)
- Army Research Office, Life Science
- Direct neural interfacing
The US Third Offset Strategy $18 Billion FY 2017
- Wonky defense department strategist term
- 1st offset was the Atomic Bomb
- 2nd offset was Guided Missiles (First Gulf War)
- 3rd offset a grab bag, term of convenience, computational neuroscience, what are they doing in general
- E.g. robotics, systems autonomy, miniaturisation, big data, advanced manufacturing
- Partner with innovative private sector companies
- Manchurian Candidate (1962)
History of brain science loosely understood and counterintelligence
- Sinatra bought copyright… now 1970s cult film
- Rumours… American prisoners of war were brainwashed (South Florida journalist invented term)… hypnotised… one of them would be turned into assassin… and then VP candidate was Manchurian Candidate
- Fairly new drug called LSD
- Stumbled on by a chemist in Basel, Switzerland, Albert Hofmann
- Put it on shelf, and then he had visual hallucinations
- Passed away at age 101
- POWs had made false confessions of committing crimes against Koreans
1953, experiments on LSD
- Make a discreet man indiscreet
- Sex, alcohol—old ways are best… get information from people
- Worried about putting LSD into water supply
- This is not new
- LSD in trials
- Cardinal Mindszenty (1949 trial) interrogated
- CIA infiltrated 17 area groups and gave out LSD
Operation Midnight Climax (two-way mirror)
- Report to the President by the Commission on CIA Activities within the US
- Hottest new tech to put into defense and offence
1950s big thing was hallucinogenics
- Iconic drug from 1960s… but in 1950s was a national security
Operation Moneybags, 1964
- 25-50 min after the drug had taken effect; 1 person was taken away 20 min after
- Using drug to modify behaviour to see if you can find some defences against it
- UK… joint US – Canadian operation
- Letter to Parliament from Secretary of State for Defence, 18 July 1995
- Veterans were upset
- Couldn’t have asked for consent because it would have screwed up the experimental design
- Usually it is the US (FOIA, Second Amendment; celebrities get excited about it)
- Jennifer Lawrence to direct chemical warfare film
- Neuroengineering with drugs, 1950s and 1960s
In Florida and California in 1970s…
- Loose… sun, ocean, open lifestyle in Florida…
- New culture developing
- Let’s not do drugs… let’s find natural ways to live on beach and be hippies, and we can learn from war fighters… people talking to dolphins… we can learn how to be Buddhists and warrior monks…
- Army picked it up in the 1980s…
1988 The Mind Race “Enhancing Human Performance”
Committee on Techniques for the Enhancement of Human Performance, The National Research Council, 1988
- Warrior monks
- Levitating and second healing and walking through walls
- NRC advised them NOT to continue with this project
- The Men Who Stare at Goats
- supplement or replacement for amphetamines
- Anti-sleep pill (approved in 2004 for use by air force pilots)
- 60-80 hours
- Speed? But not exactly speed
- Cognitive enhancers… decent meal and exercise…
The trust drug- oxytocin
- Natural production is associated with trust behaviour
- Cuddle drug…
- May be artificially administered in a spray to encourage cooperation
- Use in interrogations?
- Claremont—subjects given oxytocin were more cooperative and more trusting
- See Paul Zak's experiment and TED talk
- In counter-intelligence could you do this with a suspect in a terrorist plot?
- Violation of chemical weapons convention?
- Lawyer—they fight for ISIS, they don’t fight under the Geneva Convention, just give it to them
The Anti-Conscience Pill—beta blockers; dampened experiences, subjects did not get happy or sad
- In 1990s… give to people before warfighting/combat…
- Beta blockers can be used to treat stress, prevent PTSD
- Suppress release of hormones like norepinephrine that help encode memory
- Could they reduce guilt feelings?
- Would we want something like this?
- 1990 brain fingerprinter P300 (recognise a picture) true or false
o FBI have watched some of these
- Visualising memory
o Episodic memory retrieval
- A private company: for 7–8 years now using a certain device with some electrodes, claiming they can restimulate your memory… recall of words/events
- Memory retrieval is harder than they thought—memories are very distributed
o Alan S. Cowen et al TMS
- Neural portraits
- Functional MRI
- Clip reconstructed from brain activity
- Presented clip in MRI—then it can be reconstructed
- Can you do this with dream images? Reconstruct dream images?
- T. Horikawa, M. Tamaki et al., Science, April 4, 2013
- Neural decoding of visual memory during sleep
- In theory you want to watch your dreams…
- Visual cortex… is big… at the back… a lot of stuff… cheating…
- Reconstructing speech from human auditory cortex
- DARPA project
- tapping into the rat (whiskers—right and left)
- it does have a choice… but if it does the right thing, it gets a pleasure-centre hit, right or left
- DARPA funded story… how the brain processes these signals
- Seattle brain-to-brain interface, Doree Armstrong and Michelle Ma…
TMS (Transcranial Magnetic Stimulation)
- Transcranial Direct Current Stimulation (tDCS)
- Replace shock therapy
- B Zwissler—make you see something that you didn’t see
- J of Neuroscience, neural modification
- DIY tDCS
o Transcranial experimentation
- Beckman institute in Uni of Illinois
- MIT undergraduate… TMS
- An experiment in the neuroscience of ethics
- That’s terrible… won’t let my girlfriend cross
- After TMS… just asking if the girlfriend is ok
- Neurons are genetically engineered to carry an adapted protein
- The very hungry mouse (chronically implanted, on or off)… getting into the hypothalamus
- What if you could link people’s brains together
- Talk to them without pointing, without using a radio
- Computing arm movements with a monkey brainet (linking brains together) at UniPenn
- Arjun 2015 (with Duke Uni previously)
- Pooling cognitive resources
- Pooling electro-resources
- Effective communications… movies better… or hook each other up… like monkeys…juice…
- Beyond better standard array (only in the lab it works, Brown University)
- DARPA: bridging the bio-electronic divide (100,000 microelectrode array)… quantum
- Send signals intentionally
- Korea… robosoldier… 10 years… in DMZ… heavily armed… make decisions on battle field…
- Wall Street Journal…
- Needs a “person in the loop”
- Autonomous weapons already been used… moving to offensive world… autonomous lethal weapons…
- Face recognition algorithm finally beats humans… by 1% machines are better…
- UNOG Disarmament conventions
- Are the human experiment rules adequate?
- How can we assess risks and benefits?
- Human Rights
New Ethical Principles?
NRC, 2008, 2009
- Neuroscience for Army
- Emerging and Readily available tech and national security
- Framework for ethics
The Royal Society report on Neuroscience, National Security
Italy even joining
- Getting human cells… neural organoids into rat brains to see if they can get a smarter rat?
- What about the AI world—IBM has a computer beating Go? Why are we not worried with IBM? Why is there NOT a committee?
- Driverless cars? Insurance, risk question at the heart…
- Computers cannot recognise INTENTION
Biggest issues are not consumers but national security
- Wellcome Trust
- The Dana Foundation
- The Greenwall Foundation
- History and Sociology of Science (Penn Arts and Sciences)
- Department of Medical Ethics and Health Policy
- Rockefeller Brothers Fund
- National Institute of Health
Richard L Wilson from Towson University delivers talks at BEC
Big data is being introduced in the insurance industry which brings about increased regulations and restrictions regarding customer privacy
Technological artefact, happens to be a watch. Uses it to self-medicate. And then they join a group on Facebook, and the individuals in the group self-medicate. There is no IRB board for recommendations they are making.
Should insurance companies have access to the personal information, including health information, that is tracked on the wearable technology devices?
What is a wearable technology?
A category of technology devices that can be worn by a consumer and often include tracking information related to health and fitness
Wearable Tech and Your Health
Wearable tech is becoming increasingly popular
Wearable tech can monitor live movements, heart rate, activity levels
Can calculate risk
2016: 275 million devices sold; 2017: about 332 million
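The risk calculation insurers might run on such data can be sketched as a toy score. Everything here—the function name, weights, and thresholds—is invented for illustration and is not any real actuarial model:

```python
# Toy "risk score" from wearable metrics -- purely illustrative.
# Weights and thresholds are invented; real actuarial models differ.

def toy_risk_score(resting_hr: float, daily_steps: int) -> float:
    """Return a score in [0, 1]; higher means higher assumed risk."""
    # 60 bpm or less maps to 0; 100 bpm or more maps to 1
    hr_component = min(max((resting_hr - 60) / 40, 0.0), 1.0)
    # 10,000+ steps/day maps to 0; sedentary maps toward 1
    activity_component = min(max((10_000 - daily_steps) / 10_000, 0.0), 1.0)
    return round(0.5 * hr_component + 0.5 * activity_component, 3)

print(toy_risk_score(resting_hr=58, daily_steps=12_000))  # 0.0 (active, low HR)
print(toy_risk_score(resting_hr=90, daily_steps=2_000))   # 0.775
```

The ethics question raised in these notes is precisely that a score like this could silently move a premium up or down.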
Wearable Tech and Accidents
Wearable tech isn’t limited to things such as watches that record exercise habits
Insurance use of wearable cameras
Cameras mounted on cars
Cameras record events, accident scenes
Good training tools
More accurate accident reports
Health insurance companies
Rights based ethics
Users concerned about lifelogging (family medical records) being accessed which could result in increases or decreases in premiums
Anticipatory: future- what is it going to look like?
Corporations making the devices
Utilitarianism: if you do what we tell you we will increase happiness and reduce harm/sadness
Created as a way to track users
User agreements are important
3 billion in 5 hours… selling what you have given them!
The info belongs to the company, but data breaches and the changing landscape of big data create new opportunities
Lawmakers and governments
Rights-based ethics: it is within a person's right to refuse sensitive data to insurance companies
HIPAA regulations, the rights of wearable users to refuse their health data to private and government insurance
Barriers to Overcome
We need readjustment of ethical principles
Can insurance companies use your data without permission
Someone at work make you track yourself
If insurance companies are to have access to data on wearable technology, the user should sign a consent form
Universal healthcare- do not overcharge healthy people
Government has responsibility to keep people healthy
Tele Surgery and Virtual Reality: Robotic Assisted Surgery Anticipating the Ethical Issues
What is an autonomous surgeon?
- Robot capable of performing multiple surgical procedures without the need of constant human intervention
- Uses feedback devices such as cameras, sensors etc
Over 40K surgeries have been robot aided in the last year
Programmed to minimise doctor’s intervention
Most common is the da Vinci
Costs $1.7 million
Not ready for human trials
Where the field can go
Use teleoperated for operations overseas where doctors are unavailable
Possible that surgical robots can replace surgeons altogether
Program robots to perform surgeries with no assistance
Personalised surgery robots
Patients who have off site surgeries
They are costly
Some surgeries provide NO benefit for using the robots
What if there is a disconnect mid-surgery?
If autonomous and something goes wrong no doctor will be present to fix it
Possibility for hacking/and or sabotage
In the Loop- human control
On the Loop- sight of human
Out of the Loop- no human involved
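The three oversight models above can be sketched as a decision gate on robot actions. The enum names and logic are an illustrative sketch, not from any robotics or safety standard:

```python
# Sketch of the three human-oversight models as a gate that decides
# whether a robot action may proceed. Names and logic are illustrative.
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human must approve each action"
    ON_THE_LOOP = "human watches and may veto"
    OUT_OF_THE_LOOP = "no human involved"

def may_proceed(mode: Oversight, human_approved: bool, human_vetoed: bool) -> bool:
    if mode is Oversight.IN_THE_LOOP:
        return human_approved       # nothing happens without explicit consent
    if mode is Oversight.ON_THE_LOOP:
        return not human_vetoed     # proceeds unless a human intervenes
    return True                     # fully autonomous: no gate at all

print(may_proceed(Oversight.IN_THE_LOOP, human_approved=False, human_vetoed=False))  # False
```

The ethical weight sits in that last branch: out of the loop, the gate always opens.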
Import cheaper parts
Secondary power source
Isolate robot from global networks
Airgap the system
Could conflict with beliefs
Some Amish, Mennonite etc
An option for the patient to choose whether or not an autonomous surgeon performs their surgery
Publish the trial testing and trial data to the public
E.g. dentist example
Responsibility in the event of an undesirable outcome
Non-sentient machines having control over human morality
Ulterior motives in programming (intentional harm or death)
Pre-establish legal liability prior to the start of the surgical process
Maintain human surgeon oversight
Require registration of robots and periodical assessments to ensure programming and function
The Bioethics of Implantable Biohybrid Systems
Andres E Pena (firstname.lastname@example.org), Ranu Jung
What are biohybrid systems?
Life changing neuroprosthetic technologies
There are patterns of neural activity
Therapeutic and Reparative Neurotechnology
Cochlear implants: success story
Neuroprosthetic device. 300K users worldwide
Paving way for artificial limbs
Control, and feel like these are their own
ANS – Neural enabled prosthetic hand system
Restore sense of touch and proprioception
LIFEs: longitudinal intrafascicular electrodes
LEAD system in upper arm
Used to stimulate periphery nerves
Designed for comparable… (J. Neural Eng., 2017)
Ethics of hybridisation and potential impacts
Sense of self
Perception of reality
Restoration vs enhancement
Loss after failure
Cost and coverage
Access to maintenance
Future of implantable biohybrid systems
Closing the loop
Adaptive Neural Systems Lab
Some of them are linked with materials (which may not be able to guarantee neural function). The materials of the implants should last for the lifetime of the person. With neural stimulation you don’t expect this problem from the electronics, but how will the stimulation itself last for a lifetime?
* The graphics from Altered Carbon are placed here in so far as the television series was mentioned by the presenter, simply as futuristic visions.
Reuse of cardiac pacemakers in 3rd world countries… ethical issues…
Inline connector designs
For now just hoping to get sensation back to upper arm for those who have lost it, so they feel that their hand or their prosthetic is actually a part of their body. Using same approach as cochlear implant but for upper arm.
ANS Team at FIU Rocks
The Adaptive Neural Systems Laboratory team headed by Dr Ranu Jung is an exceptional team with a young dynamic group. Looking at the biographies of this team, quite a few of the researchers, including Dr Jung had long standing careers at ASU. Their work is truly inspiring and groundbreaking, and students/ grad students/ and staff have a deep awareness of ethical issues. It is exciting to meet teams like this at conferences who take philosophy and ethics so seriously.
Navigating Ethics in a STEM Training Grant – El Paso
Building scholars for the future of biomedical research
Responsible conduct in research
Identifying an impact
Signs of a sound scientist
Goal: Train biomedical researchers through a multi-institution consortium
Partner Schools: ASU, Baylor, Clemson etc
96 students (60 Female, 36 male)
Responsible conduct in research
Dangers of Research Fraud
Goal: provide an experience of research ethics… students take ownership of their research
Participation and activities other than watching, listening and taking notes
Scale Up Space
Students sit in a circle within their groups
Sudden introduction into scientific research realm
Bootcamp for incoming freshmen—3 weeks course in Summer
What is ethics? What are values? What are your values? Why are they important?
Scenarios, situations, put themselves in position of principal investigator
How do I feel about it?
Boot camp, research foundations course, research development course
Take the CITI module – Collaborative Institutional Training Initiative
Based on RFC Completion
Gauge the impact of the course on students
The methods practised are impactful both for students who are doing the courses as BUILDScholars and those who are doing them as “affiliates”.
Similar outcomes for 2015, 2016
Building Student Identity
BuildScholars have bootcamp and CITI training
Identified only first semester… is the impact same over time?
Students are more active in identifying potential for research misconduct
Have confidence to bring up to superiors
“We just want to do research why do we have to do ethics training?”
NIH video called “The Lab”—dynamic situation… if not following method then there are issues
Biomedical research intro
Safety training, blood/pathogen, misuse of laboratory equipment
Computer Programming Literacy for Medical Professionals
Teshaun J Francis
Should doctors and nurses learn to code?
Jeff Atwood: “Please Don’t Learn to Code” because of specialisation
Programming literacy should be a requirement but not learning to program
Should medical professionals learn to code?
Computer code is a tool to solve problems
Organisational tool. Neuromind.cc
TED Talk: clinical decision supporting systems
Data input, manipulation, data output
How technology is being used to teach medical students?
Next generation quiz cards
Not a replacement for cadavers… but there to supplement
Microsoft HoloLens teaching anatomy
IBM Watson Health
“You want doctors to teach anatomy”? Not IBM, unless they utilised doctors to do this
IBM Watson is an attempt at performing the diagnostic role of a doctor
75-98% agreement with a doctor
It is a tool… not a doctor…
Watson agrees with physicians…
Teaching Ethics in an Advanced Education in General Residency Program: general dentists UCSF
AEGD. After 4 years of dental school you can specialise, go to practice, or do 1 year of clinical training or research
Teaching ethics to dental students is really difficult: disengagement of residents, perceived irrelevance, etc.
Hours of ethics training for dental students are decreasing…
There is basic training, but in postgrad it is little. The framework is absent.
Residents presented cases every week based on personal experiences while they were in residence!
Mean number of contact hours of ethics instruction is around 26.5 hours- undergrad
In postgraduate only about 15 hours
“Ethics is important, but our curriculum is already crowded and there is no room for ethics teaching”
“observing how seniors do it”
“learnt at home with family, not at school”
What is ethics?
“when in doubt it is probably not ethical”
Five fundamental principles
Patient autonomy: informed consent, what is treatment, risk, benefits, costs, alternatives in layman’s terms
Nonmaleficence: do no harm
Beneficence: to do good, patients get treated in time
Justice: duty to provide dental care no matter who it is
Veracity: dentist-patient relationship based on trust and truthfulness
· About 36% of schools ONLY teach this
What happens when a patient comes in and has issues with sensitivity in front teeth… teeth issues, decay, cannot pay… anxiety as victim of domestic violence… temporary crown… and then dismiss… what do you do?
“Termination of Care”: cannot terminate midway; cannot withhold records
Resolution: new resident will complete front 6 teeth, and do not take on new responsibility unless she pays up. Need to be stable. Provided care. Did not do harm.
Began Jan 2017 to Dec 2017.
Personal clinical cases: 11 residents, plus students, academics and risk management office
Vertically integrated ethics curriculum is being proposed
Common ethical challenges
Opportunity to discuss ethical dilemmas “on the fly”
Available resource consultations
Practice management implications
Designing a case based didactic program
Goals of care
Understanding informed refusal
California Ethics Law
Informed consent and presumed consent—did they sign?
I was lying down and vulnerable at mercy of a dentist—lawyers get in
Truth is that students are serious about the clinical training—they are in the field. Do they block out Friday morning from residency, where the whole day could be used for integrated learning?
Increase from 26 to 100 hours on ethics…
Research Ethics Training for Rising Researchers – Eman Ghanem, Sigma Xi
Data management The responsible handling of data
Staying professional: collaboration, mentorship, authorship
Disclosing details: confidentiality and conflict of interest
Respecting research subjects: bioethics crash course
Staying cool: understanding research pressures and consequences
Research Ethics Jeopardy
· Modules were 30 minutes each
· Activities: Case study, discussions, Game: around the lab, ethical or unethical
Necessity of Animal Research Ethics
Humane treatment is essential in research
Reduction (# of animals), Refinement (improve techniques for harm/stress), Replacement (simulation, computational models)
Involve other collaborations—like vet courses—to sacrifice fewer animals
Idea development, protocol development, grant app, review, experimentation, data analysis, result interpretation
Greatest contribution when anatomy and physiology is similar to human
How to establish link?
Animal suffering and distress can be reduced by collaboration
Data Ethics and Computational Bioscience
Kenneth W. Goodman (University of Miami)
Many sources of data collection and sharing that are morally obligatory
Population health science has always been able to presume the consent of its beneficiaries
Technology, scientific advances, beliefs about how the world is?
Let’s try that?
History of pharmacology
Raising questions for applied prescriptions in x or y
Blood letting questions?
We try to organise this through scientific theory
Blood, phlegm, yellow bile, black bile (the four humours)
George Washington was famously killed by his surgeons through over-bloodletting
Surgeons over physicians…
If you don’t know what is going on inside DO NOT cut it out.
Film called THE PHYSICIAN with Ben Kingsley
Organ transplantation, autopsy, necromancy, mutilation
What is the role of religion?
What about keeping dying people alive for a transplant…
Destination therapy and not “bridges”
Is that an appropriate use of a technology?
Resuscitate every dying patient?
End of life care
Probability of futility AND amount of suffering the child would undergo
Dying with more suffering…
Texas Advance Directives Act
Cleveland Clinic Institution—but not in Florida… it is too weird here…
Broader application of data
Precision medicine, personalised medicine, ECMO, ASV devices, end of life, duty of care etc
If you just used population health to help people, you could save many people across the globe with CLEAN DRINKING WATER
“Is it selfishness? Not love…”
Secondary gain: “if she dies, the social security $ will stop”
Population health—need more information
(Paracelsus)… pattern in the data… respiratory illness when they were in mines… correlation
John Snow… dot on a map… data collection without consent… did he stop the epidemic?
Removed the pump handle… shoe-leather epidemiology
Epidemiology in the era of big data. Epidemiology 2015
More human lives will be touched by technology than anything else…
Privacy vs science
Privacy and confidentiality were never seriously considered to be hard barriers to sharing and analysis
Reductio issue (left toe example)… can you help others with same problem?
Biomedical research has long relied on the work of trusted entities to collect health information
Security, de-identification, anonymization, pseudonymisation
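Of these techniques, pseudonymisation can be illustrated with a minimal sketch: a keyed hash replaces the direct identifier, so the trusted key-holder can still link records across datasets while outsiders cannot. The key, identifier format, and field names below are hypothetical:

```python
# Minimal pseudonymisation sketch: a keyed (HMAC) hash replaces the
# direct identifier. Key and record values are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"held-by-trusted-entity-only"  # hypothetical key

def pseudonymise(patient_id: str) -> str:
    """Deterministically map an identifier to an opaque pseudonym."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-004211", "diagnosis": "J45.901"}
safe_record = {"pid": pseudonymise(record["patient_id"]),
               "diagnosis": record["diagnosis"]}
# Same input always yields the same pseudonym, preserving linkability
# for the key-holder -- which is why this is NOT anonymisation:
assert pseudonymise("MRN-004211") == safe_record["pid"]
```

This is also why the list above distinguishes anonymisation from pseudonymisation: whoever holds the key can reverse the linkage.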
Cloud is a metaphor
Data is used sloppily—knowledge, information etc
Ethical concerns should focus on decision support- given variable data and database quality, uncertainty
Appropriate user and users
Data sharing and interoperability
Picture of a 5 MB computer hard drive from 1956.
Privacy is not an absolute right
Must therefore be balanced against other rights, including a right to benefit from science
Infrastructure support refusers
Health and privacy
Smart laws and policies
Recognition of duties to collectives
Learning healthcare systems
Public health analogue “duty to treat”
Role of Ethics
Illuminate the force, scope and limitation of rights
Identify and balance conflicting rights, and rights and duties
Identify and justify duties
Management and governance
Balance health, data, privacy
Identify best practices
Develop revised IRB-like review entities
Consultation capacity for risk communications, decisions under uncertainty
Agreement and civil society
DNA, epigenome, life going through illness, and exposure to all things in the world…
Social determinants of disease…
Scandinavia—clinical care/research? Health insurance provision?
Health care provided for everyone? They affect the health of populations. Buy insurance?
Understanding cases within profession (Wade Robison)
Edward Tufte’s compelling but mistaken reading of what went wrong with the Challenger.
Challenger ‘O-ring’ safety: the astronauts were killed by impact, not by the explosion
Tufte blamed the engineers- if they had done a scatterplot they could have worked it out
You can see increasing risk of damage
The ascending curve of risk
If a proper scatterplot had been done, no one would have risked launching the Challenger in such cold weather
“They didn’t know what they were doing, but they were doing a lot of it”
Big thing is that NASA launched BELOW 50°F, at around 40°F.
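Tufte's scatterplot argument can be sketched numerically: bin O-ring damage incidents by launch temperature and the ascending curve of risk at low temperatures appears. The launch data below is invented for illustration, not the actual Challenger record:

```python
# Illustrative (NOT actual) per-launch data: (temperature in F, damage incidents).
# The point is the pattern Tufte argued a scatterplot would have revealed.
illustrative_launches = [
    (53, 3), (57, 1), (63, 1), (66, 0), (67, 0), (68, 0),
    (70, 1), (70, 0), (72, 0), (75, 2), (76, 0), (79, 0), (81, 0),
]

def mean_damage(launches, lo, hi):
    """Average damage incidents for launches with lo <= temp < hi."""
    vals = [d for t, d in launches if lo <= t < hi]
    return sum(vals) / len(vals)

cold = mean_damage(illustrative_launches, 50, 65)  # cold launches
warm = mean_damage(illustrative_launches, 65, 85)  # warm launches
print(cold, warm)  # cold launches show markedly more damage
```

Binned this way, the ascending risk curve toward cold weather is hard to miss, which is the core of the argument about the launch decision.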
Research Misconduct in FDA Regulated Clinical Trials: What Not to Do
What is research misconduct?
The falsification of data or results in recording and reporting
Deviation from established protocol- data doesn’t reflect what you were studying
Violation of human subjects and rights
How did definition happen?
World Med Assoc Declaration of Helsinki
Tuskegee Syphilis Study and Belmont Report
US Government Oversight
DHHS (office of human research protection)
How did FDA get authority?
Thalidomide incident in 1950s (Frances Kelsey)
1962 Amendments (Kefauver-Harris Amendment)
What does FDA do to ensure research misconduct does not occur?
What is FDA looking for in regards to ethical standards and regulatory requirements
Required reports by IRB
FDA Research Observations
FDA investigators cite observed deviations from regulatory requirements
What happens after observations are reported?
Observations by FDA investigators are passed through multiple levels of review before a final classification
FDA Clinical Investigator Action
Warning letter and NIDPOE (Notice of Initiation of Disqualification Proceedings and Opportunity to Explain)
Get in trouble—this is big issue
University of Pennsylvania Gene Therapy Trial
Objective: treatment for OTC deficiency
Prevents proper elimination of ammonia
Death in the trial within 96 hours
1999 Nov. 1.5 month Clinical Investigator James Wilson
Based on all the citations that happen
Did not follow protocol
Should have STOPPED clinical trial… 5 increases of therapy before subject died in cohort 6!
They did not tell the patient ALL the information.
University College London Medical Device Implantations
Strict regulatory oversight—safe for humans
Results: Deaths of guinea pig subjects
Termination of researchers employment
Regulatory and criminal investigations
Bial-Portela Clinical Trial
The investigators didn’t do anything wrong, this was just a poorly designed trial
Cohorts were overlapping
Real time data wasn’t coming in and subject died, and others ended up with brain lesions
Written to prevent issues that have previously occurred
Human subject rights protection
Assurance of data validity
Inspections and Audits (FDA, OHRP)
Dr Sheldon Krimsky
Monsanto Litigation Documents
Integrity of Research journals
Crisis in Credibility
Contested issue between science and commerce
Can it affect public health
Corporations have a different view of science- one of many inputs to production
Science just one of inputs into production function …. Lead, asbestos, BPA, tobacco (lobby against)
IARC, WHO Report 2000
DSM-IV: panel members had conflicts of interest when they produced categories
100% panel on mood disorders, schizophrenia etc.
“On the Take”
Conflict of interest and scientific journals
Can you believe what you read
Funding effect in science
The Monsanto Litigation
The specialised cancer research arm of the WHO (IARC) reached a determination that the chemical glyphosate (in glyphosate-based herbicides) is probably carcinogenic
More than 270 of the cases have been consolidated into multi-district litigation for oversight by one judge in the US District Court in San Francisco
Multi district litigation
Monsanto’s web site on glyphosate—claims it does not have adverse effects on humans, wildlife or the environment (non-Hodgkin lymphoma, NHL)
Right to Know web site
Ghostwriting: that is how they do business. EPA referenced it. 2016 determination: glyphosate was not likely carcinogenic
They hired Intertek Scientific & Regulatory Consultancy
Undue influence on regulatory agencies
Undue influence on scientific journals…
Unethical to ghostwrite journal articles!
It was published as toxicology articles
William Heydens… disclosed this
Editor in chief got money for this from Monsanto!
FCT (Food and Chemical Toxicology)—retract or remove from the journal… paper retracted after 1 year
“results were not definitive”… paper retracted but as soon as it was another journal published the article
Who should a journal editor pick as a reviewer—should an employee of Monsanto review a paper about Monsanto?
When vital public health reports are published in refereed journals there is a heightened expectation that they meet professional standards of scientific integrity. Those standards include full disclosure of conflicts of interests…. Sources of funding….
The Lancet—3 positive reviews, and one so-so… “not a priority to publish”
Ethical Guidelines for Authorship- Subrata Saha
Gift authorship; ghost writing
Millikan Oil Drop Experiment: https://www.youtube.com/watch?v=XMfYHag7Liw
Fletcher and Millikan agreed to share the work, but each took sole authorship of separate papers
The Journal of Bone and Joint Surgery
Henry R Cowell 1998 wrote an editorial
Inform patient care
Goal should not be to add to CV but to help
Ideas should be new
Don’t waste space on redundant data… should be consolidated, not “salami-sliced”
Prior to the experiments
Informed consent, clinical aspects, cost, statistical method—settled BEFORE we begin
Manuscript written with the same patient care in mind as before
Career advancement, get grants
Rights to copyright
Pressure to publish first
Who was first? Discovery
Tenure and professional standing
Who should be an author?
Advisor and graduate students…
Now many groups collaborate, in group and within nation
All work builds on multiple achievements
Expectation: author wrote it; author did work that was written down; always real data; and results and claims should be accurate; author should disclose bias or potential for COI.
Contributions—interpretation, acquisition of data, drafting, critical, final submission, author should know about it and carefully review it
Financial disclosure form
Should not be allowed to publish
Who is not the author?
Someone who got funding but played NO role in the design or rationale of the study should not be an author
General responsibility for lab—not enough for author
Just because they have the data—not enough for author
Other authorship issues
Still a problem: divvying up work, order of authorship, etc. Avoid conflict by agreeing from the outset, before publishing the paper.
What are external regulations?
Self-plagiarism—is this plagiarism?
Biomedical Device Risks and Non-Medical Implantable Risks by Katina Michael
Slides available here.
Audio of my presentation here.