Noel Sharkey (University of Sheffield), Aimee van Wynsberghe (University of Twente), John C. Havens (The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems), Katina Michael (University of Wollongong)
Converging approaches adopted by engineers, computer scientists and software developers have brought niche skillsets in robotics together in the service of complete products, prototypes and applications. Some robotics developments have been met with criticism, especially those of an anthropomorphic nature or those performing collaborative tasks with humans. Given the emerging role of robotics in our society and economy, there is an increasing need to engage social scientists, and humanities scholars more broadly, in the field. In this way we can ensure that robots are developed and implemented with due consideration of the socio-ethical implications they raise.
This call for papers supposes that, more recently, projects have brought on board personnel with multidisciplinary backgrounds to ask the all-important "what if" and "what might be" questions at the moment of initial idea generation, in pursuit of human-centered design. Drawing these approaches into the design process means that areas of concern to the general public are addressed. These might include consumer privacy, citizen security, individual trust, acceptance, control, safety, fear of job loss and more.
By introducing participatory practices into the design process, preliminary results can inform developers about how to weigh a particular course of action. This is not to curtail the freedom of the designer, but rather to acknowledge the value-laden responsibility that designers have in creating things for the good of humankind, independent of their application.
This call seeks novel research results, demonstrated on working systems, that take a multidisciplinary approach to technological solutions responding to socio-ethical issues. Ideally, a Robotics and Automation Magazine paper will be complemented by a paper submitted in parallel to Technology and Society Magazine that investigates the same application from a socio-ethical viewpoint.
March 10 - Call for papers
August 1 - Submission deadline
October 1 - Author notification
November 15 - Revised paper submitted
November 25 - End of the second review round
November 30 - Final acceptance decision communicated to Authors
December 10 - Final manuscripts uploaded by authors
March 10, 2018 - Issue mailed to all members
* Information for authors can be found here.
Robots have been used in a variety of applications, from healthcare to industrial automation. For repetitive actions, robots offer accuracy and specificity. Robots do not get tired, although they do require maintenance; they can operate 24x7, although stoppages in process flows can happen frequently due to a variety of external factors. It is a fallacy that robots do not require human inputs and can literally run on their own without much human intervention. And yet there is a fear surrounding the application of robots, fuelled largely by sensational media reports and the science fiction genre. Anthropomorphic robots have also caused a great deal of concern among consumer advocate groups who take the singularity concept very seriously.
It is the job of technologists to dispel myths about robotics and to raise awareness, and in so doing robot literacy: the reachable limits of the artificial intelligence imbued in robots, and the positive benefits to be gained from future developments in the field. This special issue will focus on the prospects for robot applications in non-traditional areas and the plausible intended and unintended consequences of such a trajectory.
Engineers in sensor development, artificial consciousness, component assembly, and visual and aesthetic artistry are encouraged to respond to this call together with colleagues from across the disciplines: philosophers, sociologists and anthropologists, humanities scholars, experts in English and creative writing, journalists and communications specialists.
Multidisciplinary teams of researchers are requested to submit papers addressing pressing socio-ethical issues in order to provide inputs on how to build more robust robotics that will address citizen issues. For example:
- How can self-driving cars make more ethical decisions?
- How can co-working with robots become an acceptable practice to humans?
- How might there be more fluent interactions between humans and robots?
- Can drones have privacy-by-design incorporated into their controls?
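The last of these questions can be made concrete with a small sketch. The following is a minimal, purely illustrative example of one way privacy-by-design might be encoded in a drone's control logic: recording is disabled by default whenever the vehicle is inside a declared privacy-sensitive zone. All class and function names, and the coordinates, are hypothetical, not drawn from any real drone platform.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """Axis-aligned bounding box around a privacy-sensitive area (illustrative)."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def camera_allowed(lat: float, lon: float, privacy_zones: list) -> bool:
    """Privacy-by-design default: the camera is blocked inside any declared zone."""
    return not any(z.contains(lat, lon) for z in privacy_zones)

# Hypothetical residential block registered as a no-record zone.
zones = [Zone(51.50, 51.51, -0.13, -0.12)]

print(camera_allowed(51.505, -0.125, zones))  # inside the zone -> False
print(camera_allowed(51.520, -0.125, zones))  # outside the zone -> True
```

The design point is that the restrictive behaviour is the default in the control path itself, rather than a policy layered on afterwards.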
This issue calls for technical strategic-level and high-level design papers that have a social science orientation and are written for a general audience. The issue encourages researchers to reflect on the socio-ethical implications stemming from their developments, and on how these might be discussed with the general public.
Broad Vertical Sectors
• Driverless cars, buses, transportation
• Military robots
• Security robots
• Assistive technologies
• Robots as companions
• Robots as co-workers
• Ageing population
• Children with syndromes
• Learning technologies
• Sex trade
• Privacy-enhanced technologies, privacy by design, security, ethics
• Legal entity, trust, control, moral agency, authority, autonomy, liberty, regulatory
• Cultural, philosophical, anthropological, sociological, critical, phenomenological, normative
• Real, virtual, conscious, artificial intelligence, anthropomorphic
Paper Submission: April 30, 2017
Author Notification of Paper Acceptance: August 1, 2017
Final Revised Paper: November 1, 2017
Publication Date: March 1, 2018
How to Submit
Formatting guidelines for IEEE Technology and Society Magazine are available here. Select the Magazine menu and go to "Information for Authors".
All papers are to be submitted to https://mc.manuscriptcentral.com/tsm
During the submission process you will be asked to enter your details if you are a new author to the Magazine. You will also be asked to enter the names and email addresses of three academics who might be able to review your article. These individuals must not be close contacts.
Papers must not exceed 5,000 words, including references. A variety of paper types are acceptable, including commentaries, opinion pieces, leading-edge pieces, industry views, book reviews and peer-reviewed articles.
Guest Editor Biographies
Ramona Pringle is an Assistant Professor in the RTA School of Media at Ryerson University and Director of the Transmedia Zone, an incubator for innovation in media and storytelling. As a writer, producer and digital journalist, Ramona’s work examines the evolving relationship between humans and technology. She is a technology columnist with CBC, and the writer and director of the interactive documentary “Avatar Secrets.” Previously, she was the interactive producer of PBS Frontline’s “Digital Nation” and editor in chief of “Rdigitalife”. Ramona is a co-editor of the IEEE Potentials Magazine special edition, “Unintended Consequences: the Paradox of Technological Potential” (2016) and an associate editor of IEEE Technology and Society Magazine. Ramona’s projects have been featured at festivals and conferences including i-docs, Power to the Pixel, TFI Interactive, Sheffield Doc/Fest, Hot Docs, SXSW, NXNE, Social Media Week, TEDx, and in publications including the New York Times, Mashable, Cult of Mac and the Huffington Post. Ramona has a Master’s Degree from NYU’s Interactive Telecommunications Program.
Diana M. Bowman is an Associate Professor in the Sandra Day O’Connor College of Law and the School for the Future of Innovation and Society at Arizona State University, and a visiting scholar in the Faculty of Law at KU Leuven. Diana’s research has primarily focused on the legal and policy issues associated with emerging technologies, and public health law. Diana has a BSc, a LLB and a PhD in Law from Monash University, Australia. In August 2011 she was admitted to practice as a Barrister and Solicitor of the Supreme Court of Victoria (Australia).
Prof. Meg Leta (previously Ambrose) Jones is an Assistant Professor in the Communication, Culture & Technology program at Georgetown University where she researches rules and technological change with a focus on privacy, data protection, and automation in information and communication technologies. She is also an affiliate faculty member of the Science, Technology, and International Affairs program in Georgetown's School of Foreign Service, the Center for Privacy & Technology at Georgetown Law Center, and the Brussels Privacy Hub at Vrije Universiteit Brussel. Dr. Jones's research interests cover issues including comparative information and communication technology law, engineering and information ethics, critical information and data studies, robotics law and policy, and the legal history of technology. She engages with interdisciplinary fields like cyberlaw, science and technology studies, and communication and information policy using comparative, interpretive, legal, and historical methods. Ctrl+Z: The Right to be Forgotten, her first book, is about the social, legal, and technical issues surrounding digital oblivion. Advised by Paul Ohm, Dr. Jones earned a Ph.D. in Technology, Media & Society from the University of Colorado, Engineering and Applied Science (ATLAS). Prior to pursuing a Ph.D., she earned a J.D. from the University of Illinois College of Law in 2008, where she focused on technology and information issues. She has held fellowships and research positions with the NSF-funded eCSite project in the University of Colorado Department of Computer Science, the Silicon Flatirons Center at the University of Colorado School of Law, the Harvard Berkman Center for Internet & Society, and CableLabs. Since 2013, Dr. Jones has been teaching and researching in Washington, DC at Georgetown University.
Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She is the editor-in-chief of IEEE Technology and Society Magazine and also serves as senior editor of IEEE Consumer Electronics Magazine. Since 2008 she has been a board member of the Australian Privacy Foundation, where she has also served as Vice-Chair. Michael researches the socio-ethical implications of emerging technologies. She has also conducted research on the regulatory environment surrounding the tracking and monitoring of people using commercial global positioning systems (GPS) applications in the areas of dementia, mental illness, parolees, and minors, for which she was awarded an Australian Research Council Discovery grant.
Background to Special Issue
I am currently working on a second-round review of a proposal for the Proceedings of the IEEE. The project is taking form and has already been 4-5 months in the making.
I am working with Alan Winfield, whose lab I had the pleasure of visiting at UWE Bristol in October of this year, and alongside my dear colleague Jeremy Pitt from Imperial College, for whom I have had the great honor of writing previously. Our fourth guest editor is Vanessa Evers, who will bring a great deal of finesse to the design space known as human media interaction.
I will continue to update this space with the invited authors on this project as the details emerge, and shortly I hope we can confirm that the proposal has indeed been approved by PIEEE.
The so-called fourth industrial revolution and its economic and societal implications are no longer merely an academic concern, but have become a matter for political as well as public debate. The Fourth Industrial Revolution - characterised as the convergence of robotics, AI, autonomous systems and information technology, or cyber-physical systems - was the focus of the World Economic Forum at Davos in 2016. In the US, the White House initiated a series of public workshops on artificial intelligence (AI) and created an interagency working group; the UK parliamentary select committee on Science and Technology commenced an Inquiry on Robotics and Artificial Intelligence; and the European Parliament committee for legal affairs published a draft report with recommendations to the Commission on Civil Law Rules on Robotics.
Notably, all of these initiatives express the need for serious consideration of the ethical and societal implications. Machine ethics has been transformed from a niche concern of a few engineers, philosophers and law academics into an international debate. For these reasons a special issue focused on the ethics of intelligent autonomous systems is not only timely but necessary.
We propose a special issue that is broad in scope, spanning both robot and AI ethics and ethical robots and AI systems. The former is broadly concerned with the ethical use of autonomous systems, including standards and regulation (in a nutshell, ethical governance), while the latter is concerned with how autonomous systems can themselves be ethical, i.e. be imbued with ethical values. Both are of critical importance. Ethical governance is needed in order to develop standards that allow us to transparently and robustly assure the safety of autonomous systems and hence build public trust and confidence. Ethical autonomous systems are needed because, inevitably, near-future systems will be moral agents; consider driverless cars or medical diagnosis AIs, both of which will need to make choices with ethical consequences.
FINAL REVISED Description:
The primary focus of this special issue will be on machine ethics, that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near-future systems will be moral agents; consider driverless cars or medical diagnosis AIs, both of which will need to make choices with ethical consequences. Using the terminology of James Moor (2006), we seek papers that deal with both implicit ethical agents, that is, machines designed to avoid unethical outcomes, and explicit ethical agents, that is, machines which either explicitly encode or learn ethics and determine actions based on those ethics. Of course, ethical machines are socio-technical systems; thus, as a secondary focus, we invite papers that explore the educational, societal and regulatory implications of machine ethics, including the issue of ethical governance. Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence.
James H. Moor, "The Nature, Importance, and Difficulty of Machine Ethics," IEEE Intelligent Systems, vol. 21, no. 4, pp. 18-21, July/August 2006.
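Moor's distinction between implicit and explicit ethical agents can be illustrated with a toy sketch in the spirit of the consequence-evaluation approach of Winfield et al. (2014): each candidate action is run through an outcome model, scored against encoded ethical rules, and the least-harmful option selected. Everything here is hypothetical, including the function names and the harm scores; a real system would need a genuine predictive model, not a lookup table.

```python
def predicted_harm(action: str, world: dict) -> float:
    """Hypothetical outcome model: predicted harm to humans (0 = none, 1 = worst)."""
    return world.get(action, 1.0)

def choose_action(candidates: list, world: dict, ethical_threshold: float = 0.5) -> str:
    """Explicit ethical agent sketch: rank actions by predicted harm and pick the least harmful."""
    best = min(candidates, key=lambda a: predicted_harm(a, world))
    if predicted_harm(best, world) > ethical_threshold:
        # Implicit ethical constraint: if every option breaches the threshold,
        # do nothing rather than knowingly cause harm.
        return "halt"
    return best

# Hypothetical scenario: swerving left endangers a pedestrian; braking does not.
world = {"swerve_left": 0.9, "brake": 0.1}
print(choose_action(["swerve_left", "brake"], world))  # -> brake
```

The sketch is only a caricature of the real research questions in the call, such as where the harm scores come from, and who decides the threshold.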
Submission of papers: May 15, 2017
Reviews back to authors: Aug 15, 2017
Submission of revised papers: Sep 15, 2017
Final decision for all papers: Oct 15, 2017
Complete issue to Editorial Office: Nov 15, 2017
Publication: Dec 2017 - Jan 2018
Invited Papers from the Following Authors who have accepted to participate:
David J. Gunkel, Northern Illinois University, USA
Peter Asaro, The New School, USA
Stuart Russell, UC-Berkeley, USA
Wendell Wallach, Yale University, USA
* Note: Invitations do not imply acceptance of final papers. Invited full papers will undergo a rigorous peer review process. Papers not accepted for the PIEEE special issue may be considered for other publication outlets (e.g. IEEE Robotics and Automation Magazine or IEEE Technology and Society Magazine).
Guest Editor Details
Bristol Robotics Laboratory
Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, and Visiting Professor at the University of York. He received his PhD in Digital Communications from the University of Hull in 1984, then co-founded and led APD Communications Ltd until taking up an appointment at UWE, Bristol in 1991. Winfield co-founded the Bristol Robotics Laboratory and his research is focussed on understanding the nature and limits of robot intelligence. He is a member of the editorial boards of Swarm Intelligence and the Journal of Experimental and Theoretical Artificial Intelligence, and an associate editor of Evolutionary Robotics.
Winfield is passionate about communicating research and ideas in science, engineering and technology; he led UK wide public engagement project Walking with Robots, awarded the 2010 Royal Academy of Engineering Rooke medal for public promotion of engineering. Winfield is an advocate for robot ethics; he was co-organiser and member of the 2010 EPSRC/AHRC working group that drafted the Principles of Robotics; he was a member of the team that drafted British Standard BS 8611: Guide to the ethical design and application of robots and robotic systems (2016), and he currently co-chairs the General Principles committee for the IEEE global initiative on Ethical Considerations in Autonomous Systems. He serves on the Ethics Advisory Board for the EU flagship Human Brain Project, and is a member of the WEF Global Future Council on the Future of Technology, Values and Policy.
Winfield has published over 200 works, including Robotics: A Very Short Introduction (Oxford University Press, 2012), and lectures widely on robotics (including robot ethics) presenting to both academic and public audiences.
Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorell, T., Wallis, M., Whitby, B. and Winfield, A. F. (2011) Principles of Robotics. The United Kingdom’s Engineering and Physical Sciences Research Council (EPSRC), Website.
Winfield, A. F. (2012) Robotics: A very short introduction. Oxford University Press.
Grand, A., Wilkinson, C., Bultitude, K. and Winfield, A. F. (2012) Open Science: A new 'trust technology'? Science Communication, 34 (5). pp. 679-689.
Woodman, R., Winfield, A. F., Harper, C. and Fraser, M. (2012) Building safer robots: Safety driven control. International Journal of Robotics Research, 31 (13). pp. 1603-1626.
Winfield, A. F., Blum, C. and Liu, W. (2014) Towards an ethical robot: Internal models, consequences and ethical action selection. In: Mistry, M., Leonardis, Aleš, Witkowski, M. and Melhuish, C., eds. Advances in Autonomous Robotics Systems, Springer, pp. 85-96.
Dennis, L. A., Fisher, M. and Winfield, A. F. (2015) Towards verifiably ethical robot behaviour. In: Proceedings of the twenty-ninth AAAI conference on artificial intelligence, Texas, USA, January 2015.
Grand, A., Wilkinson, C., Bultitude, K. and Winfield, A. F. (2016) Mapping the hinterland: Data issues in open science. Public Understanding of Science, 25 (1). pp. 88-103.
Winfield, A. F. (2016) Written evidence submitted to the UK Parliamentary Select Committee on Science and Technology Inquiry on Robotics and Artificial Intelligence. Discussion Paper. Science and Technology Committee (Commons), Website.
University of Wollongong
Katina Michael is a professor researching the socio-ethical implications of emerging technologies at the University of Wollongong, Australia; she is also a Visiting Professor at Nanjing University and a Visiting Academic at the University of Southampton. She began her career as a telecommunications engineer with Nortel Networks in 1996, and received her PhD in Information and Communication Technology from the University of Wollongong in 2003. Michael became editor-in-chief of IEEE Technology & Society Magazine in 2012, and senior editor of IEEE Consumer Electronics Magazine in 2015. She also has a masters degree from the law faculty at the University of Wollongong, where she studied national security. Her research focuses on the interplay between technological innovation, societal concerns, ethics, law and regulation, and the impact of intended and unintended consequences.
Michael is the Routledge series editor of Emerging Technologies, Ethics and International Affairs, of which several volumes relate to social robots, healthcare robots and drones. She has co-edited a dozen special issues for a variety of journals, including PIEEE, IEEE Potentials, IEEE Computer, IEEE Technology & Society Magazine, and Computer Communications, on topics related to converging technologies: RFID, location-based services, big data and embedded systems.
Michael has held an annual workshop, now in its tenth year, on the Social Implications of National Security, which was originally funded by the Australian Research Council. She has presented to the Department of the Prime Minister & Cabinet, the Defence Science and Technology Organisation, and Booz Allen Hamilton on emerging technologies in law enforcement and defence. She was also awarded a large ARC Discovery Project grant in 2008 in telecommunications policy, on the topic of location-based services regulation in Australia, and was director of the IP Location Services Programme, jointly sponsored by Andrews Corporation. Michael has published widely across disciplines including technology, security, law, policy, media, and culture.
L Perusco, K Michael, 2007. Control, trust, privacy, and security: evaluating location-based services, IEEE Technology and Society Magazine, 26 (1), 4-16.
Michael, K. & Michael, M. G. (2007). “Homo Electricus and the Continued Speciation of Humans”. In Marian Quigley (ed.), The Encyclopedia of Information Ethics and Security (pp. 312-318). United States of America: IGI Global.
MG Michael, SJ Fusco, K Michael, 2008. A research note on ethics in the emerging age of überveillance, Computer Communications, 31 (6), 1192-1199.
KD Stephan, K Michael, MG Michael, L Jacob, EP Anesta, 2012. Social implications of technology: The past, the present, and the future, Proceedings of the IEEE 100 (Special Centennial Issue), 1752-1781.
Katina Michael, M.G. Michael, 2012. "Implementing Namebers Using Microchip Implants: The Black Box Beneath The Skin", in Jeremy Pitt (ed) This Pervasive Day: The Potential and Perils of Pervasive Computing.
K Michael, KW Miller, 2013. Big data: New opportunities and new challenges, Computer, 46 (6), 22-24.
K Michael, MG Michael, 2013. No limits to watching? Communications of the ACM, 56 (11), 26-28.
K Michael, MG Michael, 2013. The future prospects of embedded microchips in humans as unique identifiers: the risks versus the rewards, Media, Culture & Society, 35 (1), 78-86.
MG Michael, K Michael, 2016. National security: the social implications of the politics of transparency, Prometheus, 24 (4), 359-363.
Imperial College London
Jeremy Pitt is Professor of Intelligent and Self-Organising Systems in the Department of Electrical & Electronic Engineering (EEE) at Imperial College London. He was awarded a PhD in Computing from the Department of Computing, Imperial College London, in 1991, and was appointed to a Lectureship in the Department of Electrical & Electronic Engineering in 1996, followed by promotion to Senior Lecturer (2000), Reader in Intelligent Systems (2004) and Professor (September 2015).
Pitt’s research programme focuses on developing formal models of social processes using computational logic, and their application to self-organising and multi-agent systems, for example in agent societies, agent communication languages, and electronic institutions. This work has produced a number of innovative software systems, most recently the multi-agent simulation platform PreSage-2 and the serious game Social mPower.
Since joining EEE, Pitt has taught courses on Software Engineering, Human-Computer Interaction and Artificial Intelligence, has graduated 17 PhD students, and has acted as Deputy Head of the Intelligent Systems & Networks research group since 2005. He also has a strong interest in the social impact of technology, and has edited two recent books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). Articles on his work have appeared in Wired magazine, the Financial Times, and Süddeutsche Zeitung.
He has published more than 200 articles in journals, international conferences, workshops and book chapters, winning a number of Best Paper prizes. Pitt is a Senior Member of the ACM, a Fellow of the BCS (British Computer Society), and a Fellow of the IET (Institute of Engineering and Technology); he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems (TAAS) and an Associate Editor of IEEE Technology and Society Magazine.
Jeremy Pitt, Aikaterini Bourazeri, Andrzej Nowak, Magdalena Roszczynska-Kurasinska, Agnieszka Rychwalska, Inmaculada Rodríguez Santiago, Maite López-Sánchez, Monica Florea, Mihai Sanduleac: Transforming Big Data into Collective Awareness. IEEE Computer 46(6): 40-45 (2013)
Jeremy Pitt, Dídac Busquets, Sam Macbeth: Distributive Justice for Self-Organised Common-Pool Resource Management. TAAS 9(3): 14:1-14:39 (2014)
Jeremy Pitt, Andrzej Nowak: The Reinvention of Social Capital for Socio-Technical Systems. IEEE Technol. Soc. Mag. 33(1): 27-33 (2014)
Jeremy Pitt, Alexander Artikis: The open agent society: retrospective and prospective views. Artif. Intell. Law 23(3): 241-270 (2015)
Jeremy Pitt, Dídac Busquets, Régis Riveret: The pursuit of computational justice in open systems. AI Soc. 30(3): 359-378 (2015)
Sam Macbeth, Jeremy Pitt: Self-organising management of user-generated data and knowledge. Knowledge Eng. Review 30(3): 237-264 (2015)
University of Twente
Vanessa Evers is a full professor of Computer Science in the University of Twente's Human Media Interaction group and Director of the DesignLab for multidisciplinary research. She received an M.Sc. in Information Systems from the University of Amsterdam and a Ph.D. from the Open University, UK. During her Masters studies she spent two years at the Institute of Management Information Studies of the University of New South Wales, Sydney. After her Ph.D. she worked for the Boston Consulting Group, London, and later became an assistant professor at the University of Amsterdam's Institute of Informatics. She was a visiting researcher at Stanford University (2005-2007). Her research interests focus on interaction with intelligent and autonomous systems such as robots and machine learning systems, as well as cultural aspects of human-computer interaction. She has published over 80 peer-reviewed publications, many in high-quality journals and conferences in human-computer interaction and human-robot interaction. She serves on the program committees of ACM/IEEE HRI, ACM SIGCHI, HSI, ACM CSCW and ACM Multimedia.
Evers is frequently interviewed about her work on national public TV and in newspapers and magazines. She won the best thesis prize awarded by the Dutch National Society of Registered Information Specialists, and was co-recipient of the James Chen Best Paper Award of the journal User Modeling and User-Adapted Interaction, together with her then Ph.D. student Henriette Cramer. She holds the 2014 Opzij talent award. Vanessa is an editor of the International Journal of Social Robotics, co-chair of the ACM International Human Robot Interaction Steering Committee, and an Associate Editor of the Human Robot Interaction Journal.
Evers Representative Publications
Heerink, M., B.J.A. Kröse, B.J. Wielinga, and V. Evers (2010). Measuring acceptance of assistive social agent technology by older adults: the Almere model. International Journal of Social Robotics, 2(4), 361-375.
Wang, L., Rau, P-L., Evers, V., Robinson, B., Hinds, P. (2010). When in Rome: The role of culture and context in the adherence to robot recommendations. Proceedings of the 5th ACM/IEEE international conference on Human Robot Interaction, HRI’10, Osaka.
Lazar, J., Abascal, J., Davis, J., Evers, V., Gulliksen, J., Jorge, J., McEwan, T., Paterno, F., Persson , H., Prates, R., Von Axelson, H., Winckler, M., Wulf, V. (2012). Public Policy Activities in 2012 Related to Human-Computer Interaction: A 10-Country Discussion. Interactions 19(3).
Joosse, M.P., Sardar, A., Lohse, M. & Evers, V. (2013). BEHAVE-II: The Revised Set of Measures to Assess Users’ Attitudinal and Behavioral Responses to a Social Robot. International Journal of Social Robotics, 5(3), 379-388.
Gallego-Perez, M. Lohse, and V. Evers, “Improving psychological wellbeing with robots” in 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2015.
F.C.A. Groen, G. Pavlin, A. Winterboer, V. Evers (2016) A hybrid approach to decision making and information fusion: Combining humans and artificial agents, Robotics and Autonomous Systems, Available online 30 September 2016, ISSN 0921-8890, http://dx.doi.org/10.1016/j.robot.2016.08.009.
Theme: The Socio-Ethical Implications of Implantable Technologies in the Military Sector
Venue: Richard Newton Conference Room, Level 5, Building 193, The University of Melbourne
Date: 12 July 2016
Workshop co-located with IEEE Norbert Wiener Conference
Select papers to be published in a special section of IEEE Technology and Society Magazine in 2017
Call for Abstracts
The military sector has been investing in nanotechnology solutions since their inception. Internal assessment committees in defense programmatically determine how much complex technology will be systematically diffused into the armed forces. The broad term nanotechnology encompasses a variety of innovations, from special paint markers that can establish unique identity to RFID implants in humans. With the purported demand for these new materials, we have seen the development of fabrication processes that have catapulted a suite of advanced technologies into the military marketplace. These technologies were once the stuff of science fiction: everything from exoskeletons, to wearable headsets with accelerated night vision, to armaments of increased durability in rugged conditions that can be commanded centrally and without human intervention. But what of the emergence of the so-named supersoldier, a type of Iron Man?
This workshop will focus on human-centric implantable technologies in the military sector. The key questions it will seek to discuss with respect to implants include: (1) What are the social implications of newly proposed security technologies? (2) What are the rights of soldiers who are contracted to the defense forces? (3) Does local military law override the rights provided under the rule of law in a given jurisdiction, and thus, what are the possible legal implications? (4) How pervasive are these technologies in society at large? (5) And what might be some of the side effects experienced by personnel using these devices, which have not yet been tested under conditions of war and conflict? More broadly, the workshop seeks to understand the socio-ethical implications (community), social contract (individual), and stakeholder (industry/government) perspectives.
This one-day workshop invites multidisciplinary views from experts in the nanotechnology space.
Workshop Series Background
The Social Implications of National Security workshop series began in 2006, funded by the Australian Research Council's Research Network for a Secure Australia (RNSA). The RNSA funded the workshop until 2012 and spearheaded the "Human Factor Series" for the lifetime of the research network. Its proceedings have been deposited with a variety of key stakeholders, including the National Library of Australia, the Department of the Prime Minister & Cabinet, the Commissioner for Law Enforcement Data Security in Victoria, and the NSW Police Academy library. The workshops have been hosted in Wollongong, Canberra, Sydney, Melbourne and Toronto. Each workshop has drawn representatives from government, industry, defense, emergency services organisations, academia, and society at large.
- 9.00 AM Registration
- 10.30 AM Morning Tea
- 12.30 PM Lunch
- 4.00 PM Afternoon Tea
- 6.00 PM Dinner (walk to venue)
Invitations for Participation
Direct invitations for participation (over the Internet, or face-to-face in Melbourne) will shortly be sent out to the following researchers and practitioners:
Alan Rubel, University of Wisconsin-Madison
Alexander Hayes, University of Wollongong
Amal Graafstra, DangerousThings.com
Andrew Goldsmith, Flinders University
Ann Light, University of Sussex
Avner Levin, Ryerson University
Charlotte Epstein, University of Sydney
Christine Perakslis, Johnson and Wales University (*contacted: checking schedule)
Daniel Ratner, Engineer and technology entrepreneur (*contacted: awaiting reply)
Darren Palmer, Deakin University (*contacted: awaiting reply)
David Forbes, University of Melbourne (*contacted: awaiting reply)
David Vaile, UNSW
Diana Bowman, University of Michigan (* speaking)
Donna Dulo, Sofia University (*contacted: deliberating)
Eleni Kosta, Tilburg University
Ellen McGee, Ethics consultant (private practice) (*contacted: declined)
Emmeline Taylor, Australian National University
Eugene Kaspersky, Kaspersky Labs
Fritz Allhoff, Western Michigan University
Gary Retherford, Six Sigma Security
Gary T. Marx, MIT
Geoffrey Spinks, University of Wollongong
George Conti (*contacted: unavailable the week of 12th July)
Gordon Wallace, University of Wollongong
Herman Tavani, Rivier College
Ian Warren, Deakin University
Isabel Pederson, University of Ontario Institute of Technology
Jackie Craig, Defence Science Technology Group
Jairus Grove, University of Hawaii at Manoa
Jennifer Seberry, University of Wollongong
Jeremy Pitt, Imperial College London
Jill Slay, UNSW Canberra
Kayla Heffernan, University of Melbourne (*speaking)
Katherine Albrecht, CASPIAN
Kaylene Manwaring, UNSW
Kevin Warwick, Coventry University
Keith Miller, University of Missouri - St. Louis
Kobi Leins (*contacted: in flight transit, submitting abstract)
Lisa Shay, West Point Military College (*contacted: declined as in special training)
Liz McIntyre, CASPIAN
Lyria Bennett Moses, UNSW
Lucy Resnyansky, DSTO
Lindsay Robertson, UOW (*speaking)
Luis Kun, National Defense University (*contacted: awaiting reply)
Marcus Wigan, Swinburne University (*speaking)
Mark Andrejevic, University of Queensland
Mark Burden, University of Queensland
Mark Gasson, University of Reading
Mark Ratner, Northwestern University
Max Michaud-Shields, Deputy Commanding Officer of the 1st Battalion, Royal 22nd Regiment (*contacted: awaiting reply)
Mianna Lotz, Macquarie University
Michael Eldred, Arte-Fact.org
Mireille Hildebrandt, Vrije Universiteit Brussel
Nick O'Brien, Charles Sturt University
Parag Khanna, New America Foundation
Patrick Lin, California Polytechnic State University (*contacted: declined, on holiday)
Peter W. Singer, New America Foundation
Rain Liivoja, University of Melbourne (*speaking)
Ramona Pringle, Ryerson University
R.E. Burnett, National Defense University (*keynote)
Rafael Capurro, University of Tsukuba, Japan
Rebecca Hester, Virginia Tech University (*contacted: deliberating)
Roba Abbas, University of Wollongong
Roger Bradbury, Australian National University
Roger Clarke, Australian National University
Rob Sparrow, Monash University (*contacted: declined based on workload)
Sharon Bradley-Munn, University of Wollongong (* speaking)
Simon Bronitt, University of Queensland
Susan Dodds, University of Tasmania
Tamara Bonaci, University of Washington
Thomas James Oxley, University of Melbourne (*contacted: cannot make it due to training)
Tim McCormack, Harvard University (*contacted: declined as in flight transit)
William A. Herbert, Hunter College CUNY (*contacted: declined due to other projects)