Social Media and AI

The Twelfth Workshop on the Social Implications of National Security (SINS19)

Human Factors Series

Register here

Credit: hocus-focus

Venue: ASU Barrett & O’Connor Washington Center, Level 2, 1800 I Street NW, Suite 300, Washington, DC 20006

Supported by: Centre for Science, Policy and Outcomes and the School for the Future of Innovation in Society, Arizona State University

Date: 1 May 2019

Phone: +1-202-446-0386

Convener: Katina Michael, Arizona State University

Registration: https://www.eventbrite.com/e/social-media-ai-workshop-social-implications-of-natl-security-sins19-tickets-60479961192

Overview

Reaching billions of people with specially tailored messages is now possible through microtargeting on social media platforms. Microtargeted campaigns can be designed to capitalize on an individual’s demographic, psychographic, geographic, and behavioral data to predict his or her buying behavior, interests, and opinions. By drawing inferences from an amalgamation of evidence and implementing a strategy consistently, microtargeting does not merely reflect or predict an individual’s beliefs. It can also alter them.
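To make the mechanics concrete, the sketch below trains a toy model on invented demographic and behavioral features to score how receptive a user might be to a tailored message. The feature names, data, and labels are entirely hypothetical; this is a minimal illustration of audience scoring, not any platform's actual system.

```python
# Toy illustration of audience scoring for microtargeting (hypothetical data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per user: [age, daily_minutes_on_platform, political_page_likes]
X = np.array([
    [23, 120, 4],
    [57, 35, 12],
    [34, 80, 1],
    [45, 200, 20],
    [19, 150, 0],
    [62, 10, 7],
])
# 1 = engaged with a previous targeted message, 0 = did not (made-up labels).
y = np.array([1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# Score a new user: estimated probability they respond to the tailored message.
new_user = np.array([[29, 95, 6]])
print(model.predict_proba(new_user)[0, 1])
```

In practice the same scored list would be used to decide who sees which message, which is what allows a campaign to both predict and gradually shift opinions.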

According to some studies, up to 15% of the US population can be swayed through this kind of strategy, much of it executed through well-known platforms like Facebook, Google, Twitter, Instagram, and Snapchat. The advent of artificial intelligence (AI) has meant that microtargeting can now be unleashed with the added stealth of machine learning and autonomous systems activity. This ability to penetrate social media networks could be considered a cybersecurity threat at the organizational or national level. As the technology moves beyond algorithmic scripts that cycle through the distribution of fake or real news, social media users will find themselves coming face to face with personalized, bot-generated messaging. How that might sway both consumer and citizen confidence in these systems is yet to be determined.

This workshop will first highlight cases in which social media and AI have been used to manipulate people, describing influence campaigns that rely on broadcast or microtargeting strategies. Workshop participants will then consider how governments and organizations are responding to the misuse of online platforms, and evaluate various ways in which AI-based social media might be regulated internationally. The responsibility of social media platform providers will also be brought into the discussion, since algorithms can detect bot-generated and dispersed information. Finally, strategies for preventing and countering disinformation campaigns will be considered in cases and contexts where such messaging becomes a destabilizing force in communications. Emerging areas of research, such as neuromorphic computing, will be discussed in the context of cyberwarfare and espionage.

Program Schedule

May 1, 2019

8.40 a.m. Registration and Coffee

9.00 a.m. Braden Allenby, Arizona State University, Weaponized Narrative and the Fall of Democracy

10.00 a.m. Networking Break

10.15 a.m. Ethan Burger, Institute for World Politics, Russian Cyber [Hybrid] Operations: A Look at the 2016 Brexit Referendum and the U.S. Presidential Election

11.00 a.m. Proceedings of the IEEE Live Webinar Series. Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems

12.00 p.m. Lunch

1.00 p.m. Katina Michael, School for the Future of Innovation in Society, Arizona State University, Bots without borders: how anonymous accounts hijack political debate

1.30 p.m. Eusebio Scornavacca, University of Baltimore, Artificial Intelligence in Social Media

2.00 p.m. Gary Marchant, Sandra Day O'Connor College of Law, Arizona State University

2.30 p.m. Gary Retherford, Implant and Blockchain Proponent, How to Ensure Democracy through Technology [Draft Title]

3.00 p.m. Hot Topics (Group Discussion - led by Patrick Scannell)

4.00 p.m. Areas of Future Research (All)

4.30 p.m. Close.

Keynote - Braden Allenby

Title: Weaponized Narrative and the Fall of Democracy

Abstract: A number of long-term trends, many reflecting the emergence of new information and artificial intelligence technologies, are undermining the social and political stability, and fundamental institutions, of Western democracies such as the United States. While some of the shorter-term emerging challenges are being recognized and addressed, the longer-term implications for democratic forms of government are neither recognized nor well understood. This talk will identify a number of emerging potential challenges to democratic norms and institutions, and suggest several potential responses.


Biography: Brad Allenby is President’s Professor of Civil, Environmental, and Sustainable Engineering, and of Law; Lincoln Professor of Technology and Ethics; Senior Sustainability Scientist; and co-chair of the Weaponized Narrative Initiative of the Center for the Future of War, at Arizona State University.  He moved to ASU from his previous position as the Environment, Health and Safety Vice President for AT&T in 2004.  Dr. Allenby received his BA from Yale University, his JD and MA (economics) from the University of Virginia, and his MS and Ph.D. in Environmental Sciences from Rutgers University.  He is past President of the International Society for Industrial Ecology and ex-Chair of the AAAS Committee on Science, Engineering, and Public Policy.  He is an AAAS Fellow and a Fellow of the Royal Society for the Arts, Manufactures & Commerce, and has been a U. S. Naval Academy Stockdale Fellow (2009-2010), an AT&T Industrial Ecology Fellow (2007-2009), and a Templeton Research Fellow (2008-2009). He served as Director for Energy and Environmental Systems at Lawrence Livermore National Laboratory (1995-1997), and the J. Herbert Holloman Fellow at the National Academy of Engineering (1991-1992).  His areas of expertise include emerging technologies, especially in the military and security domains; Design for Environment; industrial ecology; sustainable engineering; and earth systems engineering and management.  In 2008 he was the Carnegie Foundation Arizona Professor of the Year.  His latest books are Industrial Ecology and Sustainable Engineering (co-authored with Tom Graedel in 2009), The Techno-Human Condition (co-authored with Dan Sarewitz in 2011), The Theory and Practice of Sustainable Engineering (2012), The Applied Ethics of Emerging Military and Security Technologies (an edited volume released by Ashgate Press in 2015), Future Conflict and Emerging Technologies (2016), Weaponized Narrative: The New Battlespace (co-edited with Joel Garreau, released in 2017), and Moral Injury: Towards an International Perspective (co-edited with Tom Frame and Andrea Ellner, 2017).

Ethan Burger

Abstract:


Biography:

Ethan S. Burger, Esq., is a Washington-based international legal consultant and educator. His areas of interest include corporate governance, transnational crime and Russian affairs. He has been a full-time faculty member at American University (Transnational Crime and Corruption Center) and the University of Wollongong (Centre for Transnational Crime Prevention). He has taught on an adjunct basis at Georgetown University Law Center, University of Baltimore, and Washington College of Law. He holds an A.B. from Harvard University and a J.D. from the Georgetown University Law Center.

Panel: Machine Ethics

Abstract: Machine ethics is the study of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because near-future systems will inevitably be moral agents; consider driverless cars or medical-diagnosis AIs, both of which will need to make choices with ethical consequences. Although machine ethics has been the subject of theoretical and philosophical study for about twenty years, it is only in the past five years that proof-of-concept ethical machines have been demonstrated in the laboratory. This webinar aims to open a discussion of the interesting and difficult technical challenges of engineering ethical machines, alongside wider societal and regulatory questions of how such machines should be governed, if and when they find real-world application.
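As a rough illustration of what a laboratory proof-of-concept can look like, the sketch below implements a minimal rule-based "ethical governor" that vetoes candidate actions whose predicted harm exceeds a threshold. The action names, harm estimates, and threshold are invented for illustration; none of the panelists' actual systems are claimed to work this way.

```python
# Minimal sketch of a rule-based "ethical governor" that vetoes candidate
# actions before an autonomous system executes them. The action names, the
# harm estimates, and the threshold are invented for illustration only.

HARM_THRESHOLD = 0.1  # hypothetical maximum acceptable predicted harm


def predicted_harm(action: str) -> float:
    """Stand-in for a consequence model that estimates harm per action."""
    estimates = {"brake": 0.05, "swerve_left": 0.3, "continue": 0.6}
    return estimates.get(action, 1.0)


def choose_action(candidates: list[str]) -> str:
    """Pick the first candidate whose predicted harm is acceptable."""
    permitted = [a for a in candidates if predicted_harm(a) <= HARM_THRESHOLD]
    if not permitted:
        # Fall back to the least-harmful option if every action is vetoed.
        return min(candidates, key=predicted_harm)
    return permitted[0]


print(choose_action(["continue", "swerve_left", "brake"]))  # -> "brake"
```

The interesting engineering and governance questions begin where this toy ends: who sets the harm model and the threshold, and who is accountable when the fallback rule is exercised.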

Panelists:
- Alan Winfield (University of the West of England)
- Katina Michael (Arizona State University)
- Sarah Spiekermann (Vienna University of Economics and Business)
- Louise Dennis (University of Liverpool)
- Jean-François Bonnefon (Université Toulouse-1 Capitole)

Katina Michael - Bots Without Borders

Title: Bots without borders: how anonymous accounts hijack political debate

Abstract: A bot (short for robot) performs highly repetitive tasks by automatically gathering or posting information based on a set of algorithms. Bots can create new content and interact with other users as any human would, but the power always lies with the individuals or organisations unleashing them. Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the US presidential election. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories. Bots have not just been used in the US; they have also been used in Australia, the UK, Germany, Syria and China. Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to show political strength, or deliberate social media messaging to perform sweeping surveillance, bots are polluting political discourse on a grand scale.
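To show how simple this kind of automation can be, here is a minimal sketch of the repetitive posting loop described above. The messages, hashtag, and post_to_platform() stub are hypothetical stand-ins; a real bot would call a platform API, which is precisely the behavior platform providers now try to detect.

```python
# Toy sketch of a scripted posting loop (all content and the posting
# function are hypothetical stand-ins, not a real platform API).
import itertools
import time

MESSAGES = [
    "Candidate X just keeps winning! #Election",
    "Why is nobody talking about this? #Election",
]


def post_to_platform(text: str) -> None:
    """Stand-in for an API call that publishes a post."""
    print(f"posted: {text}")


def run_bot(interval_seconds: float = 60.0, max_posts: int = 5) -> None:
    """Cycle through scripted messages on a fixed schedule."""
    for count, text in enumerate(itertools.cycle(MESSAGES), start=1):
        post_to_platform(text)
        if count >= max_posts:
            break
        time.sleep(interval_seconds)


run_bot(interval_seconds=1.0, max_posts=3)
```

Scaled across thousands of anonymous accounts, the same trivial loop is what allows a handful of operators to dominate a hashtag or inflate apparent support.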


Biography: Professor Katina Michael is the Director of the Center for Engineering, Policy and Society in the School for the Future of Innovation in Society at Arizona State University. She has a background in telecommunications engineering and has completed an information technology and national security law degree. She is the founding Editor-in-Chief of the IEEE Transactions on Technology and Society. In 2017, Katina was awarded the Brian M. O'Connell Distinguished Service Award by the IEEE Society on Social Implications of Technology.

Eusebio Scornavacca

Title:

Abstract:


Biography: Eusebio Scornavacca is Parsons Professor of Digital Innovation and Director of the UB Center for Digital Communication, Commerce and Culture at the University of Baltimore. He also holds the J. & M. Thompson Chair in Management Information Systems at the Merrick School of Business. Prior to joining UB, Professor Scornavacca was a faculty member and director of research at the School of Information Management, Victoria University of Wellington in New Zealand. He has given presentations across five continents and has held visiting positions in Japan, Italy, France, Finland, Egypt and Brazil. His research interests include mobile and ubiquitous information systems, digital transformation, disruptive ICT innovation and digital entrepreneurship. During the past 20 years he has conducted qualitative and quantitative research in a wide range of industries, including research sponsored by the private sector. Professor Scornavacca's research has appeared in journals such as the Journal of Information Technology, Information & Management, Communications of the ACM, Decision Support Systems, Communications of the AIS, Computers in Human Behavior and the Journal of Computer Information Systems. He has served as track chair at conferences such as ICIS, ECIS, PACIS, AMCIS, ACIS, HICSS and Conf-IRM.

Gary Marchant

Title:

Abstract:


Biography: Gary Marchant is a Regent's Professor of Law and director of the Center for Law, Science and Innovation. His research interests include legal aspects of genomics and personalized medicine, the use of genetic information in environmental regulation, risk and the precautionary principle, and governance of emerging technologies such as nanotechnology, neuroscience, biotechnology and artificial intelligence.

He teaches courses in Law, Science and Technology, Genetics and the Law, Biotechnology: Science, Law and Policy, Health Technologies and Innovation, Privacy, Big Data and Emerging Technologies, and Artificial Intelligence: Law and Ethics. During law school, he was Editor-in-Chief of the Harvard Journal of Law & Technology and editor of the Harvard Environmental Law Review and was awarded the Fay Diploma (awarded to top graduating student at Harvard Law School). Professor Marchant frequently lectures about the intersection of law and science at national and international conferences. He has authored more than 150 articles and book chapters on various issues relating to emerging technologies. Among other activities, he has served on five National Academy of Sciences committees, has been the principal investigator on several major grants, and has organized numerous academic conferences on law and science issues.

Gary Retherford

Title:

Abstract:


Biography: Gary is a visionary, disrupter, historian, futurist, cultural change facilitator, sales professional, author, and speaker. On February 6, 2006, under the heading Six Sigma Security, Gary facilitated the first-ever use of implantable microchips in employees at a place of business anywhere in the world, in Cincinnati, OH. Today, Six Sigma Security is a developing community focused on achieving a perfect secured identity for every citizen of the world by utilizing the Six Sigma methodology. Gary has spoken and presented to various organizations in both the private and government sectors, and has contributed to and been interviewed about the future of human microchipping. Gary also authored the article “Blended Training for Six Sigma,” Security Management Magazine, July 2014.

Hot Topics (Group Discussion and Brainstorming)

• Is Facebook to blame?

• How do we overcome Conflict 2.0?

• Future Tactics using Advanced AI

  • Anti-Surveillance, Counter-Surveillance and Anti-Attacks and Mimicking-Attacks

  • Neuromorphic Computing, AI and Cybersecurity

• Risks related to Uberveillance


Biography: Patrick Scannell has had a 25-year career developing and commercializing innovative technologies. He has led major transformative projects in a variety of tech categories, from the early days of the Internet, to mobile phones, personal computers, the Internet of Things, the cable industry, smart cars, and smart grid, as well as next-generation platforms like augmented reality and work he cannot even talk about. He is comfortable and works regularly in a variety of ecosystems, from VC-backed tech startups up through and including Fortune 100 C-suites and national defense/government environments.

Over the last 5 years, Pat has spent the majority of his time looking at the cumulative effects of technology on the human condition, and on human cognition specifically. He has drafted three books, targeted at publication through Oxford University Press, that examine the co-evolution of cognition and 'technology' over the arc of human evolution. The first, The History of Thought, Book 1, co-authored with Chet Sherwood, Chair of Anthropology at George Washington University, looks at the evolution of cognition in pre-human species, from 63 million years ago to ~8 million years ago. The second, The History of Thought, Book 2, co-authored with Tim Taylor, University of Vienna, and editor-in-chief of the Journal of Prehistory, examines the co-evolution of technology and cognition among human species, from 8 million years ago to the present. The third book in the series, The Future of Thought, examines the dynamics of the present and near-future state of human cognition as a result of the changing technological environment in which humans live.