Social Media and AI
The Twelfth Workshop on the Social Implications of National Security (SINS19)
Human Factors Series
Venue: ASU Barrett & O’Connor Washington Center, Level 2, 1800 I Street NW, Suite 300, Washington, DC 20006
Date: 1 May 2019
Convener: Katina Michael, Arizona State University
Reaching billions of people with specially tailored messages is now possible through microtargeting on social media platforms. Microtargeted campaigns can be designed to capitalize on an individual's demographic, psychographic, geographic, and behavioral data to predict his or her buying behavior, interests, and opinions. By drawing inferences from an amalgamation of evidence and applying a strategy consistently, microtargeting does not merely reflect or predict an individual's beliefs. It can also alter them.
According to some studies, up to 15% of the US population can be swayed through this kind of strategy, much of it carried out on well-known platforms such as Facebook, Google, Twitter, Instagram, and Snapchat. The advent of artificial intelligence (AI) has meant that microtargeting can now be unleashed with the added stealth of machine learning and autonomous systems activity. This ability to penetrate social media networks could be considered a cybersecurity threat at the organizational or national level. As the technology moves beyond algorithmic scripts that cycle through the distribution of fake or real news, social media users will find themselves coming face to face with personalized, bot-generated messaging. How that might sway both consumer and citizen confidence in these systems is yet to be determined.
This workshop will first highlight cases in which social media and AI have been used in attempts to manipulate people, describing various influence campaigns conducted through broadcast or microtargeting strategies. Workshop participants will then consider how governments and organizations are responding to the misuse of online platforms, and evaluate various ways in which AI-based social media might be regulated internationally. The responsibility of social media platform providers will also be brought into the discussion, as algorithms can detect bot-generated and bot-dispersed information. Finally, strategies for preventing and countering disinformation campaigns will be considered in cases and contexts where such messaging becomes a destabilizing force in communications. Emerging areas of research, such as neuromorphic computing, will be discussed in the context of cyberwarfare and espionage.
Program: 1 May 2019
8.40 a.m. Registration and Coffee
9.00 a.m. *Braden Allenby, Arizona State University, Weaponized Narrative and the Fall of Democracy
10.00 a.m. Networking Break
10.15 a.m. Ethan Burger, Institute for World Politics, Contextualizing Russian Interference in 2016 UK Brexit Referendum and the U.S. Presidential Election
11.00 a.m. *Proceedings of the IEEE Live Webinar Series. Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems
12.00 p.m. Lunch
1.00 p.m. *Alex Trauth-Goik, University of Wollongong, Reconceptualising China’s ‘Social Credit System’: From ‘Social Credit’ to ‘Social Trust’?
1.30 p.m. Eusebio Scornavacca, University of Baltimore, Artificial Intelligence in Social Media
2.00 p.m. Gary Marchant, Arizona State University, Legal Aspects of Non-Medical Personal Sensors
2.30 p.m. *Gary Retherford, Six Sigma Security, History of Human Microchipping
3.00 p.m. Hot Topics (Group Discussion - led by Patrick Scannell)
4.00 p.m. Areas of Future Research (Group Discussion - led by Katina Michael)
4.30 p.m. Close.
* An asterisk indicates that the talk will be delivered remotely via Zoom or WebEx.