Big Data, Analytics, and Technology’s Impact on Society

http://sites.ieee.org/futuredirections/tech-policy-ethics/march-2017/big-data-analytics-and-technologys-impact-on-society/

by Lyria Bennett Moses and Greg Adamson

March 2017

The social implications of technology have been with us for as long as humans have created technology, which is to say as long as we’ve been human.

In Paleolithic times, stone tools could be used to kill game or fellow humans. In Greek mythology, Icarus’ hubris was enabled by technology. In our time, headline revelations about National Security Agency spying, Anonymous’ hacking, and security breaches at Sony, at Target – you name it – no longer shock us.

And now the Internet of Things (IoT) is appearing on the horizon with the “promise” of ubiquitous sensors and big data analytics to improve our lives. Yet along with it may come opaque algorithms and a growing sense that, perhaps, George Orwell will be proven prescient [1].

It’s been said that technology is neither good nor bad, but neither is it neutral. Technology does indeed have major, often unforeseen or poorly understood implications for society. Granted, this is the stuff of daily conversation – How secure is our data? How private are our conversations? How long before a trove of data defines our lives in the eyes of others using an opaque algorithm?

We would argue that the dynamics of the market may blind some technologists to the implications of their work, while for others creativity is the driver and reflection an afterthought. Conversely, policymakers too often fail to fully grasp the implications of technological developments and how these interact with existing laws and policies. Where policymakers get it wrong, the impact on the community can be significant.

We come to the social implications of technology from two different backgrounds, but our interests intersect where automation and the use of algorithms can produce – or reduce – social value.

Our challenge is to grasp the ethical and legal implications of such tools in potentially sensitive contexts. For instance, governments and agencies are accumulating data on everyone: should algorithms be applied to tease out insights, particularly in the name of preventing crime and terrorism?

To ensure that the use of these tools does not spin out of the control of the democratic societies that apply them, we need to ask questions. “What do agencies want from such data?” “What biases are inherent in the algorithms that produce results?” And perhaps most importantly: “What legal frameworks should society impose to achieve positive, just outcomes?”

At first blush, algorithms just perform automated analysis at high speed, right? But it’s more complicated than that. Not to put too fine a point on it, but a recent op-ed in The New York Times – “Artificial Intelligence’s White Guy Problem” – points out that cultural biases seep into algorithms. To quote briefly from the article:

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters … Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes … [and] we will see ingrained forms of bias built into the artificial intelligence of the future.”

Evaluation, review, oversight, accountability, and legal frameworks all seem appropriate when, for instance, the use of big data and analytics for profiling terrorism suspects has undesirable impacts on some communities. Another important instance is the use of analytics to calculate “risk assessment scores” for criminal defendants, scores that feed into decisions about bail, parole, and sentencing. As a ProPublica investigation revealed, these tools risk introducing bias against racial minorities [2].

The solution cannot be that those responsible for national security, law enforcement, and criminal justice ignore tools that may offer useful insights. The problem is not the concept of data analytics but how it is developed, used, understood, and evaluated. We need to make sure that tools are rigorously evaluated against metrics that test not only accuracy and effectiveness but also disparity of impact, and against the moral questions they raise [3]. For example, there may be some variables that should be irrelevant to a decision about sentencing even if those variables correlate with high reoffending rates.
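To make “disparity of impact” concrete, here is a minimal sketch in Python, using hypothetical data and function names of our own invention, of how such an evaluation might begin: comparing not only overall accuracy but false positive rates across groups, the disparity at the heart of the ProPublica findings.

```python
# A minimal sketch of a disparity-of-impact check: alongside accuracy,
# compare false positive rates across demographic groups. All data and
# names here are hypothetical and illustrative only.

from collections import defaultdict

def group_metrics(predictions, outcomes, groups):
    """Per-group accuracy and false positive rate.

    predictions: predicted "high risk" labels (1 = high risk)
    outcomes:    observed reoffending (1 = reoffended)
    groups:      demographic group label for each defendant
    """
    stats = defaultdict(lambda: {"correct": 0, "n": 0, "fp": 0, "neg": 0})
    for pred, actual, group in zip(predictions, outcomes, groups):
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(pred == actual)
        if actual == 0:                # did not reoffend...
            s["neg"] += 1
            s["fp"] += int(pred == 1)  # ...but was labeled high risk
    return {
        g: {
            "accuracy": s["correct"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
        }
        for g, s in stats.items()
    }

# Hypothetical example: both groups score 0.75 accuracy, yet group A's
# false positive rate (0.5) is higher than group B's (0.33).
preds   = [1, 0, 1, 1, 0, 1, 0, 1]
actuals = [1, 0, 0, 1, 0, 0, 0, 1]
cohorts = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(group_metrics(preds, actuals, cohorts))
```

As the toy numbers suggest, two tools with identical overall accuracy can differ sharply on this measure, which is precisely why accuracy alone is an inadequate evaluation metric.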

So what can we do about the non-neutral societal implications of technology? The first step is to prioritize technology for the benefit of humanity. Beyond providing an environment for examining, and where appropriate promoting, technology, technologists have in recent years begun to focus on public policy and the importance of ethics, both professional ethics and ethics in the design and implementation of new technologies.

Since the early 1970s, technology professional organizations have focused on the context around technology. This includes key aspects of the social implications of technology: sustainable development and humanitarian technology; ethics, human values, and technology; technology benefits for all; protecting the planet through sustainable technology; and the future societal impact of technology advances. It also includes organizations actively engaged in reaching out to the Science, Technology, and Society community, which comprises researchers from the humanities and social sciences interested in technology and society.

Externally, we can apply this thinking about humanitarian and development technology to examine, for instance, the 17 U.N. Sustainable Development Goals, and to ask thought-provoking questions: Does a particular technology further a specific goal? Given various technology choices, what are the potential outcomes?

The societal implications of technology are a sprawling, pervasive topic, and a few very large issues, such as climate change and nuclear weapons, may never be put back in the bottle. But an effort is underway to revive critical thinking on a pragmatic level, where it could make a difference going forward.

References: 

  1. Mireille Hildebrandt, Smart Technologies and the End(s) of Law (Elgar, 2015); Trevor Timm, ‘The government just admitted it will use smart home devices for spying’, The Guardian (9 February 2016), https://www.theguardian.com/commentisfree/2016/feb/09/internet-of-things-smart-devices-spying-surveillance-us-government.

  2. Julia Angwin et al., ‘Machine Bias’, ProPublica (23 May 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  3. Lyria Bennett Moses and Janet Chan, ‘Algorithmic prediction in policing: assumptions, evaluation and accountability’, Policing and Society (2016), http://www.tandfonline.com/doi/full/10.1080/10439463.2016.1253695.