This edX course was delivered to about 1,000 students worldwide. Thank you to the SDG Academy, based out of New York. I appreciate the support I received in delivering the three modules in this course that looked at privacy, security, ethics and consumer data rights.
Some of the questions posted by students that are addressed in this Q&A:
Since the inequalities that exist in the real world seem to be replicated in the digital world, how can we ensure that the rapid evolution taking place with respect to ICT doesn't maintain and/or exacerbate inequalities?
The Brazilian Congress approved the General Law for the Protection of Personal Data (LGPD) on July 10th 2018, under Projeto de Lei da Câmara 53/2018, with the President's approval still pending. What is the situation in other countries, and what can be expected given the dynamic change of apps and the force of law?
What is your opinion of microchip implants? They're celebrated for their convenience, but the implications for surveillance are enormous. And they don't seem like they offer the "user" much control. If I decided I didn't want to use it anymore, could I turn it off? Could I reset it? Change my settings? Or is that all done by the entity who installed the device? Who owns the microchip, technically?
Even though tech is for good, can we see risks in that the integration of IT into society more or less forces us to live our lives according to how IT is implemented? Will an IT-based society demand that people be IT-active in order to participate in that society?
If we regard tech as being for bad, more stereotyped behaviour will certainly increase the possibilities that may have a bad impact on our lives. Surely tech may have a good impact on values such as inclusiveness and availability. But, from a perspective of tech for good, is tech always for good, or is it perhaps only for the simplicity of using tech in an otherwise too complex world?
I like the idea of an opt-in approach to IoT reporting--that I have to agree to let my smart devices report back to their servers about my activity--but I wonder if/how this conflicts with what we've heard in this course about the potential of Big Data. The data gathered by IoT is giving us more information than ever before, and it can be enormously beneficial if used appropriately. But if many people choose to opt out of reporting, how will this affect the quality of Big Data and, consequently, the knowledge we are able to glean from it?
Obviously ICTs have great potential and benefits for human beings, including the achievement of the SDGs. However, there are huge risks with increased digitalization, as mentioned in the chapters on Cybersecurity and The Downsides of Digital. What do you think about the future development of digital technologies? Should we be optimistic and focus on Tech for Good, or do we really need to worry about the risk of Tech for Bad? What could be the worst scenario? Is there any real risk that super-intelligent robots would one day take over planet Earth?
What happens to the millions of people whose jobs are replaced by technology?
Bridging the digital divide has been an ongoing debate in telecommunications. Initially, while I was working in the telecoms sector, we considered the inequalities that existed in access to a "plain old telephone". Remarkably, some people did not have access to a telephone line in Australia, depending on where they lived (rural areas and the outback in particular). At the same time, as India de-regulated and privatised its telecommunications, we would measure accessibility in TPV (telephones per village). Some villages had zero. This changed to a degree with mobile and satellite-based telephones, but the issue in those cases was not access but affordability. Today, many people in developing nations enjoy 2G and 3G mobile telephony.

At the same time that markets were deregulating in newly industrialised countries (NICs), the advent of the Internet further changed the digital divide debate. Did a household have access to the Internet or not? And if so, what kind of access was it: narrowband or broadband? Dial-up access to the Internet is still better than nothing, but with the majority of the world using multimedia online, dial-up does not get you much. So what level of Internet access do you have, and what services do you require it for? These remain important questions. In some places, wireless broadband is the only access available; in others it is fibre to the curb and then DSL for the tail end of the communications.

Today we are seeing further inequalities arise: those who have access to wearable devices and those who do not, those who have access to implantable medical solutions and those who do not, those who have access to prosthetic exoskeletons and those who do not, and those who have advanced e-payment systems versus those who can only transact at the point of sale. Importantly, we can speak of a number of "digital divides" that exist today, not just a single "digital divide". And yes, inequalities can be exacerbated. In 2016, Australia decided to go fully online with its Census, and yet not all people had the access needed to complete that important statistical survey online. Not only did the servers hosting the survey crash, but many people in Australia's rural areas would not have had access even if they wished to fill out the compulsory survey. Technology used in this way discriminates against minority groups, those for whom English is a second language, those suffering from mental illness, and those who most require the services that such a survey may demonstrate a need for. We need to remember that there are people with special needs who have particular accessibility requirements, and we need to build new ways of offering ICT solutions to those who require them, while also falling back on processes like "hard copy" surveys (as just one example) when required.
There has no doubt been an increasing emphasis on data protection legislation, regulation and principles across the world, with a few notable exceptions (e.g. the USA). In the European Union the General Data Protection Regulation (GDPR) has set a global benchmark built on seven principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability. Organisations must abide by these principles if they wish to continue to do business in the European Union, or for that matter with its customers globally. The GDPR recognises that companies in breach of the Regulation are to be held accountable and penalised for not abiding by the law. We have already witnessed a number of substantial fines and penalties since the Regulation was enacted. Countries like Brazil, which already had strong consumer protection laws, are now adding to that strength by creating 'equivalency' with the European Union. This makes sense: if you want to do business with another country in terms of consumer goods and services, then you might well demonstrate that you are trustworthy enough to make these international connections. While the LGPD is not exactly the GDPR, the spirit of the regulation is the same.

In Australia we have witnessed the introduction of the Consumer Data Right and mandatory data breach notification laws. These laws are supposed to make Australians feel "safer" in knowing which companies have been penetrated and whether the data they hold has been compromised by hackers. At the same time the Consumer Data Right emphasises things like "data portability", as does the GDPR, demonstrating that consumers have a right to their data. The odd thing about the Consumer Data Right, however, is that many NGOs and members of civil society (e.g. financial NGOs and consumer representative groups) believe the CDR is counter-productive for data protection: that, in fact, the Consumer Data Right was introduced so that companies had greater understanding of the market and their practices, with greater ability to manipulate the tariffs, fees and costs associated with the "open banking" and "open energy" systems to come.

We have also seen the introduction of AI principles that describe "sensitive data" and "de-identification of data", and the need to pre-process information that is collected. AI offers mechanisms to "re-identify" data into the future, which nullifies the ability to de-identify. Telecommunications laws are also changing across the world, making it mandatory for businesses that collect data (e.g. Internet service providers) to retain it in case of law enforcement requests. More recently in Australia, the Assistance and Access telecommunications amendment allows the government to demand technical capabilities that thwart the encryption of consumer services (e.g. in WhatsApp, Telegram, Signal, Google etc.). So on the one hand we have "data protection" and on the other hand we have "access to data". It should all come down to the reason systems exist. In theory I can build a service or system that "abides by" data protection laws, consumer data rights laws, and more, but is still the most unethical service. New innovations are making this space even harder, laws are too slow to ensure rightful processes, and so many lawyers are now pointing to standards and soft regulation to enable the introduction of advanced services.
For those interested in the topic of microchip implants, please see the many peer-reviewed articles I have written in this space, as well as additional opinion pieces, interviews and the like. http://www.katinamichael.com/search?q=microchip%20implants
I have conducted many interviews with members of the body modification movement who have really aided my understanding of the role that microchip implants may have in society (and are having for a very small portion of global citizens). From my studies, people who adopt implants fall into several categories: technology entrepreneurs who see a real business case for a unique ID that is not transferable (although that is arguable with technologies such as RFID or NFC); citizen scientists and hobbyists who enjoy seeing how far they can take technology; individuals who are interested in body modification from an aesthetic and functional point of view; those who follow the latest fashion or hip trends; and those with a sort of obsession with placing whatever they can in their bodies, at times suffering from body dysmorphic disorder (BDD). There are more categories, especially in the prosthetics area, but for now I will focus on non-medical solutions.
My personal opinion about microchip implants for non-medical applications is that they should not be used for humans.
Although much of the media emphasises "convenience" with respect to microchip implants, the jury is out on how convenient they actually are. You require multiple people present at the point of implantation; removal is not easy (unless you have a surgical removalist involved); and they often don't work (as has been noted by numerous people, including Andreas Sjöström of Sogeti, who trialled the technology last year). We have also seen implanted journalists attempt to access a physical apparatus like a photocopier and have trouble (e.g. BBC reporters). See also "Why implants are a bad idea": https://www.youtube.com/watch?v=1a2_sdwhF5A
Many countries have instituted identity schemes, even electronic or so-named Real ID trusted identifiers. Please see this chapter for a historical overview of identity schemes, some of which dictate that a "Follow Me Number" is granted to a citizen, either at birth, at the age of 12, or when a citizen becomes a taxpayer. http://www.katinamichael.com/research/2015/7/17/the-auto-id-trajectory-chapter-four-historical-background-from-manual-to-auto-id Many people believe this kind of identifier, once digital, will also curb cybercrime and more.
How much surveillance can a microchip grant? It all depends on the microchip. For the greater part, devices on the market are passive and only have a read range of about 10 cm. They are, in effect, proximity tags or transponders. However, newer devices that are bigger and can fit more memory and storage can do a little more. There is also the ability to tether the microchip to tablets or smartphones and use the sensors in those devices to denote aspects of location, identification and condition.
Some microchips can have their own unique ID created; others come with an ID number already on the device, usually about 16 characters in length. Something embedded can be controlled by others, but they have to be able to access your unique ID number to either "deny" you a service or "offer" you one. Many proponents of the technology state that, even if someone were able to gain access to the unique ID number, changing the "locks" would simply block that ID to ensure that only the right person gains access to a door or asset.
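To make the "changing the locks" idea concrete, here is a minimal, hypothetical sketch (the tag IDs, reader function and allow-list are illustrative assumptions, not any real deployment): access is offered or denied purely on whether the presented unique ID appears on an allow-list, and "changing the locks" simply means revoking one ID and enrolling another.

```python
# Minimal sketch (hypothetical): a door controller that grants or denies
# access based on an allow-list of tag IDs. "Changing the locks" is simply
# removing a compromised ID from the list and enrolling a new one.

# Set of authorised unique IDs (e.g. 16-character hex strings read from a tag).
authorised_ids = {"E00700001A2B3C4D", "E00700005F6A7B8C"}

def read_tag_id() -> str:
    """Stand-in for a reader poll; a real reader would return the tag's UID."""
    return "E00700001A2B3C4D"  # simulated scan

def request_access(tag_id: str) -> bool:
    """Offer or deny the service purely on the basis of the presented ID."""
    return tag_id in authorised_ids

def change_the_locks(compromised_id: str, replacement_id: str) -> None:
    """Revoke a cloned or compromised ID and enrol its replacement."""
    authorised_ids.discard(compromised_id)
    authorised_ids.add(replacement_id)

if __name__ == "__main__":
    scanned = read_tag_id()
    print("Access granted" if request_access(scanned) else "Access denied")

    # If the ID is ever cloned, the asset owner blocks it rather than
    # replacing the implant itself.
    change_the_locks("E00700001A2B3C4D", "E0070000DEADBEEF")
    print("Access granted" if request_access(scanned) else "Access denied")
```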
Some people have demonstrated the use of implants for employees. One could argue how much convenience this gives employees as opposed to the control it grants employers to know when you have arrived at work, when you take restroom breaks, how long your lunch break is, and how fast you finish tasks. All of this can be achieved with localised readers or even mobile readers that may or may not be overt.
The microchip is technically owned by the manufacturer or service provider, but it all depends on the service agreement. If it is a DIY installation, then you likely own the device, but if it happens via a third party (as with a heart pacemaker, say), then the device is owned by the third party supplying the service. I have written about this also. Interestingly, even with some wearable devices (like Google Glass), ownership rests with the company that sells the device, and they have a right to "turn off" the device remotely if it is being used against the terms and conditions. http://www.katinamichael.com/research/2017/9/22/implantable-medical-device-tells-all
Please feel free to search more at www.katinamichael.com/find
While I did not speak on microchipping people in the course modules, I appreciate this question being asked as we move toward very innovative ways of identification (with or without the citizen's consent), e.g. the ability to facially recognise people using an electronic ID in Australia (the driver's licence scheme), in China to do a "criminal hit" using facial recognition, or through social rating systems and biometric systems (like Aadhaar in India). The issue is that these technologies are being used retrospectively in ways other than those for which they were instituted, and consent has really become "fuzzy". And I think this goes back to the "inequality" issue. Some members of the "untouchables" in India have no usable fingerprints, so what happens to their Aadhaar number? These are complex issues.
Theoretically it also becomes easier to socially sort with digital technologies, as was demonstrated in World War 2 with the segregation of minority groups. And today we have much more sophisticated technology than punch cards. http://www.katinamichael.com/interviews/2014/1/23/the-holocaust-survivor
We did not get to address this question on the live Q&A but I would like to emphasise that this question gets to the heart of the matter with respect to the course module name "Tech for Good" as related to the Sustainable Development Goals.
First and foremost, let us consider the term "dual-use". What does this mean? That the very same technology can be used both for good and to cause others harm. No technology is a zero-sum game; life is never black or white. But what happens when technologies are introduced by government or business, and someone has indeed made a decision to apply the technology in one way and not another? Consider "location sensors" embedded in smartphones. Indeed they can be used to play a location-based game, but they can also be used by a suspicious spouse to "track" their partner, or by authorities gathering evidence in an investigation of a given case.
I have asked these same questions of colleagues. Can something ever be considered "good" or "bad"? A technology does not have intent, as some of my close colleagues in the IEEE have argued. Rather, they argue that a technology can be misapplied or misused, but that this kind of harm comes from the human beings and individuals tasked with provisioning services in different contexts. We can point to many cases, e.g. Google collecting more than just photographs of our home frontages (i.e., including household MAC addresses and more), Google continuing to identify your "location" even after tracking had been switched off, or games such as Pokemon Go that were collecting more than just "basic" data and accessing user cameras until they were later found out, and more.
Two theoretical approaches used to describe the application of IT in society are (i) technological determinism and (ii) the social shaping of technology. These approaches hold either that technology is set on a given trajectory from the moment it is created, or that technology is shaped by humans in its application. These two approaches have preoccupied science and technology studies (STS) professionals for decades. Modern interpretations say that both approaches are inadequate in explaining what is really going on, and that we should instead consider "praxistemology" or even "experimental systems", which are defined as being "just what they are": an innovation that may or may not be successful depending on whether that cluster of society sees it as relevant or useful.
If we said that "all tech is good" by its very nature, then we could argue that misalignment between societal values and technology construction is the major issue. Does the tech we deploy "fit" or "align" with the values or ethics of a given society? Mostly, we do not consider enough the consequences of innovations and the risks that they bear (which may be anticipated or unanticipated).
Now let's think about it some more: was the "atomic bomb" good or bad? Did it come with some inherent characteristics? Did society consider the potential harms? How involved was society in the decision to drop the bomb on Hiroshima, or even to drop a second bomb? The works of Norbert Wiener, father of cybernetics, might spur on some more thoughts in this matter. http://www.katinamichael.com/research/2017/6/7/norbert-wiener-and-the-call-for-ethical-engagement
As I noted in the video, I liked this question. It is something I have considered both as an academic and as a citizen.
I cannot stress enough the importance of gathering only the data that a given service requires in order to function. Additionally, I cannot stress enough the importance of gathering data for a single purpose and not allowing retrospective use for a purpose other than that originally intended. The maxim should be: "gather only what you need and no more".
I also think it is very important to encrypt data in order to protect it. But most importantly, "pre-processing" is required to remove all sensitive data gathered in a given exercise. Pre-processing means that you don't have to have "personally identifiable information" available or linked to given datasets. If you do happen to require name or address data along with phone numbers (e.g. for billing consumers for services rendered), then a process of de-identification is required. De-identification does not mean that re-identification cannot happen down the track through AI, but the process of anonymising records is very important.
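As one illustration of that pre-processing step (the field names, key handling and record layout below are assumptions for the example only, not a production pipeline), direct identifiers can be replaced with a keyed pseudonym before a record reaches an analytics dataset, while the fields genuinely needed for billing stay in a separate, access-controlled store:

```python
# Minimal sketch: pseudonymise the identifying fields of a (hypothetical)
# utility record before it is passed to analytics, keeping the
# re-identification key out of the dataset itself.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-held-separately-from-the-data"

def pseudonym(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def de_identify(record: dict) -> dict:
    """Keep only what analytics needs; hash the identifier, drop the rest."""
    return {
        "customer_ref": pseudonym(record["phone"]),   # stable pseudonym for linkage
        "suburb": record["suburb"],                    # coarse location only
        "monthly_usage_kwh": record["monthly_usage_kwh"],
    }

raw = {
    "name": "Jane Citizen",
    "address": "1 Example St",
    "phone": "+61 400 000 000",
    "suburb": "Wollongong",
    "monthly_usage_kwh": 312.5,
}

print(de_identify(raw))
# Name and address never reach the analytics dataset; billing systems that
# genuinely need them keep them in a separate, access-controlled store.
```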
If we gather data from smart devices, smart meters, smart hubs, human activity monitoring systems and the like we need to ensure that the devices have security. This requires a change in Engineering Design mindset where privacy and security functions are seen as integral to the development of any new smart technology.
Free riders are often those accused of not consenting to their data being used in an aggregated fashion by companies or governments. However, these individuals should maintain their right to privacy. At times these individual consumers are accused of being free riders because they don't participate in data collection, yet they wish to reap the benefits of the big data driving decision-making. Participatory design is important from the outset, but all too often we see a business mindset that says "build it and they will come" without asking "is this what you want, and is this the best way to get at the questions we need answered?"
Increasingly, we seem to want to collect "everything" and worry about its use down the track, and yet this is not really the right thing to do. We may require a blockchain approach to harnessing the power of data, through individual consent, allowing people to trace how their data (smart home, IoT devices, social media, Internet search etc.) is used over time and even be recompensed for its uses.
Big data is here to stay, notwithstanding its major challenges. The greatest issues in the big data world will play out in the insurance industry. http://www.katinamichael.com/search?q=big%20data
The term "digital disruption" has been coined to describe the digital transformation processes that many governments and businesses are undergoing. In these times disruption is causing greater uncertainty and making external environments even more turbulent. Instead of stabilising, we can imagine the next 30-50 years to continue causing major changes to the way people interact using technology. Taken too far, this disruption has found its way into "transhumanist beliefs" that usher in a new way of being through becoming one with technology.
Just last week I was discussing the ways in which cybersecurity will be challenged by non-traditional potential harms. Consider CRISPR technology in light of the potential to change humans in a manner like never before. https://www.technologyreview.com/s/612458/exclusive-chinese-scientists-are-creating-crispr-babies/
Many hundreds of thousands of people have voluntarily provided their DNA data via saliva swab to companies like 23andme, ancestry.com, ancestryDNA.com and more. The issue is how this data could be used if hacked. What does it mean to target a whole population with bioterrorist means based on their human genome? Is this in fact an "offset" in military terms?
The worst-case scenario may well be the first candidate: microchip implants being used by totalitarian regimes for uberveillance. Uberveillance is a term my husband MG Michael coined in 2006, which has received international recognition over the last decade and more. It refers to applying implants for control, usually instituted by persons in authority (e.g. government). http://www.katinamichael.com/media/2007/11/26/scary-stuff http://www.katinamichael.com/research/2010/6/1/toward-a-state-of-berveillance
While these scenarios remain possible, I would think that positive outlooks on the adoption of technology will prevail. While there is no single sustainable development goal named "technology", technology must be an enabler that wraps around all the goals, including those for peace and justice.
We are presently continuing to push the bounds of machine agency, especially through the creation of autonomous systems that have become, in some cases, automated data collection mechanisms. We are beginning to see the deployment of machines like Knightscope's K5, which is reminiscent of the Daleks in Dr Who. Will these kinds of technologies replace policemen and women? Will the military be largely made up of machines that do the fighting, leaving humans to strategise how best to deploy the automata?
For me, one risk that the digital arena poses is a disconnection and disassociation of people from others, and also from nature. I believe that once a comprehensive disassociation takes place through "digital only" platforms where people spend the majority of their lives, we may see a disassociation from real problems in the environment, be it climate change, deforestation, rising sea levels, the pollution of our waterways and more.
So the greatest risk to humanity, perhaps, is not from bots or anthropomorphised robots or superintelligent machines, but from humans themselves.
Hello. This chapter explores cybersecurity as a global issue in the context of the digital revolution, and how ensuring cybersecurity through greater awareness and strong multi-stakeholder partnerships is crucial for achieving the Sustainable Development Goals in a hyper-connected and digitized world.
Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes. Cybersecurity is a global issue that knows no boundaries. It affects individuals and society, small and large organizations and transnational companies, critical infrastructure systems that we all depend on, and even our national security.
Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are using ever more innovative methods and techniques to compromise systems. The increasing move towards digital records for health, education, government IDs, and just about everything else, facilitated by the internet, means that the value of information has become attractive to those who wish to penetrate systems for financial gain, for reputational gain, to cause instability, or simply to demonstrate the weaknesses that exist.
The Internet was never built with security in mind, and yet so much of the world's data flows are transacted over public networks that are vulnerable to attack. It is therefore important that corporations and government agencies seek to secure the data they collect on behalf of consumers and citizens. To do this they can use the CIA model, which stands for Confidentiality, Integrity, and Availability in the context of security. Confidentiality of data means that a client can trust that their personal information will not be shared with those who are not explicitly authorized to view it. This can be achieved, in part, by implementing access control mechanisms, such as authorizing only certain people to access and/or manipulate information. With confidentiality, the data is either compromised or it is not. But integrity includes both the correctness and the trustworthiness of the data. The integrity of data has become increasingly important as more sectors adopt data-driven decision-making. If the data underlying a decision is corrupted, the impacts of that decision may be devastating for governments, businesses, communities, and individuals. To preserve integrity, you need prevention mechanisms that block any unauthorized attempts to change the data, or any attempts to change the data in unauthorized ways; and detection mechanisms that report when the data's integrity is no longer trustworthy.
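To make the distinction between prevention and detection concrete, here is a minimal sketch of a detection mechanism (the record contents, key handling and field names are illustrative assumptions, not a prescribed design): a keyed digest is stored alongside the data when it is written, and any later mismatch is reported rather than silently trusted.

```python
# Minimal sketch of an integrity *detection* mechanism: store a keyed digest
# alongside the data, and report when the stored data no longer matches it.

import hashlib
import hmac

INTEGRITY_KEY = b"key-managed-outside-the-data-store"

def sign(record_bytes: bytes) -> str:
    """Compute a MAC over the record at the time it is written."""
    return hmac.new(INTEGRITY_KEY, record_bytes, hashlib.sha256).hexdigest()

def verify(record_bytes: bytes, stored_mac: str) -> bool:
    """Return False if the record has been altered since it was signed."""
    return hmac.compare_digest(sign(record_bytes), stored_mac)

original = b"reservoir_level=4.2m;gate=closed"
mac = sign(original)

tampered = b"reservoir_level=1.0m;gate=open"
print(verify(original, mac))   # True  - data still trustworthy
print(verify(tampered, mac))   # False - detection mechanism raises the alarm
```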
These types of integrity mechanisms are particularly important for controlling cyberphysical infrastructure in sectors such as telecommunications, water and waste control, energy, oil and gas refining, and transportation, because these sectors affect large populations, and significant outages can be particularly harmful to highly urban communities. So if a natural disaster hits and the water control system has been compromised and its data corrupted, the flooding could have devastating consequences that might otherwise have been avoided. So that's Confidentiality and Integrity. Availability, as it relates to cybersecurity, is knowing that you can access or use a resource or data when you want to. Someone may deliberately deny access to data or to a service by making it unavailable, known as a denial of service attack. These types of attacks generally occur when a hacker overloads a system with superfluous requests, preventing some or all legitimate requests from being fulfilled. It generally means that computers cannot connect to a host machine on the internet, thus denying them the ability to carry on with, for example, a retail transaction, a cash withdrawal from an automatic teller machine, or the accessing of vital government records. As more and more systems go online, enforcing the confidentiality of data, the integrity of data, and the availability of system access will be crucial to ensuring that the systems function as intended, whether they're online government services, mobile banking, e-health records, educational tools, or fundamental infrastructure.
Many of the types of cybersecurity issues we've discussed thus far fall into the category of "cyber threats", which exploit weaknesses in infrastructure. Responses to these threats often involve technical rather than legal measures; as such, a variety of organizations ranging from NGOs to intergovernmental bodies are actively involved in cyber defense. In contrast, cybercrime refers exclusively to attacks on private entities with the intent of gaining profit or inflicting damage. It is estimated that cybercrime is costing us 600 billion to 1 trillion annually. As more data is collected online, the consensus is that the cost of cybercrime will rise commensurately. It also follows that as the number of devices increases, so does the number of avenues of attack for hackers seeking to penetrate systems. At the personal level, hackers are interested in your identity and the credentials found on your computer. Just as countries seek to reap the advantages of global reach through online business models, breaches in security can have a chilling effect on those starting to use the internet. In countries in Africa, even as consumer awareness about cybersecurity grows, cyberattacks have had a detrimental impact on development and growth. Much of the population has also been exposed to phishing attacks, the practice of sending fraudulent emails that resemble emails from reputable sources. The aim is to steal sensitive data like credit card numbers and passwords; it is the most common type of cyber attack. You can help protect yourself through education or a technology solution that filters malicious emails. At the national level, we have seen cyberterrorists steal fingerprint records and claim to have penetrated defense websites, making a mockery of defenses and attracting international attention. The potential to hack DNA databases is also a real possibility. At the international level, multinational organizations have had logins and passwords stolen across jurisdictions. Although the potential for cybercrime can be mitigated by enhancing the security of internet networks, only national governments possess the proper legal tools and jurisdiction to prosecute attackers. But this is a truly multi-stakeholder environment, and we need to better understand data sovereignty, the applicability of international humanitarian law, and the United Nations Charter in order to create international standards for managing cybercrime that reach across national borders. One such example is the Council of Europe's Convention on Cybercrime, in force since 2004, which has had some impact on international cooperation and data sharing between nations. Ultimately, security is everyone's problem, not just that of the IT groups tasked with protecting a government's or company's networks and data repositories. In 1992, the OECD produced security guidelines promoting a culture of security through leadership and extensive participation by government and business stakeholders. The main point raised by the OECD is that security has to be factored in during the design of any new technology system. Today, what we call privacy and security "by design" principles are being taught internationally as a way to emphasize the growing importance of cybersecurity. The principles the OECD identified were nine-fold, and include awareness of risks, timely responses to risk, ethical conduct, and continuous reassessment, among others. The aim of cybersecurity is to prevent an attack before it even happens. This is the ideal solution, and where technology is the most helpful.
This may take the form of antivirus software, firewalls, and many other toolkits, such as honeypots that lure hackers into revealing identifying information. If an attack does happen, then detecting it as soon as possible is just as important: knowing what is happening and what is causing the exposure. Auditing systems and intrusion detection are most effective here. Finally, an organization or government agency that has suffered a cybersecurity attack needs to recover from the attack as soon as possible, that is, assess and repair the damage caused and get back to normal operation as soon as possible. It is important to remember that cybersecurity is not a static concern. Organizations need to assess their logical and physical relationships with other systems and partners to determine the level of intra-organizational activities, extra-organizational activities, and those on the internet. And as systems get linked to increase interoperability and efficiency, trust in partnerships is paramount when granting employees of other companies access to your system. Given that the internet is a truly global phenomenon with a distributed architecture, there is no one country which rules over it. Instead, given the ill-defined boundaries of cyberspace, a network of institutions is responsible for addressing threats and international relations. Increasingly, we are moving toward a governance model in cyberspace, one where disclosure of data breaches is favored over closeted and uncoordinated responses to cybercrime.
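As a hedged illustration of the audit-based detection mentioned above (the log format, addresses and threshold are assumptions for the example only), a very small script might scan an authentication log and raise an alert when a single source accumulates repeated failed logins:

```python
# Minimal sketch of audit-log intrusion detection: flag source addresses
# with repeated failed logins in a (simulated) authentication log.

from collections import Counter

THRESHOLD = 3  # failed attempts before an alert is raised (illustrative)

log_lines = [
    "2018-11-02 09:14:01 LOGIN FAIL user=admin src=203.0.113.7",
    "2018-11-02 09:14:05 LOGIN FAIL user=admin src=203.0.113.7",
    "2018-11-02 09:14:09 LOGIN FAIL user=admin src=203.0.113.7",
    "2018-11-02 09:15:00 LOGIN OK   user=alice src=198.51.100.23",
]

failures = Counter()
for line in log_lines:
    if " LOGIN FAIL " in line:
        src = line.rsplit("src=", 1)[1]
        failures[src] += 1

for src, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {src} - possible intrusion attempt")
```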
NGOs, for the greater part, coordinate community-level responses. One major international institutional response has been the emergence of CERTs (Computer Emergency Response Teams). These teams organize responses to security emergencies, promote the use of valid security technology, and ensure network continuity. Although the majority of CERTs were founded as non-profit organizations, many have transitioned towards public-private partnerships. But these types of organizations lack power at the national and international level.
The International Criminal Police Organization (INTERPOL) has also gotten involved in combating cybercrime, creating a 24/7 ‘Network of Contacts’ in order to help national governments “identify the source of terrorist communications, investigate threats and prevent future attacks.” The 24/7 Network of Contacts, empowered by Article 35 of the Convention on Cyber Crime, is a rare example of direct international intervention and collaboration. I’ve only really scratched the surface of everything there is to cover. But I want to leave you with these final thoughts about creating the culture of security that will help ensure that our data is safe and that technology lives up to its potential to be a useful tool for the betterment of mankind: Good practices need to be taught early and guides need to be developed for citizens, governments, and every other sector; stakeholders need to cooperate by sharing knowledge, especially about specific security incidents; capacity building is paramount when it comes to security at every level, beginning with leadership, strategic and operational staff. When it comes to cybersecurity at the national level, citizens and stakeholders must hold their governments accountable, especially as more and more government systems go online.
The ITU's Global Cybersecurity Index (GCI) is a fantastic resource here, measuring the commitment of countries to cybersecurity. Harmonization, collaboration, and, above all, education are required to make any progress against cybercrime. Empowering organizations to commit to cybersecurity will contribute to SDG 16, to promote justice and strong institutions, thereby ensuring security for all other ICT-related projects for sustainable development. Thank you.
Hello. This chapter will focus on data rights and the role of government in ensuring those rights. Data rights are a question of who owns--and therefore has control over--certain types of information. They tend to fall into three categories: Government Data Rights, Business Data Rights, and Consumer Data Rights. In a government context, a “data right” is a way to refer to a government's right to use valuable intellectual property, such as software or certain types of technical or scientific data.
“Data rights” generally refers to intellectual property in a business context, as well, say in the form of patents that are territorial, granted at the national or regional level.
But what’s on most people’s minds nowadays, and what is most germane to our discussion of data and Sustainable Development Goals, is consumer data rights; that is, an individual’s right to own and control the data that is collected about them, especially by businesses. And since data is such a valuable commodity- it fuels research, innovation, and other public service or private business needs-it makes sense that individuals should have a say in how it is used and who profits from it. When you upload data onto websites or social media platforms, you might not be aware of the company’s default privacy settings or terms and conditions. Most people assume that data voluntarily submitted to the website is kept secure and is not shared with third parties or made publicly available, and yet this is not always true. Accordingly, large Information Communication Technologies companies and vendors have amassed personal information from subscribers from across the globe.
Consider that Facebook had 2.23 billion active monthly users as of June 2018. That’s almost double the population of China, the most populous country in the world! And Facebook collects data about all their subscribers. Since about 2006, people who have used a variety of ICT online platforms have demanded access to the data stored on them. Initially, big ICT firms hired paralegals to deal with these ad-hoc requests by consumers, and organized customized data searches on their behalf.
The issue first arose from the desire of individuals to determine the development of their lives in an autonomous way, without being perpetually or periodically stigmatized as a consequence of a specific action performed in the past. This right later became known as the Right to be Forgotten, prevalent in the European Union. Social media platforms have become a one-stop shop for intelligence-gathering with respect to law enforcement, and certain telecommunications metadata laws also allow authorities access to content that a user has been looking at online via their Internet Service Provider (ISP). Many police officers call this type of data the "cheapest investigative tool." These trends in policing and data investigation are set to get even more pervasive as new technologies like Apple's Siri, Amazon Alexa, and other types of voice-recognition devices capture private conversations, convert these conversations into data, and store the data in the cloud, ripe for big data analytics. For you to be well-informed as a citizen and consumer, it's important to understand the range of data that's being collected and stored--and it's not just your online history.
The collection of biometrics, especially facial images and fingerprints, has become a common practice. And what about other types of personal information? Arguably the most personal information you have is your DNA. The S and Marper versus United Kingdom case, heard in the European Court of Human Rights in 2008, determined that holding DNA profiles or samples of individuals who had been arrested, but who were later acquitted or had the charges against them dropped, is a violation of the right to privacy under the European Convention on Human Rights.
But in many countries, the collection, storage, and retention of DNA profiles and DNA cellular samples as well as biometrics in general, are ill-defined. Now we have companies collecting DNA from individuals, services like 23andme and ancestry.com, through which customers can provide a DNA sample and receive a report about their family history and heritage, their proneness to specific genetic diseases and more. Customers provide this DNA information voluntarily, but may not be aware that their DNA is then kept and stored by the company. How else is that information being used? Many of the privacy policies that consumers sign--often without even reading what they’re agreeing to--have provisions that allow companies to share customers’ data with third parties, including marketing companies whose main business driver is the liquidity of data. Ultimately, this means that the owners of this data are the companies or organizations that collect it, not the people who supply it.
Data innovation driven by government open data initiatives in the form of new services is said to drive future growth. But with this type of innovation comes a significant responsibility for stakeholders to address data management. Many governments have been proactive in considering the sociotechnical impact of ICT on citizens and businesses--technology assessments, risk assessments and privacy impact assessments are all mechanisms that examine, more often than not, negative impacts on consumers using evidence, and they allow for these risks to be identified and extrapolated. The state has a responsibility for protecting data, transparency and accountability, in the face of corporations who want to use data to enhance innovation and develop their businesses.
The GDPR has overhauled how businesses process and handle consumers' personal data. The legislation is designed to "harmonize" data privacy laws across Europe as well as give greater protection and rights to individuals. In short, it is a set of rules that give users more control over their online personal data. Businesses operating in the EU are now required to ask consumers if they can use consumers' data, and they are prohibited from using someone's data if the company does not have explicit consent. Companies covered by the GDPR are accountable for their handling of people's personal information. Under GDPR, the "destruction, loss, alteration, unauthorized disclosure of, or access to" people's data has to be reported to a country's data protection regulator.
High-level initiatives like GDPR are a major step in the right direction when it comes to recognizing and protecting people’s right to control their own data. But, in an age when private companies have more data than government agencies, companies will need to lead the way in reforming business practices and restoring consumer trust.
The Chartered Institute of Marketing claims that 67% of consumers would actually share more personal information if organizations were more open about how they will use it. By demonstrating that they are open, honest and championing best practice, organizations can show their customers the value of sharing their data in delivering a more personalized experience. And let's not forget the most important stakeholder in all of this--you, the consumer, citizen, and individual.
No doubt the key to all of this is consumer education and empowerment, so you are aware of your rights and can critically evaluate how your data will be used and by whom. Children should learn what happens when they go online or interact with a mobile device, and how to interpret user agreements. Because data can be used to support or inform any of the Sustainable Development Goals, privacy and data rights figure into all of them. But I think the most important for me are SDG 16, which promotes justice and strong, ethical institutions, and SDG 17, which stresses the importance of partnership and cooperation in achieving sustainable development.
Fundamentally, we need stakeholders to come together to protect individuals’ data rights and create new standards, industry guidelines, laws, and even privacy-enhancing technology. We cannot become lax on data rights or the ethical use of data, or think of privacy only as an afterthought. They need to be in the design process from the very beginning. Protecting data rights is a real issue--but as we’ve seen with regulations like GDPR, the work of advocacy groups like Privacy International, and consumer movements like the one to #deleteFacebook, it’s an issue that many people are committed to solving, and there is real reason for hope. For instance, blockchain technology-most often associated with cryptocurrencies and other financial technology, but which can actually be used to securely store any type of information-offers a huge amount of potential for ensuring that people can have more control over their own data.
In Australia, the government has decided to legislate a new "Consumer Data Right" to give Australians greater control over their data, empowering customers to choose to share their data with trusted recipients only for the purposes they have authorized. This is a direct result of new, emerging open frameworks in the banking sector, but it will soon be rolled out to other sectors like energy and telecommunications, and then economy-wide. It's an example of how reforms in the private and public sector can feed off and influence each other for the greater good, and, in theory, it will mean that individuals can feel confident that they are in control of who they share their data with.
With new consumer data rights and data protections set in legislation, consumers will negotiate how much personal information they want to share with their service providers, and they will be able to choose whether they want to open their private data for public accessibility, making it available for research or other purposes. They may even be able to sell their own data, and benefit from it the way companies are benefiting from it now. But the infrastructure around such initiatives is first being enacted through legislation, then implementable frameworks, and then consumer awareness to utilize these open services. The movement around personal data rights is ongoing and will only become more important as time goes on. And as new laws and standards emerge, there are still a lot of questions about oversight and governance. Who is going to hold companies and governments accountable for their management of data? I urge you to keep yourself informed and get involved in the data rights discussion in your own communities. Thank you.
Hello my name is Katina Michael and I’m a professor of computing and information technology at the University of Wollongong in Australia. This chapter will focus on the role of privacy with respect to Information and Communications Technology and the Sustainable Development Goals, and emphasize why establishing trust between stakeholders-particularly between governments and citizens-is the most important aspect of any ICT intervention.
In July 2015, the UN Human Rights Council appointed its first Special Rapporteur on the Right to Privacy. The motivation for doing so was issues related to security and surveillance, Big Data and open data, health data, and personal data processed by private corporations. The focus was really on the efficacy and proportionality of intrusive measures made possible by advances in ICT.
As governments across the world undergo digital transformation, privacy issues abound in the secure storage of citizens’ personal information. Consider this in the context of SDG 3, good health and wellbeing. Whether sensitive information pertains to one’s health status, criminal records, race or religious affiliation or home address, citizens have an expectation of privacy. But before we talk about the right to privacy as it relates to the digital age, I think it’s important to look at the history of this right and its importance as an issue of international concern.
Article 12 of the Universal Declaration of Human Rights identifies the right to privacy as a key principle that ensures freedom. It states: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation." The Universal Declaration of Human Rights was established in 1948, after the crimes committed by the Nazis in World War II were evidenced during the Nuremberg International Military Tribunal. The Reich Government kept copious and meticulous hand-written records on a variety of minority groups who were discriminated against based on race, religion, sexual orientation, mental health, and more. Now, although computing power was then in its nascent stage, relying greatly on the tabulation of punch cards, it was later discovered that the Hollerith machine was used by the Nazis to process census information that facilitated the identification of individuals who were later sent to the concentration camps and gas chambers.
I realize that this is a bleak way to start a discussion of the right to privacy, but it’s important to establish why privacy was included in the Universal Declaration of Human Rights, after people’s personal identifying information was used against them. On the other hand, however, if we are to examine other acts of dehumanization that have taken place in the 20th and 21st centuries, they often begin with the removal of nationally-recognized identification documents, like passports. If you don’t have a documented identity, you may be denied your rights as a citizen, because you can’t even prove that you are a citizen. Privacy, as it would have been defined in 1948, was basically the right to be left alone. But as ICTs began to permeate government and business, information privacy came to refer to the interest an individual has in controlling, or at least significantly influencing, the way data about themselves is handled and used. This might include sensitive information like: name, date of birth, age, sex, your address; current contact details of family and guardians; bank details; medical records; personal care issues; service records and file progress notes; individual personal plans, assessments or reports; guardianship orders; or even personal correspondence. Other information relating to ethnic or racial origin, political opinions, religious or philosophical beliefs, health, or sexual lifestyle should also be considered confidential, as it could be used against you if it falls into the wrong hands.
But in this day and age of mobile devices, social media, and online platforms that constantly leave behind digital footprints, how is it possible to maintain privacy? This is a particularly important question when websites do not disclose whether or not they are sharing your information with 3rd parties. Default sharing settings or unclear terms and conditions can place individuals at risk of disclosing private information that they may prefer to be confidential. First of all, it’s important to know who has your information. Most governments across the world store electronic records on their citizens, things such as tax records, electronic health records, even student identification records. The open data movement, which advocates for the free flow of data, sees value in making available data that has been funded by taxpayers, so it can contribute to the public good.
So governments are considering opening up some of this data so that it may be accessed by third parties who wish to create innovative services using de-identified information. De-identification aims to allow data to be used by others without the possibility of individuals being identified. Consumer data rights-basically, the idea that you as a consumer should control the data collected about you-are supposed to change the potential for individuals to have their data locked to a legacy provider. These rights, in theory, offer individuals data portability between stakeholders of choice, for example, service providers like your energy or electric company. In the context of the banking sector, the notion of an open banking framework has emerged, so that consumers will be able to access and safely transfer their banking data to trusted parties. Some Non-Government Organizations are suspicious of the consumer data rights movement, claiming that these rights could actually be used to manipulate consumers. Take the energy company example again: by knowing exactly how much energy a consumer uses, much can be determined, inclusive of household activity monitoring stemming from the types of household appliances in use, time of day data, for instance when someone is not at home or when someone chooses to sleep or rise. This is amplified when information about consumers is available publicly online, and big data analytics is able to draw from these various sources to make inferences about an individual’s pattern of life. This process is known as predictive profiling and may be used to on-sell more product.
On the flipside, this is the first time in the history of the world that we are able to gather and share information so quickly and in such quantities, and it can certainly be used for good. By harnessing the power of ICT, by crowdsourcing information from stakeholders around the world, and gathering data from sensors embedded in smart devices, collective awareness can be used to improve people’s lives and achieve sustainable development. So what’s the bottom line? Your data is inevitably going to be collected by someone, somewhere-the question is, do you trust that your data will be used for you and not against you by government agencies and private corporations? In this age of data-driven decision-making, I’d say that trust is just as important-maybe even more important-than privacy. You may be willing to give up control of your private data if you trust the person you’re giving control to. Without trust, explicit consent, transparency and accountability, even the most innovative ICT intervention will run into serious problems upon implementation.
Security breaches that occur from within governments, such as insider attacks by employees, or from outside attacks, will have devastating impacts on people. Problems exist particularly where there are weak privacy laws and controls in place. Even a leading state in cybersecurity, like Singapore, can have its systems penetrated. In July 2018, Singapore had to disconnect computers at public healthcare centers from the internet after hackers compromised more than 1.5 million SingHealth patients' personal information. Cyber attacks on national identity systems will become commonplace as more credentials are gathered and stored online. If citizen profiles make it onto the dark web, the implications of adopting emerging technologies before they have been tried and tested on large-scale populations will become apparent, and there will be a major backlash from citizens. The dark web refers to encrypted online content that is not indexed by conventional search engines. The dark web is part of the deep web, a wider collection of content that doesn't appear through regular internet browsing.
So what is the answer? Do we adopt new technologies to justly transform practices and reap the benefits of all this data? Or do we stick with traditional systems that have known vulnerabilities and learn to live with them? Perhaps what is of greatest importance is to treat privacy and security as functional aspects of any new system. All too often, engineers do not incorporate privacy and security by design for a product that will affect hundreds of millions of people. The long-standing myths are that we need to give up our personal privacy for public safety, and that we need to sacrifice privacy for data analytics.
Function creep in services is also a concern, such as when tax file numbers become de facto national ID numbers, or biometric systems rolled out for one purpose are used retrospectively for unrelated aims. Function creep is the gradual widening of the use of a technology or system beyond the purpose for which it was originally intended. "After the fact" privacy intrusions do not grant citizens an opportunity to consent to mass-scale changes; rather, these are imposed on them without a consultation process. The information gathered through both public and private data may be used for good or ill, depending on the stakeholder. However, we cannot deal in "what-ifs" if we are to adhere to the ethical principles of the Universal Declaration of Human Rights.
At the present time, our laws are not keeping pace with information technology, so what may be considered legal might well be unethical. We are also witnessing transformative changes in state-society relations in many countries. Globalization and the associated range of economic, technological, social, and political developments have supported the rise of individualism away from thinking in terms of the “public good”. So, to summarize: in this chapter we have reviewed concepts related to privacy and confidentiality in the context of human rights and the emergence of new Information Communication Technologies and systems. Both privacy and efficiency are equally important and should be considered in the design and implementation of any ICT system, particularly on large-scale government ICT projects rolled out to citizens.
Great emphasis needs to be placed on engaging civil society in order to develop ICT programs which are robust and trustworthy. No system is impenetrable, but we can reduce end-user vulnerability by working together to better understand the social implications of technology, being aware of the risks, and planning ahead as much as possible to ensure that ICT works for us, and not against us. Thank you.
A profound, evidence-based mash-up with original footage by director, writer and producer Jordan Brown. If you want to know what is going on today with "screens", this is the documentary to watch. A few excerpts are available online, and embedded below.
Having observed Jordan work so hard over so many years to bring this message out, I'd encourage you all to give it some time! After all, aren't we all feeling the effects of "screens" at least some of the time, if not all of the time?
A clever choice of title too. See the Wizard of Oz. Please show your colleagues, and please show students, family and friends. Pat Scannell, you and Jordan are so in sync!
Source: Photos of Wizard of Oz © 1939 Warner Home Video.
Another fascinating documentary in the same vein as Jordan Brown's documentary is this one.
Katina Michael presents the pros and cons of implantables during a Studio Tech Talk at the 2017 Sections Congress held in Sydney Australia. Visit the SC2017 Website: http://sections-congress.ieee.org/
Thanks to the IEEE Society on the Social Implications of Technology (SSIT) for their unending support of our research. See more here: http://ieeessit.org/
Published September 15, 2017 by IEEEtv.org
Complete coverage of IEEESC17: https://ieeetv.ieee.org/event-showcase/sections-congress-2017