This edX course was delivered to about 1000 students worldwide. Thank you to the SDGAcademy, based out of New York, for the support I received in delivering three modules in this course, which looked at privacy, security, ethics and consumer data rights.
Some of the questions posted by students are addressed in this Q&A:
Since the inequalities that exist in the real world seem to be replicated in the digital world, how can we ensure that the rapid evolution taking place with respect to ICT doesn't maintain and/or exacerbate inequalities?
In Brazil, at least, Congress approved the General Law of Protection of Personal Data (LGPDP) on July 10th 2018, under the Projeto de Lei da Câmara 53/2018, though it still awaits approval by the President. What is the situation in other countries, and what can be expected given the dynamic change of apps and the force of law?
What is your opinion of microchip implants? They're celebrated for their convenience, but the implications for surveillance are enormous. And they don't seem like they offer the "user" much control. If I decided I didn't want to use it anymore, could I turn it off? Could I reset it? Change my settings? Or is that all done by the entity who installed the device? Who owns the microchip, technically?
Even though Tech is for good, can we see risks in that the integration of IT in society more or less forces us to live our lives according to IT-based implementations? Will an IT-based society demand that people be IT-active in that society?
As we regard Tech for bad, more stereotypical people-behaviour will certainly increase the possibilities that may have a bad impact on our lives. Surely Tech may have a good impact on values such as inclusiveness and availability. But, from a perspective of Tech for good, is Tech always for good, or is it perhaps only for the simplicity of using Tech in an otherwise too complex world?
I like the idea of an opt-in approach to IoT reporting--that I have to agree to let my smart devices report back to their servers about my activity--but I wonder if/how this conflicts with what we've heard in this course about the potential of Big Data. The data gathered by IoT is giving us more information than ever before, and it can be enormously beneficial if used appropriately. But if many people choose to opt-out of reporting, how will this affect the quality of Big Data and, consequently, the knowledge we are able to glean from it?
Obviously ICTs have great potential and benefits for human beings, including the achievement of the SDGs. However, there are huge risks with increased digitalisation, as mentioned in the chapters on Cybersecurity and The Downsides of Digital. What do you think about the future development of digital technologies? Should we be optimistic and focus on Tech for Good, or do we really need to worry about the risk of Tech for Bad? What can be the worst scenario? Is there any real risk that super-intelligent robots would one day take over planet Earth?
What happens to the millions of people whose jobs are replaced by technology?
Bridging the digital divide has been an ongoing debate in telecommunications. Initially, while I was working in the telecoms sector, we considered the inequalities that existed in access to a "plain old telephone". Remarkably, some people in Australia did not have access to a telephone line, depending on where they lived (rural areas and the outback in particular). At the same time, as India de-regulated and privatised its telecommunications, we would measure accessibility as TPV (telephones per village). Some villages had zero. This all changed to a degree with mobile and satellite-based telephones, but in those cases the issue was not access but affordability. Today, many people in developing nations enjoy 2G and 3G mobile telephony.

At the same time that markets were deregulating in newly industrialised countries (NICs), the advent of the Internet further changed the debate on the digital divide. Did a household have access to the Internet or not? And if so, what kind of access was it? Narrowband or broadband? Dial-up access to the Internet today is still better than nothing, but with the majority of the world using multimedia online, dial-up access does not get you much. So what level of Internet do you have, and what services do you require it for? These remain important discussions. In some places, wireless broadband is the only access available; in others it is fibre to the curb and then DSL for the tail end of the communications.

Today we are seeing further inequalities arise: those who have access to wearable devices and those who do not, those who have access to implantable medical solutions and those who do not, those who have access to prosthetic exoskeletons and those who do not, and those who have advanced e-payment systems versus those who can only transact at the point of sale. Importantly, we can speak of a number of "digital divides" that exist today, not just a single "digital divide".

And yes, inequalities can be exacerbated. In 2016, Australia decided to go fully online with its Census, and yet not all people had the access needed to complete that important statistical survey online. Not only did the servers hosting the survey crash, but many people in Australia's rural areas would not have had access even if they wished to fill out the compulsory survey. Technology used in this way discriminates against minority groups, those for whom English is a second language, those suffering from mental illness, and those who most require the services that such a survey may demonstrate a need for. We need to remember that there are people with special needs who have particular accessibility requirements, and we need to build new ways of offering ICT solutions to those who require them, but also to fall back on processes like "hard copy" surveys (as just one example) when required.
There has no doubt been an increasing emphasis on data protection legislation, regulation and principles across the world, with a few notable exceptions (e.g. the USA). In the European Union, the General Data Protection Regulation (GDPR) has set a global benchmark with an emphasis on 7 principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality (security); and accountability. Companies must abide by these principles if they wish to continue to do business in the European Union, or for that matter with global customers. The GDPR recognises that companies in breach of the Regulation are to be held accountable and penalised for not abiding by the law. We have already witnessed a number of substantial fines and penalties since the Regulation was enacted. Countries like Brazil that already had strong consumer protection laws are now adding to that strength by creating 'equivalency' with member countries of the European Union. This makes sense: if you want to do business with another country in consumer goods and services, then you may well need to demonstrate that you are trustworthy enough to make these international connections. While the LGPDP is not exactly the GDPR, the spirit of the regulation is the same.

In Australia we have witnessed the introduction of the Consumer Data Right (CDR) and Mandatory Data Breach Notification laws. These laws are supposed to make Australians feel "safer" in knowing which companies have been penetrated and whether the data they hold has been compromised by hackers. At the same time, the Consumer Data Right emphasises things like "data portability", as does the GDPR, demonstrating that consumers have a right to their data. The odd thing with the Consumer Data Right, however, is that many NGOs and members of civil society (e.g. financial NGOs and consumer representative groups) believe the CDR is counter-productive for data protection: that, in fact, the Consumer Data Right was introduced so that companies would have a greater understanding of the market and their practices, with a greater ability to manipulate tariffs, fees and costs associated with the "open banking" and "open energy" systems to come.

We have also seen the introduction of AI principles that describe "sensitive data" and "de-identification of data", and the need to pre-process information that is collected. Yet AI offers mechanisms to "re-identify" data into the future, which nullifies the ability to de-identify. Telecommunications laws are also changing across the world, making it mandatory for businesses that collect data (e.g. Internet service providers) to retain it in case of law enforcement requests. More recently in Australia, the Assistance and Access telecommunications amendment has allowed the government to demand technical capabilities that thwart encryption in consumer services (e.g. WhatsApp, Telegram, Signal, Google, etc.). So on the one hand we have "data protection" and on the other hand we have "access to data". It should all come down to the reason systems exist. In theory I can build a service or system that "abides by" data protection laws, consumer data rights laws and more, yet it may still be the most unethical service. New innovations are making this space even harder, laws are too slow to ensure rightful processes, and so many lawyers are now pointing to standards and soft regulation to enable the introduction of advanced services.
For those interested in the topic of microchip implants, please see the many peer-reviewed articles I have written in this space, as well as additional opinion pieces, interviews and the like. http://www.katinamichael.com/search?q=microchip%20implants
I have conducted many interviews with members of the body modification movement who have really aided my understanding of the role that microchip implants may have in society (and are having for a very small portion of global citizens). From my studies, people who adopt implants fall into several categories: technology entrepreneurs who see a real business case for a unique ID that is not transferable (although that is arguable with technologies such as RFID or NFC); citizen scientists and hobbyists who enjoy seeing how far they can take technology; individuals interested in body modification from an aesthetic and functional point of view; those who follow the latest fashion or hip trends; and those with a sort of obsession with placing whatever they can in their bodies, at times suffering from body dysmorphic disorder (BDD). There are more categories, especially in the prosthetics area, but for now I will focus on non-medical solutions.
My personal opinion about microchip implants for non-medical applications is that they should not be used for humans.
Although much media coverage emphasises "convenience" with respect to microchip implants, the jury is out on how convenient they actually are. You require multiple people present at the point of implantation; removal is not easy (unless you have a surgical removalist involved); and they often don't work (as has been noted by numerous people, including Andreas Sjöström of Sogeti, who trialled the technology last year). We have also seen implanted journalists have trouble accessing a physical apparatus like a photocopier (e.g. BBC reporters). See also "why implants are a bad idea": https://www.youtube.com/watch?v=1a2_sdwhF5A
Many countries have instituted identity schemes, including electronic or so-called Real-ID trusted identifiers. Please see this chapter for a historical overview of identity schemes, some of which dictate that a "Follow Me Number" is granted to a citizen either at birth, at the age of 12, or when a citizen becomes a taxpayer. http://www.katinamichael.com/research/2015/7/17/the-auto-id-trajectory-chapter-four-historical-background-from-manual-to-auto-id Many people believe this kind of identifier, once digital, will also curb cybercrime and more.
How much surveillance can a microchip enable? It all depends on the microchip. For the greater part, devices on the market are passive and only have a read range of about 10cm. They are, in effect, proximity tags or transponders. However, newer devices that are bigger and can fit more memory and storage in them can do a little more. There is also the ability to tether the microchip to tablets or smartphones and use the sensors in those devices to denote aspects of location, identification and condition.
Some microchips can have their own unique ID created; others come with an ID number already on the device, usually about 16 characters in length. Something embedded can be controlled by others, but they have to be able to access your unique ID number to either "deny" you a service or "offer" you one. Many proponents of the technology state that, even if someone were able to gain access to the unique ID number, changing the "locks" would simply mean blocking out that ID to ensure that only the right person gains access to a door or asset.
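To make the "changing the locks" idea concrete, here is a minimal sketch of my own (not drawn from any vendor's system; the UIDs and function names are hypothetical) of a door controller keyed off a tag's hard-coded unique ID. Revoking access means editing the allowlist, not the implant:

```python
# Hypothetical door controller keyed on RFID/NFC tag UIDs.
# "Changing the locks" = editing the allowlist; the implant's
# hard-coded 16-character UID itself never changes.

ALLOWED_UIDS = {
    "04A224B9C35E8012",  # example 16-hex-character tag UID
    "04117FB2C25F8134",
}

def grant_access(scanned_uid: str) -> bool:
    """Open the door only if the scanned UID is on the allowlist."""
    return scanned_uid.upper() in ALLOWED_UIDS

def revoke(uid: str) -> None:
    """Block a leaked or stolen UID without touching the implant."""
    ALLOWED_UIDS.discard(uid.upper())

print(grant_access("04a224b9c35e8012"))  # True: door opens
revoke("04a224b9c35e8012")               # UID compromised? Drop it.
print(grant_access("04a224b9c35e8012"))  # False: same implant, no entry
```

This is also where the control question bites: whoever administers the allowlist, not the person carrying the implant, decides whether the ID still "works".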
Some people have demonstrated the use of implants for employees. One could argue how much convenience this gives employees as opposed to the control it grants employers: to know when you have arrived at work, when you take restroom breaks, how long your lunch break is, and how fast you finish tasks. All of this can be achieved with localised readers, or even mobile readers that may or may not be overt.
The microchip is technically owned by the manufacturer or service provider, but it all depends on the service agreement. If it is a DIY installation, then you likely own the device; but if it happens via a third party (say, like a heart pacemaker), then the device is owned by the third party supplying the service. I have written about this also. Interestingly, even with some wearable devices (like Google Glass), ownership rests with the company that sells the device, and it has a right to "turn off" the device remotely if it is being used against the terms and conditions. http://www.katinamichael.com/research/2017/9/22/implantable-medical-device-tells-all
Please feel free to search more at www.katinamichael.com/find
While I did not speak on microchipping people in the course modules, I appreciate that this question was asked, as we move toward very innovative ways of identification (with or without the citizen's consent): e.g. the ability to facially recognise people using an electronic ID in Australia (the driver's licence scheme), or in China to do a "criminal hit" using facial recognition, or even social rating systems, or biometric systems (like Aadhaar in India). The issue is that these technologies are being used retrospectively in ways other than those for which they were instituted, and consent has really become "fuzzy". And I think this goes back to the "inequality" issue. Members of the "untouchables" in India have no fingerprints, so what happens to their Aadhaar number? These are complex issues.
Theoretically it also becomes easier to socially sort with digital technologies, as was demonstrated in World War 2 with the segregation of minority groups. And today we have much more sophisticated technology than punch cards. http://www.katinamichael.com/interviews/2014/1/23/the-holocaust-survivor
We did not get to address this question in the live Q&A, but I would like to emphasise that it gets to the heart of the matter with respect to the course module name "Tech for Good" as related to the Sustainable Development Goals.
First and foremost, let us consider the term "dual-use". What does this mean? That the very same technology can be used both for good and to cause others harm. No technology is a zero-sum game. Life is never black or white. But what happens when technologies are introduced by government or business, and someone has indeed made a decision to apply the technology in one way and not another? Consider the "location sensors" embedded in smartphones. Indeed they can be used to play a location-based game, but they can also be used by a suspicious spouse to "track" their partner, and by authorities in an investigation for evidence collection in a given case.
I have asked these same questions of colleagues. Can something ever be considered "good" or "bad"? A technology does not have intent, as some of my close colleagues in the IEEE have argued. Rather, they argue that a technology can be misapplied or misused, but that this kind of harm comes from human beings, as individuals tasked with provisioning services in different contexts. We can point to many cases: e.g. Google collecting more than just photographs of our home frontages (i.e., including household MAC addresses and more), or Google continuing to identify your "location" even after tracking had been switched off, or games such as Pokemon Go that were collecting more than just "basic" data and access to user cameras until they were later found out, and more.
Two theoretical approaches used to describe the application of IT in society are (i) technological determinism and (ii) the social shaping of technology. These approaches dictate either that technology is set on a given trajectory from the moment it is created, or that technology is shaped by humans in its application. These two approaches have preoccupied science and technology studies (STS) professionals for decades. Modern-day interpretations say that both approaches are inadequate in explaining what is really going on, and that we should instead consider "praxistemology" or even "experimental systems", which are defined as being "just what they are": innovations that may or may not be successful, depending on whether that cluster of society sees them as relevant or useful.
If we said that "all tech is good" by its very nature, then we could argue that misalignment between societal values and technology construction is the major issue. Does the tech we deploy "fit" or "align" with the values or ethics of a given society? Mostly, we do not consider enough the consequences of innovations and the risks that they bear (which may be anticipated or unanticipated).
Now let's think about it some more: was the "atomic bomb" good or bad? Did it come with some inherent characteristics? Did society consider the potential harms? How involved was society in the decision to drop the bomb on Hiroshima? Or even to drop a second bomb? The works of Norbert Wiener, the father of cybernetics, might spur some more thoughts on this matter. http://www.katinamichael.com/research/2017/6/7/norbert-wiener-and-the-call-for-ethical-engagement
As I noted in the video, I liked this question. It is something I have considered both as an academic and as a citizen.
I cannot stress enough the importance of gathering only the data that a given service requires in order to function. Additionally, I cannot stress enough the importance of gathering data for a single purpose and not allowing retrospective use for a purpose other than the one originally intended. The maxim should be: "gather only what you need and no more".
I also think it is very important to encrypt data in order to protect it. But most importantly, "pre-processing" is required to remove all sensitive data gathered in a given exercise. Pre-processing means that you don't have to have "personally identifiable information" available or linked to given datasets. If you do happen to require name or address data along with phone numbers (e.g. for billing consumers for services rendered), then a process of de-identification is required. De-identification does not mean that re-identification cannot happen down the track through AI, but the process of anonymising records is very important.
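As a minimal sketch of what this pre-processing might look like in practice (my own illustration, not from the course; the field names are hypothetical), direct identifiers are dropped outright and any identifier needed for record linkage is replaced with a salted one-way hash:

```python
import hashlib
import secrets

# Illustrative de-identification pass over a billing record:
# keep only what the service needs, pseudonymise what must link.

SALT = secrets.token_hex(16)  # keep secret; destroy it to harden anonymisation

def pseudonymise(value: str) -> str:
    """Replace an identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def de_identify(record: dict) -> dict:
    """Drop name/address entirely; generalise or hash the rest."""
    return {
        "customer": pseudonymise(record["phone"]),  # linkable, not readable
        "usage_kwh": record["usage_kwh"],           # the data the service needs
        "postcode": record["postcode"][:2] + "xx",  # generalised location
    }

record = {"name": "Jane Citizen", "phone": "+61 400 000 000",
          "address": "1 Example St", "usage_kwh": 412.5, "postcode": "2522"}
print(de_identify(record))  # name and address never leave the billing system
```

As noted above, this does not guarantee that re-identification is impossible down the track, especially once datasets are combined, but it ensures raw identifiers are never sitting in the analytical dataset in the first place.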
If we gather data from smart devices, smart meters, smart hubs, human activity monitoring systems and the like, we need to ensure that the devices have security built in. This requires a change in engineering design mindset, where privacy and security functions are seen as integral to the development of any new smart technology.
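As a small sketch of that mindset (hypothetical, not any vendor's actual configuration), the safe choice should be the factory default, with telemetry strictly opt-in:

```python
from dataclasses import dataclass

# Hypothetical smart-device defaults illustrating privacy and
# security by design: protection is on, reporting is opt-in.

@dataclass
class DeviceConfig:
    telemetry_enabled: bool = False  # opt-in, never opt-out
    encrypt_at_rest: bool = True     # data protected by default
    retention_days: int = 30         # data minimisation: short retention
    remote_admin: bool = False       # smallest attack surface by default

cfg = DeviceConfig()
assert not cfg.telemetry_enabled   # the device ships silent
cfg.telemetry_enabled = True       # only an explicit user action turns it on
```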
"Free riders" are those accused of not consenting to their data being used in an aggregated fashion by companies or governments, while still wishing to reap the benefits of the decision making that big data drives. However, these individuals should maintain their right to privacy. Participatory design is important from the outset, but all too often we see a business mindset that says "build it and they will come", without asking "is this what you want, and is this the best way to get at the questions we need answered?"
Increasingly, we seem to want to collect "everything" and worry about its use down the track, and yet this is not really the right thing to do. We may require a blockchain approach to harnessing the power of data, through individual consent, allowing people to trace how their data (smart home, IoT devices, social media, Internet search, etc.) is used over time, and even to receive recompense for its uses.
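As a minimal sketch of what such an approach could look like (my own assumption of a tamper-evident, blockchain-style usage log, not a production design), each entry hashes the one before it, so a consumer can audit the history of uses and detect retrospective edits:

```python
import hashlib, json, time

# Hypothetical tamper-evident usage log: each entry hashes the
# previous one, so quietly rewriting history breaks the chain.

def add_entry(log, dataset, purpose, consent_ref):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "dataset": dataset, "purpose": purpose,
             "consent": consent_ref, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every hash; a single edited entry is detectable."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
add_entry(log, "smart_meter_2024", "billing", "consent-001")
add_entry(log, "smart_meter_2024", "tariff_research", "consent-002")
print(verify(log))  # True: history intact and auditable by the consumer
```

Consent references and recompense could hang off such a log; the essential property is simply that no one can quietly rewrite how the data was used.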
Big data is here to stay, notwithstanding its major challenges. Some of the greatest issues in the big data world will play out in the insurance industry. http://www.katinamichael.com/search?q=big%20data
The term "digital disruption" has been coined to describe the digital transformation processes that many governments and businesses are undergoing. In these times disruption is causing greater uncertainty and making external environments even more turbulent. Instead of stabilising, we can imagine the next 30-50 years to continue causing major changes to the way people interact using technology. Taken too far, this disruption has found its way into "transhumanist beliefs" that usher in a new way of being through becoming one with technology.
Just last week I was discussing the ways in which cybersecurity will be challenged by non-traditional potential harms. Consider CRISPR technology in light of the potential to change humans in a manner like never before. https://www.technologyreview.com/s/612458/exclusive-chinese-scientists-are-creating-crispr-babies/
So many hundreds of thousands of people have voluntarily provided their DNA data via saliva swab to companies like 23andme, ancestry.com, ancestryDNA.com and more. The issue is: how can this data be used if hacked? What does it mean to target a whole population with bioterrorist means based on their human genome? Is this in fact an "offset" in military terms?
The worst-case scenario may well be the first candidate: microchip implants being used by totalitarian regimes for uberveillance. Uberveillance is a term my husband MG Michael coined in 2006, and it has received international recognition over the last decade and more. It refers to applying implants for control, usually instituted by persons in authority (e.g. government). http://www.katinamichael.com/media/2007/11/26/scary-stuff http://www.katinamichael.com/research/2010/6/1/toward-a-state-of-berveillance
While these scenarios remain possible, I would think that positive outlooks on the adoption of technology will prevail. While there is no single sustainable development goal named "technology", technology must be an enabler that wraps around all the goals, including those for peace and justice.
We are presently continuing to push the bounds of machine agency, especially through the creation of autonomous systems that have become, in some cases, automated data collection mechanisms. We are beginning to see the deployment of machines like Knightscope's K5, which is reminiscent of the Daleks in Dr Who. However, will these kinds of technologies replace policemen and women? Will the military be largely made up of machines that do the fighting, leaving humans to strategise how best to deploy the automata?
For me, one risk that the digital arena poses is a disconnection and disassociation of people from others, but also from nature. I believe that once a comprehensive disassociation takes place through "digital only" platforms where people spend the majority of their lives, we may see people disassociate from real problems in the environment, be it climate change, deforestation, rising sea levels, the pollution of our waterways and more.
So the greatest risk to humanity, perhaps, is not from bots or anthropomorphised robots or superintelligent machines, but from humans themselves.