Digital Society with Vontobel
New surveillance tech means you'll never be anonymous again
Forget facial recognition. Researchers around the world are creating new ways to monitor you. Technologies that read your heartbeat with lasers or trace your microbiome are already in development.
The fight over the future of facial recognition is heating up. But it is just the beginning, as even more intrusive methods of surveillance are being developed in research labs around the world.
In the US, San Francisco, Somerville and Oakland recently banned the use of facial recognition by law enforcement and government agencies, while Portland is talking about forbidding the use of facial recognition entirely, including by private businesses. A coalition of 30 civil society organisations, representing over 15 million members combined, is calling for a federal ban on the use of facial recognition by US law enforcement.
Meanwhile in the UK, revelations that London's Metropolitan Police secretly provided facial recognition data to the developers of the Kings Cross Estate for a covert facial recognition system have sparked outrage and calls for an inquiry. The Information Commissioner's Office has launched an investigation into the legality of the program. But the scandal comes at the same time as a landmark ruling by the High Court in Cardiff that said the use of facial recognition by South Wales police is legal. (The decision is likely to be appealed).
Facial recognition is only the tip of the creepy surveillance iceberg, however. If strict regulation is brought in to govern the use of facial recognition, it is possible we may simply see a switch to one, or several, of the other forms of surveillance technology currently being developed. Many are as invasive as facial recognition, if not more so – and potentially even harder to regulate. Here's a look at some of what might be coming down the pipeline.
How you walk
The rapidly growing field of behavioural biometrics is based on recognising individuals from their patterns of movement or behaviour. One example is gait recognition, which may well be the next surveillance technology to hit the mainstream, especially if facial recognition comes under tight regulation. The technique is already being trialled by police in China, which frequently leads the field when it comes to finding new ways to monitor its people, whether they like it or not.
There are a few different ways of recognising an individual from the way they walk. The method being trialled by Chinese police is based on technology from a company called Watrix, and relies on the use of video surveillance footage to analyse a person's movements as they walk. In a recently granted patent, Watrix outlines a method of using a deep convolutional neural network to train an AI system capable of analysing thousands of data points about a person as they move, from the length of their stride to the angle of their arms, and use that to recognise individuals based on their 'gait record'. Watrix claims that its systems achieve up to 94 per cent accuracy, and that it holds the world's largest database of gait records.
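To make the idea concrete, here is a deliberately simplified sketch of the matching step, assuming each person's walk has already been reduced to a handful of numeric features. It is not Watrix's system: their pipeline extracts 'gait records' from video with a deep convolutional neural network, and every name and number below is invented for illustration.

```python
import numpy as np

def match_gait(query: np.ndarray, database: dict, max_distance: float = 2.0):
    """Return the enrolled identity nearest to the query gait record,
    or None if even the best match is too far away."""
    best_id = min(database, key=lambda who: np.linalg.norm(query - database[who]))
    if np.linalg.norm(query - database[best_id]) > max_distance:
        return None
    return best_id

# Hypothetical 4-feature gait records: stride length (m), cadence
# (steps/min), arm-swing angle (deg), torso sway (deg). Real systems
# use thousands of data points and scale-normalise them first.
db = {
    "person_a": np.array([0.74, 112.0, 38.0, 2.1]),
    "person_b": np.array([0.81, 98.0, 24.0, 3.4]),
}
print(match_gait(np.array([0.75, 111.0, 37.5, 2.2]), db))  # -> person_a
```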
The vision-based methods of gait recognition being developed by Watrix and others can be used to identify people at a distance, including in crowds or on the street, in a similar way to facial recognition – which could make it a quick and easy substitute if regulation is brought in against facial recognition. Increasingly, video surveillance systems are collecting multi-modal biometrics. That means they may be using facial recognition and gait recognition simultaneously, which at least in theory should both increase accuracy and tackle issues like identifying people facing away from the cameras.
Another method for identifying people by their walk relies on sensors embedded in the floor. Researchers from the University of Manchester used data from 20,000 footsteps belonging to 127 individuals to train a deep residual neural network to recognise 24 distinct factors, like the person's stride cadence and the ratio of time on toe to time on heel (the people did not need to take off their shoes, as the system analyses movement rather than shape of the foot). Using this system, they were able to identify individuals with over 99 per cent accuracy in three 'real world' scenarios: the workplace, the home environment, and airport security checkpoints.
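For a flavour of what such a floor-sensor system measures, here is a toy sketch that computes just two of those 24 factors, stride cadence and the toe-to-heel contact ratio, from simulated pressure traces. The sampling rate, thresholds and signals are all invented; in the published work, features like these feed the deep residual neural network rather than being read off directly.

```python
import numpy as np

SAMPLE_RATE_HZ = 100  # assumed floor-sensor sampling rate

def stride_cadence(step_onsets_s):
    """Steps per minute, from the times at which successive steps begin."""
    return 60.0 / float(np.mean(np.diff(step_onsets_s)))

def toe_heel_ratio(heel_pressure, toe_pressure, contact_threshold=0.1):
    """Ratio of time the toe sensor is loaded to time the heel sensor is."""
    toe_time = np.sum(toe_pressure > contact_threshold) / SAMPLE_RATE_HZ
    heel_time = np.sum(heel_pressure > contact_threshold) / SAMPLE_RATE_HZ
    return toe_time / heel_time

# One simulated footstep: the heel strikes first, the toe loads later
# and stays loaded longer, as in a normal heel-to-toe roll.
t = np.linspace(0, 0.8, 80)
heel = np.clip(np.sin(np.pi * t / 0.5), 0, None)          # loaded ~0.0-0.5 s
toe = np.clip(np.sin(np.pi * (t - 0.2) / 0.6), 0, None)   # loaded ~0.2-0.8 s

print(round(stride_cadence([0.00, 0.55, 1.10, 1.65]), 1), "steps/min")  # 109.1
print(round(toe_heel_ratio(heel, toe), 2))  # > 1: more time on the toe
```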
According to the researchers, the benefits of this kind of identification over vision-based systems are that it is less invasive, and less prone to disruption from objects or other people obscuring the camera's view. Of course, another way of saying that it is less invasive is that it is harder for people to detect when it's being used on them. People might notice when they're being watched by cameras, but they're much less likely to be aware of sensors in the floor.
Your heartbeat and breathing
Your heartbeat and your breathing pattern are as unique as your fingerprint. A small but growing number of remote sensing technologies are being developed to detect vital signs from a distance, piercing through skin, clothes and in some cases even through walls.
In June, the Pentagon went public with a new laser-based system capable of identifying people at a distance of up to 200m. The technology, dubbed Jetson, uses a technique known as laser doppler vibrometry to detect surface movement caused by your heartbeat.
The eventual goal is to be able to identify a target within five seconds based on their cardiac signal, or 'heartprint.' At the moment, however, the Pentagon's system has a number of limitations: the target needs to be standing still, needs to be wearing light clothing (thick clothing, like a heavy coat, can interfere with the signal), and most importantly there needs to be a clear line of sight between the laser and the target.
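The matching step can be sketched in miniature. The toy below is a loose placeholder rather than anything resembling Jetson's actual classifier: it assumes the vibrometer already yields a clean single-beat waveform, and compares it against enrolled 'heartprints' using normalised cross-correlation. All signals and names are invented.

```python
import numpy as np

def xcorr_peak(a, b):
    """Peak normalised cross-correlation of two signals (1.0 = identical shape)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

def identify(beat, templates, threshold=0.8):
    """Best-matching enrolled identity, or None below the match threshold."""
    scores = {who: xcorr_peak(beat, tpl) for who, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# Hypothetical single-beat templates, 1 s sampled at 100 Hz.
t = np.linspace(0, 1, 100)
templates = {
    "subject_1": np.exp(-((t - 0.30) / 0.05) ** 2),                   # sharp pulse
    "subject_2": np.exp(-((t - 0.25) / 0.10) ** 2) * np.sin(20 * t),  # broader, oscillatory
}
noisy_beat = templates["subject_1"] + 0.05 * np.random.default_rng(0).normal(size=100)
print(identify(noisy_beat, templates))  # -> subject_1
```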
Coats, walls, even rocks and rubble are no obstacle for another nascent surveillance technology, however. Researchers are hard at work developing radar-based systems capable of tracking vital signs for a range of purposes, from non-invasive monitoring of patients and aiding in medical diagnoses to finding survivors in search and rescue operations.
Monitoring indoor movements
But why bother installing new radars when we're already bathed in a different sort of radiation pretty much all the time? Wi-Fi can also be used to locate individuals, identify their position in the room and whether they're sitting or standing, and even track vital signs.
Until recently, it was thought a dedicated Wi-Fi network was required, in part because the technique depends on knowing the exact position of the Wi-Fi transmitters. In 2018, however, a group of researchers at the University of California built an app which allowed them to figure out the exact location of existing Wi-Fi transmitters in a building. With that information, they were able to use normal smartphones and existing ambient Wi-Fi networks to detect human presence and movement from outside the room. "With more than two Wi-Fi devices in a regular room, our attack can detect more than 99 per cent of user presence and movement in each room tested," the researchers claim.
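The core intuition is easy to demonstrate. The sketch below flags movement from nothing more than the variance of received signal strength (RSSI) over time, a crude stand-in for the researchers' method, which works on finer-grained channel data and the transmitters' exact positions; every number here is made up.

```python
import numpy as np

def movement_flags(rssi_dbm, window=50, var_threshold=2.0):
    """One boolean per window: True where RSSI variance suggests motion."""
    n = len(rssi_dbm) // window
    windows = np.asarray(rssi_dbm[: n * window]).reshape(n, window)
    return windows.var(axis=1) > var_threshold

rng = np.random.default_rng(1)
empty_room = -60 + 0.5 * rng.normal(size=500)      # stable ambient signal
person_walking = -60 + 3.0 * rng.normal(size=500)  # multipath fluctuations
flags = movement_flags(np.concatenate([empty_room, person_walking]))
print(flags)  # roughly ten False windows followed by ten True windows
```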
Some research groups want to go further than just using Wi-Fi to identify people. Based on movement and vital signs, they claim it is possible to monitor the subject's emotional state and analyse their behavioural patterns. These researchers have formed a company to market a 'touchless sensor and machine learning platform for health analytics', which they claim has been deployed in over 200 homes and is being used by doctors and drug companies.
Beyond the potential benefits for healthcare and emergency responders, however, the technology also has obvious applications for surveillance. Technology which is capable of building up a profile of a person's heartbeat and breathing in order to watch for abnormalities in a health context is readily adaptable to identifying one person from another. Radar-based security surveillance systems capable of detecting people are already on the market. It's only a matter of time, and perhaps not even very much time, before the ability to identify individual people is layered on top.
Tracking your microbial cells
Every person emits around 36 million microbial cells per hour, and human microbiomes are unique for a certain period of time (a 2015 study found that around 80 per cent of people could be re-identified using their microbiome up to a year later). This means that the constant trail of microbial traces we leave behind us, as well as those we pick up from our surroundings, can be used to help reconstruct a picture of a person's activities and movements, like where they walked, what objects they touched and what environments they have been in.
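As a rough illustration of how such re-identification could work, the toy below matches a later microbiome sample to the closest enrolled profile using Bray-Curtis dissimilarity, a standard ecology metric. The 2015 study built far more robust 'codes' from metagenomic markers; the profiles below are invented.

```python
import numpy as np

def bray_curtis(u, v):
    """Dissimilarity between two abundance profiles: 0 identical, 1 disjoint."""
    return float(np.abs(u - v).sum() / (u + v).sum())

def reidentify(sample, enrolled):
    """Enrolled person whose profile is closest to the new sample."""
    return min(enrolled, key=lambda who: bray_curtis(sample, enrolled[who]))

# Invented relative abundances of five gut taxa per enrolled person.
enrolled = {
    "person_a": np.array([0.40, 0.30, 0.15, 0.10, 0.05]),
    "person_b": np.array([0.10, 0.20, 0.40, 0.20, 0.10]),
}
# A sample taken months later: the community has drifted, but it is
# still much closer to person_a's profile than to person_b's.
later_sample = np.array([0.35, 0.33, 0.17, 0.08, 0.07])
print(reidentify(later_sample, enrolled))  # -> person_a
```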
Monitoring your scent
Identifying people by smell is actually one of the oldest police tricks in the book, but doing it with computers instead of bloodhounds is still in its infancy in comparison with facial and fingerprint recognition. The field of odor biometrics may be useful for individual authentication but is not well suited to mass surveillance – separating exactly who smells like that in a crowd can be tricky, as anyone who has been stuck in public transport on a hot day probably knows.
Then there are the identification techniques designed for very specific use cases. One pioneering suggestion from a team of Japanese researchers for an anti-theft system for cars was based on using 360 sensors to measure the unique shape of the driver's rear end. Despite achieving a 98 per cent accuracy rate in trials, tragically this important security innovation does not seem to have gone any further than lab testing.
The regulation problem
Trying to regulate surveillance technologies one by one is likely to be futile. The surveillance industry is simply developing too fast, and it is too easy to switch from one kind of surveillance to another. The difference between a facial recognition system and one based on behavioural biometrics may simply be a matter of swapping the software on an existing camera network, for example.
Increasing cooperation between government agencies and the private sector also means that regulations like San Francisco's, which limit only government use of certain types of surveillance, are insufficient, according to Katina Michael, a professor in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering at Arizona State University.
Amazon is perhaps the prime example of this blurring of the lines between private and government surveillance. Amazon has previously come under criticism for selling facial- and emotion-recognition systems to police. More recently, it has been revealed that Amazon is partnering with hundreds of law enforcement agencies in the US, including giving them access to surveillance data gathered through its Ring home doorbells in return for police actively marketing the devices to the community.
"Fundamentally, we need to think about democracy-by-design principles," Michael says. "We just can’t keep throwing technologies at problems without a clear roadmap ahead of their need and application. We need to assess the impact of these new technologies. There needs to be bidirectional communication with the public."
Surveillance changes the relationship between people and the spaces they live in. Sometimes, that change is for the better; there are real benefits from increased security, and the insights which can be gained into how people use public places can be used to help shape those places in the future. At the same time, however, we need to ask ourselves whether the future society we want to live in is one which constantly watches its citizens – or, more likely, one in which citizens are never totally sure when, how and by whom they’re being watched.
Digital Society is a digital magazine exploring how technology is changing society. It's produced as a publishing partnership with Vontobel, but all content is editorially independent. Visit Vontobel Impact for more stories on how technology is shaping the future of society.
Today marks eighty years since Britain declared war on Germany, the start of World War Two.
As a result, Australia joined Britain in the Allied forces, and the news was announced to the people of Australia on every radio station in the country in a formal speech by the prime minister of the time, Robert Menzies.
This got us thinking: with the advancement of technology, how would we receive the news if a global-scale war broke out in our modern age?
Dr Katina Michael from the School of Computing and Information Technology at the University of Wollongong joined Brooke and Natal on Talk and Toast to discuss this topic.
Katina discusses citizen journalism as the quickest way for wartime announcements to spread, stating 'it would come through social media feeds and we would be struggling to know whether this was the truth or not'.
In a world of Donald Trumps and fake news, Katina went on to mention 'we would have to wait until we found evidence'.
In such a scary situation, we would be forced to assess 'how powerful is the voice of the people during catastrophes and crises?', Katina stated.
While fortunately it has been decades since the world saw violent conflict on such a global scale, technology and communication during warfare look set to become ever more sophisticated.
Citation: Brooke Taylor, September 3, 2019, If World War Three Broke Out Today, How Would We Be Told?, AFTRS FM, https://aftrsfm.aftrs.edu.au/if-world-war-three-broke-out-today-how-would-we-be-told/
We’ve all heard the stories of cars being driven into bodies of water because the driver trusted the navigation system. Could technology be making us less intelligent? Trust in tech is the topic of discussion for Arizona State University professor Katina Michael from the School for the Future of Innovation in Society and Alex Halavais, an associate professor in the School of Social and Behavioral Sciences at ASU.
In this segment:
Katina Michael, Arizona State University professor from the School for the Future of Innovation in Society; Alex Halavais, an associate professor in the School of Social and Behavioral Sciences at ASU
Source: Tech episode: https://azpbs.org/horizon/2019/08/is-technology-hurting-our-intelligence/
Source: Whole episode: https://www.pbs.org/video/8-14-19-stock-market-technology-slavery-ze40pa/
Citation: Katina Michael and Alex Halavais with Ted Simons, August 14, 2019, “Is technology hurting our intelligence?”, Arizona Horizon, PBS: Channel 8, https://azpbs.org/horizon/2019/08/is-technology-hurting-our-intelligence/
If you missed the fourth annual TEDxASU event earlier this spring, you’re in luck — the presentations are now available to view online.
The March 25 student-organized event showcased nine speakers with expertise ranging from cancer diagnosis and plastic pollution to space governance and the nexus of art and technology.
“It’s important to us to place a variety of people on stage — students, faculty, community and industry leaders from different backgrounds and perspectives,” says Ammar Tanveer, founder and executive director of TEDxASU.
Speaking to the theme “NextGen,” presenters cast their minds to the 22nd century to imagine what waits on the horizon in their respective fields.
“We settled on NextGen for a couple reasons,” says Tanveer, a doctoral candidate in ASU’s College of Liberal Arts and Sciences. “We wanted to convey that TEDxASU was taking a step forward as an organization, but also to focus on the future broadly, and incorporate talks from different viewpoints and disciplines.”
More than 1,600 people attended the event at Gammage Auditorium, an increase from the attendance at the previous events which were held at Tempe Center for the Arts and the Marston Exploration Theater.
Katina Michael, a professor in the School for the Future of Innovation in Society and School of Computing, Informatics, and Decision Systems Engineering, explored the possibility of widespread brain implants and the dangers inherent in such a technology.
She imagined a future in which humans become so thoroughly integrated with the digital world that bodies become secondary and, in some cases, obsolete altogether. Michael examined the benefits of a purely digital existence before calling into question the effects, some of which we're grappling with already.
“Ladies and gentlemen, we are becoming entangled in the wires and cables,” said Michael. “We lust for high tech but have no overload switch and are short circuiting as a result.”
Source: August 6, 2019, Knowledge Enterprise Development, Arizona State University, https://research.asu.edu/20190806-tedxasu-event-looked-ahead-future
Earlier this summer, almost 100 cars ended up stuck in a muddy Colorado field. An accident in Denver led Google Maps to suggest an alternate route to the airport.
“I took the exit and drove where they told me to,” one woman told a local TV station. The road turned to dirt, then mud. The lead cars got stuck, and the rest had no way of turning around.
None of them apparently paused to think that dirt roads don’t usually lead to major airports. Instead they offered the lemming defense: “Well, all these other cars are in front of me, so it must be OK,” the same woman said.
Blindly following navigation apps has proved to be a losing bet for many people. To wit: A Belgian woman who went to pick up a friend from the train station and ended up in Croatia. The three women in Washington State whose “road” turned out to be a boat launch ramp. The three Japanese tourists in Australia who followed GPS directions to North Stradbroke Island from Brisbane, despite the fact a 9-mile-wide strait separates the two.
How did she not notice she had driven across not just one country, but four or five? How did the others not notice their car was in a lake? An ocean?
It raises the question: Is technology making us stupider?
“Our senses are being overridden by trust in tech,” said Katina Michael, a professor in the School for the Future of Innovation in Society at Arizona State University. “I think we’ve stopped thinking. We don’t want to do the heavy lifting to navigate.”
Michael researches emerging technologies and their social implications.
“Our mind is in the cloud and it has to be on the ground,” she said. “While we have this duality of this virtual and physical space, we tend to allow the virtual to override the physical. That’s what I think is happening. … People have not only lost their ability to navigate, but also their ability to quantify, to assess, to consider even the risks. It’s about us being automated. … We’re getting used to not using our brains in certain ways. That takes away our ability to assess things in general.”
Back to the muddy field outside Denver. What happened there?
“It’s pretty straightforward: Someone trusted their phone nav more than they trusted themselves or their eyes,” said Alex Halavais, an associate professor in the School of Social and Behavioral Sciences at ASU, where he directs the master's degree program in social technologies.
“I think the numbers there are more an issue of people following the car in front of them rather than the nav in front of the first few people,” he said. “It was a matter of trusting the technology over their experience or their eyes. … It’s not that unusual to trust the navigation works, because you have a background that says it has so far, it hasn’t lied to you yet. In this particular case I suspect it was just a matter of once you made that initial decision it was hard to back out of it — literally.”
What about people driving into bodies of water, a scenario we can all agree lacks ambiguity? Why would they not trust their eyes when confronted with the ocean?
“It’s a matter of 'it works better,'” Halavais said. “My wife makes fun of me when we do the whole Star Wars thing — turning off the targeting computer. When I trust myself over my nav, she makes fun of me. Waze does know better than I do in nine cases out of 10. If you have a track record that says the navigation saves you time or effort over a long period of time, you’re going to keep trusting it.”
Michael zeroes in on the one time out of 10 that tech doesn't work. Why, then, do we trust it?
“Because it works part of the time, and that’s something I spent a good six years analyzing,” she said. She was awarded a large grant in Australia to look into location-based services.
“What we found is that we cannot always trust tech,” she said. “Tech is not reliable. Tech does not always work. It’s not always accurate. We think because it works most of the time, that it works all of the time, which is why Japanese tourists try to drive from Brisbane to Stradbroke Island, why Swiss tourists go down a goat trail, why someone is driving a car and is going down a boat ramp, even though they can see boats in front of them they keep driving because the GPS says keep driving. What’s going on? It’s the fact we’ve been fooled into believing that the technology always works. … When it fails, it fails cataclysmically.”
So Halavais trusts tech and Michael doesn’t. But their two roads do converge in the wood. Halavais contends some of these cases would have happened anyway — people have been making stupid driving mistakes long before apps appeared — but he argues it’s a nice media hook when it can be blamed on nav failure.
“It’s a microcosm of a much larger issue about trust in technology, which is, I would argue, the most pressing issue of today in terms of our society,” he said. “That seems a little overblown, but it certainly is true. What we trust and how we establish trust has been significantly disrupted by technology, over the last 10 years especially. This is just one small bite of that.”
Don’t trust your instinct and you will lose your instinct, Michael says.
“What’s happened to good old common sense?” Michael said. “A lot of things I talk about in my social implications of technology area is about common sense. I’m not saying anything that’s revolutionary. … What is lacking here is our inability to maintain the use of our brain. When you don’t use something, you lose it. My concern is while we’re allegedly the smartest generation, we’re going to become a dumb generation.”
It’s not a question of "kids these days," Halavais said. It’s a multigenerational issue.
“Technology is people,” he said. “That’s kind of the tricky bit. It’s a very complicated question, how we work out the trust in this. Why do I trust Waze? It’s because I know a little bit about the process by which it’s establishing where the police are. It’s because I know there’s a collaboratory of people out there feeding information into it. I trust that more than I trust Google, where I’m not quite sure how the technology is establishing what it finds. It’s a very complicated set of questions. We also know that those who are older are far more likely to spread fake news, so they trust what they see on Facebook far more than the younger generation does. … Where we place our trust is certainly conditioned by our experience.”
Until now, most people were aware of general rules of travel, like the fact that a 400-mile drive is going to take about eight hours, or that you’re going to get about 300 miles from a tank of gas.
“When our spatial awareness goes, a lot of things suffer,” Michael said. “It’s not just being in a car. It can be being at home. It can be responding to a child. It can be responding to an emergency.”
Our senses are being dulled because automation takes them over without context, she said. We text and write messages like machines. We are becoming automated.
“Kids when they write texts and they put a word on each line — it’s spasmodic, like ‘Get me,’” Michael said. “There’s no structure and there’s no flow to a sentence that says, ‘Hi Mum, I’ve arrived at the station. How was your day? Can you pick me up?’ We’re always trying to do the least number of actions. In being algorithmic like that, context is missing.”
Michael is Australian. Once she attended a dinner and sat next to a brilliant woman, top of her field in the world. The woman asked Michael how long it took to drive from Australia to New Zealand.
“There’s a myopia we’re all suffering,” Michael said. “We work on a little thing but the context of the greater scape of the nation or country or world or our suburb or community is missing. People have lost touch. And because you’ve lost touch with that, everything else is relative. Of course you’re going to be duped by the device, because you don’t even know basic geography. You don’t even know what it’s like to be human, in the world, on the earth. You’re thinking that’s not important, or by some bizarre circumstance you were never taught basic fundamental things. I always say, ‘You might not have to change a tire ever in your life, but it’s good to know how to do it.’”
Source: Scott Seckel, August 6, 2019, “Losing our minds to tech: Would you drive into a lake just because your navigation system told you to?”, ASUNow, https://asunow.asu.edu/20190806-discoveries-losing-our-minds-tech
Kennedy’s experiments on himself were groundbreaking, but they also showed how easy it is to adversely interfere with our brain when we start to push and prod it. When it comes to two-way interfaces with computers though, the potential risks escalate. This is where we enter deeply uncharted territory, as it’s not clear what happens when our brains come under the influence of machines and apps, or how these capabilities might disrupt social norms and behaviors. As my colleague Katina Michael explored in a recent TEDxASU talk, the social and personal risks associated with brain implants go far beyond their potential health impacts.
Last week the father of missing Belgian backpacker Theo Hayez made an emotional plea to WhatsApp to release his son's data.
But the messaging service uses end-to-end encryption which appears almost impossible to decode.
We investigate the opaque world of end-to-end encryption and shine a light on the pros and cons of the messaging services - particularly for minors.
Professor Katina Michael - Faculty of Engineering and Information Sciences, University of Wollongong
Associate Professor Vanessa Teague - School of Computing and Information Systems at The University of Melbourne
Presenter Hugh Riminton
Producer Skye Docherty
Citation: Katina Michael, Vanessa Teague with Hugh Riminton, June 23, 2019, “WhatsApp Encryption and the Missing Belgian Backpacker”, RadioNational: Sunday Extra, https://www.abc.net.au/radionational/hugh-riminton/9355968; https://www.abc.net.au/radionational/programs/sundayextra/8.45/11225548
Citation: Katina Michael with Caitlin Dugan, June 20, 2019, “Father Pleads for Access to Encrypted Data from WhatsApp”, ABC Illawarra Radio.
Citation: Katina Michael with Caitlin Dugan, June 20, 2019, “Facebook's Cryptocurrency - Calibra”, ABC Illawarra Radio.
6.30 AM Bulletin
A University of Wollongong social media expert says the launch of Facebook's cryptocurrency could dramatically change the global banking system. Katina Michael says the move by the US tech giant puts pressure on regulators in Australia to catch up, while the company has a huge number of users who might be swayed into using the digital currency. Professor Michael says the trust placed in social media companies needs to be questioned: should we be trusting a company that generates fake news and that handed user data to marketing firms?
8.30 AM Bulletin
Plans by social media giant Facebook to launch an online cryptocurrency could spark a huge challenge for Australian banks and credit unions, according to University of Wollongong expert Katina Michael. The move would allow users to buy digital currency that could be used globally after being adjusted to each country's exchange rate. Professor Michael says it will be difficult territory for regulators.
9.30 AM Bulletin
An expert says the launch of Facebook's cryptocurrency could dramatically change the global banking system. Katina Michael says the move by the US tech giant puts pressure on regulators in Australia to catch up, while the company has a huge number of users who might be swayed into using its digital currency. Professor Michael says trust in social media companies needs to be questioned.
Dr Katina Michael
Expertise: Privacy and Cybersecurity
School of Computing and Information Technology
Professor Michael comments regularly on the social implications of emerging technologies with an emphasis on privacy and national security.
The topics she’s best-versed on are cybersecurity, privacy, technology, ethics, social media, wearables and biotechnology.
She researches the social and ethical implications of emerging technologies.
She has engaged in debates on hot issues: smartphone addiction; Facebook's privacy breaches; whether humans are being enslaved or empowered by technology; when citizens' rights are violated by tech companies or governments; and the possibilities and limitations of mechanical upgrades to the human body.
She has also researched the regulatory environment surrounding the tracking and monitoring of people using commercial global positioning systems (GPS) applications, focusing on people with dementia, mental illnesses, parolees, and minors.
Since 1996 Dr Michael has been studying the impact of microcircuitry and nanotechnology devices in humans.
Her research on location intelligence and resulting behaviours was a precursor to wearable devices like the Fitbit.
She delivered a TEDx talk on the future prospects of microchipping people and more recently on the future prospects of brain pacemakers.
She understands the history of computers, and key innovations in design since they were first developed.
She is deeply involved in the Public Interest Technology movement, and technology for good with respect to Sustainable Development Goals.
Dr Michael can talk in depth about automatic identification technologies including bar codes, magnetic stripe cards, smart cards, biometrics, radio-frequency identification tags and transponders.
She can provide an informed opinion on location-based services including Global Positioning Systems, UHF and A-GPS, Wireless Local Area Networks, Cellular and 3G Mobile and IP Location Services.
On computing her knowledge covers context aware applications, mobile media, wearable computing, chip implants and nanotechnology.
She has a strong interest in national security including homeland defence, national identification schemes, counter-terrorism strategies, natural disaster prevention and response, pandemics and government readiness.
On privacy and surveillance, she can discuss dataveillance, sousveillance and uberveillance.
On public policy her expertise covers the Telecommunication Interception Access Act, anti-terrorism laws, standards and guidelines.
Dr Michael works between Australia and the US. While she's in the US, media can reach her via +14804941149.
Source: Katina Michael, June 12, 2019, “Expertise: Privacy and Cybersecurity (Katina Michael)”, UOW Media, https://www.uow.edu.au/media/find-an-expert/katina-michael/
Citation: Katina Michael with Gemma Veness, June 2, 2019, “United States visa applicants now required to hand over social media usernames”, ABC24hour News: Afternoons, https://www.abc.net.au/news/newschannel/
For an alternate perspective: https://www.abc.net.au/news/2019-06-04/us-visa-rules-social-media-accounts/11174262
For just $US60, a company registered in New York state is selling the data of over 2,000 Australian women who have signed up for online dating services.
Key points:
Information from the dating profiles of more than 2,000 Australians was sold to the ABC for approximately $86.55
Between 2,500 and 4,000 companies in the United States buy, sell and share personal data
Services like PayPal, now used by more than 7 million Australians, share user data with over 600 different third parties
For one woman, 'Rosie' — who wished to remain anonymous — her file included her age, contact details, place of employment and photographs.
The file also noted that while she did not have children, she would like some in the future.
Rosie's mother told the ABC her daughter was "quite shocked" to learn how intimate details of her life and her future hopes were being sold online for a profit.
"I feel like it's more than one website that this information has come from," she said.
The company that sold the information obtained it from dating apps and websites, but would not respond to questions about exactly how it got the data.
PHOTO: It's unclear which apps and services shared the data. (ABC: Tara Cassidy)
The ABC's PM program bought the data as part of an investigation into data privacy.
Sarah, a 27-year-old woman whose data was also included with the purchase, said she was concerned about her safety after learning her data was available for sale.
"It would be pretty easy to track me down even from just my name and profession," she said.
Sarah had previously been doxed online, with contact details and photos of her posted maliciously to the website 4chan.
Listen to Privacy Unravelled, an investigation by PM:
Episode 1: What do Facebook and Google know about us?
Episode 2: The explosion of stored data on us all
Episode 3: How much does the government know about us?
Episode 4: The future of data surveillance
"It's pretty gross to learn that your identity is getting treated like a commodity that's for sale," she said.
"It makes you feel a bit small and powerless."
Dating sites often include the right to share or on-sell client data as part of the terms and conditions of starting an account.
Gathering data points
This case is a classic example of how our data is being sold around the world without our knowledge, according to Katina Michael, a professor in computing and information technology at the University of Wollongong.
"There are companies that are scraping people's data of all types — dating is quite obtrusive — and consumers do not understand what is possible with sophisticated data-scraping algorithms," Professor Michael said.
PHOTO: Consumers cannot possibly know how much data about them is out there, Professor Michael says. (Supplied: Katina Michael)
The companies that accumulate and combine this information are known as data brokers. The US Federal Trade Commission found that one data broker alone had 3,000 pieces of data on nearly every person in the United States.
It is difficult to know exactly how many companies are selling and trading data in this way, but credible estimates put the number of data brokers in the United States alone at between 2,500 and 4,000 companies.
University of Maryland law professor Frank Pasquale said brokers would use data to classify people into certain categories that could be discriminatory.
He gave the example of grouping consumers as "elderly and gullible" and then selling their information to gambling marketers.
"People have no idea, nobody has any idea of the vulnerabilities it entails," Professor Pasquale said.
"There's all kinds of data in there that can be used against us — by insurers by employers — and we just sort of have to hope that the laws keep those bad uses at bay."
For Australians, relying on the law to protect our data is difficult as much of it is stored outside Australia's jurisdiction.
"The mere fact that we're using international platforms to begin with means our data was already residing in America," explained Professor Michael.
"For instance, you may have a Gmail account — it may look like you're in Australia but your information is being stored in America."
PayPal, for instance, now used by more than 7 million Australians, shares its user data with over 600 different third parties.
While data brokers are well-established in the US, they are becoming increasingly involved in new international markets like Australia.
"There's global data brokers that are saying, 'We can use the same algorithms as in the United States and we can apply them to other countries'," Professor Michael said.
Data gets linked to social media
Siva Vaidhyanathan, a professor in Media Studies at the University of Virginia, said when it comes to building these multiple data points into a complete picture of us, no one does it better than tech giants like Facebook and Google.
"Facebook for years purchased commercial databases and government databases, so they could cross-list all that data with the data that they had gathered from you," he said.
For example, if you use one of Facebook's apps on your phone, such as Facebook Messenger, Whatsapp or Instagram, then the tech giant can record your location.
"If you walk through a shopping centre, Facebook keeps track of the shops that you enter and cross-references that with any commercial activity that it has followed," he said, adding that other tech giants like Google collect similar information.
"You've told Facebook who your closest friends are; who your closest family are.
"You have also told Facebook what your political interests are, what music you like and what books you read.
"I couldn't imagine a richer picture of each of us. Facebook essentially has a doppelganger of us in its servers — our expressions and desires."
Professor Michael said her greatest concern was the judgements that would be made about consumers by algorithms based on all of this data.
"It is basically creating classes of people and it's creating segregation," she said.
"When we leave things to algorithms we get things wrong and I'm worried that in the next 10 years we'll see algorithms go out of control.
"Judgements are being made about me that I couldn't even conceive of."
Citation: Flint Duxfield and Scott Mitchell, May 30, 2019, “Personal data of thousands of Australians sold for just $US60”, ABC NEWS, https://www.abc.net.au/news/2019-05-31/online-privacy-personal-data-purchased-for-$us60-warning-experts/11157092?section=business
Katina Michael, professor of computing and IT, Wollongong University
Joseph Turow, professor of communications, University of Pennsylvania
Frank Pasquale, professor of law, University of Maryland
LINDA MOTTRAM: Data on Australians is being gathered — sometimes bought and sold around the world as part of a growing, opaque market in sometimes very personal information about you.
The information that interests these so-called "data brokers" can be anything from your address and where you work to more sensitive information like particular health conditions you might have.
Well, today in part two of our series "Privacy Unravelled" that is running all week on PM, we take a deep dive into how this is happening.
Flint Duxfield is the reporter.
(Telephone ring tone)
FLINT DUXFIELD: Last week I got a call from someone I'm going to call Catherine.
FLINT DUXFIELD: Hello, this is Flint.
FLINT DUXFIELD: Hi, how are you doing?
CATHERINE: Good, thank you.
FLINT DUXFIELD: The reason I'm not telling you her real name is because Catherine is pretty concerned about some of the things I was able to find out about her daughter.
CATHERINE: She was quite shocked with what information it did have on there. You just don't think it is going to be available to anybody.
FLINT DUXFIELD: Even though we've never met, I know the full name of Catherine's daughter, how old she is, her email, where she works, what she does for a living and what she looks like and the fact that she doesn't have kids yet but would probably like some.
And I know all this because I found this information and more for sale on a website in the US selling the dating profiles of Australians.
CATHERINE: It was on Tinder.
FLINT DUXFIELD: Right.
CATHERINE: But, I feel like it is more than one site that all that information has come from.
FLINT DUXFIELD: You don't think that it just came from the one dating site that she was on?
FLINT DUXFIELD: The company selling this information wouldn't say how they'd gotten their hands on it and experts say it's a classic example of the way our data is collected and sold without us having any idea.
KATINA MICHAEL: There are companies scraping people's data of all types. Dating is quite intrusive and consumers don't realise what is possible with the digital web with sophisticated data mining and data scraping algorithms.
FLINT DUXFIELD: Katina Michael is a professor of computing and IT at Wollongong University and she says even if you're careful not to share your personal information on things like dating sites, data about you is still being soaked up by all sorts private companies every single day.
KATINA MICHAEL: Every time a plastic card touches a machine is a data point. Every time you have transacted at a supermarket store is a data point. Every time you've used your loyalty card scheme to have Frequent Flyers and get to a destination through points, these are all accumulations of you.
We basically can constitute you in ones and zeroes in digital data points.
FLINT DUXFIELD: Just about every company you can think of is looking to collect as much information about you as they can: from online sites like Amazon and eBay, retailers like Target and Woolworths, music and entertainment sites like Spotify and Netflix, and even your credit cards. This information allows those companies to work out a whole raft of things about us.
JOSEPH TUROW: Data are, as some people say, the new oil. Data are the new ways in which to understand customers.
FLINT DUXFIELD: That is Professor Joseph Turow from the University of Pennsylvania who says the kinds of things that can be worked out about us from our data are pretty specific.
JOSEPH TUROW: In today's world everything is sensitive. In an AI world where you can use deep learning to figure out so many different inferences about people, the most benign categories can yield inferences about an individual that that person would shudder to think that people think of that person.
FLINT DUXFIELD: What sorts of things?
JOSEPH TUROW: Well, for example, you can decide based upon a person's shopping habits what personality they have, what diseases they may get, how long they are likely to live.
Do you want that kind of material to be inferred about you? It is quite possible to do this.
FLINT DUXFIELD: Now the ability to work out that kind of thing might seem creepy but the thing that really concerns a lot of privacy experts is that often this data doesn't just stay with the companies that collect it.
It gets bought, sold and shared in ways most of us have no idea about.
Katina Michael again.
KATINA MICHAEL: Where it becomes a little bit manipulative to me, and exploitative, is when third parties that don't have customers begin to say to companies like Woolworths and Coles, for argument's sake, 'I've got some data that you might really like and here's the price,' and it is no longer personal information that is of a subscriber base.
It is third party information that is then being used to further manipulate consumers.
I think when we cross boundaries in corporations and the data being handed over is no longer your own customer data, then that becomes quite intrusive.
FLINT DUXFIELD: Even a service like PayPal, which is now used by more than seven million Australians, shares data about its users with over 600 different third parties.
But the companies at the heart of buying and selling data are called data brokers and they're some of the biggest firms you've probably never heard of — companies like Experian, Acxiom and Quantium.
KATINA MICHAEL: When the company washes their hands of your personal information and says 'Well, I used it appropriately. I de-identified it, and then I decided to sell it'.
And they say 'We don't care down the value chain how that data is used because we did the right thing'. But as it goes down the value chain, it is being misused and abused in ways that people could never have imagined.
FLINT DUXFIELD: And Katina says one of the reasons for this is that while companies will often say they only pass on data about you anonymously, what's known as 'de-identified data', several studies have shown that de-identification is, at best, very difficult and at worst, well, a bit of a con.
KATINA MICHAEL: So we have this set of de-identified data that algorithms are now, with some precision, allowing us to re-identify particular customers as they transact and go about their business.
So this notion of selling de-identified data is really bogus.
FLINT DUXFIELD: And the ways this allows data brokers to classify us are kind of scary.
Frank Pasquale is a professor of law at the University of Maryland and he gave me just a couple of examples.
FRANK PASQUALE: AIDS and HIV sufferers, gullible households, always elderly and gullible and that was to be sold to gambling marketers.
FLINT DUXFIELD: To what extent do you think people realise that these companies are amassing all this data and trading it in this way?
FRANK PASQUALE: People have no idea, nobody knows. Nobody knows and nobody really has a sense of the vulnerabilities it entails.
It is like- Paul Ohm calls it the 'Database of Ruin', you know, that these companies are creating about every one of us.
There is all sorts of information in there that eventually can be used against us by insurers, by employers, etc, and, you know, we just have to hope that the laws keep those bad uses at bay.
FLINT DUXFIELD: And the amount of information these data brokers have is truly staggering.
The US Federal Trade Commission found that one data broker alone had 3,000 pieces of data on nearly every person in the United States.
And Katina Michael says Australia's data broking industry is rapidly heading down the same path.
KATINA MICHAEL: The way that the Australian operators work is that they've grown in size and they've also been bought out and these global data brokers now are saying 'Ah uh, we can use the same algorithms that we built in the US and we can apply them to other states and we can make more money'.
FLINT DUXFIELD: And so would you expect to see the kind of categories and the kind of uses of data that we see in the US being increasingly used in Australia then?
KATINA MICHAEL: Definitely. I think the mere fact that we are using international platforms to begin with means that our data was already residing in America.
For instance, people don't realise, you may have a Gmail account. Yes, it looks like you're in Australia but that information is being stored in America.
FLINT DUXFIELD: Of course, one of the big risks of storing all this information is the potential for it to get into the wrong hands.
In the last 10 years major data brokers like Equifax, Experian and Lexis Nexis have seen data about hundreds of millions of people get hacked or breached and Frank Pasquale says, that means it's now available for purchase to the highest bidder.
FRANK PASQUALE: There is some news about this, about Chinese data markets where a journalist just went to one of these random data markets and found out all the hotels someone had visited and their credit card charges, something like that.
So, that I think, is a dystopian possibility, right?
And if we sort of shrug our shoulders at every data breach and say, well, who's going to use that data, who's going to use that data? Oh, I guess all the hospital records get breached, but who's really going to put that back together on me, etc.
Eventually, I think it is quite possible that you're going to see enough of this sort of escape.
FLINT DUXFIELD: But in some cases the data doesn't even need to be hacked to get into the wrong hands.
In 2011 an investigation by security researcher, Brian Krebs, found that an identity theft operation in the US was just buying data about people from Experian, a major data broker and this is a company which sells, as one of its products, identity theft protection.
But for those who study data brokers, like Katina Michael, even the legal use of this data raises serious concerns — the most obvious of which is lumping together consumer profiles that are then sold on to marketers and used for advertising.
KATINA MICHAEL: This is looking at between 500 and 5,000 data points. They may be micro-analysed to target you in ways that you are oblivious to.
So you see a message maybe in your wall feed on Facebook, or you see it popping up on the right-hand side perhaps on your Gmail, oblivious to the fact that these are calculated, targeted messages that people, or algorithms, are sending you in the hope that they will maximise sales.
FLINT DUXFIELD: Now sometimes getting ads you're interested in is a good thing but there are more questionable uses for targeted ads, like this company which allows you to secretly send ads to your parents, your partner, your friends or anyone really without them having any idea.
EXTRACT FROM ADVERTISEMENT: That person, the target, is exposed to hundreds of items strategically placed as editorial content whether it is proposed marriage, quit smoking, initiate sex or stop riding motorcycles.
FLINT DUXFIELD: That's right, for around 45 bucks they claim you can surreptitiously put articles in someone's social media feeds or on the websites they visit to convince them to do something you want them to do.
EXTRACT FROM ADVERTISEMENT: The most requested tailormade campaign is settle outside of court and get back with your ex.
FLINT DUXFIELD: And it's exactly that kind of surreptitious, targeted advertising which privacy researchers like Frank Pasquale say is being used for some pretty questionable things.
FRANK PASQUALE: One of the things that I think is a common thread in both the US and Australia is like these shady vocational schools, right.
They set up and they are supposed to be higher education but they are just a big waste of money, and there is a lot of evidence that the people that are shown these sorts of ads are people that have been classified via data in a certain social class or being in the sort of gullible type or something like that.
You know, you're being trapped toward either like bad schooling opportunities, bad credit opportunities like payday loans, very high interest loans.
Those are examples of this sort of data playing into marketing schemes to draw people into things that are bad for them or things that are really bad business opportunities.
FLINT DUXFIELD: The way that companies can easily track so many things about us also makes it easier for them to work out how much we're willing to pay and, in some cases, charge people different prices depending on what's known about them — something called price discrimination.
FRANK PASQUALE: The price discrimination is definitely happening. We're seeing that.
We actually had a division of financial services in New York recently issue guidance for insurers who want to use social media to write life insurance. So they want to pass everything you've been doing on social media to decide how much life insurance to give you.
There's a whole array of companies in that space and I've also given testimony before the US Senate that mentions globally data that is used against people.
For example, in India, people who had evidence of being involved in politics were denied loans, because they felt that if you are getting into politics, you may not be a reliable credit risk.
So these are all examples of, I think, things where you wouldn't think that you know, oh, I looked at the hang gliding website three times and now my life insurance costs 10 per cent more, right?
But that's exactly what could be happening.
FLINT DUXFIELD: The more these companies know about us, the easier it is for them to decide whether to service us at all.
Several health and car insurance companies for instance now offer discounts if you agree to let them track your car while you're driving or record your health data with a smart watch - which sounds like it might be a good idea, right, if you're healthy and you look after yourself you save money.
But that's not the case for everyone, as Professor David Watts from La Trobe University explains.
DAVID WATTS: It means that yes, you can be offered particular targeted services that may be beneficial for you. That's the good side of it.
The bad side of it is that it could be used to discriminate against you. It could be sold on to someone else; it could be sold to your employer, who may decide they don't want to employ you because you've engaged in activity, and sought medical treatment for it, that they morally disapprove of.
And it also means that you might not get offered health insurance, or your premium might climb through the roof. It can be used, and has been used, in the American context to deny people insurance, and often they are the most underprivileged.
That's a societal problem, you know: does it mean that those who are in the greatest need of healthcare are denied it?
FLINT DUXFIELD: And Katina Michael says as the amount of data increases, so too does the potential for companies to analyse us in more and more calculated ways.
KATINA MICHAEL: When we systematize and leave things to algorithms, we get things wrong and I'm worried that in the next 10 years we will see algorithms go out of control and I'm concerned that there is a loss of human dignity, there is a loss of freedom in these practices and there is a loss of autonomy when judgements are being made about me in ways that I had never thought possible. I just couldn't even conceive it.
LINDA MOTTRAM: Katina Michael, she's professor of computing and IT at Wollongong University.
Flint Duxfield reporting, Privacy Unravelled series, running all this week on PM.
Now you can find this episode and last night's at the PM website if you search ABC Privacy Unravelled, or just search ABC PM.
And make a date tomorrow to join us at the same time for part three of Privacy Unravelled. Flint will look specifically at what the government knows about us (it is quite a bit) and what can happen when this data gets into the wrong hands.
That's PM at the same time tomorrow evening, our series Privacy Unravelled.
A short radio interview with Amelia Wood, a second-year journalism student at the University of Wollongong, on "Smartphone Addiction".
Citation: Katina Michael with Amelia Wood, “Smartphone Addiction”, University of Wollongong, Journalism Assignment, https://soundcloud.com/amelia-wood-105258743/jrnl203-assignment-3-smartphone-addiction
Companies are using artificial intelligence systems to store consumer data and steer shoppers' purchasing choices without their knowledge
Retail giant Wal-Mart, the world's largest retailer, holds a patent that detects its customers' mood simply by studying their faces. In this way, it hopes to spot unhappy shoppers and offer them special attention. The Australian bank Westpac has a similar system, though aimed at its own staff, so that managers can step in if an employee needs it. Some consumers and employees may view these technologies favourably because, thanks to them, their problems will supposedly be resolved sooner.
Even so, experts such as Dr Monique Mann of the Queensland University of Technology (Australia) warn that, even in these cases, an “outdated concept of consent” is being applied. She is categorical: “Law and regulatory frameworks have fallen behind technological advances. This carries serious drawbacks for privacy.” Against this backdrop, her colleagues Katina Michael and M. G. Michael coined the term “omnisurveillance”, defined as “pervasive surveillance systems, technologically enabled and embedded in society, in electronic devices and even in the human body”.
The US Congress is debating how far technology should reach into retail. Businesses now have digital innovations that let them target their activity more precisely, but thousands of individuals, associations and political parties regard these methods as a threat to privacy. They point to shop windows that film the passers-by who stop in front of them; to artificial-intelligence mirrors that advise shoppers on the clothes they are trying on; to cameras scattered everywhere that study customers' faces, bodies and bags in order to classify them…
In these cases, people are not handing over personal data more or less voluntarily, as happens with many online purchases. Instead, those affected may feel violated, since they are being watched, studied and manipulated without being asked for permission or told what is being done with their data. And as if consumers' mere presence were not enough to set these tracking methods in motion, smartphones make identification even easier: when people connect to the WiFi, when they switch on Bluetooth, and so on. Amazon's physical stores and the beauty chain Sephora, among others, make use of some of these mechanisms.
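The article does not describe the stores' actual systems in any technical detail. As a minimal sketch of the mechanism it alludes to, the Python snippet below shows how passively captured WiFi probe requests could be pseudonymised and turned into dwell-time and repeat-visit counts; the record format, signal threshold and salting scheme are all assumptions for illustration, not any retailer's implementation.

```python
import hashlib
from collections import Counter
from dataclasses import dataclass

# Hypothetical record of what a passive WiFi sniffer yields when a nearby
# smartphone broadcasts a probe request while looking for known networks.
@dataclass
class ProbeRequest:
    mac: str          # device hardware address (modern phones may randomise it)
    rssi: int         # received signal strength in dBm; a rough distance proxy
    timestamp: float  # seconds since the capture started

SALT = b"rotating-daily-salt"  # assumed: the operator hashes MACs as a privacy gesture

def pseudonymise(mac: str) -> str:
    """Track a salted hash of the MAC rather than the raw address."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:12]

def sightings_near_shopfront(probes, near_dbm=-60):
    """Count how often each pseudonymised device was seen close to the shopfront."""
    seen = Counter()
    for p in probes:
        if p.rssi >= near_dbm:  # stronger signal means the device is nearby
            seen[pseudonymise(p.mac)] += 1
    return seen

# Example: one device lingers (two sightings), one passes at a distance.
probes = [
    ProbeRequest("aa:bb:cc:dd:ee:01", -50, 0.0),
    ProbeRequest("aa:bb:cc:dd:ee:01", -52, 30.0),
    ProbeRequest("aa:bb:cc:dd:ee:02", -75, 31.0),  # too weak, filtered out
]
print(sightings_near_shopfront(probes))
# Repeat sightings across days reveal dwell time and return visits.
```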
The examples grow by the day. Information panels like those installed in Seoul's new international financial centre (South Korea) double as tools for monitoring customers and analysing their movements. On the surface, these machines exist mainly to help anyone who needs assistance, but the centre's managers use them to know at all times what visitors are doing, and why. Retailers in the United Arab Emirates are also advancing rapidly along these lines; many use devices of this kind to count and identify people.
“The most sought-after programs anticipate the direction consumers will take. That is why one major businessman has just bought 250 of these systems,” reveals Peter Biltsted, Middle East and Africa director at the multinational Milestone Systems. The trend will not stop, according to another authoritative voice, Marwan Khoury, marketing manager at another specialist firm, Axis Communications. He recalls that deep learning has already been used in Japan to tailor the advertising displayed along a road to the type of vehicle driving past.
The market for cutting-edge retail technology will reach 1.5 billion euros in 2020, according to estimates by the consultancy Deloitte. The same developments that exploit biometric details for security purposes, in the prevention of attacks, customs control and so on, are being applied to commerce. Yet if the first of these uses has triggered an ethical debate, how could the second not do the same? Tony Smith, a former head of the UK Border Force, stressed at a recent BBC forum that governments should legislate to prevent improper practices with these data.
Like many others, he worries that a journey like the following is already a reality. On his way to a department store, a driver pulls into a petrol station to fill up. While he fills the tank, he watches the advertisements on the pump's screen. At that moment, the artificial-intelligence system behind the monitor is cataloguing him: age, sex… Does he wear glasses? A beard, perhaps? These factors help the system assign him a demographic profile, which will be passed on to advertisers and will follow him to the shops, and even into his home, without his knowledge. The citizen will simply think he saw a few adverts at the petrol station.
Citation: Josep Lluís Micó, April 27, 2019, “Nadie se puede librar de la ‘omnivigilancia’” (“No one can escape ‘omnisurveillance’”), La Vanguardia, https://www.lavanguardia.com/tecnologia/20190427/461874572551/big-data-inteligencia-artificial-mercado-consumidores-publicidad.html
Citation: Jeffrey Duggan, April 14, 2019, “RFID Journal Live 2019”, ReelyActive Blog, https://reelyactive.com/blog/archives/tag/rfid-2-0
Oh the irony of human-entered data at an RFID conference. Ten years ago, Kevin Ashton, who coined the term “Internet of Things”, explained in RFID Journal:
We need to empower computers with their own means of gathering information […] without the limitations of human-entered data.
Case in point, the badge: the surname and given name are reversed, with the latter misspelled as a result of human data entry during onsite registration from a paper-and-pencil form. Nonetheless, this is an excellent example for emphasising the potential of RFID and the IoT!
Indeed, at the co-hosted IEEE RFID event, I, Jeffrey, presented a workshop entitled Co-located RFID Systems Unite! focused on this potential now that there are nearly 20 billion RAIN (passive) and BLE (active) units shipping annually. An open architecture for collecting, contextualising and distributing the resulting data is becoming critical, and I was pleased to hear this sentiment echoed on the RFID Journal side by Richard Haig of Herman Kay and Joachim Wilkens of C&A.
Also heard echoed was the prevalence of BLE (active RFID) throughout the conference. Literally.
This contraption which converts radio decodings into musical notes may seem odd at first, but over the past year we’ve learned that art is a powerful tool for conveying to a non-technical audience the prevalence and potential of RFID and IoT in our daily lives. A few attendees were invited to listen with headphones and walk around until they found a silent spot. None were successful.
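The post does not explain how the installation turned decodings into notes. As a rough sketch of the idea, assuming each decoded BLE advertisement is mapped to a pitch that is stable per transmitter and a loudness derived from signal strength, one hypothetical mapping might look like this:

```python
import hashlib

# Sketch of one way to "sonify" radio decodings: each decoded BLE
# advertisement becomes a note. Pitch is derived from the transmitter ID
# (so each device keeps a recognisable note) and velocity from RSSI.
# The packet fields below are assumptions, not reelyActive's implementation.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # one octave of MIDI note numbers

def decoding_to_note(device_id: str, rssi: int):
    """Map a decoding to a (pitch, velocity) pair in MIDI terms."""
    digest = int(hashlib.md5(device_id.encode()).hexdigest(), 16)
    pitch = C_MAJOR[digest % len(C_MAJOR)]
    # RSSI of roughly -100 dBm (far) to -30 dBm (near), scaled to MIDI 0-127.
    velocity = max(0, min(127, int((rssi + 100) * 127 / 70)))
    return pitch, velocity

# A crowded conference floor: the stream of decodings never falls silent.
for device, rssi in [("beacon-42", -45), ("fitness-band", -80), ("phone-07", -60)]:
    print(device, "->", decoding_to_note(device, rssi))
```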
And we can only expect such prevalence to increase with energy harvesting technology maturing. We were pleased to see Wiliot’s live demo of an energy harvesting BLE tag, making good on their objectives from last year’s conference. Inexpensive battery-free BLE will be key to RFID proliferating to all the physical spaces in which we live, work and play—the BLE receiver infrastructure is often already there.
Which came first: the RFID or the Digital Twin?
The concept of the Digital Twin has also taken off over the past year, and we were pleased to have the opportunity to ask Jürgen Hartmann which came first in the Mercedes-Benz car factory example he presented? His answer was clear:
“Without RFID, for us there is no Digital Twin.”
Ironically, our April Fool’s post from two days previous was about Digital Conjoined Twins, in which we joked that the digital twin resides in the optimal location: adjacent to the physical entity it represents. Perhaps not so silly in the context of industrial applications that are highly sensitive to latency?
RFID projects championed by the organisation’s finance department?
That is exactly what Joachim Wilkens of C&A argued. The success of their retail RFID deployment was a direct consequence of the C-level being on board but, more importantly, of having a business case championed by the finance department:
“This is not an IT project, this is a business project.”
While we’ve observed our fair share of tech-driven deployments over the past few years, we’re increasingly seeing measurable business outcomes. For instance, a recent workplace occupancy deployment delivered, within months, a 15% saving in real estate. That is a business project—one the finance department would love to repeat!
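The post gives only the headline figure, so the arithmetic behind a saving of that size is worth spelling out. A toy calculation with invented numbers: if sensors show peak simultaneous occupancy well below the provisioned desk count, the surplus converts directly into reclaimable floor space.

```python
# Illustrative arithmetic only; every number here is hypothetical.
desks = 1000            # desks (and floor space) currently provisioned
peak_occupancy = 780    # highest simultaneous presence the sensors recorded
headroom = 0.08         # keep 8% spare capacity above the observed peak

needed = int(peak_occupancy * (1 + headroom))  # 842 desks actually required
reclaimable = desks - needed                   # 158 desks freed up
print(f"reclaimable floor space: {reclaimable / desks:.0%}")  # ~16%
```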
IoT: the next generation
What will we discuss in our RFID Journal Live 2029 blog post when the IoT celebrates its third decade? That may well be in the hands of the next generation. Since we began attending the co-hosted IEEE RFID and RFID Journal Live in 2013, we’ve observed a slow but steady shift in demographics. A younger generation—one which grew up with the Internet—is succeeding the generation instrumental in the development and commercialisation of RFID. On the showroom floor, we’re talking about the Web and APIs. At the IEEE dinner we’re discussing industry-academia collaboration to teach students about applications and ethics. And in the IEEE workshops, ASU Prof. Katina Michael took the initiative to invite one of her undergraduate students to argue the (highly controversial) case for implantables, effectively ceding centre stage to the next generation.
Joseph Cox and Jason Koebler, March 27, 2019, “Facebook Bans White Nationalism and White Separatism”, Motherboard, https://motherboard.vice.com/en_us/article/nexpbx/facebook-bans-white-nationalism-and-white-separatism
Joseph Cox and Jason Koebler, August 23, 2018, “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People”, Motherboard, https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works
Sasha Ingber, March 27, 2019, “Facebook Bans White Nationalism And Separatism Content From Its Platforms”, NPR, https://www.npr.org/2019/03/27/707258353/facebook-bans-white-nationalism-and-separatism-content-from-its-platforms
Brain Implants: Hope or Hype — Katina Michael:
From therapy to entertainment, to the technological singularity — a recent hire to the School for the Future of Innovation in Society opens a window into the endless possibilities, dangers and uncertainty of brain implants at ASU’s TEDx event.
“I'm looking at how they are presently being used and why they are being used,” Michael said. “For example, a person suffering from Parkinson's disease, Tourette syndrome, or dystonia — when they become resistant to drugs and pharmaceuticals — will opt for this procedure as a last resort because it’s the only hope they have of having some quality of life."
Michael will also explore a future where implants are used beyond therapeutics for entertainment, and eventually, the possibility of uploading our own consciousness to technology.
“We don't know the long-term effects of being disembodied," Michael said. "But I'm arguing at the end of my talk that to some degree we have already been disembodied by the technological interventions we are using, whether it's smartphones or whether it's our social media. We have less physical contact with those we love and we have more contact with inanimate objects, even if we're using them as vehicles of communication. So this decrease in face-to-face is going to cause problems.”
Source: Isaac Windes, 22 March 2019, “ASU students and faculty look to the future at TEDx NextGen”, StatePress, http://www.statepress.com/article/2019/03/spscience-asu-students-look-to-the-future-at-tedx-nextgen
Fewer than 200 people watched the original live video of the Christchurch massacre, Facebook has said.
None of them reported it immediately to Facebook during the attack, and it took half an hour after the killer started his live video for anyone to report it using Facebook's reporting tools, the company said.
However, this has been challenged. Jared Holt, a reporter for Right Wing Watch, said he was alerted to the livestream and reported it during the attack.
[Photo caption: Police carry flowers left by well-wishers to the Al Noor Mosque in Christchurch. Fifty people died in the shootings on Friday.]
"I was sent a link to the 8chan post by someone who was scared shortly after it was posted. I followed the Facebook link shared in the post. It was mid-attack and it was horrifying. I reported it," Holt tweeted.
"Either Facebook is lying or their system wasn't functioning properly."
Holt then checked and could find no record of his report on Facebook's internal tool for listing the reports users send off.
"I definitely remember reporting this but there's no record of it in Facebook. It's very frustrating," Holt told Business Insider.
"I don't know that I believe Facebook would lie about this, especially given the fact law enforcement is likely asking them for info, but I'm so confused as to why the system appears not to have processed my flag."
Facebook declined to comment when contacted by Business Insider.
Facebook vice president Chris Sonderby said the social media giant is working around the clock to prevent the video from being shared again.
"The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast," Sonderby said in a statement.
"Including the views during the live broadcast, the video was viewed about 4000 times in total before being removed from Facebook.
"The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended."
The link to the live-stream was posted on anonymous message board 8chan, and shortly after the 17-minute video ended, a download link for it was also posted on the site.
Facebook removed the video and "hashed" it to automatically prevent it being uploaded again, but some users added watermarks or edited the video in order to slip it past the detection algorithms.
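Facebook has not published the matching technology behind that "hashing", but the cat-and-mouse the article describes is easy to illustrate. The toy Python sketch below contrasts an exact cryptographic hash, which a single watermark pixel defeats, with a crude perceptual "average hash" that still matches after the same edit. Real video fingerprinting is far more sophisticated; everything here is illustrative only.

```python
import hashlib

# Toy 8x8 "frames" as flat lists of pixel intensities (0-255), to show why
# exact hashing is brittle against edited re-uploads while perceptual
# hashing tolerates small changes.

def exact_hash(frame):
    """Cryptographic hash: changing any single pixel changes the digest."""
    return hashlib.sha256(bytes(frame)).hexdigest()

def average_hash(frame):
    """Perceptual hash: one bit per pixel, set if brighter than the mean."""
    mean = sum(frame) / len(frame)
    bits = 0
    for px in frame:
        bits = (bits << 1) | (px > mean)
    return bits

original = [10] * 32 + [200] * 32   # a dark/bright test pattern
edited = original.copy()
edited[0] += 5                      # a tiny edit, e.g. a watermark pixel

print(exact_hash(original) == exact_hash(edited))   # False: exact match evaded
diff = average_hash(original) ^ average_hash(edited)
print(bin(diff).count("1"))         # 0 differing bits: still the "same" content
```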
In the first 24 hours after the shooting, Facebook removed about 1.5 million versions of the attack video.
"More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services," Sonderby said.
"We have been working directly with the New Zealand Police to respond to the attack and support their investigation."
Prime Minister Jacinda Ardern has spoken to Facebook chief operating officer Sheryl Sandberg since the attack.
The Government's Cabinet meeting on Monday is expected to be mostly focused on gun law but it is understood the Government is also keen to call on social networks to do more to fight radicalisation in the wake of the mosque shootings. This could include a call to share more data directly with intelligence agencies.
The Global Internet Forum to Counter Terrorism - a consortium of global technology firms including Facebook, Google and Twitter - said it shared the digital "fingerprints" of more than 800 edited versions of the video.
Neal Mohan, YouTube's chief product officer, told The Washington Post that his platform also struggled to moderate the video successfully.
His team finally took unprecedented steps - including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company's detection systems, he said.
Despite such efforts, concerns have been raised by a professor of engineering and information sciences about social media's failure to implement preventative measures.
Professor Katina Michael of the University of Wollongong said algorithms can only do so much to prevent certain content being uploaded, and human moderators are already forced to wade through screeds of questionable content.
"The best algorithms couldn't have stopped this. Having said that, if you [Facebook] can't stop it, don't offer it. If you want to provide the service, perhaps you have to vet the users."
Michael said the current algorithms were set up based on a corporate model that was centred around generating revenue, not looking for controversial content. "It is the failure of not only the algorithms, but human moderators."
Australian Prime Minister Scott Morrison has asked G20 members to consider practical ways to force companies like Facebook and Google to stop broadcasting atrocities and violent crimes.
Sonderby said Facebook is committed to working with leaders in New Zealand and other governments to help counter hate speech and the threat of terrorism.
Meanwhile, police probing the online presence of the terror suspect and his involvement in far-right chat boards and other internet activity have met with some resistance.
In one email exchange, New Zealand police requested an American-based website preserve the emails and IP addresses linked to a number of posts about the attack, but were met with an expletive-filled reply.
- Stuff with AAP and BusinessInsider.com.au
Katina Michael in Matthew Rosenberg, March 20, 2019, “Alarm raised about Facebook livestream mid-attack in Christchurch, man claims”, Stuff.co.nz, https://www.stuff.co.nz/national/christchurch-shooting/111412396/fewer-than-200-people-watched-shooters-christchurch-massacre-live-video-facebook-says
Disclaimer: The way I was quoted seems to imply that the content moderators at Facebook were partially to blame. That is not what I said in the interview with Matthew. Moderators are not paid to catch this kind of content; they are paid to investigate copyright and controversial content. Humans are at the mercy of the machine on this occasion; it can be likened to 100 people trying to stop leaks in 200,000 buckets. It just cannot happen. As for what could have stopped this footage from spreading: more predictive AI algorithms, plus total information surveillance of everything coming through the servers, and even that is not foolproof.