Behavioural Biometrics - Jennifer Luu Interviews Katina Michael

May 9, 2017

Biography: Ms Jennifer Luu is a student at the University of Technology, Sydney, completing a Bachelor of Journalism. She has also begun producing stories at 2SER. I am grateful to Jennifer for transcribing our interview on behavioural biometrics. Here it is below, unchanged. Audio of the interview can be found here.

JL: 1. How does facial recognition technology, such as iOmniscient's software used in CCTV, work?

KM: Facial recognition technology is a biometric system. It looks at distinguishing markers on the face and the distances between various features, whether it's your eyes, your mouth or your nose, and then matches your face to something in a database; that's how you do an exact match or a fuzzy match. Facial recognition is not always 100% accurate. People change the way they look: they wear glasses, they wear scarves, they shave their hair. But those defining features really don't move around, like the nose, forehead and ears, so the distances between them are like a map you would use to navigate between one location and another. Facial recognition makes use of those features on our face and then does an exact match, a fuzzy match or no match at all.
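To make the landmark-and-distance idea concrete, here is a minimal sketch of the matching step. It is an illustration only, not iOmniscient's method: the landmark names, tolerances and the plain Euclidean comparison are assumptions, and real systems typically work from learned embeddings rather than a handful of hand-picked distances.

```python
import numpy as np

# Hypothetical landmark names for illustration only; real systems extract
# dozens of points (or a learned embedding) from the image itself.
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_centre", "chin"]

def feature_vector(landmarks):
    """Turn named (x, y) landmark positions into a vector of pairwise
    distances, normalised so that overall image scale does not matter."""
    pts = [np.array(landmarks[name], dtype=float) for name in LANDMARKS]
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    v = np.array(dists)
    return v / v.max()

def match(probe, database, exact_tol=0.02, fuzzy_tol=0.10):
    """Compare a probe face against enrolled faces and return
    (name, 'exact' | 'fuzzy') for the closest record, or (None, 'no match')."""
    pv = feature_vector(probe)
    best_name, best_dist = None, float("inf")
    for name, enrolled in database.items():
        d = np.linalg.norm(pv - feature_vector(enrolled))
        if d < best_dist:
            best_name, best_dist = name, d
    if best_dist <= exact_tol:
        return best_name, "exact"
    if best_dist <= fuzzy_tol:
        return best_name, "fuzzy"
    return None, "no match"
```

The two tolerances correspond to the "exact match" and "fuzzy match" outcomes described above; anything beyond the fuzzy tolerance is treated as no match.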

JL: 2. What is your opinion of using the software in surveillance, for example to reduce crime and detect criminals?

KM: It's very interesting when we start to look at facial biometric systems. A lot of us upload a lot of selfies to the internet, to Facebook for instance. Facebook has been working with law enforcement for a really long time to do matching, and people in the general community don't realise this. For the police, Facebook is a one-stop shop to look at and find potential suspects of crimes, and then to correlate these images back to other images that might look like the suspect, to try and get an exact match. So it is not as if it hasn't been happening previously. But when we start to use facial recognition systems more broadly, for instance in surveillance camera technology, to look at where people are, to identify visitors in a particular country, or to look out for particular types of criminals or suspected criminals, then we can see that this is something different to what it was originally intended for, which was to surveil movement and surveil people, for example to ensure that shoplifting doesn't occur on a premises. So when we start to look at mobile surveillance technology that has facial recognition inbuilt, a bit like the iOmniscient technology that is being developed for identification and detection, then we are starting to go into very different territory. It's really about potential racial profiling, behavioural surveillance and looking at what people are doing, and detecting people with abnormal behaviour versus what the algorithm knows as normal behaviour. It's really about exceptions.

JL: 3. How can the algorithm predict what is abnormal, as that is kind of subjective, isn't it?

KM: It is, it is really subjective. One of the products that iOmniscient actually offers to the public, or offers to corporates, is the ability to study a queue and to say, "well, alright, we can see this kind of queuing occurring dynamically." The system learns the behaviour of those queuing, and the vast majority of the population may queue, for example at an airport to get their passport stamped through immigration, in a particular way. As I said previously, the algorithm is looking for exceptions: it learns behaviour in the first place and then looks for differences from that behaviour. So if you are in a queue and you aren't doing what most other people do in a queue, you might be singled out, even if you are completely innocent, when these systems are in operation. And then what happens is, because the machine might come up with an alert or a flag, the human operator, who then intervenes to either question the individual or stop them, might feel they need to act based on the alert or the flag, without further questioning. That is very interesting. I've gone through a queue previously for a full body scan and asked, you know, "why have you chosen me instead of those behind me?" The human operator will always say, "well, it is random and it is based on whether that LED goes red or green, and if it is green you have to be body scanned and then patted down," for instance. So it is very interesting how the human operator responds to the algorithm's decision making, and that is one of the biggest issues at this point, I think: to look at possible subjective bias in the algorithms, and whether, for instance, they are singling out people of a racial minority, or people who are different. For instance, what do you do with people who are disabled, or people who are mentally ill, who are queuing? They haven't done anything wrong, but they are carrying an illness and they may be singled out because they just look different.
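The "exceptions" logic described here can be pictured as basic statistical anomaly detection: learn what typical queue behaviour looks like, then flag anyone who deviates from it beyond a threshold. The sketch below is a simplified illustration under assumed features (dwell time and amount of movement) and an assumed z-score threshold; it is not how iOmniscient's product actually works.

```python
import numpy as np

def learn_baseline(observations):
    """Learn the mean and standard deviation of 'normal' behaviour.
    Each row is one person in the queue; columns are features
    (here: dwell time in seconds, number of position changes)."""
    return observations.mean(axis=0), observations.std(axis=0) + 1e-9

def flag_exceptions(observations, mean, std, threshold=3.0):
    """Flag rows whose behaviour deviates strongly from the learned norm.
    Anyone unusual is flagged, whether or not they have done anything wrong."""
    z = np.abs((observations - mean) / std)
    return np.where(z.max(axis=1) > threshold)[0]

# Example: most people wait about five minutes and barely move; one person paces.
history = np.array([[300, 2], [310, 1], [290, 3], [305, 2]])
mean, std = learn_baseline(history)
today = np.array([[295, 2], [300, 40], [310, 1]])   # the second person paces a lot
print(flag_exceptions(today, mean, std))             # -> [1]
```

Note that the flag says nothing about wrongdoing; it only says someone is statistically unusual relative to the learned norm, which is precisely the subjectivity problem raised above.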

JL: 4. With this kind of technology being trialled in Australia, what do you think this means for privacy in Australia?

KM: I think technology is racing far ahead of legality and of policy. What we are seeing, for instance in the area of Toowoomba and in one of the Gold Coast city councils in Queensland, is that a system is currently being trialled by the Toowoomba City Council without the express permission, guidance or advice of the Queensland Privacy Commissioner, Phillip Green. He came out recently in the Guardian and said quite a few vital things regarding privacy-invasive behavioural recognition software. He compared the iOmniscient system to those racial profiling systems in the US with some very obvious bias. I mean, we even see this with Google algorithms; for instance, if you search 'criminal and US', most likely you will find a black ethnic minority group coming up. That is not too different from the way DNA biometric systems worked in the UK. We see, for instance, in DNA profiling and DNA biometrics, that 38% of black ethnic minorities are identified in the national DNA database, and the same will happen with photographs. They are just different biometrics. DNA is a biometric; it is unique to individuals. So are faces and so are fingers. And so when we are building these algorithms, whether it's for a Google search engine, whether it's for identifying a potential criminal in a queue, whether it is singling someone out or trying to target someone, there is bias. That is something you can try to make the algorithm learn otherwise, but when all these red flags are coming up with people who look similar or who act similar, then quite possibly you are almost creating an algorithm that has a life of its own, where artificial intelligence takes its own learned behaviour and goes forward assuming that the same is true in other situations. But that is one of the issues in camera analytics software, and people like the Civil Liberties Australia group, the Australian Privacy Foundation and the Queensland Privacy Commissioner, Phillip Green, are basically saying, "right, it's time for us to sit down and actually do a privacy risk assessment; find out how behavioural biometrics are changing the game and how we address the issues." It is quite incredible to see that the technology is being trialled while the Privacy Act is rather being ignored, as well as things like the Surveillance Devices Act and the Workplace Surveillance Acts in various states. What we are doing is almost going in contradiction to the laws that we have in Australia to protect the citizenry. And do we really need these biometric behavioural systems at the city local council level? You know, I don't think so. We have enough CCTV as it is.
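One way to make the bias concern measurable is to compare how often innocent people from different groups end up flagged, that is, the false-positive rate per group. The sketch below uses invented records and placeholder group labels purely for illustration; it is a minimal fairness check, not a description of any deployed system.

```python
from collections import defaultdict

# Invented records purely for illustration: (group, was_flagged, actually_guilty)
records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False), ("group_b", False, False),
]

def false_positive_rate_by_group(records):
    """Share of innocent people flagged, broken down by group.
    Large gaps between groups are one signal the system has learned a bias."""
    flagged = defaultdict(int)
    innocent = defaultdict(int)
    for group, was_flagged, guilty in records:
        if not guilty:
            innocent[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / innocent[g] for g in innocent}

print(false_positive_rate_by_group(records))
# roughly {'group_a': 0.33, 'group_b': 0.67}: innocent people in group_b
# are flagged about twice as often in this made-up example.
```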

JL: 5. Is it kind of hard to bring the legal side into it? Because from what I have seen, the privacy laws in Australia aren't really uniform and they vary state by state, so how difficult would it be to create uniform laws that address the technology?

KM: So we do have the Australian Privacy Act at the federal level and then we have various state-wide Acts, of course, and they differ. So do the Surveillance Devices Acts between states, and the workplace monitoring and surveillance Acts between states. Some states are more lenient than others. Some require you to actually inform an individual being recorded; others don't allow for visual and audio recordings. In terms of bringing things into uniformity, I don't think that will happen in Australia because we have these state-based laws. But I do think that we are going to see very interesting amendments to current Acts. For example, in the Northern Territory, we had the Justice Act amended to allow for recordings of victims of domestic violence so that evidence of bruising, or other physical abuse, could be put forward in a state court, but this was an amendment. So what I think will happen in Australia is that we will see continual amendments occur, but when technology is pushing the boundary, the laws are lagging far behind. So it is not even a problem of one state differing from another; it is the law not even having any idea what to do with this kind of behavioural surveillance technology. For instance, what is happening with the biometric data being gathered in real time and actually acted upon to make decisions in real time? Where is that data being stored? Are the makers of the software at all liable for wrong decisions that have been made, for example placing innocent people on a suspects list? How do you then tell somebody, you know, "you are not aware of this, but actually one of our behavioural surveillance devices suspected that you were a prospective terrorist trying to do something," when they are completely innocent? After you've muddied someone's reputation, and they are not aware of it, completely ignorant of the fact that they were flagged by this visual surveillance system, how do you then recompense them for the fact that they haven't done anything and yet they have been accused? Even if no charges have been pressed, they've been questioned, they've been made alarmed. Their whole wellbeing and safety has suffered, contrary to the whole reason these surveillance systems are purportedly being put in. Here we are saying we are introducing these systems at borders to increase the safety of our citizenry and of visitors to our nation, but in actual fact what you are doing is making somebody feel unsafe by flagging them as a criminal when they are not. So it is contrary to the aims of the application, and I guess it all depends on which stakeholder you are, who you think you're protecting, and what you think the system's aims are. I mean, we have heard of many cases where people have been caught on CCTV, have been further questioned later for a crime, like a murder, and have then, based on the feeling of fear, gone and taken their own life because they've been flagged in one of these systems. People don't know that during the Boston Marathon somebody who was identified prematurely as being one of the people who detonated one of the bombs actually took his own life, because he felt like he was being chased by the authorities, and yet he was completely innocent. The guilty man was found later.

JL: 6. That's a really serious aspect of this. So the last question I have is, do you think this type of biometric surveillance is going to become more and more popular in future?

KM: Yes, I mean we are going towards a Minority Report future. That film, starring Tom Cruise, is actually very well cemented in so many people's minds: as he is walking through a shopping centre, his iris is being recognised and his face is being identified via biometrics. Are we going towards that? I think we're sleepwalking towards that type of society. We already see the proliferation of unmanned CCTV cameras in the UK; there are these alarming statistics of so many CCTV cameras per person. And in Australia, for instance, we have calculated the number of minutes that people are captured on CCTV footage, so you really can't go about your day with any privacy in a public setting. And that is creating stresses between private situations and public spheres. But I think we need to raise alarm bells. What's coming with human activity monitoring and camera analytics of video footage is something completely different. I mean, you can imagine in the future one camera being able to see you, hear what you're saying, use a technology that transmits and translates your voice to text, and then mine what you are talking about. What kind of a world would that be? And so when we're saying that these camera surveillance technologies will curb antisocial behaviour, I think what will occur is a great risk to societies … and trust, because then we will stop actually saying what we're thinking to our loved ones if we are in public; we will stop sharing intimate conversations in public spheres, for example in a coffee shop. And we will start to play to a theatre, because we will think we are being monitored by visual systems, by audio systems, and then by authorities who could possibly take what we're saying, or the way we are looking, completely out of context. And this is the whole thing. The company iOmniscient has chosen a very interesting name, but my work and that of MG Michael in the sphere of uberveillance is actually saying there is no such thing as technology-based 'omniscience'; there is no such thing as an all-seeing eye in real time, because context is always missing from situations and events. So I don't want us to walk down this path, but it seems councils around the world keep adopting these technologies, and countries and cities, for instance in Bahrain, are using iOmniscient technology; many people in human rights NGOs are saying that they are actually targeting visitors and placing them behind bars when they are identified and detected at various locations. I don't want to walk into that kind of society. I would love to keep our freedoms and our human rights intact.

