MG Michael interviewed by LapTop Magazine, September 21, 2008.
These services are especially critical in particular contexts, for instance in care-related applications that help people track a loved one suffering from a progressive case of Alzheimer’s disease. In some instances LBS technology can enable Alzheimer’s sufferers to live at home longer, instead of being placed in a facility with 24x7 supervision, which can feel like a prison. This technology can also grant carers more freedom, supporting them in the act of supervision with alerts and alarms that trigger when certain conditions are met. Location-based services can legitimately be marketed and sold as a safety enhancement, but that should not mean a patient’s rights are summarily withdrawn. Ideally, consent would still be necessary.
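The carer alert just described amounts to a geofence check: raise an alarm when the tracked device moves outside a defined safe zone. A minimal sketch in Python, assuming a circular zone and a haversine distance; the class, function names, and thresholds here are illustrative assumptions, not any vendor's implementation:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class SafeZone:
    lat: float       # centre latitude, degrees
    lon: float       # centre longitude, degrees
    radius_m: float  # alert radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def should_alert(zone: SafeZone, lat: float, lon: float) -> bool:
    """True when the tracked device has wandered outside the safe zone."""
    return haversine_m(zone.lat, zone.lon, lat, lon) > zone.radius_m
```

A real deployment would debounce GPS jitter and require the patient's (or guardian's) documented consent before any such monitoring is enabled.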
As for convenience-related solutions, such as the one you mention, the teen who exceeds the speed limit, or the young driver who breaks his/her curfew or drives into the city at peak hour, this is a different scenario altogether. The teen driver has full cognitive capabilities. There was a study done some years ago now (I think around 2001) regarding the education of young children and the importance of teaching them to ‘sense’ dangerous situations. The organization was all for ‘educating’ children so they could detect and discern when something was not right, and act appropriately. Giving them a 24x7 location device to carry, allowing a parent to track their every move during the day and watch it scroll up on a map, was considered by this particular organization to be an ‘evasion’ of responsibility. Basically, a child who was ‘street-smart’ and was not carrying a locating device had a much higher probability of getting out of a difficult situation than the youngster who had a locational device, pressed the emergency button, and then did not know what to do afterward. So being ‘street-smart’ was the greater advantage.
I do not think that these LBS applications enhance trust. In relationships, a lack of trust means that there is also no bonding, no giving, and no risk-taking. A relationship based on trust is about a deep connection to another; no follow-up checks have to be made, save for those that occur in normal day-to-day conversations for the purposes of logistics. The very act of monitoring destroys trust; it implies that one cannot be trusted. Verifiability does not facilitate trust in a human-to-human relationship, but it does facilitate trust in a ‘technical’ sense, i.e. in a human-to-machine relationship. That is, where I am carrying a card that needs to be verified against my credentials in a computer database because there is no other person who can verify it humanly. In addition, how can I learn to trust others if I myself am not trusted? It is possible that the meaning of trust will shift toward a more ‘transactional’ context in the future, but if that happens, human relationships are bound to be eroded, and we may find ourselves living in a seemingly ‘trust-less’ society. So is the argument valid (that LBS applications enhance trust)? Certainly when we are talking about human-to-machine connections, but not when we are talking about human-to-human connections.
In our research (that is, the work I am doing with my colleague and partner Dr Katina Michael) we are obviously not arguing for a return to a time when people lost their lives because they could not make a simple phone call to 911 and have their location revealed automatically to emergency personnel, but we are advocating that surveillance-style location services will do far more harm than good. Knowing that someone else may be monitoring our location throughout the day, without our consent and/or knowledge, may mean that our decisions are influenced in a certain way, that we act according to what we believe the observer prefers, and thus lose our own identity and purpose in the process. Freedom and trust go hand-in-hand. These are celebrated concepts that most political societies have connected to civil liberties. Future generations may in fact witness massive shifts in the understanding of traditional metaphysics. The Ancient Greek philosophers warned that we should not be completely taken over by “techne”.
Social LBS deployments, such as friend finders, are growing at mega-speed. We only need to note the potential in the integration of social networking sites and location-based applications. Today, not only can we know whether or not someone is online, but we may also know their exact location.
Even though friend finder LBS applications look evenly balanced on the surface, i.e. any ‘friend’ who agrees to be on my buddy list has the capability to perform a ‘find’ on me, as I do on them, the power struggle underneath can be a very different story. First of all, we assume ‘consent’ in such an application, but there is no written agreement, no formal contract witnessed by a third party. This is not to say that I could not change my buddy finder settings to make myself ‘invisible’ during times I wished to be left alone, but what if my friendship relied on ‘visibility’, on meeting up with buddies and being constantly seen in particular locations and places?
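The imbalance described here, find permissions that are symmetric on paper but undermined in practice by an ‘invisible’ toggle, can be sketched in a few lines of Python. This is a hypothetical model; the class and method names are assumptions for illustration, not any actual friend-finder API:

```python
class BuddyList:
    """Toy model of mutual find permissions with an 'invisible' toggle."""

    def __init__(self, owner: str):
        self.owner = owner
        self.buddies = set()    # owners who are allowed to 'find' me
        self.invisible = False  # hides me without revoking my own access

    def add_buddy(self, other: "BuddyList") -> None:
        # Acceptance is mutual: each party may now 'find' the other.
        self.buddies.add(other.owner)
        other.buddies.add(self.owner)

    def can_find(self, other: "BuddyList") -> bool:
        # Symmetric in principle, but 'invisible' breaks the symmetry:
        # I can hide from you while still watching you.
        return self.owner in other.buddies and not other.invisible
```

Note that when one party goes invisible, the other's visibility is unchanged, which is precisely the quiet power asymmetry the ‘evenly balanced’ surface conceals.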
Today, Generation X counts the number of people they are connected to on Facebook as the number of ‘friends’ they have. It is a somewhat superficial notion of friendship, but one that is being increasingly embraced. It is, however, problematic when my BuddyList grows so big that it is no longer useful; it instead becomes an ‘AnyoneList’, and that is when things can get out of control. One can only imagine how young people could be taken advantage of by deceitful individuals with fake online personas whom they may then happen to ‘bump into’ in the physical world. Location relates to one’s physical self, whereas online identification relates only to one’s self in a virtual sense. Online there is a chance of psychological damage, but in the physical world there is a lot more at stake, both psychological and bodily.
Consider also the potential for misinformation. Think about the very real possibility that ‘buddies’ who happen to have formal relationships with one another, such as a husband and wife, are alerted that they are in proximity to one another. Imagine now that the husband sees his best friend at a location near his wife, raising undue suspicion in an overprotective and overcontrolling spouse. What then? The service may well indeed be considered voluntary, but it has unforeseen consequences.
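The mechanism behind such an alert is typically nothing more than a pairwise distance check: the service reports co-location and nothing else, which is exactly why it invites misreading. A minimal sketch, assuming positions on a flat local grid in metres and a hypothetical alert threshold:

```python
from itertools import combinations

def proximity_alerts(positions, threshold_m=200.0):
    """Naively flag every pair of buddies within threshold_m of each other.

    positions: dict mapping a buddy name to an (x, y) position in metres
    on a flat local grid (an illustrative simplification). The alert
    carries no context, only co-location, so the 'why' is left entirely
    to the observer's imagination.
    """
    alerts = []
    for (a, pa), (b, pb) in combinations(sorted(positions.items()), 2):
        dist = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
        if dist <= threshold_m:
            alerts.append((a, b))
    return alerts
```

Run on the scenario above, the service would dutifully report the best friend near the wife and say nothing about the (perfectly innocent) reason for it.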
In addition, as the digital divide grows with the adoption of more high-tech gadgetry, members of society are finding it increasingly difficult to keep pace. There is nothing to stop ‘buddies’ from hijacking a friend’s phone, setting up a pervasive location service, and then misusing the service to their own ends. The service may indeed be voluntary, one can opt in and opt out as one pleases, but what if one does not have the knowledge to do so, or feels pressured into opting in by another? What if opting out has consequences, like being left behind by the rest of the group, being excluded because one ‘just didn’t know’ about the short-notice outing organized via LBS? Again we have ‘the haves’ versus the ‘have-nots’… and this will lead to a number of social acceptance problems. Opting out will generally equate to ‘losing out’ and being considered ‘different’. To some degree we can already see this happening with the general use of the mobile phone, texting, and having a presence on MySpace or Facebook, etc.
Control here exists in both the ability to find and to be found. Accordingly, control is the overriding theme encompassing all contexts. Mark Weiser, the founding father of ubiquitous computing, once said that the problem surrounding the introduction of new technologies is “often couched in terms of privacy, [but] is really one of control.” Indeed, given that humans do not by nature trust others to safeguard their individual privacy, in controlling technology we feel we can also control access to any social implications stemming from it. At its simplest, this highlights the different focus between the end result of using technology and the administration of its use. It becomes the choice between the idea that I am given privacy and the idea that I control how much privacy I have.