November 17, 2014

De-webifying

I've finally freed up some time to resume posting items to this blog. To those who have been checking in from time to time, I deeply appreciate your patience and perseverance. What better way to restart this thing than to share some items pertaining to our rapidly evolving big-data world.
 
Several months ago I decided the time had come to bid farewell to Facebook, so I deleted my account and posted a version of what follows to explain my rationale. I am no neo-Luddite; in fact, I greatly enjoy the incredible array of modern wonders made possible by technology, and I've every confidence our future will be just as amazing to us as our time would be to someone from even the not-too-distant past. But I'm increasingly concerned about the amount of personal information that is now stored in various forms by all manner of entities, from governments to commercial enterprises.
 
Usually there is a well-meaning, useful, sometimes even noble purpose for data collection at the outset. Companies like to collect data on people so they can more effectively sell products. The medical community likes to collect data so it can better understand all aspects of health, with the intent of providing more effective treatment and even preventive services for a higher daily quality of life. Educators like to collect data so they can better understand the learning process and adjust education efforts to be more relevant and effective, perhaps even at the individual level. Government agencies presumably like to collect data so they can more effectively use taxpayer monies (at least that's their argument) or, in the intel and law enforcement communities, more rapidly become aware of threats and dangers in the hope of preventing bad things from happening. I get that. But very quickly the ideal clashes with reality. The original intent doesn't account for the nature of people.

Insurance companies seek health information so they can adjust coverage. Let's say advances in genetics allow a doctor to tell you that you have x-probability of developing a serious health problem. Does your insurance company increase your premium or drop your coverage altogether? Let's say behaviorists develop an algorithm they believe predicts with high likelihood an individual's potential for some action. Does law enforcement act on that with the intent to prevent a crime even though no crime has yet been committed? Educators seek insight into home environments through questionnaires administered at school, but the children who complete them have no way to provide context for questions whose wording may reflect a philosophical bias of whoever developed the questionnaire in the first place. Does the school bring in law enforcement or social services if it concludes something is amiss, even though that conclusion rests on inherently flawed assumptions?

It is a truth that people begin with one idea in mind, but when conditions change and new opportunities emerge, their objectives and perspectives change too. The people collecting information may have one idea in mind at the beginning, but after compiling loads of data they tend to find other things that can be done with it that just weren't imagined when the project was started. Government and commercial entities now have access to personal preferences, real-time location tracking, online behavior (websites visited, items purchased, times of activities, networks of contacts, etc.), and religious, political, social, and economic beliefs -- able to be derived if not explicitly stated. As I said, I'm not a Luddite, nor am I a conspiracy theorist, technophobe, or anti-government activist. I'm just a guy who likes his privacy, recognizes that the human factor is fundamental to everything we do, and abhors the idea that free will might be replaced by predictive software.

Humans 'reply all' when they shouldn't. They compromise sensitive information. They lose hard drives full of Social Security numbers and send out personal contact information and political donation histories when they shouldn't. They exploit the tools to which they have access to satisfy voyeuristic tendencies (at the benign level) or to gain advantage over rivals (at the nefarious level). They interpret data to suit their biases or to promote agendas. It happens all the time.

I'll continue to use a cellphone and shop online from time to time. And I'll continue posting to this blog; most of the stuff I posted to Facebook was better suited to a blog format anyway. But I'm ever more reluctant to willingly contribute the little bits and pieces of my life to the big pool of metadata over which I have no control, at least any more than I have to.
 
Google's mission statement is extraordinarily noble: "to organize the world’s information and make it universally accessible and useful." It's beyond my ability to imagine all that might be accomplished for the good of humanity when taking that statement at face value. And I believe there was noble purpose in it when it was conceived, just like all the other noble ideas that spur people to try to advance the human condition. Facebook similarly has lofty ideas about creating an environment through which people can connect with each other regardless of physical location. I think Facebook currently has 1.3 billion users, which would make it the third-largest population group in the world (after India and China). There are other examples. But I just can't get past the fact that behind it all are people who can be honest, ethical, well-meaning, and noble one moment but who also carry with them all the weaknesses, foibles, ambitions, and ignoble character flaws that make us human. It’s just the way it is.

I’m reminded of Robert Frost’s poem “Mending Wall,” with the immortal line "Good fences make good neighbors." There is a great deal of wisdom in that phrase and a deep understanding of the human condition.
