This post first appeared in the San Jose Mercury News
by Larry Magid
Data and privacy regulators from governments around the world met in Mexico City last week for the 33rd International Conference of Data Protection and Privacy Commissioners. As you might expect, they were joined by companies anxious to be part of the conversation, along with people from nonprofits that focus on privacy issues.
Directly behind the speaker's podium were the logos of sponsoring companies, including Google, which has been on the receiving end of enforcement actions from some of the very regulators who took the stage. Among them was the Federal Trade Commission, which this year entered into a consent decree with Google over charges that it “used deceptive tactics and violated its own privacy promises to consumers” when it launched Buzz in 2010. Buzz has since been discontinued as Google focuses on its newly created Google+ social network.
I was at the conference to participate in a panel on “protecting children in a networked world.”
Carrot or stick?
As I sat in sessions and chatted with delegates in the halls, it became increasingly clear that there are tensions not only between regulators and those they regulate but among regulators themselves, who don’t always agree on whether they should be wielding sticks or dangling carrots.
Many commissioners — especially those from Europe and Latin America — hold on to the idea that their countries must enact very tough privacy laws that tightly restrict what can be collected, how long it can be held and what can be done with it. Yet others advocate a more flexible approach.
At one session, New Zealand Privacy Commissioner Marie Shroff said, “We need to move from focus on compliance and being reactive towards being more strategic and analytical” by developing a better understanding of risks and harms. She said “regulators have been in negative mode and need to be more positive.”
Yet some regulators from Europe talked about the need to enforce the European Union’s “Cookie” legislation, which would require consumers to consent to every cookie a website places on their machine. Cookies, which have been around since the ’90s, can be quite benign, letting a site remember a user’s logon credentials, for example, but they can also be used to track you from site to site so that companies can target ads based on your Web habits. Not everyone here agrees that tracking cookies are inherently evil, or even that they should be disabled by default, but there is widespread agreement that people ought to know when they’re being tracked and at least have the power to turn tracking off.
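For the technically curious, here is a minimal Python sketch, with hypothetical names and domains, contrasting a short-lived first-party session cookie with the kind of long-lived identifier an ad network might set:

```python
from http.cookies import SimpleCookie

# A benign first-party cookie: remembers a login session on one site.
session = SimpleCookie()
session["session_id"] = "abc123"
session["session_id"]["httponly"] = True   # not readable by page scripts
session["session_id"]["max-age"] = 3600    # expires after an hour

# A tracking-style cookie: a long-lived unique ID that an ad network's
# domain can read back on every site that embeds its content.
tracker = SimpleCookie()
tracker["visitor_id"] = "u-9f8e7d6c"
tracker["visitor_id"]["domain"] = ".ad-network.example"  # hypothetical domain
tracker["visitor_id"]["max-age"] = 60 * 60 * 24 * 365    # persists for a year

print(session.output())  # the Set-Cookie header a server would send
print(tracker.output())
```

The only technical difference is the lifetime and the domain the cookie is scoped to, which is why the policy debate turns on intent and disclosure rather than on the mechanism itself.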
Big data and life in the fish bowl
There was a lot of conversation about “big data.” Whether it’s Google’s massive cache of user search terms, Facebook’s enormous knowledge of people’s friending patterns or Twitter’s instant awareness of trending topics, there is a great deal of information being collected that can be mined for all sorts of things that range from incredibly useful to a bit scary. Add to that all of the data being collected by health care providers, insurance companies, financial institutions, airlines and even supermarkets with their ubiquitous loyalty programs, and it’s pretty clear that we’re all living in a fish bowl.
The only question is whether those who are looking at that glass bowl can single out fish by name or simply observe patterns of fish in general.
The keynote speaker, Economist correspondent Kenneth Cukier, painted a mostly optimistic picture. Cukier mentioned ZestCash, an alternative to payday loan stores for people who may not have access to traditional credit sources. Machine learning allows ZestCash to compare an applicant’s data with aggregate data to more accurately determine whether that person is likely to pay back a loan, which, according to CEO Douglas Merrill, lowers the company’s risk and allows it to charge lower interest rates than other short-term lenders.
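Stripped to its essentials, that kind of scoring is a classifier trained on past borrowers. The Python sketch below is purely illustrative: the features and figures are invented, and ZestCash’s actual models are undoubtedly far more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past borrowers: [monthly_income, existing_debt, years_at_job]
past_applicants = np.array([
    [3200,  400, 4],
    [1800, 1500, 1],
    [4100,  200, 7],
    [2200, 1900, 2],
    [2900,  600, 3],
    [1500, 2200, 1],
])
repaid = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan was repaid

# Learn the pattern relating applicant traits to repayment
model = LogisticRegression(max_iter=1000).fit(past_applicants, repaid)

# Score a new applicant against what the aggregate data suggests
new_applicant = np.array([[2600, 800, 3]])
prob = model.predict_proba(new_applicant)[0, 1]
print(f"Estimated repayment probability: {prob:.0%}")
```

The lender’s advantage comes entirely from the aggregate: the more past borrowers the model has seen, the better it can place a new applicant among them.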
Cukier also mentioned Farecast, which was acquired by Microsoft in 2008 and incorporated into Bing. The service uses “big data” from online travel reservation services to figure out whether fares are rising or dropping on a given route. The technology, according to Microsoft, is “75% accurate.”
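Farecast’s real models draw on enormous archives of fare observations, but the basic intuition can be conveyed with something as crude as comparing recent prices on a route to a longer-run baseline. A sketch with invented numbers:

```python
from statistics import mean

def fare_advice(prices: list[float], window: int = 7) -> str:
    """Suggest buying if recent fares run above the longer-run average."""
    recent = mean(prices[-window:])   # average of the last `window` days
    baseline = mean(prices)           # average over the whole history
    return "buy now (fares rising)" if recent > baseline else "wait (fares dropping)"

# Hypothetical daily fares for one route over three weeks
sfo_to_jfk = [320, 315, 318, 322, 330, 328, 335, 340, 338,
              345, 350, 348, 355, 360, 358, 365, 370, 372, 368, 375, 380]
print(fare_advice(sfo_to_jfk))  # -> buy now (fares rising)
```

A real system would weigh seasonality, seat inventory and route-specific history, which is why even Microsoft claims only 75 percent accuracy.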
Rewards and risks of big data
Jules Polonetsky, director of the Future of Privacy Forum, summarized the debate. “The concern is that these big databases that are being built could be used to discriminate against people, give the government information about what you’re doing or to target you in ways you don’t want with advertising.” Yet, he added, “big data also means being able to learn how infectious diseases spread or learning from all the searches people do to better give you what you’re looking for. We can learn a huge amount and the question is can we manage to not throw out the baby with the bath water.”
Of course, all of these inferences can be made from aggregate data that has been “anonymized” so it can’t be traced to a particular individual. But one concern articulated by some here is that even aggregate data can be “de-anonymized” to identify specific people, which is one reason some regulators are calling for very strict policies on the collection and storage of information.
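Researchers have demonstrated this risk with so-called linkage attacks: records stripped of names can often be matched against a public dataset through shared attributes such as ZIP code, birthdate and sex. A toy illustration, with every record invented:

```python
# "Anonymized" health records: names removed, but quasi-identifiers remain
anonymized_health = [
    {"zip": "95113", "birthdate": "1971-07-04", "sex": "F", "diagnosis": "asthma"},
    {"zip": "95128", "birthdate": "1980-01-15", "sex": "M", "diagnosis": "diabetes"},
]

# A public dataset (think voter rolls) that does carry names
public_voter_roll = [
    {"name": "Jane Roe", "zip": "95113", "birthdate": "1971-07-04", "sex": "F"},
    {"name": "John Doe", "zip": "95128", "birthdate": "1980-01-15", "sex": "M"},
]

QUASI_IDS = ("zip", "birthdate", "sex")

# Join the two datasets on the shared attributes to re-identify people
for record in anonymized_health:
    for voter in public_voter_roll:
        if all(record[k] == voter[k] for k in QUASI_IDS):
            print(f"{voter['name']} -> {record['diagnosis']}")
```

When the combination of quasi-identifiers is rare enough, removing names alone offers little protection, which is precisely the scenario driving the stricter regulatory proposals.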