The right to be forgotten

There is much chit-chat going on concerning ‘the right to be forgotten’. Much of it is linked to the digital footprint that you leave behind you. There is a good essay referenced by The Technology Liberal Front where you can read more. It looks at both angles: not just ‘the right to be forgotten’ but also what happens if everything is forgotten! There is, after all, much that we do NOT want to forget. You know: great achievements, great people, an accurate history not digitally rewritten as in Orwell’s 1984 😉

Whatever you post online has a persistence that is difficult to control, at least today, once it has been shared. As I’ve said many times before, it is best to post/share only what you are comfortable for the world to see. One should always consider, with every word, photo or video one shares: what happens if it gets out into the wild with your name on it? How can you get it back, and will you be able to repair any damage that your digital footprint could cause?

Unencrypted portable hard drives really are a problem!

It’s amazing how much discussion there is on how to secure information in the cloud while we are walking around with sensitive information on portable hard drives, maybe even USB sticks!

There have been two recent cases of lost personal information: one involved information pertaining to Canadian students, and in the other, in April 2013, the Investment Industry Regulatory Organization (IIROC) admitted that the personal information of 52,000 clients from dozens of investment firms had been compromised.

Remember when the UK’s HM Revenue and Customs lost computer discs in 2007 containing the entire child benefit records, including the personal details of 25 million people covering 7.25 million families? There are loads of reported cases and probably many more unreported!

OK, so how do we solve this? According to Daniel Horovitz it is about security awareness and policies that are enforced. With this I concur completely. However, I am also thinking: what if no personal data were stored on any local device anywhere, and it was all web-enabled, in a private or shared cloud? It would bring the BYOD movement closer, and surely it must be safer than a portable HD? Clearly security awareness and policy enforcement are essential, but they still do not seem to be working. If they were, these incidents would not be happening.

Ireland’s Data Protection Commissioner report 2012

Thanks to Robert Streeter for sharing this. It makes for some interesting reading on the number of DPA breaches and their nature, along with some case studies. If you skip over the first couple of sections, the interesting stuff starts at the ‘Complaints and Investigations’ section on Page 7 😉

Google’s ‘Policy Violation Checker’

OMG, I picked this article up on Janet Steinman’s feed on LinkedIn. So what Google are doing is patenting a technology that basically detects written policy violations, e.g. in email messages, even before the message is completed. I am wondering if it could be likened to the autocomplete function.

The article states that it will be like having ‘big brother’ peeking over your shoulder as you write. But I am thinking that if it is similar to the ‘autocomplete’ or ‘spellcheck’ functions, maybe it is just another useful tool, and maybe the article is making more of this than it really is?

However, if an organisation were to implement this and controlled the ‘policy violation checker’ from a central place, would this mean they could see when a policy had been violated? Could they control what employees write in the workplace? Is this a bad thing? I’m still scratching my head over this one….
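To make the comparison concrete, here is a toy sketch of what such a checker might look like. This is entirely my own guess at the mechanics (the phrases, warnings and function are invented for illustration), nothing to do with Google’s actual patented design:

```python
# Toy "policy violation checker": scan a message draft as it is typed and
# flag phrases that a (hypothetical) workplace policy forbids, much the way
# a spellchecker underlines words before you finish the sentence.

POLICY_PHRASES = {
    "confidential": "do not send material marked confidential without clearance",
    "insider": "possible reference to insider information",
}

def check_draft(draft: str) -> list[str]:
    """Return policy warnings for a (possibly unfinished) message draft."""
    lowered = draft.lower()
    return [warning for phrase, warning in POLICY_PHRASES.items()
            if phrase in lowered]

# Like autocomplete, it can fire before the message is even finished:
print(check_draft("This is strictly confid"))        # no full phrase matched yet
print(check_draft("This is strictly confidential"))  # a warning is raised
```

Centralise `POLICY_PHRASES` on a company server and you have exactly the ‘big brother’ scenario the article worries about: whoever edits that list decides what gets flagged.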

Google autocomplete and personal integrity

Wow, Germany’s courts have done it again! They are so good at protecting the personal privacy of their citizens! Read on; it connects to an individual’s ‘right to be forgotten’.

Google have been over-ruled concerning how the ‘autocomplete’ function in the search dialog works. Basically, suggestions are generated from what other users have been searching for. The reason this has become a case about personal integrity, and also a person’s reputation, is that words associated with a particular person, whether by rumour or otherwise, and thus searched for by users, impact that person’s reputation.

In the case in question, when the complainants’ names were typed into Google’s search bar, the autocomplete function added the ensuing words “Scientology” and “fraud”. The continuing association of their names with these terms infringed their rights to personality and reputation as protected by German law (Articles 823(1) and 1004 of the German Civil Code).

What does this mean for Google? Well once Google has been alerted to the fact that an autocomplete suggestion links someone to libellous words, it must remove that suggestion.

According to the Panopticon blog, this German ruling is extending the “frontiers of legal protection for personal integrity and how we allocate responsibility for harm. Google says that, in these contexts, it is a facilitator not a generator. It says it should not be liable for what people write (scroll down to “Google and the ‘right to be forgotten’” here, a previous case in Spain), nor for what they search for (the recent German case). Not for the first time, courts in Europe have allocated responsibility differently.”

Cyberattack $45 million stolen

How can this happen? I guess PCI DSS is not working, although it is the prepaid debit card companies themselves that were exploited. Apparently they are less secure than other financial institutions? But are they not financial institutions themselves?

They are not naming the Visa and Mastercard prepaid card companies in the US that were compromised. I wonder why 😉

I find it amazing that after the first attack in December, there was an identical one in February. It seems that the ringleaders were caught, but what about all the hackers sitting behind this operation? I am sure they are still out there hacking away and getting away with it.

Anonymization of data as the future for data privacy?

There is significant debate going on in the EU data protection reforms concerning the use of personal data for purposes beyond those it was collected for. This follows on from my previous post on the future of data protection. One of the ways seen as mitigating the risks is the anonymization of personal data. So you remove all PII and make the data anonymous so it can be used for whatever purpose. Sounds easy, but it’s not. Other data in the public domain can render what was anonymised data invalid. There have been many cases of so-called anonymous data becoming de-anonymised. May Yee posted something in May 2010 on Virtual Shadows.
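A toy example of why stripping names is not enough: join the ‘anonymised’ records with public data on the quasi-identifiers that remain, the classic ZIP code, birth date and sex combination, and identities fall straight out. All the data below is invented:

```python
# "Anonymised" medical records: names removed, but quasi-identifiers remain.
anonymised = [
    {"zip": "02138", "birth": "1965-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth": "1971-02-14", "sex": "M", "diagnosis": "flu"},
]

# Public data (e.g. a voter roll) with names AND the same quasi-identifiers.
voter_roll = [
    {"name": "A. Resident", "zip": "02138", "birth": "1965-07-31", "sex": "F"},
    {"name": "B. Resident", "zip": "02140", "birth": "1980-01-01", "sex": "M"},
]

def reidentify(records, public):
    """Join the two datasets on ZIP + birth date + sex."""
    hits = []
    for r in records:
        for p in public:
            if (r["zip"], r["birth"], r["sex"]) == (p["zip"], p["birth"], p["sex"]):
                hits.append((p["name"], r["diagnosis"]))
    return hits

# One unique zip/birth/sex combination is enough to unmask a diagnosis:
print(reidentify(anonymised, voter_roll))
```

No cryptography is broken here at all; the ‘anonymous’ dataset simply leaks through columns nobody thought of as identifying.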

Clearly the anonymisation of data has enormous value in medical research, for example, as it saves lives. However, when it comes to collecting personal information to be anonymised and used for making money, i.e. marketing, I’m a little less enthusiastic. If my personal data is to be used for purposes outside of what it was collected for, anonymised or not, I want to be informed of this and be given the option to opt in, not opt out. It is up to the marketeer to sell me the value of opting in.