Apple on the road to hell?

Apple has always been the ‘white sheep’ of the corporate world when it comes to privacy. They actually build privacy in as a differentiator, woven into the DNA of their products. However, it’s not easy being a privacy champion with all the conflicts out there.

There is, for example, the conflict between ‘freedom of speech’ and ‘privacy’: both are essential for a properly functioning democratic society, yet they conflict with each other. The quote I love from David Brin’s book ‘The Transparent Society’ is that we want privacy for ourselves but transparency for others. How the hell do we solve this one?

Then, moving back to the reason for this Post, there is the conflict of ‘protection of our kids’ vs. ‘privacy’. Apple have taken this bull by the horns and have now launched two new features in the latest updates to the iOS, iPadOS, and macOS operating systems.

  1. Protecting our kids from online predators: a parental control for the Messages app. It detects nude images, and the child is presented with a warning that this is the case. If they choose to view the image anyway, the parent will be notified.
  2. Our (backed-up) photo libraries will be scanned for matches against a table of hash values of known child abuse images maintained by the National Center for Missing and Exploited Children (NCMEC); a simplified sketch of the hash-matching idea follows below.
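
To make the mechanism concrete, here is a minimal sketch of hash-list matching in Python. It is my own illustration, not Apple’s implementation: Apple’s system reportedly uses a perceptual hash (‘NeuralHash’) with cryptographic blinding, so that visually similar images still match and the device never sees the raw hash list, whereas this toy version just checks plain SHA-256 digests against a hypothetical blocklist.

```python
# Toy illustration of hash-list matching (NOT Apple's actual implementation).
# Real CSAM-detection systems use perceptual hashes so near-duplicates still
# match; here we simply compare SHA-256 digests against a hypothetical table.
import hashlib
from pathlib import Path

# Hypothetical table of hex digests of known prohibited images.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_library(library_dir: str) -> list[Path]:
    """Return photos whose digest appears in the known-hash table."""
    return [
        photo
        for photo in Path(library_dir).glob("**/*.jpg")
        if file_digest(photo) in KNOWN_HASHES
    ]

if __name__ == "__main__":
    for hit in scan_library("Photos"):
        print("match:", hit)
```

The weakness is also visible in the sketch: an exact cryptographic hash misses any re-encoded or slightly edited image, which is precisely why perceptual hashing is used in practice, and why false positives then become possible.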

We all, I am sure, agree that our kids need to be protected; this is a no-brainer. But at what cost, and how effective will it be in practice?

Both initiatives above seem to be logical measures to (1) protect our kids from harmful content, and (2) find the perverts. However, unfortunately this is not going to work, at least not in the long term, and the cost to our privacy will eventually outweigh the short-term gains.

Why do I say this?

Online predators hang out together; they share tricks on how to ‘groom’ kids online and offline. One of these will be to get the kid to use another messaging app, and there are loads out there, including Telegram and Signal. So what? If parental controls are installed on the kid’s digital devices (we have them installed), they can’t download anything without my consent as a parent, right?

In theory, yes, but most parents wouldn’t see any threat in downloading another messaging app, especially Signal, famously the preferred medium of Snowden. This means that all of Apple’s good intentions are quite useless in practice. Also, there are still a lot of parents who are not tech savvy… and this will probably be the case for another 5-10 years, or maybe much longer…. read on…

So what do we mean by ‘tech-savvy parents’? I have been tech savvy for 30 years, since before kids were online, when I had a kid who was playing offline, SimCity and the like. I was tech savvy in those days. Now roll forward to 2021 and I have another kid who is 12 years old, and this is where it gets strange. I am tech savvy, most definitely, but not in her world, and maybe I am deceiving myself in thinking I understand her world. In this way she is more tech savvy than I am…. she knows what trolls are and how to deal with them, and I didn’t teach her that! What it means is that our kids can run circles around us, even me, in their online world, and the online predators know this!

Then let us take the second feature, this time aimed at catching these online predators. Okay, it may catch the newbies and idiots out there, but as I’ve stated above, the smarter ones are the guys/gals who hang out together on the ‘deep or dark’ web, or whatever it’s called. They are savvy, and know how to use Tor (The Onion Router) to protect themselves from the efforts of government authorities.

So I would say that of the two new features, the first is partially effective, and the second could catch online predators who are not part of this ‘predator community’, or who are just incredibly stupid. Of course, they will catch some perverts in the beginning; as the technology is rolled out, mistakes are made…. I remember once when a picture of me, swimming naked in the Baltic, was accidentally uploaded to Apple iCloud…. oops… panic and then delete… so this will happen, and some guys will be caught… and a good thing too, but again…

In the long term the effectiveness of these two features will be minimal. Try to project yourself 10-20 years ahead, to see where this will take us, and then look back again. What I see is the proliferation of these practices… triggered by the initial success of the implementation, but then seen as just another step in the direction of a society which has lost its right to a private life. The online predators will have migrated further underground and will have found other ways to fulfil their abnormal sexual desires… our kids will still be vulnerable… and we will be vulnerable to the whims of our governments, good or bad.

What will be the next step following this kind of functionality in our digital tools? As mentioned in this article, maybe following in the footsteps of North Korea, or whatever is happening in China?

So what will we be thinking in 2031 when we look back to the year 2021? That Apple started all this, giving governments an open door. Okay, it was just a small window in 2021, but the function is omnipresent in Apple devices, and it was a start on the road to where we could be in 2031, i.e. the panopticon effect will be complete, and eventually ‘freedom of speech’ vs. ‘a right to a private life’ may no longer be a concern for any of us.

Read more here.

Individual choice vs. a right to life – of another individual

A super interesting situation in Italy concerning covid. Basically, an Italian business has rolled out covid vaccination for its 1,200 employees. It is not obligatory; there is a choice. Of the 1,200, 12 employees (1%) have refused.

The problem is that it has become known who has refused. Maybe the employees stated their stance themselves; the article doesn’t say. What is stated is that the vaccinated employees didn’t feel safe working alongside the non-vaccinated employees. The management has now decided that all unvaccinated employees should take 6 months’ leave with pay.

This brings to mind quite a few dilemmas, some not linked to GDPR compliance, for example how it feels for the 99% of employees who have not been offered paid leave… my perception is that the leave could be perceived as a ‘reward’? And how does this work with new employees?

There are many discussions on the freedom to choose versus the safety of the individual. As anyone who knows me will tell you, I am an advocate for ‘choice’; it is a human right, and probably the most important word in the GDPR IMHO. However, when the freedom to choose can cause harm to another individual, my stance changes somewhat. Our individual choices should not harm another individual; that impacts their rights as a human being, the right to live. We should care for each other, as a community.

I am in the age group 50-60, so I could be biased, except that I remember having this opinion when I was 20 years old, and 30 years old… and so on. “We should not be judged on how we conduct our life, so long as it does not harm another individual.” And this is from a woman who had her first child before she turned 18, and was hence damned to a life of purgatory given societal norms in the UK during the 1980s.

The conflict of public safety versus privacy, and our right to choose, is articulated somewhat in the GDPR (in a legal way) in Article 9.2(b), on which there have been some discussions on LinkedIn. The Article itself is not clear, but if one reads the associated Recitals, it gives a picture which supports public safety over individual choice. In Recital 52, it is referred to as a derogation which can be made for health purposes, where a serious threat is present.

I am still uncertain whether I would stand by this stance if a decision were made to vaccinate all children; I have a 12-year-old daughter. These vaccines haven’t been through the rigorous testing which is normal during clinical trials, due to the hasty rollout…. but I guess I can append to this Post later when this topic starts to gain some traction.

Anonymisation – who is to blame and what to do?

The IAPP article provides a retrospective of the #anonymisation issue in the European data protection landscape and reiterates that there is still no ‘one-size-fits-all’ approach to making personal data #anonymised.

The current state can be described as confusion and vacillation, and the main culprit is probably #WP29, which took contradictory stances on anonymisation in 2007 and 2014, followed by the issue being ignored by the #EDPB and, again, contradictory stances of national DPAs leaning towards either the 2007 or the 2014 approach.

The simple thing is that straightforward ‘disguising of identity’ (e.g. by one-way cryptography), as WP29 suggested in 2007, can no longer be accepted as anonymisation (unless, of course, stated otherwise by a national DPA). And simple thing number two is that there is no industry standard describing step-by-step anonymisation algorithms and techniques.
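
As a minimal sketch of my own (not from the article) of why a plain one-way hash does not anonymise: if the underlying identifier comes from a small, enumerable space, an attacker can simply hash every candidate value and look for a match.

```python
# Why hashing an identifier is pseudonymisation, not anonymisation:
# if the input space is small and enumerable (phone numbers, national IDs,
# emails from a known list), the hashes can be reversed by brute force.
import hashlib

def pseudonymise(msisdn: str) -> str:
    """'Disguise' a phone number with a one-way hash (the 2007-style approach)."""
    return hashlib.sha256(msisdn.encode()).hexdigest()

# A 'released' dataset keyed by hashed phone numbers (hypothetical example).
released = {pseudonymise("+46701234567"): {"diagnosis": "flu"}}

# An attacker enumerates the whole space of plausible inputs and hashes each one.
for i in range(10**8):                       # all +467XXXXXXXX numbers, illustratively
    candidate = f"+467{i:08d}"
    if pseudonymise(candidate) in released:
        print("re-identified:", candidate)   # the 'anonymous' record is linked back
        break
```

Salting or keyed hashing only moves the problem to whoever holds the key, which is exactly why such data remains personal data in the hands of at least that party.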

From a practical standpoint this calls for a case-by-case assessment by the anonymising entity. The recent AEPD-EDPS joint paper ‘on 10 misunderstandings related to anonymisation’ (‘joint paper’) specifically mentions that ‘anonymisation processes need to be tailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons’.

Of the options suggested by the article, the most practical and realistic is probably to argue that the risk of re-identification is sufficiently remote in every single case where anonymisation is relied on. In fact, this will require an ‘Anonymisation Impact Assessment’ (I have just come up with this term) which must include an assessment of re-identification risks. The joint paper acknowledges that such risks are ‘never zero’ (‘except for specific cases where data is highly generalised’) and that ‘a residual risk of re-identification must be considered’.
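
One possible ingredient of such an assessment (my own illustration; the joint paper does not prescribe any particular metric) is measuring k-anonymity over the quasi-identifiers of the released dataset: the smaller the smallest group of records sharing the same quasi-identifier values, the further the residual re-identification risk is from ‘sufficiently remote’.

```python
# Simplified re-identification risk check: k-anonymity over quasi-identifiers.
# A small k means some individuals sit in tiny groups and can be singled out.
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical 'anonymised' health records.
dataset = [
    {"zip": "114 35", "birth_year": 1971, "sex": "F", "diagnosis": "flu"},
    {"zip": "114 35", "birth_year": 1971, "sex": "F", "diagnosis": "asthma"},
    {"zip": "114 35", "birth_year": 1983, "sex": "M", "diagnosis": "flu"},
]

print("k =", k_anonymity(dataset, ["zip", "birth_year", "sex"]))  # k = 1: someone is unique
```

A real assessment would of course also document the context of the release, the plausible attackers and the auxiliary data available to them, not just a single metric.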

To date, although addressed by WP29 and adopted by the GDPR, the notion of anonymisation and its application still remains ‘terra incognita’.

EDPB Recommendations 01/2020 – softening without being too soft?

Right after the final version of the Recommendations 01/2020 was issued, we (myself included) started to believe that now, at last, we would be living the good life!

The reference to the inability to rely on “subjective factors such as the likelihood of public authorities’ access to your data” is gone; data exporters may now assess how the laws are applied in practice, and even the importer’s previous experience.

In fact, this may turn out to be nothing more than initial euphoria. Let’s be honest: we were happy because we understood that in the majority of cases the legislation of a third country will end up in the cohort of “problematic legislation” (para 43.3).

Okay, para 43.3 says that “you may decide to proceed with the transfer without being required to implement supplementary measures, if you consider that you have no reason to believe that relevant and problematic legislation will be applied, in practice, to your transferred data and/or importer”. That’s the exit, isn’t it? Let’s find some practice showing that the “problematic legislation” does not apply to our transfer, and there is no need to think about supplementary measures. Everyone’s happy.

Not really. The EDPB imposes significant requirements on the “sources of information” confirming our conclusions.

A non-exhaustive list of these is contained in Annex 3 (various reports from various credible organisations, warrants from other entities…); they must be “relevant, objective, reliable, verifiable and publicly available or otherwise accessible” (para 46). “Documented practical experience of the importer with relevant prior instances of requests” alone cannot be relied on (para 47).

The question here is: do you know a third country with “problematic legislation” but at the same time with “relevant, objective, reliable, verifiable and publicly available or otherwise accessible” practice confirming that there is not really a problem for the transferred data?

In any event, it is clear: supplementary measures are here to stay.

Some fresh thoughts and updates on the new #SCC

1. The SCC cover data transfers to importers (i) established in third countries AND (ii) NOT subject to the #GDPR through Article 3(2). This is not clearly articulated in the implementing decision or in the SCC themselves, as the recitals and articles of both seem to contain contradictory information. From confidential sources it has become known that the Directorate-General for Justice and Consumers will soon publish an FAQ clarifying these issues. The European Commission is not taking any position on the definition of the concept of international data transfers, though.

2. It is not sufficiently clear to what extent negotiating parties may “add other clauses” to the SCC. An example I have seen in one of the #IAPP articles: would clauses limiting liability between the parties (not towards data subjects, of course) contradict the SCC?

3. As the SCC are based on a modular principle, one very formal issue is still unclear: when building the SCC, should the labels (“Module One: …” etc.) continue to appear in the clauses? What to do with insertions in the middle of the text (especially for Module Three) when other clauses are used at the same time is also not perfectly clear.

4. In terms of assessment, the new SCC say that the parties, when assessing how law and practice in a third country impact an importer’s ability to comply with the SCC, are encouraged to take into account “reliable information on the application of the law in practice (such as case law and reports by independent oversight bodies), the existence or absence of requests in the same sector and, under strict conditions, the documented practical experience of the data exporter and/or data importer”. This is a clear shift from the strict position taken by the EDPB Recommendations 01/2020 that parties should take into account “objective factors, and not rely on subjective factors such as the likelihood of public authorities’ access”.

The final version of the #EDPB Recommendations 01/2020 is in the pipeline, and perhaps some important things will be changed compared to the current version released for public consultation.

Sensitive employee data made public in Finland

Okay, there were only 7 employees, and this personal data breach, which was investigated by the Finnish DPA, concerned a single employee who was on sick leave.

What is super interesting about this case is that the employer (a family business) published the fact that the employee was on sick leave on the company website. It seems that because the employee had set an automated email response saying that he/she was on sick leave, the employer got the idea that this data was now public data.

The decision then digs into the employment act and secrecy concerning employee data, and the outcome was that sanctions would be imposed on this business, i.e. it was a personal data breach which has an impact on ‘rights and freedoms’.

Clearly I’ve cut out a load of details here… but what is important is that even small family businesses are not immune to GDPR sanctions.

1177 (Sweden) audit result is final

This is a super interesting case. 1177 is the number used in Sweden to ring your healthcare provider. There was a slight personal data breach reported in 2020 whereby 2.7 million calls were publicly available. Apparently the voice data was not encrypted.

The audit by the Swedish Supervisory Authority has resulted in fines of 12 million SEK (1.2 million €) for the data controller (Med Help), 650k SEK (65k €) for Voice Integrate, 500k SEK (50k €) for county Stockholm, and 250k SEK (25k €) each for counties Värmland and Sörmland.

A US update on the TikTok saga

As you know, Trump tried to ban TikTok from the US, and a compromise was reached with TikTok that US user data would only be stored in US data centres. Sounds a bit similar to the Irish ruling in 2020. What I am thinking is that US intelligence has the power/mandate to access the data of EU data subjects under FISA 702, so what if China has something similar?

Anyhow, despite my speculations, there is a new development. It seems that biometric data may or will be collected by TikTok, as it stands now only from US TikTok users, although consent will be required. Apparently, though, only a handful of US states actually require consent for the collection of biometric data.

But what about all the underage users? There is a law in the US (COPPA) which mandates parental consent for the collection of personal data from children under 13. A significant number of TikTok users are minors, and the mind boggles when it comes to the collection of biometric data of minors….. how aware are the parents? More and more I am coming to the view that TikTok should be banned…. even though my daughter is a user, and the fun and benefits are boundless.

An interesting case from Austria that touches on Article 85 GDPR and refers to the balancing test developed by the ECtHR for determining whether a given instance of processing was for journalistic purposes or not.

The case involved personal data mentioned by an ex-convict in his Facebook post. Ironically enough, the Austrian DPA and the court came to different conclusions using the same balancing test. Contrary to the DPA’s decision, the court found a violation of the data subject’s rights.

While the case is interesting in itself, it also raises questions about how consistently the various tests and methodologies developed by #WP29 and the #EDPB in their many papers are applied to practical situations. In a nutshell: “two enforcers – three opinions”.

#dataprotection #lawandlegislation #privacy #politicsandlaw #compliance #dataprivacy #gdpr

Risk-based approach

I brushed up on one important thing: the risk-based approach (RBA), which WP29 touched upon in 2014 in its “Statement on the role of a risk-based approach in data protection legal frameworks” (WP 218).

So what is the RBA really about? The key phrase from the Statement that suggests the answer: “… a data controller whose processing is relatively low risk may not have to do as much to comply with its legal obligations as a data controller whose processing is high-risk”.

What does it mean?

The RBA does NOT mean that a controller may ignore some of its obligations if it processes low-risk data. That does not lead to compliance, and it is a common misconception I’ve seen in my practice many times. Instead, the RBA means that in this case a controller may do less to be compliant.

In fact, it is not correct to say that the whole GDPR follows the RBA. Only particular articles in particular cases do so (Art. 25, 30(5), 32-35 and some others).