As you know, Trump tried to ban TikTok from the US, and a compromise was reached with TikTok that US user data would only be stored in US data centers. Sounds a bit similar to the Irish ruling in 2020. What I am thinking is that US intelligence has the power/mandate to access the data of EU data subjects under FISA 702, so what if China has something similar?
Anyhow, speculation aside, there is a new development. It seems that biometric data may or will be collected by TikTok; as it stands now, only from US TikTok users, and consent will be required where the law demands it. Apparently some US states now require consent for the collection of biometric data!
But what about all the underage users? There is a law in the US which mandates parental consent for minors. A significant number of TikTok users are minors, and the mind boggles when it comes to the collection of biometric data of minors… how aware are the parents? More and more I am coming to the view that TikTok should be banned… even though my daughter is a user, and the fun and benefits are boundless.
An interesting case from Austria that touches on Article 85 GDPR and refers to the balancing test developed by the ECtHR for determining whether the processing at issue was for journalistic purposes or not.
The case involved personal data mentioned by an ex-convict in his Facebook post. Ironically enough, the Austrian DPA and the court came to different conclusions using the same balancing test. Contrary to the DPA’s decision, the court found a violation of the data subject’s rights.
While the case is interesting in itself, it also raises questions about the consistency with which the various tests and methodologies developed by #WP29 and #EDPB in their many papers are applied to practical situations. In a nutshell: “two enforcers – three opinions”.
So what is the RBA really about? The key phrase from the Statements that suggests the answer: “… a data controller whose processing is relatively low risk may not have to do as much to comply with its legal obligations as a data controller whose processing is high-risk”.
What does it mean?
The RBA does NOT mean that a controller may ignore some of its obligations if it processes low-risk data. Ignoring obligations does not lead to compliance, and this is a common misconception I’ve seen in my practice many times. Instead, the RBA means that in such cases a controller may do less to be compliant.
In fact, it is not correct to say that the whole GDPR implies an RBA. Only particular articles do so in particular cases (Art. 25, 30(5), 32–35 and some others).
Enforcement actions against a Dutch supermarket and the Swedish police remind us that the bar for using facial recognition technology (FRT) is very high.
Even when FRT is used for security purposes inside the supermarket, giving customers a privacy notice that FRT is in use is not enough. The Dutch DPA, however, issued no fine (only a warning).
In another case, the Swedish police unlawfully processed biometric data for facial recognition without conducting a DPIA, which resulted in a fine of approximately EUR 250,000. In addition, the Swedish police were ordered to delete all data transferred to an external database (Clearview AI) and to notify data subjects that their personal data had been transferred.
I am pretty creative when it comes to taking the GDPR legal stuff and working out how to make it work in practice. No business/organisation should hit a wall of what I call ‘GDPR paralysis’ because something legal prevents the business from functioning. Our livelihood depends upon a working economy and a healthy GNP. In fact, if we didn’t have this, human rights start to become problematic, because if we as private people do not have access to jobs, we lose what is, IMHO, the most important word of all: CHOICE.
Whenever I am presented with a stop, i.e. a “no can do”, it is an opportunity to think anew. Schrems II is one such example. I did not see it as a stop on international transfers to the US. It just meant we needed increased diligence: document everything and do those Transfer Impact Assessments (TIAs) so we understand the risks to the rights and freedoms of the natural person, and identify supplementary measures. We need to be realistic.
My take on this previously was to assess the risk to the rights and freedoms of the individual; now, however, this approach has been kicked out, ignored. I wonder where the logic, the balance, in this decision lies. Clearly, if Mailchimp were being used to send out marketing communications from a sex shop, or from a specialist group around a health condition, I could understand it… but an email address used in a standard non-personal communication?
I am wondering which monkey was behind this decision, or am I missing something?
Based on #UK and #EU laws, it outlines starting tips for conducting a data retention review (which, however, all begins with a data mapping exercise), provides advice on how to decide on retention periods, advises on creating a data retention policy and schedule, and much more.
Attention: the Guidance refers to #anonymization as an acceptable way of handling data when the retention period comes to an end. It should be noted that ‘true’ anonymization is very hard (if at all possible) to achieve, especially given that there is no industry standard prescribing a strict sequence of steps to render data anonymized. In addition, amid the constant development of #bigdata and #AI algorithms, data we consider truly anonymized today may not keep that status tomorrow.
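A minimal sketch of why naive “anonymization” often fails (the function name and e-mail addresses are invented for illustration): hashing an identifier is only pseudonymization, because anyone holding a list of candidate values can re-identify the record by brute force.

```python
import hashlib

def pseudonymize(email: str) -> str:
    """Replace an e-mail address with its SHA-256 digest."""
    return hashlib.sha256(email.lower().encode()).hexdigest()

# A controller stores only the digest, believing the data is "anonymized".
stored = pseudonymize("alice@example.com")

# An attacker holding a list of candidate addresses (a breach dump,
# a marketing list) can re-identify the record by hashing each candidate
# and comparing it against the stored digest.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
reidentified = [c for c in candidates if pseudonymize(c) == stored]

print(reidentified)  # ['alice@example.com']
```

Because the original value remains recoverable, data treated this way stays personal data under the GDPR; the same linkage logic is why datasets stripped of direct identifiers can still be re-identified as richer auxiliary data becomes available.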
Booking.com has been fined €475k because of this. It did report the breach, but what is significant about this fine is the timeline: a minor incident reported by a customer of a hotel was dismissed on 9 Jan; a second, identical report from another customer of the same hotel triggered an investigation on 13 Jan. However, the breach was not reported until 7 Feb, three days after the internal security investigation was concluded. The fine was imposed because, according to the Dutch DPA, Booking.com should have reported the breach on 13 Jan: under Article 33 GDPR, a breach must be notified within 72 hours of the controller becoming aware of it.
In fact, it is common practice not to report a breach until one is sure there has been a breach, and sometimes until the circumstances of the breach are clear. This case shows that this is not an advisable route.
Here comes yet more evidence that consistent application of the #GDPR across the #EU is, thus far, just a ‘shimmering dream’.
The Belgian DPA issued a decision saying that the unintentional (due to human error) sending of an e-mail containing personal data does not amount to a violation of Article 32 (security of processing), which prevents the incident from being classified as a data breach.
This appears to contradict the #WP29 Guidelines on personal data breach notification and the recent #EDPB Guidelines 01/2021 on examples regarding data breach notification. Both documents, by contrast, address examples of mistakenly sent e-mails, and neither names the sufficiency or insufficiency of security measures as a factor in whether an incident should be classified as a data breach.
Decisions like this clearly erode the idea and value of ‘consistency’ proclaimed by the GDPR and promoted by the EDPB.
Another non-obvious conclusion reached by the Belgian DPA is that unlawfully obtained data cannot be further lawfully processed.
We talked about compliance challenges, expectations for 2021 and beyond, careers in privacy & data protection, and how expertise in both the EEA and Russian jurisdictions can streamline work in a global company.
The court opined that the German Act on Regulatory Offences applies, which is in clear contradiction with the GDPR and the position of the Conference. What is especially important here is that this is all about fines, which are often the strongest ‘motivation’ to comply (let’s be realistic).
Meanwhile, Austrian and French courts are creating their own case law on this issue. Overall… it is a beautiful mess 🙂