Earlier today I had the opportunity to watch the highly useful IAPP webinar entitled "What Works: Benchmarking and Improving Your Privacy Program." I was particularly intrigued by the comments on improving and re-engineering a privacy office. The presenters emphasized the constant evolution of privacy regimes on a global scale, and that adaptability and flexibility are now key for people and structures (such as a privacy office).
That got me thinking about a large part of my career to date: the establishment and re-engineering of research offices at American universities. By "research" I mean the administration of grants, contracts, and other legal instruments that support faculty research. International grants and contracts are a large component of this area. For instance, the NIH (National Institutes of Health) in Bethesda, Maryland, funds research undertaken by European scientists. That global dimension will only continue to grow in a post-pandemic world, although a robust European posture toward research appears to be in question as I write this.
My own involvement with the establishment and re-engineering of research offices began at Northwestern University in Evanston, Illinois. We had a major challenge at NU, as we were re-engineering operations while maintaining the administration of $165M USD in research funding. After that, I established two research offices at smaller universities and then established a contracts/industrial agreements office at a larger university in Texas. While at the latter institution I oversaw two additional reorganizations that built upon the original office.
Those universities provided me with a lifetime of unique and challenging experiences. So, here are my thoughts and observations on best practices for building and re-engineering offices, along with specific comments on the privacy office context:
Every university research office was designed to be public-facing, client (faculty) oriented, and collegial with other university offices. It was critical that the research office work effectively with other university offices. What is the parallel situation in privacy? A privacy office that works collaboratively with a security office (or any other office, for that matter).
No research office was meant to operate as an “island” or a “silo.” A privacy office should not be its own island or silo within a company or other organization.
One particular aspect of these offices was that they were designed for staff to "get out" into the greater community of the university, and beyond. It seems to me that privacy office personnel serve in a similar capacity within their environment.
When re-engineering an office, particular attention must be paid to client satisfaction and "upping your game." What does your office do well in Version 1.0, and what do you want to do well in Version 2.0? What pressure points need to be eliminated?
Professional development opportunities for staff must be plentiful. I see this as a common thread between the privacy and research worlds. When you think about it, both areas are intellectually vibrant and subject to rapid change. While it is important to stay abreast of such change, getting ahead of it is preferable.
How are you going to measure office success? What are the metrics or KPIs? In the realm of research contracting, for instance, one such measure is the length of time it takes to get a contract negotiated and signed. In privacy, one such metric is the length of time it takes to respond to DSARs (data subject access requests).
Lastly, the human/interpersonal dimension of an office is just as important as the technical/legal dimension. Not only must the office be enjoyable for the staff to work in, but it must be viewed as, and in reality be, an enjoyable partner within the environment(s) in which it operates. Research management and privacy management are truly Art + Science.
Research offices and privacy offices have more in common than many people might think. Both operate in a rapidly changing global environment and are intellectually vibrant. It will be quite interesting to see how these offices function and change over the next few years.
The IAPP has set up a valuable resource collecting the guidance and statements issued by national DPAs in response to the recent CJEU ruling in the so-called 'Schrems II' case. The IAPP will aim to update the register on an ongoing basis.
While privacy pros advise putting SCCs in place as a substitute for the invalidated Privacy Shield, it should be noted that the SCCs are themselves a safeguard with a limited scope of application: (i) they still do not cover many processing scenarios (e.g., processor-to-controller, processor-to-sub-processor); (ii) they are quite outdated (issued in 2001, 2004, and 2010, in the pre-GDPR world); and (iii) their validity has been made subject to several conditions by the 'Schrems II' decision.
As the whole privacy community already knows, the CJEU has today struck down the EU-US Privacy Shield scheme, while confirming the validity of the SCCs.
The arguments against Privacy Shield have changed little since the 'Schrems I' decision that invalidated Safe Harbour: governmental intrusion, lack of proportionality, and the ineffective role of the ombudsperson.
What is really new is that an EU-based data controller relying upon the SCCs is now expected to assess how public authorities in third countries obtain access to personal data and how the legal systems in those countries work.
Two questions still remain:
1. How are the controllers in question expected to conduct such an evaluation? Is there any methodology in this regard? It may seem somewhat similar to what we have in Article 45(2), which lists the factors the Commission shall evaluate when issuing adequacy decisions. However, a private entity relying on the SCCs is not an EU body and often lacks the resources and understanding needed to conduct such research and put the necessary safeguards in place.
2. Enforcement. With DPAs already facing a lack of financial resources and manpower, the CJEU's decision places an even greater burden on them. A requirement newly invented by the CJEU may thus easily end up having no practical effect due to insufficient oversight.
Bonus question: taking into account the 'accountability' principle, how should exporting controllers demonstrate their compliance with this new obligation?
The DPA of Baden-Württemberg (Germany) fined a health insurance company EUR 1,240,000 for insufficient implementation of technical and organisational measures (TOMs), which resulted in the personal data of approximately 500 individuals being accidentally processed for advertising purposes without due consent.
The fine is quite high, especially given that there were some mitigating factors in this case:
a relatively small number of data subjects concerned
cooperation with the DPA
TOMs were not entirely absent; their level of implementation was merely insufficient
Besides, no data breaches or other factors posing a (high) risk to data subjects were identified.
The investigation resulted in one of the highest fines issued under Article 32 (if not the highest). This can be explained, in particular, by the adoption of the German model for calculating fines under the GDPR.
In any case, this is another reminder for controllers and processors of the importance of putting in place TOMs appropriate to the risk, as 'somewhat good' TOMs are unlikely to be enough.
If you are interested in contract negotiation best practices, check out my discussion in the latest installment of NCURA YouTube Tuesday (National Council of University Research Administrators). Not exactly concerned with privacy per se, but I would consider the topic to be within the larger universe of privacy issues.
Privacy is a fundamental human right recognized in the UN Declaration of Human Rights, the International Covenant on Civil and Political Rights, and many other international and regional treaties. Privacy underpins human dignity and other key values such as freedom of association and freedom of speech. It has become one of the most important human rights issues of the modern age. And yet, for many, the GDPR is the beginning of privacy law as we know it, the most remarkable difference being the introduction of some truly sizeable fines. So how does this affect the ethics of privacy?
Privacy is, by its nature, an element of compliance. Compliance with privacy laws, and with the "intention" of those laws, is how we demonstrate optimal data protection. When talking of compliance, I always say that compliance is not just about doing the right thing, but about showing that we are doing the right thing. Compliance is only possible with accountability. No one ever challenges the idea that compliance is about doing the right thing. We should remodel our approach to privacy away from mere compliance with the law and towards the behaviour of doing the right thing. The GDPR helps us show we are doing the right thing; it helps us demonstrate our accountability, but it is not the reason privacy exists.
Why is this important for companies? Privacy is now a central element of business ethics. It forms part of the corporate approach to handling controversial subjects in order to gain public trust and support. No matter the industry, data is essential to the functioning of business. Without an ethical approach to handling data, it will not be entrusted to those who need it most to make business run; an ethical approach also maintains reputation, helps avoid significant financial and legal issues, and thus ultimately benefits everyone involved.
France's Council of State has ordered the CNIL (the French data protection watchdog) to cancel the parts of its cookie guidelines banning cookie walls, holding that the ban was not valid. The court explained that the CNIL had exceeded its specific mandate under the concept of 'soft law' ('droit souple'), which refers to instruments, such as regulatory authorities' guidelines, that do not create legal rights or obligations.
Although a recent update to the EDPB Guidelines on consent invalidated 'cookie walls', our patient may still be very much alive: similar court decisions may well follow in other Member States.
Recently, the BfDI (the German federal watchdog) said that "cookie walls are permitted if a comparable service is also offered without tracking, for example, as a paid service". This came right after the updated EDPB Guidelines on consent were published.
PwC has developed a facial recognition tool that logs when employees are absent from their computer screens while working from home. In particular, there has to be a specific excuse for any absence (including toilet breaks).
Too invasive? No doubt. Disproportionate, with no likely legal grounds? WP29 Opinion 2/2017 on data processing at work suggests a positive answer, especially given that the tool monitors employees in a private location.
Predictably, this caused a barrage of criticism from privacy enthusiasts, followed by unconvincing explanations from PwC that the tool helps "support the compliance environment required for traders and front office staff in financial institutions".
At the same time, there may be much more than meets the eye: monitoring employees in their homes may also occasionally involve monitoring their family members through webcams. Besides, depending on technical peculiarities and the ability to scan the background of a private premises, such monitoring may also reveal special-category data about, e.g., employees' sex life or religious beliefs (Article 9 of the GDPR).
An old rule every privacy pro has learnt by heart: "risk of negative consequences (e.g. substantial extra costs)" for the data subject = no freely given consent.
Substantial. But what if the extra costs are not substantial? What if, say, $10 turns into $11 if you refuse to consent? Is that OK?
At least, the German watchdog seems to say yes. Some privacy pros agree (see below).
One could say I am being picky; indeed, a $1 or even $10 surcharge is unlikely to lead to bankruptcy. But what happens if this practice becomes commonplace? Data subjects will overpay every time they are asked for consent. $1 per consent request turns into $10 per ten requests. How long could that 'receipt' be a year from now? Five years from now?
You have got to the crux of the matter. While the EDPB considers substantiality in the context of a one-off consent request, it does not address the aftermath when overcharging becomes the rule.