The IAPP article provides a retrospective of the #anonymisation issue in the European data protection landscape and reiterates that there is still no ‘one-size-fits-all’ approach to rendering personal data #anonymised.
The current state of affairs can be described as confusion and vacillation, and the main culprit is, probably, #WP29, which took contradictory stances on anonymisation in 2007 and 2014, followed by silence on the #EDPB’s side and, again, contradictory stances of national DPAs leaning towards either the 2007 or the 2014 approach.
The simple thing is that straightforward ‘disguising of identity’ (e.g. by one-way cryptography), as WP29 suggested in 2007, can no longer be accepted as anonymisation (unless, of course, a national DPA states otherwise). And simple thing number two is that there is no industry standard describing step-by-step anonymisation algorithms and techniques.
From a practical standpoint, this calls for a case-by-case assessment by the anonymising entity. The recent AEPD-EDPS joint paper ‘on 10 misunderstandings related to anonymisation’ (the ‘joint paper’) specifically notes that ‘anonymisation processes need to be tailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons’.
Of the options suggested by the article, the most practical and realistic is probably arguing that the risk of re-identification is sufficiently remote in every single case where anonymisation is relied on. In practice, this will require an ‘Anonymisation Impact Assessment’ (I have just come up with this term), which must include an assessment of re-identification risks. The joint paper acknowledges that such risks are ‘never zero’ (‘except for specific cases where data is highly generalised’) and that ‘a residual risk of re-identification must be considered’.
To date, although addressed by WP29 and adopted by the GDPR, the notion of anonymisation and its application remain ‘terra incognita’.