The Holy Grail Myth: Hashing, Pseudonymization, and the Evolving Boundaries of Personal Data Protection
For over two decades, data hashing has functioned as a putative panacea within the data protection landscape—a cryptographic technique purportedly capable of resolving the fundamental tension between organizational data processing imperatives and individual privacy rights. The underlying premise was deceptively simple: transform personal information into an ostensibly irreversible alphanumeric string, and such data would cease to constitute personal data subject to regulatory oversight. If no party could reconstitute the original value, the reasoning went, the privacy concern would dissolve entirely.
This assumption has proven illusory. The Court of Justice of the European Union’s judgment of September 4, 2025, in European Data Protection Supervisor v. Single Resolution Board (Case C-413/23 P) introduces a nuanced analytical framework for assessing the legal status of pseudonymized data. The Court annulled the General Court’s judgment and remanded the matter for further proceedings, while articulating two foundational principles: first, that the classification of data as “personal” may vary depending upon the processing entity; and second, that data controllers cannot invoke prospective pseudonymization measures to circumvent their obligation to inform data subjects of potential recipients at the moment of data collection.
I. Why Hashing Fails to Achieve Anonymization
Hashing constitutes the transformation of any data string into a fixed-length character sequence through the application of a mathematical function. The process is deterministic—identical input data invariably generate identical output. This characteristic, while essential for data integrity verification and password storage, simultaneously constitutes the technique’s fundamental vulnerability: a hash preserves the capacity for unambiguous identification.
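A minimal sketch using Python's standard hashlib module makes this determinism concrete (the identifier value is hypothetical):

```python
import hashlib

# Identical input invariably yields an identical digest: the property that
# makes hashing useful for integrity checks also preserves linkability.
identifier = "90010112345"  # hypothetical national identification number
h1 = hashlib.sha256(identifier.encode("utf-8")).hexdigest()
h2 = hashlib.sha256(identifier.encode("utf-8")).hexdigest()
assert h1 == h2  # deterministic: the hash is a stable pseudonym, not noise
```

Because the output is stable, the hash itself can serve as a persistent identifier for tracking, which is precisely the vulnerability described above.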
The United States Federal Trade Commission, in its July 2024 publication entitled “No, hashing still doesn’t make your data anonymous,” articulated this proposition without equivocation: the logic pursuant to which hashed data equals anonymous data is “as old as it is flawed.” The Commission emphasized that hashed identifiers continue to enable user tracking and profiling, thereby occasioning tangible harm to affected individuals.
A straightforward thought experiment illustrates the problem. A Polish national identification number (PESEL) comprises eleven digits in a specified format, resulting in a constrained theoretical space of possible combinations. An adversary possessing a hashed database of PESEL numbers need only process all conceivable combinations through the relevant algorithm and compare the results against the stored hashes. With contemporary computational capacity, this undertaking requires minutes rather than years. Indeed, the FTC cautioned as early as 2012 that reversing hashed Social Security numbers requires “less time than it takes you to get a cup of coffee.”
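The attack is short enough to sketch in full. For brevity, the example below searches a hypothetical four-digit space rather than the real eleven-digit PESEL format, but the code is identical for any constrained identifier space; only the running time changes:

```python
import hashlib

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

# A stored, ostensibly "anonymized" hash of a short numeric identifier.
stored = sha256_hex("7345")

# Exhaustive search over the constrained input space.
for i in range(10_000):
    candidate = f"{i:04d}"
    if sha256_hex(candidate) == stored:
        print("re-identified:", candidate)
        break
```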
II. The Global Regulatory Landscape
A. European Union: The Conceptual Precision of the GDPR
The European data protection framework has, from its inception, maintained a rigorous distinction between pseudonymization and anonymization. Article 4(5) of the General Data Protection Regulation defines pseudonymization as the processing of personal data such that attribution to a specific data subject becomes impossible without recourse to additional information, provided that such additional information is maintained separately and subject to technical and organizational measures precluding identification.
The operative term is “without.” Pseudonymization does not eliminate the possibility of identification—it merely renders identification more difficult. Pseudonymized data remain personal data within the meaning of the GDPR. Only truly anonymous data—data for which identification is impossible through all reasonably likely means—fall outside the Regulation’s scope.
Recital 26 of the GDPR specifies that assessments of identifiability must account for all objective factors: the costs and time necessary for identification, the technology available at the time of processing, and foreseeable technological developments. This constitutes a dynamic framework—a technique that achieves effective anonymization today may prove inadequate tomorrow.
Article 32 of the GDPR enumerates pseudonymization and encryption as examples of appropriate technical and organizational measures, while simultaneously requiring that safeguards be calibrated to the state of the art, implementation costs, and the nature, scope, context, and purposes of processing. Pseudonymization thus functions as an instrument of risk minimization rather than a dispensation from regulatory obligations.
Opinion 05/2014 of the Article 29 Working Party on anonymization techniques confirms that hashing constitutes a pseudonymization technique rather than an anonymization technique. Hashed data remain personal data unless the risk of re-identification is “insignificant or non-existent.”
B. United States: The FTC’s Pragmatic Approach
The American privacy protection framework, though more fragmented than its European counterpart, has developed a coherent position regarding hashing. The FTC consistently maintains that hashed data are not anonymous and remain subject to the same standards applicable to plaintext data.
The BetterHelp matter (2023) serves as an instructive cautionary example. The online counseling platform transmitted users’ hashed email addresses to Facebook along with information derived from health questionnaires. The FTC determined that BetterHelp knew Facebook would “undo the hashing and reveal the email addresses.” The consequence: Facebook learned which individuals were seeking mental health counseling and leveraged this sensitive information for advertising targeting purposes. The settlement required payment of $7.8 million and imposed an obligation to obtain express user consent prior to any data sharing.
The Nomi matter (2015) concerned consumer tracking within retail establishments through mobile device MAC addresses. The company contended that hashing MAC addresses protected consumer privacy. The FTC complaint stated unequivocally: “Hashing obfuscates the MAC address, but the result is still a persistent unique identifier.” The capacity to track an individual over time persists irrespective of the identifier’s form.
The Premom matter (2023) involved a fertility tracking application that shared unique advertising and device identifiers with third parties, notwithstanding representations that it would share only “non-identifiable data.” The FTC demonstrated that these identifiers enabled third parties to “circumvent operating system privacy controls, track individuals, infer the identity of an individual user, and ultimately associate the use of a fertility app to that user.”
C. California: The CCPA Deidentification Standard
The California Consumer Privacy Act introduces the concept of “deidentified” data, which may be excluded from regulatory coverage. However, the definition contained in Section 1798.140(h) requires satisfaction of three cumulative conditions: implementation of technical measures precluding reidentification (such as salting or robust algorithms), establishment of internal procedures prohibiting reidentification attempts, and public commitments to maintain information in deidentified form.
The mere fact of hashing proves insufficient. Organizations must demonstrate adoption of both technical and organizational measures precluding association of data with specific individuals. The 2025 amendments further tightened breach notification requirements, establishing a thirty-day reporting deadline from incident detection—among the shortest in the United States.
D. China: The PIPL and Data Localization
The Personal Information Protection Law of the People’s Republic of China (PIPL), effective November 2021 and partially modeled on the GDPR, introduces requirements reflecting China’s regulatory priorities. The statute has extraterritorial reach—it governs processing of personal information of individuals located in mainland China, including processing conducted abroad for purposes of providing products or services or analyzing behavior.
The PIPL mandates categorization and management of personal information with appropriate technical measures. Data localization requirements bear particular significance: operators of critical information infrastructure and entities processing data above specified thresholds must store personal information within Chinese territory. Data protection impact assessments are required prior to processing sensitive information, automated decision-making, or cross-border transfers.
According to Bloomberg Law, penalties for serious violations may reach 50 million yuan or five percent of the preceding year’s revenue. Individuals bearing direct responsibility may face fines ranging from 100,000 to 1 million yuan and prohibition from holding managerial positions.
E. International Standards: ISO/IEC 27001:2022
ISO/IEC 27001:2022 constitutes a global reference point for information security management systems. Control 8.24, concerning cryptographic safeguards, requires formal documentation of cryptographic policy specifying which information requires cryptographic protection, which algorithms are authorized, and which encryption levels correspond to particular data sensitivity categories.
The standard recommends AES-256 for symmetric encryption, RSA with minimum 2048-bit keys (4096-bit preferred) for asymmetric encryption, SHA-256 or higher for integrity verification (but not for password storage), and Argon2id, bcrypt, scrypt, or PBKDF2 for password hashing. The explicit differentiation among hash functions appropriate for various applications reflects recognition that no universal approach exists.
III. The Interpretive Breakthrough: The CJEU Judgment in EDPS v. SRB
A. Factual Background
The dispute arose from compensation proceedings for shareholders and creditors of Banco Popular Español following the bank’s resolution in June 2017. The Single Resolution Board (SRB) conducted a two-phase procedure implementing the right to be heard.
During the registration phase, affected parties submitted identity documentation and evidence of ownership of capital instruments. During the consultation phase, they submitted comments on the SRB’s preliminary decision and Deloitte’s valuation. The SRB applied pseudonymization: each comment received a unique thirty-three-character alphanumeric code, and staff members analyzing comments had no access to identification data from the registration phase.
The SRB transmitted 1,104 comments to Deloitte, identified solely by alphanumeric codes. Deloitte neither had nor has access to the database enabling correlation between codes and author identities. The European Data Protection Supervisor determined that the SRB violated the information obligation under Article 15(1)(d) of Regulation 2018/1725 by failing to identify Deloitte as a data recipient.
B. Principal Holdings
The Court annulled the General Court’s judgment with respect to the first ground of appeal. The CJEU held that personal opinions or views, as expressions of an individual’s thinking, are inherently and inextricably linked to that individual. Given that the comments expressed their authors’ personal opinions, no additional analysis of content, purpose, or effects was necessary to establish that they “related to” a natural person within the meaning of Article 3(1) of Regulation 2018/1725.
Regarding identifiability, the Court rejected as unfounded the EDPS’s position that pseudonymized data invariably and with respect to every entity constitute personal data solely by virtue of the existence of information permitting identification of the data subject. The Court indicated that pseudonymization may—depending upon the circumstances of a given case—effectively preclude parties other than the controller from identifying data subjects, such that for those parties the data subject is no longer identifiable or was never identifiable (paragraph 86). Simultaneously, however, the Court cautioned that where one cannot exclude that a recipient will be able reasonably to attribute pseudonymized data to a specific individual—for example, through comparison with other data at its disposal—such data must be regarded as personal data both with respect to the transfer itself and with respect to any subsequent processing by that recipient (paragraph 85).
The holding concerning the relevant perspective for assessing the information obligation bears particular significance. The Court unequivocally determined that this obligation forms part of the legal relationship between data subject and controller, and thus concerns information in the form in which it was provided to the controller—antecedent to any potential transfer to third parties (paragraph 110). Consequently, for purposes of applying the information obligation under Article 15(1)(d), identifiability must be assessed at the moment of data collection and from the controller’s perspective (paragraph 111). The question whether a recipient, upon receiving pseudonymized data, will be able to identify the data subject is irrelevant to discharge of this obligation. As the Court observed, argumentation seeking to adopt the recipient’s perspective would effect an impermissible temporal deferral of compliance review and would disregard the subject matter of this obligation, which is inextricably bound to the relationship between controller and data subject (paragraph 114).
Advocate General Spielmann’s Opinion of February 6, 2025, analyzes these issues in detail, emphasizing the necessity of distinguishing between the controller’s perspective and the recipient’s perspective regarding pseudonymized data.
C. Implications for Practice
The judgment establishes a nuanced, multi-level framework for assessing the status of pseudonymized data. First, the Court confirmed that identical data may possess different legal status depending upon the processing entity: for a controller possessing the key enabling identification, such data remain personal data (paragraph 76), whereas for a recipient lacking any reasonably probable means of identification, they may not possess such character (paragraphs 77, 86–87).
Second, however, the Court expressly indicated that the circumstance that pseudonymized data lack personal data character for a particular recipient does not affect assessment of such data in the context of potential transfer to third parties—where one cannot exclude that such third parties will be able reasonably to attribute the data to the relevant individual, such data must be regarded as personal data both with respect to the transfer and with respect to subsequent processing (paragraph 85).
Third, in the context of the information obligation under Article 15(1)(d) of Regulation 2018/1725, the recipient’s perspective is entirely irrelevant—the controller must inform data subjects of potential recipients at the moment of data collection, irrespective of planned pseudonymization (paragraphs 111–115).
It bears emphasis that the judgment of September 4, 2025 does not constitute the final resolution of the dispute between the EDPS and the SRB. The Court annulled the General Court’s judgment and dismissed as unfounded the SRB’s first ground challenging Article 3(1) of Regulation 2018/1725, holding that the comments transmitted to Deloitte constituted personal data from the SRB’s perspective as controller (paragraph 120). However, the matter was remanded for further proceedings regarding the second ground, concerning alleged violation of the right to good administration under Article 41 of the Charter of Fundamental Rights, which requires factual assessment not yet undertaken by the General Court (paragraphs 121–122).
Commentary from Hunton Andrews Kurth and Inside Privacy confirms the landmark significance of this judgment for EU data protection practice.
IV. Technical Standards: What Actually Works
A. Password Hashing Algorithms: A Hierarchy of Security
Contemporary password security standards require deliberately slow algorithms resistant to hardware-based attacks. Argon2id, winner of the Password Hashing Competition in 2015 and standardized in RFC 9106, is currently regarded as the optimal choice. The algorithm combines resistance to side-channel attacks with resistance to time-memory trade-off attacks.
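A minimal usage sketch, assuming the third-party argon2-cffi package (a widely used Python binding); the defaults shown are the library's own, which are intended to track RFC 9106 guidance:

```python
# pip install argon2-cffi
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # library defaults; tune time/memory cost as needed

stored = ph.hash("correct horse battery staple")  # salt generated internally
# The encoded string records algorithm, parameters, and salt with the digest.

try:
    ph.verify(stored, "correct horse battery staple")
    print("password accepted")
except VerifyMismatchError:
    print("password rejected")
```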
Bcrypt remains secure with appropriate configuration, notwithstanding its 1990s origins. The minimum work factor should be twelve; for high-security applications, thirteen to fourteen is advisable. Each increment doubles computational cost.
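For comparison, a sketch with the third-party bcrypt package, applying the work-factor guidance above:

```python
# pip install bcrypt
import bcrypt

password = b"correct horse battery staple"

# rounds=12 is the recommended floor; each increment doubles the cost
# for legitimate verification and offline attackers alike.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(bcrypt.checkpw(password, hashed))        # True
print(bcrypt.checkpw(b"wrong guess", hashed))  # False
```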
PBKDF2 finds application primarily where FIPS-140 compliance is required. The algorithm lacks resistance to memory-based attacks, rendering it vulnerable to parallelized GPU attacks. NIST SP 800-63B requires a minimum of 10,000 iterations, though security experts recommend 600,000 or more when using PBKDF2-HMAC-SHA-256.
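PBKDF2 requires no third-party dependency in Python; the sketch below uses the 600,000-iteration figure cited above (salt handling is illustrative):

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # unique random salt per password
iterations = 600_000    # recommended floor for PBKDF2-HMAC-SHA-256

derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

def verify(candidate: bytes) -> bool:
    # Re-derive with the stored salt and iteration count; compare in
    # constant time to avoid timing side channels.
    return hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", candidate, salt, iterations), derived
    )

print(verify(password))        # True
print(verify(b"wrong guess"))  # False
```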
SHA-256, SHA-512, MD5, and SHA-1 are entirely unsuitable for password storage. They were designed for speed—contemporary GPUs compute over 180 billion MD5 hashes per second. The absence of a computational cost factor renders brute-force attacks trivial.
B. Homomorphic Encryption: Future or Present?
Homomorphic encryption enables computation on encrypted data without decryption—a potentially revolutionary technology permitting analysis of personal data while preserving complete confidentiality. Operations performed on ciphertexts yield results identical to operations on plaintexts, and the secret key is never exposed to processing systems.
Significant challenges persist: computational overhead ranges from ten to one thousand times that of operations on plaintext data. Implementation complexity limits deployments to research projects and pilot programs. Applications include cloud processing with untrusted providers, privacy-preserving machine learning, secure multi-party computation, and medical data analysis.
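The core idea can be made tangible with a toy implementation of the Paillier scheme, which is additively homomorphic. The sketch below uses deliberately tiny, demonstration-only primes; it is a pedagogical illustration of the principle, not a production implementation, and real deployments rely on vetted libraries with far larger parameters:

```python
# Toy Paillier cryptosystem: multiplying ciphertexts adds plaintexts.
from math import lcm

p, q = 293, 433          # demonstration-only primes
n = p * q
n2 = n * n
g = n + 1                # standard simplification for the generator
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)     # valid precisely because g = n + 1

def encrypt(m: int, r: int) -> int:
    # E(m) = g^m * r^n mod n^2, with r coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1 = encrypt(17, r=101)
c2 = encrypt(25, r=113)

# The processor adds the plaintexts without ever decrypting them.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))    # 42
```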
C. Irreversible Hashing: Can a Controller Eliminate Its Own Verification Capability?
A distinct technical question arising in discussions of the boundary between pseudonymization and anonymization is whether one can design a hashing system such that even the controller loses the capacity for subsequent data verification.
Three Technical Models
Three fundamental approaches to hashing may be distinguished from the perspective of controller verification capability; a code sketch contrasting them follows the three models:
The Classical Model (Deterministic Hash Without Secret): Application of pure SHA-256 or similar algorithms permits controller verification where the controller knows a candidate input. One need only compute the hash and compare against the stored value. From the controller’s perspective, this is not an irreversible process—brute-force or dictionary attacks remain possible, particularly for data with constrained value spaces (national identification numbers, telephone numbers, email addresses).
The Keyed Hash Model (HMAC): Application of HMAC-SHA-256 with a cryptographic key creates “one-way tokens.” Without knowledge of the key, one cannot verify whether a given input value corresponds to a specific token. Where the controller by design lacks access to the key (for example, where it is stored in an independent HSM with restrictive access policies), verification becomes practically impossible.
The Key Destruction Model: The most radical approach involves hashing data with a key or salt, followed by controlled destruction of the key or mapping table. From that point forward, even the controller lacks any realistic path to determining whether a specific identifier corresponds to a specific record. From the GDPR perspective, such an operation approaches true anonymization—the absence of reasonable means of reidentification signifies departure from the scope of personal data.
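The following sketch, using only Python's standard hashlib, hmac, and os modules and a hypothetical identifier, contrasts the three models:

```python
import hashlib
import hmac
import os

identifier = b"user@example.com"  # hypothetical identifier

# Model 1 -- deterministic hash, no secret: anyone holding a candidate
# input can verify it by recomputation, so the controller retains (and
# cannot disclaim) the capacity to re-identify.
token_plain = hashlib.sha256(identifier).hexdigest()

# Model 2 -- keyed hash (HMAC): verification requires the key; if the key
# lives in an HSM the controller cannot read, verification is impractical.
key = os.urandom(32)  # in practice: generated and held inside an HSM/KMS
token_keyed = hmac.new(key, identifier, hashlib.sha256).hexdigest()

# Model 3 -- key destruction: after an audited deletion of the key, even
# the controller has no realistic path from token back to identifier.
del key  # stands in for controlled key destruction
# token_keyed now functions as an irreversible label for the record.

print(token_plain)
print(token_keyed)
```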
When Irreversible Hashing Makes Business Sense
Solutions eliminating the controller’s verification capability find justification in narrowly defined scenarios:
Transition from Personal to Anonymous Data Upon Relationship Termination: Where the business purpose is limited to statistics or aggregation and the relationship to a specific individual is no longer required, one may retain only irreversible identifiers serving, for example, deduplication or trend analysis, while destroying all keys or mappings. Benefits include departure from GDPR scope (no obligations regarding data subject rights, DPIAs, or request fulfillment) and reduced risk in the event of a security breach.
Analytical Systems with One-Way Tokens: Google Cloud Data Loss Prevention describes pseudonymization using cryptographic hashing as a mechanism for creating one-way tokens—the system uses them for correlation and analytics but does not contemplate reversal to original values. Source data may be deleted by the controller while the external system operates exclusively on irreversible tokens.
Privacy by Design as Competitive Advantage: In models where the controller assumes from the outset that it neither wishes nor is able to identify individuals (voting systems, anonymous surveys, whistleblower mechanisms), a solution involving technical impossibility of verification constitutes evidence that reidentification is unrealistic.
When Irreversible Hashing Lacks Business Justification
In most conventional business processes, the essence of processing is precisely the capability for verification or re-identification: customer service, billing, AML/KYC compliance, contract enforcement, and fulfillment of legal obligations. In such cases:
Loss of verification capability entails loss of data value. Where a controller cannot determine whether a given record pertains to a specific user, it cannot process complaints, withdraw consent, execute the right to object, or respond to law enforcement requests.
Conflict with regulatory obligations. The GDPR provides for numerous data subject rights: access, rectification, erasure, restriction, portability, and objection. Where a controller deliberately creates a system in which it is technically unable to associate a record with an individual, it must have robust justification that such data are no longer necessary for any purpose related to that individual—otherwise it violates the principles of minimization and purpose limitation.
Risk of pseudo-anonymization. Even after hashing and key destruction, real reidentification risk may persist through third parties possessing external dictionaries (public registries, leaks from other services). If the controller destroys the key but another party retains the capability to reverse the hashes, this undermines the thesis of effective anonymization.
Practical Recommendation
Irreversible hashing with key destruction constitutes a valuable tool at the terminal stage of the data lifecycle—after utilization for original business purposes and after expiration of legally required retention periods. It is not, however, an appropriate solution for ongoing operational purposes where the controller requires the capability to verify user identity.
In the context of a typical internet service, this means that hashing preserving verification capability (for purposes of blocking banned users or cooperating with law enforcement) remains processing of personal data requiring a lawful basis under the GDPR. The alternative is true anonymization through irreversible hashing, which, however, eliminates the capability to achieve these business objectives.
V. Lessons from History: Major Breaches and Their Consequences
A. Yahoo: The Unsalted Hash Catastrophe
The Yahoo data breach of 2013–2016 affected approximately three billion user accounts—the largest breach in history. Passwords were stored as unsalted MD5 hashes. (A salted hashing algorithm helps protect password hashes against dictionary attacks by introducing additional randomness.) This architectural decision proved catastrophic.
MD5 was designed for speed, not password security. Without salting, attackers could employ rainbow tables—precomputed databases of hashes for common passwords. The “vast majority” of passwords were cracked within days of data disclosure. The class action settlement totaled $117.5 million, and Verizon (the parent company) committed to investing $306 million in information security between 2019 and 2022. An additional $80 million settlement resolved claims regarding misleading statements about security measures.
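A brief sketch shows why the absence of salting was fatal; the shared password is hypothetical, and MD5 appears only to mirror the incident, not as a recommendation (a fast hash remains unsuitable for passwords even when salted):

```python
import hashlib
import os

password = b"summer2013"  # hypothetical password shared by two users

# Unsalted (the Yahoo pattern): both users yield the same digest, and one
# rainbow-table lookup cracks every account using this password.
print(hashlib.md5(password).hexdigest())
print(hashlib.md5(password).hexdigest())  # identical

# Salted: a unique random salt makes each stored digest distinct, so
# precomputed tables are useless and each hash must be attacked separately.
for _ in range(2):
    salt = os.urandom(16)
    print(hashlib.md5(salt + password).hexdigest())  # differs each time
```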
B. LinkedIn: The Repeated Error
The LinkedIn breach of 2012 was initially estimated at 6.5 million accounts. Only in 2016, when the complete dataset appeared on darknet markets, did it emerge that the breach affected 164 million users. Passwords were stored as unsalted SHA-1 hashes, the same class of error as Yahoo’s unsalted MD5: a fast hash with no per-password randomness.
LinkedIn’s response exacerbated the situation. In 2012, the company reset passwords only for the identified 6.5 million accounts, leaving 157.5 million compromised accounts active for an additional four years. Cryptographic analysis of the breach demonstrated that the absence of salting enabled immediate password cracking using rainbow tables—forum members voluntarily cracked hashes, identifying passwords such as “passwordlinkedin” and “supermanlinkedin.”
C. Equifax: Plaintext Data
The Equifax breach of 2017 affected 147 million consumers—nearly half of the U.S. population. Social Security numbers were stored in plaintext, without any encryption or hashing. The cause was an unpatched vulnerability in Apache Struts (CVE-2017-5638), disclosed in March 2017.
The settlement totaled $700 million, including $425 million for a consumer compensation fund, $175 million in penalties to fifty states and territories, and $100 million in civil penalties to the Consumer Financial Protection Bureau. The FTC determined that claiming “reasonable safeguards” in a privacy policy while failing to implement basic security measures constitutes a deceptive practice. Final payments were distributed between November and December 2024.
D. British Airways: The Largest GDPR Fine in the United Kingdom
The British Airways breach of 2018 exposed data of approximately 500,000 customers, including login credentials and complete payment card data. Attackers redirected customer traffic to a fraudulent website, harvesting authentication and payment data for over two months.
The ICO initially announced a fine of £183.39 million (1.5% of global turnover). The final penalty was £20 million—an 89% reduction reflecting the airline’s cooperation with the investigation, prompt notification of affected individuals, implemented improvements, and the impact of the COVID-19 pandemic on the aviation industry. This remains the highest GDPR fine imposed by the ICO.
E. Marriott/Starwood: Acquisition Risk
The Marriott/Starwood breach exposed 339 million guest records, including unencrypted passport numbers. The attack commenced in 2014, two years before Marriott’s acquisition of Starwood. The breach was not detected until September 2018.
The ICO determined that Marriott failed to conduct adequate security due diligence during the acquisition. The penalty was £18.4 million (reduced from the initially announced £99.2 million). Inheritance of compromised systems does not relieve responsibility for post-acquisition compliance.
F. Facebook-Cambridge Analytica: Platform Abuse
The Facebook-Cambridge Analytica scandal involved up to 87 million user profiles obtained without informed consent. The application “This Is Your Digital Life” exploited Facebook’s API to collect not only data of users installing the application but also data of their friends—without those friends’ knowledge or consent.
The FTC imposed a $5 billion penalty—one of the highest in Commission history. The ICO fined Facebook £500,000 for exposing user data to “serious risk of harm.” The matter established precedent: platform abuse and unauthorized data use are treated as security breaches for penalty purposes, even absent traditional intrusion.
G. T-Mobile: Repeated Breaches
The T-Mobile breach of 2021 affected 76.6 million U.S. residents. Attackers exploited an unsecured router as an entry point, then conducted a brute-force attack that proceeded without any rate limiting—a standard industry safeguard that T-Mobile had not implemented.
The settlement totaled $500 million: $350 million for a compensation fund and $150 million for cybersecurity improvements. In 2024, T-Mobile paid an additional $60 million penalty to CFIUS for failing to report unauthorized data access following the Sprint merger—the highest penalty in CFIUS history.
H. Target: Vendor Compromise
The Target breach of 2013 affected over 110 million consumers. Attackers exploited stolen credentials from an external HVAC vendor to gain network access, then moved laterally to customer databases and installed malware capturing payment card data in real time.
The consumer settlement was $10 million; the multistate settlement was $18.5 million—then the largest of its kind. The total cost of the breach to Target was $202 million. The case illustrates that organizations bear responsibility for the security practices of vendors with network access.
VI. Practical Guidance for Data Controllers
A. Pseudonymization as a Strategic Element, Not a Strategy Per Se
The CJEU judgment in EDPS v. SRB confirms that pseudonymization constitutes a valuable risk-minimization tool but is not a method for evading data protection obligations. A controller cannot avoid the information obligation through planned pseudonymization prior to data transfer to a recipient. The information obligation arises at the moment of data collection and must be discharged from the controller’s perspective.
Pseudonymization may effectively protect data subjects against risks associated with processing by recipients lacking identification means. This does not, however, alter the status of data in the controller-data subject relationship. Privacy policies must inform of all potential recipients, irrespective of planned pseudonymization.
B. Individualized Assessment for Each Processing Participant
Identical data may possess different legal status depending upon the processing entity. A controller possessing the key enabling identification processes personal data. A recipient lacking any reasonably probable identification means may process non-personal data—but only where technical and organizational measures actually preclude identification.
The assessment must account for all objective factors: costs and time necessary for identification, available technology, the possibility of cross-referencing with other datasets, and legal restrictions on access to additional information. The effectiveness of pseudonymization is not a static characteristic—it requires regular review in light of technological developments and evolving recipient capabilities.
C. Documentation and Accountability
The accountability principle under Article 5(2) of the GDPR requires controllers to demonstrate compliance. In the pseudonymization context, this entails documenting the technical and organizational measures implemented, assessments of their effectiveness, the bases for concluding that a recipient lacks identification means, and periodic reviews of safeguard adequacy.
D. Appropriate Algorithm Selection
For passwords: Argon2id as the primary preference, bcrypt as a robust alternative, PBKDF2 only where FIPS compliance is required. Never SHA-256, SHA-512, MD5, or SHA-1.
For data at rest: AES-256 with keys stored in HSM or KMS.
For data in transit: TLS 1.2 as minimum, TLS 1.3 preferred.
For data integrity: SHA-256 or SHA-3 are appropriate, but only for integrity verification—not for confidentiality protection or password storage; a brief sketch of this use case follows below.
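As an illustration of the integrity use case, a sketch computing a streaming SHA-256 digest of a downloaded file; the file name and expected digest are hypothetical placeholders:

```python
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    # Stream in chunks so large files need not fit in memory.
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against a digest published alongside the download.
expected = "0" * 64  # placeholder for the publisher's announced digest
print(file_digest("release.tar.gz") == expected)
```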
E. Breach Response
Detection of a breach requires immediate action: activation of the response team, isolation of affected systems, and preservation of evidence. Scope and impact assessment should occur within the first days. Notification to the supervisory authority must occur within 72 hours (GDPR Article 33) or 30 days in California. Notification to affected individuals must occur without undue delay, with clear information regarding the breach, risks, and remedial steps.
Where data were effectively encrypted or pseudonymized and keys were stored separately and not compromised, risk to data subjects may be minimal. The European Data Protection Board recognizes that in such scenarios, individual notification may not be required. Documentation justifying such an assessment is, however, essential.
Conclusion
Hashing of personal data has never been and is not the “holy grail” of privacy protection. It is a tool—useful when properly employed, dangerous when treated as a panacea. The CJEU judgment in EDPS v. SRB delineates the boundaries of its effectiveness: pseudonymization may, in specified circumstances, exclude data from the definition of personal data with respect to particular recipients, but it does not relieve the controller of its obligations.
Global convergence of data protection standards is evident. The GDPR, CCPA, PIPL, and FTC position, notwithstanding differences in particulars, express a common conviction: data capable of association with a natural person require protection, irrespective of the cryptographic transformation applied. Organizations relying on hashing as a strategy for obligation avoidance risk not only financial sanctions but fundamental erosion of user trust.
Effective data protection requires a layered approach: appropriate algorithms for appropriate applications, proper key management, network segmentation, continuous monitoring, regular security assessments, and an organizational culture treating privacy as a value rather than a burden. Pseudonymization constitutes one element of this puzzle—important, but insufficient in itself.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 18,500 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.