<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Data Protection Report</title>
	<atom:link href="https://www.dataprotectionreport.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dataprotectionreport.com/</link>
	<description>Data protection legal insight at the speed of technology</description>
	<lastBuildDate>Mon, 13 Apr 2026 14:24:07 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5&amp;lxb_maple_bar_source=lxb_maple_bar_source</generator>

<image>
	<url>https://dataprotectionreport.nortonroseplatform.com/wp-content/uploads/sites/15/2024/02/cropped-admin-ajax-1-32x32.png</url>
	<title>Data Protection Report</title>
	<link>https://www.dataprotectionreport.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>How to approach governance of AI agents</title>
		<link>https://www.dataprotectionreport.com/2026/04/how-to-approach-governance-of-ai-agents/</link>
		
		<dc:creator><![CDATA[Susana Medeiros (US), Steve Roosa (US) and Wenda Tang (US)]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 14:23:59 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[agentic AI]]></category>
		<category><![CDATA[governance]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6815</guid>

					<description><![CDATA[Current approaches to agentic AI governance seem more focused on trying to apply governance after a system is developed, like a Band-Aid, instead of baking reasonable governance and controls into the guts of the system. In the same way that security teams refer to “trust assurance” as the measures and frameworks that give them... <a href="https://www.dataprotectionreport.com/2026/04/how-to-approach-governance-of-ai-agents/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>Current approaches to agentic AI governance seem more focused on trying to apply governance after a system is developed, like a Band-Aid, instead of baking reasonable governance and controls into the guts of the system. In the same way that security teams refer to &ldquo;trust assurance&rdquo; as the measures and frameworks that give them confidence that security controls are predictable, transparent, and working effectively and as intended, these governance-forward principles can and should be applied to reduce AI Agent risks throughout the system lifecycle.&nbsp;&nbsp;</p><p>One AI security vendor recently claimed that SOC2 was insufficient for evaluating AI security and controls. Although we might agree with the ultimate conclusion, the vendor made some surprising statements that show an implicit bias for the Band-Aid approach and a poor understanding of how to effectively govern Agentic AI systems.&nbsp;</p><p>Here are a few examples of representations that should raise red flags.</p><p><em>Vendor</em>:</p><p>&ldquo;An AI Agent doesn&rsquo;t have a fixed set of behaviors.&rdquo;</p><p><em>Correct Approach:</em></p><p>AI agents in most settings should have established, fixed behaviors. Success and failure modes should be explicitly defined and governed. This should be done with static logic (code) and is one of the most important controls in any Agentic AI system.&nbsp;</p><p>If success and failure are not well defined and monitored, then arguably the system is constructively allowed to misbehave.&nbsp; And, when it does so, the failure mode will be silent. Trying to detect and stop these system failures after-the-fact with a Band-Aid approach resembles whack-a-mole more than it does governance. 
Relying on after-the-fact controls is like treating aviation safety as mostly a matter of investigating crashes instead of trying to increase the overall safety and reliability of the system.&nbsp; After-the-fact detection isn&rsquo;t meaningless, but it is the wrong layer to lean on.&nbsp; In agentic AI, as in aviation, the primary safety work should happen before and during system operation.</p><p><em>Vendor</em>:</p><p>&ldquo;The agent may call APIs that are not part of any predefined workflow.&rdquo;</p><p><em>Correct Approach</em>:</p><p>Workflows should be pre-defined, and AI agents should only call APIs for which they are explicitly scoped in advance and subject to pre-defined static logic/control.</p><p>If access to API calls is not limited and controlled, then it becomes easier for those APIs to be accessed either inadvertently or maliciously. &nbsp;Potential exploit chains (which string together multiple agentic components) become deeper and more numerous. To extend the airplane analogy, many commercial airplanes have a control system that sits <em>between</em> pilot action and airplane behavior.&nbsp; Control systems on modern aircraft actively resist or prevent unsafe pilot maneuvers and will limit pitch, bank, and angle of attack to prevent stalls and exceeding safe speeds. These are exactly like the pre-defined limits on API calls: before the system can do the unsafe thing, a control has already prevented that system behavior.</p><p><em>Vendor</em>:</p><p>&ldquo;An AI agent decides at runtime what tools to use.&rdquo;</p><p><em>Correct Approach</em>:</p><p>Although it is true that Agent behavior is dynamic, the vendor leaves open the possibility that natural language (i.e., a call to an AI endpoint) can be allowed to invoke tools.&nbsp; For a system to be reliable, however, tool usage should be controlled with static logic. 
The reason is that responses from AI endpoints are &ldquo;probabilistic,&rdquo; meaning one can&rsquo;t perfectly control the content of the AI response over time. When tool usage turns on probabilistic responses from AI endpoints, then at some point tools are likely to be invoked in ways that make the system unreliable and exploitable.&nbsp; Over time, these small and unexpected responses can chain together and lead to cascading failures.&nbsp; As with ungoverned API use, exploit chains become more dangerous and multiply.&nbsp; In other words, when preparing for landing, there should be room for the pilot to make judgment calls based on wind speed, runway length, and physical obstacles, but those judgment calls should only be allowed to flip certain switches in the cockpit.&nbsp;</p><p><strong><u>The Critical Role of Lawyers and Compliance</u></strong></p><p>In the current environment, where Agentic AI risks may not be fully appreciated by existing development and cyber teams, a compliance/procurement/cyber attorney may end up being the first and best line of defense to ensure the organization effectuates real governance and controls over the AI use case.&nbsp;</p><p>With respect to every AI system and AI system component, an organization ideally should begin by posing the following questions:</p><ul class="wp-block-list">
<li>What am I trusting the system or component to do and why?</li>



<li>How do I limit trust (and limit risk)?</li>



<li>What is the system or component allowed to do?</li>



<li>What is the system or component not allowed to do?</li>



<li>What is the data the system is acting upon and what are my rights and limitations with respect to that data?</li>



<li>How does the system enforce the answers to these questions?</li>



<li>What is the risk of scope creep and how can I monitor the actual use over time?</li>
</ul><p>The goal is to surface those areas where trust or scope of action is unwarranted or unnecessary. &nbsp;The governance process is of course a good deal more involved than this, but this is the right conceptual and practical place to start.</p>
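<p>As a rough illustration of the static-logic controls described above, the sketch below gates tool and API invocation with a pre-defined allow-list, so that an out-of-scope request becomes an explicit, detectable failure rather than a silent one. All names in it (ALLOWED_TOOLS, dispatch_tool, ToolNotPermitted) are hypothetical and illustrative only, not drawn from any real agent framework.</p>

```python
# Hypothetical sketch: a static allow-list that gates which tools an AI
# agent may invoke, regardless of what the model's probabilistic response
# asks for. Names are illustrative, not from any real framework.

ALLOWED_TOOLS = {
    # tool name -> parameters the tool is scoped to accept
    "search_docs": {"max_results"},
    "summarize": {"text", "length"},
}

class ToolNotPermitted(Exception):
    """A defined, governed failure mode for out-of-scope requests."""

def dispatch_tool(requested_tool: str, params: dict) -> str:
    """Enforce the pre-defined scope in static code before any tool runs."""
    if requested_tool not in ALLOWED_TOOLS:
        # Fail loudly: an out-of-scope tool request is an explicit
        # failure mode, not a silent one.
        raise ToolNotPermitted(f"tool {requested_tool!r} is not in scope")
    unexpected = set(params) - ALLOWED_TOOLS[requested_tool]
    if unexpected:
        raise ToolNotPermitted(f"parameters {sorted(unexpected)} are not in scope")
    return f"ran {requested_tool} with {params}"
```

<p>In this sketch the model may still exercise judgment about which permitted tool to call, but the control layer sits between the model's response and system behavior, much like the flight-control systems described above.</p>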
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Navigating AI compliance with HIPAA essentials</title>
		<link>https://www.dataprotectionreport.com/2026/04/navigating-ai-compliance-with-hipaa-essentials/</link>
		
		<dc:creator><![CDATA[Susan Ross (US)]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 22:49:26 +0000</pubDate>
				<category><![CDATA[HIPAA]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Texas]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6813</guid>

					<description><![CDATA[Healthcare providers are increasingly deploying artificial intelligence (AI) tools for diagnostics, documentation and operational efficiency. In fact, over the last few months, large AI platforms have begun marketing AI-enabled tools directly to healthcare providers. Providers must navigate a rapidly evolving regulatory landscape, while keeping in mind existing and longstanding requirements intended to safeguard the privacy and security of... <a href="https://www.dataprotectionreport.com/2026/04/navigating-ai-compliance-with-hipaa-essentials/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p><a href="https://www.nortonrosefulbright.com/en-us/services/bda60796/healthcare">Healthcare</a>&nbsp;providers are increasingly deploying&nbsp;<a href="https://www.nortonrosefulbright.com/en-us/services/71588072/artificial-intelligence">artificial intelligence</a>&nbsp;(<a href="https://www.nortonrosefulbright.com/en-us/services/71588072/artificial-intelligence">AI</a>) tools for diagnostics, documentation and operational efficiency. In fact, over the last few months, large AI platforms have begun marketing AI-enabled tools directly to healthcare providers. Providers must navigate a rapidly evolving regulatory landscape, while keeping in mind existing and longstanding requirements intended to safeguard the privacy and security of patient data. See our <a href="https://www.nortonrosefulbright.com/en-us/knowledge/publications/55f5440a/navigating-ai-compliance-with-hipaa-essentials">Legal Update</a> for more information on steps you can take.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Complaint accuses OpenAI of practicing law without a license</title>
		<link>https://www.dataprotectionreport.com/2026/04/complaint-accuses-openai-of-practicing-law-without-a-license/</link>
		
		<dc:creator><![CDATA[Susan Ross (US)]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 13:28:08 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6811</guid>

					<description><![CDATA[A popular public AI tool has been accused in federal court of practicing law without a license.  Please see the post on the Artificial Intelligence page of Inside Tech Law: AI in litigation series: Complaint accuses OpenAI of practicing law without a license &#124; Inside Tech Law &#124; Global law firm &#124; Norton Rose Fulbright <a href="https://www.dataprotectionreport.com/2026/04/complaint-accuses-openai-of-practicing-law-without-a-license/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>A popular public AI tool has been accused in federal court of practicing law without a license.&nbsp; Please see the post on the Artificial Intelligence page of Inside Tech Law:  <a href="https://www.insidetechlaw.com/blog/2026/04/ai-in-litigation-series-complaint-accuses-openai-of-practicing-law-without-a-license">AI in litigation series: Complaint accuses OpenAI of practicing law without a license | Inside Tech Law | Global law firm | Norton Rose Fulbright</a></p><p></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>NY DFS’s new MFA guidance: closing common gaps before the next exam</title>
		<link>https://www.dataprotectionreport.com/2026/03/ny-dfss-new-mfa-guidance-closing-common-gaps-before-the-next-exam/</link>
		
		<dc:creator><![CDATA[Ji Won Kim (US) and Susan Ross (US)]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 05:00:00 +0000</pubDate>
				<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[multi-factor authentication]]></category>
		<category><![CDATA[NYDFS]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6806</guid>

					<description><![CDATA[Multi‑factor authentication (MFA) is now a well-established baseline cybersecurity control. The New York Department of Financial Services (NY DFS) solidified that understanding and expanded MFA requirements under the amended 23 NYCRR Part 500 (the NY DFS Cybersecurity Regulation). Since November 1, 2025, Covered Entities are required to use MFA for any individual accessing the Covered Entity’s... <a href="https://www.dataprotectionreport.com/2026/03/ny-dfss-new-mfa-guidance-closing-common-gaps-before-the-next-exam/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>Multi&#8209;factor authentication (<strong>MFA</strong>) is now a well-established baseline cybersecurity control. The New York Department of Financial Services (<strong>NY DFS</strong>) solidified that understanding and <a href="https://www.dfs.ny.gov/system/files/documents/2025/09/multifactor-authentication.pdf">expanded MFA requirements</a> under the amended <a href="https://govt.westlaw.com/nycrr/Browse/Home/NewYork/UnofficialNewYorkCodesRulesandRegulations?guid=I5be30d2007f811e79d43a037eefd0011&amp;originationContext=documenttoc&amp;transitionType=Default&amp;contextData=(sc.Default)">23 NYCRR Part 500</a> (<strong>the NY DFS Cybersecurity Regulation</strong>). Since November 1, 2025, Covered Entities are required to use MFA for any individual accessing the Covered Entity&rsquo;s information systems, subject to limited exceptions.</p><p>In its &ldquo;<a href="https://www.dfs.ny.gov/system/files/documents/2026/02/Cyber-Public-Training-Lets-Talk-MFA-2026-02-26.pdf">Let&rsquo;s Talk MFA</a>&rdquo; training on February 26, 2026, NY DFS highlighted additional guidance, including updated <a href="https://www.dfs.ny.gov/industry_guidance/cybersecurity#faqs">FAQs addressing Section 500.12</a>. Covered Entities have until <strong>April 15, 2026</strong> to certify their compliance with the NY DFS Cybersecurity Regulation in the annual notifications to NY DFS.</p><p>Put simply, NY DFS expects Covered Entities to show that MFA is implemented effectively across their systems, that the scope is complete, and that any risk-based exceptions or compensating controls are well documented, with particular attention to SSO, cloud services, and external-facing systems.</p><h1 class="wp-block-heading">What Changed?</h1><p>The FAQs provide notably granular direction on how NY DFS views common MFA approaches. 
The FAQs provide guidance on (i) what does and does not qualify as MFA, (ii) push-based authentication, (iii) single sign-on (SSO), (iv) cloud platforms, and (v) when external-facing systems may require MFA.</p><h1 class="wp-block-heading">A Quick Refresher: What NY DFS Means by &ldquo;MFA&rdquo;</h1><p>NY DFS defines MFA as authentication using at least two different factor types:</p><ul class="wp-block-list">
<li>Knowledge &ndash; something you know, such as a password, passphrase, or PIN.</li>



<li>Possession &ndash; something you have, such as a token, authenticator app, or smartcard.</li>



<li>Inherence &ndash; something you are, such as a biometric characteristic.</li>
</ul><p>NY DFS emphasizes utilizing multiple distinct factor categories. Simply layering more checks that fall into a single bucket does not satisfy NY DFS&rsquo;s requirements. For example, requiring two passwords is not considered MFA, because both passwords fall under the same &ldquo;knowledge&rdquo; factor type. Requiring a password (a &ldquo;knowledge&rdquo; factor) plus a token placed on your company-issued device (a &ldquo;possession&rdquo; factor) would constitute MFA since those incorporate two different factor types.</p><h1 class="wp-block-heading">NY DFS Expectations</h1><ul class="wp-block-list">
<li><strong>&ldquo;Possession&rdquo; means real proof of possession</strong>. NY DFS highlights that a possession factor should provide cryptographic or technical proof that the user controls a specific device/token/authenticator at the moment of authentication.</li>
</ul><ul class="wp-block-list">
<li><strong>Push-based MFA is common, but NY DFS is focused on its weaknesses</strong>. NY DFS flags risks associated with push prompts (including &ldquo;MFA fatigue&rdquo;) and points to safeguards such as number-matching or challenge-response, displaying contextual login details, limiting push retries, and using adaptive MFA for suspicious activity.</li>
</ul><ul class="wp-block-list">
<li><strong>Single sign-on (SSO) can work, but SSO alone is not the answer</strong>. NY DFS states SSO may be used; however, MFA must be enforced as part of the authentication process in order to prevent the SSO layer from becoming a bypass route.</li>
</ul><ul class="wp-block-list">
<li><strong>Cloud email and document platforms are firmly in scope</strong>. NY DFS indicates that MFA is required for Covered Entities accessing cloud-based document storage and collaborative platforms.</li>
</ul><ul class="wp-block-list">
<li><strong>External-facing systems: often no MFA, but &ldquo;material risk&rdquo; changes the analysis</strong>. NY DFS notes that many external-facing systems, such as basic marketing pages, may not require MFA. But MFA may be expected where an external-facing system can be used to access other information systems without authentication, or where it otherwise poses a material cybersecurity risk to the Covered Entity, customers, other systems, or nonpublic information. For example, an insurance company&rsquo;s website that contains only publicly available marketing materials would not need MFA. If the website enables access to personal information, however, additional analysis may be necessary.</li>
</ul><h1 class="wp-block-heading">Compensating Controls: Allowed, But Tightly Governed</h1><p>Part 500 permits &ldquo;reasonably equivalent or more secure&rdquo; compensating controls in place of MFA, but only with specific governance guardrails: CISO approval <strong><u>in writing</u></strong> and periodic review at least annually. Similarly, in examinations, NY DFS often focuses on the reasonableness of the decision-making and the supporting documentation, not just the existence of a tool. A compensating control may be reasonable immediately following the acquisition of another company during the transition period but may not be reasonable the following year.</p><h1 class="wp-block-heading">Our Take</h1><p>NY DFS&rsquo;s recent guidance reinforces that MFA is a baseline expectation under Part 500. In practice, NY DFS supervision focuses on whether MFA is implemented effectively, whether coverage is complete, and whether exceptions or alternatives are supported by documentation and governance. Here are a few steps to help identify potential gaps and achieve compliance:</p><h2 class="wp-block-heading">1. Confirm MFA is implemented broadly and consistently</h2><p>Validating that MFA is not only deployed but consistently enforced across the access paths and information systems is key. NY DFS highlights the scope and sufficiency of implementation, including whether MFA has been implemented with third-party platforms.</p><h2 class="wp-block-heading">2. Review documentation supporting MFA scope and design choices</h2><p>NY DFS emphasizes documentation and reasonableness of risk-based determinations. Even strong technical controls can fall short if the organization cannot explain scope, exceptions, and design decisions.</p><h2 class="wp-block-heading">3. 
Review and assess whether the existing process for compensating controls meets NY DFS requirements</h2><p>NY DFS highlights governance requirements for compensating controls, including written CISO approval and periodic review at least annually.</p><p>The NY DFS guidance is another reminder for Covered Entities to get ready to demonstrate that MFA is consistently enforced across environments and that any exceptions or compensating controls are supported by clear documentation and governance practices.</p><p>Thanks to Jake Alfarah for his assistance with this post.</p>
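<p>The factor-type rule in the refresher above lends itself to a small sketch: two checks from the same category do not constitute MFA, while checks spanning two different categories do. The mapping and function names below are illustrative only, not from any NY DFS material.</p>

```python
# Illustrative sketch of the factor-type rule described above: MFA requires
# at least two *distinct* factor categories (knowledge, possession,
# inherence), not merely two checks of the same kind. Names are hypothetical.
FACTOR_TYPE = {
    "password": "knowledge",
    "pin": "knowledge",
    "hardware_token": "possession",
    "authenticator_app": "possession",
    "fingerprint": "inherence",
}

def is_mfa(factors: list[str]) -> bool:
    """True only when the supplied factors span two or more categories."""
    return len({FACTOR_TYPE[f] for f in factors}) >= 2

# Two passwords both fall in "knowledge", so they are not MFA;
# a password plus an authenticator app spans two categories, so they are.
```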
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Cybersecurity and Personal Data: The CNIL toughens its stance</title>
		<link>https://www.dataprotectionreport.com/2026/03/cybersecurity-and-personal-data-the-cnil-toughens-its-stance/</link>
		
		<dc:creator><![CDATA[Nadège Martin (FR), Laura Helloco and Geoffroy Coulouvrat (FR)]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 11:01:55 +0000</pubDate>
				<category><![CDATA[Compliance and risk management]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[Data protection]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6801</guid>

					<description><![CDATA[On 9 February 2026, the Commission Nationale de l&#8217;Informatique et des Libertés (CNIL) published its 2025 report on its enforcement action. Beyond the €487 million in cumulative fines &#8211; largely driven (unsurprisingly) by two sanctions related to cookies &#8211; another trend deserves attention:&#160;the growing number of fines for failure to ensure the security of personal... <a href="https://www.dataprotectionreport.com/2026/03/cybersecurity-and-personal-data-the-cnil-toughens-its-stance/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>On 9 February 2026, the Commission Nationale de l&rsquo;Informatique et des Libert&eacute;s (CNIL) published its 2025 report on its enforcement action. Beyond the &euro;487 million in cumulative fines &ndash; largely driven (unsurprisingly) by two sanctions related to cookies &ndash; another trend deserves attention:&nbsp;<strong>the growing number of fines for failure to ensure the security of personal data and, more specifically, for personal data breaches</strong>.&nbsp;</p><h3 class="wp-block-heading"><strong>A threat already identified &ndash; From awareness to action</strong></h3><p>As early as 2024, the CNIL warned of a&nbsp;<strong>20% increase in breach notifications and a surge in large-scale data breaches.</strong> It noted that attackers regularly exploited the same vulnerabilities, including compromised login credentials and failures to detect intrusions, and that incidents frequently involved processors.</p><p>Although the number of sanctions may still appear small relative to the number of data breach notifications (5,629 in 2024), <strong>four significant fines were announced within</strong> <strong>two months</strong>, targeting both controllers and processors:</p><ul class="wp-block-list">
<li>&euro;1.7M against a software publisher in the social welfare sector (December 2025);</li>



<li>&euro;1M against a marketing processor of a streaming platform (December 2025);</li>



<li>&euro;5M against the French public body in charge of employment (January 2026); and</li>



<li>&euro;42M against a major ISP (January 2026).</li>
</ul><h3 class="wp-block-heading"><strong>What to expect in 2026</strong></h3><p>This focus is in line with the CNIL&rsquo;s 2025&ndash;2028 strategic plan, in which cybersecurity features as one of the four priority areas. It is also reflected in the guidance issued on 30 April 2025 regarding how security measures should be strengthened. This emphasised rigorous identity and access management, real-time logging and analysis of network traffic, regular cybersecurity training for staff, and better oversight of security arrangements with processors and subprocessors.</p><p>More specifically, the CNIL now requires companies holding customer, prospect, and user databases comprising data relating to several million individuals to implement <strong>multi-factor authentication </strong>for their employees, partners, processors, and any other parties that can access the database remotely. It also encourages adherence to the recommendations already published by the CNIL and France&rsquo;s National Agency for the Security of Information Systems (ANSSI).</p><p>Compliance with this requirement for multi-factor authentication will be subject to&nbsp;<strong>inspections by </strong><strong>the CNIL from 2026 onwards</strong>. Failure to implement multi-factor authentication may lead to the commencement of enforcement proceedings.</p><p><em>To access the CNIL&rsquo;s 2025 review: </em><a href="https://www.cnil.fr/fr/bilan-sanctions-2025" target="_blank" rel="noreferrer noopener"><em>Sanctions and corrective measures: the CNIL presents its 2025 review | CNIL</em></a><em> &ndash; to access the CNIL&rsquo;s recommendations on multi-factor authentication: <a href="https://cnil.fr/fr/recommandation-mfa" target="_blank" rel="noreferrer noopener">https://cnil.fr/fr/recommandation-mfa</a></em></p><p></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Getting ready for California’s new cybersecurity audit requirements</title>
		<link>https://www.dataprotectionreport.com/2026/03/getting-ready-for-californias-new-cybersecurity-audit-requirements/</link>
		
		<dc:creator><![CDATA[Ji Won Kim (US), Shushan Gabrielyan (US) and Leslie Ozuna (US)]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 17:57:34 +0000</pubDate>
				<category><![CDATA[CCPA]]></category>
		<category><![CDATA[audit]]></category>
		<category><![CDATA[cybersecurity audit]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6797</guid>

					<description><![CDATA[On January 1, 2026, the California Privacy Protection Agency’s (“CalPrivacy”) cybersecurity audit regulations (the “Regulations”) took effect after several years of rulemaking and public comment. As previewed in the Data Protection Report, certain businesses subject to the California Consumer Privacy Act as amended by the California Privacy Rights Act are now required to conduct comprehensive... <a href="https://www.dataprotectionreport.com/2026/03/getting-ready-for-californias-new-cybersecurity-audit-requirements/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>On January 1, 2026, the California Privacy Protection Agency&rsquo;s (&ldquo;CalPrivacy&rdquo;) <a href="https://cppa.ca.gov/regulations/pdf/ccpa_statute_eff_20260101.pdf">cybersecurity audit regulations</a> (the &ldquo;Regulations&rdquo;) took effect after several years of rulemaking and public comment. As <a href="https://www.dataprotectionreport.com/2025/08/californias-proposed-cybersecurity-audit-regulation/">previewed in the Data Protection Report</a>, certain businesses subject to the California Consumer Privacy Act as amended by the California Privacy Rights Act are now required to conduct comprehensive annual cybersecurity audits. The now final requirements reflect an ongoing regulatory focus on the implementation and effectiveness of security controls, rather than the mere existence of written policies and procedures.</p><p>During its most recent board meeting at the end of February 2026, CalPrivacy provided further insight into the new audit requirements and what businesses can expect in the coming year.</p><p><strong><u>WHO IS COVERED</u></strong></p><p>Cybersecurity audit requirements apply to a business whose processing of personal information presents &ldquo;significant risk to consumers&rsquo; security.&rdquo; In determining whether processing presents a &ldquo;significant risk,&rdquo; the Regulations contemplate factors such as the business&rsquo;s size and revenue, and the volume and sensitivity of personal information it processes.<a href="#_ftn1" id="_ftnref1">[1]</a> &nbsp;</p><p><strong><u>TIMING</u></strong></p><p>The Regulations establish phased timing for initial compliance depending on a business&rsquo;s gross revenue. A business subject to the requirements must complete a cybersecurity audit report and submit a certification to CalPrivacy by:</p><ul class="wp-block-list">
<li><strong>April 1, 2028</strong>, if the business&rsquo;s annual gross revenue for 2026 was over $100 million as of January 1, 2027. The audit would cover January 1, 2027 to January 1, 2028.</li>



<li><strong>April 1, 2029</strong>, if the business&rsquo;s annual gross revenue for 2027 was between $50 million and $100 million as of January 1, 2028. The audit would cover January 1, 2028 to January 1, 2029.</li>



<li><strong>April 1, 2030</strong>, if the business&rsquo;s annual gross revenue for 2028 was less than $50 million. The audit would cover January 1, 2029 to January 1, 2030.</li>



<li><strong>After April 1, 2030</strong>, if, as of January 1 of a given year, a business met the general criteria discussed above during the preceding year, it will need to complete a cybersecurity audit.</li>
</ul><p><strong><u>SCOPE, THOROUGHNESS, AND INDEPENDENCE OF CYBERSECURITY AUDIT</u></strong></p><p>Under section 7123 of the Regulations, the cybersecurity audit must evaluate &ldquo;how the business&rsquo;s cybersecurity program protects personal information . . . and protects against unauthorized activity affecting the availability of personal information.&rdquo; It must span &ldquo;the business&rsquo;s establishment, implementation, and maintenance of its cybersecurity program, including the related written documentation thereof (e.g., policies and procedures), that is appropriate to the business&rsquo;s size and complexity and the nature and scope of its processing activities, taking into account the state of the art and cost of implementing the components of a cybersecurity program.&rdquo; The cybersecurity audit must also assess each of the 18 components<a href="#_ftn2" id="_ftnref2">[2]</a> listed that the auditor deems applicable to the business&rsquo;s information system, and may cover additional components of a cybersecurity program beyond those listed.</p><p>The cybersecurity audit must result in a written audit report that describes the relevant policies, procedures, and practices assessed and that, among other things, identifies any gaps or weaknesses, documents the business&rsquo;s remediation plan and timeline and any corrections to prior audit reports, and lists the titles of up to three individuals responsible for the cybersecurity program.</p><p>While the Regulations do not expressly require submission of the report itself to CalPrivacy, the report must be provided to a member of the business&rsquo;s executive management team responsible for the cybersecurity program, and both the business and the auditor must retain relevant audit documentation for at least five years.</p><p>The cybersecurity audit must be completed by a &ldquo;qualified, objective, independent professional&rdquo; auditor who has knowledge of cybersecurity and how to audit a 
business&rsquo;s cybersecurity program. The auditor may be internal or external to the business; if they are internal, independence and objectivity must be demonstrable.</p><p><strong><u>RECENT INSIGHTS FROM CALPRIVACY</u></strong></p><p>CalPrivacy indicated during its February 27, 2026 board meeting that it will publish short-form overviews for businesses and more robust compliance checklists for practitioners. CalPrivacy aims to &ldquo;better educate businesses&rdquo; and &ldquo;help practitioners assist those businesses in coming into compliance,&rdquo; with materials to be made available on the CalPrivacy website and promoted in spring and summer 2026. An active and specialized CalPrivacy Audits Division is expected under the recently appointed Chief Privacy Auditor Sabrina Boyce Ross with the support of a growing number of legal specialists and technologists.</p><p><strong><u>TAKEAWAYS</u></strong></p><p>The Regulations provide a framework for demonstrating that a cybersecurity program is designed thoughtfully and operates effectively. As the Regulations allow leveraging existing audit, assessment, or evaluation prepared for other purposes (e.g., an audit that uses the National Institute of Standards and Technology Cybersecurity Framework 2.0), provided they meet the requirements specified, first understanding the existing resources and assessing how they map onto the requirements under the Regulations will help identify opportunities to streamline the growing list of must-haves. Inventorying the relevant documentation and evaluating potential auditors that fit the qualifications now will also help save time and headaches down the road. 
After all, the first auditable period begins on January 1, 2027, less than ten months from now.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p><a href="#_ftnref1" id="_ftn1">[1]</a> Specifically, a business must complete audits if it meets either of the following criteria: (i) it derives 50% or more of its annual revenues from selling or sharing consumers&rsquo; personal information; or (ii) it had annual gross revenue exceeding $25 million in the preceding calendar year and, in that year, processed either the personal information of 250,000 or more consumers or households or the <em>sensitive</em> personal information of 50,000 or more consumers.</p><p><a href="#_ftnref2" id="_ftn2">[2]</a> (1) Authentication (including multi-factor authentication and strong password requirements); (2) encryption of personal information at rest and in transit; (3) account management and access controls; (4) inventory and management of personal information and the business&rsquo;s information system; (5) secure configuration of hardware and software; (6) internal and external vulnerability scans, penetration testing, and vulnerability disclosure and reporting; (7) audit-log management; (8) network monitoring and defenses; (9) antivirus and antimalware protections; (10) segmentation of information systems; (11) limitation and control of ports, services, and protocols; (12) cybersecurity awareness regarding evolving threats and countermeasures; (13) cybersecurity education and training for personnel with system access; (14) secure development and coding practices; (15) oversight of service providers, contractors, and third parties; (16) retention schedules and proper disposal of personal information; (17) security-incident response management; and (18) business-continuity and disaster-recovery planning.</p>
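The audit-threshold criteria in footnote 1 are easier to see as boolean logic. The sketch below is our illustrative reading of the footnote, not language from CalPrivacy or the Regulations; the function name, parameter names, and numeric constants simply restate the footnote's figures:

```python
# Illustrative only: a hypothetical restatement of the footnote's audit
# thresholds as boolean logic. Names and structure are ours, not the
# Regulations'.
def audit_required(pct_revenue_from_selling_pi: float,
                   annual_gross_revenue: float,
                   consumers_pi: int,
                   consumers_sensitive_pi: int) -> bool:
    """True if either prong of the threshold test is met."""
    # Prong 1: 50% or more of annual revenue from selling/sharing PI.
    prong_one = pct_revenue_from_selling_pi >= 0.50
    # Prong 2: >$25M gross revenue AND a volume trigger in the preceding
    # year (PI of 250,000+ consumers/households, or sensitive PI of
    # 50,000+ consumers).
    prong_two = annual_gross_revenue > 25_000_000 and (
        consumers_pi >= 250_000 or consumers_sensitive_pi >= 50_000
    )
    return prong_one or prong_two
```

On this reading, a $30 million-revenue business that processed sensitive personal information of 60,000 consumers would need an audit even if it processed ordinary personal information of far fewer than 250,000 consumers.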
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>HHS and state AGs fine ambulance firm over $500,000, require enhanced security, privacy, and data minimization practices</title>
		<link>https://www.dataprotectionreport.com/2026/03/hhs-and-state-ags-fine-ambulance-firm-over-500000-require-enhanced-security-privacy-and-data-minimization-practices/</link>
		
		<dc:creator><![CDATA[David Kessler (US), Susana Medeiros (US), Susan Ross (US) and Remi Gambino (US)]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 06:00:00 +0000</pubDate>
				<category><![CDATA[Information governance]]></category>
		<category><![CDATA[data minimization]]></category>
		<category><![CDATA[HIPAA]]></category>
		<category><![CDATA[ransomware]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6788</guid>

					<description><![CDATA[Earlier this year, the Attorneys General of Massachusetts and Connecticut entered into settlement agreements with Comstar, LLC, an ambulance billing firm, relating to alleged HIPAA regulation violations in connection with a ransomware incident.&#160; Comstar is a business associate under HIPAA, and state regulators are authorized to enforce HIPAA under the authority granted by the Health... <a href="https://www.dataprotectionreport.com/2026/03/hhs-and-state-ags-fine-ambulance-firm-over-500000-require-enhanced-security-privacy-and-data-minimization-practices/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>Earlier this year, the Attorneys General of <a href="https://www.mass.gov/doc/comstar-stamped-order/download">Massachusetts</a> and <a href="https://portal.ct.gov/-/media/ag/press_releases/2026/comstar---final-judgment-on-stipulation.pdf?rev=a352c9d2fe0b4456889717d556bfacdc&amp;hash=AB1B7FC92347A3BA676BA8D0023409A9">Connecticut</a> entered into settlement agreements with Comstar, LLC, an ambulance billing firm, relating to alleged HIPAA regulation violations in connection with a ransomware incident.&nbsp; Comstar is a business associate under HIPAA, and state regulators are authorized to enforce HIPAA under the authority granted by the Health Information Technology for Economic and Clinical Health (HITECH) Act.&nbsp; Comstar agreed to pay the Massachusetts Attorney General $415,000 and the Connecticut Attorney General $100,000 as part of the settlements, which also included detailed requirements to enhance Comstar&rsquo;s information security program.&nbsp; These settlement agreements with state regulators follow Comstar&rsquo;s <a href="https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/hhs-hipaa-agreement-comstar/index.html">agreement</a> with the U.S. Department of Health and Human Services&rsquo; Office for Civil Rights (HHS) last year on May 30, 2025, related to the same event.&nbsp; Each of the agreements specifically states that the terms of the settlement are not an admission of liability or wrongdoing by Comstar.</p><p>Although companies are generally aware of the potential for federal enforcement under HIPAA, companies should also consider the additional risks posed by state regulators enforcing the HIPAA Privacy and Security Rules. Additionally, the Massachusetts and Connecticut settlements demonstrate that state regulators may seek additional fines and requirements beyond those imposed by HHS. 
&nbsp;Together, these actions underscore the continued regulatory scrutiny HIPAA-covered companies face from both federal and state regulators in what appear to be coordinated state enforcement actions following the HHS settlement last May.</p><p><strong><u>Background</u></strong></p><p>Comstar experienced a ransomware incident in 2022, which involved unauthorized access, encryption, and exfiltration of the company&rsquo;s files. &nbsp;Comstar&rsquo;s forensic investigation confirmed that the threat actor obtained personal information and Protected Health Information (PHI), including names, dates of birth, Social Security numbers (SSN), driver&rsquo;s license numbers, financial account numbers, health insurance information, and medical assessment information. &nbsp;The company disclosed the incident on March 25, 2022, when it provided notification letters to affected individuals. &nbsp;In total, the incident affected the information of 585,621 individuals, including more than 320,000 Massachusetts residents and more than 22,000 Connecticut residents.&nbsp; According to HHS&rsquo; investigation, Comstar violated requirements to conduct an accurate and thorough risk assessment related to electronic PHI that it holds.&nbsp;</p><p><strong><u>Cybersecurity Requirements</u></strong></p><p>The May 2025 HHS settlement includes a corrective action plan agreed to by Comstar that requires it to develop a detailed inventory of physical and virtual assets used to collect and process PHI; conduct a risk analysis and prepare a risk management plan as it relates to the confidentiality, integrity, and availability of PHI in Comstar&rsquo;s environment; and revise Comstar&rsquo;s procedures and policies to comply with the HIPAA Privacy, Security, and Breach Notification Rules.</p><p>In comparison, the January 28, 2026 settlements between Comstar and the Attorneys General of Massachusetts and Connecticut include more prescriptive requirements and detail many of the measures that 
organizations often consider implementing to strengthen their cybersecurity program, including the following steps:</p><ul class="wp-block-list">
<li>Use encryption for personal information and PHI at rest and in transit.</li>



<li>Conduct annual risk assessments and penetration testing, and implement remediation measures accordingly.</li>



<li>Deploy multi-factor authentication (MFA) for individual user accounts, system administrator accounts, and remote connections to the company&rsquo;s network.</li>



<li>Enhance the cybersecurity program maturity by developing a Written Information Security Program (WISP), and adopting a zero-trust architecture.</li>



<li>Appoint a Chief Information Security Officer (CISO) responsible for maintaining the information security program and advising the CEO on the program, including reporting to the CEO on security risks faced by Comstar on at least a semi-annual basis.</li>



<li>Implement improved security controls, including through access control policies, password management, security monitoring (e.g., SIEM), email filtering, phishing protection, antivirus, data loss prevention (DLP) tools, and endpoint security (e.g., EDR).</li>
</ul><p><strong><u>Privacy &amp; Information Governance Requirements</u></strong></p><p>The settlement agreements with the state regulators also required a number of measures meant to reinforce Comstar&rsquo;s privacy and data minimization practices. &nbsp;In addition to notifying its employees of the consent order&rsquo;s requirements, Comstar must provide specialized training to employees responsible for implementing the company&rsquo;s information security program. &nbsp;This training is required to cover how to safeguard and protect personal information.</p><p>Additionally, the settlement agreements expand the scope of the security awareness and privacy training required for all personnel with access to personal information as defined under state law, rather than solely personnel with access to PHI as planned under the settlement with HHS. &nbsp;The state regulators expect Comstar to provide this training within 90 days of the consent order and annually thereafter.</p><p>The consent orders also require Comstar to comply with the minimum necessary standard of the HIPAA Privacy Rule which provides that PHI should not be collected or maintained when it is not necessary to satisfy a particular purpose. &nbsp;The specific inclusion of the &ldquo;minimum necessary&rdquo; standard aligns with the growing list of regulators that have scrutinized over&#8209;retention practices and data minimization requirements in recent settlements, an enforcement trend we have covered in previous articles, including actions taken by <a href="https://www.dataprotectionreport.com/2024/01/8-million-penalty-to-nydfs-and-another-case-of-over-retention/">NYDFS</a> and the <a href="https://www.dataprotectionreport.com/2024/02/two-ftc-complaints-that-over-retention-of-personal-data-violates-section-5/">FTC</a>.</p><p>Notably, the state regulators emphasized the importance of effective information governance practices by imposing on Comstar precise data archiving requirements. 
&nbsp;For both the Massachusetts and Connecticut Attorneys General, Comstar agreed to archive patient transportation data within two years of the date of service of the patient. &nbsp;Moreover, where Comstar is required to retain these records for more than two years, the company will &ldquo;to the extent required by applicable law&hellip;archive 2-7 year old records&rdquo; within its offline archival storage under the agreement with the Massachusetts Attorney General.&nbsp; The periods identified by the Massachusetts Attorney General for archival of old records imply a regulatory expectation that records will be kept for no longer than 7 years unless otherwise needed to fulfill regulatory, legal, and contractual requirements.</p><p>These data archiving requirements, coupled with an emphasis on compliance with the &ldquo;minimum necessary&rdquo; standard, were presumably included within the agreements with the goal of increasing the security of older data.&nbsp; First, as many regulators have pointed out, data minimization means less information is available for a threat actor to potentially obtain in a security incident; and, second, by moving older data to properly configured offline storage, that data becomes more difficult for a threat actor to access.&nbsp; This requirement is somewhat similar to the archival and record retention <a href="https://www.ecfr.gov/current/title-17/part-240/section-240.17a-4#p-240.17a-4(a)">requirements</a> for national securities exchange members, brokers, and dealers:&nbsp; they &ldquo;must preserve for a period of not less than 6 years, the first two years in an easily accessible place.&rdquo;</p><p><strong><u>Our take</u></strong></p><p>The various HHS and state consent orders with Comstar serve as a helpful blueprint for organizations seeking to enhance their cybersecurity, privacy, and information governance and retention programs, at a time when regulators&rsquo; focus on all three dimensions continues to intensify. 
&nbsp;As regulators increasingly scrutinize data retention and storage practices and compliance with data minimization standards, companies should develop a plan to update their information governance strategy, with particular attention to records containing personal information.&nbsp; These settlements are particularly interesting because the state regulators were not just focused on over-retention &ndash; a mantra that has come out in many recent cyber incident settlements &ndash; but also on a more nuanced approach to protecting older data that does not need to be accessed as regularly by the business.&nbsp; This evidences a more sophisticated approach to information governance and a new focus by regulators on pushing for more refined data security procedures and protections based not just on the sensitivity of the data, but also on how it is used within the organization and its age.</p><p>Additionally, although the risk of federal enforcement actions following a cybersecurity incident remains a key consideration, HIPAA-regulated entities should also account for the growing likelihood of state enforcement actions. &nbsp;Here, the numerically inclined reader will have noticed that the civil penalties agreed to by Comstar in its agreements with the state Attorneys General were considerably higher than in its agreement with HHS. &nbsp;In this instance, the Massachusetts and Connecticut Attorneys General settled for almost seven times more in civil penalties than HHS.</p>
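The age-based tiering implied by the Massachusetts order (readily accessible storage for roughly the first two years after service, offline archival storage for records two to seven years old, and disposal thereafter absent a continuing obligation) can be sketched as follows. This is our illustrative reading, not language from any of the consent orders; the tier names, boundary handling, and helper function are assumptions:

```python
# Illustrative only: one reading of the age-based retention tiers discussed
# above. Tier names, cutoffs, and boundary handling are ours, not the
# consent orders'.
from datetime import date

def storage_tier(service_date: date, today: date,
                 retention_required: bool = False) -> str:
    """Classify a record by age into a hypothetical storage tier."""
    age_years = (today - service_date).days / 365.25
    if age_years < 2:
        return "active"           # readily accessible production storage
    if age_years <= 7 or retention_required:
        return "offline-archive"  # properly configured offline storage
    return "dispose"              # past 7 years with no remaining obligation
```

Under this sketch, a record from mid-2022 evaluated in early 2026 lands in the offline archive, while a 2015 record is disposed of unless a regulatory, legal, or contractual obligation keeps it in the archive.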
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Heightened Cyber Risks in the Middle East: Geopolitical Tensions Fuel Digital Conflict</title>
		<link>https://www.dataprotectionreport.com/2026/03/heightened-cyber-risks-in-the-middle-east-geopolitical-tensions-fuel-digital-conflictintroduction/</link>
		
		<dc:creator><![CDATA[Tim Jones, Shabnam Karim, Simon Lamb and Sajeedah Bari]]></dc:creator>
		<pubDate>Thu, 05 Mar 2026 14:57:07 +0000</pubDate>
				<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6786</guid>

					<description><![CDATA[Introduction The latest developments in the Middle East – marked by a significant surge in military activity and retaliatory strikes across the region – have been accompanied by a parallel intensification of cyber operations. It is common in such situations for state-sponsored hackers, hacktivist groups, and advanced persistent threat (APT) units to conduct coordinated campaigns... <a href="https://www.dataprotectionreport.com/2026/03/heightened-cyber-risks-in-the-middle-east-geopolitical-tensions-fuel-digital-conflictintroduction/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p><strong>Introduction</strong></p><p>The latest developments in the Middle East &ndash; marked by a significant surge in military activity and retaliatory strikes across the region &ndash; have been accompanied by a parallel intensification of cyber operations.</p><p>It is common in such situations for state-sponsored hackers, hacktivist groups, and advanced persistent threat (APT) units to conduct coordinated campaigns against government systems, critical infrastructure, and private entities during conflict or warlike times.</p><p>This article examines the evolving cyber threat landscape that could arise from the current regional conflict and the potential implications for organisations operating within the Gulf.</p><p><strong>Cyber Operations Surge in Parallel with Physical Conflict</strong></p><p>As tensions develop, the cyber domain has become an increasingly active battleground. Cyber activity in the Middle East has historically featured a blend of destructive attacks, espionage, and large&#8209;scale disruption. The current conflict reflects similar patterns, including:</p><ul class="wp-block-list">
<li>deployment of wiper malware against government and commercial systems</li>



<li>DDoS campaigns targeting public&#8209;sector platforms and media outlets</li>



<li>attempts to infiltrate energy, aviation, and communications infrastructure</li>



<li>digital influence operations aimed at shaping public narratives</li>
</ul><p>It is common in such times for state&#8209;aligned cyber units to launch broad offensive operations designed to disrupt military command systems, undermine state media channels, and interfere with critical service delivery. These operations typically include both high&#8209;volume DDoS attacks and deeper intrusions targeting energy and aviation infrastructure.</p><p>At the same time, hacktivist groups aligned with various sides of the conflict often take advantage of the volatile environment, conducting opportunistic attacks such as account hijackings, website defacements, and mass dissemination of information or propaganda via compromised applications.</p><p>In response, opposing cyber actors typically target regional defence systems, infrastructure assets, and industrial environments. As is typical in periods of heightened conflict, these activities often blur the lines between state activity, state&#8209;aligned groups, and independent cyber collectives pursuing ideological goals.</p><p><strong>Gulf States: A Surge in Activity</strong></p><p>Entities in the GCC states face elevated cyber risk both as direct targets and as potential collateral victims of spill&#8209;over from the broader regional confrontation.</p><p>The Gulf&rsquo;s strategic importance &ndash; its advanced digital and energy infrastructure, global economic ties, and hosting of international defence assets &ndash; makes it a central area of interest for hostile cyber activity during regional conflict.</p><p>According to reports from various authorities in the GCC, the recent scale of attempted intrusions is significant. 
For example, as of 18 February 2026, UAE authorities were intercepting between 90,000 and 200,000 cyberattacks per day, with more than 70% linked to state-sponsored threat actors.</p><p>On 21 February, the UAE Cybersecurity Council announced the successful disruption of coordinated attacks described as &ldquo;terrorist in nature&rdquo;, involving attempted ransomware deployment, network infiltration, and extensive phishing campaigns targeting national platforms.</p><p>These patterns reflect a long&#8209;standing trend: in times of regional tension, Gulf states often experience a surge in activity from sophisticated actors aiming to disrupt energy supplies, defence systems and government networks, compromise sensitive data, or undermine regional stability.</p><p><strong>What are the implications for GCC entities?</strong></p><p>The cumulative effect of these developments is that organisations across the Middle East now face a materially elevated cyber&#8209;risk profile. Industries with the greatest exposure include energy and oil infrastructure, aviation, financial services, defence, telecommunications, and IT service providers. Organisations without direct links to the conflict or only indirect connections to the Middle East may be affected through collateral targeting, opportunistic exploitation, or supply&#8209;chain vulnerabilities.</p><p>To mitigate these risks, GCC organisations should undertake comprehensive exposure assessments, evaluating direct threats as well as indirect or spill&#8209;over impacts. Enhanced governance, robust detection and response mechanisms, and well&#8209;tested incident&#8209;response and business&#8209;continuity plans are essential. 
Organisations should consider undertaking rapid assessments of third&#8209;party and supply&#8209;chain dependencies, resilience testing across critical functions, and tabletop exercises that replicate state&#8209;linked attack scenarios.</p><p><strong>Insurance Implications</strong></p><p>From a cyber insurance standpoint, organisations and insurers in the GCC should anticipate far greater focus on exclusion clauses, which commonly exclude losses arising from &ldquo;war&rdquo; or &ldquo;hostile or warlike action&rdquo; by a government or sovereign actor.</p><p>With the heightened risk of cyberattacks at present, insureds should look closely at their policies to determine whether they have sufficient cyber cover for state&#8209;linked incidents, and where there is doubt, they should seek to include greater clarity.&nbsp;</p><p>Insurers, for their part, should consider aggregation risk (given the sheer volume of cyberattacks and potential for correlated claims across multiple policyholders), clarify exclusion wording, and ensure that the exclusions align with the distinct realities of digital conflict.</p><p><strong>Conclusion</strong></p><p>Cyber operations have become a defining feature of geopolitical tension in the Middle East, operating alongside and often amplifying physical conflict. For organisations in the Gulf region, this environment demands heightened vigilance, accelerated defensive measures, more stringent compliance practices, and recognition that cyber resilience has become inseparable from broader business continuity and national security.</p><p>In the event of a cyber incident, the NRF team is available to assist on our 24/7/365 hotline. Please contact us via <a href="mailto:databreachresponse@nortonrosefulbright.com" target="_blank" rel="noreferrer noopener">databreachresponse@nortonrosefulbright.com</a> or +44 20 7444 5452 / +971 4 369 6362.</p><p><em>With thanks to Alaa El Kholy for her contribution to this publication.</em></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI and privilege: Assessing recent court rulings</title>
		<link>https://www.dataprotectionreport.com/2026/03/ai-and-privilege-assessing-recent-court-rulings/</link>
		
		<dc:creator><![CDATA[Ellen Blanchard (US), Marc Collier (US), Annmarie Giblin (US), Susana Medeiros (US), Ethan Glenn (US) and Susan Ross (US)]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 01:17:58 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[attorney-client privilege]]></category>
		<category><![CDATA[privilege]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6782</guid>

					<description><![CDATA[We recently drafted an article that discussed court decisions that reached very different conclusions about how the attorney-client privilege and work product doctrine apply to materials submitted to and created by generative AI (GenAI) tools.  A recent decision from the U.S. District Court for the Southern District of New York, United States v. Heppner, underscores... <a href="https://www.dataprotectionreport.com/2026/03/ai-and-privilege-assessing-recent-court-rulings/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>We recently drafted an article that discussed court decisions that reached very different conclusions about how the attorney-client privilege and work product doctrine apply to materials submitted to and created by generative AI (GenAI) tools.&nbsp; A recent decision from the U.S. District Court for the Southern District of New York, <em>United States v. Heppner</em>, underscores the risks of using publicly available GenAI tools in connection with legal matters. The court ruled that client-written prompts and outputs generated using a publicly available version of Claude were not protected, and notably concluded that no privilege existed rather than conducting a waiver analysis.&nbsp; In contrast, the court in <em>Concord Music Group v. Anthropic</em> found that GenAI prompts and outputs from Claude generated by plaintiffs during a pre-suit investigation into potential infringement by Anthropic were protected by the work-product doctrine and did not have to be produced.</p><p>To read our analysis of these recent decisions and practical takeaways for safeguarding privilege while using GenAI, click <a href="https://www.nortonrosefulbright.com/en-us/knowledge/publications/f841e401/recent-genai-rulings-highlight-challenges-with-safeguarding-privilege">here</a> to access the article.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Protective order violations lead to sanctions in Uber MDL litigation</title>
		<link>https://www.dataprotectionreport.com/2026/02/protective-order-violations-lead-to-sanctions-in-uber-mdl-litigation/</link>
		
		<dc:creator><![CDATA[David Kessler (US), Ellen Blanchard (US) and Kelly Atherton (US)]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 20:01:25 +0000</pubDate>
				<category><![CDATA[Dispute resolution and litigation]]></category>
		<category><![CDATA[confidential information]]></category>
		<category><![CDATA[protective order]]></category>
		<category><![CDATA[sanctions]]></category>
		<category><![CDATA[sensitive information]]></category>
		<guid isPermaLink="false">https://www.dataprotectionreport.com/?p=6780</guid>

					<description><![CDATA[Even when stringent protective orders are in place, clients are often concerned that the sensitive information they are required to produce in litigation will end up being disclosed or used for improper purposes. Clients often ask whether the protective order is strong enough to ensure that the other side will comply with the provisions. It... <a href="https://www.dataprotectionreport.com/2026/02/protective-order-violations-lead-to-sanctions-in-uber-mdl-litigation/">Continue Reading</a>]]></description>
										<content:encoded><![CDATA[<p>Even when stringent protective orders are in place, clients are often concerned that the sensitive information they are required to produce in litigation will end up being disclosed or used for improper purposes. Clients often ask whether the protective order is strong enough to ensure that the other side will comply with the provisions. It became a little clearer this week that protective order violations can be expensive, even when courts find no bad faith. In a February 17, 2026 Order in the multidistrict litigation (MDL) <em>In re: Uber Technologies, Inc., Passenger Sexual Assault Litigation</em>, No. 3:23-md-03084-CRB (LJC) (N.D. Cal. 2026), Magistrate Judge Lisa Cisneros sanctioned plaintiffs&rsquo; counsel $30,000 plus reasonable attorney&rsquo;s fees for disclosing confidential information across multiple cases, sending a clear message that compliance with protective orders is non-negotiable and self-help is not an option. Judge Cisneros left open the door for more severe sanctions if the plaintiffs&rsquo; counsel continued to violate the Protective Order. This ruling makes it clear that it is important to take the time to negotiate a strong protective order that can be enforced should the need arise.</p><p><strong>Background</strong></p><p>Bret Stanley, one of several plaintiff attorneys in the MDL proceeding against Uber, also represents plaintiffs in other matters against Uber. The Court previously determined that Stanley violated the Protective Order in the MDL by disclosing information that Uber had designated as confidential in the MDL. 
Uber filed motions to enforce the Protective Order against Stanley in August 2025 and again in December 2025.</p><p><strong>First Protective Order Violation (August 2025)</strong></p><p>Uber argued that either Stanley or his co-counsel violated the Protective Order, which provided that a &ldquo;Receiving Party may use Protected Material that is disclosed or produced by another Party or by a Non-Party in connection with this case <em>only</em> for prosecuting, defending, or attempting to settle this Action or the consolidated [California JCCP] action captioned In re Uber Rideshare Cases&hellip;so long as such use is permitted herein.&rdquo; Specifically, Stanley included lists containing protected information, namely the names of Uber policies and how Uber categorized the policies, in discovery requests served on Uber in other cases in Texas and New Jersey, and nearly identical requests were served by other plaintiffs&rsquo; attorneys in two additional New Jersey cases. In a ruling on the record at an August 12, 2025 hearing, the Court found that Stanley&rsquo;s spreadsheets and document requests in other litigation disclosed the complete substantive contents of certain documents Uber designated as confidential in the MDL, stating that it was &ldquo;pretty straightforward that a disclosure occurred&rdquo; and noting that &ldquo;Stanley could have challenged Uber&rsquo;s confidentiality designations if he had wished to do so, but that he never did.&rdquo;</p><p>On August 18, 2025, the Court adopted a joint proposed order finding that Stanley violated the Protective Order. The Court ordered Stanley to identify all persons outside the MDL to whom he disclosed confidential information and to provide a copy of the order to those persons and courts within three days. The Court also ordered Stanley to take reasonable efforts to retrieve or ensure destruction of all unauthorized confidential information. 
The Court did not impose monetary sanctions in August.</p><p>However, Stanley failed to notify the appropriate courts and persons of the Court&rsquo;s August 18, 2025 order within the required three-day deadline. Additionally, despite the Court&rsquo;s order that Stanley ensure that the material was destroyed, and despite his co-counsel&rsquo;s knowledge of the violation, his co-counsel refiled the same confidential material in the public record in one of the cases on October 8, 2025, with the filing remaining publicly accessible for approximately thirteen days.</p><p><strong>Second Protective Order Violation (December 2025)</strong></p><p>Separately, Stanley searched for documents related to another matter in the repository of documents that Uber produced in the MDL, including documents designated as confidential under the Protective Order. Stanley then relied on what he found in a confidential document to advance arguments in his unrelated matter, though he did not disclose the document itself to that court or others unauthorized to see it. 
Stanley had access to this repository of documents because of his membership in the Plaintiff Steering Committee.</p><p><strong>The Court&rsquo;s February 17, 2026 Order</strong></p><p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; First Motion for Sanctions for the Disclosure of the Policy List</strong></p><p>The Court found that &ldquo;Stanley&rsquo;s disclosure of that information unilaterally rendered Uber&rsquo;s designations meaningless, disregarding the Protective Order&rsquo;s process for challenging a designation if parties reasonably disagree over whether information should be protected&rdquo; and that &ldquo;[a]ttorneys do not have &lsquo;discretion to decide unilaterally that competing rationales&hellip;permit the blatant breach of preexisting protective orders.&rsquo;&rdquo; The Court found that Stanley&rsquo;s interpretation of the Protective Order was impermissible and unreasonable and that some sanction was required under Rule 37(b)(2)(C). On the other hand, Uber had not shown significant actual or likely harm from the disclosure, and its request for attorneys&rsquo; fees was excessive under the circumstances.</p><p>Given Stanley&rsquo;s failure to take reasonable efforts to retrieve or ensure destruction of all unauthorized confidential information after being ordered to in August 2025, the Court granted Uber&rsquo;s first motion for sanctions in part in its February 2026 Order and ordered Stanley to pay Uber $30,000 as partial reimbursement for its attorneys&rsquo; fees within thirty days. The Court considered several factors in reducing the fee award from the $168,572.97 originally requested by Uber, including:</p><ul class="wp-block-list">
<li>Uber delayed seeking sanctions and appeared to have &ldquo;sandbagged&rdquo; its request, with no indication in its original motion that more than $150,000 in attorneys&rsquo; fees would be sought and &ldquo;remain like a sword hanging by a slender thread, ready to drop if Stanley was late or otherwise delinquent in meeting his obligations&rdquo;</li>



<li>Uber&rsquo;s billing records showed the matter was overstaffed with senior attorneys and overworked beyond what was reasonable for the matters in dispute</li>



<li>While disclosure was improper, Uber had not shown actual harm, and the lack of harm informed the Court&rsquo;s assessment of a just sanction</li>



<li>Uber&rsquo;s own conduct in the MDL warranted a reduction in the fees that it should be permitted to recover because the Court previously found that Uber likely violated at least one discovery order and had &ldquo;wasted the Court&rsquo;s and Plaintiffs&rsquo; time&rdquo; through incomplete and misleading disclosures</li>



<li>Stanley directly revealed protected material only to other attorneys and courts</li>



<li>Stanley genuinely, though unreasonably, believed his disclosure did not violate the Protective Order, and Uber had not shown bad faith</li>
</ul><p>The Court also recognized that protective orders for confidential information are not intended to serve as barriers to efficient and effective discovery in other litigation. Accordingly, the Court ordered the parties to meet and confer regarding the degree to which Uber&rsquo;s policy names are &ldquo;CONFIDENTIAL&rdquo; or &ldquo;HIGHLY CONFIDENTIAL &ndash; ATTORNEYS&rsquo; EYES ONLY&rdquo; within the meaning of the Protective Order and &ldquo;to consider whether a modification to the Protective Order is appropriate to allow for limited use of this or other material designated as confidential in serving discovery requests in other litigation, subject to specific safeguards to prevent undue public disclosure.&rdquo;</p><p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Second Motion for Sanctions for Conducting an Unauthorized Search</strong></p><p>For the second motion involving Stanley&rsquo;s unauthorized search of the MDL discovery repository, the Court found that Stanley violated the Protective Order &ldquo;when he resorted to self-help to research his [] client&rsquo;s claims in the confidential MDL discovery repository&rdquo; and that sanctions were appropriate under Rule 37(b)(2)(C). The Court ordered Stanley to pay Uber&rsquo;s <em>reasonable</em> attorneys&rsquo; fees for bringing that motion, with the parties encouraged to negotiate the amount.&nbsp;</p><p>The Court also referred Uber&rsquo;s request to disqualify Stanley from the Plaintiff Steering Committee to the appropriate court for resolution.</p><p><strong>Our Take</strong></p><p>Regardless of the type or size of the litigation, most parties will at some point be required to produce highly sensitive and confidential information to the other side. Production of the information does not convey any additional rights or ownership to the receiving party, nor does it grant the receiving party the right to use this information as it sees fit. 
However, as soon as a party produces the information, its ability to control the information decreases dramatically. Protective orders provide rules for this exchange of information, allowing parties to share sensitive information with each other in order to further the truth-seeking function of the court and to avoid surprises at trial, without needing to worry that the information will be used for improper purposes or cause harm to the producing party. If a party believes an opponent&rsquo;s designation is unwarranted, the right approach is to meet and confer, and raise a challenge with the court if necessary, not to decide unilaterally what portions of an opponent&rsquo;s confidential documents can and cannot be disclosed.</p><p>This Order makes clear that protective orders are not perfunctory agreements, nor orders issued at the beginning of litigation that lose meaning as the case unfolds. Compliance is non-negotiable, and courts are prepared to order parties to comply.&nbsp;</p><p>This ruling also highlights the importance of carefully negotiating the protective order and being prepared to enforce it if necessary. When negotiating the protection of sensitive information, it is crucial for parties to consider the cybersecurity obligations that should be incorporated, as we have discussed in prior <a href="https://www.dataprotectionreport.com/2025/12/happy-e-discovery-day/">blog posts </a>and articles: <a href="https://www.thesedonaconference.org/sites/default/files/conference_papers/Recommended%20%5B03a%5D%20Protective%20Orders_Kessler%2C%20et%20al.%20%28Mar%202015%29.pdf">LINK</a> and <a href="http://www.nortonrosefulbright.com/-/media/files/nrf/nrfweb/knowledge-pdfs/the-obligation-to-secure-your-opponents-data-in-the-age-of-hacking.pdf">LINK</a>. 
If more courts take the approach Judge Cisneros did, that may give some comfort to clients who must produce sensitive information in order to comply with their discovery obligations, knowing the court will impose sanctions where &ldquo;necessary and sufficient to deter future violations and appropriate to partially compensate [the producing party] for its costs.&rdquo;</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
