<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Global Privacy &amp; Security Blog</title>
	<atom:link href="https://www.stoelprivacyblog.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.stoelprivacyblog.com/</link>
	<description>News and Developments in AI, Data Privacy and Cybersecurity</description>
	<lastBuildDate>Fri, 27 Mar 2026 16:48:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.3&amp;lxb_maple_bar_source=lxb_maple_bar_source</generator>

<image>
	<url>https://globalprivacysecurityblogpremier.lexblogplatformthree.com/wp-content/uploads/sites/982/2023/02/cropped-cropped-favicon-32x32.png</url>
	<title>Global Privacy &amp; Security Blog</title>
	<link>https://www.stoelprivacyblog.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Back to the Future: Cybersecurity Audits</title>
		<link>https://www.stoelprivacyblog.com/2026/03/articles/ai/back-to-the-future-cybersecurity-audits/</link>
		
		<dc:creator><![CDATA[John Pavolotsky]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 21:27:19 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Cybersecurity]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4646</guid>

					<description><![CDATA[With the RSA Cybersecurity Conference right around the corner from our office in San Francisco, it seems only fitting that the March article focuses on cybersecurity. Long gone are the halcyon days of 1991, when the RSA Conference first started. In the fast-moving world of cybersecurity, 2011, or even 2021, feels antiquated. Hackers seem to...]]></description>
										<content:encoded><![CDATA[<p>With the RSA Cybersecurity Conference right around the corner from our office in San Francisco, it seems only fitting that the March article focuses on cybersecurity. Long gone are the halcyon days of 1991, when the RSA Conference first started. In the fast-moving world of cybersecurity, 2011, or even 2021, feels antiquated. Hackers seem to be one step ahead, using off-the-shelf tools and/or tools freely available on the dark web. From a cybersecurity perspective, agentic AI, at least in the short term, will only further challenge the current dynamics. <em>See</em> <em>Securing and Contracting Agentic AI</em> (Feb.&nbsp;20, 2026). <a href="https://www.stoel.com/insights/publications/securing-and-contracting-agentic-ai">https://www.stoel.com/insights/publications/securing-and-contracting-agentic-ai</a>.</p><p>In cybersecurity, typically, laws and regulations lag market and industry standards and customer expectations. I say typically because some laws and regulations are forward-thinking, usually in terms of their flexibility, and less often, in their specificity. A case in point is Massachusetts&rsquo; 201 CMR 17.00: Standards for the Protection of Personal Information of Residents of the Commonwealth, available at: <a href="https://www.mass.gov/regulations/201-CMR-1700-standards-for-the-protection-of-personal-information-of-residents-of-the-commonwealth">https://www.mass.gov/regulations/201-CMR-1700-standards-for-the-protection-of-personal-information-of-residents-of-the-commonwealth</a>. These regulations went into effect on March&nbsp;1, 2010, and borrowed from the GLBA Safeguards Rule (effective May 23, 2003) and the HIPAA Security Rule (effective April 21, 2005). The regulations required a comprehensive written information security program and contained somewhat prescriptive technical requirements, in considerable contrast to the requirement to implement and maintain commercially reasonable protective measures found in most data breach notification statutes. Put otherwise, if someone were drafting an information security policy in 2010, the Massachusetts regulations were as good a place as any to start. &nbsp;</p><p>Fast forward to March 2026. The world seems to be a far less hospitable place. Trust is in even shorter supply. Verification is paramount. Enter the California Consumer Privacy Act Regulations (the &ldquo;CCPA Regulations&rdquo;), effective January 1, 2026: <a href="https://cppa.ca.gov/regulations/pdf/ccpa_statute_eff_20260101.pdf">https://cppa.ca.gov/regulations/pdf/ccpa_statute_eff_20260101.pdf</a>.</p><p>Pursuant to the CCPA Regulations, a business must complete a cybersecurity audit if it either (a) derives at least 50 percent of its annual revenues from selling or sharing consumers&rsquo; personal information (in effect, a data broker) or (b) (i) satisfies the annual revenue threshold (currently, $26.25 million) and (ii) processed the personal information of at least 250,000 consumers or households or processed the sensitive personal information of at least 50,000 consumers, in each case in the preceding calendar year.</p><p>The audit must be performed by an audit professional using audit industry-accepted procedures and standards, such as ISO. If a business does not have a suitable auditor yet, it should consider engaging one as soon as possible.</p><p>The first batch of cybersecurity audit reports under the CCPA Regulations is due in just over two years, on April 1, 2028. 
This deadline applies to businesses with calendar year 2027 gross revenue greater than $100 million and covers January 1, 2027 to January 1, 2028. The next batches are due on April 1, 2029 (gross revenue between $50 million and $100 million) and April 1, 2030 (gross revenue less than $50 million). For the first batch, two years may seem long, but given that the audit year starts in just over nine months, it is just around the corner. The audit requirements are sprawling, but not unexpected. Components to be assessed during the audit include: encryption of personal information, at rest and in transit; strong passwords; audit log management; account management and access control; employee cyber training; patch management; inventory management; log management; segmentation; internal and external vulnerability scans, penetration testing, and vulnerability disclosure and reporting (e.g., bug bounty and ethical hacking programs); secure development and coding practices; and incident response management. The audit must assess a business&rsquo;s cybersecurity program, and specifically whether said program is &ldquo;appropriate to the business&rsquo;s size and complexity and the nature and scope of its processing activities, taking into account the state of the art and cost of implementing the components of a cybersecurity program.&rdquo; CCPA &sect; 7123(b)(1). For those in the field, if this is bringing back memories of 2010 (201 CMR ch. 17.00) or 2003 (GLBA Safeguards Rule), you are not alone. The assessment is supposed to be context specific; for example, if the business operates exclusively online, restricting and monitoring physical access will be a null set. The requirements are general; compliance with specific standards, <em>e.g.</em>, NIST SP 800-88 for secure document destruction, is not mandated. That said, a seasoned auditor will probe into the technical sufficiency and reasonableness of the protective technological measures. The report must include a gap analysis and a plan, including a timeframe, to address the gaps. A business may leverage other cybersecurity assessments and audit reports, so long as, alone or as supplemented, they address the same requirements as the CCPA Regulations.</p><p>In preparing for the audit, now is as good a time as any to create a &ldquo;punch list&rdquo; of the components to be assessed and to map against current policies and practices. If there has not been a penetration test (not to be confused with website vulnerability scanning) in some time, or ever, now is the time to engage a reputable vendor. If there is a process but no documentation, now is the time to prepare such documentation. If there is a documented process, but it has not been pressure tested, now is the time to do so. An auditor will ask. Plus, even though the CCPA Regulations may not apply to a broad swath of businesses for some time, if there is a cybersecurity incident, rest assured that a state regulator can and will ask about the cybersecurity measures at and after the time of the incident, as will counterparties. Further, cybersecurity (and privacy) laws and regulations are living documents, especially in California, and the trend is for them to become more, not less, stringent. Case in point: the CCPA, which has been amended (and strengthened) many times. 
In the same vein, an amendment in the next few years to require businesses tasked with performing a risk assessment (triggered, <em>e.g.</em>, by a sale or sharing of personal information) to also perform a cybersecurity audit is not farfetched. Last, but not least, concepts (<em>e.g.</em>, the accessible deletion mechanism for data brokers) first enacted in California tend to spread to other states (with similar bills, <em>e.g.</em>, in Rhode Island and Vermont), meaning that other states may soon follow with cybersecurity audit requirements.</p><p>Where to start? The written information security program, of course.</p>
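<p><em>For readers who want to see the thresholds and staggered deadlines discussed above in one place, here is a minimal illustrative sketch in Python. The function and parameter names are hypothetical, the revenue-tier boundaries are simplifying assumptions, and nothing here restates the regulation text or constitutes legal advice.</em></p><pre class="wp-block-code"><code>def must_complete_cybersecurity_audit(
    share_of_revenue_from_selling_or_sharing_pi: float,  # e.g., 0.60 = 60%
    annual_gross_revenue: float,
    consumers_or_households_with_pi_processed: int,
    consumers_with_sensitive_pi_processed: int,
) -> bool:
    # Prong (a): at least 50 percent of annual revenue from selling or
    # sharing consumers' personal information (in effect, a data broker).
    prong_a = share_of_revenue_from_selling_or_sharing_pi >= 0.50

    # Prong (b): meets the CCPA revenue threshold (currently $26.25 million)
    # AND, in the preceding calendar year, processed personal information of
    # 250,000+ consumers or households OR sensitive personal information of
    # 50,000+ consumers.
    prong_b = annual_gross_revenue >= 26_250_000 and (
        consumers_or_households_with_pi_processed >= 250_000
        or consumers_with_sensitive_pi_processed >= 50_000
    )
    return prong_a or prong_b


def first_audit_report_due(gross_revenue: float) -> str:
    # Staggered reporting dates discussed above; the exact boundary treatment
    # is an assumption for illustration -- confirm against the regulations.
    if gross_revenue > 100_000_000:
        return "April 1, 2028"
    if gross_revenue >= 50_000_000:
        return "April 1, 2029"
    return "April 1, 2030"
</code></pre>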
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Securing and Contracting Agentic AI</title>
		<link>https://www.stoelprivacyblog.com/2026/02/articles/ai/securing-and-contracting-agentic-ai/</link>
		
		<dc:creator><![CDATA[John Pavolotsky and Jon Washburn]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 02:25:11 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Agentic AI]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4643</guid>

					<description><![CDATA[Agentic AI refers to AI systems that can independently plan, decide, and act toward a goal across multiple steps, often invoking tools, APIs, or other systems without continuous human prompting. Unlike traditional generative AI—which produces content in response to a user prompt—agentic AI systems execute workflows, make decisions based on context, and adapt their behavior...]]></description>
										<content:encoded><![CDATA[<p><strong>Agentic AI</strong> refers to AI systems that can <strong>independently plan, decide, and act</strong> toward a goal across multiple steps, often invoking tools, APIs, or other systems without continuous human prompting. Unlike traditional generative AI&mdash;which produces content in response to a user prompt&mdash;agentic AI systems execute workflows, make decisions based on context, and adapt their behavior dynamically over time.</p><p>The critical difference is <strong>agency</strong>. Generative AI is reactive; agentic AI is proactive. When an AI system can autonomously trigger actions&mdash;such as sending emails, modifying records, executing transactions, or orchestrating other systems&mdash;it effectively becomes a <strong>non-human actor </strong>and<strong> potential insider threat</strong>.</p><p><strong>Before deploying agentic AI in a business context, ask:</strong></p><ul class="wp-block-list">
<li><strong>What authority will this agent have? </strong>Can it only read data, or can it modify systems, move funds, or initiate communications?</li>



<li><strong>What decisions can it make without human approval? </strong>Where are the guardrails, and how are they enforced?</li>



<li><strong>What systems can it interact with? </strong>Each integration expands the blast radius of a system failure or compromise. How <em>safe</em> is each one?</li>



<li><strong>How will its behavior be audited and monitored? </strong>Can auditors observe what the agent is doing, why it is doing it, and the input used?</li>



<li><strong>What is the failure mode? </strong>How do you stop it if it becomes a threat?</li>
</ul><p>After all these questions have been answered,<strong> complete a risk assessment</strong> that aligns with the <a href="https://www.nist.gov/itl/ai-risk-management-framework">National Institute of Standards and Technology&rsquo;s AI Risk Management Framework</a>. Individually assess all elements of the agentic AI solution, as well as the sum of all parts, prior to deployment and regularly thereafter.</p><p><strong>Identification, Authentication, and Authorization</strong></p><p>Agentic AI will have a unique risk profile in your Identity and Access Management (&ldquo;IAM&rdquo;) system. At minimum, address these three elements:</p><ul class="wp-block-list">
<li>An agent must have a <strong>unique, persistent identity</strong> distinct from developers, users, or service accounts. Avoid shared or embedded credentials, as they magnify risk in agentic deployments.</li>



<li>Agents should authenticate using <strong>strong, non-interactive mechanisms</strong> (e.g., short-lived tokens), with no static passwords or keys baked into prompts or code.</li>



<li>Least privilege is essential. Constrain agents with <strong>fine-grained, task-specific permissions</strong>, not broad system access. Granting excessive permissions leads to systemic risk. An agent should never have more authority than the narrow task it is performing <em>at that moment</em>.</li>
</ul><p><strong>Security Risks and Mitigations</strong></p><p>While the full scope of systemic risk is too broad for this blog post, here are five high-level risks that <em>must</em> be mitigated when considering an agentic AI rollout:</p><p><strong>1. Excessive autonomy </strong>enables agents to take unintended actions with real-world impact. Enforce human-in-the-loop controls for high-risk actions.</p><p><strong>2. Prompt injection and manipulation </strong>can allow external bad actors to force agents to override safeguards. Adopt a secure development framework <a href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/">such as the Open Worldwide Application Security Project&rsquo;s</a> and perform vulnerability testing at regular intervals.</p><p><strong>3. Data leakage and oversharing </strong>risk allowing agents to exfiltrate sensitive data during task execution. Limit access only to appropriately classified data and filter output.</p><p><strong>4. Lack of accountability </strong>will result in systemic issues and untrustworthy AI. Require immutable logging, decision traceability, and clear ownership for each deployed agent.</p><p><strong>5. Unmanaged supply chains </strong>risk allowing the use of unapproved tools, vulnerable plugins, undocumented APIs, or unfit or hackable models. Review all system dependencies <em>on a regular cadence</em> and restrict interactions to vetted and managed tools and vendors.</p><p><strong>Contracting Agentic AI</strong></p><p>As expected, technological advancements are outpacing contracting and the law. Best-in-class contractual provisions today may be obsolete in 12-24 months, if not sooner, and even if they are not, it will take a village, at least for the near term, to operationalize and enforce the contractual mandates. Further, most contracts are filed away, never to be seen again unless a dispute develops or the parties are reviewing contract termination logistics. Not revisiting the security (and other) terms in an agentic AI contract on a regular cadence creates substantial risk. Below are a few mitigation options.</p><p>While specificity is usually preferable, for the time being it may be worthwhile to leverage more general language, such as compliance with laws (which will, of course, evolve over time and move somewhat in lockstep with the technology). The specific protections outlined above complement the general &ldquo;compliance with laws&rdquo; language and serve as the foundation for a mutual operational understanding.</p><p>In the near term, consider spending more time negotiating independent third-party audit clauses, for both the developer and the deployer, to help ensure that both parties are receiving the bargained-for protections. An audit in 2031 may look quite different from one in 2026, but at least you will have established the foundation and expectation for confirming mutual compliance.</p><p>Most importantly, best-in-class contractual provisions for agentic AI may, given the nature of the technology, and in particular the need to actively monitor and manage it, create a Potemkin village issue. In particular, the contracting party ostensibly benefiting from the robust language may be lulled into a false sense of security. Put otherwise, best-in-class language without the requisite security and operational resources to monitor and manage the salient aspects of the agentic AI solution could be a recipe for disaster. The deployer will need to, among other things:</p><ul class="wp-block-list">
<li>Constantly communicate with the developer,</li>



<li>Learn about model updates and any other changes implemented by the developer that could impact the performance and behavior of the agentic AI solution,</li>



<li>Validate all integrations,</li>



<li>Know how to disable the agent if it crosses certain lines,</li>



<li>Request appropriate documentation from the developer to perform legally mandated risk assessments, and</li>



<li>Ensure that the logs provided by the developer are timely, sufficiently granular, and actionable.</li>
</ul><p>The contract should include &ldquo;hooks&rdquo; for these activities that require the developer to reasonably comply. In addition, the contract should permit both the developer and deployer to shut an agent down immediately if it presents a safety risk or on short notice if it crosses certain thresholds or operates outside preset guardrails. Deployments will need to factor &ldquo;kill switch&rdquo; scenarios into operational continuity, and deployers will need to work closely with developers, at least until expectations harden and the engagement becomes more routine. Conversely, the developer will need to dedicate considerable resources in order to minimize failure risk and help ensure successful deployments. Lastly, the developer and deployer will want to regularly (at least annually) review the AI contract to help ensure that it is meeting their needs and addressing emergent risks.</p>
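<p><em>Returning to the identification, authentication, and authorization elements discussed above, here is a purely illustrative sketch in Python. The identifiers and scope names are hypothetical, and a production deployment would rely on an IAM or secrets-management service rather than in-process objects; the snippet only shows, in miniature, a unique agent identity, a short-lived non-interactive credential, and least-privilege scope checks.</em></p><pre class="wp-block-code"><code>import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentCredential:
    # A unique, persistent agent identity, distinct from any user, developer,
    # or shared service account.
    agent_id: str
    scopes: frozenset        # fine-grained, task-specific permissions
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: int = 900   # short-lived; rotated, never baked into prompts or code

    def allows(self, action: str) -> bool:
        # Least privilege: deny anything outside the granted scopes, and deny
        # everything once the short-lived token has expired.
        expired = (time.time() - self.issued_at) > self.ttl_seconds
        return (not expired) and action in self.scopes


# Example: an agent that may read CRM records but not send email or move funds.
credential = AgentCredential(agent_id="agent-crm-summarizer-01",
                             scopes=frozenset({"crm:read"}))
assert credential.allows("crm:read")
assert not credential.allows("email:send")
assert not credential.allows("payments:initiate")
</code></pre>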
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>California AI and Privacy Legislation Update – January 2026</title>
		<link>https://www.stoelprivacyblog.com/2026/01/articles/ai/california-ai-and-privacy-legislation-update-january-2026/</link>
		
		<dc:creator><![CDATA[John Pavolotsky and Elena Miller]]></dc:creator>
		<pubDate>Fri, 23 Jan 2026 20:48:54 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Privacy]]></category>
		<category><![CDATA[AI Laws]]></category>
		<category><![CDATA[Privacy Laws]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4639</guid>

					<description><![CDATA[The new year is off to a quick start. February looms. Businesses are beginning to settle into 2026, and some trends (or at least outlines of such) are beginning to emerge. Businesses are digesting the AI and privacy bills that were signed into law last Fall. California Invasion of Privacy Act (CIPA) litigation shows no...]]></description>
										<content:encoded><![CDATA[<p>The new year is off to a quick start. February looms. Businesses are beginning to settle into 2026, and some trends (or at least outlines of such) are beginning to emerge. Businesses are digesting the AI and privacy bills that were signed into law last Fall. California Invasion of Privacy Act (CIPA) litigation shows no sign of abating. Perhaps the failure of SB 690 (see below), and another year of runway, is driving litigation in this space. As of January 21, 2026, 40 data breaches (each impacting more than 500 California residents) have been reported to the California Attorney General, compared to 23 for the same period in 2025. Privacy class action litigation usually follows soon after reporting, suggesting that 2026 will be another active year, if not a more active one, in that space.</p><p>Investigations from regulators follow as well. The beginning of the year is as good a time as any to review the Written Information Security Program (WISP) and to confirm appropriate implementation of the systems and measures that flow from the WISP. Likewise, it is as good a time as any to review and (if appropriate) update website privacy notices, ensure that website consent managers and opt-out mechanisms function as intended, and conduct privacy and cybersecurity training. What&rsquo;s old is new, at least each January.&nbsp;</p><p>The California Privacy Protection Agency continues to focus on data brokers and, with the availability of the Delete Request and Opt-Out Platform (DROP), will likely step up enforcement in 2026. Implementation of DROP may not be entirely straightforward, so a detailed project plan and punchlist are recommended.</p><p>The updated California Consumer Privacy Act of 2018 (CCPA) regulations became operative on January 1, 2026. These regulations now mandate, among other items, annual cybersecurity audits, data privacy risk assessments, and pre-use notice and other requirements for automated decision-making technologies, generally with staggered start dates depending on the size of the business.</p><p>The California legislature reconvened on January 5, 2026. As we are in the second year of the 2025-2026 Biennium, legislators may introduce new bills and/or try to progress bills, introduced during the first year, that had not made it to the Governor&rsquo;s desk. For example, a few bills stalled in the Assembly Privacy and Consumer Protection Committee, after having been approved by the Senate: SB 690 (an act to amend Sections 631, 632, 632.7, 637.2, and 638.50 of the Penal Code, relating to crime) aimed to de-conflict CIPA and the CCPA and, in so doing, legislatively close the floodgates on CIPA litigation; SB 420 would have required developers of high-risk automated decision systems to conduct impact assessments&nbsp;before&nbsp;making the system publicly available. The current legislative session ends on August 31, 2026.</p><p>The following AI and data privacy laws went into effect on January 1, 2026:</p><ol class="wp-block-list">
<li>AB 316 (Artificial intelligence: defenses)</li>



<li>AB 566 (CCPA: opt-out preference signal)</li>



<li>AB 853 (California AI Transparency Act)</li>



<li>SB 53 (Artificial intelligence models: large developers)</li>



<li>SB 243 (Companion chatbots)</li>



<li>SB 361 (Data broker registration: data collection)</li>



<li>SB 446 (An act to amend Section 1798.82 of the Civil Code, relating to personal information and data breach notification)</li>
</ol><p>AB 566 (the California Opt Me Out Act) requires web browsers to include a clear, one-step setting allowing users to send an opt-out preference signal. AB 853 imposes new transparency and disclosure obligations on GenAI systems, amending the existing California AI Transparency Act. On an even larger scale, SB 53 requires large AI developers to publish risk-management frameworks and report catastrophic safety incidents to the State.</p><p>SB 446 mandates data breach notification to impacted California residents within 30 calendar days of discovery or notification of the data breach, with customary exceptions. A breach notification report must be submitted to the California Attorney General within 15 calendar days of notifying the affected individuals.</p><p>A number of new AI and privacy bills have been introduced.</p><p>SB 300, SB 867, and AB 1609 are new chatbot bills. AB 1064 (Leading Ethical AI Development (LEAD) for Kids Act) was vetoed by Governor Newsom last Fall. A new ballot measure, Parents &amp; Kids Safe AI Act, has been introduced, and signatures are now being sought to put the measure on the ballot in November 2026. It&rsquo;s possible that the California legislature may introduce, and Governor Newsom may sign, new legislation on this topic, following the course of the CCPA, which was originally a proposed ballot measure.</p><p>AB 1542 (an act to amend Sections 1798.100 and 1798.121 of the Civil Code relating to privacy) would, under the CCPA, prohibit a business, service provider, or contractor from selling sensitive personal information to, or sharing it with, a third party. Current law allows the consumer to opt out of the selling or sharing of personal information and to limit the use and disclosure of sensitive personal information to certain uses as set out in the statute. AB 1542 may be heard in committee on February 5, 2026.</p><p>Stay tuned. 2026 promises to be another interesting year in California.</p>
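<p><em>To make the SB 446 timelines above concrete, here is a minimal illustrative sketch in Python. The function name is hypothetical, calendar days are modeled as simple day counts, and the statute&rsquo;s customary exceptions are not represented; this is not legal advice.</em></p><pre class="wp-block-code"><code>from datetime import date, timedelta


def sb446_deadlines(discovery_date: date, individuals_notified_on: date) -> dict:
    return {
        # Notify impacted California residents within 30 calendar days of
        # discovery (or notification) of the data breach.
        "notify_residents_by": discovery_date + timedelta(days=30),
        # Submit a breach notification report to the California Attorney
        # General within 15 calendar days of notifying affected individuals.
        "report_to_attorney_general_by": individuals_notified_on + timedelta(days=15),
    }


deadlines = sb446_deadlines(date(2026, 2, 2), date(2026, 2, 20))
# {'notify_residents_by': datetime.date(2026, 3, 4),
#  'report_to_attorney_general_by': datetime.date(2026, 3, 7)}
</code></pre>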
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From the Kitchen to Compliance: Why Your Turkey and Your Privacy Policy Need Fresh Dates</title>
		<link>https://www.stoelprivacyblog.com/2025/11/articles/privacy/from-the-kitchen-to-compliance/</link>
		
		<dc:creator><![CDATA[Colleen Dewhirst, Kenny Gutierrez and Susan Kimble]]></dc:creator>
		<pubDate>Wed, 26 Nov 2025 19:35:27 +0000</pubDate>
				<category><![CDATA[Privacy]]></category>
		<category><![CDATA[Cookies]]></category>
		<category><![CDATA[Privacy Policy]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4625</guid>

					<description><![CDATA[As you slowly emerge from your tryptophan coma next week, and realize that the first of December is upon us, many complex legal tasks may seem too daunting to face. Luckily, the privacy team at Stoel Rives has developed a plan to keep your privacy program running from the comfort of your post-Thanksgiving stretch pants....]]></description>
										<content:encoded><![CDATA[<p>As you slowly emerge from your tryptophan coma next week, and realize that the first of December is upon us, many complex legal tasks may seem too daunting to face. Luckily, the privacy team at Stoel Rives has developed a plan to keep your privacy program running from the comfort of your post-Thanksgiving stretch pants.</p><p><strong>Privacy Policy Review: Carving Out Time for Compliance</strong></p><p>Once you satisfy the Black Friday itch, consider carving out some time to reflect &ndash; on food, family, festivities of the season, and of course, federal (and state!) privacy requirements. Whether you&rsquo;re in healthcare, tech, finance, retail or otherwise, your privacy policies should reflect the current reality of business operations, and importantly, confirm that an annual review was performed. A &ldquo;last revised: 2020&rdquo; date on your privacy policy is a red flag for regulators &ndash; just like those leftovers on the counter, it&rsquo;s time for a refresh.</p><p>If the types of personal information you collect, your target marketing audience, actual customer numbers, or how you process personal information, and especially health-related data (e.g., protected health information), have changed this year, it might be time for a policy update. Pay attention to vendor practices too &ndash; with rapid adoption of AI, vendors and business associates alike may be changing their capabilities and data processing practices, and the (tur-)key is to know how those changes might impact your data and business operations, and update policies accordingly.</p><p><strong>Dark Meat&hellip; Dark Patterns: When Digital Design Goes Bad</strong></p><p>Just as you might be wary of overcooked dark meat at your Thanksgiving table, businesses should be equally cautious about the &ldquo;dark patterns&rdquo; in their digital interfaces. Dark patterns are deceptive tools or designs used to impair user privacy choices or manipulate user behavior &ndash; like that second serving of pumpkin pie.</p><p>These digital design techniques are deemed a dark pattern if they have the effect of substantially subverting or impairing user autonomy, decision-making, or choice &ndash; a business&rsquo;s intent is not determinative of whether the user interface is a dark pattern, but it is a factor that is considered.</p><p>Dark patterns can take many forms. A common example is a cookie banner with a bright, oversized &ldquo;ACCEPT ALL COOKIES&rdquo; button and an adjacent, neutral &ldquo;manage preferences&rdquo; button. This presents a visual cue to a user to click the conspicuous button, while not providing an equivalent button to reject &ndash; neither in language nor in the steps taken to effectuate the request. Dark patterns may also be disguised ads, difficult-to-cancel subscriptions, buried terms, tricks to obtain personal information, and more. Many state privacy laws prohibit the use of dark patterns, and the FTC continues to actively enforce regulations against them. Regulators emphasize that the use of dark patterns invalidates consumer consent. Because these practices can manipulate choices through deception or coercion, they undermine informed decision-making, rendering any consent neither voluntary nor fully informed.</p><p><strong>Cookies and Tracking Technologies: Save Room (and Attention!) 
for Dessert</strong></p><p>Here are some things to consider about your cookie policy while digesting your turkey and watching football. A cookie policy explains what cookies are, how they track your activity, and how you can control them.</p><p>Is your website using any new third-party software that may receive personally identifiable information or collect data for&nbsp;the software vendor&rsquo;s own purposes? Tracking technology may be included in software development kits (SDKs), plug-ins, or other features or functions on your website or application.</p><p>Is your consent manager accurately facilitating consent and consumer-friendly opt-out preferences before dishing out cookies? The California privacy regulatory body has held that website owners, not consent management platforms, are responsible for the proper configuration of consent mechanisms (i.e., the cookie banner).</p><p>Is your cookie policy compliant with different jurisdictional requirements?&nbsp;Obligations may vary based on location.</p><p>Whether you need a simple temperature check or a more in-depth review, the privacy team at Stoel Rives is here to help cross annual compliance updates (and lots of other tasks) off your long holiday to-do list.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Diving into Data: Managing the Legal Risks of a Data-Driven Economy</title>
		<link>https://www.stoelprivacyblog.com/2025/09/articles/ai/diving-into-data-managing-the-legal-risks-of-a-data-driven-economy/</link>
		
		<dc:creator><![CDATA[Colleen Dewhirst, Kenny Gutierrez and Tanya Huertas-Langevin]]></dc:creator>
		<pubDate>Mon, 29 Sep 2025 23:41:47 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Privacy]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4623</guid>

					<description><![CDATA[Data is fueling innovation like never before. From AI development to strategic decision-making, high-quality data is a powerful business asset. However, it also comes with significant legal considerations. Privacy laws, intellectual property rights, and ethical obligations are all evolving quickly, and businesses must stay ahead of the curve. We’ve written about three areas companies should...]]></description>
										<content:encoded><![CDATA[<p>Data is fueling innovation like never before. From AI development to strategic decision-making, high-quality data is a powerful business asset. However, it also comes with significant legal considerations. Privacy laws, intellectual property rights, and ethical obligations are all evolving quickly, and businesses must stay ahead of the curve.</p><p>We&rsquo;ve written about three areas companies should keep in mind when working with data:</p><ul class="wp-block-list">
<li><strong>Creating datasets:</strong> Privacy and ethics should guide collection. Small details can reveal personal identities, and biased or incomplete data can undermine entire AI systems.</li>



<li><strong>Licensing third-party data:</strong> Pay close attention to ownership, scope, warranties, and liability in agreements.</li>



<li><strong>Using public data:</strong> Public availability does not equal unrestricted use. Licensing terms, privacy laws, and governance protocols still apply.</li>
</ul><p>Data is a powerful driver of innovation and a potential source of significant legal exposure. Proactive governance, thoughtful contracting, and attention to ethics and privacy can help businesses unlock the value of data while minimizing the risks.</p><p><strong>Read the <a href="https://www.stoel.com/insights/publications/diving-into-data-managing-the-legal-risks-of-a-data-driven-economy">full article on stoel.com</a> for practical guidance on building a responsible approach to data.</strong></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mid-Summer Update on AI, Privacy, and Cybersecurity Developments</title>
		<link>https://www.stoelprivacyblog.com/2025/07/articles/ai/mid-summer-update-on-ai-privacy-and-cybersecurity-developments/</link>
		
		<dc:creator><![CDATA[John Pavolotsky]]></dc:creator>
		<pubDate>Thu, 31 Jul 2025 22:15:07 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Updates]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4621</guid>

					<description><![CDATA[In the world of AI, a month is an eternity. In my last article (https://www.stoelprivacyblog.com/2025/06/articles/ai/ai-legislative-developments-early-days-or-tipping-point/), just over a month ago, I wrote about the much-discussed proposed 10-year moratorium on the enforcement of state AI laws. Ultimately, the Senate voted against it, and the House passed the Senate version of the tax and spending bill. Game...]]></description>
										<content:encoded><![CDATA[<p>In the world of AI, a month is an eternity. In my last article (<a href="https://www.stoelprivacyblog.com/2025/06/articles/ai/ai-legislative-developments-early-days-or-tipping-point/">https://www.stoelprivacyblog.com/2025/06/articles/ai/ai-legislative-developments-early-days-or-tipping-point/</a>), just over a month ago, I wrote about the much-discussed proposed 10-year moratorium on the enforcement of state AI laws. Ultimately, the Senate voted against it, and the House passed the Senate version of the tax and spending bill. Game over? Perhaps not.</p><p>On July 23, 2025, the Executive Office of the President released &ldquo;Winning the Race &ndash; America&rsquo;s AI Action Plan,&rdquo; available here: <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf" target="_blank" rel="noreferrer noopener">https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf</a>. Conveniently, or perhaps not exactly so, I was scheduled to present AI legislative developments at a conference that morning. I was able to skim the plan before the presentation and promised the attendees that I would provide a more nuanced assessment during my evening panel on global AI policy and compliance.</p><p>The plan is wide-ranging and provides under the heading &ldquo;Remove Red Tape and Onerous Regulation&rdquo;: &ldquo;The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states&rsquo; rights to pass prudent laws that are not unduly restrictive to innovation.&rdquo; Policy recommendations include: &ldquo;Led by OMB, work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state&rsquo;s AI regulatory climate when making funding decisions and limit funding if the state&rsquo;s AI regulatory regimes may hinder the effectiveness of that funding or award.&rdquo; If, when, and how this will be implemented remain to be seen. Another policy recommendation is to &ldquo;[r]eview all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. Furthermore, review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set aside any that unduly burden AI innovation.&rdquo; Practically, this may be a relatively short list, if not a null set. The relatively recent <em>Workado</em> and <em>Intellivision</em> complaints and associated settlements both address alleged false and unsubstantiated claims about the efficacy of AI solutions. The <em>Rite-Aid</em> consent decree may be within scope as well, as artificial intelligence technologies undergird many biometric systems. The commissioners who voted in favor of the decree, in December 2023, are now gone.</p><p>Congress is in recess until early September. It&rsquo;s possible that the 10-year (or a shorter duration) moratorium may be reintroduced as a separate bill.</p><p>In the meantime, the states forge ahead, although only a handful of them, including California (AB 410, AB 412, AB 853, and SB 53, as examples) and New York (A06578, S06954, and S00934, to list a few bills), remain in legislative session. 
Texas Governor Greg Abbott signed into law HB 149 (the Texas Responsible Artificial Intelligence Governance Act), which takes a considerably narrower approach than both the Colorado AI Act (which, after the failure of SB 25-318, still goes into effect on February 1, 2026) and Virginia HB 2094 (which was vetoed by Governor Glenn Youngkin earlier this year).</p><p>Put simply, fragmentation continues and should be expected, at least for the near future. Couple this with extra-national concepts such as Sovereign AI, and other geopolitical factors and considerations, and we can all agree that we live in interesting times.</p><p>Of course, AI is not the only item on the radar. For example, earlier this month, with limited exception, the U.S. DOJ Provisions Pertaining to Preventing Access to U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons, codified at 28 C.F.R. Part 202 (<a href="https://www.ecfr.gov/current/title-28/chapter-I/part-202" target="_blank" rel="noreferrer noopener">https://www.ecfr.gov/current/title-28/chapter-I/part-202</a>), are now fully enforceable. The rules address prohibited transactions and restricted transactions with a country of concern or covered person and introduce considerable compliance obligations.</p><p>Thus, while July typically might be a month to put away the laptop for a bit, it seems that this July was very much an exception. Let&rsquo;s see what August brings.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Legislative Developments: Early Days or Tipping Point?  </title>
		<link>https://www.stoelprivacyblog.com/2025/06/articles/ai/ai-legislative-developments-early-days-or-tipping-point/</link>
		
		<dc:creator><![CDATA[John Pavolotsky]]></dc:creator>
		<pubDate>Fri, 27 Jun 2025 21:35:49 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4619</guid>

					<description><![CDATA[If tracking AI legislation is giving you whiplash, you’re not the only one. In February, I wrote about the 24-Hour AI News Cycle: https://www.stoelprivacyblog.com/2025/02/articles/ai/the-24-hour-ai-news-cycle-keeping-up-with-legal-and-regulatory-developments/. February is ancient history, and the AI news cycle has become even further compressed since then. Now, at the end of June, we stand at a crossroads. H.R. 1 (“One Big...]]></description>
										<content:encoded><![CDATA[<p>If tracking AI legislation is giving you whiplash, you&rsquo;re not the only one.</p><p>In February, I wrote about the 24-Hour AI News Cycle: <a href="https://www.stoelprivacyblog.com/2025/02/articles/ai/the-24-hour-ai-news-cycle-keeping-up-with-legal-and-regulatory-developments/">https://www.stoelprivacyblog.com/2025/02/articles/ai/the-24-hour-ai-news-cycle-keeping-up-with-legal-and-regulatory-developments/</a>. February is ancient history, and the AI news cycle has become even further compressed since then.</p><p>Now, at the end of June, we stand at a crossroads. H.R. 1 (&ldquo;One Big Beautiful Bill Act&rdquo;), passed by the House on May 22, 2025, contains a 10-year moratorium on the enforcement by a state or any of its political subdivisions of any laws &ldquo;limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.&rdquo;<a href="#_ftn1" id="_ftnref1">[1]</a> State attorneys general from both sides of the aisle have pushed back on the AI moratorium.<a href="#_ftn2" id="_ftnref2">[2]</a> The Senate Committee on Commerce, Science, and Transportation reconciliation text includes in one subsection (q) the moratorium and, in the immediately preceding subsection (p), a statement that none of the $500MM in BEAD (Broadband Equity, Access, and Deployment) funds &ldquo;to construct and deploy infrastructure for the provision of artificial intelligence models, artificial intelligence systems, or automated decision systems&rdquo; may be made available to an eligible (any) state or any political subdivision that does not comply with the moratorium.<a href="#_ftn3" id="_ftnref3">[3]</a> Read together, this would seem to suggest that the moratorium could affect any state, and states that do not comply with the moratorium will not receive BEAD funding. &nbsp;</p><p>Some, including certain U.S. Chamber of Commerce members, support the moratorium, as reflected in a letter dated June 9, 2025 to Senate Majority Leader Thune and Senate Minority Leader Schumer.<a href="#_ftn4" id="_ftnref4">[4]</a> Others, including Senators Cantwell (D-WA) and Blackburn (R-TN), and the attorneys general for their respective states, oppose it.<a href="#_ftn5" id="_ftnref5">[5]</a> The bill would still need to be approved by the full Senate, which could remove the moratorium via amendment, and the House would have one more bite at the apple. &nbsp;&nbsp;</p><p>The tax and spending bill may be presented to President Trump as early as next week. If a bill with the moratorium is signed into law, while laws such as the Colorado AI Act<a href="#_ftn6" id="_ftnref6">[6]</a> and others squarely focused on AI will be within scope, it is an open question as to whether general consumer protection laws and privacy laws or regulations, with a nexus to AI, will be impacted. At the end of 2024 and the beginning of 2025, the Attorneys General of Oregon and California provided guidance on the application of Oregon<a href="#_ftn7" id="_ftnref7">[7]</a> and California<a href="#_ftn8" id="_ftnref8">[8]</a> law to AI.</p><p>Meanwhile, AI bills continue to progress through state legislatures and, in some cases, such as the <em>Texas Responsible AI Governance Act</em> (HB 149)<a href="#_ftn9" id="_ftnref9">[9]</a>, get signed. 
</p><p>As intimated, the situation is fluid.</p><p>I am still tracking AI bills, and there are many, in California and elsewhere, but the moratorium cannot but enter into the analysis.</p><p>If you are apt, or trying, to take the long view, consider reading <em>Nexus: A Brief History of Information Networks from the Stone Age to AI</em> (Yuval Noah Harari). Harari is a trained historian, and if you have read any of his articles or books, you would not be surprised to find <em>Nexus </em>intellectually sprawling.</p><p>As an aside, I first ran across Harari in an article in <em>The Atlantic</em>, published in late 2018, while preparing to teach Technology Transactions Law to second- and third-year law students. The article, <em>Why Technology Favors Tyranny</em>, focused on the need to continuously reskill and upskill, and stressed the importance of critical thinking, in the new age of AI. This was roughly four years before the launch of ChatGPT, and likely some time before the term &ldquo;cognitive offloading&rdquo; entered the popular lexicon. I shared the teachings with my students, but the future predicted by Harari seemed quite distant.&nbsp;</p><p><em>Nexus </em>tackles different issues. There, Harari writes: &ldquo;My goal in this book is to provide a more accurate historical perspective on the AI revolution. This revolution is still in its infancy, and it is notoriously difficult to understand momentous developments in real time.&rdquo; (398)&nbsp; Harari stresses the importance of self-correcting mechanisms, especially when and as the network becomes more powerful. This may be especially the case with respect to AI.&nbsp; More broadly, accountability is key, but it must be operationalized thoughtfully, pragmatically, and with sufficient lead time for effective enforcement, at least based on the recent (failed) attempt to amend the Colorado AI Act, which goes into effect on February 1, 2026.<a href="#_ftn10" id="_ftnref10">[10]</a></p><p>As some know, an article on agentic AI is still in the works.&nbsp;Stay tuned.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p><a href="#_ftnref1" id="_ftn1">[1]</a> https://legiscan.com/US/bill/HB1/2025</p><p><a href="#_ftnref2" id="_ftn2">[2]</a> https://coag.gov/press-releases/attorney-general-phil-weiser-bipartisan-ag-letter-congress-artificial-intelligence-regulations-5-16-25/</p><p><a href="#_ftnref3" id="_ftn3">[3]</a> https://www.commerce.senate.gov/services/files/AD3D04CF-52B4-411F-854B-44C55ABBADDA</p><p><a href="#_ftnref4" id="_ftn4">[4]</a> https://www.uschamber.com/technology/artificial-intelligence/coalition-letter-to-the-senate-supporting-the-moratorium-on-ai-regulation-enforcement</p><p><a href="#_ftnref5" id="_ftn5">[5]</a> https://www.commerce.senate.gov/2025/6/state-attorneys-general-tell-congress-ai-moratorium-will-leave-consumers-citizens-vulnerable-to-ai-fraud-theft-other-harms</p><p><a href="#_ftnref6" id="_ftn6">[6]</a> https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf</p><p><a href="#_ftnref7" id="_ftn7">[7]</a> https://www.doj.state.or.us/wp-content/uploads/2024/12/AI-Guidance-12-24-24.pdf</p><p><a href="#_ftnref8" id="_ftn8">[8]</a> <a href="https://oag.ca.gov/system/files/attachments/press-docs/Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence.pdf">https://oag.ca.gov/system/files/attachments/press-docs/Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence.pdf</a>; <a 
href="https://oag.ca.gov/system/files/attachments/press-docs/Final%20Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence%20in%20Healthcare.pdf">https://oag.ca.gov/system/files/attachments/press-docs/Final%20Legal%20Advisory%20-%20Application%20of%20Existing%20CA%20Laws%20to%20Artificial%20Intelligence%20in%20Healthcare.pdf</a></p><p><a href="#_ftnref9" id="_ftn9">[9]</a> https://legiscan.com/TX/text/HB149/id/3249139/Texas-2025-HB149-Enrolled.html</p><p><a href="#_ftnref10" id="_ftn10">[10]</a> https://coag.gov/blog-post/attorney-general-phil-weiser-testimony-on-senate-bill-25-318-5-5-25/</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DOJ Antitrust Division’s Updated Guidance on Evaluating Corporate Compliance Programs Includes New Focus on AI and Electronic Communications</title>
		<link>https://www.stoelprivacyblog.com/2025/06/articles/ai/doj-antitrust-divisions-updated-guidance-on-evaluating-corporate-compliance-programs-includes-new-focus-on-ai-and-electronic-communications/</link>
		
		<dc:creator><![CDATA[Jenna Poligo, Matthew Segal and Wendy Olson]]></dc:creator>
		<pubDate>Wed, 25 Jun 2025 23:25:37 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[DOJ]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4616</guid>

					<description><![CDATA[As AI tools and modern communication platforms become increasingly embedded in corporate operations, the DOJ Antitrust Division has released an updated Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations, marking a significant expansion in DOJ expectations around artificial intelligence, ephemeral messaging, and data-driven monitoring. This updated guidance provides a clearer roadmap for what the...]]></description>
										<content:encoded><![CDATA[<p>As AI tools and modern communication platforms become increasingly embedded in corporate operations, the DOJ Antitrust Division has released an updated <em>Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations</em>, marking a significant expansion in DOJ expectations around artificial intelligence, ephemeral messaging, and data-driven monitoring.</p><p>This updated guidance provides a clearer roadmap for what the Division expects in an effective antitrust compliance program. It also signals that the DOJ&rsquo;s civil teams will be applying similar scrutiny when evaluating compliance in civil enforcement actions&mdash;raising the stakes for companies seeking to mitigate liability across the board.</p><p><strong>Key Takeaways</strong></p><ul class="wp-block-list">
<li><strong>AI Risk Now Front and Center</strong>: The DOJ expects companies to assess and address antitrust risks arising from AI, algorithmic pricing tools, and other emerging technologies.</li>



<li><strong>Compliance Must Keep Pace with Technology</strong>: Compliance teams should understand the technologies in use, be involved in deployment decisions, and ensure policies are updated to reflect current legal and market developments.</li>



<li><strong>Ephemeral Messaging in the Spotlight</strong>: Companies must identify and evaluate the use of non-company communication tools (like Signal or WhatsApp), establish clear policies, and define preservation requirements.</li>



<li><strong>Mid-Level Managers Matter</strong>: Compliance leadership must go beyond the C-suite&mdash;DOJ wants to see &ldquo;tone from the middle&rdquo; with managers modeling ethical behavior across the organization.</li>



<li><strong>Data Analytics and Monitoring Expectations Are Rising</strong>: The DOJ encourages the use of data tools to detect antitrust risks and asks whether compliance teams have timely access to relevant data sources.</li>



<li><strong>Application to Civil Enforcement</strong>: Civil antitrust teams will now assess compliance programs using many of the same factors as criminal prosecutors&mdash;underscoring the need for strong programs even outside of criminal investigations.</li>
</ul><p>Read <a href="https://www.stoel.com/insights/publications/doj-antitrust-divisions-updated-guidance-on-evaluating-corporate-compliance-programs-includes-new-focus-on-ai-and-electronic-communications">the full article</a> to explore what these updates mean for your company&rsquo;s antitrust compliance efforts and how to align your policies with DOJ expectations.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Utah Implements First-in-the-Nation Law Requiring Age Verification for App Store Access</title>
		<link>https://www.stoelprivacyblog.com/2025/05/articles/utah/utah-implements-first-in-the-nation-law-requiring-age-verification-for-app-store-access/</link>
		
		<dc:creator><![CDATA[Elena Miller and John Pavolotsky]]></dc:creator>
		<pubDate>Thu, 15 May 2025 19:06:52 +0000</pubDate>
				<category><![CDATA[Laws / Regulations]]></category>
		<category><![CDATA[Utah]]></category>
		<category><![CDATA[App Store Accountability Act]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4612</guid>

					<description><![CDATA[On May 7, 2025, Utah became the first U.S. state to enact a law requiring app store providers and developers to verify users&#8217; ages and obtain verifiable parental consent for minors to download apps or make in-app purchases. Senate Bill 142, the App Store Accountability Act (the “Act”), sets forth specific compliance obligations for both...]]></description>
										<content:encoded><![CDATA[<p>On May 7, 2025, Utah became the first U.S. state to enact a law requiring app store providers and developers to verify users&rsquo; ages and obtain verifiable parental consent for minors to download apps or make in-app purchases. Senate Bill 142, the <a href="https://le.utah.gov/~2025/bills/static/SB0142.html"><strong>App Store Accountability Act</strong></a> (the &ldquo;Act&rdquo;), sets forth specific compliance obligations for both developers and app store providers, including requirements around age-rating, parental disclosures, and data handling practices.</p><p>Under the Act, app store providers must request age information during account creation and link minor accounts to verified parent accounts. Developers must confirm age categories and seek parental consent before app use or in-app transactions, particularly when implementing a &ldquo;significant change&rdquo; to the app&rsquo;s content or data practices. Personal age verification data must be encrypted and used solely for purposes outlined in the legislation.</p><p>Violations may be enforced as deceptive trade practices by the Utah Attorney General, and starting December 31, 2026, parents may bring private actions for non-compliance&mdash;potentially recovering the greater of $1,000 per violation or actual damages, plus attorneys&rsquo; fees. Developers acting in good faith on app store-provided data may be shielded from liability.</p><p>As similar bills are being considered in states such as California, Kentucky, and West Virginia, app developers may need to prepare for a patchwork of state-specific requirements. Until federal standards are adopted, many may choose to align with the most stringent laws to manage compliance risk.</p><p>Read the <strong><a href="https://www.stoel.com/insights/publications/utahs-app-store-accountability-act-goes-into-effect">full article here</a></strong>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Is Changing the Risk Landscape—Is Your Insurance Keeping Up?</title>
		<link>https://www.stoelprivacyblog.com/2025/04/articles/ai/ai-is-changing-the-risk-landscape-is-your-insurance-keeping-up/</link>
		
		<dc:creator><![CDATA[Seth Row]]></dc:creator>
		<pubDate>Wed, 23 Apr 2025 18:01:18 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Insurance Coverage]]></category>
		<category><![CDATA[insurance]]></category>
		<guid isPermaLink="false">https://www.stoelprivacyblog.com/?p=4599</guid>

					<description><![CDATA[As businesses race to adopt Artificial Intelligence, the insurance industry is navigating a fast-evolving and complex web of risks. From bodily injury to securities fraud, AI is triggering claims that span nearly every corner of liability coverage. In this post, insurance litigator Seth explores the fast-moving intersection of AI and insurance—and why companies need to...]]></description>
										<content:encoded><![CDATA[<p>As businesses race to adopt Artificial Intelligence, the insurance industry is navigating a fast-evolving and complex web of risks. From bodily injury to securities fraud, AI is triggering claims that span nearly every corner of liability coverage. In this post, insurance litigator Seth Row explores the fast-moving intersection of AI and insurance&mdash;and why companies need to start paying attention now.</p><p>With over two decades in insurance litigation, Seth notes that no other technology has introduced such a wide and unpredictable range of risks. And the response from insurers? It&rsquo;s a mix of new exclusions, cautious underwriting, and emerging coverage options designed to tackle this frontier.</p><hr class="wp-block-separator has-alpha-channel-opacity"><p><strong>Key Takeaways</strong></p><ul class="wp-block-list">
<li><strong>AI-Driven Claims Are Already Here</strong>: Real-world lawsuits allege harms ranging from AI chatbots encouraging self-harm to discriminatory hiring bots and copyright misuse.</li>



<li><strong>Coverage Varies Widely</strong>: Claims are being made under general liability, D&amp;O, E&amp;O, media liability, and cyber-risk policies&mdash;sometimes successfully, sometimes not.</li>



<li><strong>Exclusions Are on the Rise</strong>: New policy language is carving out AI-related exposures, including broadly worded exclusions for generative AI content and AI-assisted decision-making.</li>



<li><strong>Some Insurers Are Leaning In</strong>: Select carriers are offering AI-specific E&amp;O endorsements, including coverage for data misuse and &ldquo;data poisoning&rdquo; in machine learning.</li>



<li><strong>Risk Management Must Be Proactive</strong>: Companies should integrate legal and risk teams early, audit contracts with AI vendors, and stay informed on evolving governance frameworks like NIST and NYC&rsquo;s Local Law 144.</li>
</ul><hr class="wp-block-separator has-alpha-channel-opacity"><p><strong><a href="https://www.stoel.com/insights/publications/ai-and-insurance-the-awkward-early-days">Read the full article</a></strong> to better understand the shifting contours of insurance coverage in the age of AI&mdash;and what your company can do to manage these emerging risks.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
