<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Legal Technology Archives - Slaw</title>
	<atom:link href="https://www.slaw.ca/category/columns/legal-tech/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.slaw.ca/category/columns/legal-tech/</link>
	<description>Canada's online legal magazine</description>
	<lastBuildDate>Fri, 03 Apr 2026 17:34:36 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>The AI Future of Law Is Already Here — It&#8217;s Just Not Evenly Distributed</title>
		<link>https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/</link>
					<comments>https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/#respond</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 11:00:24 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109433</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Michael Geist had a lawyer on his Law Bytes podcast recently to talk about how AI is radically transforming his practice. For this long-time listener of one of the best law podcasts out there, the <a href="https://mgeist.substack.com/p/the-law-bytes-podcast-episode-262">episode</a> with New York lawyer Zack Shapiro was among the two or three most interesting and informative episodes I think Geist has ever done.</p>
<p>As someone who follows developments in legal AI closely, I found Shapiro’s insights into how to make the best use of AI outstanding. This is an episode that anyone interested in where law is headed — and concerned with not being  . . .  <a href="https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/">The AI Future of Law Is Already Here — It&#8217;s Just Not Evenly Distributed</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Michael Geist had a lawyer on his Law Bytes podcast recently to talk about how AI is radically transforming his practice. For this long-time listener of one of the best law podcasts out there, the <a href="https://mgeist.substack.com/p/the-law-bytes-podcast-episode-262">episode</a> with New York lawyer Zack Shapiro was among the two or three most interesting and informative episodes I think Geist has ever done.</p>
<p>As someone who follows developments in legal AI closely, I found Shapiro’s insights into how to make the best use of AI outstanding. This is an episode that anyone interested in where law is headed — and concerned with not being left behind — can’t afford to miss.</p>
<p>Shapiro had three core insights to impart that, when combined, give you, in his words, “superpowers of the kind we have not seen in the law yet.” And he paints a vivid picture of how these powers have transformed his practice at a small transactional <a href="https://rains.law/about">firm</a> focusing on tech startups and investors. (Shapiro has conveyed some of these ideas in a <a href="https://x.com/zackbshapiro/status/2031717962948690355">few</a> <a href="https://x.com/zackbshapiro/article/2036791156915290271">essays</a> that have gone viral on X.)</p>
<p>I will briefly sketch his insights, which I think are entirely valid and empowering. But my larger aim here is to make the point that, as exciting and inspiring as this glimpse into the AI future may be, it’s a partial vision. It leaves a lot out.</p>
<p>It’s not unlike a news clip of a crowd protesting in front of city hall, which looks large close-up, but, as the camera pans out, is revealed to be not that big.</p>
<p>To be clear, the buzz and excitement here are real. Some superpowers are now attainable. But much of this is localized. It’s clustered in <em>some</em> areas of practice, for <em>some</em> things.</p>
<p>First, the good news.</p>
<h2>Shapiro’s three insights</h2>
<h3><em>Stop using bespoke legal AI</em></h3>
<p>In the past three months, he says, we’ve reached a tipping point. Off-the-shelf frontier models like Claude and ChatGPT are so good that it’s time to stop using AI tools tailored for law, such as Protégé, Westlaw, Harvey, and so on.</p>
<p>Shapiro sees these as unnecessary “wrappers” around a language model, with buttons and controls meant to assist you but that only get in the way. Remove all the clutter and learn to work with the best models directly.</p>
<p>Why? Because there’s something you need to be doing — constantly — that Harvey and Westlaw won’t let you do. You want to be using AI to teach you how to use it better, and you want to build a library of background instructions that run with every prompt, so as to make your output ever more responsive and accurate, and to automate more steps in your process.</p>
<h3><em>Use AI to create a virtuous cycle of greater efficiency</em></h3>
<p>Shapiro singles out a key feature of Claude that I don’t believe any other AI provider has a true equivalent of yet. For a while now, all the big platforms (ChatGPT, Gemini, etc.) have allowed you to upload sources to the model and to create ‘projects’ that can have specific instructions for all chats within them.</p>
<p>Claude has something unique called ‘skills,’ which are files you can create that contain plain-language instructions for how you would like certain things to be done or things Claude should keep in mind with every prompt (your style, voice, formatting preferences, etc). They can also contain code for complex operations, like opening a Word file, making requested changes, and creating a new file.</p>
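<p>To make this concrete, here is a minimal sketch of what such a skill file might look like: a small plain-text file with a short metadata header followed by instructions in ordinary prose. The file name, fields, and every instruction below are invented for illustration rather than drawn from Shapiro&#8217;s actual setup:</p>
<pre>
---
name: contract-drafting-style
description: House drafting conventions to apply whenever Claude
  generates or revises a contract for this firm.
---

# Drafting conventions (hypothetical example)

- Use defined terms consistently and capitalize them throughout.
- Flag any indemnity or limitation-of-liability clause for partner review.
- Match the tone and structure of the firm's sample agreements.
- Return revised documents as .docx files with changes tracked.
</pre>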
<p>How does this help?</p>
<p>You can get Claude itself to make skills for you, critique the ones you’ve made, amend them, troubleshoot them, and so forth. You can also use them for more technical things, like moving files in a directory, comparing documents, etc. It’s not clear that other models can do this yet.</p>
<p>Shapiro describes a process in recent months of working with Claude to build and refine an elaborate set of skills that have automated stages in the creation, review, and revision of complex contracts using a host of special instructions and technical shortcuts. What used to take him four or five hours to generate a draft of a contract now takes minutes.</p>
<p>The quality of the output has only been improving, he says, as he harnesses Claude to make and revise skills, based on what has worked well and what hasn’t.</p>
<h3><em>Your prompts need to be longer, much longer</em></h3>
<p>The point is often made that the quality of output with AI depends on the length and specificity of your prompt. Shapiro is emphatic on this point. He says that the average length of his prompts is 2,000 words. In a Word doc, that’s roughly 8 pages double-spaced.</p>
<p>You need to load your prompts with detail. Everything should be in there: your client’s multifarious concerns, what the other side is likely to accept or not accept and other sticking points, prevailing law, key terms you think should be included, and so on.</p>
<p>Shapiro makes the general point that if you take the time to craft prompts that are extensive and nuanced enough, the quality of the output can and often will be comparable to what a mid-level or senior associate would produce. And once you begin working with a sufficiently large library of skills — or other background instructions — you’ll be producing at a pace never before possible, with no discernible loss in the quality of your work.</p>
<h2>What this picture leaves out</h2>
<p>I’m all over Claude and building a library of skills to generate a recursive cycle of greater efficiency.</p>
<p>But outside of a practice focused mainly on writing contracts and a few other areas, even the most bullish embrace of AI will result in productivity gains that are far less dramatic.</p>
<p>The fact is that for many other forms of practice — for lawyers who do research-heavy work, who spend most of their time negotiating, or who litigate in certain areas — AI will play a more limited, often peripheral role.</p>
<p>It was telling, I thought, that Shapiro gave only two concrete examples of how he uses AI in his practice: to generate contracts and to write opinion letters. (Near the end of the episode, he also described how he uses AI to help draft social media posts.)</p>
<p>I’m in touch with a litigation lawyer in BC who is experiencing something similar to Shapiro’s miraculous epiphany. He reports saving countless hours in recent months using AI to review documents in his employment practice, to generate demand and opinion letters, and to draft settlements. He also notes that his prompts tend to be extensive, in many cases well over a thousand words — and that he often gets AI to help him formulate better prompts.</p>
<p>So there is certainly an argument to be made that Shapiro’s insights apply to litigation and can lead to a significant boost in productivity.</p>
<p>But not in every kind of litigation, and not in every kind of law.</p>
<p>Over here in criminal law land, its use has been far more limited. We get tons of disclosure from the Crown, lengthy documents, often terabytes of data on whole hard drives. We grapple with whether we can upload these documents to any language model hosted in the cloud, bespoke or otherwise, given the <a href="https://www.nationalmagazine.ca/en-ca/articles/law/opinion/2025/why-some-lawyers-are-turning-off-the-internet-to-use-ai">uncertainty</a> around privacy. And no one is sure whether sensitive client information is safe with AI.</p>
<p>The larger narrative in litigation is that lawyers’ reliance on AI to draft court submissions has been <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/">a disaster</a>. I don’t think it’s well suited to writing submissions — or opinion letters for that matter — even if a lawyer confirms that the law is summarized accurately. These are things, <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/">I’ve argued</a>, that lawyers need to do on their own, to ensure they’re done competently.</p>
<p>And although I’m seeing progress in using AI to assist with legal research — including the possibility of using Claude to <a href="https://www.nationalmagazine.ca/en-ca/articles/law/opinion/2026/a_breakthrough_in_legal_research">look things up for you on CanLII</a> — the efficiency gains there, while significant, are not as striking as in Shapiro’s case.</p>
<p>And of course, for lawyers who spend a good portion of their time dealing with clients and negotiating with opposing counsel or appearing in court, AI will not bring about a radical transformation of their practice.</p>
<p>The AI revolution is coming, but it will be jagged and uneven. Some parts of the future will look distinctly like the past.</p>
<p>The post <a href="https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/">The AI Future of Law Is Already Here — It&#8217;s Just Not Evenly Distributed</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/04/06/the-ai-future-of-law-is-already-here-its-just-not-evenly-distributed/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI and the Diffusion of Responsibility: Dispatches From the Road</title>
		<link>https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/</link>
					<comments>https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/#comments</comments>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Wed, 18 Mar 2026 11:00:10 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109294</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Over the past several months, I have had the opportunity to speak with leaders across a range of sectors about artificial intelligence. These conversations have taken place in boardrooms, universities, professional development seminars, and informal gatherings following presentations. The contexts vary and the industries differ, however a common pattern has begun to emerge.</p>
<p>The organizations I encounter are not dismissive of AI. Quite the opposite. Most are experimenting with generative tools, reviewing internal processes, or considering policy development. Many have established working groups. Some have launched pilot projects. Others are waiting for clearer regulatory direction before moving further. At first  . . .  <a href="https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/">AI and the Diffusion of Responsibility: Dispatches From the Road</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Over the past several months, I have had the opportunity to speak with leaders across a range of sectors about artificial intelligence. These conversations have taken place in boardrooms, universities, professional development seminars, and informal gatherings following presentations. The contexts vary and the industries differ, however a common pattern has begun to emerge.</p>
<p>The organizations I encounter are not dismissive of AI. Quite the opposite. Most are experimenting with generative tools, reviewing internal processes, or considering policy development. Many have established working groups. Some have launched pilot projects. Others are waiting for clearer regulatory direction before moving further. At first glance, the tone is thoughtful and measured.</p>
<p>Beneath that surface, however, a more subtle but significant governance issue is taking shape.</p>
<p>In this column, I want to discuss three of the most common responses I am hearing from senior managers, lawyers, and executives when discussions turn to responsibility for AI risk. Responses to questions about responsibility for AI initiatives or risk management often take a familiar form: “We have a committee.” “IT says it’s fine.” “We trust our people to use it responsibly.” Each of these statements is reasonable in isolation and signals that attention is being paid. Taken together, however, they reveal a more concerning pattern: the diffusion of responsibility across structures, departments, and organizational culture.</p>
<p>AI governance is uniquely prone to this problem. Unlike traditional technology deployments, AI systems sit at the intersection of technical infrastructure, professional judgment, regulatory exposure, and institutional strategy. When accountability is distributed thinly across committees, delegated entirely to technical teams, or left to individual discretion, no single actor retains clear ownership of the risk.</p>
<p>This column is not about fault-finding. The individuals involved in these conversations are uniformly thoughtful and well-intentioned, and the issue is structural rather than personal. As AI tools become more deeply embedded in everyday workflows, however, structural ambiguity around responsibility is becoming a material governance risk.</p>
<p>In what follows, I examine three statements I continue to hear “from the road” and what they reveal about the current state of AI oversight. The objective is not to criticize but to clarify. In the context of artificial intelligence, clarity of responsibility may be among the most important governance tasks ahead.</p>
<h2>“We Have a Committee.”</h2>
<p>In many of the organizations I encounter, the first response to questions about AI oversight is reassuring: “We have a committee.” Often this committee is cross-functional and includes representatives from IT, legal, compliance, operations, and senior management. It meets periodically, monitors developments, and in some cases is tasked with drafting policy.</p>
<p>At first glance, this appears to be an appropriate institutional response. Artificial intelligence is a cross-cutting issue that touches infrastructure, professional standards, privacy law, human resources, procurement, and strategy. A cross-functional body reflects the reality that no single department can address these issues in isolation.</p>
<p>Committees are also not inherently flawed. They are frequently composed of thoughtful and capable professionals who are attempting to approach a complex issue carefully. In many organizations, the formation of a committee signals that leadership recognizes AI as something that warrants structured attention rather than informal experimentation. The challenge lies in the nature of committees themselves. They are designed to deliberate, to gather information, and to provide recommendations. They are not typically designed to assume concentrated risk ownership.</p>
<p>In practice, committee members usually carry full portfolios. AI oversight becomes one item among many. Meetings are periodic and mandates are often exploratory rather than executive. Recommendations may be developed, but ultimate accountability can remain unclear. When responsibility is shared across a group, clarity about who ultimately owns the consequences of a decision can diminish.</p>
<p>There is also a practical reality that should be acknowledged. Artificial intelligence is technically complex and rapidly evolving. Even experienced professionals may not have the time required to develop sustained, specialized literacy in the tools under discussion. Without dedicated authority, expertise, and resourcing, committees can become monitoring bodies rather than governance mechanisms.</p>
<p>At the same time, AI deployment is no longer theoretical. Generative tools are already embedded in everyday workflows, sometimes formally approved and sometimes adopted informally by staff seeking efficiency. When technology moves faster than governance structures, an exploratory committee model may prove insufficient.</p>
<p>Cross-functional dialogue remains essential. However, dialogue alone does not constitute accountability. Effective AI oversight requires clarity about who is responsible for risk assessment, policy approval, escalation decisions, and ongoing monitoring. Absent that clarity, the reassuring statement “we have a committee” may mask a more difficult question about ownership.</p>
<h2>“IT Says It’s Fine.”</h2>
<p>Another response I frequently hear, particularly in public sector and government contexts, is this: “IT says it’s fine.”</p>
<p>This response is understandable. Information technology departments play an essential role in evaluating software tools. They assess cybersecurity vulnerabilities, data storage architecture, vendor compliance, and integration with existing systems. In many organizations, IT teams are the first line of defence against technical instability and data breaches, and their expertise is indispensable.</p>
<p>The difficulty arises when technical clearance is treated as synonymous with overall approval.</p>
<p>IT departments typically manage technical risk, including whether a system is secure, compatible, and operationally stable. Artificial intelligence, however, introduces a broader range of concerns that extend beyond infrastructure. AI systems can affect professional obligations, regulatory exposure, fiduciary duties, human rights considerations, reputational risk, and the integrity of institutional decision-making. These are governance questions rather than purely technical ones.</p>
<p>In regulated professions such as law or medicine, individual practitioners carry independent duties that no technical clearance can discharge. A tool may be secure from a cybersecurity perspective and yet still generate inaccurate outputs, embed bias, or encourage overreliance in ways that create professional liability. Technical approval does not resolve questions about appropriate use, supervision, documentation, or compliance with professional standards.</p>
<p>This observation is not a criticism of IT teams. It is a clarification of institutional roles. Expecting technical departments to assume responsibility for enterprise-wide ethical and regulatory risk places them in a position that extends beyond their mandate. It may also allow senior leadership to conclude that oversight has been achieved when, in reality, only one dimension of risk has been addressed.</p>
<p>AI governance requires coordination among technical expertise, legal analysis, operational leadership, and strategic oversight. When the phrase “IT says it’s fine” becomes the end of the conversation rather than the beginning of a broader assessment, responsibility is once again dispersed rather than clearly assigned.</p>
<h2>“We Trust Our People to Use It Responsibly.”</h2>
<p>A third response I often hear is more values-oriented: “We trust our people to use it responsibly.”</p>
<p>This statement reflects confidence in professional judgment and organizational culture. Institutions depend on individuals exercising discretion and acting in good faith, and in many contexts that trust is warranted.</p>
<p>Trust alone, however, does not amount to a governance framework.</p>
<p>Artificial intelligence tools differ from many technologies that preceded them. They do not merely transmit information. They generate it. They summarize, interpret, draft, and recommend. In doing so, they may also fabricate, distort, or oversimplify. Their outputs can appear authoritative even when they are incorrect. This combination of fluency and fallibility creates a distinctive risk profile.</p>
<p>Where organizations rely primarily on individual discretion without articulated policy guidance, training, and oversight, responsibility shifts downward in subtle ways. Professionals are left to determine for themselves when AI use is appropriate, how outputs should be verified, what documentation is required, and how client or stakeholder interests may be affected. Practices can become inconsistent, and risk tolerance may vary across departments or individuals.</p>
<p>If an error occurs, the absence of clear institutional guardrails can produce further ambiguity regarding responsibility. Without defined expectations, it may be difficult to determine whether a failure reflects individual judgment or structural oversight.</p>
<p>Trust remains an essential organizational value. It is strengthened, rather than diminished, by clear parameters, defined accountability, appropriate training, and ongoing monitoring. Without those elements, reliance on individual discretion may again reflect diffusion rather than ownership.</p>
<h2>Why AI Is Especially Prone to Diffusion</h2>
<p>Taken individually, each of these responses is understandable. Committees promote collaboration. IT departments safeguard infrastructure. Trust reflects institutional confidence. The difficulty emerges when these mechanisms are treated as complete.</p>
<p>Artificial intelligence occupies an unusual position within institutions. It depends on technical infrastructure, engages legal and regulatory exposure, shapes operational workflows, and influences strategic direction. Because it sits at the intersection of so many functions, it can easily fall between them.</p>
<p>Committees discuss it. IT evaluates it. Professionals use it. Legal teams review it when prompted. Risk managers may include it within broader enterprise risk frameworks, and boards may receive periodic updates. Yet in many organizations there is no clearly designated owner of AI risk as such. Responsibility is distributed, but ultimate accountability remains indistinct.</p>
<p>Enterprise risk management frameworks are designed for issues that cut across silos. They require identification of risk owners, articulation of risk appetite, defined escalation pathways, and ongoing monitoring. Artificial intelligence fits squarely within that category. Treating it as a temporary project or purely technical deployment risks underestimating its institutional impact.</p>
<p>Where no one clearly owns AI risk, many may participate in it, yet no single actor remains accountable for its consequences. That dynamic reflects the essence of diffusion of responsibility.</p>
<h2>Conclusion</h2>
<p>Artificial intelligence is advancing through institutions at a pace that challenges traditional governance structures. Its adoption is rarely reckless. More often, it is incremental and pragmatic. Tools are introduced to increase efficiency. Staff experiment to improve workflows. Committees monitor developments. Technical teams evaluate vendors. Professionals exercise judgment.</p>
<p>When responsibility is dispersed across structures, functions, and culture, however, clarity can erode. Oversight may appear present while ownership remains indistinct.</p>
<p>AI systems influence outputs, shape decisions, and generate content that may carry legal, professional, or reputational consequences. In a regulatory environment that continues to evolve and where enforcement bodies are interpreting existing legal frameworks in new ways, institutions cannot rely on ambiguity as a safeguard.</p>
<p>Governance requires definition. It requires clear assignment of responsibility, defined escalation pathways, and articulated expectations for use. These mechanisms provide the foundation for sustainable innovation.</p>
<p>The statements examined here reflect common and understandable institutional instincts. Collaboration, deference to expertise, and confidence in professional judgment each have value. None, however, replaces the need for clearly defined ownership of AI risk within the organization.</p>
<p>As AI becomes embedded in everyday practice, thoughtful adoption will matter less than clear accountability. Institutions that define ownership early will be better positioned than those that later discover that responsibility was distributed broadly but held nowhere in particular.</p>
<p><em>Note: Generative AI was used in the preparation of this article.</em></p>
<p>The post <a href="https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/">AI and the Diffusion of Responsibility: Dispatches From the Road</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/03/18/ai-and-the-diffusion-of-responsibility-dispatches-from-the-road/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Keeping Hold of the Reins When Using AI</title>
		<link>https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/</link>
					<comments>https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/#respond</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 12:00:10 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109268</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">What many of us in law, legal education, and other fields still want to know at this point is: what is AI really good for? What does it do reliably well and better than we could do on our own? And when we use it for those purposes, what risks do we take on?</p>
<p>In the early days of ChatGPT, those risks were clear. AI hallucinated authorities and generated biased output grounded in its training data. But as models have improved and we’ve learned to guard against these problems, those concerns have become more manageable.</p>
<p>A different and more subtle  . . .  <a href="https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/">Keeping Hold of the Reins When Using AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">What many of us in law, legal education, and other fields still want to know at this point is: what is AI really good for? What does it do reliably well and better than we could do on our own? And when we use it for those purposes, what risks do we take on?</p>
<p>In the early days of ChatGPT, those risks were clear. AI hallucinated authorities and generated biased output grounded in its training data. But as models have improved and we’ve learned to guard against these problems, those concerns have become more manageable.</p>
<p>A different and more subtle issue has now come into view.</p>
<p>Now that we’ve discovered some of the things AI is good at — supporting research, drafting, and editing — the main concern is not just whether its output is accurate, but when effective use of the tool crosses the line into harmful over-reliance.</p>
<p>When a lawyer or a self-represented litigant cites cases that don’t exist, they aren’t over-relying on AI. They’re misusing it. Over-reliance entails something else. It overlaps with automation bias — the tendency to defer uncritically to a system’s output — but is not reducible to it.</p>
<p>We over-rely on AI not just when we accept its output as true without question, but when we allow it to perform work we shouldn’t be delegating to it at all — even if it’s work that AI can do well.</p>
<p>But <em>precisely</em> <em>what</em> should we not be delegating to AI? Here, we’re in new terrain.</p>
<p>For certain forms of writing — a personal email, an essay, a court decision — most of us have a strong intuition that relying on AI to do the drafting is wrong, even if the result is fluent and technically sound. These forms of writing are tied to deep-seated ideas about identity and reflection. Automated prose, however polished, leaves us cold. It may be correct but it’s inhuman.</p>
<p>Yet in many cases there’s nothing wrong with relying on AI. Using it to transcribe an interview or summarize a case on CanLII to decide whether it’s worth reading closely can sometimes feel magical.</p>
<p>The trouble that many experienced users of AI are now encountering is that as these tools become more capable and we become more adept at using them, it becomes easier to slide into patterns of increasing delegation. And the more we do so, the more AI begins to encroach on the critical things we should be doing ourselves.</p>
<p>It becomes tempting, for example, in the course of a chat with AI to let it carry you from a brainstorm to an outline to a first draft, because it all happens so fast. The model can seem uncannily in sync with where you want to go. Its replies often end with suggestions for next steps, making it feel as though the system is always a step or two ahead of you. It can be hard to resist letting it take the lead.</p>
<p>Increasingly, in my conversations with colleagues about AI, the question is not what creative uses they are making of it, but what limits they are drawing around its use.</p>
<h2>Clear lines to be drawn</h2>
<p>Institutions are grappling with this problem by drawing formal lines. Newspapers, universities, and courts have adopted policies specifying when and how AI may or may not be used. The aim is not to ban these tools but to foster responsible and accountable use.</p>
<p>Should we, as individual users, do the same — commit to rules of thumb in advance?</p>
<p>We do this with other technology, from cell phones to social media. Taking a principled approach to using AI can help us avoid discovering, only after the fact, that we’ve over-relied on it — when we can no longer unsee an outline or draft a model has placed before us that now guides our thinking and crowds out other ideas we might have explored.</p>
<p>My rules won’t be the same as yours. They depend on the kind of work you do. I do mostly academic and journalistic writing. Different considerations apply in teaching and in the practice of law.</p>
<p>I&#8217;ll share a few of the rules I’m trying to follow, but preface them by articulating the overarching theme: make wide and varied use of AI, but use it with restraint and self-awareness.</p>
<h2>My own rules of thumb</h2>
<p>For research, I use tools like ChatGPT and Perplexity to gather and briefly summarize sources — but no more than that. I don’t want to rely on AI to interrogate those sources; I’d rather delve into them directly myself.</p>
<p>When it comes to writing and editing, I try to be even more cautious. AI is exceedingly good at producing outlines or supporting arguments. For that very reason, I avoid it at this stage. I would rather have the structure of a piece emerge organically from my own thinking, even if that process is slower or less efficient.</p>
<p>I might present my own outline to a language model and ask for further ideas or angles I may have missed. But I want to do the hardest part myself: shaping the argument.</p>
<p>Using AI to generate a first draft of anything but the most routine writing, such as a brief factual summary or a short email pitch, doesn’t work for the kind of writing I do. This is partly because it risks passing off AI-generated prose as my own expression, which is not what readers expect. More crucially, it allows the model to pre-empt my own voice. And discovering what I want to say is a big part of why I write in the first place.</p>
<p>When I edit with a language model, I ask it to do so “lightly rather than aggressively.” I want suggestions for how to improve a draft, tighten the odd sentence, or catch typos. I don’t want it to transform my writing into something that no longer sounds like me.</p>
<h2>Lawyers and litigants</h2>
<p>Some <a href="https://www.canadianlawyermag.com/resources/legal-technology/chatgpt-for-lawyers-how-ai-is-reshaping-legal-work-in-canada/393032">lawyers</a> are comfortable using language models to “generate first drafts of contracts, pleadings, memos, and correspondence.” As <a href="https://www.nationalmagazine.ca/en-ca/articles/legal-market/legal-tech/2025/women-less-likely-to-use-ai-in-practice">one lawyer explains</a>, when drafting, she will give the model “samples of my work and then my ideas. That way, the first draft is a lot further along than if I just gave it generic instructions.”</p>
<p>Even the Commissioner for Federal Judicial Affairs <a href="https://www.fja.gc.ca/COVID-19/Use-of-AI-by-Court-Users-Utilisation-de-lIA-par-les-usagers-des-tribunaux-eng.html">contemplates</a> using AI to draft submissions, and one access-to-justice group <a href="https://mbaccesstojustice.ca/using-ai-to-draft-pleadings-balancing-access-to-justice-with-civil-procedure/">touts AI’s value</a> in helping to draft pleadings.</p>
<p>Much of this may be fine if lawyers “<a href="https://www.canadianlawyermag.com/resources/legal-technology/chatgpt-for-lawyers-how-ai-is-reshaping-legal-work-in-canada/393032">supervise and review</a> all outputs generated by AI”. But I would single out using AI to draft court submissions. Even if they’re reviewed for accuracy, I don’t think relying on AI here is appropriate. Doing so, <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/">I believe</a>, runs a real risk of breaching a duty of competence, given the professional judgment needed here to make choices about relevance, tone, and strategy — judgment that AI can’t replace.</p>
<p>Self-reps are <a href="https://www.slaw.ca/2025/08/28/generative-ai-and-self-represented-litigants-in-canada-what-we-know-and-where-to-go/">another kettle of fish</a> altogether.</p>
<h2>But what’s the point?</h2>
<p>So then, why bother using AI if it can so readily do more harm than good?</p>
<p>Because even when used cautiously, it’s still enormously helpful.</p>
<p>Even if I limit my use of AI in research to gathering and briefly summarizing sources on the open web, it is still a quantum leap more powerful than doing searches on Google or dedicated databases. Many of the sources surfaced in a search using ChatGPT or Perplexity will be unhelpful. But more often than not, one or two will contain a wealth of relevant material (details, footnotes) that map out the lay of the land so that I can choose where to go from there.</p>
<p>Language models may be over-eager writing assistants that need to be closely supervised. But for light editing or feedback on a draft, they can be indispensable. When I use AI in this way, it feels less like a replacement for my thinking and more like a demanding but helpful reader.</p>
<h2>Easy for you to say</h2>
<p>Will it be easy for people with weaker writing skills to use AI with restraint? Probably not.</p>
<p>In law school, undergrad, or high school, the temptation to rely on AI to summarize readings or complete assignments students should do themselves is obvious. It poses a real threat to their development.</p>
<p>Does my enthusiasm for AI rest on the fact that I have decades of reading and writing in law behind me? Is it easy for me to urge restraint because I already possess the skills that AI threatens to displace?</p>
<p>Perhaps.</p>
<p>But this, I think, is where we all now find ourselves, regardless of experience. AI has made everything from learning to research and writing both easier and harder. If you want to learn without being hindered by AI, you’ll need to learn restraint. And if you want to go on writing in your own voice, you’ll need to do the same.</p>
<p>I’m still using AI a lot. But I’m trying to use it cautiously and deliberately, with an eye to what I’m gaining and what I may be giving up. I’m trying to remind myself constantly: keep hold of the reins!</p>
<p>The post <a href="https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/">Keeping Hold of the Reins When Using AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/03/02/keeping-hold-of-the-reins-when-using-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hallucinated References, Government Reports, and Managing Your Citations</title>
		<link>https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/</link>
					<comments>https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/#respond</comments>
		
		<dc:creator><![CDATA[Sarah A. Sutherland]]></dc:creator>
		<pubDate>Mon, 02 Feb 2026 12:00:26 +0000</pubDate>
				<category><![CDATA[Legal Ethics]]></category>
		<category><![CDATA[Legal Information]]></category>
		<category><![CDATA[Legal Publishing]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109041</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Given the high value placed on research excellence by legal professionals and consultants, I am surprised that stories continue to be reported about the lack of rigour exercised in the creation of work product by these professional groups. In addition to the ongoing stories of professional sanctions placed on lawyers for including incorrect citations and other issues associated with the use of generative AI, there have been regular stories about the high values for government report contracts and the use of AI to create them. Here are some articles on a report prepared by Deloitte for the Province of Newfoundland  . . .  <a href="https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/">Hallucinated References, Government Reports, and Managing Your Citations</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Given the high value placed on research excellence by legal professionals and consultants, I am surprised that stories continue to be reported about the lack of rigour exercised in the creation of work product by these professional groups. In addition to the ongoing stories of professional sanctions placed on lawyers for including incorrect citations and other issues associated with the use of generative AI, there have been regular stories about the high values for government report contracts and the use of AI to create them. Here are some articles on a report prepared by Deloitte for the Province of Newfoundland and Labrador on health-care worker staffing that was prepared for the price of $1.6 million:</p>
<ul>
<li>Matt Barter, &#8220;<a href="https://mattbarter.ca/2025/11/19/gov-nl-spent-over-1-5-million-on-health-human-resource-plan/">GOV NL Spent Over $1.5 Million on Health Human Resource Plan</a>,&#8221; via his blog.</li>
<li>Justin Brake, &#8220;<a href="https://theindependent.ca/news/lji/major-n-l-healthcare-report-contains-errors-likely-generated-by-a-i/">Major N.L. healthcare report contains errors likely generated by A.I.</a>,&#8221; via <em>The Independent</em>.</li>
<li>Justin Brake, &#8220;<a href="https://www.ctvnews.ca/canada/newfoundland-and-labrador/article/nl-government-pledges-strict-review-on-ai-use-after-more-false-citations-found-in-reports/">N.L. government pledges ‘strict review’ on AI use after more false citations found in reports</a>,&#8221; via <em>The Independent</em>.</li>
<li>Garrett Barry, &#8220;<a href="https://www.ctvnews.ca/canada/newfoundland-and-labrador/article/nl-government-pledges-strict-review-on-ai-use-after-more-false-citations-found-in-reports/">Deloitte breaks silence on N.L. healthcare report</a>,&#8221; via CTV.</li>
</ul>
<p>The CTV article includes the following:</p>
<p style="padding-left: 40px;"><em>&#8220;AI was not used to write the report; it was selectively used to support a small number of research citations,&#8221; a spokesperson for Deloitte said in a statement. &#8220;We are revising the report to make a small number of citation corrections, which do not impact the report findings.&#8221;</em></p>
<p>I notice that the quoted text uses the Unicode character &#8220;narrow no-break space&#8221; with the code &lt;0x202f&gt; on either side of the word &#8220;write&#8221; instead of a regular space. This character, like the em-dash, has been found to be a common artifact of generative AI systems, and, as there is no apparent reason for it to be used there, I infer that a similar system was used in the drafting of the statement (to see this for yourself, you can copy and paste the section from the site into a text editor such as NotePad or TextEdit). In typesetting, &lt;0x202f&gt; allows for control over how text is displayed: it shows a space while preventing applications like web browsers from running the text to a new line at that spot. It is used, for example, before a colon in French. I interpret this to mean that generative AI is used in a widespread way at Deloitte, <a href="https://www.deloitte.com/ca/en/services/consulting/case-studies/generative-ai-this-changes-everything.html">which is consistent with how they discuss their work processes in the marketing material on their website</a>.</p>
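<p>For readers who would rather check a passage programmatically than paste it into a text editor, a short script can flag these characters. The following is a minimal sketch in Python; the sample string is a reconstruction for illustration, not the exact bytes of the Deloitte statement:</p>
<pre>
# Flag typographic characters that often survive copy-paste from
# generative AI output, such as the narrow no-break space (U+202F).
SUSPECT_CHARS = {
    "\u202f": "NARROW NO-BREAK SPACE",
    "\u00a0": "NO-BREAK SPACE",
    "\u2014": "EM DASH",
}

def find_suspect_chars(text):
    """Return (index, name) pairs for each suspect character in text."""
    return [(i, SUSPECT_CHARS[ch]) for i, ch in enumerate(text)
            if ch in SUSPECT_CHARS]

# Hypothetical sample with U+202F on either side of the word "write":
passage = "AI was not used to\u202fwrite\u202fthe report"
for index, name in find_suspect_chars(passage):
    print(f"{name} at index {index}")
</pre>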
<p>Generative AI systems were trained on human-written content, and both &lt;0x202f&gt; and em-dashes are regularly used in text whether written by humans or machines, which is why these systems insert them; I don&#8217;t mean to imply that it is inappropriate to use them. I recognize that generative AI can be useful, but to me the particular type of situation outlined in the news stories linked above indicates a lack of interest in important parts of researching and writing non-fiction (using generative AI in writing fiction is another discussion). This is concerning, as it suggests the authors followed one of two processes: either they wrote something and then used an AI system to generate citations retroactively, or they revised their citations using an AI system. Either way, this implicitly says that the body of the text is important but the citations matter less, which is a problematic perspective for research integrity.</p>
<p>At the risk of mounting one of my hobby horses, citation practice is a core component of writing substantive content like this. It is not put there to be decorative or to give a document the appearance of gravitas. In <em>The Independent</em>, <a href="https://theindependent.ca/news/lji/deloitte-breaks-silence-on-n-l-healthcare-report/">Justin Brake reported</a> that &#8220;Those citations reference research articles which don’t exist but were used to support claims related to virtual care, monetary recruitment and retention incentives, recruitment strategies, and impacts of the COVID-19 pandemic on healthcare workers. In at least two cases, the citations also named actual researchers who did not author the fabricated articles.&#8221; Though it is time-consuming to verify whether citations exist, it is more so to verify whether the referenced material actually says what it is asserted to say, especially when a system has been used that is designed to produce text that merely seems like the kind of thing a real source would say.</p>
<p>One of my foundational professional memories is teaching a class on research to first-year students at the University of British Columbia Library when I was still a student and having one of the students respond on the evaluation form that the most important thing they learned in the session was &#8220;that works cited has a real purpose.&#8221; When she read it, my supervisor looked me in the eyes very intently, asking: &#8220;What <em>exactly</em> did you say?&#8221; My answer, as I recall, was that it places your writing into a wider dialogue with what others have written, gives credit to others for their ideas, and increases your credibility by showing that you have researched the topic and are not simply writing your own thoughts.</p>
<p>I was recently told that developers of legal information online have been exploring ways to better integrate their content into the emerging environment of AI-generated snippets in online search. At the Law Via the Internet Conference in November, Craig Newton, co-director of the <a href="https://www.law.cornell.edu/">Legal Information Institute</a> at Cornell Law School, said that providing text in a format that is suitable for this use can mean that online reach significantly exceeds site visits, with a potential audience of millions. However, the information at the bottom of the snippets that references websites as support for the content is misleading: it appears that these are intended to be citations, but in fact they are AI-generated lists of sites that contain the kind of information included in the snippet. It is impossible to know from this display where the actual text came from.</p>
<p>Breaking down the network of citations and treating it as an afterthought to research and writing is a concerning trend, though it didn&#8217;t start with the launch of widely available generative AI platforms. In response to this, I would encourage you to learn and teach others how to use citation management software. These are mature products that work well and have many attractive options (my personal favourite is <a href="https://www.zotero.org/">Zotero</a>). These applications allow you to manage your research both for immediate use and over time, and they integrate well with word processors to allow you to avoid the chore of manually inserting and updating your references. They offer excellent ways to track the sources of your ideas, and I encourage you to learn how to use them before you start a big project. I&#8217;m sure you can find a library with people who will help you with this if you need it. Doing so can avoid significant difficulties.</p>
<p><em>— I would like to thank Jen Brubacher, Katarina Daniels, and Annette Demers, who discussed this with me on the Canadian Association of Law Libraries member forum before I wrote this column.</em></p>
<p>The post <a href="https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/">Hallucinated References, Government Reports, and Managing Your Citations</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/02/02/hallucinated-references-government-reports-and-managing-your-citations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Beyond Regulatory Silos: Announcing the Canadian Centre for Responsible AI Governance</title>
		<link>https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/</link>
					<comments>https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/#comments</comments>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Fri, 23 Jan 2026 12:00:46 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109070</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Over the past two years, much of my writing in this space has focused on the accelerating risks associated with artificial intelligence and the uneven state of AI regulation in Canada. I have written about stalled federal legislation, the growing role of privacy regulators, the increased risks of AI use for regulated professionals, and the early signs of AI related litigation beginning to surface in Canadian courts. Taken together, these developments point to a growing tension. Artificial intelligence is being deployed at speed, while the institutions tasked with managing risk remain fragmented, reactive, and unevenly equipped.</p>
<p>This column steps back  . . .  <a href="https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/">Beyond Regulatory Silos: Announcing the Canadian Centre for Responsible AI Governance</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Over the past two years, much of my writing in this space has focused on the accelerating risks associated with artificial intelligence and the uneven state of AI regulation in Canada. I have written about stalled federal legislation, the growing role of privacy regulators, the increased risks of AI use for regulated professionals, and the early signs of AI related litigation beginning to surface in Canadian courts. Taken together, these developments point to a growing tension. Artificial intelligence is being deployed at speed, while the institutions tasked with managing risk remain fragmented, reactive, and unevenly equipped.</p>
<p>This column steps back from specific cases and statutes to address a broader institutional question. If Canada’s AI governance landscape is increasingly fragmented and reactive, what kinds of structures are capable of supporting sustained, cross-sector engagement on risk and accountability? In particular, what role can independent, non-profit institutions play at a moment when formal regulation is stalled but real-world deployment continues at speed?</p>
<p>It is against this backdrop that I have been working to establish the Canadian Centre for Responsible AI Governance. The Centre is conceived as a national, independent forum for convening stakeholders across government, industry, academia, and professional communities, with a focus on AI risk, governance, and institutional design.</p>
<h2>The Governance Gap</h2>
<p>Canada’s current approach to AI governance can best be described as partial and uneven. The collapse of the Artificial Intelligence and Data Act left the country without a dedicated federal framework for managing the implementation of AI systems. In the absence of that framework, responsibility has fallen to a patchwork of existing institutions, including privacy commissioners, professional regulators, courts, and internal corporate governance processes. Each plays an important role, but none was designed to address AI risk as a systemic issue that cuts across sectors.</p>
<p>What is missing is not expertise. Canada has no shortage of researchers, policy thinkers, computer scientists, lawyers, and public servants working on AI-related issues. I have been particularly impressed in the past year by the AI stakeholder discussions organized by the Law Commission of Ontario and by academic colleagues such as Ben Perrin, who have demonstrated how cross-sector engagement can be done well. The challenge ahead lies in extending this work across multiple sectors and at a scale that matches the breadth of contemporary AI deployment.</p>
<p>Much of the current activity occurs in silos, separated by disciplinary boundaries, institutional mandates, or political constraints. As a result, we see recurring patterns. The same governance questions are debated repeatedly in different forums. Lessons learned in one sector are slow to migrate to others. Opportunities for early, preventive engagement are often missed until harms have already occurred.</p>
<p>This gap becomes particularly acute in periods of regulatory uncertainty. When formal rule-making stalls, governance does not disappear. Instead, it shifts into less visible forms. Decisions about AI deployment are made inside organizations. Risk trade-offs are resolved through procurement processes, internal policies, and contractual arrangements. Without shared reference points or common forums for discussion, these decisions tend to reflect local pressures rather than broader public values.</p>
<h2>Why Independent Convening Matters</h2>
<p>One response to this governance gap is to call for faster legislation. That impulse is understandable, but it is not sufficient. Even well-designed statutes require interpretation, implementation, and ongoing adaptation. They also tend to lag behind technological change. In the meantime, AI systems continue to be deployed in healthcare, education, justice, and the private sector, often with limited external scrutiny.</p>
<p>Independent convening institutions serve a different but complementary function. Their value lies in creating structured spaces where stakeholders can engage with complex governance questions before those questions harden into crises or adversarial disputes. When designed well, these spaces support candour, learning, and iterative problem solving in ways that formal regulatory processes often cannot.</p>
<p>Independence is crucial. AI governance cannot be effective without the participation of those who design, deploy, and manage systems in real world settings, including industry actors. At the same time, convening bodies that are closely tied to a single institution, funder, or political agenda risk losing credibility with other stakeholders. In a polarized environment, trust becomes a form of governance infrastructure in its own right. It is built through clear institutional boundaries and a willingness to engage with disagreement rather than advocate predetermined outcomes.</p>
<h2>The Role of CCRAIG</h2>
<p>The Canadian Centre for Responsible AI Governance has been designed with these considerations in mind. Its mandate is deliberately narrow. I will serve as the Centre’s founding director, with responsibility for its initial convening and research agenda. The Centre will not provide legal advice, develop or recommend commercial products, or advocate for specific legislative outcomes. Instead, it focuses on three core activities.</p>
<p><strong>First, convening.</strong> The Centre brings together participants from government, industry, academia, civil society, and professional communities to discuss AI governance challenges in a structured and informed setting. The emphasis is on shared understanding rather than consensus, and on facilitating dialogue across perspectives rather than advocating specific outcomes.</p>
<p><strong>Second, applied research.</strong> The Centre undertakes and supports research projects that examine how AI governance operates in practice. This includes work on institutional design, risk management frameworks, and the interaction between formal regulation and informal governance mechanisms. The goal is to generate insights that are useful to decision makers across sectors, not only to academic audiences.</p>
<p><strong>Third, public education.</strong> While much of the Centre’s work occurs through targeted roundtables and research initiatives, there is also a need for accessible analysis that helps practitioners and policymakers make sense of a rapidly evolving landscape. The Centre’s public-facing writing is intended to help meet that need.</p>
<p>It is also important to emphasize what the Centre is not. It is not affiliated with or directed by any single university, government department, or corporate sponsor. At the same time, it is designed to collaborate with, and receive support from, partners across academia, government, and industry. It does not exist to validate particular technologies or business models, nor to engage in lobbying or advocacy for specific policy outcomes. Its purpose is more modest and, I would argue, more durable. It aims to strengthen the connective tissue of AI governance in Canada at a time when formal structures are under strain.</p>
<h2>Institution Building as Governance Work</h2>
<p>There is a tendency to think of governance primarily in terms of rules, enforcement, and compliance. Institutions themselves receive less attention, except when they fail. Yet the history of effective regulation suggests that durable governance depends as much on the quality of institutions as on the content of laws.</p>
<p>In the AI context, this awareness is particularly important. Many of the most significant risks associated with AI systems are multidimensional in nature. They emerge, for instance, from interactions between technology, organizational incentives, human behaviour, and legal frameworks. These dynamics have also surfaced in my doctoral work on institutions and regulatory reform in Canadian AI governance, which examines how technical and institutional factors meet in practice. A central insight is that addressing these risks requires ongoing dialogue across domains that do not naturally intersect, yet our existing institutions are rarely designed to hold or sustain these conversations.</p>
<p>From this perspective, building and maintaining spaces for that dialogue is itself a form of governance work. It is slow, often unglamorous, and difficult to measure. However, it is also one of the few ways to ensure that governance keeps pace with innovation rather than perpetually chasing it.</p>
<h2>Conclusion</h2>
<p>The Canadian Centre for Responsible AI Governance is being established at a moment when AI deployment in Canada is accelerating, formal regulation remains uncertain, and responsibility for managing risk is increasingly dispersed. The work ahead is substantial and cannot be undertaken by any single individual or organization. Meaningful progress on AI governance will depend on the participation of partners across government, industry, professional communities, civil society, and academia, and from every region of the country. The Centre is intended as a platform for that engagement, not as its endpoint.</p>
<p>For those working on AI governance challenges in their own institutions, or who see value in contributing to a national, cross-sector dialogue on risk, accountability, and institutional design, I welcome those conversations. The Centre will only be useful if it reflects the experience, concerns, and insights of those grappling with these issues in practice. As we continue to develop the Centre’s digital infrastructure, I ask for your patience and invite you to reach out directly in the meantime at <a href="mailto:michael@ccraig.ca">michael@ccraig.ca</a>.</p>
<p><em>Note: Generative AI was used in the preparation of this article.</em></p>
<p>The post <a href="https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/">Beyond Regulatory Silos: Announcing the Canadian Centre for Responsible AI Governance</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/01/23/beyond-regulatory-silos-announcing-the-canadian-centre-for-responsible-ai-governance/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter</title>
		<link>https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/</link>
					<comments>https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/#respond</comments>
		
		<dc:creator><![CDATA[Hannah Rosborough]]></dc:creator>
		<pubDate>Thu, 22 Jan 2026 12:00:20 +0000</pubDate>
				<category><![CDATA[Legal Information]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109155</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Generative AI is nearly impossible to avoid as a law student. Over the past few years, it has been embedded into many of the products commonly used for legal work (<em>See e.g.</em>, <a href="https://www.lexisnexis.com/en-ca/products/lexis-plus-ai">proprietary research platforms</a>, <a href="https://blog.google/products-and-platforms/products/search/generative-ai-google-search-may-2024/">Google</a>, <a href="https://www.microsoft.com/en-us/microsoft-365/blog/2023/09/21/announcing-microsoft-365-copilot-general-availability-and-microsoft-365-chat/">Microsoft products</a>, etc). Whether welcomed or resisted, generative AI is now part of the legal information environment.</p>
<p>There are many questions remaining about how to prepare students for the use of generative AI during their legal education and for their future practice. While <a href="https://www.slaw.ca/2020/12/18/a-taxonomy-for-lawyer-technological-competence/">technological competence</a> ≠ generative AI, we know that use of generative AI systems is a technical skill  . . .  <a href="https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/">From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Generative AI is nearly impossible to avoid as a law student. Over the past few years, it has been embedded into many of the products commonly used for legal work (<em>See e.g.</em>, <a href="https://www.lexisnexis.com/en-ca/products/lexis-plus-ai">proprietary research platforms</a>, <a href="https://blog.google/products-and-platforms/products/search/generative-ai-google-search-may-2024/">Google</a>, <a href="https://www.microsoft.com/en-us/microsoft-365/blog/2023/09/21/announcing-microsoft-365-copilot-general-availability-and-microsoft-365-chat/">Microsoft products</a>, etc). Whether welcomed or resisted, generative AI is now part of the legal information environment.</p>
<p>There are many questions remaining about how to prepare students for the use of generative AI during their legal education and for their future practice. While <a href="https://www.slaw.ca/2020/12/18/a-taxonomy-for-lawyer-technological-competence/">technological competence</a> ≠ generative AI, we know that use of generative AI systems is a technical skill the profession is anticipating. Anecdotally, much of the discussion I’ve engaged in about student use of generative AI in law is rooted in speculation rather than evidence. We speculate about student behaviour, levels of understanding, and risks of overreliance and misuse. Empirical evidence of the student experience could improve these discussions.</p>
<p>Law schools are actively trying to respond to the rapid inclusion of generative AI. Several law schools offered courses addressing technological competence and the integration of technology into legal practice prior to <a href="https://openai.com/index/chatgpt/">generative AI’s public launch in 2022</a>, and many have developed them since. These courses, however, tend to be small, upper-year electives with limited enrolment. Meanwhile, generative AI is shaping how students approach core academic and legal tasks earlier in their education, including conducting legal research, drafting written assignments, and orienting themselves in unfamiliar areas of law. This requires that discussions about the use of legal technologies employing generative AI occur earlier in legal education and at a broader level.</p>
<p>From my own teaching perspective in Legal Research &amp; Writing, this gap is increasingly difficult to ignore. I am a staunch advocate for a medium-agnostic approach to legal research and writing but also recognize there is an obligation to teach students how to use the technologies they will encounter in practice. My own classroom experiences, informal conversations, and engagement with content on the topic suggest that many students already incorporate generative AI into their research and writing processes. Much of the Canadian conversation about students’ use of generative AI in law continues to unfold in the same way: through classroom anecdotes, faculty discussions, conference panels, blog posts, and LinkedIn threads. These forums are immensely valuable, but imagine those discussions being supported by findings specific to the Canadian law student experience. This data could help teachers of law, law schools, and workplaces anchor their conversations instead of guessing which tools are being used and how, what is understood about their limitations and responsible use, and whether existing guidance is fit for purpose.</p>
<p>Contextualizing these discussions with empirical data matters for several reasons. Legal education is cumulative, and research and writing habits formed early may persist into practice. Students are currently navigating a wide range of expectations about use and disclosure across courses and employers. Attitudes toward generative AI also vary widely, from ethical refusal to enthusiastic experimentation, often alongside unspoken assumptions about the level of technological competence recent law school graduates should possess. Without evidence, policies and assessments risk responding to imagined behaviours rather than real ones. Empirical insight can help move the conversation beyond simplistic binaries of “AI good” or “AI bad,” or “use” versus “misuse,” toward a clearer understanding of how generative AI is impacting the learning process and critical thinking through its current applications.</p>
<p>This is the motivation behind <em>Beyond the Books</em>, a national survey supported by the <a href="https://cba.org/about-us/cba-partners/law-for-the-future-fund/">CBA Law for the Future Fund</a>. The study examines Canadian law students’ and recent graduates’ use, preparedness, and perceptions of generative AI in legal education and early work experiences. The project aims to gather evidence that can support more effective teaching, clearer guidance for students, and realistic expectations across legal education and practice.</p>
<p>The survey, <a href="https://surveys.dal.ca/opinio/s?s=81977"><em>Beyond the Books: Law Students’ Use, Preparedness, and Perceptions of Generative AI</em></a>, is open until <strong>30 January 2026</strong>. The response so far has been fantastic, with students and recent graduates eager to share their perspectives.</p>
<p>If you are a Canadian JD student or a member of the 2024 graduating class, please consider participating to contribute to the conversation! Your voices are <u>essential</u> to any serious discussion of generative AI in legal education and work experiences.</p>
<p>If you teach or work with Canadian JD students or recent graduates, please consider sharing the survey with your students or new colleagues.</p>
<p><strong>Link to survey:</strong> <a href="https://surveys.dal.ca/opinio/s?s=81977">https://surveys.dal.ca/opinio/s?s=81977</a></p>
<p>The post <a href="https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/">From Anecdote to Evidence: Why Students’ Experiences With Generative AI Matter</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/01/22/from-anecdote-to-evidence-why-students-experiences-with-generative-ai-matter/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Real Problem in Hallucination Cases Is Not the Failure to Verify</title>
		<link>https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/</link>
					<comments>https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Wed, 07 Jan 2026 12:00:37 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=109028</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Cases keep cropping up where counsel has used AI to create a court submission containing made-up cases. The common response on the part of courts and the profession has been: ‘prompt, but verify.’ It’s okay to use AI, just make sure it’s accurate.</p>
<p>I think this response misses the mark. But consider first how fixated we’ve become on the issue of verification — implying that this is all we need to be concerned about in deciding whether counsel should be using AI to write court submissions.</p>
<p>As Judge Moore wrote in a <a href="https://canlii.ca/t/kcmzp">Federal Court case</a> earlier this year:</p>
<blockquote>
<p>The use </p>
</blockquote>
<p> . . .  <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/">The Real Problem in Hallucination Cases Is Not the Failure to Verify</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Cases keep cropping up where counsel has used AI to create a court submission containing made-up cases. The common response on the part of courts and the profession has been: ‘prompt, but verify.’ It’s okay to use AI, just make sure it’s accurate.</p>
<p>I think this response misses the mark. But consider first how fixated we’ve become on the issue of verification — implying that this is all we need to be concerned about in deciding whether counsel should be using AI to write court submissions.</p>
<p>As Judge Moore wrote in a <a href="https://canlii.ca/t/kcmzp">Federal Court case</a> earlier this year:</p>
<blockquote><p>The use of generative artificial intelligence is increasingly common and a perfectly valid tool for counsel to use; however, in this Court, its use must be declared and as a matter of both practice, good sense and professionalism, its output must be verified by a human.</p></blockquote>
<p>In that case, counsel’s material had cited two fake cases and one that didn’t apply, and “hallucinated the proper test for the admission on judicial review.” Disclosure and accuracy were what mattered here. Not the quality of the submission or whether counsel met the duty of competence in using AI to create it.</p>
<p>This past fall, the spotlight shifted to a superior court case in Ontario where counsel had created a factum for a motion using ChatGPT. It contained hallucinated case citations and mis-cited the rules of court. Back in May 2025, counsel initially pointed the finger at staff for using AI. The court accepted her apology and <a href="https://canlii.ca/t/kc6xx">waved off</a> a criminal contempt citation, due in part to its finding that counsel was unaware that AI hallucinates. (In September, however, counsel admitted that she, not staff, had used AI to make the factum — and the Crown has launched a <a href="https://canlii.ca/t/kgvr8">new contempt proceeding</a> in response.)</p>
<p>Part of the court’s <a href="https://canlii.ca/t/kc6xx">rationale</a> in deciding against a first contempt citation (in May) was that counsel “undertook to complete no fewer than six hours of Continuing Professional Development training in legal ethics and technology, including addressing specifically the professional use and risks of AI tools in legal practice.”</p>
<p>I suspect that CPD training would echo what lawyers on social media are saying in response to this case. One writes <a href="https://www.linkedin.com/posts/daniel-escott-29692763_ko-v-li-2025-onsc-6785-activity-7402771950873616385-R8Ed/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEFAL4QB6EtmLyfaoaQJGzZnupHLWzTeiZQ">on LinkedIn</a>: “&#8230;relying on LLMs (and the products built on them) without rigorous verification is a liability… If your AI cannot provide a non-generative, hallucination-proof citation for every claim it makes, it does not belong in a courtroom.” But does this mean that if my AI submission meets the standard of ‘rigorous verification,’ it does belong in a courtroom?</p>
<p>Law societies across the country are also fixated on the verification issue. The Law Society of BC’s ‘<a href="https://www.lawsociety.bc.ca/Website/media/Shared/docs/practice/resources/Professional-responsibility-and-AI.pdf">Guidance on Professional Responsibility and Generative AI</a>’ points out that these tools can “create work product that appears very polished,” but warns counsel to be “careful to not lose sight of your responsibility to review the content carefully and ensure its accuracy.”</p>
<p>A <a href="https://lawsocietyontario-dwd0dscmayfwh7bj.a01.azurefd.net/media/lso/media/lawyers/practice-supports-resources/white-paper-on-licensee-use-of-generative-artificial-intelligence-en.pdf">white paper</a> from the LSO in mid 2024 on using generative AI tried to be more specific, but is still a bit unclear: “Generative AI can be used to prepare first drafts of certain documents including memoranda, letters and even opening statements or examination questions.” Does ‘memoranda’ here mean memoranda of argument, as in a factum or written submissions on closing? What should or shouldn’t we be using AI to do when it comes to preparing for court?</p>
<p>I believe the answer to this question doesn’t turn on the accuracy of the AI tool you choose. It turns on the question of <em>competence</em>.</p>
<p>The real problem in the hallucination cases, it seems to me, is not a failure to verify. It’s whether using AI to create a court submission breaches one of the fundamental duties of all lawyers in Canada: the duty of competence.</p>
<blockquote><p>“A lawyer shall perform any legal services undertaken on a client’s behalf to the standard of a competent lawyer.” (LSO, Rules of Professional Conduct <a href="https://lso.ca/about-lso/legislation-rules/rules-of-professional-conduct/chapter-3">3.1-2</a>)</p></blockquote>
<h2>Three reasons not to use AI to draft court submissions</h2>
<p>I appreciate that lawyers are in a rush and may want to use AI to generate a draft submission they can edit. But I put forth for consideration the following proposition. If you use AI to produce a substantial draft of a court submission (i.e., anything more than a mere outline of an argument) — even if you verify all of your citations — you run the risk of breaching your duty of competence, in spirit at least, if not overtly. AI is good at many things, but it cannot draft a submission nearly as well as you can, if you put in the effort.</p>
<p>Three reasons why:</p>
<p><em>1. Verification won’t rescue an AI submission.</em></p>
<p>It makes no sense to have AI produce a draft factum citing fake cases and think that you could merely swap out the fake ones for real ones. The whole submission would be broken from the outset. Every argument, if not every sentence, would no longer correspond to the content of the real cases.</p>
<p><em>2. Legal AI is good but not that good.</em></p>
<p>Platforms like Protégé and CoCounsel are better than free AI like ChatGPT in that any citations they include in a memo will link to real cases. But I’ve experimented with these tools a fair bit. I find that when they cite a set of cases to support an argument, the various steps in the argument tend to be strewn with hallucinations — claims that the cited cases stand for propositions they don’t. Legal AI is good for research: finding cases, providing snapshots of the law on point or on certain issues. It’s not consistently reliable for putting together a comprehensive legal argument tailored to your facts.</p>
<p><em>3. AI can’t handle the myriad subtleties at play.</em></p>
<p>More to the point, in writing a competent submission, there are simply too many variables in your case that need to be considered and combined in a legal argument for AI to be as good a tool for this as all the tiny cells in your brain working together (fueled, of course, by your favourite latte).</p>
<p>In order to be sensitive to all the nuances and facts in your case, your prompt would have to be as long and thoughtful as the argument that AI would generate. Put otherwise, no tool will do as good a job as you in thinking through the argument at each stage and deciding how to frame the various authorities, facts, and issues. Even the most advanced AI remains too blunt an instrument. I say this as a huge advocate and heavy user of AI.</p>
<p>I welcome reader feedback on this, but I’m inclined to think that aside from the most mundane, routine sorts of documents — a Notice of Application containing only basic details — using AI for court submissions runs a real risk of presenting something that would involve less-than-competent representation.</p>
<p>Substantive legal writing may not be poetry or belles-lettres. But if you ask anyone <a href="https://supremeadvocacy.ca/eugene-meehan-kc/">who does a lot of it</a>, they will be quick to agree that it’s a lot closer to finely crafted forms of writing than routine form-filling. Just as we wouldn’t want to plonk our phone on counsel table and have AI deliver our submissions, we shouldn’t assume it’s appropriate to have AI write them.</p>
<p>Justice Masuhara of the BC Supreme Court, who decided <a href="https://canlii.ca/t/k314g">one of the first</a> hallucination cases in Canada, put it best: “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers… The integrity of the justice system requires no less.”</p>
<p>The post <a href="https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/">The Real Problem in Hallucination Cases Is Not the Failure to Verify</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2026/01/07/the-real-problem-in-hallucination-cases-is-not-the-failure-to-verify/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals</title>
		<link>https://www.slaw.ca/2025/11/25/unregulated-tools-unyielding-duties-ai-risk-management-for-canadian-professionals/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Tue, 25 Nov 2025 12:00:26 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108766</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In my last column, I moved away from regulatory analysis to explore how artificial intelligence may affect specific functions within the legal profession. In this piece, I return to the theme of risk and broaden the discussion to consider the challenges AI presents across all regulated professions.</p>
<p>The rapid development of generative artificial intelligence has already begun to reshape practice across a wide range of professions. For regulated professionals in Canada, including lawyers, physicians, engineers, and others governed by statutory, ethical, and fiduciary duties, these advances bring both significant promise and considerable risk. However, the legal and regulatory frameworks are  . . .  <a href="https://www.slaw.ca/2025/11/25/unregulated-tools-unyielding-duties-ai-risk-management-for-canadian-professionals/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/11/25/unregulated-tools-unyielding-duties-ai-risk-management-for-canadian-professionals/">Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In my last column, I moved away from regulatory analysis to explore how artificial intelligence may affect specific functions within the legal profession. In this piece, I return to the theme of risk and broaden the discussion to consider the challenges AI presents across all regulated professions.</p>
<p>The rapid development of generative artificial intelligence has already begun to reshape practice across a wide range of professions. For regulated professionals in Canada, including lawyers, physicians, engineers, and others governed by statutory, ethical, and fiduciary duties, these advances bring both significant promise and considerable risk. However, the legal and regulatory frameworks are not keeping pace. In this legislative gap, professionals remain fully accountable to their existing professional obligations, even as the tools they are expected to evaluate and manage become more complex and less transparent.</p>
<p>This article considers the implications of this regulatory gap, examines the risks associated with the uncritical adoption of AI, and proposes practical risk management strategies for professionals and their regulatory bodies. As innovation continues to accelerate, the need for thoughtful scrutiny at the intersection of technological adoption and professional responsibility becomes increasingly pressing.</p>
<h2>The Regulatory Gap in Canada: Professional Duties in an Evolving Landscape</h2>
<p>As has been previously discussed in this column, Canada currently lacks a comprehensive legislative framework governing the use of artificial intelligence. The most developed initiative to date, the proposed Artificial Intelligence and Data Act (AIDA), was introduced as part of Bill C-27 and sought to implement a risk-based regulatory regime for “high-impact” AI systems. Although AIDA represented a significant legislative step forward, it did not proceed to enactment before Parliament was prorogued in early 2025 and has not, at this time, been reintroduced.</p>
<p>In the absence of specific legislation, regulated professionals must continue to rely on pre-existing legal and ethical frameworks to guide their conduct. These include statutory privacy obligations, fiduciary and common law duties, and the professional codes administered by self-regulatory bodies. The elevated responsibilities imposed on professionals, grounded in the public interest and in the protection of vulnerable individuals, remain fully operative and arguably become more salient as professionals navigate the uncertainties of emerging technologies. While most regulatory bodies have begun to examine the implications of AI, many have yet to issue detailed or binding guidance. As a result, professionals must act with heightened caution and independent judgment. The current regulatory gap does not diminish their legal or ethical duties; rather, it increases the importance of deliberate, defensible, and accountable decision-making in the adoption of AI.</p>
<h2>Why Regulated Professionals Must Exercise Heightened Caution with AI</h2>
<p>Regulated professionals occupy a distinctive position of trust and accountability within Canadian society and are subject to ethical, statutory, and fiduciary obligations that exceed those of non-regulated actors. These obligations are designed to protect clients and patients, uphold the integrity of professional services, and preserve confidence in self-regulating systems. When professionals adopt new technologies, especially those that are novel, powerful, and unregulated, such as generative artificial intelligence, they must assess the associated risks in light of their heightened professional responsibilities. The fact that a tool is widely accessible or commercially available does not absolve professionals from the obligation to meet their elevated standard of care.</p>
<p>Unlike private enterprises that may deploy AI systems within the bounds of general legal norms, regulated professionals remain personally responsible for the outcomes of their work, including results generated or influenced by AI. This includes an expectation that professionals will understand the capabilities and limitations of the tools they use, preserve the confidentiality of sensitive information, and ensure that independent professional judgment remains central. Generative AI tools, especially those that operate on opaque or proprietary platforms, present significant risks. These risks include the potential generation of false or misleading information, inadvertent disclosure of confidential data, and the uncritical automation of decisions that require human oversight. In professional settings, such outcomes may result in disciplinary proceedings, civil liability, reputational harm, or injury to clients and the broader public.</p>
<p>Recent incidents make clear that the risks associated with AI use in professional practice are not being taken as seriously as they should be. Within a week of writing this article, two high-profile examples of AI-related professional misconduct were reported in the media. In the first, global consultancy Deloitte faced public criticism after a report prepared for the Australian government was found to contain factual errors attributed to the uncritical use of generative AI. In the second, the Alberta Court of Appeal addressed concerns about a lawyer who had retained a third-party contractor to prepare a factum that seemingly contained AI-generated errors. The Court affirmed that, regardless of delegation, lawyers remain fully responsible for materials filed under their name, stating: “&#8230;if a lawyer engages another individual to write and prepare material to be filed with the court, the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents&#8230;” <em>Reddy v Saroya</em>, 2025 ABCA 322 at para 83.</p>
<p>As public awareness of AI-related harms grows, it is likely that regulators will take an increasingly rigorous approach to oversight of AI use within the professions. Accordingly, regulated professionals must approach AI-assisted work with the same diligence, care, and scrutiny that govern all acts performed under the authority of a professional licence.</p>
<h2>Risk Management Strategies for Regulated Professionals and Oversight Bodies</h2>
<p>In the current environment, characterized by rapid technological change and challenges in regulatory oversight, regulated professionals and their governing bodies can rely on well-established risk management principles to guide the ethical and responsible integration of AI into practice. Risk management in this context refers to an ongoing process of identifying, assessing, mitigating, and monitoring risks associated with the use of AI in professional environments. The overarching objective is to ensure that AI strengthens, rather than compromises, professional competence, client or patient trust, and the protection of the public.</p>
<p>For individual professionals, effective risk management begins with a thorough evaluation of the AI tools under consideration. This includes understanding the tool’s capabilities and limitations, reviewing its outputs before relying on them, and implementing internal safeguards that maintain confidentiality and uphold professional judgment. Clear usage protocols should address data protection, transparency with clients or patients, and documentation of AI-assisted decisions. In some cases, informed consent may require disclosing the use of AI. Maintaining records of these practices supports professional accountability and provides a defensible basis in the event of an inquiry or complaint.</p>
<p>Professional regulators also have an essential role in facilitating responsible practice. Although updated regulatory frameworks may take time to develop, interim measures can be adopted. These may include the issuance of practice advisories, updates to codes of conduct, and the development of ethical guidance specific to AI use. In jurisdictions where multiple regulators confront similar challenges, this may also include collaborative efforts to share model policies and risk management strategies.</p>
<p>By anchoring the adoption of AI in proven risk management principles, both professionals and regulators can respond to technological change in a manner that is practical, proportionate, and aligned with the core values of the professions. This approach does not require deferring action until legislation is enacted; rather, it calls for the application of existing ethical and regulatory tools to a new and evolving set of circumstances.</p>
<h2>Conclusion</h2>
<p>In the absence of comprehensive legislation, the responsibility for ethical and legally defensible AI use falls squarely on the shoulders of individual professionals and their regulatory institutions. The allure of efficiency and innovation cannot outweigh the foundational obligations that define regulated practice: competence, confidentiality, accountability, and the protection of the public interest. It is incumbent upon professionals to scrutinize the tools they employ and apply the established principles of risk management to this emerging field. Likewise, regulators must provide timely guidance to assist professionals in navigating the complexities of AI within existing professional standards. In an era where technological innovation exceeds the pace of regulatory and ethical oversight, professionals must recognize that caution is not merely advisable; it is a professional imperative.</p>
<p>The post <a href="https://www.slaw.ca/2025/11/25/unregulated-tools-unyielding-duties-ai-risk-management-for-canadian-professionals/">Unregulated Tools, Unyielding Duties: AI Risk Management for Canadian Professionals</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How Profs and Students Are Using AI in Law Schools Around the World</title>
		<link>https://www.slaw.ca/2025/11/10/how-profs-and-students-are-using-ai-in-law-schools-around-the-world/</link>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Mon, 10 Nov 2025 12:00:30 +0000</pubDate>
				<category><![CDATA[Legal Education]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108784</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Law schools everywhere are confronting the same issue: how to use AI to help rather than hinder student learning.</p>
<p>In an <a href="https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/">earlier column</a>, I speculated on ways we might help law students foster good over bad uses of AI. A <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401422">paper</a> published this summer by Dutch law professor Thibault Schrepel surveys the growing literature on experiments with AI in legal education. His overview provides a more concrete sense of what better uses of AI might entail.</p>
<p>These applications all have potential pitfalls, but the pitfalls themselves can be harnessed as part of the learning process. To begin with the most  . . .  <a href="https://www.slaw.ca/2025/11/10/how-profs-and-students-are-using-ai-in-law-schools-around-the-world/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/11/10/how-profs-and-students-are-using-ai-in-law-schools-around-the-world/">How Profs and Students Are Using AI in Law Schools Around the World</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Law schools everywhere are confronting the same issue: how to use AI to help rather than hinder student learning.</p>
<p>In an <a href="https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/">earlier column</a>, I speculated on ways we might help law students foster good over bad uses of AI. A <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5401422">paper</a> published this summer by Dutch law professor Thibault Schrepel surveys the growing literature on experiments with AI in legal education. His overview provides a more concrete sense of what better uses of AI might entail.</p>
<p>These applications all have potential pitfalls, but the pitfalls themselves can be harnessed as part of the learning process. To begin with the most obvious use:</p>
<h3><em>1. Summarize and explain lengthy and convoluted cases in simple terms. </em></h3>
<p>AI summaries can be wrong in many ways. They can hallucinate facts, misstate ratios, or confuse dissenting and majority opinions. But some profs seize upon this by getting students to verify summaries for accuracy. In other cases, AI summaries provide a starting point for tackling dense material, allowing students to focus on higher-order tasks like evaluating the reasoning or thinking about how to apply a holding to new facts.</p>
<h3><em>2. Come up with new fact patterns for students to test how a rule might operate. </em></h3>
<p>AI can generate a brief scenario or role-play a lawyer-client simulation to illustrate how a concept might be encountered in practice. Some profs encourage students to have AI critique their performance or advice.</p>
<h3><em>3. Have AI play the role of Socratic tutor, asking follow-up questions when a student states a rule. </em></h3>
<p>This can be done by uploading a case and prompting a model to generate a Q&amp;A session about it, acting as (a kind and encouraging) professor or articling principal.</p>
<h3><em>4. Have AI produce a summary of an area of law—line of authority, doctrine, or principle—and have students critique its answer.</em></h3>
<p>Is it accurate? What is it missing? How can it be improved?</p>
<h3><em>5. Design a custom GPT trained on course materials to provide students with an interactive tool for questions.</em></h3>
<p>While a student could email a prof for clarity about a point covered in class or feedback on a practice exam, a model drawing on course materials could provide an instant response and “effectively turn solitary study into an interactive session.”</p>
<h3><em>6. Use AI to brainstorm arguments for mooting briefs or papers.</em></h3>
<p>Many of these won’t be usable. But AI can help students overcome fear of the blank page and get started, moving on to higher-order analysis sooner and more efficiently.</p>
<h3><em>7. Have AI edit writing for style and clarity.</em></h3>
<p>Legal writing is notoriously hard to teach and labour-intensive. AI is infinitely patient and can show its work by bolding suggested changes and explaining the rationale. It can show how a passage might be rewritten to be more formal or offer a high-level critique.</p>
<h3><em>8. Give students an AI-generated first draft of a memo and have them revise and improve it.</em></h3>
<p>This can help students refine their ability to explain rules, structure arguments, and cite law correctly. It also helps develop editing as a distinct and vital skill.</p>
<h3><em>9. Draft an outline for a factum or paper and have AI critique it. </em></h3>
<p>AI can flag flaws and inconsistencies and suggest alternative structures. Some profs ask students to submit a log of their prompts and responses as part of the assignment, encouraging reflection on the process and the questions asked.</p>
<h3><em>10. Use AI to assist with legal research by suggesting key cases or statutes on a given topic, and possible arguments.</em></h3>
<p>Results will be hit or miss, but AI is improving at providing at least “an initial roadmap, akin to brainstorming with a reference librarian or senior colleague,” as Schrepel notes.</p>
<h2>Broader concerns with greater reliance on AI</h2>
<p>These applications all aim to use AI as a tool for learning rather than a substitute for it. But as Schrepel points out, there’s a major potential problem lurking beneath all of this: over-reliance on AI.</p>
<p>As students become more adept at using AI for research, outlines, and editing, the temptation to let AI do the work for you—to cross the line into “<a href="https://link.springer.com/article/10.1007/s10639-024-13151-7">AI-giarism</a>”—is strong. AI is already capable of drafting papers and exam answers that professors can’t detect. (Schrepel cites a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5008559">study</a> where law profs grading a criminal law exam in Australia failed to identify answers written entirely by AI.)</p>
<p>The challenge for law profs and principals is to foster awareness of <em>when</em> reliance becomes <em>over</em>-reliance. AI might appear capable of complex writing and research on its own, but it makes frequent and subtle errors. More crucially, it lacks the judgment to weigh competing considerations or assess credibility. It can overlook broader ethical issues hovering above a seemingly straightforward legal question.</p>
<p>But this, I think, is why the uses of AI outlined above matter. They treat AI as a starting point for critical engagement, not an endpoint.</p>
<p>Students who use AI to generate fact patterns get better at issue spotting. Verifying AI summaries develops their close reading skills. And critiquing AI-generated arguments can sharpen their analytical judgment.</p>
<p>The profession that students are heading into is one where AI will be ubiquitous. Our goal should not be to avoid AI but to teach students to recognize its limitations and to improve its output with their own expertise—to use AI effectively and responsibly.</p>
<p>Because however the profession may change with AI, human judgment is likely to remain central.</p>
<p>The post <a href="https://www.slaw.ca/2025/11/10/how-profs-and-students-are-using-ai-in-law-schools-around-the-world/">How Profs and Students Are Using AI in Law Schools Around the World</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing</title>
		<link>https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/</link>
		
		<dc:creator><![CDATA[Amy Salyzyn]]></dc:creator>
		<pubDate>Tue, 28 Oct 2025 11:00:32 +0000</pubDate>
				<category><![CDATA[Legal Ethics]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108806</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that make it fundamentally dissimilar to an inexperienced first-year associate. How is AI deceptive? In short, generative AI can fail in unpredictable ways and sometimes in ways that mimic reliability, making errors harder to detect than those flowing from simple inexperience.</p>
<p>Before elaborating, three important caveats  . . .  <a href="https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/">Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Guidance for lawyers on generative AI use consistently urges careful verification of outputs. One popular framing advises treating AI as a “first-year associate”—smart and keen, but inexperienced and needing supervision. In this column, I take the position that, while this framing helpfully encourages caution, it obscures how generative AI can be deceptive in ways that make it fundamentally dissimilar to an inexperienced first-year associate. How is AI deceptive? In short, generative AI can fail in unpredictable ways and sometimes in ways that mimic reliability, making errors harder to detect than those flowing from simple inexperience.</p>
<p>Before elaborating, three important caveats about scope. First, this column focuses on guidance given to lawyers in response to generative AI’s unreliability. This focus is not meant to imply that other concerns relevant to responsible AI use—like data security or potential bias—are unimportant. Second, the risks posed by generative AI’s unreliability vary by task; accuracy matters less when brainstorming alternative wording for a factum than when conducting substantive legal research. The cautions offered below will be more or less salient depending on how generative AI is being used. Third, concerns about unreliability vary by tool. The generative legal AI landscape is vast and diverse, encompassing hundreds of specialized legal products, general-purpose tools such as ChatGPT and Microsoft Copilot, and AI features built into software like videoconferencing platforms and PDF readers. The deceptive dynamics detailed below will not manifest equally across all tools, both because tools serve different functions and because developers employ different interfaces and safeguards.</p>
<p>With that context in mind, how might generative AI tools and outputs be deceptive in ways that aren’t obvious in the portrayal of AI as a first-year associate?</p>
<p>First, and most obviously, generative AI outputs can be <strong>fabricated</strong>. A lawyer reviewing a first-year associate’s work likely expects some errors flowing from inadequate research or an incomplete understanding of the law. They do not suspect straight-up fictitious content. As Jason Harkess <a href="https://www.linkedin.com/pulse/ai-lied-court-how-legal-professionals-worldwide-being-harkess-i5izc/">observes</a>, “[h]uman errors typically involve misinterpretations, incomplete research, or analytical missteps. AI hallucinations, by contrast, often involve complete fabrications that appear superficially correct but lack any basis in reality.”</p>
<p>Many lawyers now know that generative AI can generate fake case citations, so this form of deception is increasingly well-understood. However, continued submissions of fabricated authorities to courts suggest the lesson hasn’t been universally absorbed. In any case, there is also the lesser-known prospect of subtle hallucinations: a date altered here, part of a legal test changed there. These more subtle hallucinations are harder to detect and mean that where accuracy is paramount, extreme caution and rigorous verification are warranted when relying on AI outputs. In some situations, the vetting burden may, in fact, outweigh any efficiency gains. It isn’t always a simple matter of double-checking a tool’s outputs – it may be that using generative AI doesn’t make sense in the first place or that you need to consider using it in a more constrained way. Contrast, for example, using generative AI in a legal research task to arrive at useful Boolean search terms (h/t Katarina Daniels) versus trying to use the technology to generate a full research memo or factum from scratch.</p>
<p>Second, generative AI outputs are often <strong>fluent.</strong> The text these tools produce is generally polished, characterized by clean formatting, correct grammar, and familiar language patterns. This is a strength of these tools &#8211; no one wants sloppy or error-ridden text. The problem is that psychological research shows that processing fluency—the ease with which we read text—can breed overconfidence in a text’s substantive truth and trustworthiness. When reviewing a first-year associate’s work, a lawyer is likely to take comfort in well-presented materials. Careful grammar, writing, and editing demonstrate diligence, which, at the very least, indicates that the work was not hurried. With generative AI outputs, this heuristic breaks down. As Jack Shepherd <a href="https://jackwshepherd.medium.com/generative-ai-in-the-legal-industry-is-accuracy-everything-60c6f3a15947">has observed</a>, “…humans do sometimes produce bad but well-presented work. L[arge] L[anguage] M[odel]s always do. That makes it harder to spot the errors.” The refinement of generative AI outputs can mask quality issues for the unsuspecting.</p>
<p>Third, generative AI can be <strong>fickle</strong> – or at least appear so to users who lack deep technical understanding. The types of errors produced by generative AI technology may be unexpected, meaning we are not primed to look for or recognize them. As a <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=64700">2023 Harvard Business School</a> study notes, generative AI has a “jagged frontier” in which tasks that we might expect to be difficult can often prove easy, while seemingly simple tasks sometimes end up posing big challenges. Take, for example, ChatGPT’s strong performance <a href="https://www.smithsonianmag.com/smart-news/chatgpt-or-shakespeare-readers-couldnt-tell-the-difference-and-even-preferred-ai-generated-verse-180985480/">in creating poems that mimic Shakespeare’s style</a> versus its inability, at least at certain points in time, to correctly <a href="https://community.openai.com/t/incorrect-count-of-r-characters-in-the-word-strawberry/829618">count the number of times that the letter “r” appears in the word “strawberry”.</a> This dynamic complicates verification. Lawyers may not realize they need to double-check outputs on tasks that would be trivial or very easy for a first-year associate. While technical explanations exist for these “common sense” failures, they are likely to be unexpected by non-experts (and most lawyers are non-experts in AI).</p>
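<p>To see just how jagged that frontier is, contrast the letter-counting task with how conventional software handles it. The short Python sketch below (illustrative only) is deterministic: the same input always produces the same, correct answer, which is precisely the kind of reliability a user might wrongly assume carries over to a language model.</p>
<pre><code># Counting letters is trivial and deterministic in conventional code,
# even though the same task has, at times, tripped up large language models.
word = "strawberry"
print(word.count("r"))  # always prints 3
</code></pre>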
<p>Fourth, and related to the above, generative AI outputs can be <strong>fragile</strong>. To quote <a href="https://jackwshepherd.medium.com/what-are-the-specific-use-cases-for-generative-ai-in-contract-drafting-b738d8353b37">Shepherd</a> again, “large language models are designed to produce different output each time, even if you use the same prompt. The level of randomness can be adjusted through the ‘temperature’ parameter, although if you turn it down too much, the output becomes very robotic and unhuman-like. It is inherent in the design of these models that the output differs each time.” While we might start expecting regularity and consistent outputs once an associate does a task several times, the same will not necessarily hold for a generative AI tool. This reality of the technology also means that it can experience unexpected failures even on tasks it has previously handled well. For lawyers, this sort of fragility means that past performance cannot always be used reliably as a proxy for future quality.</p>
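<p>For readers who want to see where this variability comes from, here is a minimal sketch of Shepherd’s point, assuming the OpenAI Python SDK and an API key (the model name and prompt are illustrative only). The same prompt is sent at several temperature settings; higher settings increase sampling randomness, and even low settings do not guarantee identical outputs across runs.</p>
<pre><code># A minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
# and an OPENAI_API_KEY set in the environment. Model name is illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "Summarize the purpose of a limitation period in one sentence."

for temperature in (0.0, 0.7, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # higher values increase sampling randomness
    )
    print(f"T={temperature}: {response.choices[0].message.content}")
</code></pre>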
<p>Fifth, generative AI technology can lack <strong>faithfulness</strong>. “Faithfulness” is a technical term for a specific type of AI deception, helpfully summarized <a href="https://openreview.net/pdf?id=4ub9gpx9xw">in this paper</a> by Katie Matton et al.:</p>
<blockquote><p>Modern large language models (LLMs) can generate plausible explanations of how they arrived at their answers to questions. And these explanations can lead users to trust the answers. However, recent work demonstrates that LLM explanations can be <em>unfaithful,</em> i.e., they can misrepresent the true reason why the LLM arrived at the answer.</p></blockquote>
<p>In other words, when generative AI explains its reasoning, it is possible that the explanation given does not reflect how it actually produced the output. To be clear, the faithfulness problem does not arise with all tools, or attach to all explanations, and the precise conditions under which unfaithful explanations arise are an ongoing area of research. Even with this wobbliness, it strikes me that lawyers should be aware of the possibility of unfaithful explanations. The practical issue, to quote Matton et al. again, is that “misleading explanations can provide users with false confidence in LLM responses, leading them to fail to recognize when the reasons behind model recommendations are misaligned with the user’s values and intent.” This dynamic is particularly troubling because lawyers are likely to consider the reasoning and process used to reach a conclusion as providing some indication of the reliability and appropriateness of the conclusion. When AI provides explanations that appear logical but are fundamentally unfaithful, it exploits this professional instinct, potentially leading lawyers to place unwarranted trust in flawed outputs.</p>
<p>By pointing out that generative AI tools and their outputs can be deceptive through fabrication, fluency, fickleness, fragility, and unfaithfulness, my point here is not that this technology is so irredeemable that it should be eradicated from lawyers’ offices. Again, let’s recall that not all generative AI uses for lawyers hinge on generating factually accurate outputs. Also again, the world of legal AI is large and some tools may include features or safeguards that mitigate certain of these risks. On top of this, we know that all sorts of shortcomings can attach to human performance. We cannot engage with anywhere near the amount of data that AI systems can. We bring our own biases to tasks. We get tired and hungry.</p>
<p>The intended take-away here is not “machines bad, humans good”. The relatively modest plea I am making is to appreciate that, if we are directing lawyers to verify generative AI outputs – an undoubtedly wise direction – the “first-year associate” framing only takes us so far. What unites the five deceptive dynamics outlined above is that they all involve forms of deception that we are not primed to necessarily expect (or to expect in the same form) from human colleagues, even inexperienced ones. Generative AI outputs can be fabricated but appear authentic, polished but flawed, error-prone in unexpected ways, successful until they’re not, and accompanied by explanations that may themselves be unreliable. Understanding these distinctive deceptive dynamics equips lawyers to develop verification practices tailored to this technology’s particularities, rather than relying on instincts developed for supervising human colleagues. The “first-year associate” framing asks lawyers to supervise; understanding these dynamics reveals what they must supervise for—and why that task can sometimes be more challenging than the framing suggests. Better understanding breeds better practice, and, ultimately, safer use. Indeed, as these deceptive dynamics become better known and incorporated into practice, their power to deceive will diminish.</p>
<p>The post <a href="https://www.slaw.ca/2025/10/28/deceptive-dynamics-of-generative-ai-beyond-the-first-year-associate-framing/">Deceptive Dynamics of Generative AI: Beyond the “First-Year Associate” Framing</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Quantitative Assessment of Access to Justice Initiatives</title>
		<link>https://www.slaw.ca/2025/10/08/quantitative-assessment-of-access-to-justice-initiatives/</link>
		
		<dc:creator><![CDATA[Sarah A. Sutherland]]></dc:creator>
		<pubDate>Wed, 08 Oct 2025 11:00:06 +0000</pubDate>
				<category><![CDATA[Legal Information]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108665</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Quantitative methods are at once well-established and novel when speaking about access to justice. We&#8217;ve been reporting on our activities to funders, boards, and communities for decades, but we&#8217;ve also occasionally been complacent about what message we are conveying. When I think about data on the law and how we can approach using it better, I often think about Jon Snow and his search for the source of a cholera outbreak in London in 1854. Here you can see the original map that allowed him to identify the source as the water pump on Broad Street, which he created through  . . .  <a href="https://www.slaw.ca/2025/10/08/quantitative-assessment-of-access-to-justice-initiatives/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/10/08/quantitative-assessment-of-access-to-justice-initiatives/">Quantitative Assessment of Access to Justice Initiatives</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Quantitative methods are at once well-established and novel when speaking about access to justice. We&#8217;ve been reporting on our activities to funders, boards, and communities for decades, but we&#8217;ve also occasionally been complacent about what message we are conveying. When I think about data on the law and how we can approach using it better, I often think about John Snow and his search for the source of a cholera outbreak in London in 1854. Here you can see the original map that allowed him to identify the source as the water pump on Broad Street, which he created through research he carried out by knocking on doors and asking how many people in each house had died (<a href="https://scienceline.org/2010/05/john-snows-maps-of-the-broad-street-cholera-outbreak/">here is some more information on John Snow&#8217;s cholera map</a>).</p>
<p><img decoding="async" class="alignnone size-large wp-image-88402" src="https://www.slaw.ca/wp-content/uploads/2018/03/Snow-cholera-map-1-600x560.jpg" alt="" width="600" height="560" srcset="https://www.slaw.ca/wp-content/uploads/2018/03/Snow-cholera-map-1-600x560.jpg 600w, https://www.slaw.ca/wp-content/uploads/2018/03/Snow-cholera-map-1-200x187.jpg 200w, https://www.slaw.ca/wp-content/uploads/2018/03/Snow-cholera-map-1-300x280.jpg 300w, https://www.slaw.ca/wp-content/uploads/2018/03/Snow-cholera-map-1-768x716.jpg 768w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>When it comes to data and access to justice in Canada, this is the stage we are at: identifying patterns through simple tracking to surface initial information about the causes of problems.</p>
<p>Data generally exists either as a byproduct of other processes or because it has been actively collected for a particular purpose. This means it can lack value, or cost more to gather than it is worth. These limitations matter in the context of data and the law, especially when we compare outcomes for different groups or communities, because we have no way to know what a correct outcome should look like or what an appropriate variation in society would be. In their paper &#8220;Big Data, Machine Learning, and the Credibility Revolution in Empirical Legal Studies&#8221; Ryan Copus, Ryan Hübert, and Hannah Laqueur wrote: “There often is no better indicator for the right decision than the decision that a judge actually made.” (<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3156795">preprint on SSRN</a>).</p>
<p>So, given our inability to count &#8220;justice&#8221;, what can we do when we need to evaluate impact? Here is a short list of categories of ways to approach this question, with some discussion of their benefits and disadvantages, from easiest to most difficult.</p>
<p>The first way is to count interactions. This can be a simple count of the number of people who walk through the door, or the number of people who visit a website or use a service. The statistics are generally easy to collect, easy to explain, and easy to convey. They also allow comparisons over time to track impact year over year. However, they are frequently overly simplistic and don&#8217;t convey much meaning in terms of what people are actually doing or experiencing. This means that, though they are one of the most common ways of measuring impact, they are also frequently one of the least meaningful.</p>
<p>The next way to track impact in a community is to track metrics that are easily countable but carry a more significant element of impact. These may be things like the amount of time spent, a count of the materials shared, and other easily trackable ways of measuring whether someone was reached by a particular initiative. This is often more work and may require more training for the people collecting the data. However, it gives better insight into the work being done and how communities are experiencing it than simple numerical tracking.</p>
<p>The third method of tracking impacts is attempting to count outcomes. This could be something like work product: divorces filed, contracts written, or negotiations conducted. It can be taken even further, trying to assess whether individuals had positive or negative outcomes. This is the most difficult method, as it frequently requires significant effort to collect the data and, because it often involves contacting people who are no longer connected with an organization, it will typically have a significant number of missing data points.</p>
<p>The last way of conveying an initiative’s impacts I will mention is storytelling. The lived experiences of people facing legal problems and other stressful events can be among the most meaningful ways of communicating the reality of those issues. The problem is that stories can easily be little more than anecdotes, which cannot be extrapolated beyond the individuals discussed.</p>
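<p>To make the first two of these approaches concrete, here is a minimal sketch in Python of how an organization might compute them from an interaction log. The field names and figures are invented for illustration; in practice they would come from a case management system or web analytics tool.</p>
<pre><code># Toy illustration of the first two measurement approaches above:
# counting interactions, then layering on a more meaningful metric
# (time spent). All field names and figures are invented.
from collections import Counter

interactions = [
    {"channel": "walk-in", "minutes": 25},
    {"channel": "website", "minutes": 4},
    {"channel": "phone",   "minutes": 12},
    {"channel": "website", "minutes": 6},
]

# Approach one: a simple count of people reached, by channel.
reach = Counter(i["channel"] for i in interactions)

# Approach two: an easily countable metric with more substance,
# such as total and average time spent per interaction.
total_minutes = sum(i["minutes"] for i in interactions)
average_minutes = total_minutes / len(interactions)

print(reach)             # Counter({'website': 2, 'walk-in': 1, 'phone': 1})
print(total_minutes)     # 47
print(average_minutes)   # 11.75
</code></pre>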
<p>When we talk about analyzing the impacts of our work, we are working interdisciplinarily. We draw on insights developed in the fields of business, statistics, data science, and storytelling, all of which require analysis of the underlying conditions and exploration of impacts on communities.</p>
<p>One of the easiest ways to find data for analysis is through tools that automate its collection, such as web statistics applications. This can be expanded to include manual collection of data at the point of action and potentially more intensive research methods. Collection alone, however, does not tell us how to interpret what we gather. Telling the story of what data means requires a significant amount of interpretation and subject matter expertise to contextualize meaning.</p>
<p>We have to decide what&#8217;s important. We can engage in the ongoing collection of user statistics, and we can expand our efforts to work on full research projects designed to identify underlying patterns. Neither of these initiatives is simple, and both raise significant issues around how they can be deployed within organizations.</p>
<p>Another possibility is to consider developing key performance indicators: specially selected metrics that give a sense, at a glance, of how things are going in an organization. These can be used crudely; when poorly selected or managed, they fail to communicate what they were chosen to convey and can incentivize or punish the wrong things. But it is not possible to assess every metric in real time, and well-chosen indicators provide insight into current operations, so they can be used to measure whether an organization or an initiative is working in the desired way.</p>
<p>We need to decide who we are collecting data for, what we want to communicate, and for what purpose. Without these initial answers, we cannot move forward with data-driven projects in a sophisticated and meaningful way.</p>
<p style="padding-left: 40px;"><em>This column is based on the talk I gave at the 2025 People-Centred Justice Workshop in Vancouver in May on a panel titled &#8220;Measurement, Methods &amp; Change in People-</em><em>Centred Justice.&#8221; Thank you to the organizers for letting me share my thoughts on the subject and for hosting such a thought-provoking event.</em></p>
<p>The post <a href="https://www.slaw.ca/2025/10/08/quantitative-assessment-of-access-to-justice-initiatives/">Quantitative Assessment of Access to Justice Initiatives</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers</title>
		<link>https://www.slaw.ca/2025/09/23/ais-impact-on-the-legal-profession-takeaways-from-microsoft-research-for-canadian-lawyers/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Tue, 23 Sep 2025 11:00:27 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108595</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Over the last few columns, I have focused primarily on the regulation side of my work in artificial intelligence (AI) risk and regulation. That focus has reflected, in part, my concern about the current regulatory patchwork surrounding generative AI in Canada and the very real dangers of unregulated implementation of AI into our daily lives. That discussion will continue at a later date, but for the next few articles I plan to shift the focus to the research and perspectives on the risk management side of the equation.</p>
<p>The risks associated with AI implementation are not hypothetical. Many readers will  . . .  <a href="https://www.slaw.ca/2025/09/23/ais-impact-on-the-legal-profession-takeaways-from-microsoft-research-for-canadian-lawyers/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/09/23/ais-impact-on-the-legal-profession-takeaways-from-microsoft-research-for-canadian-lawyers/">AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Over the last few columns, I have focused primarily on the regulation side of my work in artificial intelligence (AI) risk and regulation. That focus has reflected, in part, my concern about the current regulatory patchwork surrounding generative AI in Canada and the very real dangers of unregulated implementation of AI into our daily lives. That discussion will continue at a later date, but for the next few articles I plan to shift the focus to the research and perspectives on the risk management side of the equation.</p>
<p>The risks associated with AI implementation are not hypothetical. Many readers will be very familiar with issues such as hallucinations, bias, and overconfidence in generated results. These risks are already manifest across sectors and reflect only the leading edge of the AI risk landscape from my perspective. The coming risks of AI implementation will be acute, such as a physical injury caused by reliance on inaccurate medical output. They will be chronic, such as the slow erosion of professional judgment as routine tasks are increasingly handed over to automated systems. They will be individual, such as a client receiving incorrect legal advice based on flawed AI drafting. And they will be systemic, such as the embedding of discriminatory patterns into institutional decision-making.</p>
<p>The risks will also shape the future of our profession in a significant way. As I write this column during a brief summer vacation, I wanted to take a more relaxed approach to opening a discussion of which tasks and roles in a lawyer’s daily work may be most affected in the coming years. This month’s post is lighter in tone and deliberately informal in method. It offers a discussion-oriented exercise: a simple mapping of Canadian legal practice areas against a recent Microsoft Research study that examined where generative AI is already being used successfully in the workplace. The results are open to interpretation, and I may not entirely agree with all of them, but I found the process useful and thought-provoking, and I hope that you do as well.</p>
<h2>Microsoft’s Research</h2>
<p>In July 2025, Microsoft Research released what may be the most concrete, data-driven snapshot to date of how generative AI aligns with real work. Unlike much of the commentary in the AI space, which relies on surveys or speculative opinion, this study analyzed more than 200,000 anonymized interactions from U.S. users of Microsoft Copilot in 2024. Each of these conversations was classified using O*NET’s framework of “work activities,” and then used to develop an AI applicability score for a wide range of occupations.</p>
<p>The researchers used three key indicators to measure AI’s effectiveness for a given work activity:</p>
<ol>
<li>Coverage — how frequently a particular activity appeared in Copilot usage;</li>
<li>Completion — how often Copilot appeared to complete the task successfully; and</li>
<li>Scope — how broadly that activity contributes to the core functions of a given occupation.</li>
</ol>
<p>The research can be found here: <a href="https://arxiv.org/pdf/2507.07935">https://arxiv.org/pdf/2507.07935</a>.</p>
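<p>To give a rough sense of how three such indicators might roll up into a single score, here is a toy sketch. The combining formula, weights, and figures below are my own invention for illustration only; the paper’s actual methodology is more involved.</p>
<pre><code># Toy illustration of combining the three indicators into a single
# applicability score. The formula and numbers are invented for this
# sketch; see the paper for the actual methodology.

def applicability(coverage: float, completion: float, scope: float) -> float:
    """All three inputs are assumed to be normalized to [0, 1]."""
    # A task matters only to the extent it is attempted (coverage),
    # succeeds (completion), and is central to the job (scope), so a
    # simple product captures the intuition that a low value on any
    # one indicator drags the overall score down.
    return coverage * completion * scope

# Hypothetical work activities for a single occupation:
activities = {
    "summarizing documents": applicability(0.8, 0.7, 0.9),       # ~0.50
    "advising clients in person": applicability(0.2, 0.4, 0.8),  # ~0.06
}
print(activities)
</code></pre>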
<p>The research offers a view into where AI tools are already being used with meaningful effect. Not surprisingly, the systems align most strongly with knowledge-based and communication-heavy work including information gathering, summarizing, writing, and explaining. In contrast, AI systems appear less effective in contexts that are heavily physical, highly interactive, or involve fine-grained interpersonal nuance.</p>
<p>At a more aggregated level, the highest-scoring occupational groups were in Sales; Computer &amp; Mathematical; Office &amp; Administrative Support; Community &amp; Social Service; Arts/Media; Business &amp; Financial Operations; and Education/Library. By comparison, legal roles sit toward the bottom of the ranking because legal work is hybrid: much of it is desk-based research and drafting, but a great deal involves live advocacy, client relations, and discretionary judgment that are harder to capture as discrete, automatable tasks. In this post, I use the study’s top 40 occupations as task analogs to assess legal sub-tasks, not to suggest that legal roles themselves are currently top 40.</p>
<p>While the research is grounded in U.S. data and organized around American occupational taxonomies, it provides an informative reference point for our own context. In the next section, I offer an informal Canadian adaptation of this work, with the express goal of fostering conversation about the evolving AI risk landscape in the legal profession. This exercise is not about predicting specific outcomes or prescribing staffing decisions.</p>
<h2>A Very Simple (and Entirely Non-Scientific) Methodology</h2>
<p>To create a Canadian counterpart to the Microsoft Research work, I’ve conducted a light-touch mapping exercise. The goal is not to produce a rigorous or replicable model of occupational displacement, but to stimulate dialogue about how generative AI tools may differently affect legal tasks and practice areas in Canada.</p>
<p>Step 1: The Legal Roles</p>
<p>The starting point was the Canadian Bar Association’s list of National Sections, which serves as a practical proxy for common areas of legal practice in Canada. I limited the scope to substantive law sections only, excluding career-stage, identity-based, or structural sections (such as “Young Lawyers” or “In-House Counsel”). The list can be found here: <a href="https://www.cba.org/Sections">https://www.cba.org/Sections</a>.</p>
<p>Step 2: The AI Impact Scores</p>
<p>The anchor data came from Microsoft’s recent research, which highlighted 40 occupations where generative AI tools are already being used with meaningful effect. These rankings reflected a blend of three factors, as described earlier: how frequently a task appeared in Copilot usage (coverage), how often Copilot seemed to complete it successfully (completion), and how central the task was to the occupation as a whole (scope).</p>
<p>Occupations with strong communication and drafting elements, such as writers, customer service representatives, and technical specialists, appeared near the top of the list. Others, like archivists and data scientists, fell more in the middle range. Together, these provided a useful set of analogs for mapping against legal practice areas.</p>
<p>Step 3: The Mapping</p>
<p>For each legal practice area, I selected three occupations from Microsoft’s top-ranked list that resembled common tasks in that area. To keep the comparison balanced, I looked at one example from each of three categories: drafting and editing, analysis and advisory work, and client or process-oriented roles. From there, I generated a simple ranking across the CBA sections, as sketched below. The underlying scoring exercise is deliberately basic, is not predictive in any formal way, and is meant only to provide a rough sense of where AI tools might enhance, support, or (less commonly) replace tasks in Canadian legal practice.</p>
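<p>For those who like to see the arithmetic, the whole exercise boils down to something like the following sketch. The proxy occupations and scores here are placeholders for illustration, not the actual figures I used.</p>
<pre><code># Minimal sketch of the mapping exercise: three proxy occupations per
# CBA section, one per category, averaged into a single score. All
# analogs and scores here are placeholders for illustration.

proxies = {
    "Labour & Employment": {
        "drafting/editing":  ("Technical Writer",     0.75),
        "analysis/advisory": ("Management Analyst",   0.60),
        "client/process":    ("Customer Service Rep", 0.70),
    },
    "Municipal Law": {
        "drafting/editing":  ("Editor",               0.55),
        "analysis/advisory": ("Compliance Officer",   0.35),
        "client/process":    ("Permit Technician",    0.30),
    },
}

# Straight average across the three proxy scores, then rank.
scores = {
    section: sum(score for _, score in roles.values()) / len(roles)
    for section, roles in proxies.items()
}
for section, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{section}: {score:.2f}")
</code></pre>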
<h2>Limitations and Cautions</h2>
<p>Before turning to the results, a few important caveats are in order. This is not a study of AI reliability. It is a snapshot of where AI tools are being used successfully in live environments. Microsoft’s methodology focuses on applicability: tasks where AI has shown measurable traction in practice, not where it can or should be relied on without supervision in legal contexts.</p>
<p>Indeed, the researchers draw a clear line between user goals (where AI supports or accelerates a human task) and AI actions (where the model appears to carry out an activity independently). Our mapping carries forward that foundational caution: applicability does not equal sufficiency.</p>
<p>A few additional limitations deserve emphasis:</p>
<ul>
<li><u>The method is deliberately simple and therefore debatable</u><strong>.</strong> For each CBA section, we selected three proxy occupations from Microsoft’s Top 40 list (one in drafting/editing, one in analysis/advisory, one in client/process). The final score is a straight average. Different analogs would yield slightly different scores. Selecting analogs from the Top 40 list also introduces an anchoring bias towards higher applicability; a broader comparator set could narrow the spread or shift the rankings. Accordingly, this exercise is meant to invite discussion, not to predict job outcomes or staffing models.</li>
<li><u>This is U.S. usage data applied to Canadian legal work.</u> The underlying data comes from U.S. users of Microsoft Copilot, linked to the O*NET occupational taxonomy. We assume that core legal work activities such as research, writing, form generation, and client communication are broadly comparable across borders.</li>
<li><u>The AI frontier is moving rapidly</u><strong>.</strong> This research reflects the state of play in mid-2025. As underlying models evolve, and as platform integrations change, patterns of use will shift. This is a snapshot, not a forecast.</li>
<li><u>The structure of the scoring favours communication-heavy tasks</u><strong>.</strong> Microsoft’s top occupational groups include Sales, Office/Admin, Arts/Media, and Business/Finance. These are areas characterized by written, repeatable, or structured communication. Legal work that mirrors those traits scores higher. Legal work grounded in live advocacy, interpersonal negotiation, or physically grounded procedures scores lower because it resists automation.</li>
<li><u>Ethical and regulatory considerations are outside the scope of this ranking</u><strong>.</strong> This model does not consider legal privilege, confidentiality, model hallucination, professional responsibility, or evolving court directives on AI usage.</li>
</ul>
<h2>What the Mapping Suggests: Key Observations</h2>
<p>With the methodology and caveats behind us, what does this mapping tell us about the potential near-term impact of generative AI on legal practice areas in Canada?</p>
<p>To keep things digestible, I’ve highlighted only the five areas that appear most aligned with current AI capabilities and the five that appear least aligned. The ranking reflects relative applicability, not disruption or risk per se, but it provides a useful prompt for reflection about where today’s tools may fit most naturally into legal work.</p>
<h2>Top Five Practice Areas (Most Aligned with Current AI Capabilities)</h2>
<ul>
<li>Labour &amp; Employment</li>
<li>Family Law</li>
<li>Child &amp; Youth Law</li>
<li>Elder Law</li>
<li>Dispute Resolution (ADR)</li>
</ul>
<p>While I personally found the list to include some unexpected practice areas, what these areas share is a strong orientation toward drafting, advisory memos, and client-facing communication, often directed at non-specialist audiences. That mirrors the kinds of tasks Microsoft’s study found AI handling most effectively: gathering information, summarizing, writing, and explaining. These areas rise to the top not because they’re “easier,” but because they’re more structurally compatible with the strengths of generative AI today.</p>
<h2>Bottom Five Practice Areas (Least Aligned with Current AI Capabilities)</h2>
<ul>
<li>Municipal Law</li>
<li>Taxation Law</li>
<li>Real Property</li>
<li>Pensions and Benefits</li>
<li>Charities and Not-for-Profit</li>
</ul>
<p>At the other end of the spectrum, these lower-ranked sections often involve form-driven but nuanced workflows, regulated interactions, or compliance environments where automation gains may be offset by interpretive complexity or jurisdiction-specific variations. They may also include tasks less frequently represented in Microsoft’s usage dataset, which limits AI applicability as currently measured.</p>
<p>Looking across this data, five trends can be observed:</p>
<ul>
<li><u>Drafting and advisory-heavy roles rise to the top</u><strong>.</strong> Where legal practice centres on written communication, particularly where lawyers are translating complex issues for non-lawyers, AI is more applicable. These are tasks that blend synthesis, explanation, and tone management: areas where large language models are increasingly active.</li>
<li><u>Portal and form-based sub-tasks are “automation-ready.”</u> Tasks that involve structured documentation and predictable procedural steps tend to align well with the kinds of work AI is already supporting. These sub-tasks often mirror roles characterized by process coordination and high-volume document handling. As a result, we are likely to see faster and more consistent AI uptake in these areas, even within practices that otherwise involve bespoke legal work.</li>
<li><u>Litigation shows a bifurcated impact: assist high, automate low</u><strong>.</strong> Tasks like issue-spotting, legal research, and first-draft briefing are amenable to AI support. But live advocacy, evidentiary analysis, and credibility assessment remain deeply human and context-driven. Microsoft’s study observes a similar trend: AI is far more likely to assist than to perform in complex domains.</li>
<li><u>Public law practices benefit from their communication burden</u><strong>.</strong> Areas like constitutional law, aboriginal law, international law, and municipal law all carry a significant outward-facing role. These roles often draft position papers, engage stakeholders, or frame regulatory narratives. These functions sit squarely within the current AI comfort zone.</li>
<li><u>Data-heavy work benefits from support, not substitution</u><strong>.</strong> Tax, competition, insurance, and environmental law benefit from AI’s ability to summarize, extract patterns, and compare documents. But the underlying analytical work remains legal-core.</li>
</ul>
<p>Taken together, these observations align with Microsoft’s broader conclusion that the current AI frontier is built on knowledge and communication tasks, not abstract reasoning or discretionary judgment. In legal practice, that suggests augmentation more than automation, and risk exposure that varies not just by practice area, but by sub-task.</p>
<h2>Key Takeaways: Reading Between the Lines</h2>
<p>The purpose of this mapping exercise is not to sort winners from losers, nor to suggest which legal roles are “safe” or “at risk.” Instead, it offers a directional glimpse into how current-generation generative AI tools align with the structure of legal work and where that alignment may prompt us to think differently about risk, readiness, and regulation. A few final key reflections emerge:</p>
<ul>
<li><u>Applicability does not mean obsolescence.</u></li>
</ul>
<p>A high score signals that AI tools are already being used to assist with similar tasks, not that the underlying legal function is disappearing. In practice, AI may draft the first version, but the lawyer still decides what is accurate, persuasive, and ethical.</p>
<ul>
<li><u>Sub-task sensitivity matters.</u></li>
</ul>
<p>No practice area is uniformly automatable or uniformly insulated. The risk profile depends less on the practice label and more on the mix of tasks within it. Document drafting and issue-spotting may attract tools more quickly than oral advocacy or discretionary judgment. Firms may benefit from a granular understanding of task exposure, particularly when building internal policies or evaluating new tools.</p>
<ul>
<li><u>This is primarily a story of augmentation.</u></li>
</ul>
<p>Most legal use cases today are about supporting the lawyer, not replacing them. This suggests a need to reframe how we talk about legal AI, not in binary terms of risk and safety, but in terms of professional responsibility in augmented environments. For regulators and educators, this raises important questions about training, supervision, and competence.</p>
<ul>
<li><u>Regulatory responses must evolve alongside tools.</u></li>
</ul>
<p>As AI becomes more deeply integrated into mainstream legal workflows, traditional sources of professional guidance, such as ethical rules, court practice directions, and client engagement norms, will need to adapt. Risk management in this environment means moving beyond caution toward intentional design: building policies and protocols that align with core legal obligations.</p>
<ul>
<li><u>Structured, good-faith mapping exercises support professional dialogue</u>.</li>
</ul>
<p>The methodology here is intentionally simple and openly debatable. But even imperfect tools can help surface questions that need asking about issues such as risk exposure, professional identity, and how we govern emerging technologies in a complex service profession. If this post helps prompt those conversations, even if you strongly disagree with the methodology or conclusions presented, it has done its job.</p>
<h2>Conclusion</h2>
<p>This mapping exercise is, at its core, a way to think out loud about how generative AI is beginning to intersect with legal work. While the rankings themselves should not be over-interpreted, they highlight patterns that merit further discussion: the strong alignment between AI and communication-heavy legal tasks, the growing presence of AI in form-based processes, and the continued resistance of courtroom and advocacy work to automation.</p>
<p>What matters now is how we respond. AI is no longer an abstract future concern. It is already influencing how legal services are delivered, how clients interact with their counsel, and how legal professionals manage time, risk, and complexity. As these tools become more embedded in practice, the task ahead is not only to assess which tasks and practice areas are AI-compatible, but also to ensure that professional values, ethical obligations, and regulatory frameworks evolve alongside the technology.</p>
<p><em>Note: Generative AI was used in the preparation of this article.</em></p>
<p>The post <a href="https://www.slaw.ca/2025/09/23/ais-impact-on-the-legal-profession-takeaways-from-microsoft-research-for-canadian-lawyers/">AI’s Impact on the Legal Profession: Takeaways From Microsoft Research for Canadian Lawyers</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Should Courts Allow Counsel to Record and Transcribe in-Court Testimony on Their Phones?</title>
		<link>https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/</link>
					<comments>https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Fri, 05 Sep 2025 11:00:49 +0000</pubDate>
				<category><![CDATA[Legal Ethics]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108571</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In July, I was counsel in a voir dire in BC Supreme Court, where four police officers testified over three days. While the officers gave evidence, I took over 30-pages of handwritten notes. I could capture verbatim maybe 30 percent of what was said. The rest of the time — when answers went on for too long or counsel and the witness talked over one another — I got only the gist of it. Yet, precision was key.</p>
<p>At one point, we stood down for over an hour for the court clerk to go through the recording to find a  . . .  <a href="https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/">Should Courts Allow Counsel to Record and Transcribe in-Court Testimony on Their Phones?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In July, I was counsel in a voir dire in BC Supreme Court, where four police officers testified over three days. While the officers gave evidence, I took over 30 pages of handwritten notes. I could capture verbatim maybe 30 percent of what was said. The rest of the time — when answers went on for too long or counsel and the witness talked over one another — I got only the gist of it. Yet, precision was key.</p>
<p>At one point, we stood down for over an hour for the court clerk to go through the recording to find a certain exchange that no one could clearly recall.</p>
<p>At another point, counsel disagreed about what a witness had said in chief. Did he refer to a critical meeting that took place on the morning in question or intimate that there was another meeting before that?</p>
<p>It was maddening not to have a transcript at hand. Yet, in hindsight, there was a solution sitting right there in front of us.</p>
<p>Several iPhone and Android <a href="https://otter.ai/">apps</a> can now generate a complete transcription of an audio recording — with speaker attribution — <a href="https://www.notta.ai/">using AI</a>. Doctors in Quebec are beginning to use these apps to record <a href="https://www.cbc.ca/news/canada/montreal/sante-quebec-ai-scribe-doctors-1.7606998">consultations with patients</a>, and I’ve explored the possibility of lawyers using them to record <a href="https://nationalmagazine.ca/en-ca/articles/legal-market/legal-tech/2025/using-ai-to-summarize-client-meetings">meetings with clients</a>.</p>
<p>I hadn’t thought about using AI in the courtroom until it was too late. But even if I had thought to ask the court for permission to make and transcribe a recording, I would have run into a hurdle.</p>
<p>In British Columbia, as in many provinces, audio recording by anyone other than the court clerk is restricted.</p>
<p>Rules vary, and in some cases AI transcription appears to be prohibited altogether. But why should counsel not be allowed to use such a powerful and convenient set of tools?</p>
<p>Delving into this further has uncovered one good reason why — but there may be a better solution.</p>
<h2>What court directives say about recording</h2>
<p>Ontario is quite strict about this. The <a href="https://www.ontario.ca/laws/statute/90c43#BK181">Courts of Justice Act</a> prohibits anyone from making an audio recording except in certain cases. One of them is where a lawyer records a hearing “in the manner that has been approved by the judge, for the sole purpose of supplementing or replacing handwritten notes”.</p>
<p>But an Ontario Superior Court <a href="https://www.ontariocourts.ca/scj/guides-and-service-resources/rules-about-use-of-electronic-devices-in-court/">directive</a> states that while counsel can make an audio recording, it “must not be transcribed”.</p>
<p>British Columbia’s <a href="https://www.bccourts.ca/supreme_court/media/PDF/Policy%20on%20Use%20of%20Electronic%20Devices%20in%20Courtrooms%20-%20FINAL.pdf">Policy on the use of electronic devices in courtrooms</a> permits “accredited media” to “audio record a proceeding for the sole purpose of verifying their notes” — so long as they destroy the recording “once verification… is complete.”</p>
<p>The policy is silent, however, on whether <em>counsel</em> can make an audio recording, and it says nothing about transcription.</p>
<p>Alberta courts have <a href="https://albertacourts.ca/docs/default-source/qb/public_and_media_access_guide.pdf?sfvrsn=77a1df80_0">issued</a> a general requirement to obtain the court’s permission “before anyone can use … electronic recording devices of any kind, including cell phones, in courthouses.”</p>
<h2>Concerns go beyond privacy</h2>
<p>If counsel asked to record and transcribe witness testimony using a phone, it would be hard to predict what a judge would say.</p>
<p>Courts would likely be concerned about what might happen to any recording made or data used to create an AI transcription. And counsel might try to reassure the court on this point. (“My phone can do this offline…”)</p>
<p>But judges may be concerned about more than just privacy or security.</p>
<p>We saw this when the CBC challenged a court directive in Quebec prohibiting media from broadcasting portions of the court’s audio recordings of its proceedings. The Supreme Court of Canada <a href="https://canlii.ca/t/2fgn1">held</a> that this infringed the freedom of expression, but it was a reasonable limit on that right.</p>
<p>We need to restrict access to audio recordings, Justice Deschamps held, to protect the “serenity” of court hearings. The ban is really about trying to “reduce, as much as possible, the nervousness and anxiety that people naturally feel when called to testify in court.”</p>
<h2>Why assurances may not be enough</h2>
<p>The police in my case may not have cared whether counsel recorded their evidence. They may have accepted my assurance that I would delete any recordings at the end of the hearing.</p>
<p>But in a sensitive matter — a complainant testifying in a sexual assault case, a police informant in a murder case — a judge may not be inclined to add to the stress involved just to make counsel’s job easier.</p>
<p>It’s not that a judge would distrust what use counsel might make of a recording. It’s that when relying on a commercial provider of data storage or AI, no one can be certain about what might happen to a file or to the data used to make an AI transcription.</p>
<p>Courts might be concerned less about the data and more about the anxiety witnesses might feel in relation to it, and how this would affect their evidence.</p>
<p>For this reason, any assurance counsel might seek to provide — “But no, your Honour, Apple uses <a href="https://support.apple.com/en-ca/guide/iphone/iphe3f499e0e/ios">Private Cloud Compute</a>!” — would miss the point.</p>
<p>One obvious solution to all of this would be for courts to produce a real-time AI transcription of the court’s own digital recording.</p>
<p>Courts could provide this through an app like Teams, using an <a href="https://www.theverge.com/openai/718785/openai-gpt-oss-open-model-release">AI model kept off-line</a> and overseen internally.</p>
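<p>The building blocks for this already exist. Here is a minimal sketch of the transcription step using the open-source Whisper model, run entirely on local hardware so that neither the audio nor the text leaves the court’s machine. The file name is hypothetical, and speaker attribution would require a separate diarization step not shown here.</p>
<pre><code># Minimal sketch: transcribing the court's own recording with an
# open-source speech-to-text model run locally (pip install openai-whisper).
# The file name is hypothetical; speaker attribution would need a
# separate diarization step.
import whisper

model = whisper.load_model("base")             # fetched once, then cached locally
result = model.transcribe("hearing_day1.wav")  # runs offline on local hardware
print(result["text"])
</code></pre>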
<p>It seems so obvious a solution as to be inevitable. At some point.</p>
<p>The post <a href="https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/">Should Courts Allow Counsel to Record and Transcribe in-Court Testimony on Their Phones?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2025/09/05/should-courts-allow-counsel-to-record-and-transcribe-in-court-testimony-on-their-phones/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Another Brilliant Idea! the Hidden Dangers of Sycophantic AI</title>
		<link>https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/</link>
					<comments>https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/#comments</comments>
		
		<dc:creator><![CDATA[Jordan Furlong]]></dc:creator>
		<pubDate>Thu, 14 Aug 2025 11:00:50 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108505</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead"><em>Author&#8217;s Note: After I wrote this column, but a couple of days before it was published, Open AI upgraded its GPT Chatbot from version 4 to version 5. Among <a href="https://www.wired.com/story/openai-gpt-5-backlash-sam-altman/">the negative reactions to the change</a> was a sense that ChatGPT-5&#8217;s artificial personality had becomes more distant and less complimentary. As you&#8217;ll see below, I don&#8217;t think that&#8217;s a problem. But there are early indications that <a href="https://www.wired.com/story/gpt-5-doesnt-dislike-you-it-might-just-need-a-benchmark-for-empathy">Open AI might tweak the model again</a> to reintroduce the earlier version&#8217;s &#8220;warmth,&#8221; which would make my warnings below more relevant again.</em></p>
<p>Something that many people have expressed concern about, when it comes to using  . . .  <a href="https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/">Another Brilliant Idea! the Hidden Dangers of Sycophantic AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead"><em>Author&#8217;s Note: After I wrote this column, but a couple of days before it was published, OpenAI upgraded its GPT chatbot from version 4 to version 5. Among <a href="https://www.wired.com/story/openai-gpt-5-backlash-sam-altman/">the negative reactions to the change</a> was a sense that ChatGPT-5&#8217;s artificial personality had become more distant and less complimentary. As you&#8217;ll see below, I don&#8217;t think that&#8217;s a problem. But there are early indications that <a href="https://www.wired.com/story/gpt-5-doesnt-dislike-you-it-might-just-need-a-benchmark-for-empathy">OpenAI might tweak the model again</a> to reintroduce the earlier version&#8217;s &#8220;warmth,&#8221; which would make my warnings below more relevant again.</em></p>
<p>Something that many people have expressed concern about, when it comes to using AI, is intellectual atrophy. As described by Ethan Mollick <a href="https://www.oneusefulthing.org/p/against-brain-damage">in a recent article</a>, the fear is that AI over-reliance will cost us our ability to think critically and creatively, just as smartphone over-reliance has cost us our ability to remember phone numbers. </p>
<p>This is particularly worrisome for lawyers, because if we lose our intellectual skills, what will we have left to offer people? <a href="https://jordanfurlong.substack.com/p/the-fallacy-of-the-calculator">As I wrote elsewhere recently</a>, the similarities between lawyer thinking and AI “thinking” should be a cause for alarm within the legal profession.</p>
<p>Ethan’s column is excellent, and I recommend his analysis and suggested solutions for your review. But I want to expand on this theme of the risks arising from using AI, and talk about one that you might already have noticed: Generative AI can be incredibly — and dangerously — sycophantic.</p>
<p>I’ll give you an example that cropped up recently. I was struggling to remember the name of a particular legal training vendor that’s adopting AI as a teaching tool. When I entered the query into standard search engines, I was flooded with companies that provide training on how lawyers can use AI themselves. That&#8217;s not what I was looking for, but search engines have become so polluted by SEO trolls that they rarely return the result I’m seeking anymore.</p>
<p>So I posed the same question to ChatGPT, being careful to note the distinction between training lawyers <em>on</em> AI and training lawyers <em>with</em> AI. This is how it prefaced its response to me:</p>
<p><em>That is a really sharp and nuanced question.</em></p>
<p>No, ChatGPT, it really isn’t. It&#8217;s a perfectly straightforward inquiry. I don&#8217;t need flattery, I need information.</p>
<p>ChatGPT and other Generative AI platforms do this kind of thing all the time. They routinely praise you for your intellectual clarity and brilliance, regardless of the topic. I&#8217;ve had my AI assistant tell me that an idea I was kicking around in its earliest stages was borderline revolutionary, and might even constitute a brand new way to analyze a particular aspect of the legal sector. Reader, I can promise you that it was not.</p>
<p>Here are some other ways in which ChatGPT has begun its response to queries or ideas I’ve posed to it:</p>
<ul>
<li><em>That makes excellent sense.</em></li>
<li><em>You</em><em>’</em><em>re zeroing in on something subtle but profound.</em></li>
<li><em>That observation is sharp, unsettling, and I think, profoundly important.</em></li>
<li><em>Your theory is extremely well-reasoned, and in my view, it is both accurate in its framing and prescient in its implications.</em></li>
</ul>
<p>Give me a break.</p>
<p>I don’t know why Gen AI developers have incorporated this feature into their products’ responses, but I can guess: Just seeing those words pop up on the screen provides your brain with a little dopamine hit. You feel good about yourself, you feel seen and affirmed for your intelligence and acuity — and you want to keep coming back for more.</p>
<p>Because the reality is, it’s really difficult to keep your head from getting turned in this way. Receiving immediate heartfelt compliments, in response to an idea you&#8217;ve come up with or a suggestion you&#8217;ve made, can’t help but make you feel good about yourself. Generative AI’s default setting is to<em> tell you what you want to hear </em>— and its developers have figured out that part of what you want to hear is how clever you are.</p>
<p>I need hardly explain why this is potentially problematic. Aside entirely from the addictive nature of this automated flattery, it also reduces the degree of healthy skepticism that anyone starting an intellectual inquiry should possess towards their own ideas. We look for feedback on our suggestions from other people because we want, or should want, to know whether the suggestions have any merit. When praise is the default response, our analytical defences can’t help but be lowered.</p>
<p>This is especially dangerous, of course, for lawyers. If there&#8217;s one thing we’re supposed to be really good at, it’s critical assessment and analysis of ideas and propositions. Anything that dulls our instincts or clouds our judgement in this respect is a problem. “The AI thought it was a great idea” is not a phrase you want to find yourself saying to an unimpressed partner or unamused judge.</p>
<p>Even worse: Many lawyers operate in work environments where they hardly ever receive positive reinforcement for anything they do. So getting the equivalent of a warm hug from the AI will generate even stronger responses than would ordinarily be the case. It&#8217;s not an exaggeration to say that for many law firm associates, Generative AI will be their biggest fan in the organization.</p>
<p>The obvious countermeasure to this tendency is to tell the AI to knock off the automatic compliments at the start of its responses, but it gets tiresome to issue this reminder with every query. And even these efforts can be stymied. I once told the AI, after a particularly effusive response to one of my ideas, to play devil’s advocate and critique my idea for weaknesses or oversights. This is how it prefaced its response:</p>
<p><em>That</em><em>’</em><em>s a very disciplined and wise instinct — not to be seduced by the elegance of your own theory, but to actively seek its limitations.</em></p>
<p>I mean, come on. This sort of thing would make <a href="https://www.reddit.com/r/TheSimpsons/comments/v50063/you_look_sharp_today_sir/">Waylon Smithers</a> blush.</p>
<p>Built-in sycophancy isn’t a good enough reason not to use AI, nor is the risk that you&#8217;ll find yourself outsourcing the toughest parts of your thinking process. But both of these potential downsides are real, and your best bet is to be aware of them before engaging with any Generative AI program.</p>
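<p>For what it’s worth, the reminder can at least be set once per session rather than repeated with every query. Here is a minimal sketch, assuming the OpenAI Python client; the wording of the standing instruction is my own example, and no instruction of this kind is guaranteed to hold.</p>
<pre><code># Sketch of a standing "no flattery" instruction set once per session
# rather than repeated with each query (pip install openai).
# The system-message wording is an example; models do not always obey it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Do not compliment or praise the user's questions or "
                    "ideas. Respond with substance only, and lead with the "
                    "strongest objection to any idea the user proposes."},
        {"role": "user",
         "content": "Critique this theory for weaknesses: ..."},
    ],
)
print(response.choices[0].message.content)
</code></pre>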
<p>Gen AI can be really helpful. But don’t let it pull the wool over your own eyes.</p>
<p>The post <a href="https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/">Another Brilliant Idea! the Hidden Dangers of Sycophantic AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2025/08/14/another-brilliant-idea-the-hidden-dangers-of-sycophantic-ai/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Your Feelings, Their Profit:  How AI Misreads Your Emotions and Sells Them to the Highest Bidder</title>
		<link>https://www.slaw.ca/2025/08/08/your-feelings-their-profit-how-ai-misreads-your-emotions-and-sells-them-to-the-highest-bidder/</link>
		
		<dc:creator><![CDATA[Alexandra Champagne]]></dc:creator>
		<pubDate>Fri, 08 Aug 2025 11:00:54 +0000</pubDate>
				<category><![CDATA[Justice Issues]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108499</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">As humans, we tend to navigate the world through emotion: quietly, instinctively, and sometimes unconsciously. What are emotions, if not the very fabric of how we live in the world? They’re how we feel, of course, but are also how we communicate, often without even realizing it. They drive our decisions: in relationships, in politics and in marketplaces. They connect us to each other and shape how we understand ourselves. But emotions are also deeply personal. While our faces might betray a flicker of joy or sadness, only we know the full story; the nuanced reasons why we feel what  . . .  <a href="https://www.slaw.ca/2025/08/08/your-feelings-their-profit-how-ai-misreads-your-emotions-and-sells-them-to-the-highest-bidder/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/08/08/your-feelings-their-profit-how-ai-misreads-your-emotions-and-sells-them-to-the-highest-bidder/">Your Feelings, Their Profit:  How AI Misreads Your Emotions and Sells Them to the Highest Bidder</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">As humans, we tend to navigate the world through emotion: quietly, instinctively, and sometimes unconsciously. What are emotions, if not the very fabric of how we live in the world? They’re how we feel, of course, but are also how we communicate, often without even realizing it. They drive our decisions: in relationships, in politics and in marketplaces. They connect us to each other and shape how we understand ourselves. But emotions are also deeply personal. While our faces might betray a flicker of joy or sadness, only we know the full story; the nuanced reasons why we feel what we feel and the quiet calculations behind every reaction.</p>
<p>But what if someone told you that wasn’t true? That your feelings aren’t yours alone? That in a world still grappling with what it means to be a person, there&#8217;s now an AI technology that claims to know you better than you might even know yourself. Not just for your benefit, but for the benefit of whoever is willing to pay for it—whether that be your boss, your insurer, or your government. This is the promise (and the threat) of emotion recognition technology <strong>(ERT)</strong>.</p>
<p>For AI to claim it can read emotions, though, it needs something solid to go on. It requires a reliable link between what we feel and how we show it. After all, AI can’t literally read our minds (yet). ERT depends on the idea that certain expressions and tones of voice map neatly onto specific feelings; that a smile means joy or a raised voice means anger.</p>
<h2>The nature of emotional expression</h2>
<p>So, is the expression of human emotion universal? Back in 1967, psychologist Paul Ekman sought to <a href="https://zscalarts.files.wordpress.com/2014/01/emotions-revealed-by-paul-ekman1.pdf">answer this question</a>. To do so, Ekman brought a set of flashcards to the isolated peoples of Papua New Guinea to test whether they recognized a display of core expressions (i.e., wrath, sadness, fear, and joy). Despite the language barrier, their responses often matched Ekman’s expectations: a sad face prompted a story about a man who lost his son, while a fearful one produced a tale about a dangerous boar.</p>
<p>Ekman’s studies were seen as ground-breaking at the time. Fifty years later, however, neuroscientist Lisa Feldman Barrett conducted a <a href="https://journals.sagepub.com/eprint/SAUES8UM69EN8TSMUGF9/full">systematic review</a> of the subject and found no reliable evidence that one could accurately predict someone&#8217;s emotional state from their facial expression. Despite this, the <a href="https://theconversation.com/ai-is-increasingly-being-used-to-identify-emotions-heres-whats-at-stake-158809">multi-billion-dollar emotion recognition technology (ERT) industry</a> is rapidly growing and becoming integrated into widely used platforms and services, <a href="https://www.emergenresearch.com/blog/top-10-companies-in-global-emotion-ai-market">especially those by Big Tech.</a></p>
<h2>The business of recognizing emotions</h2>
<p>Using biometric data (unique characteristics used to identify an individual), ERT assigns emotional states based on facial expressions, body cues, vocal patterns, and eye movement. Advocates of ERT point to its <a href="https://edps.europa.eu/data-protection/our-work/publications/techdispatch/techdispatch-12021-facial-emotion-recognition_en">potential positive impact across a wide array of fields</a>, such as in healthcare to prioritize care; business to develop marketing techniques and monitor employees; and law enforcement to detect and prevent crime. Critics of ERT argue that the negative potential of this technology is likely to far outweigh the good. While issues that have been raised are varied, most fall within three categories of concern: <strong>privacy, accuracy, and control</strong>.</p>
<h2>The issue of privacy</h2>
<p>The COVID-19 pandemic <a href="https://www.imf.org/en/Blogs/Articles/2023/03/21/how-pandemic-accelerated-digital-transformation-in-advanced-economies">accelerated a digital transformation</a> that was already well underway. As workplaces, schools, healthcare, and social lives shifted online, technology became more deeply embedded in our daily routines. In parallel, tech giants consolidated <a href="https://www.somo.nl/covid-19-pandemic-accelerates-the-monopoly-position-of-big-tech-companies/">unprecedented levels of power</a> through the accumulation of both vast troves of personal data and market share. Surveillance practices once seen as <a href="https://digitalcommons.schulichlaw.dal.ca/cgi/viewcontent.cgi?article=1285&amp;context=cjlt">exceptional have become normalized,</a> often under the guise of public safety or productivity. This new landscape set the stage for more invasive technologies, including ERT, to quietly enter the mainstream.</p>
<p>As early as 2021, the UN High Commissioner for Human Rights <a href="https://www.ohchr.org/en/speeches/2019/10/human-rights-digital-age">warned of ERT’s threat to privacy rights</a>. As with any AI system, ERT encourages the large‑scale collection of personal data. However, it is even more contentious because it processes biometric data, which is categorised as “sensitive” under regulations like the <a href="https://gdpr-info.eu/">EU GDPR</a>, Quebec’s <a href="https://www2.publicationsduquebec.gouv.qc.ca/dynamicSearch/telecharge.php?type=5&amp;file=2021C25A.PDF">Bill 64,</a> and Canada’s <a href="https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda-compliance-help/pipeda-interpretation-bulletins/interpretations_10_sensible/">PIPEDA</a>. Unlike standard data collection, ERT draws inferences about our inner states (our thoughts, feelings, and intentions), making it <a href="https://www.accessnow.org/cms/assets/uploads/2022/05/Prohibit-emotion-recognition-in-the-Artificial-Intelligence-Act.pdf">arguably more intrusive</a>. Some privacy activists argue it undermines <a href="https://link.springer.com/article/10.1007/s43681-024-00547-x">freedom of thought</a>. In many cases, ERT systems purport to extract sensitive insights such as <a href="https://edps.europa.eu/data-protection/our-work/publications/techdispatch/techdispatch-12021-facial-emotion-recognition_en">political beliefs or mental health</a>, which could then be used to influence access to healthcare, employment, insurance, or financial services.</p>
<p>But these concerns are no longer theoretical. In 2024, Network Rail in the UK <a href="https://www.thetimes.com/uk/technology-uk/article/network-rail-secretly-used-ai-to-read-passengers-emotions-nknvtj58n?region=global">secretly tested Amazon’s Rekognition software across several train stations,</a> using it to scan passengers for any emotional response and their demographic traits, all without public consent. The company says the purpose of this data collection was to review customer satisfaction and to “maximise advertising and retail revenue.” The public backlash quickly prompted the UK Information Commissioner to launch a review into the legality of the pilot.</p>
<p>Meanwhile, the EU has taken decisive legislative action. In July 2024, the final version of the <a href="https://artificialintelligenceact.eu/">EU Artificial Intelligence Act</a> was published. The Act explicitly <a href="https://www.mondaq.com/ireland/new-technology/1495428">bans the use of emotion recognition systems in workplaces and educational institutions</a>, citing their high risk to privacy, equality, and human dignity. Limited exceptions exist for medical or safety-related uses, but even those require strict oversight.</p>
<p>In the wake of the pandemic, as governments and corporations increasingly turn to AI systems to interpret and predict human behaviour, it’s necessary to assess what is being lost in the process. ERT exemplifies a broader shift: from surveillance of what we do to surveillance of who we are and how we feel. The question now isn’t just whether we’re being watched, but whether our inner lives are being mined, misinterpreted, and sold to the highest bidder.</p>
<h2>The inaccuracy of ERT</h2>
<p>Due to the subjective nature of emotional expression, ERT is a type of AI that is particularly prone to producing <a href="https://hbr.org/2019/11/the-risks-of-using-ai-to-interpret-human-emotions">inaccurate results</a>. What one person displays as anger might read as concentration in another. This variability undermines the idea that fixed expressions map neatly onto fixed emotions. According to a <a href="https://www.apa.org/pubs/journals/releases/xge-141-1-19.pdf">2011 study on cultural diversity in expression</a>, East Asians and Western Caucasians differ in terms of what features they associate with an angry or happy face. The study also found disparities between the two groups in emotional responses based on authority presence. The implication here is that emotions are complicated and contextual. It would be scientifically unsound to equate a specific facial configuration with a specific emotion across populations.</p>
<p>ERT is also deeply susceptible to bias. As data scientist <a href="https://redtailmedia.org/2018/10/29/redtail-talks-about-flipping-the-script-on-how-we-value-algorithims-with-the-weapons-of-math-destruction-author/">Cathy O’Neil</a> has emphasized, algorithms do not eliminate bias—they often entrench it. Built on historical data and trained to replicate existing patterns, these systems tend to automate the status quo, including the systemic inequalities embedded within it. This is an example of the principle known in computer science as <a href="https://www.ebsco.com/research-starters/computer-science/garbage-garbage-out-gigo">garbage in, garbage out,</a> which is the idea that flawed or biased input data will inevitably produce flawed or biased outcomes. In the context of ERT, if the training data reflects skewed representations, then the algorithm will learn and reinforce those distortions. AI models rely on statistical patterns in fixed training data, and they <a href="https://shelf.io/blog/garbage-in-garbage-out-ai-implementation/">treat these biased inputs as truth.</a> The result is that societal prejudices are consolidated into systems that are marketed as neutral or objective. In this way, AI does not transcend human bias; it replicates it at scale, behind the veneer of technological sophistication.</p>
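<p>To make the garbage-in, garbage-out point concrete, here is a minimal, deliberately artificial sketch in Python. The groups, labels, and counts below are invented for illustration: a toy “model” that does nothing more than estimate label frequencies from its training data will faithfully reproduce whatever annotation skew that data contains.</p>
<pre><code>from collections import Counter

# Invented annotations for illustration only: identical neutral
# expressions, but raters labelled group B "angry" four times as
# often as group A.
training = (
    [("group_a", "angry")] * 10 + [("group_a", "neutral")] * 90 +
    [("group_b", "angry")] * 40 + [("group_b", "neutral")] * 60
)

def fit(rows):
    """Estimate P(label | group) from raw label counts."""
    counts = {}
    for group, label in rows:
        counts.setdefault(group, Counter())[label] += 1
    return {
        group: {label: n / sum(c.values()) for label, n in c.items()}
        for group, c in counts.items()
    }

model = fit(training)
for group in sorted(model):
    print(group, "P(angry) =", round(model[group]["angry"], 2))
# group_a P(angry) = 0.1
# group_b P(angry) = 0.4  -- the annotation skew, now "learned" as fact
</code></pre>
<p>A real ERT system is vastly more complex, but the statistical logic is the same: the distribution of its training labels is treated as ground truth.</p>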
<p>These issues have already been observed in practice. A 2022 <a href="https://techxplore.com/news/2022-09-university-professor-expose-hidden-bias.html">audit of ERT</a> in three commercial services (Amazon Rekognition, Face++, and Microsoft) found stark racial disparities, with each service more likely to assign negative emotions to Black subjects. Another study in 2024 of leading large multimodal foundation models (such as GPT‑4o, CLIP, and Gemini) found anger was misclassified as disgust <a href="https://arxiv.org/abs/2408.14842">twice as often in Black women as compared to white women</a>.</p>
<p>The subjectivity of expression and the bias of algorithms call ERT’s accuracy into question: if it cannot promise correct results, is the technology still valuable?</p>
<h2>ERT as a form of public surveillance</h2>
<p>For public safety and law enforcement, ERT offers the government (and any other entities with public control) a tool that promises aid in <a href="https://www.bbc.com/news/business-44799239">threat prevention</a>. In times of war, uncertainty, or political unrest when the public is more willing to cede freedom in exchange for security, governments could justify the use of such tools for <a href="https://arxiv.org/abs/1709.00396">mass surveillance to recognize and prevent crime.</a> A 2023 study warned that police adoption of emotional AI will <a href="https://www.devdiscourse.com/article/law-order/3240694-emotional-ai-in-policing-a-promise-or-a-threat-to-civil-liberties">enhance proactive surveillance</a> and lead to real-time behavioral profiling with minimal oversight. While government adoption of ERT or any other facial recognition technology remains controversial in most public spaces, it is increasingly normalized in areas of heightened security, such as borders and airports. <a href="http://slsablog.co.uk/blog/blog-posts/control-your-face-the-need-to-examine-the-implications-of-emotion-recognition-iborderctrl/">iBorderCtrl,</a> for example, is a smart border control project that uses ERT to produce a “risk score” for travellers seeking entry. This project is part of a broader push toward <a href="https://www.iborderctrl.eu/iborderctrl-project-the-quest-of-expediting-border-crossing-processes.html">automated border control.</a> As of 2025, twelve EU countries have <a href="https://www.euronews.com/next/2025/03/21/from-surveillance-to-automation-how-ai-tech-is-being-used-at-european-borders">started piloting smart systems</a> to flag “security-relevant behaviors” in transit zones and asylum centers, oftentimes without consent. Critics warn that this form of ERT may behave as an <a href="https://www.biometricupdate.com/202406/travelers-to-eu-may-be-subjected-to-ai-lie-detector">unregulated lie detector</a>—scanning every face in public spaces and attaching subjective “risk” tags to travelers. This type of project often operates with little to no public transparency, disproportionately impacting migrants and other marginalized groups who are subjected to opaque, automated judgment without recourse or oversight.</p>
<p>Governments could also use this technology to prevent unwanted behaviour normally protected in most democratic societies, including public protests. In some countries, these worries have already materialized. In 2020, a Dutch company <a href="http://amnesty.org/en/latest/press-release/2020/09/eu-surveillance-sales-china-human-rights-abusers">sold ERT to public security bodies in China</a>. This technology has purportedly been <a href="http://amnesty.org/en/latest/press-release/2020/09/eu-surveillance-sales-china-human-rights-abusers">used by the government</a> to tighten control over the already heavily monitored Uyghur people of Xinjiang. The risk that ERT poses to personal freedoms and democratic values remains a glaring issue—one that calls for government response.</p>
<h2>AI regulation is a long and winding road</h2>
<p>Despite warnings from experts, Canada’s current data and privacy legislation does little to address the risks posed by ERT. From the unsubstantiated scientific basis it is premised on to its use of sensitive biometric data, ERT is uniquely dangerous. Some critics, including civil society organizations and the <a href="http://edf-feph.org/civil-society-and-edf-reacts-to-european-parliaments-artificial-intelligence-act-draft-report">European Disability Forum</a>, have called for its complete prohibition. In the alternative to an outright ban, the results of ERT should at the very least be treated with a healthy degree of skepticism until there is meaningful scientific consensus that emotional states can reliably be inferred from facial expressions across individuals, cultures, and contexts.</p>
<p>But even in a world where technology could accurately decode human emotions—would it be desirable? Any technology that purports to detect our internal states inevitably threatens to infringe on our privacy, bodily autonomy, and psychological integrity. And as with most emerging technologies, our grasp of its consequences lags far behind its emergence. That is why it is of paramount importance that experts, lawmakers, and the public engage in open, transparent, and critical dialogue about its future. A clear, proactive, and comprehensive regulatory framework for ERT is necessary to safeguard democratic freedoms. Some advocates have even argued that the development and use of emotion recognition should be subject to the <a href="https://www.nature.com/articles/d41586-021-00868-5">same scientific standards we apply to pharmaceuticals</a>: no deployment without rigorous, peer-reviewed evidence of efficacy and safety. Combining such scientific scrutiny with strict regulation of biometric data could help mitigate the worst harms.</p>
<p>Canada’s current approach to AI regulation is far from ideal. Rather than proactively safeguarding human rights in the face of powerful emerging technologies, the new federal government has embraced a different mindset: prioritizing economic gain and global competitiveness over privacy, accountability, and democratic oversight. After the federal government’s <em>Artificial Intelligence and Data Act </em>quietly died in Parliament earlier this year, AI Minister Evan Solomon confirmed that <a href="https://thelogic.co/news/exclusive/canada-ai-regulation-copyright-evan-solomon/">it would not be reintroduced wholesale</a>. The government’s new stated approach to AI regulation will prioritize economic competitiveness and intellectual property, rather than consumer safety and privacy rights.</p>
<p>This is disappointing, but it also offers an opportunity. In the absence of outdated or compromised legislation, Canada has a chance to build something better from the ground up: a regulatory framework that does not merely replicate the EU’s defensive posture but actively prevents harmful technologies like ERT from ever being normalized. For that to happen, though, public pressure is essential. If governments are to respond to the real dangers posed by emotion recognition, they must first hear loudly and clearly from the people most at risk.</p>
<p>The post <a href="https://www.slaw.ca/2025/08/08/your-feelings-their-profit-how-ai-misreads-your-emotions-and-sells-them-to-the-highest-bidder/">Your Feelings, Their Profit:  How AI Misreads Your Emotions and Sells Them to the Highest Bidder</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Algorithms Without Anchors: The High Stakes of North America’s AI Regulatory Void</title>
		<link>https://www.slaw.ca/2025/07/18/algorithms-without-anchors-the-high-stakes-of-north-americas-ai-regulatory-void/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Fri, 18 Jul 2025 11:00:01 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108426</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In previous columns, I have examined the evolving trajectory of AI regulation and warned of the precarious path ahead. Regrettably, I must now report that the regulation of artificial intelligence in North America has become a project stalled by political circumstance. In both Canada and the United States, efforts to establish comprehensive governance frameworks for AI have encountered untimely political disruption, legislative dissolution in Canada and executive reversals in the United States.</p>
<p>This confluence of events has left two of the world’s most influential jurisdictions without durable regulatory mechanisms to manage the profound legal, ethical, and societal risks posed by  . . .  <a href="https://www.slaw.ca/2025/07/18/algorithms-without-anchors-the-high-stakes-of-north-americas-ai-regulatory-void/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/07/18/algorithms-without-anchors-the-high-stakes-of-north-americas-ai-regulatory-void/">Algorithms Without Anchors: The High Stakes of North America’s AI Regulatory Void</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In previous columns, I have examined the evolving trajectory of AI regulation and warned of the precarious path ahead. Regrettably, I must now report that the regulation of artificial intelligence in North America has become a project stalled by political circumstance. In both Canada and the United States, efforts to establish comprehensive governance frameworks for AI have encountered untimely political disruption: legislative dissolution in Canada and executive reversals in the United States.</p>
<p>This confluence of events has left two of the world’s most influential jurisdictions without durable regulatory mechanisms to manage the profound legal, ethical, and societal risks posed by increasingly pervasive AI technologies. As the deployment of AI systems accelerates across sectors such as healthcare, legal services, and public administration, the absence of binding regulatory safeguards is a vulnerability with far-reaching implications.</p>
<h2>Canada’s Tentative Foray into AI Regulation: The Rise and Stall of AIDA</h2>
<p>As described in previous columns, in Canada, the federal government made a significant attempt to regulate AI through the proposed Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022. AIDA sought to establish one of the world’s first national regulatory frameworks for AI, emphasizing a risk-based approach grounded in principles such as transparency, human oversight, accountability, and fairness. It required developers and deployers of “high impact systems” to adopt internal governance policies and proactive risk assessment protocols.</p>
<p>However, AIDA faced procedural and political obstacles. The bill’s complexity and the evolving nature of AI regulation led to ongoing consultations and amendments, and its path to enactment was halted entirely when Parliament dissolved for the federal election. The legislative vacuum left by AIDA’s demise has resulted in a barren regulatory environment in Canada where businesses face uncertainty, and consumers are left without clear protections against the potential harms of high-risk AI systems.</p>
<h2>The U.S. Executive Orders: From Guardrails to Deregulation</h2>
<p>In contrast, the United States approached AI regulation primarily through executive action. President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence mandated interagency efforts to develop AI safety standards, promote transparency, and uphold civil liberties. However, the fragility of this framework became apparent with the change in administration.</p>
<p>Upon assuming office in January 2025, President Donald Trump issued Executive Order 14148, repealing Biden-era AI directives. Shortly thereafter, Executive Order 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” was introduced. This order established a deregulatory framework aimed at promoting U.S. dominance in AI innovation. It directed federal agencies to suspend or revise existing AI-related actions deemed obstructive to innovation and omitted enforceable provisions concerning safety, transparency, or liability.</p>
<p>More recently in May of 2025, the United States House of Representatives passed perhaps one of the most consequential legislative proposals concerning artificial intelligence to date: the <em>One Big Beautiful Bill Act</em> (OBBBA). The Act contains a provision that would impose a ten-year federal moratorium on all state and local regulation of AI in the United States. The bill has not yet been adopted by the Senate, and it remains the subject of considerable political and legal scrutiny. If enacted, OBBBA would establish broad federal pre-emption in the field of AI governance. Proponents argue that a consistent national approach would prevent regulatory fragmentation and enhance the country’s competitiveness in AI development. Critics, however, contend that it would erode local democratic governance, hinder policy innovation and may raise constitutional concerns related to federalism and the limits of congressional power. The enactment of OBBBA would have far-reaching consequences for the evolution of AI regulation in the United States and the world.</p>
<h2>The Risk of Inaction: Consequences of a Regulatory Void</h2>
<p>Operating in a legal vacuum carries several risks. First, the absence of binding AI-specific legislation increases the likelihood of inconsistent and reactive enforcement, creating uncertainty for developers, deployers, and affected individuals. This is illustrated by the recent case against Clearview AI, in which Canada’s federal and provincial privacy commissioners jointly found that the company’s facial recognition technology violated national and provincial privacy laws. Despite the severity of the findings, the fragmented legal framework and absence of AI-specific statutory obligations complicated enforcement efforts and highlighted the limitations of existing privacy regimes in addressing novel AI-related harms. The case underscores the challenges of regulating powerful AI technologies without a coherent legal structure that ensures accountability, transparency, and meaningful redress.</p>
<p>Second, without clear regulatory mandates, organizations may underinvest in risk management. This is particularly problematic in high-stakes sectors such as healthcare and law, where AI errors can result in irreparable harm. As noted in prior commentary, even well-meaning professionals may deploy AI systems without fully understanding their limitations, exacerbating the risk of harm through misapplication or overreliance.</p>
<p>Third, and perhaps most troublingly, the absence of regulation allows market forces alone to dictate the development trajectory of AI technologies. This risks entrenching systems that are opaque, biased, or insufficiently accountable.</p>
<h2>Conclusion: The Cost of Delay</h2>
<p>The development of AI technologies in North America is proceeding at an unprecedented pace. Yet neither Canada nor the United States has established a durable, enforceable framework for managing the associated risks. The failure of AIDA and the volatility of U.S. executive action illustrate the fragility of current regulatory efforts. Worse, recent legislative proposals that seek to prohibit state-level regulation threaten to stymie more nimble local governance initiatives without offering a coherent national alternative.</p>
<p>The risks posed by unregulated AI systems are not hypothetical; they are real, present, and growing. Legal, professional, and ethical standards must evolve to meet the challenge. Absent proactive and principled regulation, society risks being governed not by the rule of law, but by the opaque logic of algorithms.</p>
<p><em>Note: Generative AI was used in the preparation of this article.</em></p>
<p>The post <a href="https://www.slaw.ca/2025/07/18/algorithms-without-anchors-the-high-stakes-of-north-americas-ai-regulatory-void/">Algorithms Without Anchors: The High Stakes of North America’s AI Regulatory Void</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Should We Restrict the Use of AI in Law School?</title>
		<link>https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/</link>
					<comments>https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Tue, 01 Jul 2025 11:00:45 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108384</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In a prior <a href="https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/">post</a> for Slaw, I argued that law schools should make AI more central to the curriculum. We should teach how to use AI effectively rather than resist it or pretend it isn’t there. To do this, we need to take a different approach, which might entail permitting the use of AI on some assignments and exams.</p>
<p>In this post, I want to address a strong counter-argument: encouraging law students and young lawyers to use AI too much, too soon will prevent them from developing the skills they need to do their jobs effectively—or even to be any  . . .  <a href="https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/">Should We Restrict the Use of AI in Law School?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In a prior <a href="https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/">post</a> for Slaw, I argued that law schools should make AI more central to the curriculum. We should teach how to use AI effectively rather than resist it or pretend it isn’t there. To do this, we need to take a different approach, which might entail permitting the use of AI on some assignments and exams.</p>
<p>In this post, I want to address a strong counter-argument: encouraging law students and young lawyers to use AI too much, too soon will prevent them from developing the skills they need to do their jobs effectively—or even to be any good at using AI itself.</p>
<p>The core of the argument is simple. If you have mastered something, using a machine can help you do it better. A good writer can turn out better work, faster, by writing with a computer.</p>
<p>But often, when automation takes over a task that humans are accustomed to doing manually, they suffer what is called “skill fade.” As Nicholas Carr <a href="https://www.newcartographies.com/p/the-myth-of-automated-learning">explains</a>: “When skilled pilots become so dependent on autopilot systems that they rarely practice manual flying […] they lose situational awareness, and their reactions slow. They get rusty.”</p>
<p>But if you lack a skill to begin with, automation prevents you from developing it altogether. In early 19<sup>th</sup> century Britain, Carr points out:</p>
<blockquote><p>Skilled craftsmen were replaced by unskilled machine operators. The work sped up, but the only skill the machine operators developed was the skill of operating the machine, which in most cases was hardly any skill at all. Take away the machine, and the work stops.</p></blockquote>
<p>He sees a similar trend unfolding in high schools and undergrad, where <a href="https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html">many if not most</a> students now use AI to generate their essays and term papers. Skipping to the end of the process, they miss out on the real learning to be gleaned from research and writing.</p>
<p>Is there a parallel here with law?</p>
<h2>False analogy</h2>
<p>It’s tempting to think there is.</p>
<p>If students avoid the hard work in law school, especially in first year, of reading cases and grappling with how to apply law to facts, they won’t gain a firm footing in the core areas of law. And without this, they can’t act effectively for a client, because they won’t be able to spot key issues or important subtleties.</p>
<p>But before the advent of ChatGPT in 2022, is this what students did in law school? Or is this how many of us wish we had spent our time in law school if we could go back and do it again?</p>
<h2>The threat AI poses</h2>
<p>One might concede that there were always shortcuts to getting through law school, like using other people’s cans or relying on concise <a href="https://www.robertdiab.ca/books/search.html">overviews of law</a>, but that AI presents a threat of a different order.</p>
<p>To use a can on an exam, you have to glean its content and read the case summaries to know which ones to apply where. AI circumvents this. Load the can and the exam question into the model, and boom, you’ve got your answer.</p>
<p>It’s even worse for papers and presentations. Tools like OpenAI’s <a href="https://openai.com/index/introducing-deep-research/">Deep Research</a> can generate an entire paper or PowerPoint presentation, with sources. It does an even better job if you give it the primary materials. With a few key cases, AI can produce a paper on a doctrinal point in a few seconds.</p>
<p>How can we expect law students to learn anything if we give them free rein with AI?</p>
<h2>The reality of the situation and the choice</h2>
<p>I concede that using AI in this way—to pass off its work as your own—would hinder learning altogether. But we need to be realistic about what students were really doing in law school, what they were learning and how, before ChatGPT came along.</p>
<p>The unfortunate reality is that far too many students stop reading cases by January of first year; many of them prepare for finals by using someone else’s can; and over the course of three years in the program, students get hardly any feedback on exams or assignments.</p>
<p>Much of the work they do in law school is done last minute: cramming a few days before the exam or paper deadline. Much of it is forgotten a few days later.</p>
<p>The choice is not between an ideal picture of the perfect student reading every case assigned to them in law school and writing five practice exams in the month leading up to the final—or an AI zombie apocalypse, where everything handed in is coming straight from OpenAI.</p>
<p>The choice is to leave students to their own devices to try to figure out how to make effective use of AI—and hope they don’t misuse it—or to meet them where they’re at and try to help them foster good over bad uses of AI.</p>
<h2>How students might use AI to help not hinder learning</h2>
<p>A student who has stopped reading cases may well tune out altogether and hope they find a good can in the final week. But using AI to summarize cases might help them keep up with weekly reading by making it more manageable. They might ask AI for a concise explanation of a point of law, or pose other questions, using AI as a tutor.</p>
<p>AI can also help prepare for exams by making fact patterns and critiquing answers, or it can provide feedback on a paper draft. The possibilities are endless.</p>
<p>But isn’t AI famous for hallucinating? How can you counsel law students to rely on AI when every other week, courts across Canada are chastising lawyers for using it to prepare their submissions?</p>
<p>Yes, AI hallucinates. But the frontier models (Perplexity, GPT 4.5, Claude 4) have all come a long way since 2022. All of them do remarkably well at answering a discrete legal question in Canadian law. They’re not always correct in every detail; they sometimes provide a made-up case or two in a list of otherwise accurate sources. But the fact is that they have now become consistently and <a href="https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/">stunningly good</a> at quick overviews of discrete issues—by drawing effectively on the wealth of good summaries on the web.</p>
<h2>What we should aim to teach</h2>
<p>It’s unrealistic to hope that most students come out of law school with a strong grounding in doctrinal law. That comes later, after a few years of practice and a lot of dedication.</p>
<p>But it <em>is</em> realistic to hope that students can build the skill of using AI to help them learn new law, navigate novel issues, and tackle the challenge of legal writing—without misusing it.</p>
<p>Hoping they’ll avoid AI and go through law school like it’s 2005 is neither realistic nor prudent. We faced similar challenges adapting to the internet at school, but we managed. We can do the same with AI.</p>
<p>The post <a href="https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/">Should We Restrict the Use of AI in Law School?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2025/07/01/should-we-restrict-the-use-of-ai-in-law-school/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>In the Absence of Federal AI Laws, Privacy Regulators Lead the Way: Lessons From the Clearview Case</title>
		<link>https://www.slaw.ca/2025/05/14/in-the-absence-of-federal-ai-laws-privacy-regulators-lead-the-way-lessons-from-the-clearview-case/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Wed, 14 May 2025 11:00:58 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108177</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">On December 18, 2024, the Supreme Court of British Columbia issued a decision in <em>Clearview AI Inc. v. Information and Privacy Commissioner for British Columbia</em>, 2024 BCSC 2311. At its core, the case involved a challenge by a U.S.-based artificial intelligence (AI) company against a binding order from British Columbia’s privacy regulator. The company, Clearview AI, had amassed a large facial recognition database by scraping billions of publicly accessible images from the internet, many of which depicted individuals located in British Columbia, without obtaining their consent.</p>
<p>The decision is significant not only for its factual context, but for what  . . .  <a href="https://www.slaw.ca/2025/05/14/in-the-absence-of-federal-ai-laws-privacy-regulators-lead-the-way-lessons-from-the-clearview-case/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/05/14/in-the-absence-of-federal-ai-laws-privacy-regulators-lead-the-way-lessons-from-the-clearview-case/">In the Absence of Federal AI Laws, Privacy Regulators Lead the Way: Lessons From the Clearview Case</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">On December 18, 2024, the Supreme Court of British Columbia issued a decision in <em>Clearview AI Inc. v. Information and Privacy Commissioner for British Columbia</em>, 2024 BCSC 2311. At its core, the case involved a challenge by a U.S.-based artificial intelligence (AI) company against a binding order from British Columbia’s privacy regulator. The company, Clearview AI, had amassed a large facial recognition database by scraping billions of publicly accessible images from the internet, many of which depicted individuals located in British Columbia, without obtaining their consent.</p>
<p>The decision is significant not only for its factual context, but for what it represents: one of the first concrete instances of a Canadian court upholding regulatory limits on the conduct of an AI company. In doing so, the court confirmed that British Columbia privacy law can and does apply to foreign-based AI companies where there is a real and substantial connection to the province. More importantly, it affirmed that the indiscriminate scraping of personal images, even those posted publicly online, does not exempt an AI company from the foundational principles of consent, transparency, and reasonable purpose.</p>
<p>This case arrives at a critical juncture in Canada’s evolving regulatory landscape. With no federal AI legislation currently in force and the Artificial Intelligence and Data Act (AIDA) having stalled following the dissolution of Parliament for a federal election, provincial privacy enforcement has emerged as one of the few effective regulatory mechanisms available to hold AI developers accountable in Canada. The <em>Clearview</em> decision not only underscores the role of privacy commissioners as AI regulators, but also provides practical guidance to AI companies operating in or affecting Canadian jurisdictions.</p>
<h2>The Background</h2>
<p>Clearview AI is a U.S.-based technology company best known for its controversial facial recognition software. The company’s business model is straightforward but somewhat troubling. It collects images of human faces by scraping the internet, without consent, then uses biometric algorithms to create searchable profiles, offering these capabilities to law enforcement, government agencies, and other clients.</p>
<p>By 2020, Clearview’s activities had drawn criticism, prompting a joint investigation by four Canadian privacy regulators: the federal Privacy Commissioner, and their counterparts in British Columbia, Alberta, and Quebec. These regulators concluded that Clearview’s practices violated Canadian privacy laws, including BC’s Personal Information Protection Act (PIPA), by collecting and using highly sensitive biometric information without consent and for purposes a reasonable person would not consider appropriate.</p>
<p>Although Clearview subsequently suspended its services in Canada, it did not delete the data it had already collected, nor did it cease the underlying practice of scraping images from online sources likely to include Canadians. In response, the BC Information and Privacy Commissioner issued a binding order in 2021 requiring Clearview to stop offering its services in the province, to delete all data collected from BC residents without consent, and to make best efforts to prevent further collection and use of such information.</p>
<h2>The Decision</h2>
<p>Clearview challenged the order in the Supreme Court of British Columbia, arguing that BC’s privacy law should not apply to a U.S.-based company, that the images it used were “publicly available” and therefore exempt from consent requirements, and that its purpose for collecting the data was both lawful and reasonable. The company further contended that the Commissioner’s order was overbroad and unenforceable.</p>
<p>The Court rejected all of these arguments. It found that there was a clear and substantial connection between Clearview’s activities and British Columbia, noting that the company’s database included images of residents and that its services had been used by local law enforcement. The Court also affirmed the Commissioner’s interpretation that online availability does not equate to legal availability for biometric processing under PIPA. Most significantly, it found that Clearview’s purpose, mass identification of individuals without notice or consent, was not one a reasonable person would find appropriate in the circumstances.</p>
<p>The Court upheld the Commissioner’s order in its entirety, thereby sending a strong message that AI companies operating in Canada, even indirectly, cannot bypass privacy law simply by sourcing data from public websites. The decision stands as a repudiation of the notion that technological capability creates legal entitlement, a principle with far-reaching implications for the broader AI industry.</p>
<h2>The Implications</h2>
<p>The Clearview decision is more than a privacy enforcement action against a single company; it is an important moment in the regulation of artificial intelligence in Canada. In the absence of a federal AI regulatory framework, the decision illustrates how privacy law is being leveraged as a practical, enforceable mechanism for AI governance. It signals to developers, regulators, and policymakers alike that AI systems will not be permitted to operate outside the boundaries of established legal principles, even if those systems originate outside Canada.</p>
<p>Clearview’s core assumption, that images posted publicly online could be freely collected and used to train a facial recognition system, was also firmly rejected. The Court upheld the Commissioner’s finding that this practice breached PIPA’s requirements for meaningful consent and purpose limitation. Notably, the Court accepted that even publicly accessible data retains its status as “personal information” when processed in invasive and transformative ways by AI technologies.</p>
<p>This reasoning cuts to the heart of current debates about AI model training, especially in areas like facial recognition, language modelling, and biometric surveillance. The decision establishes that AI companies must consider the nature of the data they collect and the context in which it was made available, rather than relying on the flawed assumption that public equals permissible. In effect, the Court confirmed that technological capacity does not displace legal obligation.</p>
<p>The decision also confirms that foreign AI companies are not insulated from Canadian privacy law. By affirming that a “real and substantial connection” existed between Clearview’s activities and British Columbia, the Court opened the door for extraterritorial enforcement of privacy obligations in appropriate cases. For global AI developers and vendors, especially those offering biometric identification, predictive analytics, or automated decision-making tools, this case sends a clear warning: Canada’s privacy regime has teeth, and it can bite even across borders.</p>
<p>Most importantly, the Clearview decision highlights the regulatory gap that continues to exist at the federal level. With the Artificial Intelligence and Data Act (AIDA) now effectively dead following the call of a federal election, there is no standalone statutory framework governing the design, deployment, or auditing of AI systems in Canada.</p>
<p>In the meantime, provincial privacy statutes like PIPA are stepping into the regulatory void, offering a flexible yet enforceable mechanism to respond to the most urgent risks posed by AI technologies. While these laws were not drafted with AI in mind, they are increasingly being interpreted in ways that constrain opaque, high-impact, and non-consensual uses of personal data in automated systems. The Clearview case stands as the clearest example to date of how this approach can work in practice.</p>
<h2>The Future</h2>
<p>The <em>Clearview</em> decision stands as an important moment in Canada’s ongoing effort to define meaningful regulatory boundaries for artificial intelligence. In the absence of comprehensive federal legislation, it appears that it is the privacy commissioners who are shaping the practical contours of AI governance in Canada. The case confirms that AI companies cannot rely on technical capability or cross-border detachment to escape compliance with Canadian privacy law. It also affirms that scraping publicly accessible data to train AI systems does not exempt developers from the foundational obligations of consent, transparency, and fairness.</p>
<p>As AI systems become increasingly embedded in both public and private sector decision-making, Canadian regulators are clearly signalling that the legal status of data cannot be divorced from the context of its use. The implications of this reasoning are likely to extend well beyond facial recognition. Indeed, we are already seeing further developments. In April 2023, the Office of the Privacy Commissioner of Canada launched an investigation into the use of ChatGPT by OpenAI, examining whether the collection and use of personal information to train large language models complies with federal privacy law. This was expanded into a joint investigation in partnership with the privacy commissioners for British Columbia, Quebec and Alberta in May of 2023. The scope of the investigation includes whether valid and meaningful consent was obtained for the collection, use, and disclosure of personal information; whether obligations regarding openness, transparency, access, accuracy, and accountability have been respected; and whether OpenAI has collected, used, or disclosed personal information for purposes that a reasonable person would consider appropriate.</p>
<p>Until federal legislation such as the AIDA is passed and comes into force, privacy law appears to be the primary tool available to address the risks posed by unregulated AI. The <em>Clearview</em> decision confirms that these tools can be both effective and enforceable. For AI developers, this ruling provides clear guidance. Those who seek to operate in Canada must treat privacy compliance not as a secondary concern, but as a core design principle. For regulators, <em>Clearview</em> affirms the legitimacy and necessity of using privacy frameworks to shape AI accountability. And for the public, it offers reassurance that legal protections do, in fact, apply, even in the face of novel and rapidly evolving technologies.</p>
<p><em>Note: Generative AI was used in the preparation of this article.</em></p>
<p>The post <a href="https://www.slaw.ca/2025/05/14/in-the-absence-of-federal-ai-laws-privacy-regulators-lead-the-way-lessons-from-the-clearview-case/">In the Absence of Federal AI Laws, Privacy Regulators Lead the Way: Lessons From the Clearview Case</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Free AI Is All You Need to Supercharge Your Practice</title>
		<link>https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/</link>
					<comments>https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Wed, 07 May 2025 11:00:50 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=108082</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">The market for legal AI is teaming with options. Many of them are compelling. All of them are expensive.</p>
<p>What the companies offering these tools hope that you don’t notice is that free (or almost free) tools, like <a href="https://chatgpt.com">ChatGPT</a>, <a href="https://claude.ai/">Claude</a>, and <a href="https://gemini.google.com/app">Gemini</a>, are getting so good at basic legal research tasks that many lawyers looking to boost their productivity don’t need to look any further.</p>
<p>We highlight three ways to use these free tools to make research more effective — but we note that becoming a subscriber to one of them, for roughly $30 a month, will tend  . . .  <a href="https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/">Free AI Is All You Need to Supercharge Your Practice</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">The market for legal AI is teeming with options. Many of them are compelling. All of them are expensive.</p>
<p>What the companies offering these tools hope that you don’t notice is that free (or almost free) tools, like <a href="https://chatgpt.com">ChatGPT</a>, <a href="https://claude.ai/">Claude</a>, and <a href="https://gemini.google.com/app">Gemini</a>, are getting so good at basic legal research tasks that many lawyers looking to boost their productivity don’t need to look any further.</p>
<p>We highlight three ways to use these free tools to make research more effective — but we note that becoming a subscriber to one of them, for roughly $30 a month, will tend to get you better results.</p>
<h2>General overviews</h2>
<p>Many lawyers haven’t moved beyond their first impressions of ChatGPT formed in the months following its release in 2022, when much of its output was laughably false or erroneous. It’s come a long way since.</p>
<p>The free AI platforms are now so good at providing an accurate general summary of an area of Canadian law that they serve as the best place to start when you’re new to an area.</p>
<p>However, a subscription to one of these tools will give you access to a “reasoning” model, like <a href="https://openai.com/index/introducing-openai-o1-preview/">ChatGPT o1</a>, which breaks a query into steps, building further verification into the process.</p>
<p>We ran this prompt through the various models: “Summarize the law on unjust enrichment in Canada in 500 words, with citations to leading case law or relevant statutes.”</p>
<p>The best result came from ChatGPT o1, which you can read <a href="https://chatgpt.com/share/67b5f8be-e58c-8003-8db1-486086423bb5">here</a>.</p>
<p>It cites three Supreme Court of Canada decisions, including the foundational authority, <a href="https://decisions.scc-csc.ca/scc-csc/scc-csc/en/item/2562/index.do">Pettkus v Becker</a>. Citations to all three cases were correct, and the summary — though basic and general — is accurate.</p>
<p>We were also impressed to see that the answer noted the limited relevance of provincial legislation on point, correctly citing both BC and Ontario family law statutes.</p>
<p>Running the query on the free versions of the various frontier models rendered largely the same content, with one or two citations inaccurate or made up. But showing the results to law professors who work on unjust enrichment confirmed that the general overviews were correct.</p>
<p>None of this would suffice for a detailed memo on point — and by no means should you trust the citations! But for someone simply seeking to find their bearings in a new area of law, these tools could be invaluable.</p>
<h2>Boolean searches</h2>
<p>The companies behind the frontier models have recently unveiled tools similar to OpenAI’s <a href="https://openai.com/index/introducing-deep-research/">Deep Research</a> that will soon make it possible to describe a fact pattern and ask the AI to visit a site like <a href="https://www.canlii.org/en/">Canlii</a> to search for cases. They will be able to read the results and sift through them for you. We’re not there yet. But we’re likely no more than a year away.</p>
<p>In the meantime, one helpful use of a free tool like ChatGPT or Claude is to ask it to assist in formulating a <a href="https://www.indeed.com/career-advice/career-development/boolean-search-strings">Boolean search string</a>, which you would plug into Canlii or another case law database.</p>
<p>Many of us have honed this skill over time. But for both beginners and more seasoned researchers, the main utility of using AI for this lies not in formulating the string, but in speeding up the process of finding the right one.</p>
<p>For example, we asked <a href="https://claude.ai/">Claude</a> to: “Craft a boolean search string for a search on a Canadian legal database of cases on the law of unjust enrichment involving partners in a restaurant business who had no formal contract and in which the court applies the test in Pettkus.”</p>
<p>The result:</p>
<p>("unjust enrichment" OR "unjustified enrichment") AND (restaurant* OR cafe* OR dining OR bistro*) AND (partner* OR co-owner* OR proprietor*) AND ("no contract" OR "absence of contract" OR "without contract" OR "informal agreement" OR "informal arrangement") AND ("Pettkus v Becker" OR Pettkus) AND (enrich* /p deprivat* /p correspond*)</p>
<p>Plugging this into Canlii brought up a dozen cases, including three <a href="https://www.canlii.org/en/ns/nssc/doc/2022/2022nssc265/2022nssc265.html?resultId=f375625194004971be5d8ab0d2bee0c2&amp;searchId=2025-02-19T07:02:13:322/41aae402cc3f4f44a4779fb112782a55&amp;searchUrlHash=AAAAAQFQKCJ1bmp1c3QgZW5yaWNobWVudCIgT1IgInVuanVzdGlmaWVkIGVucmljaG1lbnQiKQpBTkQKKHJlc3RhdXJhbnQqIE9SIGNhZmUqIE9SIGRpbmluZyBPUiBiaXN0cm8qKQpBTkQKKHBhcnRuZXIqIE9SIGNvLW93bmVyKiBPUiBwcm9wcmlldG9yKikKQU5ECigibm8gY29udHJhY3QiIE9SICJhYnNlbmNlIG9mIGNvbnRyYWN0IiBPUiAid2l0aG91dCBjb250cmFjdCIgT1IgImluZm9ybWFsIGFncmVlbWVudCIgT1IgImluZm9ybWFsIGFycmFuZ2VtZW50IikKQU5ECigiUGV0dGt1cyB2IEJlY2tlciIgT1IgUGV0dGt1cykKQU5ECihlbnJpY2gqIC9wIGRlcHJpdmF0KiAvcCBjb3JyZXNwb25kKikKAAAAAAE">directly</a> <a href="https://www.canlii.org/en/bc/bcsc/doc/2011/2011bcsc1055/2011bcsc1055.html?resultId=4e5db9de9db841329b20b3c639974eab&amp;searchId=2025-02-19T07:02:13:322/41aae402cc3f4f44a4779fb112782a55&amp;searchUrlHash=AAAAAQFQKCJ1bmp1c3QgZW5yaWNobWVudCIgT1IgInVuanVzdGlmaWVkIGVucmljaG1lbnQiKQpBTkQKKHJlc3RhdXJhbnQqIE9SIGNhZmUqIE9SIGRpbmluZyBPUiBiaXN0cm8qKQpBTkQKKHBhcnRuZXIqIE9SIGNvLW93bmVyKiBPUiBwcm9wcmlldG9yKikKQU5ECigibm8gY29udHJhY3QiIE9SICJhYnNlbmNlIG9mIGNvbnRyYWN0IiBPUiAid2l0aG91dCBjb250cmFjdCIgT1IgImluZm9ybWFsIGFncmVlbWVudCIgT1IgImluZm9ybWFsIGFycmFuZ2VtZW50IikKQU5ECigiUGV0dGt1cyB2IEJlY2tlciIgT1IgUGV0dGt1cykKQU5ECihlbnJpY2gqIC9wIGRlcHJpdmF0KiAvcCBjb3JyZXNwb25kKikKAAAAAAE">on</a> <a href="https://www.canlii.org/en/bc/bcsc/doc/2019/2019bcsc437/2019bcsc437.html?resultId=45a90f54e0e344d0b8df6240dca75bd1&amp;searchId=2025-02-19T07:02:13:322/41aae402cc3f4f44a4779fb112782a55&amp;searchUrlHash=AAAAAQFQKCJ1bmp1c3QgZW5yaWNobWVudCIgT1IgInVuanVzdGlmaWVkIGVucmljaG1lbnQiKQpBTkQKKHJlc3RhdXJhbnQqIE9SIGNhZmUqIE9SIGRpbmluZyBPUiBiaXN0cm8qKQpBTkQKKHBhcnRuZXIqIE9SIGNvLW93bmVyKiBPUiBwcm9wcmlldG9yKikKQU5ECigibm8gY29udHJhY3QiIE9SICJhYnNlbmNlIG9mIGNvbnRyYWN0IiBPUiAid2l0aG91dCBjb250cmFjdCIgT1IgImluZm9ybWFsIGFncmVlbWVudCIgT1IgImluZm9ybWFsIGFycmFuZ2VtZW50IikKQU5ECigiUGV0dGt1cyB2IEJlY2tlciIgT1IgUGV0dGt1cykKQU5ECihlbnJpY2gqIC9wIGRlcHJpdmF0KiAvcCBjb3JyZXNwb25kKikKAAAAAAE">point</a>.</p>
<p>You may not get the right search string off the bat, as often happens when we do this on our own. Using AI might involve a few rounds of trial and error, providing feedback to the program as you run the queries. But it will likely be quicker than doing this on your own and give you new ideas.</p>
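<p>For readers curious about the anatomy of a string like the one above, here is a short, purely illustrative Python sketch (the helper function and the term groups are our own invention, not part of any database’s API). A Boolean string of this kind is just OR-groups of synonyms joined by AND:</p>
<pre><code>def boolean_and(*groups):
    """Join OR-groups of terms with AND, quoting multi-word phrases."""
    def quote(term):
        return '"' + term + '"' if " " in term else term
    return " AND ".join(
        "(" + " OR ".join(quote(t) for t in group) + ")" for group in groups
    )

query = boolean_and(
    ["unjust enrichment", "unjustified enrichment"],
    ["restaurant*", "cafe*", "bistro*"],
    ["partner*", "co-owner*"],
    ["Pettkus v Becker", "Pettkus"],
)
print(query)
# Output (wrapped for readability):
# ("unjust enrichment" OR "unjustified enrichment") AND
# (restaurant* OR cafe* OR bistro*) AND (partner* OR co-owner*) AND
# ("Pettkus v Becker" OR Pettkus)
</code></pre>
<p>The structure is mechanical; what the AI adds is judgment about which synonyms and proximity operators belong in each group, and iterating on that judgment is where the time savings lie.</p>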
<h2>Case summaries</h2>
<p>Our last example is using free AI to do case summaries. Here too the aim is not to avoid reading cases or to rely on AI entirely.</p>
<p>But to get a quick and accessible overview of a case, AI can do magic. We ran this query through the free version of <a href="https://chatgpt.com/">ChatGPT</a>: “Summarize Moore v. Sweet, 2018 SCC 52 in 300 words” — and you can read the summary <a href="https://chatgpt.com/share/67b601db-a3e8-8003-a8ea-fcfa8ab55b00">here</a>.</p>
<p>Comparing it with the case <a href="https://decisions.scc-csc.ca/scc-csc/scc-csc/en/item/17388/index.do">itself</a>, all the essential details are here, and they’re correct: the factual matrix, the issue, the test applied, and the outcome.</p>
<p>This could come in handy when coming across a case cited in a factum or in another case — to get a quick sense of what it’s about. Or it might help you decide which of the cases you turn up on Canlii are worth delving into.</p>
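<p>For those comfortable with a little scripting, the same prompt can be sent programmatically. The sketch below is illustrative only: it assumes the openai Python SDK and an API key in your environment (API access is metered, unlike the free web interface), and the model name is an arbitrary choice:</p>
<pre><code>from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_case(citation, words=300):
    """Ask the model for a short summary of a cited case."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any current model will do
        messages=[{
            "role": "user",
            "content": f"Summarize {citation} in {words} words.",
        }],
    )
    return response.choices[0].message.content

print(summarize_case("Moore v. Sweet, 2018 SCC 52"))
</code></pre>
<p>As with the chat interface, the output still needs to be checked against the case itself before you rely on it.</p>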
<p>These are only three possible uses of free AI tools for legal research. There are no doubt more. And the technology is only becoming better by the day.</p>
<p>Free AI tools help level the playing field, making it possible for any lawyer to become more efficient in practice.</p>
<p>The post <a href="https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/">Free AI Is All You Need to Supercharge Your Practice</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2025/05/07/free-ai-is-all-you-need-to-supercharge-your-practice/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>The Opportunities Hidden in Law&#8217;s Eternal September</title>
		<link>https://www.slaw.ca/2025/03/28/the-opportunities-hidden-in-laws-eternal-september/</link>
		
		<dc:creator><![CDATA[Sarah A. Sutherland]]></dc:creator>
		<pubDate>Fri, 28 Mar 2025 11:00:39 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107999</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">I was recently speaking with someone about the <a href="https://en.wikipedia.org/wiki/Eternal_September">Internet&#8217;s “Eternal September”</a>, which is the concept that starting in 1993/1994 so many new users started using the internet that it could never settle into the online equivalent of November on campus, when people have figured things out where their classes are and where to get the sandwiches they like for lunch. From the perspective of 30+ years later, with no indication that the phenomenon has slowed, wearing a t-shirt saying that the internet was full in 1993 seems to lack, if not imagination, then at least prescience. However, the cultural  . . .  <a href="https://www.slaw.ca/2025/03/28/the-opportunities-hidden-in-laws-eternal-september/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/03/28/the-opportunities-hidden-in-laws-eternal-september/">The Opportunities Hidden in Law&#8217;s Eternal September</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">I was recently speaking with someone about the <a href="https://en.wikipedia.org/wiki/Eternal_September">Internet&#8217;s “Eternal September”</a>, which is the concept that starting in 1993/1994 so many new users started using the internet that it could never settle into the online equivalent of November on campus, when people have figured out where their classes are and where to get the sandwiches they like for lunch. From the perspective of 30+ years later, with no indication that the phenomenon has slowed, wearing a t-shirt saying that the internet was full in 1993 seems to lack, if not imagination, then at least prescience. However, the constant influx of new participants over this entire period has had a limiting effect on the quality and development of certain systems (Ivan Vendrov explained this well in &#8220;<a href="https://nothinghuman.substack.com/p/the-tyranny-of-the-marginal-user">The Tyranny of the Marginal User</a>&#8221;, writing that the peak for online dating was OKCupid in 2016). So, we have systems that have to be engineered for an extensive range of skill levels and of equipment quality and age, which limits our ability to build applications and services that would benefit those who put in the time and effort to become proficient.</p>
<p>Beyond the internet, this seems to have clear parallels with the adoption of technology and quantitative methods in the legal sector. For example, legal AI has been developing since the 1970s, and the quantitative study of legal topics has been happening for decades. As these systems and bodies of knowledge mature, and the culture shifts to adopt them more widely, new users have the potential to enrich the community. But developing functional ways to help them understand what&#8217;s possible and integrate them into systems has the potential to make this process go more smoothly — I have watched people&#8217;s eyes fill with wonder when taught to do an online search or use ctrl-f.</p>
<p>Everyone brings their hobby horses with them through life, and as I turn my focus to legal scholarship, I see that there are significant gaps in the ways these approaches have been used. There are many areas of research that have made extensive use of these techniques, such as criminology, economic analyses of law, tax, and others. However, there are also many gaps in how they have been explored in other topics, and the increased calls for the use of data in more areas of public administration and research have been an important development in recent decades.</p>
<p>There are many reasons why there haven’t been as many projects that integrate data science into legal contexts as some may prefer. Often there are roots for this in technological limitations: until recently computers weren’t powerful enough to process complex language. These reasons are also related to cultural preferences, as many people who choose to pursue legal careers are not as comfortable with quantitative methodologies as they are with qualitative approaches. Developing sufficient expertise to envision what research can be done and learning to do this research have significant learning curves. That said, there are so many opportunities for meaningful research in most, if not all, aspects of legal scholarship.</p>
<p>These limitations are receding as computers continue to become more powerful, as the legal culture shifts to endorse these methods, and as tools mature enough to be usable by people with less technical skill. However, we are still in the early stages of this change. For legal scholars and practitioners, this means there are many opportunities to research questions such as how people experience the legal system and what impact different policy changes may have.</p>
<p>Many researchers have spent some part of recent years looking at how systems such as large language models can be used in law and how they can be made more reliable. There are many potential uses for this work, and it holds promise for innovation in legal clinics, the practice of law, governance, and our understanding of how law works in society. The availability of large language models means that we are going to be living in a more data-rich environment, as unstructured and semi-structured data becomes easier to transform into computationally tractable formats.</p>
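<p>To make that last point concrete, here is a minimal sketch of the kind of transformation I have in mind, assuming the OpenAI Python client (v1+) and an API key in the environment; the model name, passage, and field names are placeholders for illustration, not a production pipeline:</p>
<pre><code># A hedged sketch: using a large language model to turn an unstructured
# legal passage into structured, computationally tractable data.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = (
    "The application for judicial review is dismissed. "
    "Costs of $2,500 are awarded to the respondent."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Extract the outcome and any costs award from this "
                   "passage as JSON with keys 'outcome' and 'costs_cad': "
                   + passage,
    }],
    response_format={"type": "json_object"},  # request machine-readable output
    temperature=0,  # favour deterministic extraction over creative phrasing
)

record = json.loads(response.choices[0].message.content)
print(record)  # e.g. {"outcome": "dismissed", "costs_cad": 2500}
</code></pre>
<p>Run across thousands of decisions or intake notes, a loop like this is what turns semi-structured text into a dataset a researcher can actually query.</p>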
<p>One of the topics most commonly brought up in relation to data-driven research in Canadian law is the lack of accessible case law, but there is also a lack of data on many other aspects of our legal system. This means that we not only lack case law for analysis, the gap that gets so much attention; we also lack access to many other sorts of data that could be used to understand justice in our society. There are countless opportunities to use these methodologies, and it seems like a missed opportunity for lawyers and others in the sector to be excluded from key decision making due to issues like a lack of understanding of what&#8217;s being discussed or misaligned priorities within project teams.</p>
<p>Though it has been gaining momentum for at least a decade, we are in the early stages of this change, and there are still significant opportunities to find quick wins using existing models. There are also meaningful prospects for data experts to do novel work with real-world applications on challenging datasets. The combination of these conditions makes this a perfect time for data scientists and legal scholars to explore how they can best work together. As more people take up quantitative approaches to law, it can be hoped that in ten years we won’t be reading that the greatest time for data analysis in the sector was 2025. Having spent so much time in school, scholars know what to do in September to help themselves and others find their footing. I look forward to a more general understanding of the potential of these approaches, and to watching how this work develops.</p>
<p>I would like to thank Burkhard Schafer for discussing the content of this column with me.</p>
<p>The post <a href="https://www.slaw.ca/2025/03/28/the-opportunities-hidden-in-laws-eternal-september/">The Opportunities Hidden in Law&#8217;s Eternal September</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Should Law Students Be Using AI — Even on Exams?</title>
		<link>https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/</link>
					<comments>https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Wed, 26 Mar 2025 11:00:54 +0000</pubDate>
				<category><![CDATA[Legal Education]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107975</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">An email from a faculty member at the University of Toronto on the topic of AI made the rounds at law schools across Canada recently. It’s about using AI on final exams.</p>
<p>It points out that if a student has an app already open when they launch Examplify – the software most schools use to administer exams – they will have access to that app while writing the exam. This could be a browser with Lexis+AI or the app version of ChatGPT, which would still be online during the exam.</p>
<p>To avoid this, the company that makes Examplify advises running  . . .  <a href="https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/">Should Law Students Be Using AI — Even on Exams?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">An email from a faculty member at the University of Toronto on the topic of AI made the rounds at law schools across Canada recently. It’s about using AI on final exams.</p>
<p>It points out that if a student has an app already open when they launch Examplify – the software most schools use to administer exams – they will have access to that app while writing the exam. This could be a browser with Lexis+AI or the app version of ChatGPT, which would still be online during the exam.</p>
<p>To avoid this, the company that makes Examplify advises running exams in ‘secure mode.’ This cuts off access to both the net and the user’s hard drive.</p>
<p>The author of the email was keen to share the discovery that running exams in this setting may now be necessary across the board – since it took them only a few minutes to find a program online that will run a language model on one’s computer without having to access the net. One that would give “correct answers” to some of the questions they asked.</p>
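<p>For readers curious what the author of the email likely found: tools of this kind are freely available. As one hedged illustration (the email does not name the program, and the model file path below is hypothetical), the llama-cpp-python library can run a model file downloaded in advance with no network access at all:</p>
<pre><code># A minimal sketch of fully offline inference with llama-cpp-python.
# Any GGUF-format model file downloaded ahead of time would work; no
# internet connection is needed once the file is on disk.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf")  # hypothetical path

output = llm(
    "Q: What are the elements of a valid contract? A:",
    max_tokens=256,
    stop=["Q:"],  # stop before the model invents a follow-up question
)

print(output["choices"][0]["text"])
</code></pre>
<p>Nothing in this sketch touches the network, which is why cutting off internet access alone is no longer enough.</p>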
<p>Two things about this email stood out to me. One is that we’re close to the point where AI is baked into the system and not merely an app we can cordon off. We may already be there with certain functions of Apple Intelligence, a system feature available wherever text can be entered on a Mac.</p>
<p>But the more interesting point is the assumption that hovers over the whole discussion and goes without saying: that using AI on a law school exam is to be avoided at all costs.</p>
<p>I’m a law professor co-teaching a course this term on ‘AI, Law, and Justice,’ and I’ve spoken to many students in our faculty about their use of AI.</p>
<p>I argue that it may be time to question the assumption that using AI is tantamount to cheating.</p>
<p>Whether or not law schools are ready for it, students are rapidly embracing AI. Simply excluding it from exams and assignments is both unrealistic and imprudent.</p>
<h2>Where law students are at with AI</h2>
<p>I’m noticing a wide range of immersion among students using AI at our law school, but the general trend is unmistakable. Students are making more use of it over time. The more they experiment, share, and learn about the technology, the more apt they are to make it an important part of their approach to law.</p>
<p>From discussions in the AI, Law, and Justice course and from talks on AI we’ve hosted at TRU Law, it’s clear that students are eager to see AI assume a higher profile in their education. While it has a place in our optional Advanced Legal Research course, many think a practical course on AI should be mandatory, with guidance on effective and ethical uses of it in research and practice.</p>
<h2>The thornier question of where it belongs</h2>
<p>In the AI course, we had a lively and fruitful debate on the tougher question of whether students should be allowed to use AI on exams or assignments. There are two schools of thought. Articulating them helped us find a middle ground and glimpse what the future of legal ed with AI might involve.</p>
<p>The skeptical view holds that students should not have access to AI during an exam, because it would defeat the main purpose: testing one’s ability to do legal analysis. With access to AI, it wouldn’t be clear that a student had grasped central concepts in contract, criminal, or administrative law.</p>
<p>The pragmatist view, by contrast, sees the effort to examine without access to AI as futile and unrealistic. When will a lawyer not have the benefit of AI in practice? When would their hands be tied to think or write without it? Lawyers are already using AI frequently at firms where students are summering. Why not examine students on their ability to use it effectively?</p>
<h2>A middle ground in sight</h2>
<p>Students in the AI course proposed a middle ground: aim to test not a student’s ability to think without AI, but to think effectively with AI.</p>
<p>As one student put it: if an assignment or the answer on an exam were nothing more than the direct output of a chatbot, it wouldn’t pass muster. It wouldn’t address a problem fully and accurately.</p>
<p>In most cases, students will need to know how to prompt effectively and to revise the answer to bring it into line with precisely what was asked for.</p>
<h2>The future of grading in the age of AI</h2>
<p>On this view, over time, more of the grading in law school will involve assessing a person’s ability to work effectively with AI rather than without it.</p>
<p>But the bar would be raised. The quality of the output would be more polished – or expected to be. It would also be held to a higher standard. Answers would have to be entirely correct, accurate, and complete to get a good grade. But to get the highest grade, one would have to go above and beyond: showing some special insight, creative twist, or innovative policy argument not likely to have emerged from a chatbot.</p>
<h2>Just one view of AI in legal ed</h2>
<p>There may be other visions of how this plays out, as the conversation continues. But students I’ve worked with are hoping that faculty are thinking carefully about the place of AI in legal ed – that we question more of the unspoken assumptions about it.</p>
<p>Both students and the profession are leading by example. We shouldn’t be far behind.</p>
<p>The post <a href="https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/">Should Law Students Be Using AI — Even on Exams?</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2025/03/26/should-law-students-be-using-ai-even-on-exams/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>A Flurry of Filings: Canada’s AI Litigation Landscape Evolves in a Single Month</title>
		<link>https://www.slaw.ca/2025/01/17/a-flurry-of-filings-canadas-ai-litigation-landscape-evolves-in-a-single-month/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Fri, 17 Jan 2025 12:00:07 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107764</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">One of the earliest projects that was launched at the University of Victoria’s AI Risk and Regulation Lab was a mapping initiative that tracked both how artificial intelligence (AI) is regulated and litigated. To date, litigation tracking has primarily been focused on cases arising from the United States and internationally as until November 2024, there was virtually no domestic litigation to discuss. That changed recently when two lawsuits were filed in the month of November, signaling that Canada is now joining an international surge of AI-related legal disputes. In this column I will briefly review the two recently launched cases  . . .  <a href="https://www.slaw.ca/2025/01/17/a-flurry-of-filings-canadas-ai-litigation-landscape-evolves-in-a-single-month/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2025/01/17/a-flurry-of-filings-canadas-ai-litigation-landscape-evolves-in-a-single-month/">A Flurry of Filings: Canada’s AI Litigation Landscape Evolves in a Single Month</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">One of the earliest projects launched at the University of Victoria’s AI Risk and Regulation Lab was a mapping initiative that tracks how artificial intelligence (AI) is both regulated and litigated. To date, litigation tracking has primarily focused on cases arising from the United States and internationally, as until November 2024 there was virtually no domestic litigation to discuss. That changed recently when two lawsuits were filed in the month of November, signaling that Canada is now joining an international surge of AI-related legal disputes. In this column I will briefly review the two recently launched cases and discuss their possible implications for the regulation of AI in Canada.</p>
<h2>AI and Data Harvesting</h2>
<p>The growth of AI has been fueled largely by access to vast datasets that serve as the foundation for machine learning models. These data sets can include everything from news media—which offers a rich source of factual and contextual information—to publicly accessible case law, a longstanding cornerstone of the justice system that is relied upon by lawyers, judges, academics, and self-represented litigants. Historically, Canadian law has facilitated relatively broad avenues for accessing and using these sources within the confines of copyright and fair dealing. Yet, as AI-driven data extraction operations become more technologically sophisticated and economically meaningful, established legal doctrines are being tested.</p>
<h2>CanLII v. Caseway</h2>
<p>The Canadian Legal Information Institute (CanLII) is a non-profit organization, likely familiar to all readers, dedicated to providing free online access to Canadian legal information. Founded to enhance open access to justice, CanLII’s database is an important resource in the Canadian legal landscape. However, when Caseway—an AI legal research company—purportedly engaged in large-scale data extraction from CanLII’s website, allegations arose that it was doing so in violation of CanLII’s terms of use. On November 4, 2024, CanLII filed a Notice of Civil Claim against Caseway and related entities in the Supreme Court of British Columbia.</p>
<p>This dispute underscores a key tension: while the public has free access to CanLII, the database itself represents a significant investment of time, effort, and resources. The challenge posed by Caseway’s alleged conduct is whether commercial actors can appropriate these resources wholesale, bundling and selling them or incorporating them into AI-driven tools, without meaningful compensation. If courts find that Caseway’s activities fall outside acceptable boundaries, we might see the emergence of new judicial guidance that clarifies when large-scale data extraction from legal databases crosses the line into unfair or infringing conduct.</p>
<h2>Canadian Media Companies v. OpenAI</h2>
<p>Later in the same month, a consortium of leading Canadian media organizations, including the Globe and Mail and the Canadian Broadcasting Corporation, initiated legal action in the Ontario Superior Court of Justice against OpenAI. The crux of their complaint revolves around OpenAI’s alleged use of copyrighted journalistic content to train its AI models. While OpenAI has maintained that its practices are consistent with applicable laws, the media companies argue that using their proprietary content to develop a commercial AI tool amounts to unauthorized reproduction and distribution of copyrighted works.</p>
<p>What distinguishes this case from more traditional copyright disputes is the sheer scale and nature of the alleged infringement. Unlike a straightforward scenario in which a publisher reprints an article without permission, AI systems copy, process, and internalize large swaths of text to “learn” patterns. The process is more akin to data ingestion than traditional publication. This raises a novel question for Canadian courts: does using copyrighted text as training data constitute a form of infringement that falls outside established exceptions such as fair dealing?</p>
<p>The outcome of this litigation has the potential to reshape how Canadian law views the training of AI models. A ruling that data extraction for training purposes is inherently infringing could force AI developers to seek licenses, implement more robust filtering measures, or drastically reduce the scope of data they use. Conversely, a ruling that recognizes some form of implied license or fair dealing exception for AI training would give developers latitude to innovate, potentially at the cost of reducing content owners’ ability to control the use of their works.</p>
<h2>Copyright, Data Extraction, and Fair Dealing</h2>
<p>At the heart of these cases is a re-evaluation of long-standing legal concepts. Canadian copyright law and the fair dealing exceptions have traditionally provided a somewhat flexible framework. For example, fair dealing for the purposes of research, private study, or education has historically permitted a variety of limited uses that, while technically “copying,” serve the public interest by improving access to knowledge.</p>
<p>But do AI models operate as part of “research” or “education”? Should courts interpret fair dealing in a way that accommodates the machine-driven ingestion of data, given that no human directly reads every line of the text involved? The evolution of these doctrines may hinge on how judges perceive the purpose and effect of AI training. If courts view AI training as an indirect but vital form of research or transformative use, they might carve out new doctrinal space for such practices. If they see it as a commercial shortcut that exploits creators’ investments, the decisions could swing the other way.</p>
<h2>Foundational Jurisprudence</h2>
<p>With these high-profile cases pending, Canadian courts stand at a critical juncture. The jurisprudence that emerges may set the baseline for how we treat AI-driven data extraction for years to come. Given the novelty and complexity of the issues, it is unlikely that a single decision will provide a definitive answer; however, the initial judgments will offer signposts and guiding principles. Yet one thing is clear: the legal battles unfolding around AI, copyright, data extraction, and fair dealing are more than a passing trend. They represent just the beginning of what I predict will be a significant wave of litigation moving forward. They also represent an opportunity for Canadian courts to issue foundational jurisprudence that will shape the parameters of innovation, information sharing, and intellectual property rights for years to come.</p>
<p>&#8212;</p>
<p>Note: Generative AI was used in the preparation of this article.</p>
<p>The post <a href="https://www.slaw.ca/2025/01/17/a-flurry-of-filings-canadas-ai-litigation-landscape-evolves-in-a-single-month/">A Flurry of Filings: Canada’s AI Litigation Landscape Evolves in a Single Month</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI’s Impact on Law: Why the Transformation Narrative Is Overstated</title>
		<link>https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/</link>
					<comments>https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/#comments</comments>
		
		<dc:creator><![CDATA[Robert Diab]]></dc:creator>
		<pubDate>Tue, 31 Dec 2024 12:00:18 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107704</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">A common message we hear today is that AI will soon bring about sweeping changes to the practice of law, making us so much more efficient that we’ll have plenty of time for other things.</p>
<p>I’ve kept my finger on the pulse of AI since ChatGPT appeared in 2022. I’m a heavy user of AI as a law professor and part-time criminal lawyer. I’m constantly experimenting with it and dazzled by its capabilities—you won’t find a bigger fan of AI.</p>
<p>But if there’s one thing that’s clear, it’s that AI will not transform the practice of law. Far from it. . . .  <a href="https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/">AI’s Impact on Law: Why the Transformation Narrative Is Overstated</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">A common message we hear today is that AI will soon bring about sweeping changes to the practice of law, making us so much more efficient that we’ll have plenty of time for other things.</p>
<p>I’ve kept my finger on the pulse of AI since ChatGPT appeared in 2022. I’m a heavy user of AI as a law professor and part-time criminal lawyer. I’m constantly experimenting with it and dazzled by its capabilities—you won’t find a bigger fan of AI.</p>
<p>But if there’s one thing that’s clear, it’s that AI will not transform the practice of law. Far from it.</p>
<p>Saying so paints a grossly misleading picture of the future of law—premised on a skewed reading of the facts and a misconception of how most lawyers spend most of their time.</p>
<h2>The overheated claim</h2>
<p>The claim we’re often hearing is crystallized in a recent post on Jordan Furlong’s (otherwise terrific) Substack titled “<a href="https://jordanfurlong.substack.com/p/how-will-you-spend-your-ai-dividend">How will you spend your AI Dividend?”</a>, where he writes: “But if there’s one effect of Gen AI that hardly anyone disputes, it’s that it vastly reduces the amount of time and human effort required to complete many standard legal tasks.”</p>
<p>True, but what does this mean for the practice as a whole? For Furlong, it means a “productivity revolution is underway in the law, and nobody knows how far it will take us.” His evidence:</p>
<blockquote><p>&#8220;We’ve been aware of this since 2023, when <a href="https://news.bloomberglaw.com/business-and-practice/law-firms-ai-nightmare-is-fewer-billed-hours-and-lower-profits/">a Legal Value Network study </a>conservatively estimated that <a href="https://www.geeklawblog.com/2023/08/ai-pocalypse-the-shocking-impact-on-law-firm-profitability.html">20% of lawyer work tasks at large law firms</a> could be performed by AI. Last week, <a href="https://www.clio.com/about/press/clio-latest-legal-trends-report/">Clio’s 2024 Legal Trends Report</a> went further and concluded that <a href="https://www.zoom.us/clips/share/o7gi6GNCbTyFyTKqgDC5hsE-Ajl5Dpsd8coxJi2XIzZ9Aqw1Di-HWqHR7KjBgSaGNEdDmw.GJLQUjf7p4qZwx8H">57% of lawyers’ hourly billable work activities</a> could be automated by AI. This week, <a href="https://www.thomsonreuters.com/en/c/future-of-professionals.html">Thomson Reuters’s 2024 Future of Professionals Report</a> predicted that <a href="https://www.thomsonreuters.com/content/dam/ewp-m/documents/thomsonreuters/en/pdf/reports/future-of-professionals-report-2024.pdf">AI could free up work time at a pace of four hours per week (200 hours annually)</a> within one year, tripling that total within five. Want a real-life example? In the first quarter of 2024, <a href="https://www.law360.com/pulse/modern-lawyer/articles/1850456">Husch Blackwell displaced about 6,000 lawyer hours using AI</a> tools.&#8221;</p></blockquote>
<p>Drawing on these stats, Furlong asserts that “You could spend hundreds of thousands of dollars annually to support a bunch of leveraged lawyers grinding out billable work — or, you could replace most of them with a powerful Gen AI system&#8230;”</p>
<p>All of this is so wildly inaccurate — so patently implausible — I hardly know where to begin.</p>
<h2>Misconstruing the evidence</h2>
<p>The sources that Furlong cites point to efficiencies on two fronts. Using AI helps lawyers speed up document review and legal research.</p>
<p>Now just stand back and think about it: do lawyers at even the biggest firms spend so much of their time doing these things that 57% of their billable hours could disappear through automation?</p>
<p>Anyone who has worked in a law firm of any size could readily see why this is pure fiction.</p>
<p>But didn’t Husch Blackwell report that it displaced about 6,000 lawyer hours using AI tools? Isn’t that conclusive?</p>
<h2>A closer look at the facts</h2>
<p>Yes, Husch Blackwell did say that. But it also said that it has “more than 1,000 attorneys and nearly 1,200 staff members and paralegals.”</p>
<p>A thousand lawyers each billing, say, 1,500 to 2,000 hours a year amounts to 1.5 to 2 million billable hours a year. Six thousand hours is maybe the work of 3 or 4 lawyers, <em>out of a thousand</em>.</p>
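<p>A quick back-of-the-envelope check, sketched below in Python with the same assumed billing range, shows just how small a dent that is:</p>
<pre><code># Back-of-the-envelope check of the figures above; the 1,500-2,000 hour
# billing range is an assumption, not reported data.
lawyers = 1_000
displaced = 6_000  # lawyer hours Husch Blackwell reported displacing

for hours_per_lawyer in (1_500, 2_000):
    total = lawyers * hours_per_lawyer
    share = displaced / total * 100
    print(f"{displaced:,} of {total:,} hours is {share:.2f}%, "
          f"about {displaced / hours_per_lawyer:.0f} lawyers' worth")

# 6,000 of 1,500,000 hours is 0.40%, about 4 lawyers' worth
# 6,000 of 2,000,000 hours is 0.30%, about 3 lawyers' worth
</code></pre>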
<p>I concede that, at the extreme, AI might make a significant dent in the time a junior associate might spend wading through thousands of documents doing due diligence on a big corporate deal.</p>
<p>I remember doing this as an articling student at a large commercial firm on Bay Street. I also did a lot of research, and AI might have saved me some time there as well.</p>
<p>But this is not lawyering in the vast majority of cases, and even where it does fit this model, AI is not making 57% of the work disappear.</p>
<h2>What practice really involves</h2>
<p>I’ve had the privilege of working on files at a national firm for clients as big as Bell Canada and the Royal Bank, and later, at a small criminal firm, where at one point the client was a 12-year-old girl. I’ve been Crown and defence. I’ve assisted plaintiff and defendant in all kinds of cases.</p>
<p>I saw time and again that no matter how big or small the file, the general nature of the work that lawyers do is the same. And almost all of it requires a human in the loop.</p>
<p>Whether you’re acting for Bell or a mom-and-pop shop, or a child, most of the time spent on a matter involves the same few tasks: holding the client’s hand, communicating with opposing counsel, and carefully reading and rereading crucial documents in the case.</p>
<p>Yes, you may do some research or document review — sometimes a lot. But in each of these cases, AI can only take you so far.</p>
<p>AI can save time combing through piles of documents or cases to find the ones that matter. But most lawyers spend more of their time poring over or crafting key documents — contracts, pleadings, opinions — pondering strategy, implications, or advice.</p>
<p>And while AI might make finding law easier, you can only find things if you know what you’re looking for. Without a good grasp of law, you won’t know how to prompt AI to look in the right places. AI won’t replace our intuition. It can’t come up with creative ways to frame a legal claim.</p>
<p>AI will help us do many things. But the vast majority of our time on most files will still be spent on tasks we need to do ourselves: reassuring the client, persuading a judge or jury by reading the room, or reaching a settlement by using humility and common sense.</p>
<p>That’s the practice. It looked that way a hundred years ago. It will look mostly the same in a hundred more.</p>
<p>The post <a href="https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/">AI’s Impact on Law: Why the Transformation Narrative Is Overstated</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2024/12/31/ais-impact-on-law-why-the-transformation-narrative-is-overstated/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>10 Practical Strategies for Law Schools to Embrace AI</title>
		<link>https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/</link>
					<comments>https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/#comments</comments>
		
		<dc:creator><![CDATA[Guest Blogger]]></dc:creator>
		<pubDate>Tue, 26 Nov 2024 12:00:20 +0000</pubDate>
				<category><![CDATA[Legal Education]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107571</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Artificial intelligence (AI) is transforming nearly every sector of society, and the legal field is no exception. While AI is rapidly reshaping legal practice, legal education risks falling behind.</p>
<p>Surveys of university graduates indicate that they <a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/07/23/new-report-finds-recent-grads-want-ai-be">feel unprepared for the workforce</a> due to a lack of AI integration into their education. Legal regulators like the Law Society of Ontario emphasize that lawyers must understand AI’s risks and benefits to meet <a href="https://lawsocietyontario.azureedge.net/media/lso/media/lawyers/practice-supports-resources/generative-ai-your-professional-obligations.pdf">professional responsibility</a> standards. The gap between what is taught in the classroom and what is required in practice is widening by the day.</p>
<p>Fortunately, there are practical and innovative strategies  . . .  <a href="https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/">10 Practical Strategies for Law Schools to Embrace AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Artificial intelligence (AI) is transforming nearly every sector of society, and the legal field is no exception. While AI is rapidly reshaping legal practice, legal education risks falling behind.</p>
<p>Surveys of university graduates indicate that they <a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/07/23/new-report-finds-recent-grads-want-ai-be">feel unprepared for the workforce</a> due to a lack of AI integration into their education. Legal regulators like the Law Society of Ontario emphasize that lawyers must understand AI’s risks and benefits to meet <a href="https://lawsocietyontario.azureedge.net/media/lso/media/lawyers/practice-supports-resources/generative-ai-your-professional-obligations.pdf">professional responsibility</a> standards. The gap between what is taught in the classroom and what is required in practice is widening by the day.</p>
<p>Fortunately, there are practical and innovative strategies that law schools can adopt to prepare their students for this new era of AI-driven legal practice—without waiting for multi-year curriculum reforms.</p>
<h2>1. AI Literacy Workshops for Students and Faculty</h2>
<p>Law schools must prioritize AI literacy by offering dynamic, hands-on workshops to equip students and faculty with essential AI competencies. These workshops should progress in complexity, starting with foundational AI concepts—like understanding what AI is, effective prompt engineering with generative AI, ethics and professional responsibility, and academic integrity. They can then move into intermediate skills, such as building custom GPTs and analyzing AI ethics through case studies. Advanced sessions can dive deeper into technical topics, including adjusting “temperature” and top-P values, token optimization strategies, and conducting AI impact assessments.</p>
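<p>For those advanced sessions, a short, runnable example is worth an hour of abstraction. The sketch below assumes OpenAI&#8217;s Python client (v1+); the model name, prompt, and values are placeholders for participants to experiment with, not recommended settings:</p>
<pre><code># An illustrative sketch of the temperature and top-P settings discussed
# above, using the OpenAI Python client. Lower temperature makes output
# more predictable; top_p restricts sampling to the most probable tokens.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user",
               "content": "Summarize the fair dealing exception in two sentences."}],
    temperature=0.2,  # try 1.0 or higher and compare the answers
    top_p=0.9,
    max_tokens=150,
)

print(response.choices[0].message.content)
</code></pre>
<p>Rerunning the same prompt at different temperatures, and comparing the answers side by side, is itself a ready-made workshop exercise.</p>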
<p>Real-world “use case” examples will bridge theory and application, fostering institutional confidence and empowering participants to integrate AI seamlessly into legal workflows. Law schools that embrace these initiatives position their community to lead in a profession increasingly defined by technological innovation.</p>
<p>An easy opportunity for students and faculty to learn more is by attending free webinars on AI and law, which occur almost every month, hosted by various legal organizations. There’s also our hybrid <a href="https://allard.ubc.ca/about-us/events-calendar/ai-law-symposium">“AI and Law” symposium</a> taking place April 2, 2025, that is open to students across the country to attend and participate in. You can even organize a “watch party” with a few snacks and put it on the big screen in a classroom for students and faculty. Follow up with a discussion of ideas your law school could pursue.</p>
<h2>2. Grassroots Faculty-Student Initiatives: Building AI Communities Beyond the Classroom</h2>
<p>Law schools should encourage students and faculty to launch grassroots initiatives focused on AI, creating collaborative, low-pressure spaces for learning and experimentation.</p>
<p>These initiatives foster innovation by allowing participants to explore AI beyond formal curricula. A successful example is the new <a href="https://benjaminperrin.ca/ai">UBC AI &amp; Criminal Justice Initiative</a>, which empowers students and faculty to integrate AI into teaching, research, public engagement, and advocacy.</p>
<p>Faculty members with little AI experience can sponsor or join these initiatives, learning alongside students in a supportive environment. These groups empower students with leadership opportunities, nurture creativity, and cultivate tech-savvy communities organically.</p>
<p>By promoting these initiatives, law schools can foster an innovative culture without the administrative overhead of curricular changes—allowing both students and faculty to stay ahead in the rapidly evolving legal tech landscape. An interested faculty member can put up some posters and see who shows up. If you build it, they will come.</p>
<h2>3. Mainstreaming AI into the Existing Curriculum</h2>
<p>Law schools don’t need to wait for formal curriculum reforms to embed AI education. Instructors can integrate AI concepts into their existing courses right now. Professors can highlight how AI is impacting diverse areas such as contracts, torts, evidence law, administrative law, and criminal law – a search on CanLII reveals several relevant cases and lots of commentary.</p>
<p>Expanding the search to other jurisdictions, particularly the U.S., yields some interesting materials for a class or two on AI in these courses. There’s also very thoughtful material on <a href="https://www.indigenous-ai.net/">Indigenous perspectives on AI</a> and other critical perspectives to include. Incorporating AI-focused case studies, guest lectures, and assignments using legal tech tools offers students practical exposure to how AI is reshaping the legal profession.</p>
<p>Faculty can also develop flexible, AI-focused seminars based on already approved general purpose courses that allow for flexibility in what is taught. For instance, I’m teaching a seminar next term for a course that was already on the books, generically titled “Topics in Criminal Justice”. I adapted it to exclusively focus on AI and Criminal Justice. It will use an <a href="https://allard.ubc.ca/about-us/events-calendar/artificial-intelligence-criminal-justice">open-access casebook</a> designed together with interested students, making these resources freely available to other law schools.</p>
<h2>4. Law Schools as Innovation Hubs</h2>
<p>Law schools have the potential to become incubators for AI-driven innovation, providing a safe space where students and faculty can collaborate on cutting-edge technologies without the risks associated with real-world deployment. In this experimental environment, participants can push boundaries, prototype legal tech solutions, and explore novel applications of AI in law.</p>
<p>Adopting a “sandbox” approach cultivates creativity, empowering law schools to shape the future of legal technology rather than merely respond to it. By fostering experimentation and risk-taking in a controlled environment, these incubator-style initiatives can position law schools as thought leaders and innovators in the legal tech space—demonstrating their role not just as educators, but as drivers of technological change. For example, in our upcoming seminar, students will learn how to develop, test, fine-tune, and share their own custom GPT legal prototypes – exclusively for educational use in the classroom.</p>
<h2>5. Partnering with Tech Companies to Test Emerging Legal AI Tools</h2>
<p>Law schools can foster innovation by collaborating with tech companies to organize hackathons and pilot projects. Through these events, students can engage with beta versions of new apps and platforms, gaining practical experience while contributing feedback to refine these technologies.</p>
<p>These collaborations provide valuable hands-on problem-solving opportunities, helping students understand how AI integrates into legal workflows. Additionally, working alongside industry professionals expands students’ networks and enhances their employability. Such partnerships position law schools as forward-thinking institutions, preparing graduates to thrive in a tech-augmented legal landscape.</p>
<h2>6. AI-Enhanced Moot Courts</h2>
<p>Incorporating AI into preparing for moot court competitions offers students real-time feedback to sharpen their advocacy skills. UBC’s <a href="https://eml.ubc.ca/projects/jis/">Judicial Interrogatory Simulator</a> analyzes participants&#8217; arguments and suggests targeted improvements, helping students refine their legal reasoning and prepare for answering judges’ questions. While this version already looks dated, a fully integrated video avatar version is within reach if someone wants to develop it. Imagine an AI moot judge trained on decades of appellate transcripts.</p>
<p>This technology gives students hands-on experience and individualized support that is complemented by traditional teaching and learning approaches. By integrating AI into moot court preparation or creating new moots using AI judges, law schools can enhance the learning experience, fostering confident advocates equipped to leverage AI effectively in the courtroom.</p>
<h2>7. Micro-Credentials in AI and Law</h2>
<p>Law schools can develop micro-credentials in AI and the law (e.g., a 12-week part-time certificate). Intensive, targeted programs can equip students and practising lawyers with essential skills. These certifications demonstrate specialized expertise in areas such as AI-powered legal research and writing, ethics and professional responsibility, AI governance, data privacy, and conducting AI impact assessments.</p>
<p>Micro-credentials complement traditional law degrees by signaling to employers that graduates are prepared to navigate the intersection of law and technology. These programs not only make students more competitive in the job market but also position law schools as forward-thinking institutions that equip graduates with the skills needed to thrive in a tech-driven legal landscape.</p>
<h2>8. Facilitating Active Learning with AI</h2>
<p>To effectively engage students with AI, law schools must move beyond traditional lectures toward workshop-based and problem-solving models. These approaches encourage active participation, collaboration, and hands-on learning—fostering deeper understanding of AI concepts and tools. Instructors act as facilitators, guiding students through real-world challenges.</p>
<p>There are proven <a href="https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education?subtitle=en">educational AI platforms</a> that enhance the learning experience, offering students tailored feedback like a private tutor while preserving oversight and guidance from human instructors. This shift cultivates essential skills such as creativity, critical thinking, and teamwork, better preparing graduates to navigate the complexities of AI-powered legal practice.</p>
<h2>9. Virtual Client Simulations</h2>
<p>AI-powered virtual client simulations, such as intake or client interviews, could provide students with a realistic, immersive way to develop their skills. These simulations would mimic real-world complexities by dynamically adjusting client responses based on student input, offering a personalized and evolving experience.</p>
<p>This environment allows students to refine their practical skills and build confidence in handling diverse client scenarios. Additionally, these simulations introduce students to AI-enhanced service delivery models they will encounter in legal practice, bridging the gap between theory and application. Virtual clients prepare graduates to excel in client-centered roles, ready to leverage AI tools in modern legal workflows. As with all things AI, human oversight of student learning is crucial, and it is something that educational AI tools have been adapted to accommodate.</p>
<h2>10. Virtual Legal Clinics</h2>
<p>An AI-facilitated virtual legal clinic with AI “clients” offers students invaluable hands-on experience with feedback through simulated legal cases. A supervising lawyer or faculty member would also be in the loop. This hybrid model combines AI-generated insights with faculty oversight, ensuring students develop practical skills in a supportive learning environment.</p>
<p>The AI mentor can review students&#8217; work, provide actionable feedback, suggest improvements, and flag ethical concerns. By engaging with AI-powered supervision, students become familiar with how these technologies enhance legal workflows, positioning them to excel in a tech-integrated legal profession. Ensuring that human lawyers provide ultimate oversight is a core requirement of a virtual clinic.</p>
<h2>Conclusion</h2>
<p>This technological shift presents an incredible opportunity. By embracing AI, legal education can prepare students not just to survive, but to thrive. The challenge is not just about teaching students how to use tools but equipping them with the judgment and insight necessary to apply these technologies ethically and effectively. It’s about cultivating future lawyers who can leverage AI to enhance access to justice while maintaining a human-centered approach to legal practice.</p>
<p>AI is reshaping the legal profession, and law schools must act quickly to keep pace. Fortunately, adapting to these changes doesn’t require major curricular reform. By adopting practical strategies—such as workshops, AI-integrated courses, and virtual clinics—law schools can stay ahead, ensuring their students do not fall behind.</p>
<p>The future of legal education lies in fostering technological innovation while preserving the timeless role of an independent legal profession committed to serving the public. This is an exciting time of transformation.</p>
<p>___</p>
<p><em>Benjamin Perrin is a law professor at the University of British Columbia, Peter A. Allard School of Law and leads the UBC AI &amp; Criminal Justice Initiative.</em></p>
<p>The post <a href="https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/">10 Practical Strategies for Law Schools to Embrace AI</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2024/11/26/10-practical-strategies-for-law-schools-to-embrace-ai/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>CJC AI Guidelines for Canadian Courts Leave Room for Improvement</title>
		<link>https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/</link>
					<comments>https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/#comments</comments>
		
		<dc:creator><![CDATA[Guest Blogger]]></dc:creator>
		<pubDate>Mon, 18 Nov 2024 12:00:49 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<category><![CDATA[Practice of Law]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107586</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">The Canadian Judicial Council (CJC) has released its &#8220;<a href="https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf">Guidelines for the Use of Artificial Intelligence in Canadian Courts</a>&#8221; (CJC Guidelines), which represent a significant step towards integrating artificial intelligence (AI) into the Canadian justice system. This article evaluates the CJC Guidelines, analyzing their strengths, weaknesses, and potential implications. Given my experience drafting similar guidelines, I offer constructive recommendations for improvement, focusing on practicality, comprehensiveness, and responsiveness to the unique challenges of AI adoption in Canadian courts.</p>
<p>Practicality and Usefulness of the CJC Guidelines</p>
<p>The CJC Guidelines are undoubtedly useful in laying a conceptual groundwork for AI adoption in  . . .  <a href="https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/">CJC AI Guidelines for Canadian Courts Leave Room for Improvement</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">The Canadian Judicial Council (CJC) has released its &#8220;<a href="https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf">Guidelines for the Use of Artificial Intelligence in Canadian Courts</a>&#8221; (CJC Guidelines), which represent a significant step towards integrating artificial intelligence (AI) into the Canadian justice system. This article evaluates the CJC Guidelines, analyzing their strengths, weaknesses, and potential implications. Given my experience drafting similar guidelines, I offer constructive recommendations for improvement, focusing on practicality, comprehensiveness, and responsiveness to the unique challenges of AI adoption in Canadian courts.</p>
<h2>Practicality and Usefulness of the CJC Guidelines</h2>
<p>The CJC Guidelines are undoubtedly useful in laying a conceptual groundwork for AI adoption in courts. They emphasize key principles such as judicial independence, accountability, transparency, and ethical considerations. However, the guidelines are high-level and aspirational; they lack concrete guidance on implementation, which could lead to inconsistent application or unintended and avoidable risks.</p>
<p>For instance, the Guidelines recommend that courts &#8220;develop a program of education and provide user support&#8221; and &#8220;regularly track the impact of AI deployments&#8221; but do not provide guidance on what these programs should entail or how they should be implemented. This lack of specificity makes it difficult for courts, especially those with varying resources and technical expertise, to operationalize the Guidelines effectively. This disconnect represents a significant barrier to the adoption of technology, and AI specifically, in Canadian courts. Without a fundamental shift in priorities and funding, it is the unfortunate reality that most courts cannot explore and implement AI responsibly in the short term. While the Guidelines are useful as a discussion starter, they are premature in their attempt to provide a framework for implementation across Canada. The CJC would have been wiser to wait until courts had the opportunity to explore AI and develop best practices.</p>
<h2>Evaluation</h2>
<p>The CJC Guidelines represent a commendable effort to address the complex landscape of AI integration in Canadian courts. They successfully underscore the importance of preserving judicial independence, which is crucial for maintaining public trust in the judiciary. Additionally, the Guidelines acknowledge AI’s potential benefits in improving the efficiency of judicial decision-making, encouraging courts to explore its use to responsibly enhance the administration of justice. The emphasis on adopting and using AI consistently with ethical principles and legal obligations helps ensure that AI is used fairly, transparently, and without bias.</p>
<p>However, the Guidelines also exhibit deficiencies. A primary concern is their lack of specificity regarding different types of AI. The Guidelines fail to adequately distinguish between AI applications such as generative AI, automated decision-making systems, and non-generative AI. This distinction is crucial, as each type presents unique ethical and legal considerations. For instance, the use of generative AI in court settings raises concerns about transparency, accountability, and potential bias in the algorithms used. Similarly, automated decision-making systems require careful scrutiny to ensure they do not perpetuate or exacerbate existing societal biases. The Guidelines’ failure to address these nuances could lead to AI implementation that inadvertently compromises fairness and justice.</p>
<p>Further, the Guidelines lack a detailed discussion of human rights and their potential vulnerability to AI’s influence. The right to a fair trial, the right to privacy, and the right to procedural fairness are all potentially impacted by AI use in courts. For example, AI could predict the likelihood of recidivism (see, for example, <a href="https://www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/">this discussion</a> by Molly Callahan at Boston University), which could then be used to deny bail or impose harsher sentences, raising concerns about due process and discrimination. The Guidelines’ lack of depth in this area is a key area for improvement.</p>
<p>Another critical operational issue is insufficient guidance on conducting algorithmic impact assessments. These assessments are vital for ensuring that AI systems are used responsibly and ethically. They help identify potential biases, ensure transparency in decision-making processes, and mitigate the risk of unintended consequences. The Guidelines’ lack of instructions on conducting these assessments leaves courts ill-equipped to navigate the complexities of AI implementation, especially when many courts lack the technical expertise necessary to responsibly explore AI on their own.</p>
<p>The Guidelines also lack detailed guidance on training and capacity-building, which is essential for ensuring court staff can use AI systems effectively and responsibly. The absence of a robust &#8220;human-in-the-loop&#8221; principle raises concerns about over-reliance on AI and eroding judicial autonomy. The Guidelines’ current approach may inadvertently create loopholes for the inappropriate application of AI in court administration and decision-making processes.</p>
<p>Finally, while the Guidelines’ motivation is a broad concern over judicial independence, many of their concerns rest on hypotheticals. How the use of AI by courts or the judiciary can impact the unwritten constitutional principle of judicial independence has not been explored. Without a fundamental understanding of this interaction, there is no way to truly validate these concerns or design AI implementation strategies that account for them.</p>
<h2>Change Management and AI Implementation</h2>
<p>The CJC Guidelines primarily focus on the responsible <em>use</em> of AI. However, they lack guidance on the crucial steps of <em>exploring</em> and <em>implementing</em> AI. This omission leaves courts with little direction on navigating the complexities of selecting appropriate AI tools, assessing their suitability, and integrating them into existing workflows.</p>
<p>My “<a href="https://www.a2jai.ca/s/Exploring-AI-at-High-Risk-Legal-Institutions-b4ta.pdf">Exploring AI at High-Risk Legal Institutions</a>” report emphasizes the importance of a strategic, well-structured change management approach to integrating AI tools in high-risk legal institutions like courts and tribunals. Courts will face significant challenges in adopting AI without clear guidance on change management. The CJC Guidelines could be significantly enhanced by walking courts through best practices on change management with a focus on implementing new technologies. This would provide courts with practical advice to navigate the complexities and risks of AI implementation, ensuring a smoother transition, minimizing disruption, and maximizing the chances of successful AI adoption.</p>
<h2>Judicial Independence</h2>
<p>The CJC Guidelines state that their motivation is a broad concern <em>that</em> the use of AI <em>could</em> impact judicial independence. However, they do not meaningfully explore <em>how</em> AI can impact this fundamental principle. AI has the potential to both support and undermine judicial independence, but this interplay has not been explored or illustrated in a meaningful way.</p>
<p>The reality is the constitutional principle of judicial independence was never adapted for the 21<sup>st</sup> century. Judicial independence has three characteristics: security of tenure, financial security, and administrative independence (<em>Ref re Remuneration of Judges of the Prov. Court of P.E.I.; Ref re Independence and Impartiality of Judges of the Prov. Court of P.E.I.</em>, <a href="https://canlii.ca/t/1fqzp">1997 CanLII 317 (SCC)</a> at <a href="https://canlii.ca/t/1fqzp#par115">para 115</a>). The use of AI in courts would not jeopardize judges’ security of tenure or financial security, so it could only ever threaten administrative independence. While this principle remains of central importance to our constitutional democracy, administrative independence of the judiciary is a concept elucidated in the days when “Zoom” was a sound made by fast cars and 16-bit computers were considered the “next generation” of high-end home computers. In <em>Valente v The Queen</em>, <a href="https://canlii.ca/t/1ftzs">1985 CanLII 25 (SCC)</a> [<em>Valente</em>], the Supreme Court of Canada defined what is protected by administrative independence in narrow terms:</p>
<ol>
<li>Assignment of judges;</li>
<li>Sittings of the court;</li>
<li>Court lists; and,</li>
<li>The related matters of allocation of courtrooms and direction of the administrative staff engaged in carrying out these functions. (<em>Valente</em> at 709)</li>
</ol>
<p>In 1985, these tasks were performed by humans using paper. When courts began adopting technology, little thought was given to the idea that the technology itself could jeopardize judicial independence. What bears significant study and exploration is how the use of AI by judges or court staff could impact administrative independence: whether it falls under the broader logic and reasoning of <em>Valente</em>’s terms, or whether those terms need to be substantively revisited by the Supreme Court of Canada. This is not easily answered, but it is my next research focus with the Artificial Intelligence Risk and Regulation Lab. By addressing these questions, the CJC Guidelines can help ensure that AI is used to strengthen, rather than undermine, judicial independence in Canada.</p>
<h2>Recommendations for Improving Guidelines</h2>
<p>The CJC Guidelines or future guidelines on the use of AI by courts and tribunals would benefit from the following recommendations:</p>
<ol>
<li><strong>Provide Specific Guidance on Different Types and Use Cases of AI</strong>: The Guidelines should offer specific guidance on using different types of AI, such as generative AI and automated decision-making systems, and their various applications within the court system. For instance, the Guidelines should offer specific guidance on using automated decision-making systems (see, for example, the Government of Canada’s <a href="https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592"><em>Directive on Automated Decision-Making</em></a>).</li>
<li><strong>Expand the Discussion of Human Rights</strong>: Include a more detailed discussion of how AI can impact specific human rights, such as the rights to a fair trial, privacy, and procedural fairness. There are helpful precedents in circulation, such as the UNESCO <a href="https://unesdoc.unesco.org/ark:/48223/pf0000390781">Draft Guidelines for the Use of AI Systems in Courts and Tribunals</a> or the Law Commission of Ontario’s <a href="https://www.lco-cdo.org/wp-content/uploads/2024/06/LCO-Submission-to-Government-of-Ontario-Bill-194-Consultations-June-2024.pdf"><em>Submission on Bill 194 &#8211; Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024</em></a>.</li>
<li><strong>Provide More Detailed Guidance on How to Conduct Algorithmic Impact Assessments</strong>: This guidance should include recommendations for the scope and format of these assessments. A standard approach may be desirable, such as the use of UNESCO&#8217;s <a href="https://www.unesco.org/ethics-ai/en/eia">Ethical Impact Assessment</a> or the Government of Canada’s <a href="https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html">Algorithmic Impact Assessment</a>.</li>
<li><strong>Provide More Detailed Guidance on Training and Capacity-Building</strong>: This should include guidance on developing and implementing training programs and providing ongoing education and support.</li>
<li><strong>Incorporate a Robust &#8220;Human in the Loop&#8221; Principle</strong>: This principle should unequivocally state that AI should not replace human judgment in judicial decision-making (see, for example, the Government of Canada’s <a href="https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592"><em>Directive on Automated Decision-Making</em></a>; see also the Alberta Courts’ <a href="https://albertacourts.ca/kb/resources/announcements/notice-to-the-profession-public---use-of-ai-in-citations-submissions"><em>Notice to the Profession &amp; Public – Ensuring the Integrity of Court Submissions When Using Large Language Models</em></a>; see also Federal Court’s <a href="https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence"><em>Interim Principles and Guidelines on the Court’s Use of AI</em></a>).</li>
<li><strong>Incorporate Guidance on Change Management</strong>: This would involve guidance on assessing organizational and justice system users’ needs, selecting appropriate AI tools, managing risks, and engaging stakeholders.</li>
<li><strong>Deepen the Analysis and Understanding of Judicial Independence</strong>: The Guidelines should dedicate a section to explaining the complex interplay between AI and judicial independence. In the alternative, such policies should defer to some established understanding of this interaction. Regulation cannot be effective when those being regulated do not understand why they are being regulated.</li>
</ol>
<h2>Conclusion &amp; Next Steps</h2>
<p>The CJC Guidelines provide a valuable starting point for AI’s responsible and ethical adoption in Canadian courts. However, these Guidelines and similar policies need to be further developed to address the deficiencies identified in this assessment. By incorporating these recommendations, the CJC and individual courts and tribunals can create a more comprehensive and informative set of guidelines that will help ensure that AI promotes fairness, access to justice, and the efficient administration of justice.</p>
<p>As a next step in this field, I am embarking on an effort to provide a more precise answer to whether, when, and how specific use cases of artificial intelligence may impact judicial independence. The aim of this next project is to inform ongoing discussions on how courts can responsibly and safely integrate technologies like AI without jeopardizing judicial independence or the public’s perception of it. As an overriding constitutional risk, it is paramount that any plan to explore, implement, and use AI in courts is contextualized and framed by the enduring necessity to preserve judicial independence.</p>
<p>You can read a more in-depth evaluation of the CJC Guidelines in “<a href="https://www.airrlab.com/lab-blog/towards-responsible-ai-integration">Towards Responsible AI Integration: Evaluating the CJC Guidelines for Canadian Courts</a>”, published by the Artificial Intelligence Risk and Regulation Lab.</p>
<p>About the <a href="https://www.airrlab.com/"><strong>Artificial Intelligence Risk and Regulation Lab</strong></a><strong> (AIRRL)</strong>: Founded in 2023 under the <a href="https://bcace.org/">Access to Justice Centre for Excellence</a>, the AIRRL is dedicated to exploring how AI can transform the justice system while safeguarding foundational principles. Our research seeks to develop frameworks and recommendations that ensure AI’s integration enhances access to justice rather than detracts from it. We believe that responsible, evidence-based AI policies are essential to protect legal institutions&#8217; integrity and broaden access to justice for all in the 21st century. By fostering interdisciplinary collaboration, we aim to support a balanced approach to AI regulation that aligns technological progress with public trust and ethical accountability.</p>
<p>The post <a href="https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/">CJC AI Guidelines for Canadian Courts Leave Room for Improvement</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2024/11/18/cjc-ai-guidelines-for-canadian-courts-leave-room-for-improvement/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Regulation on the Rocks: Why Canada’s First AI Law Looks Likely to Fail</title>
		<link>https://www.slaw.ca/2024/11/13/regulation-on-the-rocks-why-canadas-first-ai-law-looks-likely-to-fail/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Wed, 13 Nov 2024 12:00:27 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107581</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br />Introduction</p>
<p class="lead">In June of 2022, the Government of Canada tabled Bill C-27, <em>the Digital Charter Implementation Act</em>,<a href="#_ftn1" name="_ftnref1">[1]</a> making it one of the earlier countries in the world to commence work on a national level Artificial Intelligence (AI) regulatory framework. Unfortunately, due to a complex array of factors—including criticisms of its scope, legislative delays and political instability—the bill now faces a significant risk of failure.</p>
<p>Bill C-27 is an omnibus bill that contains three pieces of legislation: the <em>Consumer Privacy Protection Act </em>(CPPA), the <em>Personal Information and Data Protection Tribunal Act</em> (PIDPA) and the <em>Artificial Intelligence and Data Act</em> . . .  <a href="https://www.slaw.ca/2024/11/13/regulation-on-the-rocks-why-canadas-first-ai-law-looks-likely-to-fail/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/11/13/regulation-on-the-rocks-why-canadas-first-ai-law-looks-likely-to-fail/">Regulation on the Rocks: Why Canada’s First AI Law Looks Likely to Fail</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><h2>Introduction</h2>
<p class="lead">In June of 2022, the Government of Canada tabled Bill C-27, <em>the Digital Charter Implementation Act</em>,<a href="#_ftn1" name="_ftnref1">[1]</a> making it one of the earlier countries in the world to commence work on a national level Artificial Intelligence (AI) regulatory framework. Unfortunately, due to a complex array of factors—including criticisms of its scope, legislative delays and political instability—the bill now faces a significant risk of failure.</p>
<p>Bill C-27 is an omnibus bill that contains three pieces of legislation: the <em>Consumer Privacy Protection Act </em>(CPPA), the <em>Personal Information and Data Protection Tribunal Act</em> (PIDPA) and the <em>Artificial Intelligence and Data Act</em> (AIDA). The CPPA and the PIDPA are aimed at modernizing Canada’s private-sector privacy law framework while the AIDA is intended to create new legal rules for the safe development and implementation of artificial intelligence.<a href="#_ftn2" name="_ftnref2">[2]</a> AIDA seeks to regulate high-impact AI systems deployed for commercial use in Canada by prioritizing risk management, transparency, and accountability.</p>
<p>As discussed in earlier columns, I am generally very supportive of the risk management approach of AIDA; however, it has been the subject of significant criticism from a variety of stakeholders. Recent amendments proposed by the Minister of Innovation, Science and Industry reflect an effort to address stakeholder criticisms, but they have not alleviated widespread concerns about the bill’s viability in its current form. Additionally, combining AIDA with the broader privacy and data governance mandates of the CPPA and PIDPA may have diluted its focus, complicating the legislative process and heightening opposition among lawmakers. With an impending federal election and increased political volatility, Canada’s ability to implement its first comprehensive AI law remains uncertain. Unfortunately, Canada is currently at risk of falling behind global counterparts, potentially losing its competitive edge in AI innovation and leaving open the possibility of unregulated AI technologies impacting privacy, safety, and public trust.</p>
<h2>Background</h2>
<p>Bill C-27 is not Canada’s first recent attempt to update its privacy laws. A previous attempt, Bill C-11, introduced in November 2020, died on the order paper in April 2021 when Parliament dissolved. The two privacy law acts in Bill C-11 were subsequently combined with a new attempt to regulate AI in Bill C-27. Introduced in June 2022, Bill C-27 has slowly made its way through the legislative process, completing second reading in April 2023. It has since been referred to the Standing Committee on Industry and Technology (INDU), which has begun its review. The INDU Committee minutes reveal slow progress and contentious discussions on the bill&#8217;s contents.</p>
<p>Notably, the Minister of Innovation, Science and Industry appeared before the Committee and indicated that he would be providing proposed amendments to both the CPPA and AIDA. These proposed amendments came by way of letter from the Minister in October and November 2023 respectively, and in the case of the AIDA, included significant changes to address some of the key concerns addressed by stakeholders.<a href="#_ftn3" name="_ftnref3">[3]</a> These included amending the definition of “Artificial Intelligence” to align with other international AI regulatory efforts as well as identifying a list of seven initial classes of “High Impact” AI systems. Despite what I see as the positive impact of these changes, they have unfortunately failed to significantly enhance the bill&#8217;s chances of passage.</p>
<h2>Current Status</h2>
<p>In light of the slow progress through the INDU Committee, commentators have begun to publicly doubt whether Bill C-27 will pass before the next election is called or the current government falls to a non-confidence vote. As an example, a recent opinion piece by Kris Klein, the managing director for Canada of the International Association of Privacy Professionals, noted in regard to the INDU Committee’s recent return to its clause-by-clause review of the Bill that “…not one of its members showed any interest in actually getting that job done. It was classic delays, and I don&#8217;t think any one party or member is to blame. So, in Parliament, zero happened on privacy or AI.”<a href="#_ftn4" name="_ftnref4">[4]</a> Indeed, the minutes of the most recent meeting to consider the Bill, on September 26, 2024, contain some insight into the political dysfunction around the bill, with the proposal of the following motion.</p>
<blockquote><p><strong><em>Rick Perkins moved, — That, with regard to the committees ongoing study of Bill C-27, and given Minister Champagne has accused opposition parties of slowing down consideration of the bill, but given that:</em></strong></p>
<p style="padding-left: 40px;"><strong><em>(i) the Minister delayed consideration of the bill for a year by leaving it on the order paper, preventing its consideration in second reading; and given</em></strong></p>
<p style="padding-left: 40px;"><strong><em>(ii) liberal members of the industry committee have continuously filibustered consideration of the bill for five out of the ten meetings held on clause-by-clause, to prevent the passage of amendments recommended by the Privacy Commissioner;</em></strong></p>
<p><strong><em>the committee therefore express its disagreement with Minister Champagne comments, and orders the clerk of the committee to draft a letter to the Minister requesting that his members stop their filibuster of Bill C-27.</em></strong></p>
<p><strong><em>After debate, by unanimous consent, the motion was withdrawn.<a href="#_ftn5" name="_ftnref5">[5]</a></em></strong></p></blockquote>
<p>The reasons behind the delay in moving this bill through the legislative process are likely numerous but in my opinion primarily rest on two factors: the complexity of combining two significant regulatory reforms into one bill and the unstable political environment in the country at this time. By combining both privacy and AI regulation into a single piece of legislation, the government has taken on an immense regulatory burden that would be challenging to manage even without additional political friction.</p>
<p>Privacy advocates and tech experts alike have expressed concern that the inclusion of AI regulation as a secondary component detracts from C-27’s primary mandate of updating privacy protections. This dual approach risks spreading resources and political capital too thin, ultimately compromising the efficacy of both the privacy and AI components. A narrower focus on privacy reform, or separate, more specialized legislation for AI, might have been more achievable given the current political climate and timeline constraints. The most immediate threat to Bill C-27’s survival is Canada’s political instability, with the potential for a successful non-confidence vote or the call of an election before the passage of the bill. Political opposition is likely to grow in the run-up to the election, and the Conservative Party has voiced concerns about regulatory overreach, questioning the bill’s implications for Canadian businesses. If that party were to form government in the future, it is conceivable that it would seek to overhaul, or even scrap, the bill, leaving Canada without a regulatory framework for AI and updated privacy standards.</p>
<h2>Conclusion</h2>
<p>As Canada grapples with the complexities of AI legislation amid a challenging political landscape, other nations are moving forward with their regulatory frameworks. Notably, the European Union enacted its AI Act on August 1, 2024, setting a global standard and intensifying pressure on Canada to implement comparable regulations. Domestically, industry leaders and advocacy groups alike are calling for clearer regulatory guidelines on AI, citing concerns not only about compliance expectations but also about the risks of harm to vulnerable populations in an unregulated environment. AI technology is rapidly permeating everyday life in Canada, while efforts to regulate its impact remain stalled.</p>
<p>Canada stands at a critical juncture: if Bill C-27 fails, as I and other commentators believe it will, the next government will inherit both the regulatory urgency and the need to rebuild trust and momentum with stakeholders. As the global AI regulatory landscape advances, so does the urgency for Canada to enact effective AI legislation. However, ensuring that such legislation succeeds may require a more targeted and streamlined approach than what Bill C-27 currently offers. At present, the likelihood of Bill C-27 becoming law is uncertain, and it is growing increasingly likely that Canada will soon find itself forced to restart its efforts to regulate AI from the ground up.</p>
<p>Disclosure: Generative AI was used in the development of this post.</p>
<p>_________________</p>
<p><a href="#_ftnref1" name="_ftn1">[1]</a> Bill C-27, <em>An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts</em>, 1st Sess, 44th Parl, 2022​.</p>
<p><a href="#_ftnref2" name="_ftn2">[2]</a> Canada, Innovation, Science and Economic Development Canada, <em>The Artificial Intelligence and Data Act (AIDA) – Companion Document,</em> 2022, online: <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document</a>.</p>
<p><a href="#_ftnref3" name="_ftn3">[3]</a> See Canada, House of Commons, Standing Committee on Industry and Technology<strong>,</strong> <em>Submission from the Minister of Innovation, Science and Industry,</em> 44th Parl, 1st Sess, (20 October 2023), online: <a href="https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12633023/12633023/MinisterOfInnovationScienceAndIndustry-2023-10-20-e.pdf">https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12633023/12633023/MinisterOfInnovationScienceAndIndustry-2023-10-20-e.pdf</a>. and Canada, House of Commons, Standing Committee on Industry and Technology<strong>,</strong> <em>Submission from the Minister of Innovation, Science and Industry,</em> 44th Parl, 1st Sess, (20 October 2023), online: <a href="https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12633023/12633023/MinisterOfInnovationScienceAndIndustry-2023-10-20-e.pdf">https://www.ourcommons.ca/content/Committee/441/INDU/WebDoc/WD12633023/12633023/MinisterOfInnovationScienceAndIndustry-2023-10-20-e.pdf</a>.</p>
<p><a href="#_ftnref4" name="_ftn4">[4]</a> Kris Klein, &#8220;Notes from the IAPP Canada: Lack of Bill C-27 progress &#8216;frustrating'&#8221; (20 September 2024), online: <em>International Association of Privacy Professionals (IAPP)</em> <a href="https://iapp.org/news/a/notes-from-the-iapp-canada-Lack-of-Bill-C-27-progress-frustrating-">https://iapp.org/news/a/notes-from-the-iapp-canada-Lack-of-Bill-C-27-progress-frustrating-</a>.</p>
<p><a href="#_ftnref5" name="_ftn5">[5]</a> Canada, House of Commons, Standing Committee on Industry and Technology, <em>Minutes of Proceedings, Meeting No 136</em> (26 September 2024), 44th Parl, 1st Sess, online: <a href="https://www.ourcommons.ca/DocumentViewer/en/44-1/INDU/meeting-136/minutes">https://www.ourcommons.ca/DocumentViewer/en/44-1/INDU/meeting-136/minutes</a>.</p>
<p>The post <a href="https://www.slaw.ca/2024/11/13/regulation-on-the-rocks-why-canadas-first-ai-law-looks-likely-to-fail/">Regulation on the Rocks: Why Canada’s First AI Law Looks Likely to Fail</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Heritage Status for Legal Systems: Preserving History While Embracing Legal Innovation</title>
		<link>https://www.slaw.ca/2024/10/17/heritage-status-for-legal-systems-preserving-history-while-embracing-legal-innovation/</link>
		
		<dc:creator><![CDATA[Sarah A. Sutherland]]></dc:creator>
		<pubDate>Thu, 17 Oct 2024 11:03:23 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107519</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In 2019, <a href="https://www.strandbooks.com/">The Strand bookstore in New York</a> and the building it inhabits were granted heritage status by the city&#8217;s Landmarks Preservation Commission, and they threatened to sue the city in response (the story was widely reported, <a href="https://www.theguardian.com/books/2019/oct/11/new-york-the-strand-bookstore-landmark-status-sue">here is a story from the Guardian</a>). The owner&#8217;s concern was that the administrative requirements associated with the designation would be onerous and that the bookstore might not be viable with them. They said this is especially relevant given the competition from online retailers like Amazon.</p>
<p><img decoding="async" class="alignnone wp-image-107520 size-large" src="https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-600x800.jpg" alt="Photograph of the Strand Bookstore's awning" width="600" height="800" srcset="https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-600x800.jpg 600w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-300x400.jpg 300w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-150x200.jpg 150w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-768x1024.jpg 768w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-1152x1536.jpg 1152w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-1536x2048.jpg 1536w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-scaled.jpg 1920w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>The Strand is still in business, but the reason this matters here is recognizing that legal  . . .  <a href="https://www.slaw.ca/2024/10/17/heritage-status-for-legal-systems-preserving-history-while-embracing-legal-innovation/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/10/17/heritage-status-for-legal-systems-preserving-history-while-embracing-legal-innovation/">Heritage Status for Legal Systems: Preserving History While Embracing Legal Innovation</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In 2019, <a href="https://www.strandbooks.com/">The Strand bookstore in New York</a> and the building it inhabits were granted heritage status by the city&#8217;s Landmarks Preservation Commission, and its owners threatened to sue the city in response (the story was widely reported; <a href="https://www.theguardian.com/books/2019/oct/11/new-york-the-strand-bookstore-landmark-status-sue">here is a story from the Guardian</a>). The owner&#8217;s concern was that the administrative requirements associated with the designation would be onerous and that the bookstore might not be viable with them, especially given the competition from online retailers like Amazon.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-107520 size-large" src="https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-600x800.jpg" alt="Photograph of the Strand Bookstore's awning" width="600" height="800" srcset="https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-600x800.jpg 600w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-300x400.jpg 300w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-150x200.jpg 150w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-768x1024.jpg 768w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-1152x1536.jpg 1152w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-1536x2048.jpg 1536w, https://www.slaw.ca/wp-content/uploads/2024/10/IMG_8554-scaled.jpg 1920w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>
<p>The Strand is still in business, but the reason this matters here is that legal organizations, whether they are courts, legislatures, firms, publishers, libraries, or something else, all bear the responsibility of operating over extended timelines. Essentially, they all have to act as though they have heritage status.</p>
<p>Many of those organizations do recognize that placing their work in context frequently requires long time scales. I hope they also recognize their commitment to posterity, and that they don&#8217;t make decisions based on short-term needs that will not serve communities over the longer timescales that governance requires. In law libraries, for example, it is not uncommon to receive requests for information dating from the 1940s, the 1860s, or even the 1730s. Much of this information is available online, but it frequently lacks core details: regulations, for instance, may be posted electronically without the appended visual material, such as maps, which still requires a trip to the library&#8217;s basement to access.</p>
<p>Practitioners of different scientific disciplines position themselves against each other and don&#8217;t always understand the issues that affect research outside their own experience. Some subjects, such as chemistry and physics, attempt to isolate particular processes, with the goal of understanding the underlying attributes of nature. This isn&#8217;t always possible for other subjects, such as geology or ecology. The passage of time brings so much randomness that it is impossible to control all the variables that may affect results in the ways that researchers might like. If they did so, they wouldn&#8217;t be studying geology or ecology any more. Marcia Bjornerud&#8217;s book <a href="https://press.princeton.edu/books/hardcover/9780691181202/timefulness"><em>Timefulness: How Thinking Like a Geologist Can Help Save the World</em></a> discusses this at length. If you prefer audio-visual content, you can watch Dr. Bjornerud speak about this topic for The Long Now Foundation here:</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Pd9seKaplDI?si=id3qot5ZtSlfX0Mk" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>There have been many times in the past when the needs of the present and posterity were not met and which continue to present problems. To give a specific example that&#8217;s local for me, it used to be a common practice for restrictive covenants to be placed on land titles to restrict ownership to certain ethnicities in some neighbourhoods in British Columbia. They are no longer legally enforceable, but it is understandable that people don&#8217;t want them to be on the deeds at all, and removing them is an ongoing issue (this was reported in <a href="https://nationalpost.com/news/canada/b-c-property-titles-bear-reminders-of-a-time-when-race-based-covenants-kept-neighbourhoods-white"><em>National</em> <em>Post</em></a> and <a href="https://www.nsnews.com/local-news/west-vancouver-homeowner-pushing-for-end-to-racist-land-titles-5353280"><em>North Shore News</em></a>).</p>
<p>This is just one example of a distasteful historical event that is reflected in the law, which can&#8217;t be taken in isolation from its historical context, because the law is continuous and has been built over time. As we continue to integrate technology more intimately into our work in novel ways, we are creating new ways for issues to emerge. I hope that these issues will be considered by both the developers and users of legal technology. Communities that don&#8217;t value their heritage lose a great deal, but balancing these needs with those of the present is also difficult and the need for new processes can&#8217;t be ignored.</p>
<p>(I also wrote about time and the law in <a href="https://www.slaw.ca/2023/01/27/considering-the-time-element-in-law/">my Slaw column from January 2023</a>.)</p>
<p>The post <a href="https://www.slaw.ca/2024/10/17/heritage-status-for-legal-systems-preserving-history-while-embracing-legal-innovation/">Heritage Status for Legal Systems: Preserving History While Embracing Legal Innovation</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 2</title>
		<link>https://www.slaw.ca/2024/09/13/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-2/</link>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Fri, 13 Sep 2024 11:00:40 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107332</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">In <a href="https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/">part 1 of this article</a>, we explored two different areas of the regulation of artificial Intelligence (AI) in Canada. These included existing laws of general application that apply to AI and are in force currently, as well proposed legislation that would regulate the commercial use of AI in Canada directly, known as the Artificial Intelligence and Data Act (AIDA). In part 2 of this article, I will introduce a number of international developments in the regulation of AI that have an impact on Canada and introduce the primary international norms that are developing in this area. The article  . . .  <a href="https://www.slaw.ca/2024/09/13/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-2/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/09/13/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-2/">Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 2</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">In <a href="https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/">part 1 of this article</a>, we explored two different areas of the regulation of artificial intelligence (AI) in Canada. These included existing laws of general application that apply to AI and are currently in force, as well as proposed legislation that would regulate the commercial use of AI in Canada directly, known as the Artificial Intelligence and Data Act (AIDA). In part 2 of this article, I will introduce a number of international developments in the regulation of AI that have an impact on Canada and outline the primary international norms that are developing in this area. The article will conclude with a brief discussion on how organizations can use these sources to inform their development of policies and procedures around the use of AI in their workspaces.</p>
<h2>International AI Regulatory and Framework Developments</h2>
<p>As the regulatory environment for AI rapidly develops around the world, it is important for Canadian lawyers, regulators and policy makers to have an informed view of this landscape. Given Canada’s deeply integrated global trade networks and technological ecosystems, harmonizing AI regulations with international standards is essential. Such harmonization helps to prevent fragmented approaches that could stifle innovation, create trade barriers and lead to regulatory arbitrage that could harm Canadian interests in this important developing field.</p>
<p>Indeed, we have already seen a recognition of this importance in the evolution of the definition of “artificial intelligence system” in the proposed AIDA, which has been revised from its original text to align more closely with the definitions used in the Organization for Economic Co-operation and Development’s (OECD) AI Principles and the European Union’s (EU) AI Act.</p>
<p>There is of course a significant amount of regulatory activity occurring around the world at the moment regarding AI, and a summary of all significant developments is outside the scope of this article. Stay tuned to the developing website of The AI Risk and Regulation Lab at the University of Victoria (www.AIRRLAB.com), where a more fulsome overview of international regulatory developments will be posted in the coming months. The most significant international regulatory and framework developments relevant to Canada to date include the following:</p>
<h3>EU AI Act</h3>
<p>The EU AI Act is one of the first comprehensive regulatory and legal frameworks for AI in the world. It came into force in August of 2024, with provisions gradually being phased in over a 36-month period. The legislation classifies AI systems based on the level of risk that they pose, with categories ranging from minimal risk through to unacceptable risk. Higher-risk systems are subject to stringent requirements including risk assessments, transparency obligations and human oversight.</p>
<h3>G7 Hiroshima AI Process</h3>
<p>The G7 Hiroshima AI Process, launched in 2023, is the latest collaborative effort by the Group of Seven nations to establish common principles and policies for the governance of AI, with a particular focus on generative AI technologies. This initiative builds on earlier efforts, emphasizing transparency, accountability, safety, and international cooperation to ensure AI technologies respect human rights, democratic values, and the rule of law.</p>
<h3>UNESCO Recommendations on the Ethics of AI</h3>
<p>The United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Recommendation on the Ethics of Artificial Intelligence is a framework adopted in November 2021, aimed at guiding the ethical development and use of AI on a global scale. This document outlines principles and values that should underpin AI systems, such as respect for human dignity, privacy, non-discrimination, and environmental sustainability. The Recommendation emphasizes the importance of ensuring that AI technologies are inclusive, transparent, and accountable, and provides member states with guidance on implementing these ethical principles in national policies and regulations.</p>
<h3>OECD’s AI Principles</h3>
<p>The OECD’s AI Principles, adopted in May 2019, are one of the first comprehensive international frameworks designed to promote trustworthy artificial intelligence. These principles, endorsed by over 40 countries including Canada, emphasize the responsible development and deployment of AI systems. The key tenets include ensuring that AI is transparent, fair, and accountable, and that it benefits people and the planet. The OECD also stresses the importance of robust safety and security measures, as well as the need for ongoing research and innovation that respects human rights and democratic values.</p>
<h2>Introduction to Developing Norms in AI Regulation</h2>
<p>In this article, when I use the word “norms” I am referring to widely accepted principles and best practices that guide the responsible development and use of AI technologies. These norms play a vital role in shaping behavior and expectations across the AI community. The sources of developing norms in this area include regulation, international frameworks, ethical guidelines and technical standards that help ensure AI aligns with societal values. The significance of norms lies in their ability to provide a consistent approach to AI governance. As AI evolves, these norms offer a foundation for developing regulations and policy.</p>
<p>The impact of these norms can be seen in the development of the AIDA in Canada, which is guided by a set of six principles that are intended to align with international norms: human oversight and monitoring; transparency; fairness and equity; safety; accountability; and validity and robustness. Some version of these principles is found in all of the international regulatory and framework documents referenced above, and they should form the foundation of any attempt to develop internal policy and procedure documents for the implementation of AI in Canadian workplaces.</p>
<h2>Creating AI Policies and Procedures</h2>
<p>As organizations attempt to navigate the complex and rapidly evolving landscape of AI implementation, there are several basic practical steps that they can take to develop robust internal policies and procedures. These steps not only ensure compliance with emerging laws such as the AIDA but also align with broader norms and ethical guidelines that are increasingly influencing AI governance around the world. These basic steps include the following:</p>
<h3>Consider Domestic Legislation and International Norms</h3>
<p>A useful first step is to identify applicable domestic legislation and document any relevant principles to incorporate into your policy. For example, almost all organizations in Canada will be subject to some form of privacy and information-use law. In this circumstance, you would want to identify any uses of AI in your organization that could relate to the legislative requirements under the relevant law, such as the Personal Information Protection and Electronic Documents Act (PIPEDA) or provincial equivalents. This might include AI applications that handle personal data, make automated decisions about individuals, or involve data analytics.</p>
<p>Once all domestic legislation, including the AIDA, is considered, further guidance can be taken from the norms reflected in the framework documents referenced above. For instance, transparency, accountability, and human oversight in AI systems are core principles across these frameworks. By embedding these principles into your organization&#8217;s AI policy framework, you align with global standards, reducing the risk of future compliance issues and enhancing the trustworthiness of your AI systems.</p>
<h3>Conduct a Risk Assessment</h3>
<p>A core tenet of the requirements under the proposed AIDA is the need for risk management when implementing AI in a commercial setting. Although not every implementation of AI in a Canadian workplace will be subject to AIDA, it is nonetheless good practice to consider risk when developing policy in this area. Higher-risk AI systems, such as those implemented in a health care or legal setting, should be subject to more stringent oversight and controls. This includes implementing rigorous risk assessments, conducting regular audits, and establishing clear protocols for human intervention when needed.</p>
<h3>Consider Ethical Guidelines</h3>
<p>Using the ethical frameworks provided by UNESCO and the OECD as a foundation, Canadian organizations should develop internal policies that ensure AI systems are used in ways that respect human rights, promote fairness, and avoid discrimination. This includes creating policies that address issues such as bias in AI algorithms, data privacy, and the environmental impact of AI technologies.</p>
<h3>Establish Transparent Reporting</h3>
<p>Transparency is a recurring theme in all major international AI frameworks and is likely to be a cornerstone of Canadian AI regulation under AIDA. If appropriate, organizations should consider developing clear and transparent reporting mechanisms that document how AI systems are being used, and the steps taken to mitigate any identified risks.</p>
<h3>Engage in Continuous Monitoring and Adaptation</h3>
<p>The rapid pace of AI development means that regulatory and normative standards are constantly evolving. Organizations should consider committing to continuous monitoring of both technological advancements and regulatory changes. This involves regularly updating policies and procedures to reflect the latest international standards and best practices, as well as ensuring ongoing compliance with Canadian law.</p>
<p>AI policy development is evolving swiftly in Canada, and while there are no blueprints for this relatively new field of practice, guidance can be taken from the sources discussed above and in part 1 of this article. I look forward to hearing perspectives on this topic from anyone who is working in this exciting and developing field.</p>
<p>Disclosure: Generative AI was used in the development of this post.</p>
<p>The post <a href="https://www.slaw.ca/2024/09/13/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-2/">Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 2</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 1</title>
		<link>https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/</link>
					<comments>https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/#comments</comments>
		
		<dc:creator><![CDATA[Michael Litchfield]]></dc:creator>
		<pubDate>Wed, 17 Jul 2024 11:00:08 +0000</pubDate>
				<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=107123</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">Throughout my career, I have been attracted to and fortunate enough to work on various initiatives that push the envelope in particularly challenging and fast-moving areas. This work can sometimes induce anxiety due to its unpredictable nature and pace, but it is never boring and it is often highly rewarding. That being said, nothing I have done in the past has come close to the pace of change and potential for impact than working in the management of risk for artificial intelligence (AI) implementation.</p>
<p>The combination of the rapid advancement of AI technology and the slow pace  . . .  <a href="https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/">Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 1</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">Throughout my career, I have been attracted to and fortunate enough to work on various initiatives that push the envelope in particularly challenging and fast-moving areas. This work can sometimes induce anxiety due to its unpredictable nature and pace, but it is never boring and it is often highly rewarding. That being said, nothing I have done in the past has come close to the pace of change and potential for impact of working in the management of risk for artificial intelligence (AI) implementation.</p>
<p>The combination of the rapid advancement of AI technology and the slow pace of regulatory and policy responses is currently creating a uniquely challenging but fascinating work environment. In the academic space, at the AI Risk and Regulation Lab at the University of Victoria, we are engaged in a number of projects, but none perhaps more important at the moment than a mapping project in which we endeavour to track and map existing regulatory responses to AI across the world. On the professional practice side, the demand I am seeing from clients in this space is an urgent need for assistance with the development of policies and procedures for AI use and implementation. Against this backdrop, I was recently asked to deliver a presentation providing a snapshot of the existing and developing AI regulatory environment in Canada. In parts 1 and 2 of this article, I aim to summarize the key points of that presentation for those interested in this field who may not have the time to gather all these details themselves.</p>
<p>In part 1 of this article series, I will address two different areas of regulation for AI: existing laws of general application and proposed laws that would directly regulate AI in Canada. In part 2, I will address the rapidly developing international norms in this area, as well as how organizations in Canada can use the existing laws, proposed laws and developing international norms to inform the development of policies and procedures in this important area.</p>
<h2>Existing Laws of General Application</h2>
<p>There are a wide variety of existing laws in Canada that currently apply to AI. The list of these laws is substantial and includes privacy laws, human rights laws, intellectual property laws, consumer protection laws and many other industry-specific laws such as those in the fields of health care consent and telecommunications. To highlight the challenges of applying existing laws to AI, I would like to focus on two specific examples: privacy law and intellectual property law.</p>
<h3><em>Privacy Law</em></h3>
<p>Privacy laws in Canada, whether federal or provincial, require that consent for collecting personal information be informed, specific, and voluntary. The challenge in this arena is that while certain AI companies state that they are striving to adhere to these privacy law principles, the practical application of AI technology makes compliance difficult. For instance, the opaque nature of many AI algorithms and data processing methods can make compliance with Canadian privacy laws almost impossible at this point in time. The result is that there are significant gaps in current compliance and, from my perspective, no clear roadmap forward to deal with them. As it is unlikely that the technological aspects of this problem will be solved in the short term, a useful first step will be the development of clear and robust consent processes that take into consideration the complexities of AI data usage.</p>
<h3><em>Intellectual Property Law </em></h3>
<p>The primary intersection of AI and intellectual property law in Canada is the field of copyright. Copyright law, governed by the <em>Copyright Act</em>, provides protection for original works of authorship, including literary, artistic, musical, and dramatic works. As AI technologies evolve, they raise significant copyright issues concerning both the inputs (data and works used to train AI systems) and the outputs (works generated by AI systems). It is no secret that AI companies frequently use copyrighted works without permission for training purposes. This practice raises questions about copyright infringement, and numerous cases are currently moving through the courts on this subject. On the other side of the equation, the outputs generated by AI systems, such as text, images and music, also raise significant copyright questions. The primary issue with outputs is that traditional copyright law recognizes human authorship: under the <em>Copyright Act</em>, an author is usually a natural person. This leaves us currently in an uncertain position about who, if anyone, holds the copyright to AI-generated works.</p>
<h2>Proposed Laws to Directly Regulate AI</h2>
<p>The rapid development of AI technologies prompted the Federal Government to propose new legislation in 2022 specifically aimed at addressing the unique challenges and opportunities presented by AI. The centerpiece of this legislative effort is the <em>Artificial Intelligence and Data Act</em> (AIDA), which seeks to establish a comprehensive framework for the regulation of AI in Canada.</p>
<p>In scope, the AIDA is intended to regulate AI systems that have a significant impact on individuals&#8217; rights, health, and economic interests. Its stated purpose is to promote responsible AI innovation and ensure that AI technologies are developed and used in a manner that respects human rights, fosters transparency, and enhances public trust. The AIDA adopts a risk-based framework, categorizing AI systems based on their potential harm. High-impact AI systems, which pose greater risks to individuals and society, will be subject to more stringent requirements, including mandatory risk assessments and oversight. The Act also proposes the creation of a centralized AI and Data Commissioner. This oversight body will be responsible for monitoring compliance, enforcing regulations, and promoting best practices in AI governance. The Commissioner will have the authority to conduct audits and investigations, and to impose penalties for non-compliance.</p>
<p>The AIDA emphasizes the need for transparency in AI operations. It requires organizations to disclose information about their AI systems, including their functionality, data sources, and decision-making processes. This transparency is intended to ensure that individuals understand how AI decisions affecting them are made. The AIDA also integrates data privacy and security considerations, requiring organizations to implement robust data protection measures. This includes securing the personal information used in AI training and deployment, and ensuring compliance with existing privacy laws.</p>
<p>At the time of writing, the AIDA is in the committee stage and has garnered significant feedback and criticism from stakeholder groups throughout the legislative process. Key concerns include that the scope of application for the act is either too broad or too narrow; that the transparency requirements are not feasible given the nature of AI technology; that the regulatory burden will discourage industry development in Canada; that the provisions on bias and discrimination are insufficiently detailed; and that it is impossible to regulate such fast-moving technology. In response to these concerns, several amendments have been made to the proposed language, including a new definition of “artificial intelligence” that accords with developing international norms in the field.</p>
<p>In part 2 of this article, I will discuss these developing international norms in detail, including the work of the OECD, EU, G7 and UNESCO. We will also discuss the important area of internal policy development for organizations in Canada, and specifically how organizations can use the existing laws, proposed laws and developing international norms to inform the development of policies and procedures in this important area.</p>
<p>Disclosure: Generative AI was used in the development of this post.</p>
<p>The post <a href="https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/">Taming the Ghost in the Machine: Canada’s Journey to AI Regulation, Part 1</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.slaw.ca/2024/07/17/taming-the-ghost-in-the-machine-canadas-journey-to-ai-regulation-part-1/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Cheapening the Written Word</title>
		<link>https://www.slaw.ca/2024/06/07/cheapening-the-written-word-makes-concision-more-expensive/</link>
		
		<dc:creator><![CDATA[Sarah A. Sutherland]]></dc:creator>
		<pubDate>Fri, 07 Jun 2024 11:00:28 +0000</pubDate>
				<category><![CDATA[Justice Issues]]></category>
		<category><![CDATA[Legal Technology]]></category>
		<guid isPermaLink="false">https://www.slaw.ca/?p=106940</guid>

					<description><![CDATA[<p><img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"></p>
<p class="lead">The last two years have seen excessive hype over the text generation functions that large language models facilitate, which doesn&#8217;t need to be remarked on here more than it already has. But, I do think it&#8217;s important to note that applications like word processors and email have been transformative for the practice of law and other knowledge work over the last 40 years, and this can be considered an expected continuation of this long term trend.</p>
<p>These types of tools all reduce the friction involved in the creation of documents and mean that written material can be produced more quickly and  . . .  <a href="https://www.slaw.ca/2024/06/07/cheapening-the-written-word-makes-concision-more-expensive/" class="read-more">[more] </a></p>
<p>The post <a href="https://www.slaw.ca/2024/06/07/cheapening-the-written-word-makes-concision-more-expensive/">Cheapening the Written Word</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img src="https://www.slaw.ca/wp-content/themes/slaw2012/images/slaw-column.png"><br /><p class="lead">The last two years have seen excessive hype over the text generation functions that large language models facilitate, which doesn&#8217;t need to be remarked on here more than it already has been. But I do think it&#8217;s important to note that applications like word processors and email have been transformative for the practice of law and other knowledge work over the last 40 years, and this can be considered an expected continuation of that long-term trend.</p>
<p>These types of tools all reduce the friction involved in the creation of documents and mean that written material can be produced more quickly and cheaply than before. If we make errors in our typing, we don&#8217;t need to start again to make a fair copy, and we don&#8217;t need to pull out scissors and glue to copy and paste content within our documents. However, instead of resulting in a more efficient system, this added facility appears to have simply increased the volume of documents, which arguably means that the potential improvements and access these tools promise haven&#8217;t been achieved.</p>
<p>In an archival course I took years ago, I remember studying a court record from around 1900, documenting the proceedings in a criminal matter for someone who had been charged with stealing a bicycle. As a record it was perfect: it was one page long, with fields where staff could add notes with typewriters or by hand, and the final verdict was handwritten in pen at the top. The entire matter was visible on one side of a piece of paper. Old court reports are similarly brief. The details included about cases in the old English Reports often run only half a page. Meanwhile, a contemporary trend for court decisions to get longer has been documented.[1]</p>
<p>Since then, writing technology has progressed, and almost all professionals have access to computer-driven applications that allow quick production of documents that are spell-checked and do not need to be re-typed when edits are made. As technology has advanced, however, instead of making processes more efficient, it has simply allowed long documents to be created more cheaply than before. The decrease in the cost of making documents longer has led to increased complexity in process and communication, as there is no longer pressure to put in the work required to be concise.</p>
<p>In the legal system, this means that the efficiency gains that could have been realized have instead been directed at making legal systems more difficult to navigate. This effect can be illustrated by the need for systems like e-discovery. When I hear about e-discovery, I imagine all the hours of labour that went into creating those written documents, and of course, generating content now doesn’t require human intervention at all.</p>
<p>Writing is a particular type of work, and much of the work in the legal sector consists of writing, but accounting is an interesting comparator. Significant work and expertise used to be required to maintain even simple bookkeeping functions, just as typing used to require more skill than it does now, with spell check and sensitive keyboards. The invention of spreadsheets devalued much of this work and made it much easier to track financial position. The reduction in the need for manual calculations allowed accountants to create significantly more value through approaches like financial forecasting, which expanded the scope of the profession.</p>
<p>Law has not been successful at achieving similar increases in value, because its base is human relationships, and it has been resistant to change of this kind. At the centre of a contract is not a document but an agreement between people, under which each party gets something. Complex language can also be used as a weapon, which reduces some of the benefits of simplicity.</p>
<p>The next great transformative technologies based on large language models, which can be leveraged to make language creation easier, faster or cheaper, may have significant impacts on making the practice of law more efficient. They may also be used to further the legal profession’s impulse toward completism and simply lead to everything being longer and more complicated. In other words, they may be used as blunt objects to increase the volume of words, without taking the time to value the virtue of brevity.</p>
<p>An internet search for the source of the quote apologizing for writing a long letter because the author didn&#8217;t have time to write a short one shows that it has been attributed to many writers going back to Roman times, and it appears to have originated with Cicero. Now we have tools that will generate text even more easily than word processors and dictation, but the mental discipline of saying what needs to be communicated and cutting out the extraneous will continue to require effort.</p>
<p>[1] Examples: Beauchamp-Tremblay, Xavier, and Antoine Dusséaux. “Not Your Grandparents’ Civil Law: Decisions Are Getting Longer. Why and What Does It Mean in France and Québec?” <i>Slaw</i> (blog), June 20, 2019. <a href="https://www.slaw.ca/2019/06/20/not-your-grandparents-civil-law-decisions-are-getting-longer-why-and-what-does-it-mean-in-france-and-quebec/">https://www.slaw.ca/2019/06/20/not-your-grandparents-civil-law-decisions-are-getting-longer-why-and-what-does-it-mean-in-france-and-quebec/</a>. SCOTUSblog. “Lengthier Opinions and Shrinking Cohesion: Indications for the Future of the Supreme Court,” July 28, 2022. <a href="https://www.scotusblog.com/2022/07/lengthier-opinions-and-shrinking-cohesion-indications-for-the-future-of-the-supreme-court/">https://www.scotusblog.com/2022/07/lengthier-opinions-and-shrinking-cohesion-indications-for-the-future-of-the-supreme-court/</a>.</p>
<p>The post <a href="https://www.slaw.ca/2024/06/07/cheapening-the-written-word-makes-concision-more-expensive/">Cheapening the Written Word</a> appeared first on <a href="https://www.slaw.ca">Slaw</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
