<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>James Serra&#039;s Blog</title>
	<atom:link href="https://www.jamesserra.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.jamesserra.com</link>
	<description>Big Data and Data Warehousing</description>
	<lastBuildDate>Fri, 27 Feb 2026 22:00:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
<site xmlns="com-wordpress:feed-additions:1">22449246</site>	<item>
		<title>T-SQL Tuesday #192: What career risks have you taken?</title>
		<link>https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/</link>
					<comments>https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/#respond</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 03 Mar 2026 16:00:00 +0000</pubDate>
				<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20999</guid>

					<description><![CDATA[<p>I’m honored to be hosting T-SQL Tuesday — edition #192. For those who may not know the history, T-SQL Tuesday started back in December 2009 when Adam Machanic (Blog&#160;&#124;LinkedIn) sent out the first invitation for what he called a monthly <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/">T-SQL Tuesday #192: What career risks have you taken?</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<div class="wp-block-image">
<figure class="alignleft is-resized"><img data-recalc-dims="1" decoding="async" src="https://sqlrblog.files.wordpress.com/2020/10/t-sql-tuesday-logo.jpg?w=1180" alt="" style="width:143px;height:auto"/></figure>
</div>


<p>I’m honored to be hosting <strong>T-SQL Tuesday — edition #192</strong>.</p>



<p>For those who may not know the history, T-SQL Tuesday started back in December 2009 when Adam Machanic (<a href="http://dataeducation.com/" target="_blank" rel="noreferrer noopener">Blog</a>&nbsp;|<a href="https://www.linkedin.com/in/adammachanic/" target="_blank" rel="noreferrer noopener">LinkedIn</a>) sent out the first invitation for what he called a monthly “blog party.” The idea was simple: pick a topic, publish on the second Tuesday of the month, and share what you’ve learned. No gatekeepers — just community.</p>



<p>Today, Steve Jones (<a href="https://voiceofthedba.com/" target="_blank" rel="noreferrer noopener">Blog</a>|<a href="https://www.linkedin.com/in/way0utwest/" target="_blank" rel="noreferrer noopener">LinkedIn</a>) organizes the event and maintains the <a href="http://tsqltuesday.com/" target="_blank" rel="noreferrer noopener">website</a>&nbsp;with links to past topics and posts, preserving more than a decade of shared knowledge.</p>



<p>Everyone is welcome to participate. If you’ve been thinking about writing, this is your invitation.</p>



<p>The topic I’d like us to explore is one that’s a little more personal than technical:&nbsp;<strong>What career risks have you taken?</strong></p>



<p>Not just the safe moves. Not the obvious promotions. The real risks — the moments where you weren’t sure how it would turn out.</p>



<p>That’s what I’d love to hear about.</p>



<p>I’ve been in the IT industry for 40 years, and here’s what I’ve learned — not from theory or motivational posters, but from lived experience: if you don’t take risks, your career will stall.</p>



<p>Success doesn’t happen by accident. Nobody “accidentally” ends up in a great job with great pay, and nobody stumbles into owning a successful business. If you ask anyone who has built something meaningful whether they did it without risk, the answer is always the same: “No way.” They took chances. Some worked, some didn’t — but they moved.</p>



<p>Risk-taking was modeled for me early. My dad was a bricklayer on Long Island in his late 20s with a young family — me and my two sisters. Bricklaying is real work: eight hours in 95-degree heat or freezing winters, with a physical toll every single day. After falling off a scaffold and breaking his leg, he decided he wasn’t going to do that for the rest of his life.</p>



<p>So my parents took a huge risk. They moved our entire family from New York to Las Vegas in the mid-70s — when the mob still ran most of the casinos — and my dad became a craps dealer. We knew almost no one there, and it could have gone very differently. But it paid off. He made more money, saved his body from years of physical wear, and built a nearly 40-year career as a dealer and floorman. He loved it.</p>



<p>Years later, in my late 20s, I faced a similar moment. I was working in Las Vegas as a software engineer (what we used to call a programmer), making very little money. Vegas didn’t pay well for IT back then, and we were living paycheck to paycheck with our third child on the way. I didn’t want to struggle indefinitely, so we moved to Houston — sight unseen — without knowing a single person there, for a much higher-paying job.</p>



<p>It was terrifying, but it paid off. That move opened doors and changed the trajectory of my career. When I look back, I’m proud we had the courage to do it. Meanwhile, many coworkers stayed in Vegas. They played it safe, avoided discomfort, and their careers largely stayed where they were.</p>



<p>Over time, I learned something important: you have to get comfortable being uncomfortable. Not every risk involves moving your family across the country, but advancing your career may require changing roles internally, leaving a stable company, switching industries, or even starting something on your own. And sometimes not taking the risk is actually the bigger risk — especially if you’re counting on your career to support your family.</p>



<p>When I face a big decision, I slow it down. I create a pros-and-cons list over several days and talk to people I trust. That process helps remove emotion and brings clarity. I’ve also seen a few truths play out repeatedly: people hate change, we’re addicted to comfort, and the first time you do something uncomfortable it’s hard — but the tenth time is much easier. Growth lives outside your comfort zone.</p>



<p>There are plenty of examples in our industry. Brent Ozar left traditional employment to build his own consulting company. Paul Randal did the same. Those weren’t small bets — they were career-defining risks (I hope they’ll share their stories in reply to this post).</p>



<p>So I’ll leave you with this: what career risks have you taken? Did they pay off? What did you learn? Because after four decades in this field, I’m convinced of one thing — change is constant. You can either embrace it and accelerate your career, or resist it and stay on the same rung of the ladder.</p>



<h3 class="wp-block-heading">T-SQL Tuesday Rules</h3>



<p>To participate:</p>



<p>First, write a blog post on the topic:</p>



<p><strong>“What career risks have you taken?”</strong></p>



<p>Be honest. Be reflective. What did you bet on? What scared you? What worked… and what didn’t?</p>



<p>Then:</p>



<ul class="wp-block-list">
<li>Include the phrase <strong>“T-SQL Tuesday #192”</strong> somewhere in your post.</li>



<li>Include the T-SQL Tuesday logo in your post.</li>



<li>Link back to this invitation so others can easily find the event.</li>



<li>Schedule your post to go live on <strong>Tuesday, March 10, 2026</strong> (the second Tuesday of the month).</li>



<li>Share it on social media (LinkedIn, Twitter, Bluesky) using the hashtag <strong>#tsql2sday</strong>.</li>
</ul>



<p>Optional — but appreciated: leave a comment on this post with a link to your contribution so I don’t miss it (I&#8217;ll do a recap on March 16).</p>



<p>I’m genuinely looking forward to reading your stories. Career risks are rarely comfortable in the moment… but they’re often the moments that shape everything that follows.</p>The post <a href="https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/">T-SQL Tuesday #192: What career risks have you taken?</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2026/03/t-sql-tuesday-192-what-career-risks-have-you-taken/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20999</post-id>	</item>
		<item>
		<title>Building Power BI Reports: Desktop vs Fabric</title>
		<link>https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/</link>
					<comments>https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 16:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20934</guid>

					<description><![CDATA[<p>Why this comparison feels confusing If you’re a Power BI report author who’s just getting into Microsoft Fabric, you’ve probably asked the same question I hear over and over: am I supposed to stop using Power BI Desktop now? It’s <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/">Building Power BI Reports: Desktop vs Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<h2 class="wp-block-heading" id="why-this-comparison-feels-confusing">Why this comparison feels confusing</h2>



<p>If you’re a Power BI report author who’s just getting into Microsoft Fabric, you’ve probably asked the same question I hear over and over: am I supposed to stop using Power BI Desktop now?</p>



<p>It’s a fair question. Power BI Desktop is a Windows app that has traditionally been the place where report authors do everything: get data, transform it, model it, and build the report. Microsoft even describes that “connect, shape/transform, then load” experience as part of how Power BI Desktop works with Power Query.</p>



<p>Fabric changes the feel of that workflow because Power BI is now also a first-class experience in the browser inside the Fabric portal. And that browser experience isn’t just “view and share” anymore. You can edit semantic models in the service, including using Power Query for import models and building reports directly from that same environment.</p>



<p>This shift also matters a lot for people who simply couldn’t rely on Power BI Desktop before. If you’re on a Mac, using a Chromebook, working from a locked-down corporate machine, or in an environment where installing desktop software requires jumping through weeks of approval hoops, the browser-based Power BI experience is a genuine unlock. For the first time, you can build reports and work with semantic models using nothing more than a modern browser. That alone explains why so many people are revisiting the “Desktop vs Fabric” question now—it’s no longer just about preference, it’s about access.</p>



<p>The goal here is simple: help you decide, when you’re starting a brand-new report, which authoring surface is going to feel smooth… and which one is going to make you mutter “why is that button gray?” (we’ve all been there).</p>



<h2 class="wp-block-heading" id="the-from-scratch-paths">The from-scratch paths</h2>



<p>Let’s define “from scratch” the way report authors actually experience it:</p>



<p>You start with nothing, you need to acquire data, shape it, model it, and then build visuals on a canvas. That’s the full loop.</p>



<p>In Desktop, that loop is straightforward because it’s all in one place.</p>



<p>In Fabric (browser), there are two common “from scratch” starting points, and they matter:</p>



<p>One path is “create a new import semantic model in the service using Get data / Transform data, then create a report from it.” Microsoft documents this directly in the semantic model editing experience: you can add new import tables using Get data, shape them with Transform data (Power Query), and then create a report from that model.</p>



<p>The other path is “Create a quick report,” which is a simplified browser experience meant to get you moving fast (often by pasting data or starting from an existing semantic model). The quick report documentation is explicit about what it supports today and what it doesn’t.</p>



<p>Here’s the “picture in your head” diagram I recommend keeping around:</p>



<p><em>Desktop-first: Get Data (many connectors) → Power Query (full) → Model view (full) → Report view (full)</em></p>



<p><em>Fabric browser-first (import model route): Create semantic model (Get data) → Power Query online (import-only) → Web model editor (most core modeling) → Web report editor</em></p>



<p><em>Fabric browser-first (quick report route): Existing semantic model or paste data (limited) → Autogenerated visuals → Switch to full edit if needed</em></p>



<p>That last one is great for prototypes, demos, and “I just need to see something.” It is not the same thing as building a durable, repeatable solution. Microsoft even calls out that pasted data can’t be updated later without redoing the Create workflow.</p>



<h2 class="wp-block-heading" id="data-import-in-practice">Data import in practice</h2>



<p>Power BI Desktop is still the “widest funnel” for data import. Desktop is built around connecting to one or many data sources through Power Query, and it’s where the connector ecosystem (including custom connectors) has been most complete historically.</p>



<p>In the Fabric browser authoring experience, you can absolutely bring in data for import models. But Microsoft lists several connector-related limitations for adding import tables and for enabling query editing/refresh in the web model editing experience. Custom connectors and a specific set of connectors (including OLE DB, R, and Python) aren’t supported for adding import tables in web editing, and models using those connectors also don’t support query editing in the Power Query editor in that web experience.</p>



<p>There’s also a “connection setup reality” that can surprise new Fabric authors: in the web Power Query experience, you can use existing personal cloud connections, but you can’t create new personal cloud connections inside the editor. That setup happens elsewhere.</p>



<p>And if you’re tempted to start with the quick report “Paste data” option, Microsoft is very explicit about the limits: no way to update the pasted data later, a 512 KB paste size cap, and other constraints like an eight-table limit and naming restrictions.</p>



<p>So the author takeaway is pretty clean:</p>



<p>If “from scratch” means “I don’t know what source I’ll need, and I might need some quirky connector or a custom connector,” Desktop is the safer, less-surprising start.</p>



<p>If “from scratch” means “my data is already in an environment Fabric understands well, and I’m building an import semantic model from supported connectors,” browser-first can work.</p>



<h2 class="wp-block-heading" id="data-transformation-and-the-power-query-reality-check">Data transformation: Desktop vs Fabric’s distributed approach</h2>



<p>In Desktop, Power Query is the primary data transformation surface. It’s mature, predictable, and designed for shaping data before loading it into the model.</p>



<p>In Fabric (browser), the Power Query editor exists — but it’s not the only transformation option, and this is where the comparison changes.</p>



<p>Fabric doesn’t narrow transformation. It redistributes it.</p>



<p>Instead of assuming all shaping happens inside Power Query within the report tool, Fabric gives you multiple upstream transformation options:</p>



<ul class="wp-block-list">
<li>Dataflow Gen2 (Power Query at the service level)</li>



<li>Spark notebooks for large-scale transformation</li>



<li>T-SQL in the Warehouse</li>



<li>Lakehouse table transformations</li>



<li>Pipelines and orchestration</li>



<li>Python-based or semantic link workflows</li>
</ul>
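

<p>To make “upstream” concrete, here’s a hedged sketch of what a Warehouse-side transformation might look like using T-SQL CTAS (the schema, table, and column names here are hypothetical, not from a real system):</p>

<pre class="wp-block-code"><code>-- Hypothetical example: shape the data inside the Fabric Warehouse
-- so the semantic model only ever sees the cleaned table.
CREATE TABLE dbo.SalesClean
AS
SELECT
    CAST(OrderDate AS date) AS OrderDate,
    UPPER(TRIM(Region))     AS Region,
    Quantity * UnitPrice    AS Revenue
FROM dbo.SalesRaw
WHERE Quantity > 0;
</code></pre>

<p>The same shaping could just as easily live in Dataflow Gen2 or a Spark notebook — the point is that it runs upstream of the report, not inside it.</p>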



<p>In Desktop, Power Query is the workbench.<br>In Fabric, Power Query is one tool in a broader data platform.</p>



<p>The web Power Query editor is optimized for import models. It does not support Direct Lake or DirectQuery transformations, and dynamic data sources are not supported. You can edit parameters in the service, but you cannot create them there.</p>



<p>That sounds like a limitation — until you realize what Fabric expects.</p>



<p>Desktop assumes:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The report author shapes the data.</p>
</blockquote>



<p>Fabric assumes:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The platform shapes the data. The author models and visualizes it.</p>
</blockquote>



<p>If your organization is serious about Fabric, Power Query inside the report becomes a last-mile cleanup tool — not the primary ETL engine.</p>



<p>That’s not weaker. It’s architectural.</p>



<h2 class="wp-block-heading" id="data-modeling-and-semantic-model-editing">Data modeling and semantic model editing</h2>



<p>For a long time, “modeling” was the easy part of this comparison: Desktop won.</p>



<p>That’s no longer true in a blanket way.</p>



<p>About a year ago, Power BI in Fabric added the ability to create and edit semantic models in the service: you can create measures, calculated columns, calculated tables, and relationships, set properties, and even define row-level security roles.</p>



<p>So yes, the web experience is a real modeling environment now.</p>



<p>But Microsoft also explicitly lists functional gaps between Desktop model view and the service. These include not being able to change a table’s storage mode, the absence of a View as dialog, and other limitations such as feature tables and certain data categories.</p>



<p>Another important difference shows up around relationship automation. </p>



<p>In Power BI Desktop, relationship creation can happen in two distinct ways when you load data. First, if you are connecting to a relational source like SQL Server or Azure SQL and that database has actual foreign key constraints defined, Desktop can read that metadata and automatically create relationships in the model during initial load (assuming auto-detect is enabled). In that case, it is not guessing — it is importing real source-defined relationships.</p>
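

<p>As a hedged illustration of that first case (the table and column names are hypothetical), this is the kind of source-defined metadata Desktop can read during the initial load:</p>

<pre class="wp-block-code"><code>-- A real foreign key constraint defined in SQL Server.
-- With relationship auto-detect enabled, Power BI Desktop can import
-- this as a model relationship between FactSales and DimCustomer.
ALTER TABLE dbo.FactSales
ADD CONSTRAINT FK_FactSales_DimCustomer
    FOREIGN KEY (CustomerKey)
    REFERENCES dbo.DimCustomer (CustomerKey);
</code></pre>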



<p>Second, when usable foreign key metadata does not exist — which is common in many warehouses, views, or lake environments — Desktop falls back to heuristics. It will attempt to infer relationships based on matching column names, compatible data types, and cardinality patterns (for example, detecting which side appears unique). This inference behavior is what most people think of as “auto-detect relationships.” It is convenient, but optional, and many experienced modelers disable it to avoid accidental joins.</p>



<p>In the Fabric browser experience, neither of those automatic behaviors occurs. When you import data using Power Query in the service, relationships defined in the underlying source system are not automatically brought into the semantic model. There is also no background auto-detect process that later scans for new relationships based on column names or data patterns. All relationships must be created explicitly by the author.</p>



<p>There’s also a workflow difference worth calling out: semantic model editing in the service uses AutoSave, and Microsoft notes changes are permanent with no option to undo in that experience.</p>



<p>And if you use DAX Query View: Microsoft documents that Desktop-saved DAX queries aren’t visible in the web DAX query view, and queries written in the web are discarded when you close the browser. </p>



<p>So modeling is now a “depends” conversation:</p>



<p>If your modeling needs stay within the web-supported feature set, the Fabric browser experience can work well.</p>



<p>If you need the full Desktop modeling surface area (and the deeper tooling workflows many modelers rely on), Desktop still feels like the more complete workbench.</p>



<h2 class="wp-block-heading" id="report-visualization-and-the-canvas-experience">Report visualization and the canvas experience</h2>



<p>Here’s the part that surprises many people: the report canvas gap is smaller than the data prep and modeling gaps.</p>



<p>Microsoft describes editing view in the Power BI service as the place where you create and edit reports in the browser, similar to Report view in Desktop.</p>



<p>Microsoft also notes that the ribbon is the main part of the report editor that differs between Desktop and the service, and the actions available vary depending on what you select on the canvas.</p>



<p>In practical terms, if you’re building visuals, arranging a layout, formatting, and polishing a report, the browser experience is “close enough” that many authors can be productive quickly.</p>



<p>Custom visuals are not a major dividing line either. Microsoft documentation for importing visuals says you can import visuals from AppSource in both Power BI Desktop and the Power BI service, and you can also import from a file.</p>



<p>The place the browser report editor shines for brand-new authors is the quick report experience: paste data and let Power BI auto-generate visuals, then switch to full edit if you want. Just remember those quick report limitations (especially the “no way to update pasted data later” part).</p>



<h2 class="wp-block-heading" id="side-by-side-comparison-for-building-from-scratch">Side-by-side comparison for building from scratch</h2>



<p>The table below summarizes what matters most for report authors building a new report end-to-end.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th class="has-text-align-left" data-align="left">Stage</th><th class="has-text-align-left" data-align="left">Power BI Desktop</th><th class="has-text-align-left" data-align="left">Power BI in Fabric (browser)</th><th class="has-text-align-left" data-align="left">Practical implication for new Fabric authors</th></tr></thead><tbody><tr><td>Data import starting point</td><td>Get Data connects to many sources through Power Query.</td><td>Create supports quick reports (existing semantic model or paste data), and web semantic model editing supports Get data for import models.</td><td>Browser has two “modes”: quick reports (fast, limited) and import model creation/editing (more capable).</td></tr><tr><td>Connector breadth</td><td>Broader connector ecosystem, including custom connectors.</td><td>Web model editing can’t add import tables from custom connectors and a listed set (including OLE DB, R, Python).</td><td>If you suspect you’ll need niche connectors, Desktop is safer.</td></tr><tr><td>Paste / lightweight data entry</td><td>Not the typical primary path (most authors use Get Data).</td><td>Paste/manual entry is supported but capped at 512 KB and can’t be updated later; other limits apply.</td><td>Great for prototypes, fragile for maintained solutions.</td></tr><tr><td>Data transformation</td><td>Full Power Query authoring in Desktop.</td><td>Power Query editor exists for import semantic models; but only for import storage mode, not Direct Lake/DirectQuery.</td><td>If your Fabric work leans into Direct Lake, expect transformation to move upstream.</td></tr><tr><td>Dynamic data sources</td><td>Possible to author patterns that later hit service refresh constraints.</td><td>Dynamic data sources aren’t supported in the web Power Query editor.</td><td>Browser-first reduces certain risky patterns (sometimes by simply saying “no”).</td></tr><tr><td>Parameters</td><td>Create with Manage Parameters in 
Power Query.</td><td>Can edit/review parameter settings, but not create parameters.</td><td>Parameter-heavy transformation patterns push you toward Desktop authoring.</td></tr><tr><td>Core modeling</td><td>Full Model view; rich editing and inspection workflows.</td><td>Can create measures, calculated columns/tables, relationships, properties, and roles in the service.</td><td>Web modeling is real now for everyday modeling tasks.</td></tr><tr><td>Modeling gaps</td><td>Generally the “superset” experience.</td><td>Notable gaps: can’t change storage mode; View as dialog not supported; other listed gaps.</td><td>If you rely on those gaps, Desktop remains your main tool.</td></tr><tr><td class="has-text-align-left" data-align="left">Relationship automation</td><td class="has-text-align-left" data-align="left">Auto-detect relationships can import source-defined relationships and infer new ones using heuristics; behavior can be enabled or disabled.</td><td class="has-text-align-left" data-align="left">No auto-detect feature; relationships are never inferred and must be explicitly defined.</td><td class="has-text-align-left" data-align="left">Fabric favors deliberate modeling over convenience-driven guesses.</td></tr><tr><td>DAX query workflow</td><td>Desktop can save DAX queries with the model.</td><td>Web DAX query tabs are discarded on close; web doesn’t show Desktop-saved DAX queries.</td><td>For repeatable diagnostics, Desktop still feels more durable.</td></tr><tr><td>Report visualization</td><td>Report view in Desktop for authoring.</td><td>Editing view in the service for authoring; ribbon differs but core canvas editing is supported.</td><td>For visuals/layout, the gap is smaller than most people expect.</td></tr><tr><td>Custom visuals</td><td>Import visuals via AppSource or file.</td><td>Same ability to import visuals via AppSource or file in the service.</td><td>Not a major deciding factor for most report authors.</td></tr></tbody></table></figure>



<h2 class="wp-block-heading" id="how-to-choose-without-overthinking-it">How to choose without overthinking it</h2>



<p>Here’s the bottom line I want you to walk away with:</p>



<p>If your “from scratch” report is going to live or die on data import and transformation flexibility, start in Desktop. You’ll get the widest connector coverage, the most mature Power Query patterns (including creating parameters), and fewer “this feature isn’t supported here” surprises. Desktop is still the most predictable, self-contained transformation workbench when you expect to do most of the shaping inside the report tool itself.</p>



<p>If your “from scratch” report is mostly about building visuals on top of a semantic model that already exists in the Fabric environment, the browser experience can be a perfectly legitimate authoring surface. The web report editor is capable, and web semantic model editing covers a lot of day-to-day modeling tasks now. And if your transformation work is happening upstream — in Dataflow Gen2, Spark, T-SQL, or Lakehouse processes — the browser experience fits naturally into that Fabric-first architecture.</p>



<p>And here’s the most Fabric-era statement I can make without drifting into platform debates:</p>



<p>It’s not really Desktop versus Fabric. It’s Desktop plus Fabric, and you choose the surface based on where your friction will be:</p>



<ul class="wp-block-list">
<li>If your friction is report-level data shaping (cleaning, merging, parameterizing inside the report), Desktop will usually feel smoother.</li>



<li>If your friction is platform-level data engineering (Spark notebooks, Dataflow Gen2, T-SQL in a Warehouse, Lakehouse pipelines), that work belongs in Fabric — not in Desktop.</li>



<li>If your friction is purely report-canvas design (building visuals, layout, formatting, iterating quickly), either surface can work — and the browser may be perfectly sufficient.</li>
</ul>



<p>One last practical bridge worth knowing about: Microsoft documents that for Direct Lake models, the web experience can offer an “Edit in Desktop” path that launches live editing in Power BI Desktop (Windows-only), and Microsoft also documents how Desktop can create and live-edit Direct Lake semantic models via the OneLake catalog. That’s a big deal because it means you can stay in a Fabric-first architecture while still using Desktop as your authoring workbench when you need it.</p>



<h2 class="wp-block-heading"><strong>Other feature differences worth knowing about</strong></h2>



<p>Beyond modeling and Power Query, there are a handful of features that still exist in Power BI Desktop (or related desktop tools) that don’t yet have full parity inside the Fabric browser experience. These aren’t deal-breakers for most authors—but they matter depending on your workflow.</p>



<p><strong>Paginated reports</strong> are one example. These aren’t created in Power BI Desktop at all—they’re authored using Power BI Report Builder, a separate Windows tool. And there’s no browser-based paginated authoring surface inside Fabric today. If you need pixel-perfect, printable reports (invoices, regulatory forms, operational reports), that’s still a desktop story.</p>



<p><strong>External Tools integration</strong> is another. Desktop supports launching tools like DAX Studio and Tabular Editor directly from the ribbon. Those tools are central to advanced modeling, diagnostics, and automation workflows. The Fabric browser experience has no equivalent because those tools rely on local integration.</p>



<p><strong>Custom visuals discovery</strong> also feels different. You can use custom visuals in both Desktop and the service, but Desktop provides a richer, more integrated Visuals Marketplace experience for browsing and adding them directly from AppSource.</p>



<p><strong>Q&amp;A and natural language exploration</strong> exist in the broader Power BI service experience, but the tuning and authoring surface for them is still more mature in Desktop. If you’ve used Q&amp;A heavily for model validation or exploration, you’ll notice the difference.</p>



<p><strong>Power BI Goals (scorecards)</strong> live in the service ecosystem but aren’t part of the browser-based report authoring surface in Fabric. They operate alongside reports—not inside the modeling workflow.</p>



<p>None of these gaps mean the browser experience is weak. They simply reflect that Desktop evolved for deep authoring over many years, while Fabric’s browser surface is optimized for accessibility, shared modeling, and cloud-first workflows.</p>



<p>More info:</p>



<p><a href="https://learn.microsoft.com/en-us/power-bi/fundamentals/power-bi-overview" title="">What is Power BI</a></p>The post <a href="https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/">Building Power BI Reports: Desktop vs Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2026/02/building-power-bi-reports-desktop-vs-fabric/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20934</post-id>	</item>
		<item>
		<title>Making Sense of Microsoft’s AI Strategy: Work IQ, Fabric IQ, Foundry IQ</title>
		<link>https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/</link>
					<comments>https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Thu, 12 Feb 2026 16:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20947</guid>

					<description><![CDATA[<p>A few weeks ago, I found myself staring at a slide full of new Microsoft AI names and thinking… wait a second. Work IQ. Fabric IQ. Foundry IQ. Agent 365. Agent Factory. And that’s before we even get into Copilot <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/">Making Sense of Microsoft’s AI Strategy: Work IQ, Fabric IQ, Foundry IQ</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>A few weeks ago, I found myself staring at a slide full of new Microsoft AI names and thinking… wait a second. <strong>Work IQ. Fabric IQ. Foundry IQ. Agent 365. Agent Factory.</strong> And that’s before we even get into <strong>Copilot Studio, Copilot Studio Light, and Microsoft Foundry</strong>. If you’re a smart technical leader and you’re feeling a little overwhelmed, you’re not alone. I was confused too. The pace of new terms and announcements over the past few months has been fast. Very fast. This post is my attempt to slow it down and simplify what’s actually happening underneath the names.</p>



<p>Here’s the mental model that helped me. There are three <em>capability umbrellas</em> (also called <em>IQ layers</em> or <em>architecture labels</em>). Underneath them are <em>build and orchestration tools</em>. And wrapping around all of it is a <em>runtime and governance layer</em> that makes AI safe and scalable. Once you see that structure, the noise quiets down and the pieces start to fit together.</p>



<p>Let’s start with the three IQ layers, because these are not products you buy. They describe what intelligence you are trying to achieve as an organization.</p>



<p>You won’t find “Work IQ” or “Foundry IQ” as standalone SKUs or admin blades. I’m using them here as architectural labels to describe Microsoft’s direction, not official product names. They are the intelligence outcomes Microsoft wants organizations to achieve.</p>



<h3 class="wp-block-heading">Work IQ</h3>



<p>Work IQ is intelligence about how work actually happens inside your company. It models the way people collaborate — their documents, meetings, conversations, relationships, and day-to-day interaction patterns across Microsoft 365. It understands meetings, chats, emails, documents, collaboration signals, overload patterns, and exception handling. It’s powered by Microsoft 365 data and organizational graph signals.</p>



<p>When Work IQ is functioning well, you can answer questions like: Which teams are buried in manual exception handling? Where are email threads exploding? Who is spending ten hours a week chasing status updates?</p>



<p>But it goes a step further.</p>



<p>This intelligence layer becomes the personalization engine for AI agents. Instead of a generic assistant, Work IQ helps agents understand who you work with, what you work on, and how you get things done. It brings user context into the equation — not just data context.</p>



<p>For architects, Work IQ is effectively the user-context layer. It’s the bridge between user behavior and the actions agents take inside the enterprise environment. It connects human workflow signals to intelligent action.</p>



<p>It’s intelligence about human workflow — not just data.</p>



<h3 class="wp-block-heading">Fabric IQ</h3>



<p>Fabric IQ is intelligence grounded in structured enterprise data. It lives in Microsoft Fabric and understands your KPIs, trends, anomalies, lineage, and semantic models. But more importantly, Fabric IQ sits on top of the enterprise data estate and injects business meaning into it through a semantic layer and ontology. It turns raw datasets into entities, relationships, metrics, timelines, and operational context that agents can reason about.</p>



<p>This is the analytical brain.</p>



<p>It doesn’t just see tables. It sees business reality: customers, inventory, regions, exceptions, dependencies, patterns. It recognizes supply chain bottlenecks, revenue shifts, production throughput constraints, and operational performance trends because the data has been modeled with intent.</p>



<p>Fabric IQ is what allows AI to reason over trusted, governed, enterprise-grade data (structured, semi-structured, and unstructured) instead of guessing from disconnected spreadsheets.</p>



<p>From a solution architecture view, Fabric IQ becomes the data-context layer — essential for designing analytics, decision systems, and operational AI pipelines. It’s where analytics maturity meets AI capability.</p>



<p>And for DevOps teams, Fabric IQ introduces a new discipline: semantic models must be versioned, governed, deployed, and monitored the same way we handle code. Because once agents depend on business meaning, that meaning becomes production infrastructure.</p>



<p>In short, Fabric IQ gives AI structured business context — not just raw data.</p>



<h3 class="wp-block-heading">Foundry IQ</h3>



<p>Foundry IQ is intelligence grounded in knowledge and reasoning. This is where Microsoft Foundry comes in. It’s the evolution of what used to be Azure AI Studio, then Azure AI Foundry, and now Microsoft Foundry. Foundry is where models are selected, grounded in enterprise content, evaluated, secured, and managed.</p>



<p>But architecturally, Foundry IQ handles the most difficult part of agent design: knowledge retrieval and grounding.</p>



<p>It connects agents to policy-controlled knowledge bases across Microsoft 365, cloud storage, data platforms, and internal repositories — all through a unified retrieval engine. Agents can perform iterative search, multi-source reasoning, and permission-aware grounding instead of relying on brittle manual retrieval stacks or static document embeddings.</p>



<p>Foundry IQ is what happens when AI understands contracts, policies, procedures, SLAs, regulatory constraints, and unstructured documents — and can reason across them safely. It’s the reasoning layer that makes AI context-aware instead of generic.</p>



<p>In modern architectures, this becomes the knowledge-access layer. It enables traceability, auditing, and reliable decision-making for autonomous workflows. Instead of an agent simply generating an answer, it can show where the information came from, respect access boundaries, and operate within governance constraints.</p>



<p>In short, Foundry IQ gives AI controlled, secure access to enterprise knowledge — and the ability to reason over it responsibly.</p>



<h3 class="wp-block-heading">How the Three IQ Layers Work Together</h3>



<p>If you want a simple picture in your head, imagine a three-layer stack. At the foundation is <strong>Fabric IQ</strong>, your structured data intelligence (data-context). In the middle is <strong>Foundry IQ</strong>, your reasoning and knowledge grounding layer (knowledge-context). On top is <strong>Work IQ</strong>, your understanding of how humans are actually operating day to day (user-context). Together, those layers create the conditions for meaningful enterprise agents.</p>



<p>Let’s make this concrete.</p>



<p>Suppose a company wants to build an AI agent to help manage supply chain delays. This is not a theoretical example. This is the kind of use case I hear from customers constantly.</p>



<p><strong>Fabric IQ</strong> detects anomalies in delivery metrics. It sees that certain suppliers are trending late beyond historical norms. It notices that on-time delivery percentages are dipping in specific regions. It correlates delays with upstream bottlenecks. This is data-driven awareness.</p>



<p><strong>Foundry IQ</strong> then grounds the agent in supplier contracts, SLAs, penalty clauses, and internal policies. It understands what the agreement actually says about late deliveries. It interprets escalation thresholds. It knows which suppliers have stricter terms and which ones allow flexibility. This is contextual reasoning.</p>



<p><strong>Work IQ</strong> observes that operations teams are overloaded handling these exceptions manually. It sees long email chains, recurring “delay review” meetings, and individuals spending hours every week tracking updates from vendors. It identifies patterns of reactive work that are consuming capacity. This is workflow intelligence.</p>



<p>Now you introduce the <strong>Agent</strong>. The agent combines those three streams of intelligence. It recommends which delays need escalation based on contractual impact. It drafts communications to suppliers referencing the correct SLA language. It suggests internal reprioritization. It surfaces issues before they become crises. It becomes a force multiplier instead of just another dashboard.</p>
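<p>To make the layering concrete, here is a purely illustrative Python sketch of how an agent might combine the three context streams for the supply chain scenario. Every class, field, and supplier name here is hypothetical — this is my own toy model of the pattern, not a Microsoft API.</p>

```python
from dataclasses import dataclass

# Hypothetical illustration of the three IQ layers feeding one agent.
# None of these types correspond to a real Microsoft API.

@dataclass
class DataContext:          # Fabric IQ: anomalies in delivery metrics
    late_days_by_supplier: dict[str, int]

@dataclass
class KnowledgeContext:     # Foundry IQ: grounded in contracts and SLAs
    escalation_threshold_days: dict[str, int]

@dataclass
class UserContext:          # Work IQ: where manual exception work piles up
    hours_spent_chasing: dict[str, float]

def recommend_escalations(data: DataContext,
                          knowledge: KnowledgeContext,
                          work: UserContext) -> list[str]:
    """Escalate suppliers whose lateness breaches their contractual
    threshold, ordered by how much manual effort they consume."""
    breaches = [s for s, late in data.late_days_by_supplier.items()
                if late > knowledge.escalation_threshold_days.get(s, 0)]
    return sorted(breaches,
                  key=lambda s: work.hours_spent_chasing.get(s, 0.0),
                  reverse=True)

if __name__ == "__main__":
    data = DataContext({"Acme": 7, "Globex": 2})
    knowledge = KnowledgeContext({"Acme": 5, "Globex": 5})
    work = UserContext({"Acme": 10.0, "Globex": 1.0})
    print(recommend_escalations(data, knowledge, work))  # ['Acme']
```

<p>The point of the sketch is the shape, not the code: no single layer could produce that ranked list on its own — the anomaly comes from data context, the threshold from knowledge context, and the prioritization from user context.</p>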



<h3 class="wp-block-heading">Runtime, Governance, and Industrial Scale</h3>



<p>But this is where responsible leaders pause and ask the right question. Who monitors this agent? Who ensures it’s accurate? Who ensures it stays within policy? And maybe the most important question: who is accountable when it makes a recommendation that affects the business?</p>



<p>That’s where <strong>Agent 365</strong> comes in. Agent 365 is the runtime and governance layer that sits over everything. It monitors agent decisions, tracks accuracy over time, enforces compliance boundaries, and provides deep observability into how agents are behaving in the real world. It gives you visibility into what the agent is doing, why it is doing it, and whether it is staying within the guardrails you defined.</p>



<p>This isn’t just about logging activity. It’s about operational control. It’s about knowing when performance drifts, when policies change, when human override is required. It’s about giving leadership confidence that AI is not operating in the shadows but within a governed framework.</p>



<p>Without this layer, you have experiments — clever, promising, but fragile. With it, you have enterprise-grade systems that can scale responsibly.</p>



<p>And once this supply chain pattern works, <strong>Agent Factory</strong> allows you to scale it. Agent Factory is about industrializing agent creation. Instead of building one-off AI projects in isolation, you create repeatable blueprints that can extend to finance for invoice exceptions, HR for policy guidance, or operations for maintenance alerts. It’s the production line for enterprise agents.</p>



<p>Most companies today are experimenting with AI in pockets — a pilot here, a proof of concept there. It works, but it doesn’t scale. Agent Factory shifts you from experimentation to manufacturing. You define standard patterns for how agents are grounded in Fabric data, reason through Foundry, and are governed through Agent 365. You build templates, not just individual agents.</p>



<p>So when another department needs an agent, you don’t start from scratch. You extend the blueprint. That’s the difference between isolated AI projects and a true enterprise AI platform — designed once, governed consistently, deployed many times.</p>



<p>Now let’s untangle the product names, because this is where confusion often peaks.</p>



<h3 class="wp-block-heading">Build and Orchestration Tools</h3>



<p>To turn strategy into reality, you need <strong>build and orchestration tools</strong>. This is where <strong>Microsoft Foundry, Copilot Studio, and Copilot Studio Light</strong> come into play.</p>



<p><strong>Microsoft Foundry</strong> is the engineering backbone for enterprise AI. It’s where foundation models (such as large language models like GPT-4-class systems), embedding models for retrieval and vector search, and task-specific AI systems like classifiers or anomaly detection models are selected, configured, grounded in enterprise data, evaluated for quality and safety, and prepared for production use. If you’re building serious AI solutions — the kind that must meet performance standards, compliance requirements, and security controls — this is where that work happens.</p>



<p>You can absolutely build agents in Microsoft Foundry. But you’re building them at the engineering layer. That means defining prompts, retrieval pipelines, tool calling, evaluation loops, safety filters, deployment endpoints — often in a pro-code environment. Foundry is not just a playground for models. It’s an engineering environment. It supports model evaluation, grounding techniques, prompt flows, fine-tuning scenarios, safety configurations, and deployment management. In other words, it’s where AI moves from experimentation to disciplined engineering.</p>



<p><strong>Copilot Studio</strong> sits one layer above that. It’s the orchestration platform. This is where you design agents visually, connect them to business systems, define actions, build workflows, and determine how agents interact with users and applications. Copilot Studio is about behavior. It’s where you decide what the agent can do, what systems it can call, how it responds to certain triggers, and how it escalates when needed.</p>



<p>The clean distinction is this:<br>Foundry is pro-code, engineering-first agent construction.<br>Copilot Studio is low-code/no-code, workflow-first agent orchestration.</p>



<p>If Foundry is about building intelligence correctly, Copilot Studio is about putting that intelligence to work inside real business processes.</p>



<p><strong>Copilot Studio Light</strong> — formerly known as Agent Builder — is the streamlined, no-code experience embedded directly inside Microsoft 365 Copilot. It allows business users to quickly create internal, knowledge-based agents grounded in SharePoint sites, Teams content, or organizational documents. It’s intentionally simple. It lowers the barrier to entry. But it’s important to understand the distinction: Copilot Studio Light is ideal for quick internal assistants and knowledge helpers. When you need advanced workflows, external system integrations, deeper customization, or structured governance controls, you step up to full Copilot Studio — and potentially integrate with Microsoft Foundry for more advanced model engineering.</p>



<p>When you step back, the architecture becomes clearer. Microsoft Fabric provides structured, governed enterprise data. Microsoft Foundry engineers and evaluates the intelligence layer — and can be used to build highly customized agents directly in a pro-code way. Copilot Studio is where you build and orchestrate business agents using a low-code/no-code approach, including their workflows, actions, and system integrations. Copilot Studio Light enables fast, internal, knowledge-based agents created directly inside Microsoft 365 Copilot. Together, they form a layered system rather than a collection of disconnected tools — pro-code where you need maximum control, and low/no-code where you need speed and broad adoption.</p>



<p>Here’s my slightly opinionated take. Microsoft isn’t just releasing features. It’s assembling an AI operating model for the enterprise. But when terminology evolves quickly — and it has — even experienced leaders can feel like they’re chasing moving targets. The risk is that we focus on memorizing product names instead of designing cohesive systems.</p>



<p>The real question isn’t whether you’ve tried Copilot Studio Light or deployed something in Microsoft Foundry. The real question is whether you are building isolated AI experiments or a layered, governed, repeatable AI system. Are your agents grounded properly? Are they orchestrated thoughtfully? Are they designed with scale in mind?</p>



<p>That’s the shift. Not from one product to another — but from experimentation to architecture.</p>



<p>Start simple. Pick one high-impact use case. Ground it in Fabric. Reason through Foundry. Understand human impact through Work IQ. Deploy responsibly with Copilot Studio. Govern with Agent 365. Then scale with Agent Factory.</p>



<p>I’ve been in this industry long enough to see naming waves come and go. The technology shifts. The labels evolve. But the pattern of success remains consistent. The companies that win don’t chase terminology. They understand architecture. They build deliberately. They scale responsibly.</p>



<p>If you’ve felt overwhelmed by the recent flood of AI terminology, you’re not alone. I was there too. But once you see the layered model underneath, the strategy becomes much less intimidating. And that’s when you can stop decoding names and start building intelligence.</p>



<h3 class="wp-block-heading">Summing it up</h3>



<p><strong>Fabric IQ</strong> = Intelligence grounded in your structured business data<br>(Architectural label — not a SKU.)</p>



<p><strong>Foundry IQ</strong> = Intelligence grounded in enterprise knowledge, policies, and reasoning<br>(Architectural label — not a product name.)</p>



<p><strong>Work IQ</strong> = Intelligence about how work actually happens across meetings, messages, and collaboration patterns<br>(Architectural label describing direction.)</p>



<p><strong>Microsoft Foundry</strong> = Where foundation models (LLMs, embedding models, task-specific models) are selected, grounded, evaluated, and engineered — and where highly customized agents can be built in a pro-code environment.</p>



<p><strong>Copilot Studio</strong> = Where business agents are built and orchestrated using a low-code/no-code approach — defining workflows, actions, triggers, and system integrations.</p>



<p><strong>Copilot Studio Light</strong> = Where quick, internal, knowledge-based agents are created directly inside Microsoft 365 Copilot for fast adoption and simple use cases.</p>



<p><strong>Agent 365</strong> = The runtime layer for monitoring, observability, compliance, and governance across deployed agents.</p>



<p><strong>Agent Factory</strong> = The enterprise pattern for scaling and industrializing agent creation across departments.</p>



<p>The simple mental model:</p>



<p>Architectural layers create intelligence.<br>Engineering and orchestration tools build agents.<br>Agent 365 governs them.<br>Agent Factory scales them.</p>



<p>Pro-code where you need control.<br>Low-code/no-code where you need speed.</p>The post <a href="https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/">Making Sense of Microsoft’s AI Strategy: Work IQ, Fabric IQ, Foundry IQ</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2026/02/making-sense-of-microsofts-ai-strategy-work-iq-fabric-iq-foundry-iq/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20947</post-id>	</item>
		<item>
		<title>Three Ways to Use Snowflake Data in Microsoft Fabric</title>
		<link>https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/</link>
					<comments>https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Thu, 29 Jan 2026 16:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20892</guid>

					<description><![CDATA[<p>Organizations increasingly want Snowflake and Microsoft Fabric to coexist without duplicating data or fragmenting governance. With Fabric OneLake and open table formats like Iceberg and Delta, there are now multiple ways to make Snowflake data available inside Fabric—each with different <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/">Three Ways to Use Snowflake Data in Microsoft Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>Organizations increasingly want Snowflake and Microsoft Fabric to coexist without duplicating data or fragmenting governance. With Fabric OneLake and open table formats like Iceberg and Delta, there are now multiple ways to make Snowflake data available inside Fabric—each with different tradeoffs around cost, performance, and ownership.</p>



<p>This post walks through three practical architectures for using Snowflake data in Fabric OneLake, when each option makes sense, and the key tradeoffs to consider.</p>



<p>The most important decision across all three options is which platform is the system of record and primary writer—most tradeoffs flow directly from that choice.</p>



<p><strong>1) Snowflake-managed Iceberg table + OneLake shortcut</strong></p>



<p>In this pattern, Snowflake reads and writes an Iceberg table stored in external object storage such as ADLS or S3. Microsoft Fabric does not copy the data; instead, Fabric creates a OneLake shortcut to the Iceberg table’s storage location so Fabric engines can query the data in place. Snowflake remains the system of record and the primary writer.</p>



<p>This option is best when you already have Iceberg tables managed by Snowflake and want zero data duplication while still enabling Fabric analytics. The tradeoff is that Fabric access is largely read-oriented, and table discovery and management are less native than with the <a href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-and-delta-tables" title="">Delta format</a> in which OneLake natively stores data.</p>



<p>From a cost perspective, you pay for Snowflake compute to create and maintain the Iceberg tables and for Fabric compute when querying the data. You do not pay for OneLake storage, ingestion pipelines, or replication. Any additional cost depends on where the external storage lives and whether cross-service or cross-region reads apply.</p>



<p>When to use this option</p>



<ul class="wp-block-list">
<li>You already have Snowflake-managed Iceberg tables in external object storage</li>



<li>Snowflake must remain the system of record and primary writer</li>



<li>You want zero data duplication and minimal architectural change</li>



<li>Fabric is used mainly for read-only analytics or exploration</li>
</ul>



<p>When to avoid this option</p>



<ul class="wp-block-list">
<li>You want Power BI <a href="https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-overview" title="">Direct Lake</a> with a fully native Delta experience, rather than Direct Lake enabled via Iceberg→Delta virtualization</li>



<li>You expect Fabric to write to, optimize, or manage the tables natively (for example, Delta optimization, compaction, or schema evolution owned by Fabric)</li>



<li>You want first-class table discovery, governance, and lifecycle management that aligns fully with Fabric-native Delta tables</li>
</ul>



<p>One-line summary:</p>



<p>Best for quick, low-friction access to existing Snowflake Iceberg data with minimal change, but limited Fabric-native capabilities.</p>



<p>More info: <a href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-shortcuts">Unify data sources with OneLake shortcuts &#8211; Microsoft Fabric | Microsoft Learn</a></p>



<p><strong>2) Snowflake writes Iceberg tables directly into OneLake</strong></p>



<p>In this model, Snowflake is configured to write Iceberg tables directly into OneLake, creating a single shared physical copy of the data that both Snowflake and Fabric can access. No shortcut and no replication are required, and Fabric works directly against the Iceberg data where it lives.</p>
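<p>As a rough sketch of what the Snowflake side of this setup involves, the helper below renders the two statements: an external volume pointing at a OneLake path, then an Iceberg table written through it. The workspace, lakehouse, volume, and column names are all placeholders, and the exact DDL options vary by Snowflake version — treat this as an outline and follow the linked Microsoft and Snowflake docs for the authoritative syntax.</p>

```python
def onelake_iceberg_ddl(workspace: str, lakehouse: str,
                        table: str, tenant_id: str) -> list[str]:
    """Render the two Snowflake statements for this pattern: an external
    volume targeting OneLake, then a Snowflake-managed Iceberg table
    stored there. All names/paths are placeholders, not a working config."""
    base_url = (f"azure://onelake.dfs.fabric.microsoft.com/"
                f"{workspace}/{lakehouse}.Lakehouse/Files/")
    return [
        f"""CREATE EXTERNAL VOLUME onelake_vol
  STORAGE_LOCATIONS = ((
    NAME = 'onelake',
    STORAGE_PROVIDER = 'AZURE',
    STORAGE_BASE_URL = '{base_url}',
    AZURE_TENANT_ID = '{tenant_id}'));""",
        f"""CREATE ICEBERG TABLE {table} (id INT, amount NUMBER)
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'onelake_vol'
  BASE_LOCATION = '{table}';""",
    ]

if __name__ == "__main__":
    for stmt in onelake_iceberg_ddl("SalesWs", "SalesLh",
                                    "orders", "<tenant-guid>"):
        print(stmt, "\n")
```

<p>The key design detail is the <code>STORAGE_BASE_URL</code>: it points Snowflake’s writes directly at a Fabric lakehouse path, which is what makes the single shared physical copy possible.</p>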



<p>This is the cleanest long-term architecture for open table interoperability, assuming current <a href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-iceberg-tables" title="">Iceberg support in OneLake</a> meets your needs and Snowflake remains the primary writer (or you are comfortable defining shared write semantics).</p>



<p>Cost-wise, you pay for Snowflake compute to write and maintain the Iceberg tables and for Fabric compute to run analytics workloads. You do not pay for duplicate storage, replication, or ingestion jobs. This is often the most cost-efficient option at scale when both platforms need access to the same data.</p>



<p>When to use this option</p>



<ul class="wp-block-list">
<li>You want a single shared physical copy of data across Snowflake and Fabric</li>



<li>Snowflake remains the primary writer, but Fabric needs deep analytical access</li>



<li>You are intentionally adopting open table formats for long-term interoperability</li>



<li>You want to minimize storage duplication and ongoing data movement costs</li>
</ul>



<p>When to avoid this option</p>



<ul class="wp-block-list">
<li>You need Fabric-native Delta features (such as Direct Lake without relying on Iceberg→Delta virtualization)</li>



<li>You want Fabric to be the primary write, optimization, and lifecycle management engine</li>



<li>Your organization is not ready to clearly define write ownership, schema evolution, and conflict resolution between Snowflake and Fabric</li>
</ul>



<p>One-line summary:</p>



<p>Best long-term architecture for shared analytics using open table formats, with strong cost efficiency and clean data ownership.</p>



<p>More info: <a href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-iceberg-snowflake">Use Snowflake with Iceberg tables in OneLake &#8211; Microsoft Fabric | Microsoft Learn</a>, <a href="https://www.snowflake.com/en/developers/guides/getting-started-with-iceberg-in-onelake/">Building an Iceberg Lakehouse with Snowflake and Microsoft OneLake</a></p>



<p><strong>3) Fabric mirroring from Snowflake to OneLake (Delta format)</strong></p>



<p>With mirroring, Fabric continuously replicates Snowflake tables into OneLake and stores them as Delta tables, delivering the most seamless Fabric-native experience across Lakehouse, Warehouse, Power BI Direct Lake, and AI workloads.</p>



<p>This option is ideal when you want maximum simplicity and performance inside Fabric and are comfortable with Snowflake no longer being the primary analytics engine for those datasets.</p>



<p>From a cost standpoint, you pay for Snowflake compute and cloud services to read source tables and capture changes, and you pay for Fabric capacity to perform replication, optimization, and downstream analytics. You do not pay for OneLake storage for the mirrored data (the mirroring storage cost is free up to a limit based on capacity &#8211; for more information, see&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/mirroring/overview#cost-of-mirroring">Cost of mirroring</a>&nbsp;and&nbsp;<a href="https://azure.microsoft.com/pricing/details/microsoft-fabric/">Microsoft Fabric Pricing</a>), nor do you pay for building or operating ingestion pipelines. While storage is effectively free in OneLake, mirroring is typically the most compute-intensive option.</p>



<p>When to use this option</p>



<ul class="wp-block-list">
<li>You want the most seamless, high-performance Fabric experience</li>



<li>Power BI Direct Lake, Warehouse, and AI workloads are top priorities</li>



<li>Fabric will be the primary analytics platform going forward</li>



<li>You prefer simplicity over shared-write complexity</li>
</ul>



<p>When to avoid this option</p>



<ul class="wp-block-list">
<li>Snowflake must remain the primary analytics engine</li>



<li>You want to avoid continuous replication compute costs</li>



<li>You are standardizing on open formats like Iceberg across platforms</li>
</ul>



<p>One-line summary:</p>



<p>Best for Fabric-first analytics teams that want maximum performance and simplicity, at the cost of higher compute usage.</p>



<p>More info: <a href="https://learn.microsoft.com/en-us/fabric/mirroring/snowflake">Microsoft Fabric Mirrored Databases From Snowflake &#8211; Microsoft Fabric | Microsoft Learn</a>, <a href="https://learn.microsoft.com/en-us/fabric/mirroring/snowflake-tutorial">Tutorial: Configure a Microsoft Fabric Mirrored Database From Snowflake &#8211; Microsoft Fabric | Microsoft Learn</a></p>



<h3 class="wp-block-heading"><strong>Summary comparison table</strong></h3>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th>Dimension</th><th>Snowflake-managed Iceberg + Shortcut</th><th>Snowflake writes Iceberg to OneLake</th><th>Fabric Mirroring (Delta)</th></tr></thead><tbody><tr><td>Primary writer</td><td>Snowflake</td><td>Snowflake</td><td>Fabric</td></tr><tr><td>Physical data duplication</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/274c.png" alt="❌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> None (shared external storage)</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/274c.png" alt="❌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> None (shared OneLake storage)</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Yes (replicated into OneLake)</td></tr><tr><td>Format in OneLake</td><td>Iceberg</td><td>Iceberg</td><td>Delta</td></tr><tr><td>Fabric experience</td><td>Read-oriented, less native</td><td>Good, improving</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2b50.png" alt="⭐" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Best / most native</td></tr><tr><td>Power BI Direct Lake</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Supported via OneLake Iceberg→Delta virtualization*</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Supported via OneLake Iceberg→Delta virtualization*</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Supported (native Delta)</td></tr><tr><td>Ongoing sync</td><td>Manual / external</td><td>Native (shared)</td><td>Automatic (CDC-based)</td></tr><tr><td>Separate storage charges</td><td>Yes (external storage)</td><td><img 
src="https://s.w.org/images/core/emoji/17.0.2/72x72/274c.png" alt="❌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> No (included with Fabric capacity)</td><td><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/274c.png" alt="❌" class="wp-smiley" style="height: 1em; max-height: 1em;" /> No (included with Fabric capacity)</td></tr><tr><td>Compute cost</td><td>Snowflake + Fabric</td><td>Snowflake + Fabric</td><td>Snowflake + Fabric (highest)</td></tr><tr><td>Best for</td><td>Minimal change, zero-copy</td><td>Long-term open architecture</td><td>Fabric-first analytics</td></tr></tbody></table></figure>



<p>*Direct Lake support for Iceberg relies on OneLake’s ability to surface the table as Delta, which may not be guaranteed for all Iceberg configurations or evolution patterns (<a href="https://learn.microsoft.com/en-us/fabric/onelake/onelake-iceberg-tables" title="">more info</a>).</p>



<p>Iceberg support in OneLake is evolving quickly, and the gap between Iceberg and Delta experiences in Fabric is narrowing. Over time, Option 2 becomes increasingly attractive as a long-term, open-table architecture—while mirroring remains the fastest path to a fully Fabric-native experience today.</p>The post <a href="https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/">Three Ways to Use Snowflake Data in Microsoft Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2026/01/three-ways-to-use-snowflake-data-in-microsoft-fabric/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20892</post-id>	</item>
		<item>
		<title>Getting Your Data GenAI-Ready: The Next Stage of Data Maturity</title>
		<link>https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/</link>
					<comments>https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/#respond</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 14 Jan 2026 16:00:00 +0000</pubDate>
				<category><![CDATA[Azure Purview]]></category>
		<category><![CDATA[Data warehouse]]></category>
		<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20874</guid>

					<description><![CDATA[<p>I remember a meeting where a client’s CEO leaned in and asked me, “So, we have tons of data&#8230; Why can’t we just add AI and call it a day?” He was excited—who isn’t these days? But my team and <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/">Getting Your Data GenAI-Ready: The Next Stage of Data Maturity</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>I remember a meeting where a client’s CEO leaned in and asked me, “So, we have tons of data&#8230; Why can’t we just add AI and call it a day?” He was excited—who isn’t these days? But my team and I just shared a knowing look. We had seen this story again and again: everyone wants the shiny AI solution, yet few realize how much work it takes to get the data ready. (If only it were that easy!)</p>



<p>In this post, let’s talk honestly about what it takes to get your enterprise data <em>AI-ready</em>—specifically GenAI-ready. We’ll walk through how this fits into an enterprise data maturity journey (is it Stage 5, or just the “deep end” of Stage 4?), why being AI-ready is about a lot more than just technology, and how focusing on data quality, governance, and architecture can make or break your AI aspirations. We’ll also look at examples of generative AI applications (think chatbots, document summarizers, intelligent search, and copilots) that all share one thing: they demand well-prepared data. And because I’m a fan of practical advice (and biased), I’ll highlight a couple of Azure tools – Microsoft Purview and Microsoft Fabric – that can help you on this journey.</p>



<p>So grab a coffee, and let’s dive in. By the end, you should have a clearer idea of how to start getting your data truly <em>AI-ready</em>.</p>



<p>Note: If you need a refresher on OpenAI and LLMs, check out my blogs <a href="https://www.jamesserra.com/archive/2024/03/introduction-to-openai-and-llms/">Introduction to OpenAI and LLMs</a>, <a href="https://www.jamesserra.com/archive/2024/06/introduction-to-openai-and-llms-part-2/">Introduction to OpenAI and LLMs – Part 2</a>, and <a href="https://www.jamesserra.com/archive/2025/01/introduction-to-openai-and-llms-part-3/">Introduction to OpenAI and LLMs – Part 3</a>, along with a presentation on that topic that I did for the Toronto Data Professional Community, which you can <a href="https://www.youtube.com/watch?v=kXwtb0oSup0">view</a> (you can also download the <a href="https://torontodpc.ca/resources/70">slides</a>).</p>



<h2 class="wp-block-heading">From Stage 4 to Stage 5: Data Maturity Meets AI</h2>



<p>Every organization progresses through some kind of <strong>data maturity model</strong>. You might have seen one of those charts (maybe the one with four stages in my <a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/1098150767" title="">book</a>) that show a company’s evolution from basic reporting all the way to advanced analytics. Typically it goes something like:</p>



<ul class="wp-block-list">
<li><strong>Stage 1 – Reactive analytics:</strong> Siloed data, manual reports, and ad-hoc analyses (basically just reacting to past events). Excel files are often emailed all over (“spreadmarts”).</li>



<li><strong>Stage 2 – Informative analytics:</strong> A centralized data warehouse and BI provide a single source of truth with standard reports. The focus in stages 1 and 2 is historical reporting. Usually the size and types of data the solution can handle are limited, it is not very scalable, and data is ingested infrequently (e.g., nightly).</li>



<li><strong>Stage 3 – Predictive analytics:</strong> Can handle larger quantities of data, different types, and more frequent ingestion (streaming). Data science and machine learning efforts begin, aiming to forecast trends and outcomes, and decisions can be made in real time.</li>



<li><strong>Stage 4 – Transformative analytics:</strong> Can handle any data, no matter its size, type, or speed. Advanced analytics (even some AI) is embedded in processes, and a data-driven culture is in full swing.</li>
</ul>



<p>Now, along comes generative AI. It feels like a leap beyond what we used to consider “advanced analytics.” So where does GenAI fit? It could be a Stage 5 – AI-Driven Enterprise, or simply an evolution of Stage 4 (in reality, it’s a bit of both). If your company is still in Stage 2 or 3, you have foundational work to do before jumping to GenAI – you can’t just skip to the top. But if you’re solidly in Stage 4, getting AI-ready is the next natural step. The key point is that becoming GenAI-ready is a continuation of the data maturity journey, building on everything you’ve achieved so far in BI and analytics, and then going further.</p>



<h2 class="wp-block-heading">What Does it Mean to Be GenAI-Ready?</h2>



<p>Let’s cut through the buzzwords. Being “GenAI-ready” isn’t about having the latest AI tool; it’s about your data being ready to support those tools. It means you’ve set up the plumbing and prepared the ingredients so that an AI system can actually do something useful (and trustworthy) with your data.</p>



<p>How can you tell if your data is AI-ready? Here are a few signs:</p>



<ul class="wp-block-list">
<li><strong>Clean, quality data:</strong> It’s accurate, consistent, and up-to-date – no “garbage in” to poison your AI’s outputs.</li>



<li><strong>Clear context and governance:</strong> Data carries business meaning (thanks to good metadata and definitions) and is properly governed, so everyone knows what it represents and how it can be used.</li>



<li><strong>Managed unstructured data:</strong> Documents, emails, and other unstructured sources are stored in accessible formats and tagged with relevant info, ensuring AI models can find and interpret them.</li>



<li><strong>Flexible architecture:</strong> Your data platform can easily integrate new sources and deploy AI models without major rework – it’s built to scale and adapt as AI use cases grow.</li>
</ul>



<p>If you read that list and feel a bit overwhelmed, you’re not alone! Very few organizations check every box. The idea is to see where you stand and where you need to improve, because any weaknesses in your data will be magnified by AI. Remember, GenAI is not a magic wand – it will amplify whatever you feed it. So feed it well.</p>



<h2 class="wp-block-heading">Building an AI Solution: As Complex as Building a Data Warehouse (or More)</h2>



<p>All those fundamentals – data quality, governance, architecture – underscore one truth: building a robust AI solution is no less complex than building a data warehouse or any major analytics platform (in fact, it can be more).</p>



<p>Why more complex? For starters, GenAI projects often involve diverse data types (text, documents, images, chat logs) and require new infrastructure (like vector databases for embeddings or real-time data pipelines) that you might not have needed in a traditional BI project. Additionally, developing AI is an iterative process of training, fine-tuning, and validating models, which means your data pipelines must be flexible and your team prepared for a lot of experiment-and-learn cycles. In other words, it&#8217;s a different style of cooking – you add ingredients, taste, adjust, and repeat – not a simple follow-the-recipe dish.</p>



<p>The takeaway? Approach AI initiatives with the same rigor as your big data projects – if not more. Plan thoroughly, ensure the fundamentals are solid, and involve the right experts (data engineers, data scientists, domain specialists) from day one. It’s a marathon, not a sprint (with a few extra hurdles on the track), but with preparation and teamwork, you’ll reach the finish line.</p>



<h2 class="wp-block-heading">GenAI Applications in the Wild: Why They Need AI-Ready Data</h2>



<p>Let’s ground this in some real-world examples. What do we actually <em>do</em> with GenAI in a business setting, and why does data prep matter so much? Here are a few popular use cases:</p>



<ul class="wp-block-list">
<li><strong>AI Chatbots and Virtual Agents:</strong> They need a curated, trusted knowledge base of information; otherwise the bot will either give wrong answers or be embarrassingly clueless.</li>



<li><strong>Document Summarization and Analysis:</strong> The documents must be machine-readable (not just scanned images) and organized, so the AI can find key points and accurately summarize them.</li>



<li><strong>Intelligent Search:</strong> This requires indexing and integrating your data (often via semantic or vector search). Without well-prepared data, the AI won’t retrieve the answers users are looking for.</li>



<li><strong>AI Copilots for Employees:</strong> These AI assistants (for coding, marketing, finance, etc.) rely on internal data. If that data is siloed, outdated, or poorly defined, the copilot’s guidance will be far less useful.</li>
</ul>



<p>Across all these examples, the pattern is clear: the quality and readiness of data determines whether a GenAI application succeeds or face-plants. Even a state-of-the-art model can’t overcome messy, siloed, or untrustworthy data. I’ve seen brilliant technical prototypes get scrapped because the underlying data pipeline wasn’t sustainable or the outputs couldn’t be trusted by end users. Generative AI can do amazing things, but only if our data house is in order.</p>



<h2 class="wp-block-heading">Tools for the Journey: Azure Purview and Microsoft Fabric</h2>



<p>By now you might be thinking, “Alright, we need to improve a lot of things… where do we even start, and are there tools to help?” The good news is yes – there are platforms that align well with this journey. In the Azure ecosystem, two worth highlighting are <strong>Microsoft Purview</strong> and <strong>Microsoft Fabric</strong>:</p>



<p><strong>Microsoft Purview:</strong> Purview is Microsoft’s data governance service. It helps you discover, catalog, and track the lineage of data across your organization. In short, Purview makes it easier to ensure your data is well-defined, trustworthy, and compliant – exactly what you need before unleashing AI on it.</p>



<p><strong>Microsoft Fabric:</strong> Fabric is Microsoft’s unified analytics platform, combining data engineering, data warehousing, and data science tools in one place. It’s built with AI in mind – using OneLake to store all your data and integrating seamlessly with Azure AI services. This means you can develop and deploy AI solutions faster, without stitching together a dozen separate systems.</p>



<p>Of course, tools alone won’t magically make your data AI-ready. But leveraging platforms like Purview and Fabric can accelerate the process by reinforcing good practices (governance, single source of truth, scalable architecture) as you embark on your GenAI projects.</p>



<h2 class="wp-block-heading">Conclusion: Begin Your AI-Ready Data Journey</h2>



<p>If you’ve stuck with me this far, you know that getting data GenAI-ready is a journey, not an overnight task. The best way to start is by assessing where you are today. What stage of data maturity are you at, and where are the gaps? Maybe you have lots of data but little governance, or great dashboards but poor data quality. Identifying one or two key areas to improve is a great first step.</p>



<p>Next, consider running a pilot project that leverages GenAI on a small scale. Pick a use case that excites people but is manageable—perhaps an internal Q&amp;A chatbot or an AI-generated report summary. As you execute it, pay attention to what’s blocking you. Are you scrambling to clean data or define metrics? Use those lessons to shore up your data foundations.</p>



<p>Also, remember to celebrate the “boring” work that enables AI – like setting up a data catalog, cleaning datasets, and defining business terms. These may not feel like innovation, but they directly boost your AI projects. And keep the collaboration going: getting data ready for AI isn’t just an IT task, so you need buy-in from leadership and participation from business teams. When everyone sees that better data leads to better AI, it’s much easier to get support for data quality and governance efforts.</p>



<p>If you’ve built a data warehouse or analytics platform before, you already know the playbook: define the goal, get the data in shape, build iteratively, and keep improving. GenAI is simply another chapter in that story.</p>



<p>So, roll up those sleeves and start laying the groundwork. Each improvement in data quality, governance, and architecture moves you closer to the day you can confidently say, “Yes—we’re ready for AI.”</p>The post <a href="https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/">Getting Your Data GenAI-Ready: The Next Stage of Data Maturity</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2026/01/getting-your-data-genai-ready-the-next-stage-of-data-maturity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20874</post-id>	</item>
		<item>
		<title>Fabric Cost Analysis (FCA)</title>
		<link>https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/</link>
					<comments>https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 24 Dec 2025 16:03:01 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20851</guid>

					<description><![CDATA[<p>Monitoring costs in Microsoft Fabric can be trickier than it first appears. You might assume it&#8217;s just a flat fee per capacity (easy, right?), but real-world usage tends to add complexity. Maybe you pause and resume a capacity, scale it <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/">Fabric Cost Analysis (FCA)</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>Monitoring costs in Microsoft Fabric can be trickier than it first appears. You might assume it&#8217;s just a flat fee per capacity (easy, right?), but real-world usage tends to add complexity. Maybe you pause and resume a capacity, scale it up or down, or leverage extra features – suddenly you’re wondering where those additional charges came from. Many organizations struggle to distinguish which costs are included in their Fabric capacity and which aren’t. They also want guidance on optimizing resources and practicing good FinOps (Financial Operations) like chargeback and showback to internal teams.</p>



<p>This complexity matters because Microsoft Fabric is the fastest-growing unified data platform in Microsoft’s history, now adopted by over 28,000 customers. Yet for many organizations, the capacity model remains one of the biggest obstacles to adoption. Understanding what’s included versus what’s not, the real impact of reservations, and how Azure quotas affect usage and cost is genuinely hard—and that uncertainty slows down expansion.</p>



<p>Enter <a href="https://aka.ms/FabricCostAnalysis" title="">Fabric Cost Analysis (FCA)</a> – a free, open-source solution available to everyone on a <a href="https://aka.ms/FabricCostAnalysis" title="">Microsoft GitHub repository</a>, designed to shine a light on all your Microsoft Fabric costs. FCA was developed by a multidisciplinary team (<a href="https://www.linkedin.com/in/cdupui/">Cedric Dupui</a>,&nbsp;<a href="https://www.linkedin.com/in/mlomani/">Manel Omani</a>, and&nbsp;<a href="https://www.linkedin.com/in/antoine-richet-22a44021/">Antoine Richet</a>, led by&nbsp;<a href="https://www.linkedin.com/in/casteres/">Romain Casteres</a>) whose expertise spans FinOps, data, and go-to-market, with a clear goal: turn a major adoption barrier into a strategic lever for growth.</p>



<p>Conceived directly from customer questions, FCA answers the things people actually want to know: <em>What are we really paying for? What’s included? Where are the optimization opportunities?</em> It doesn’t just track costs—it builds trust, helps organizations explain spend internally, and ultimately accelerates Fabric adoption.</p>



<p>Built entirely on Fabric itself (talk about eating your own dog food!), FCA demonstrates the platform’s power end-to-end and highlights the use of GenAI through an agentic experience, allowing users to access cost insights in natural language. Deployed at several customers, FCA has been presented to the global community at FabCon in Vienna, the AI Tour in Latin America, and across various community events around the world. The aim is simple: enable everyone to confidently control the cost of the Fabric platform.</p>



<p>Important: Keep in mind that FCA is a solution accelerator, not an official Microsoft product. That means it’s community-supported and carries no formal Microsoft support. It’s been shared freely with the community to help Fabric users, but you use it at your own risk (in practice it’s quite robust, but if something breaks, you’ll be relying on community help). Now, let&#8217;s break down what FCA offers and how it works.</p>



<h3 class="wp-block-heading">Content</h3>



<p>In this post, we&#8217;ll cover the main aspects of the Fabric Cost Analysis tool: its architecture, the data it uses as inputs, the outputs it provides (reports and a nifty data agent), and how you can set it up and get support. By the end, you should have a clear picture of how FCA can help you monitor and optimize your Fabric costs—and remove friction from Fabric adoption.</p>



<h3 class="wp-block-heading">Architecture</h3>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?ssl=1"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="1024" height="327" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2.png?resize=1024%2C327&#038;ssl=1" alt="" class="wp-image-20861" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?resize=1024%2C327&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?resize=300%2C96&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?resize=768%2C245&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?resize=1536%2C490&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?resize=2048%2C654&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-2-scaled.png?w=2360&amp;ssl=1 2360w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>



<p><em>Figure: High-level architecture of the Fabric Cost Analysis (FCA) solution.</em></p>



<p>This diagram shows how cost and usage data flows through Microsoft Fabric. Cost data (such as Azure billing info) is ingested via Fabric Pipelines and Notebooks into a Fabric Lakehouse, where it&#8217;s stored both in raw form and as optimized Delta Parquet files. This dual storage (raw and curated) allows Power BI’s Direct Lake to access the data directly for analytics, enabling fast and interactive cost reporting.</p>



<p>There’s even a built-in Fabric Data Agent (think of it like an AI assistant for your data) on top of the FCA data model. In short, FCA uses Fabric to monitor Fabric—a self-referential approach that showcases the platform’s capabilities while solving a very real business problem.</p>



<h3 class="wp-block-heading">FCA Inputs</h3>



<p>FCA gathers a variety of data and loads it into your Fabric Lakehouse to analyze costs from multiple angles. The key inputs include:</p>



<ul class="wp-block-list">
<li><strong>Azure cost data in FOCUS format</strong>: FCA pulls Azure billing details using the FinOps Open Cost and Usage Specification (<a href="https://focus.finops.org/" title="">FOCUS</a>), providing a normalized, analysis-ready cost model.</li>



<li><strong>Enriched documentation data</strong>: Reference data from Microsoft Learn documentation adds context and definitions, making reports easier to understand.</li>



<li><strong>Azure Reservations (optional)</strong>: If you use reservations for Fabric capacities, FCA shows whether they’re actually delivering savings.</li>



<li><strong>Azure Quotas (optional)</strong>: Quota data helps you understand usage versus limits—critical for both cost control and capacity planning.</li>
</ul>



<p><em>Note:</em> FCA is scoped exclusively to Microsoft Fabric costs. Unrelated Azure services are filtered out, keeping the focus exactly where it belongs.</p>
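<p>To make the FOCUS input concrete, here is a minimal sketch of what a FOCUS-normalized cost export looks like to downstream code and how Fabric-only scoping might be applied. The column names (ServiceName, BilledCost, ChargePeriodStart) come from the FOCUS specification; the sample rows and the exact service-name filter value are illustrative assumptions, not FCA's actual implementation.</p>

```python
# Minimal sketch: filtering a FOCUS-style cost export down to Fabric charges.
# Column names follow the FOCUS spec; the "Microsoft Fabric" service name and
# the sample rows are illustrative assumptions, not FCA's actual logic.
from decimal import Decimal

focus_rows = [
    {"ServiceName": "Microsoft Fabric", "BilledCost": Decimal("12.50"), "ChargePeriodStart": "2025-12-01"},
    {"ServiceName": "Microsoft Fabric", "BilledCost": Decimal("8.75"),  "ChargePeriodStart": "2025-12-02"},
    {"ServiceName": "Azure Storage",    "BilledCost": Decimal("3.10"),  "ChargePeriodStart": "2025-12-01"},
]

def fabric_cost(rows):
    """Sum billed cost for Fabric-only rows, mirroring FCA's scoping idea."""
    return sum(r["BilledCost"] for r in rows if r["ServiceName"] == "Microsoft Fabric")

print(fabric_cost(focus_rows))  # prints 21.25
```

<p>Because FOCUS normalizes column names and semantics across billing sources, this kind of filter works the same way regardless of which export produced the rows – that is the point of ingesting in FOCUS format rather than raw billing schemas.</p>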



<h3 class="wp-block-heading">FCA Outputs</h3>



<p>Once the data is collected and processed, FCA delivers two primary outputs: a comprehensive Power BI report and a Fabric Data Agent that allows natural-language exploration of your cost data. Together, they support classic FinOps scenarios—inform, optimize, and operate—while making cost conversations easier across technical and business teams.</p>



<h4 class="wp-block-heading">Report</h4>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?ssl=1"><img data-recalc-dims="1" decoding="async" width="1024" height="573" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3.png?resize=1024%2C573&#038;ssl=1" alt="" class="wp-image-20862" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?resize=1024%2C573&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?resize=300%2C168&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?resize=768%2C430&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?resize=1536%2C859&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?resize=2048%2C1146&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-3-scaled.png?w=2360&amp;ssl=1 2360w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></figure>



<p><em>Figure: Example of an FCA report page (the “Summary” page).</em></p>



<p>Each report page focuses on a different aspect of Fabric cost management. The Home page provides a high-level snapshot, clearly separating costs covered by capacity from additional charges—making it much easier to explain spend internally. The Summary page breaks down costs by capacity and region, helping organizations understand how usage is distributed.</p>



<p>Other pages include Capacity Usage (to identify under- or over-utilization), Reservations (to validate savings), Cost Detail (for granular analysis and forecasting), Quotas (to track limits and usage), and a Support page that explains billing nuances and metrics. You can use the reports as-is or extend them using the underlying data model.</p>



<h3 class="wp-block-heading">Data Agent</h3>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?ssl=1"><img data-recalc-dims="1" decoding="async" width="966" height="1024" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?resize=966%2C1024&#038;ssl=1" alt="" class="wp-image-20867" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?resize=966%2C1024&amp;ssl=1 966w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?resize=283%2C300&amp;ssl=1 283w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?resize=768%2C814&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?resize=1449%2C1536&amp;ssl=1 1449w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/12/image-4.png?w=1832&amp;ssl=1 1832w" sizes="(max-width: 966px) 100vw, 966px" /></a></figure>



<p><em>Figure: Example of Data Agent communication in both English and French.</em></p>



<p>One of FCA’s most powerful features is its Fabric Data Agent integration. This agentic, GenAI-powered experience allows users to ask questions like:</p>



<ul class="wp-block-list">
<li>“Which Fabric capacities cost the most this quarter?”</li>



<li>“How have my Fabric costs trended over the last six months?”</li>
</ul>



<p>The Data Agent retrieves answers directly from the FCA semantic model, making cost analysis accessible even to non-technical users. It works in multiple languages and can be accessed via the web or Microsoft Teams. This capability reinforces FCA’s core mission: remove friction, build confidence, and make cost transparency part of everyday decision-making.</p>
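<p>Under the hood, a question like “which capacities cost the most?” resolves to a straightforward aggregation over the cost model. Here is a hedged sketch of the equivalent computation – the capacity names and figures are invented for illustration, and the real Data Agent queries FCA’s semantic model rather than a Python list.</p>

```python
# Illustrative aggregation behind "which Fabric capacities cost the most?"
# Capacity names and costs are made-up sample data; the real Data Agent
# answers from FCA's semantic model, not an in-memory list.
from collections import defaultdict

charges = [
    ("F64-prod",  410.0), ("F64-prod",  395.5),
    ("F16-dev",    88.0), ("F2-sandbox", 12.3),
]

def top_capacities(rows, n=3):
    totals = defaultdict(float)
    for capacity, cost in rows:
        totals[capacity] += cost
    # Highest spend first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_capacities(charges))
# [('F64-prod', 805.5), ('F16-dev', 88.0), ('F2-sandbox', 12.3)]
```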



<h3 class="wp-block-heading">Setup</h3>



<p>Getting started with FCA is intentionally simple. The solution is delivered through the <a href="https://github.com/microsoft/fabric-toolbox/tree/main" title="">Microsoft Fabric Toolbox</a> on GitHub and includes an automated, guided deployment process. A one-click deployment sets up pipelines, notebooks, the Lakehouse, datasets, and reports. Check it out <a href="https://github.com/microsoft/fabric-toolbox/tree/main/monitoring/fabric-cost-analysis" title="">here</a>.</p>



<p>As long as you have the appropriate Fabric workspace permissions and an Azure subscription for billing data, deployment typically takes just minutes. Many users start with a limited scope to explore the solution, then expand to full production once comfortable.</p>



<h3 class="wp-block-heading">Support</h3>



<p>Because FCA is a community-driven solution, support happens openly on GitHub. Bugs, questions, and feature requests are handled through GitHub issues, which also serve as the project backlog. The team actively monitors feedback and incorporates improvements over time.</p>



<p>One important reminder: do not open Microsoft support tickets for FCA. Instead, use the GitHub repository, where both the creators and the broader community can help.</p>



<h3 class="wp-block-heading">Final Thoughts</h3>



<p>Fabric Cost Analysis turns one of the most common frustrations with Microsoft Fabric—cost transparency—into a strength. Built by practitioners, shaped by customers, and shared freely with the community, FCA helps organizations understand what they’re paying for, optimize usage, and confidently scale Fabric.</p>



<p>If you’re running Microsoft Fabric capacities, FCA is well worth deploying. It demystifies billing, highlights optimization opportunities, and helps teams move from hesitation to expansion—exactly what a modern FinOps-driven platform needs.</p>



<p>Check out the demo presentation available on YouTube:&nbsp;<a href="https://youtu.be/ZRtxJgFGfi4">Fabric Cost Analysis</a>.</p>



<p><em>Happy cost optimizing.</em></p>The post <a href="https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/">Fabric Cost Analysis (FCA)</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/12/fabric-cost-analysis-fca/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20851</post-id>	</item>
		<item>
		<title>Microsoft Ignite Announcements Nov 2025</title>
		<link>https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/</link>
					<comments>https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 02 Dec 2025 16:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20774</guid>

					<description><![CDATA[<p>Announced at&#160;Microsoft Ignite&#160;two weeks ago were many new product features related to the data platform. Check out the&#160;Major announcements&#160;and&#160;Book of News. I went through the many announcements and picked out the most interesting ones, so you don&#8217;t have to: Microsoft <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/">Microsoft Ignite Announcements Nov 2025</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>Announced at&nbsp;<a href="https://ignite.microsoft.com/">Microsoft Ignite</a>&nbsp;two weeks ago were many new product features related to the data platform. Check out the&nbsp;<a href="https://www.microsoft.com/en-us/microsoft-fabric/blog/2024/11/19/accelerate-app-innovation-with-an-ai-powered-data-platform/">Major announcements</a>&nbsp;and&nbsp;<a href="https://news.microsoft.com/ignite-2025-book-of-news/" title="">Book of News</a>. I went through the many announcements and picked out the most interesting ones, so you don&#8217;t have to:</p>



<p><strong>Microsoft SQL Server 2025</strong>, with built-in AI and developer-first enhancements, is now generally available. The platform enables customers to securely use data they already have and work in the familiar T-SQL language. It provides:</p>



<ul class="wp-block-list">
<li>A way to access AI models of choice, hosted locally or in the cloud, and to securely use data to best fit business needs.</li>



<li>Simplified data processing with native JSON support, built-in REST APIs and change event streaming.</li>



<li>Near real-time analytics by replicating SQL Server data to Microsoft OneLake with database mirroring in Microsoft Fabric.</li>



<li>Increased workload performance, uptime and concurrency for SQL Server apps with enhanced query optimization, optimized locking and improved failover reliability.</li>



<li>Improved credential management and fewer potential vulnerabilities with Microsoft Entra ID for authentication through Microsoft Azure Arc.</li>



<li>GitHub Copilot integration in Visual Studio Code and SQL Server Management Studio 22 for better productivity.</li>



<li>A new Microsoft Python driver for SQL Server (mssql-python) for a fast and developer-friendly experience in Windows, macOS and Linux.</li>
</ul>



<p>(<a href="https://techcommunity.microsoft.com/blog/SQLServer/sql-server-2025-is-now-generally-available/4470570" title="">more info</a>) (<a href="https://www.sqlservercentral.com/articles/sql-server-2025-has-arrived" title="">Bob Ward post</a>)</p>



<p><strong>Microsoft Azure DocumentDB now generally available</strong><br>Microsoft Azure DocumentDB, the first managed service built on the open-source DocumentDB standard, is generally available. Now governed by the Linux Foundation, DocumentDB delivers an open and community-driven MongoDB-compatible engine with multicloud flexibility, running consistently across Azure, other clouds and on premises. This gives organizations freedom from proprietary lock-in and the ability to standardize on open source while operating at a global scale.  Azure DocumentDB was previously known as Azure Cosmos DB for MongoDB (vCore).  Note that this new Azure DocumentDB is not the same product as the original DocumentDB that became Cosmos DB—it’s a newly branded, separate vCore-based MongoDB service. (<a href="https://devblogs.microsoft.com/cosmosdb/azure-documentdb-is-now-generally-available/" title="">more info</a>)</p>



<p><strong>Azure HorizonDB, a new PostgreSQL database, in private preview</strong><br>Microsoft Azure HorizonDB, a new PostgreSQL cloud database service for building or modernizing mission-critical apps, is now in private preview. Integrated with Microsoft Foundry, Microsoft Fabric, Visual Studio Code and more, Azure HorizonDB streamlines development with the following features:</p>



<ul class="wp-block-list">
<li>Transactions and vector search up to three times faster than open-source PostgreSQL, based upon internal benchmarking.</li>



<li>Scale-out compute to 15 replicas with 192 vCores each.</li>



<li>Auto-scaling storage up to 128 TB.</li>



<li>Advanced DiskANN vector indexing for AI workloads and native semantic operators.</li>



<li>AI-readiness with pre-provisioned models.</li>
</ul>



<p>Organizations can right-size consumption to their workloads’ needs and save capacity for future requirements by independently scaling compute and storage. Modern authentication with Microsoft Entra ID and security features like Microsoft Defender and private endpoints support enterprise-grade protection.  It is optimized for performance, scale, and AI apps and designed to compete directly with AWS Aurora and GCP AlloyDB. (<a href="https://techcommunity.microsoft.com/blog/adforpostgresql/announcing-azure-horizondb/4469710" title="">more info</a>)</p>



<p><strong>Microsoft Fabric databases</strong>, now generally available, bring together SQL database and Cosmos DB in a new, unified software-as-a-service (SaaS) experience for organizations to manage, analyze and activate their data. Fabric databases provide instant provisioning, autonomous architecture, enterprise-grade security and native AI integration — including support for vector data and retrieval-augmented generation (RAG) patterns — to help teams build intelligent, real-time apps. (<a href="https://blog.fabric.microsoft.com/en-us/blog/fabric-databases-a-unified-saas-native-experience-for-modern-data-workloads-generally-available" title="">more info</a>)</p>



<p><strong>Announcing Microsoft Fabric IQ: The Semantic Intelligence Platform</strong><br>Microsoft Fabric IQ is the new semantic intelligence layer that elevates Fabric from a unified&nbsp;<em>data</em>&nbsp;platform to a unified&nbsp;<em>intelligence</em>&nbsp;platform. It turns your unified data estate, already consolidated in OneLake, into a live, structured, connected model of how your business operates. It bridges the gap between where your data lives and how your teams and AI reason, decide, and act.</p>



<p>Fabric IQ combines five integrated capabilities into one semantic intelligence system:</p>



<ul class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/fabric/iq/ontology/overview" title="">Ontology</a> (preview): shared model of business entities, relationships, rules, and objectives.  This was just released at Ignite.</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/data-warehouse/semantic-models" title="">Semantic Model</a>: trusted BI definitions, now extended beyond analytics into operations and AI.  Semantic models have been available for a long time.</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/graph/overview" title="">Graph model</a> (preview): native graph engine for multi-hop reasoning and system-wide insights.  This was released a couple of months ago.</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/data-science/concept-data-agent" title="">Data Agent</a> (preview): virtual analysts that answer business questions using structured business meaning.  This has been available for about a year.</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/operations-agent" title="">Operations Agent </a>(preview): autonomous agents that reason, learn, and act in real time to advance outcomes.  This has been available for about a year.</li>
</ul>



<p></p>



<p>So to clarify: Fabric IQ is a new &#8220;umbrella&#8221; name for five features, only one of which is brand new &#8211; Ontology.  </p>



<p>More about Ontology:</p>



<p>An ontology in Microsoft Fabric is a shared, machine-understandable vocabulary that defines the core business concepts that exist within an organization—things like customers, products, orders, and assets, rather than just the raw tables that store data. It provides a <em>business-level semantic layer</em> that standardizes terminology across domains, ensuring all teams and tools refer to the same entity names, properties, and relationships. This eliminates inconsistencies that commonly arise when different teams model the same concepts in different ways.</p>



<p>Once created, the ontology is bound to actual data sources in Fabric, including lakehouse tables, event streams, and semantic models. This binding process maps columns to properties, links identifiers to relationships, and transforms raw table rows into typed entity instances enriched with consistent semantics, metadata, provenance, and timestamps. The ontology becomes the layer through which physical data is interpreted, giving Fabric a unified view of business meaning across disparate systems.</p>



<p>When Graph in Microsoft Fabric is enabled, the ontology is materialized as a graph where each entity instance becomes a node and each relationship becomes an edge. This graph representation allows visual exploration (e.g. browsing relationships, lineage, dependencies), graph-style queries or algorithms (e.g. pathfinding, impact analysis) and also semantic queries: you ask in terms of business concepts (not tables) — e.g. “Find all shipments that are delayed and associated with high-risk routes” — and Fabric handles the underlying joins, filtering, and data reconciliation. This enables a far richer and more intuitive way to navigate enterprise data.</p>
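<p>As a toy illustration of that semantic-query idea (plain Python, not Fabric&#8217;s actual API; the entity and relationship names here are invented for this sketch), the &#8220;delayed shipments on high-risk routes&#8221; question becomes a filter over typed nodes and edges rather than a set of table joins:</p>

```python
# Toy model of an ontology materialized as a graph: entity instances
# are nodes keyed by (entity_type, id), relationships are typed edges.
# All names are illustrative, not Fabric's API.
nodes = {
    ("Shipment", "S1"): {"status": "delayed"},
    ("Shipment", "S2"): {"status": "on-time"},
    ("Route", "R1"): {"risk": "high"},
    ("Route", "R2"): {"risk": "low"},
}

edges = [
    (("Shipment", "S1"), "uses_route", ("Route", "R1")),
    (("Shipment", "S2"), "uses_route", ("Route", "R2")),
]

def related(node, relationship):
    """Follow a typed relationship edge from a node to its targets."""
    return [t for (s, r, t) in edges if s == node and r == relationship]

# "Find all shipments that are delayed and associated with high-risk routes",
# expressed over business concepts instead of table joins:
delayed_high_risk = [
    nid
    for (etype, nid), props in nodes.items()
    if etype == "Shipment"
    and props["status"] == "delayed"
    and any(nodes[r]["risk"] == "high" for r in related((etype, nid), "uses_route"))
]
print(delayed_high_risk)  # ['S1']
```

<p>The point of the sketch is the shape of the query: the caller names entities and relationships, and the traversal machinery (in Fabric&#8217;s case, the graph engine) resolves them against the bound data.</p>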



<p>Using an ontology brings multiple benefits over traditional table-centric models. It enforces cross-domain consistency and governance by providing a single semantic standard for the entire organization. It simplifies integration by allowing data from different sources—batch, real-time, structured, or semantic—to be unified under the same conceptual model. Its semantic richness enables deeper context and expressiveness, supporting advanced analytics and reasoning that are difficult or impossible with flat tables.</p>



<p>Finally, because the ontology abstracts business meaning away from physical schemas, teams gain flexibility and agility. Changes to underlying tables or new data sources can often be accommodated simply by rebinding them to the ontology, without rewriting analytics or business logic. Combined with Fabric&#8217;s graph and AI capabilities, the ontology becomes a powerful foundation for agents, copilots, and intelligent applications that interact with data in terms of real business concepts.</p>



<p>To create an ontology in Microsoft Fabric, you begin by defining the ontology itself—either manually or by generating it from an existing semantic model. When generated, each table is converted into an entity type, its columns become properties, and any existing relationships are turned into ontology relationships. After the initial creation, the next step is to refine and rename these entity types so they reflect business-friendly concepts rather than raw technical table names, such as transforming “dimproduct” into “Product” or “factsales” into “SaleEvent.”</p>
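<p>A minimal sketch of that generation-and-rename step (plain Python, not Fabric&#8217;s API; the table and column names are illustrative): each table becomes an entity type, its columns become properties, and a rename map applies the business-friendly names:</p>

```python
# Raw table metadata, as it might come from a semantic model.
tables = {
    "dimproduct": ["productkey", "productname", "listprice"],
    "factsales": ["saleskey", "productkey", "orderdate", "amount"],
}

# The manual refinement step: map technical table names to business concepts.
friendly_names = {"dimproduct": "Product", "factsales": "SaleEvent"}

# Each table becomes an entity type; its columns become properties.
ontology = {
    friendly_names.get(table, table): {"properties": columns}
    for table, columns in tables.items()
}

print(sorted(ontology))  # ['Product', 'SaleEvent']
```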



<p>With the conceptual model established, you then bind your actual data sources to the ontology. This involves mapping tables or event streams to the appropriate properties and relationships, defining identifiers and keys, and handling differences between time-series and static data. If desired, you can then enable Graph support, which turns the ontology into a first-class graph structure. This unlocks graph traversal, lineage insights, dependency analysis, and more advanced graph operations.</p>



<p>Finally, once the ontology is bound and optionally graph-enabled, you can query your data through the business-level semantic layer rather than through raw tables. Queries are expressed using business concepts—such as “Show all Orders for Customer X between date Y and date Z”—and Fabric handles the necessary joins, filters, and reconciliations. If natural-language querying is enabled, users can interact with the data even more intuitively, relying on the ontology to interpret their intent.</p>



<p>In short, Fabric ontology is a business-level semantic layer you create over your data that standardizes meaning across the organization and makes it far easier for users—and AI agents—to ask questions and get answers in business terms rather than table structures. It unifies data from disparate sources, represents it as a connected graph when enabled, and provides a consistent, intelligent foundation for analytics, exploration, and automation.  It differs from a semantic model in that a semantic model describes <em>how your data is stored</em>, while an ontology describes <em>what your business actually is</em>.</p>



<p><strong>Fabric OneLake and Databricks integration announcements</strong>: <a href="https://learn.microsoft.com/en-us/fabric/mirroring/azure-databricks" title="">Mirroring data into OneLake</a> is already generally available; by the end of 2025, Azure Databricks will enable&nbsp;native reading from OneLake&nbsp;through Unity Catalog in preview, allowing users to seamlessly access data stored in OneLake without duplication or complex pipelines; and looking ahead, Azure Databricks will support&nbsp;writing and storing data directly in OneLake, without any additional storage resources to manage.&nbsp;</p>



<p><strong>Preview of&nbsp;OneLake shortcuts to SharePoint&nbsp;and OneDrive</strong>,&nbsp;allowing you to bring your unstructured,&nbsp;productivity data&nbsp;into OneLake&nbsp;without copying files or building custom ETL flows.&nbsp;</p>



<p><strong>Fabric capacity overage</strong>/<strong>expanding surge protection</strong> &#8211; To help you gain control over the jobs running on your Fabric capacities, Microsoft is expanding surge protection and introducing a new tool called Fabric capacity overage—both of which will be released into preview in Q1 2026—and adding Fabric capacity events in the Real-Time hub. First,&nbsp;<a href="https://learn.microsoft.com/fabric/enterprise/surge-protection" target="_blank" rel="noreferrer noopener">surge protection</a>&nbsp;will now let you set limits on specific workspace activity to protect your capacities from unexpected surges from non-critical workspaces.&nbsp;&nbsp;Also to be released is Fabric capacity overage, which admins can turn on for specific capacities, allowing them to automatically pay for excess consumption and avoid throttling whenever high-traffic periods occur. Rather than over-provisioning for rare spikes, you can right-size your capacity for typical usage and enable overage only when needed. Admins can even set a 24-hour limit so you don’t break your budget, and the feature can be toggled on or off in seconds. These tools are designed to work together to help you prevent overuse and maintain smooth, uninterrupted operations even during peak demand.</p>



<p><a href="https://blog.fabric.microsoft.com/en-GB/blog/announcing-data-clustering-in-fabric-data-warehouse-preview/" title=""><strong>Data Clustering in Fabric Data Warehouse (Preview)</strong></a> &#8211; Fabric Data Warehouse introduces data clustering capabilities to optimize query performance and reduce storage costs through intelligent data organization.</p>



<p><strong>IDENTITY columns (Preview) in Fabric Data Warehouse</strong> are a long-awaited feature that simplifies surrogate key generation during data ingestion. IDENTITY columns automatically produce unique values for each new row, eliminating the need for manual key assignment and reducing the risk of key duplication and key integrity issues. (<a href="https://learn.microsoft.com/en-us/fabric/data-warehouse/identity" title="">more info</a>)</p>
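<p>Conceptually, an IDENTITY column behaves like an automatic key generator applied as rows arrive. A minimal Python sketch of the idea (illustrative only, not how Fabric implements it; the column and row names are made up):</p>

```python
# Emulate IDENTITY-style surrogate key assignment during ingestion:
# every incoming row gets the next unique key, with no manual bookkeeping.
from itertools import count

next_key = count(start=1)  # monotonically increasing key generator

def ingest(rows):
    """Attach an auto-generated surrogate key to each incoming row."""
    return [{"customer_key": next(next_key), **row} for row in rows]

loaded = ingest([{"name": "Contoso"}, {"name": "Fabrikam"}])
keys = [r["customer_key"] for r in loaded]
print(keys)  # [1, 2]
```

<p>The warehouse equivalent simply moves this generator into the engine, so concurrent loads still receive unique keys without coordination in your pipeline code.</p>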



<p><strong>Fabric Warehouse Snapshots</strong> are now generally available, letting you create read-only views of your warehouse as of a specific point in time.  (<a href="https://blog.fabric.microsoft.com/en-us/blog/warehouse-snapshots-in-microsoft-fabric-freeze-data-unlock-reliable-reporting/" title="">more info</a>)</p>



<p><strong>Mirroring for SQL Server in Fabric</strong> for all in-market versions of SQL Server from SQL Server 2016 to SQL Server 2025 is Generally Available. (<a href="https://blog.fabric.microsoft.com/en-GB/blog/mirroring-for-sql-server-in-microsoft-fabric-generally-available/" title="">more info</a>)</p>



<p><a href="https://blog.fabric.microsoft.com/en-GB/blog/fabric-capacity-events-in-real-time-hub-preview/" title=""><strong>Fabric Capacity Events in Real-Time Hub (Preview)</strong></a><strong> </strong>&#8211; Real-Time Hub now streams Fabric capacity events, enabling proactive monitoring and management of compute resources and workload performance.</p>



<p><a href="https://azure.microsoft.com/en-us/updates?id=523768" title=""><strong>Cosmos DB in Microsoft Fabric</strong></a> is now Generally Available &#8211; Cosmos DB integrates natively with Microsoft Fabric, enabling seamless NoSQL workloads alongside analytics and AI within a unified data platform.</p>



<p><a href="https://blog.fabric.microsoft.com/en-GB/blog/whats-new-for-fabric-data-agents-at-ignite-2025-unlocking-deeper-data-reasoning-and-seamless-ai-interoperability/" title=""><strong>What&#8217;s New for Fabric Data Agents at Ignite 2025</strong></a> &#8211; Fabric Data Agents gain enhanced reasoning capabilities and improved AI interoperability, enabling more sophisticated data analysis and automated insights.</p>



<p><strong>ReadWrite access controls within lakehouse</strong> (preview) is now supported for items via <a href="https://learn.microsoft.com/en-us/fabric/onelake/security/get-started-security" title="">OneLake security</a>. This enhancement gives data owners the ability to grant precise write permissions to users—without requiring elevated workspace roles like Admin or Member. With ReadWrite access, workspace viewers or users with only Read access can now write data to specific tables and folders in a lakehouse, while remaining restricted from creating or managing Fabric items.&nbsp;(<a href="https://blog.fabric.microsoft.com/en-GB/blog/fine-grained-readwrite-access-to-lakehouse-data-with-onelake-security/" title="">more info</a>)</p>



<p><strong>Some name change history</strong>: Azure AI Studio was rebranded as <a href="https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/ignite-2024-streamlining-ai-development-with-an-enhanced-user-interface-accessib/4295859" target="_blank" rel="noreferrer noopener">Azure AI Foundry</a> at Microsoft Ignite 2024, introducing a unified platform for building and managing AI applications. At <a href="https://azure.microsoft.com/en-us/blog/microsoft-foundry-scale-innovation-on-a-modular-interoperable-and-secure-agent-stack/" target="_blank" rel="noreferrer noopener">Ignite 2025</a>, it was renamed again, this time to <a href="https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/accelerating-enterprise-ai-with-microsoft-foundry/4471122" target="_blank" rel="noreferrer noopener">Microsoft Foundry</a>, along with the announcement of many new features in Foundry (workflows, direct integration with Microsoft 365).</p>



<p>More info:</p>



<p><a href="https://campaigns.endjin.com/t/t-l-gudhja-zltkjnui-h/">Azure at Microsoft Ignite 2025: All the intelligent cloud news explained</a></p>



<p><a href="https://campaigns.endjin.com/t/t-l-gudhja-zltkjnui-k/">Reflections from Microsoft Ignite 2025</a> &#8211; Podcast</p>



<p><a href="https://azure.microsoft.com/en-us/blog/microsoft-databases-and-microsoft-fabric-your-unified-and-ai-powered-data-estate/" title="">Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate</a></p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/fabric-november-2025-feature-summary/" title="">Fabric November 2025 Feature Summary</a></p>The post <a href="https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/">Microsoft Ignite Announcements Nov 2025</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/12/microsoft-ignite-announcements-nov-2025/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20774</post-id>	</item>
		<item>
		<title>Starting Your First Data Warehouse: A Practical Learning Guide</title>
		<link>https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/</link>
					<comments>https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 12 Nov 2025 16:00:00 +0000</pubDate>
				<category><![CDATA[Data warehouse]]></category>
		<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20750</guid>

					<description><![CDATA[<p>I had a great question asked of me the other day and thought I would turn the answer into a blog post. The question is &#8220;I&#8217;m an experienced DBA in SQL Server/SQL DB, and my company is looking to build <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/">Starting Your First Data Warehouse: A Practical Learning Guide</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>I had a great question asked of me the other day and thought I would turn the answer into a blog post.  The question is &#8220;I&#8217;m an experienced DBA in SQL Server/SQL DB, and my company is looking to build their first data warehouse using Microsoft Fabric.  What are the best resources to learn how to do your first data warehouse project?&#8221;.  So, below are my favorite books, videos, blogs, and learning modules to help answer that question:</p>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4da.png" alt="📚" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Foundational Books (Start Here)</h2>



<p>I highly recommend starting with the classics from Ralph Kimball. Though they predate modern cloud platforms, their core concepts remain essential (I talk about all his books <a href="https://www.jamesserra.com/archive/2013/05/ralph-kimball-books/" target="_blank" rel="noreferrer noopener">here</a>).</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><th>Book</th><th>Main Focus</th><th>Primary Audience</th><th>When to Use It</th><th>Modern Relevance</th></tr><tr><td><strong>1&#x20e3; The Data Warehouse Toolkit</strong></td><td>Dimensional modeling—designing fact/dimension tables and schemas.</td><td>Data architects, data modelers</td><td>Early design phase—defining the logical data model.</td><td>Still foundational; all modern warehouses (Fabric, Snowflake, Redshift) use dimensional modeling concepts.</td></tr><tr><td><strong>2&#x20e3; The Data Warehouse Lifecycle Toolkit</strong></td><td>End-to-end methodology—scoping, planning, modeling, ETL, deployment.</td><td>BI/DW project leads, architects, PMs</td><td>Before/during the first implementation—as a roadmap.</td><td>Conceptually useful, but technical examples are on-premises. Use for governance and methodology.</td></tr><tr><td><strong>3&#x20e3; The Data Warehouse ETL Toolkit</strong></td><td>Data pipelines—extraction, transformation, loading, metadata, quality.</td><td>ETL/ELT developers, data engineers</td><td>During implementation—when building the pipelines.</td><td>Still highly relevant—logic applies to Fabric Dataflows, Data Factory, Synapse pipelines.</td></tr></tbody></table></figure>



<p>They also sell all three books as a bundle on <a href="https://www.amazon.com/Kimballs-Data-Warehouse-Toolkit-Classics-dp-1118875184/dp/1118875184/" title="">Amazon</a>. I used these same books when I was a DBA building my first warehouse.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2728.png" alt="✨" class="wp-smiley" style="height: 1em; max-height: 1em;" /> <strong>Shameless plug:</strong> My own book picks up where Kimball left off, covering what has transpired since. Order <a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/1098150767" title="">here</a>.</p>
</blockquote>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f393.png" alt="🎓" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Learning Modern Architectures</h2>



<p>Kimball&#8217;s books don&#8217;t cover data lakes or lakehouses (they weren’t around yet), so I recommend the following:</p>



<ul class="wp-block-list">
<li><a href="https://www.amazon.com/Building-Medallion-Architectures-Designing-Delta/dp/1098178831" title="">Building Medallion Architectures</a> – A great book that introduces the medallion pattern (bronze/silver/gold layers) for lakehouse design.</li>



<li><strong>Learning paths on Fabric if using it to build your data warehouse:</strong></li>



<li><a href="https://learn.microsoft.com/en-us/training/paths/work-with-data-warehouses-using-microsoft-fabric/" title="">Implement a data warehouse with Microsoft Fabric</a> – A guided Microsoft Learn path.</li>



<li><a href="https://learn.microsoft.com/en-us/training/paths/get-started-fabric/" title="">Get started with Microsoft Fabric</a> – Helpful if Fabric is brand new to you &#8211; you may first want to start with the overview modules.</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-introduction" title="">Microsoft Fabric Data Warehouse End-to-End Tutorial</a> – A practical walkthrough.</li>



<li><strong>Books on Fabric if using it to build your data warehouse:</strong></li>



<li><a href="https://www.amazon.com/Fundamentals-Microsoft-Fabric-End-End/dp/1098172922" title="">Fundamentals of Microsoft Fabric: Designing End-to-End Analytics Solutions</a></li>



<li><a href="https://www.amazon.com/Learn-Microsoft-Fabric-artificial-intelligence/dp/1835082289/" title="">Learn Microsoft Fabric: A Practical Guide to Performing Data Analytics in the Era of AI</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f3ae.png" alt="🎮" class="wp-smiley" style="height: 1em; max-height: 1em;" /> YouTube Videos to Check Out</h2>



<p><strong>My own videos:</strong></p>



<ul class="wp-block-list">
<li>Deciphering Data Architectures: Modern Data Warehouse, Data Fabric, Data Lakehouse, Data Mesh</li>



<li><a href="https://www.youtube.com/watch?v=N8OW0btxNbA" target="_blank" rel="noopener" title="">Data Architectures and Microsoft Fabric</a></li>



<li><a href="https://www.youtube.com/watch?v=34sI2e30JUM&amp;t=299s" title="">Microsoft Fabric: Lakehouse vs Warehouse</a></li>
</ul>



<p><strong>Other recommended videos:</strong></p>



<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=StIBBb69wDw" title="">Creating Your First Data Warehouse in Microsoft Fabric</a> (by <a href="https://www.youtube.com/@GuyInACube" target="_blank" rel="noreferrer noopener">Guy in a Cube</a> – they have many great Fabric videos)</li>



<li><a href="https://www.youtube.com/watch?v=yRerKDM1h74" title="">Data Warehouse vs Data Lake vs Data Lakehouse</a></li>



<li><a href="https://gotopia.tech/sessions/3149/whats-the-best-big-data-architecture-for-you" title="">What&#8217;s the Best Big Data Architecture for You?</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f4f0.png" alt="📰" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Blogs to Follow</h2>



<ul class="wp-block-list">
<li><a href="https://www.jamesserra.com/" title="">James Serra&#8217;s Blog</a> (another shameless plug)</li>



<li><a href="https://sqltechblog.com/2025/07/28/navigating-modern-data-architecture-dw-lakehouse-and-lakebase-explained/" title="">Navigating Modern Data Architecture: DW, Lakehouse, and Lakebase Explained</a></li>



<li><a href="https://www.integrate.io/blog/author/bill-inmon/" title="">Bill Inmon</a></li>



<li><a href="https://www.kimballgroup.com/data-warehouse-business-intelligence-resources/kimball-techniques/" title="">Ralph Kimball</a></li>



<li><a href="https://medium.com/@piethein" title="">Piethein Strengholt</a></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f393.png" alt="🎓" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Workshops Worth Taking</h2>



<p><strong>My own workshops:</strong></p>



<ul class="wp-block-list">
<li><a href="https://ecm.elearningcurve.com/Deciphering-Data-Architectures-p/da-05-a.htm" title="">Deciphering Data Architectures</a> (via eLearningCurve)</li>
</ul>



<p><strong>Others I recommend:</strong></p>



<ul class="wp-block-list">
<li><a href="https://training.dataversity.net/courses/dab0126-data-architecture-bootcamp-practical-exercises-in-architecture-design" title="">DAB0126: Data Architecture Bootcamp: Practical Exercises in Architecture Design</a> &#8211; Dataversity (Jan 20-22, 2026)</li>



<li><a href="https://www.ewsolutions.com/data-warehouse-training" title="">Data Warehousing 101</a> – EWSolutions</li>



<li><a href="https://tdwi.org/events/seminars/virtual-live/2025/oct/adv-all-arch-all-data-architecture-essentials-building-a-data-foundation-for-enterprise-analytics.aspx" target="_blank" rel="noreferrer noopener">Data Architecture Essentials: Building a Data Foundation for Enterprise Analytics</a> – TDWI</li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading"><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Final Thoughts</h2>



<p>If you&#8217;re a DBA transitioning into data architecture, you don&#8217;t need to reinvent the wheel. Lean on the enduring frameworks from Kimball, combine them with modern cloud-based architecture strategies, and pace yourself through trusted books, tutorials, and workshops.</p>



<p>Have go-to resources that helped you build your first data warehouse? Share them with me in the comments below or via my email <a href="mailto:jamesserra3@gmail.com">jamesserra3@gmail.com</a>—I’m always on the lookout for great new content.</p>The post <a href="https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/">Starting Your First Data Warehouse: A Practical Learning Guide</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/11/starting-your-first-data-warehouse-a-practical-learning-guide/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20750</post-id>	</item>
		<item>
		<title>Microsoft Purview: The Key Benefits of Data Governance</title>
		<link>https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/</link>
					<comments>https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 15 Oct 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Azure Purview]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20715</guid>

					<description><![CDATA[<p>I still see a lot of confusion about the functionality of Microsoft Purview ever since multiple products were combined into it, so I wanted to write this blog to help clear up that confusion. Microsoft Purview is a comprehensive solution <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/">Microsoft Purview: The Key Benefits of Data Governance</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>I still see a lot of confusion about the functionality of Microsoft Purview ever since multiple products were combined into it, so I wanted to write this blog to help clear up that confusion.</p>



<p>Microsoft Purview is a comprehensive solution for managing, protecting, and governing data across an organization. But it’s important to understand that <em>“Purview”</em> is actually an umbrella brand that includes three main areas of functionality:</p>



<ol class="wp-block-list">
<li><strong>Data Governance (Azure Purview)</strong> – focuses on discovering, classifying, and managing data across on-premises, multi-cloud, and SaaS environments.</li>



<li><strong>Data Security (M365 Purview)</strong> – covers <em>data loss prevention, insider risk management, information protection,</em> and <em>adaptive protection</em>. This set of capabilities evolved from what was previously known as Microsoft Information Protection (MIP).</li>



<li><strong>Data Compliance (M365 Purview)</strong> – focuses on <em>compliance manager, eDiscovery and audit, communication compliance, data lifecycle management,</em> and <em>records management</em>. These capabilities came from what was formerly called Microsoft Information Governance.</li>
</ol>



<p>This blog will focus only on the first area—data governance—often referred to as <em>Azure Purview</em>. It’s the part of Microsoft Purview that gives organizations visibility into their data landscape: where data lives, how it’s used, who owns it, and whether it can be trusted.</p>



<p>Data governance is the foundation for any data-driven organization. It ensures that everyone—from analysts to executives—can discover and use reliable, secure, and well-understood data. Below are the six major business benefits of Microsoft’s data governance capabilities in Azure Purview.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">1. Centralized Data Catalog and Discovery</h2>



<p><em>(Preventing “Reinventing the Wheel”)</em></p>



<p>Imagine if every department in your organization could see, at a glance, what data already exists and what it means. That’s exactly what Azure Purview’s data catalog enables.</p>



<p>A data catalog is like an intelligent inventory of all your data assets—databases, reports, files, dashboards, and even SaaS data sources. Azure Purview automatically scans your environments and builds a <strong>central catalog</strong> containing metadata (the “data about the data”) such as file names, owners, schemas, and data types.</p>



<p>This catalog serves as a <strong>single source of truth</strong> across the enterprise. Users can search for data using familiar business terms, browse datasets, and immediately see what’s available instead of starting from scratch. For example, instead of recreating a customer sales dataset, an analyst can search “sales revenue” and find the certified, approved dataset that already exists.</p>
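<p>Conceptually, the catalog search behaves like the toy sketch below: match a business term against asset metadata and surface certified datasets first. The catalog contents and the ranking rule are invented for illustration and are not how Purview actually indexes metadata:</p>

```python
# Toy catalog search -- Purview indexes metadata at enterprise scale; this
# only illustrates matching business-term searches against asset metadata.
catalog = [
    {"name": "fact_sales", "description": "Certified sales revenue by region", "certified": True},
    {"name": "tmp_sales_copy", "description": "Ad hoc sales extract", "certified": False},
    {"name": "dim_customer", "description": "Customer master data", "certified": True},
]

def search(term: str):
    """Return matching assets, certified ones first."""
    hits = [a for a in catalog
            if term.lower() in (a["name"] + " " + a["description"]).lower()]
    return sorted(hits, key=lambda a: not a["certified"])

print([a["name"] for a in search("sales")])  # ['fact_sales', 'tmp_sales_copy']
```

<p>Flagging certified datasets in the ranking is what steers an analyst toward the approved "sales revenue" asset instead of a stray copy.</p>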



<p>Each data asset in the catalog also includes <strong>data lineage</strong>—a visual representation of where data originates, how it moves through systems, and how it’s transformed along the way. If a report shows unexpected numbers, you can trace them back through every step to the original source system.</p>



<p>Beyond visibility, the catalog encourages <strong>collaboration and reuse</strong>. Data stewards can add notes, business definitions, and quality tags, making it easier for others to understand and trust the data. Certified datasets can be flagged so users know they’re safe and accurate to use.</p>



<p>The result: no more duplicated work or disconnected knowledge. Teams can focus on insights instead of spending hours searching for or recreating data.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">2. Sensitive Data Identification and Classification</h2>



<p><em>(Improving Security and Building Trust)</em></p>



<p>Every organization holds sensitive data—personal details, financial records, health information—and managing that data responsibly is critical for both compliance and reputation. Azure Purview helps by automatically <strong>discovering and classifying sensitive data</strong> across your entire environment.</p>



<p>As Purview scans your data estate, it uses AI-powered rules and predefined patterns to detect sensitive information such as credit card numbers, Social Security numbers, or email addresses. Once detected, it applies <strong>classification labels</strong>—metadata tags that categorize data (see <a href="https://www.jamesserra.com/archive/2024/07/classifications-and-sensitivity-labels-in-microsoft-purview/" title="">Classifications and sensitivity labels in Microsoft Purview</a>).</p>
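<p>To make pattern-based classification concrete, here is a minimal Python sketch of the idea. The patterns and label names are illustrative only; Purview's built-in classifiers use far richer logic (checksums, context words, confidence levels):</p>

```python
import re

# Illustrative patterns only -- Purview's real classifiers are much more
# sophisticated than bare regular expressions.
CLASSIFIERS = {
    "Credit Card Number": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email Address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(value: str) -> list[str]:
    """Return the classification labels whose pattern matches the value."""
    return [label for label, pattern in CLASSIFIERS.items()
            if pattern.search(value)]

print(classify("Contact: jane.doe@contoso.com"))      # ['Email Address']
print(classify("Card on file: 4111-1111-1111-1111"))  # ['Credit Card Number']
```

<p>The labels produced this way become metadata tags on the asset, which is all downstream reporting and policy enforcement need.</p>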



<p>This automatic classification provides <strong>visibility</strong> into where sensitive data resides and how it’s used. You can quickly identify which databases or files contain personal data and assess whether they’re being stored or shared appropriately.</p>



<p>From a business perspective, this capability is essential for compliance with regulations such as GDPR, HIPAA, and CCPA. You can demonstrate that your organization knows where its sensitive data is, who owns it, and what safeguards are in place.</p>



<p>And while data governance (Azure Purview) focuses on identifying and cataloging sensitive information, data security (M365 Purview) takes it a step further—enforcing rules to prevent data loss, manage insider risk, and apply encryption policies. Together, these two sides of Purview ensure that data is both understood and protected.</p>



<p>It’s important to note that while Azure Purview can identify and classify sensitive data, it doesn’t actually secure that data—it focuses on metadata, not the content itself. To protect the underlying information, you would use M365 Purview for items like Word documents or coordinate with, for example, an Oracle DBA to secure an Oracle database.</p>



<p>In short, Azure Purview strengthens trust in your data by making the invisible visible. You can’t protect what you don’t know you have—and with Purview, you finally do.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">3. Governed, Self-Service Data Access</h2>



<p><em>(Centralized and Compliant Access to Data)</em></p>



<p>Even with well-cataloged data, employees often face a major roadblock: getting access. Azure Purview streamlines this challenge with governed self-service access.</p>



<p>When a user discovers a dataset in the catalog, they can <strong>request access directly</strong> through the Purview portal. This request is automatically routed to the dataset’s owner or steward, who can approve or deny it with one click. The entire process is logged for transparency and compliance.</p>



<p>This approach replaces the inefficient back-and-forth of emails or IT tickets with a <strong>centralized, automated workflow</strong>. Users get faster access to the data they need for insights, while data owners maintain control and oversight.</p>



<p>Note that Azure Purview can only automatically grant read access in limited cases—currently mainly for certain Azure data sources such as Azure SQL Database. For most other systems, a data administrator or DBA must manually grant access outside of Purview (for example, within Oracle or SAP) and then return to Purview to mark the request as approved.</p>



<p>Governed access ensures that sensitive data is shared responsibly. Access requests can be tied to specific policies—for example, only Finance department members can access financial results. Purview enforces these rules automatically, maintaining compliance without slowing productivity.</p>
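<p>The request/approve/log flow described above can be sketched as a tiny workflow. Everything here is illustrative; in Purview the requests, approvals, and audit trail live in the governance portal, not in your own code:</p>

```python
from dataclasses import dataclass, field

# Illustrative access-request workflow: a policy gate (department check),
# an owner decision, and an audit log entry for every outcome.
@dataclass
class DataProductAccess:
    name: str
    allowed_departments: set                      # policy: who may be approved
    approved_users: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def request_access(self, user: str, department: str, owner_approves: bool) -> str:
        if department not in self.allowed_departments:
            outcome = "denied-by-policy"          # policy enforced automatically
        elif owner_approves:
            self.approved_users.add(user)
            outcome = "approved"
        else:
            outcome = "denied-by-owner"
        self.audit_log.append((user, outcome))    # every decision is logged
        return outcome

finance_results = DataProductAccess("Financial Results", {"Finance"})
print(finance_results.request_access("amy", "Finance", owner_approves=True))    # approved
print(finance_results.request_access("bob", "Marketing", owner_approves=True))  # denied-by-policy
```

<p>Note how the Marketing request never reaches the owner: the policy gate answers first, which is exactly what keeps self-service access compliant.</p>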



<p>In many organizations, this becomes a cultural shift: teams start to share more data because they can do so safely and traceably. The catalog becomes not just a repository, but a trusted marketplace for data assets, where users can browse, request, and use data confidently.</p>



<p>The business benefit is twofold: decision-making speeds up, and risk goes down. People can access what they need, when they need it, without bypassing governance controls.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">4. Data Quality and Health Monitoring</h2>



<p><em>(Ensuring Reliable, Actionable, and Governed Data)</em></p>



<p>High-quality data is the foundation of trustworthy analytics and sound decision-making. Azure Purview now includes powerful data quality and health monitoring capabilities that help organizations continuously measure, improve, and govern the quality of their data.</p>



<h3 class="wp-block-heading"><strong>Data Quality</strong></h3>



<p>With Purview’s new data quality model, organizations can identify and fix data quality issues using a no-code/low-code approach. Business users, data stewards, or domain owners can define rules at different levels—business domains, data products, or individual data assets—without writing complex code.</p>



<p>Purview also provides a growing library of out-of-the-box (OOB) rules that check for common problems such as duplicate rows, empty fields, and missing or non-unique values. <strong>Copilot</strong> can even suggest new rules automatically, helping teams establish data standards faster.</p>



<p>Once rules are configured, the data quality model evaluates your data against those rules and generates data quality scores at the asset, product, or domain level. These scores give you a quick snapshot of how your data measures up to the business rules you’ve defined.</p>
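<p>As a rough illustration of how rule results roll up into a score, the following sketch evaluates two rules across a few rows. The rules and the scoring formula are assumptions for illustration, not Purview's actual quality model:</p>

```python
# Hypothetical rows and rules -- Purview's no-code rules are configured in
# the portal; this just shows rule results rolling up into a 0-100 score.
rows = [
    {"customer_id": "C1", "email": "a@x.com"},
    {"customer_id": "C2", "email": ""},          # fails the empty-field rule
    {"customer_id": "C1", "email": "b@y.com"},   # duplicate customer_id
]

rules = {
    "email_not_empty": lambda r, all_rows: bool(r["email"]),
    "customer_id_unique": lambda r, all_rows:
        sum(1 for o in all_rows if o["customer_id"] == r["customer_id"]) == 1,
}

def quality_score(rows, rules):
    """Percentage of (row, rule) checks that pass."""
    checks = [rule(row, rows) for row in rows for rule in rules.values()]
    return round(100 * sum(checks) / len(checks))

print(quality_score(rows, rules))  # 50
```

<p>A score like this computed per asset, product, or domain is what gives the quick snapshot described above.</p>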



<p>Data profiling and data quality scans further enhance visibility:</p>



<ul class="wp-block-list">
<li><strong>Data profiling</strong> provides quick insights from a small sample set to spot potential issues early.</li>



<li><strong>Data quality scans</strong> perform in-depth analysis across full data sets to detect inconsistencies and anomalies.</li>
</ul>



<p>When problems are detected—such as a sudden drop in a quality score or unusual data patterns—Purview can generate data quality actions that highlight what needs attention. These actions can be assigned to specific people to resolve issues using tools like Azure Data Factory. Once the fix is complete, the issue can be marked as resolved, maintaining a full audit trail.</p>



<p>You can also configure data quality alerts that notify users when certain conditions are met—for example, when the data quality score for the Sales domain drops below 50%. Alerts appear within Purview and can also be sent by email, ensuring that data stewards can take quick corrective action.</p>



<p>Finally, Purview allows you to define data access policies at the business domain, data product, or glossary-term level. Any time a glossary term is applied to a data product, all its associated policies—such as access limits, required approvals, or permissions for data copies—are automatically enforced. This unifies governance and quality under one framework.</p>



<h3 class="wp-block-heading"><strong>Health Controls</strong></h3>



<p>Purview’s health controls track your organization’s overall progress toward complete data governance. These controls measure how well your data environment aligns with governance standards and provide a governance health score.</p>



<p>Examples of health controls include:</p>



<ul class="wp-block-list">
<li><strong>Metadata completeness</strong> – Are key fields documented?</li>



<li><strong>Cataloging</strong> – Are all major data sources registered?</li>



<li><strong>Classification</strong> – What percentage of assets are classified?</li>



<li><strong>Access entitlement</strong> – Who can access what data, and is it governed?</li>



<li><strong>Data quality</strong> – How does the data score against your rules?</li>
</ul>



<p>Data officers can configure thresholds that define red, yellow, or green indicators for each metric. For instance, you might set a target that 80% of data assets must be classified or that 90% should be mapped to data products for discoverability.</p>
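<p>The traffic-light logic is simple to picture. Here is a hedged sketch, with made-up thresholds standing in for the ones a data officer would actually configure:</p>

```python
# Hypothetical thresholds -- in Purview, data officers set these per control.
def health_status(pct_complete: float, green_at: float, yellow_at: float) -> str:
    """Map a control's completion percentage to a traffic-light indicator."""
    if pct_complete >= green_at:
        return "green"
    if pct_complete >= yellow_at:
        return "yellow"
    return "red"

# e.g. target: 80% of assets classified for green, 50% for yellow
print(health_status(86, green_at=80, yellow_at=50))  # green
print(health_status(62, green_at=80, yellow_at=50))  # yellow
print(health_status(31, green_at=80, yellow_at=50))  # red
```
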



<h3 class="wp-block-heading"><strong>Health Actions</strong></h3>



<p>Whenever Purview detects gaps or misalignments—such as unclassified assets or unmapped data products—it automatically creates health actions. These actions appear in a new Action Center that aggregates governance-related tasks by role, data product, or business domain.</p>



<p>Each action includes recommendations for how to fix the issue and can be assigned to an owner for resolution. Clicking an action provides direct guidance on how to bring the data asset or domain back into compliance. As teams complete these actions, the organization’s overall governance posture improves.</p>



<p>This approach turns governance into an interactive, team-based process. Instead of static reports, Purview provides a living dashboard of your data’s health—showing where to focus next and tracking progress over time. By cleaning up outstanding actions and maintaining high-quality data, your organization strengthens both trust and agility in its data-driven decision-making.</p>



<p>Note that data quality and health monitoring were among the new features added last year (see <a href="https://www.jamesserra.com/archive/2024/04/microsoft-purview-update/" title="">Microsoft Purview new data governance features</a>).</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">5. Business Glossary</h2>



<p><em>(Speaking a Common Data Language)</em></p>



<p>Many data problems start not with numbers, but with words. Departments often use different terms for the same concept, leading to confusion and misaligned reporting.</p>



<p>Azure Purview solves this with a Business Glossary—a central library of business terms and definitions. This glossary creates a shared vocabulary that connects business language to technical data assets.</p>



<p>For instance, the term <em>“Active Customer”</em> might be defined as “a customer with at least one purchase in the last 12 months.” That definition is stored in Purview and linked to the specific datasets and reports that use it. Everyone—from Finance to Marketing—can see and use the same definition.</p>



<p>This not only ensures consistent understanding but also prevents data disputes. When executives review dashboards or metrics, they know that terms like “Revenue” or “Churn Rate” mean the same thing across the organization.</p>



<p>Glossary terms can also be linked to governance policies. For example, a glossary term like “Personal Data” can automatically trigger stricter access controls or encryption requirements whenever it’s applied to a dataset.</p>
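<p>Conceptually, a glossary term that carries policies works like the sketch below. The term names come from the examples above; the policy identifiers are invented for illustration and are not Purview policy names:</p>

```python
# Illustrative only: glossary terms carrying governance policies that are
# inherited by any dataset the term is applied to.
GLOSSARY = {
    "Personal Data": {
        "definition": "Data identifying a natural person",
        "policies": ["restricted-access", "encryption-at-rest"],
    },
    "Active Customer": {
        "definition": "Customer with at least one purchase in the last 12 months",
        "policies": [],
    },
}

def effective_policies(dataset_terms):
    """Union of policies from every glossary term applied to a dataset."""
    return sorted({p for t in dataset_terms for p in GLOSSARY[t]["policies"]})

print(effective_policies(["Personal Data", "Active Customer"]))
# ['encryption-at-rest', 'restricted-access']
```

<p>Applying "Personal Data" to a dataset is all it takes; the stricter controls come along automatically.</p>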



<p>The result: clearer communication, fewer misunderstandings, and stronger alignment between business and IT.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">6. Business Domains and Data Products</h2>



<p><em>(Organizing Data for Business Context and Discoverability)</em></p>



<p>A major advancement in Azure Purview’s data governance framework is the introduction of <strong>business domains</strong> and <strong>data products</strong>—two concepts that bring structure, business meaning, and reusability to your data catalog. These features help organizations align their data estate with how the business actually operates.</p>



<h3 class="wp-block-heading"><strong>Business Domains</strong></h3>



<p>A business domain is a framework for organizing data around a common business purpose or capability, such as <em>Sales</em>, <em>Finance</em>, or <em>Marketing</em>. Think of it as a <em>mini catalog inside your main data catalog</em>—a logical boundary that aligns your data assets with your organizational structure.</p>



<p>Business domains make it easier to manage business concepts, assign ownership, and define governance rules. They differ from the technical domains used in the Data Map (known as collections), which group assets by project, technology, or ownership. However, business domains can be mapped to these collections, so assets tied to a business domain are automatically linked to the corresponding technical assets beneath it.</p>



<p>Within each business domain, you can:</p>



<ol class="wp-block-list">
<li><strong>Create and manage business domains</strong> to organize and curate your catalog.</li>



<li><strong>Assign owners and stewards</strong> responsible for data governance within that domain.</li>



<li><strong>Relate business domains</strong> to the underlying data collections in the Data Map.</li>



<li><strong>Create glossary terms</strong> for key business concepts—using Copilot to suggest relevant terms automatically.</li>



<li><strong>Monitor the health of your domains</strong>, taking timely actions to keep them well-governed.</li>



<li><strong>Define business objectives and key results (OKRs)</strong>—such as increasing sales by 10% or reducing support cases by 3%—and track progress directly within Purview.</li>



<li><strong>Define critical data elements (CDEs)</strong>, which logically group key pieces of information (for example, mapping “CustID” in one table and “CID” in another under a single “Customer ID” concept).</li>
</ol>



<p>Business domains provide a way to connect the technical world of data assets with the business world of strategy and operations. By aligning data governance structures with business functions, organizations can make data more understandable, more accessible, and more relevant.</p>



<h3 class="wp-block-heading"><strong>Data Products</strong></h3>



<p>A data product represents a curated group of data assets packaged together for a specific business use case or purpose. Data products are assigned to business domains and act as logical business concepts that make data easier to find and use.</p>



<p>Instead of users hunting across dozens of individual tables or files, a data product bundles them all into one logical unit. For example, a “Global Sales Revenue for 2023CY” data product could include tables, files, and Power BI reports related to sales performance. When users request access to that data product, they automatically get access (after approval) to all the associated assets—no more requesting permissions for 15 separate tables.</p>



<p>Data products streamline governance and improve efficiency:</p>



<ul class="wp-block-list">
<li><strong>Organization:</strong> Each data product belongs to one business domain but can be discovered across multiple domains.</li>



<li><strong>Discoverability:</strong> Data products are searchable using natural language. For instance, you can type, <em>“Show me daily retail sales data for the past six months,”</em> and Purview will surface relevant products.</li>



<li><strong>Context:</strong> Descriptions within data products include use cases, examples, and instructions for analysis.</li>



<li><strong>Ownership:</strong> Each data product has an owner—often a data scientist, analyst, or steward—responsible for maintaining its accuracy and usefulness.</li>



<li><strong>Governance:</strong> Access requests, approvals, and policies apply at the data product level, ensuring consistency and compliance.</li>
</ul>



<p>Data products also tie back to glossary terms and governance policies. For instance, if a glossary term labeled “Customer Data” includes specific access policies, applying that term to a data product automatically enforces those policies.</p>



<p>The hierarchy in Azure Purview now looks like this:<br>Business Domains → Data Products → Data Assets.</p>



<p>An example might be:<br>Sales (Business Domain) → Global Sales Revenue for 2023CY (Data Product) → Global Sales for 2023 Power BI Report (Data Asset).</p>
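<p>That hierarchy can be modeled in a few lines. This is an illustrative data model using the names from the example above, not a Purview API:</p>

```python
from dataclasses import dataclass, field

# Minimal model of Business Domain -> Data Product -> Data Asset.
@dataclass
class DataAsset:
    name: str

@dataclass
class DataProduct:
    name: str
    assets: list = field(default_factory=list)

@dataclass
class BusinessDomain:
    name: str
    products: list = field(default_factory=list)

sales = BusinessDomain("Sales", products=[
    DataProduct("Global Sales Revenue for 2023CY", assets=[
        DataAsset("Global Sales for 2023 Power BI Report"),
    ]),
])

# Approving a data-product request grants every asset beneath it:
granted = [a.name for p in sales.products for a in p.assets]
print(granted)
```

<p>Because access is granted at the product level, adding a second asset to the product extends the grant without any new request.</p>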



<p>This model gives users a clean, intuitive way to explore data. Instead of sifting through thousands of assets, they can browse or search by business domain, open a data product, and find everything they need in one place. On the Data Product Search page, users can explore data products, view their details, and track data access requests using the “My Data Access” tab—all from a single, business-centric interface.</p>



<h3 class="wp-block-heading"><strong>Why Business Domains and Data Products Matter</strong></h3>



<p>Together, business domains and data products transform Azure Purview from a technical catalog into a business-aligned data marketplace.</p>



<ul class="wp-block-list">
<li><strong>Business domains</strong> give your data structure and purpose.</li>



<li><strong>Data products</strong> make it consumable and actionable.</li>
</ul>



<p>This approach empowers teams to focus on business outcomes—like growing revenue or improving customer satisfaction—while ensuring the data behind those goals is well-organized, governed, and easy to find.</p>



<p>It’s the next step in making data governance not just a compliance exercise, but a true business enabler.</p>



<p>Note that business domains and data products were among the new features added last year (see <a href="https://www.jamesserra.com/archive/2024/04/microsoft-purview-update/" title="">Microsoft Purview new data governance features</a>).</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Microsoft Purview is far more than a simple data catalog—it’s a unified ecosystem for governing, securing, and understanding data across your entire organization. While <strong>M365 Purview</strong> focuses on <em>data security</em> (protecting sensitive information and preventing data loss) and <em>data compliance</em> (managing records, retention, and regulatory requirements), <strong>Azure Purview</strong> is all about <em>data governance</em>—helping you <em>discover, understand, organize, and trust</em> your data.</p>



<p>The <strong>data catalog</strong> prevents duplication and drives data discovery.<br>The <strong>classification engine</strong> identifies sensitive data, improving transparency and compliance readiness.<br>The <strong>governed access framework</strong> streamlines how users request and receive permission to use data while maintaining control.<br>The new <strong>data quality and health monitoring capabilities</strong> ensure that your data is accurate, complete, and reliable—empowering your teams to make decisions with confidence.<br>The <strong>business glossary</strong> aligns everyone around a consistent vocabulary, breaking down communication barriers between business and technical teams.<br>And the introduction of <strong>business domains and data products</strong> brings everything together—organizing data into meaningful business contexts and packaging it into reusable, governed assets that anyone in the organization can find and use.</p>



<p>Together, these components create a living, breathing governance framework that turns data chaos into clarity. Azure Purview provides not just visibility into your data, but also accountability, structure, and business meaning—all built on a foundation of automation and AI assistance.</p>



<p>In short:</p>



<ul class="wp-block-list">
<li><strong>Azure Purview</strong> helps you <em>govern and understand your data.</em></li>



<li><strong>M365 Purview</strong> helps you <em>protect and comply with your data.</em></li>
</ul>



<p>By combining these two sides of Purview, organizations can finally achieve what most only talk about: a complete, end-to-end data governance and protection strategy—one that’s modern, scalable, and aligned to how the business actually operates.</p>The post <a href="https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/">Microsoft Purview: The Key Benefits of Data Governance</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/10/microsoft-purview-the-key-benefits-of-data-governance/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20715</post-id>	</item>
		<item>
		<title>Announcements from the Microsoft Fabric Community Conference</title>
		<link>https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/</link>
					<comments>https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 24 Sep 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20682</guid>

					<description><![CDATA[<p>A bunch of new features for Microsoft Fabric were announced at the&#160;Microsoft Fabric Community Conference&#160;(FabCon Vienna) recently. Here are all the new features that I found most interesting, with some released now and others coming soon: More info: FabCon Vienna: <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/">Announcements from the Microsoft Fabric Community Conference</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>A bunch of new features for Microsoft Fabric were announced at the&nbsp;<a href="https://www.sharepointeurope.com/european-microsoft-fabric-community-conference/" title="">Microsoft Fabric Community Conference</a>&nbsp;(FabCon Vienna) recently. Here are the new features I found most interesting, with some released now and others coming soon:</p>



<ul class="wp-block-list">
<li>There is now <a href="https://aka.ms/OneLake-Shortcuts-Mirroring-FabConVienna" target="_blank" rel="noreferrer noopener">mirroring support for Oracle and Google BigQuery</a> (preview) in Fabric. See <a href="https://www.youtube.com/watch?v=vgi5yb7KlxY" title="">video</a> and more info on <a href="https://learn.microsoft.com/en-us/fabric/mirroring/google-bigquery" title="">BigQuery</a> mirroring and <a href="https://learn.microsoft.com/en-us/fabric/mirroring/oracle" title="">Oracle</a> mirroring</li>



<li>Fabric is now extending&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/data-science/concept-data-agent" target="_blank" rel="noreferrer noopener">Fabric data agents</a>&nbsp;to support all mirrored databases (preview)</li>



<li>Now in <a href="https://aka.ms/Shortcut-transformations-FabCon-Vienna" target="_blank" rel="noreferrer noopener">public preview are new OneLake shortcut transformations</a>&nbsp;that automatically convert JSON and Parquet files to Delta tables.  <a href="https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%E2%80%91based-ai-transformations/" title="">More info</a></li>



<li><a href="https://aka.ms/OneLake-Sec-PuPr" target="_blank" rel="noreferrer noopener">OneLake security is now in full preview</a>, and also available is a new tab in the <a href="https://aka.ms/Secure-Tab-PuPr" target="_blank" rel="noreferrer noopener">OneLake catalog called Secure</a>, where you can manage the security and permissions for all your data items (see <a href="https://www.youtube.com/watch?v=UiFm5AjKXHQ" title="">video</a>). <a href="https://learn.microsoft.com/en-us/fabric/onelake/security/data-access-control-model" title="">More info</a></li>



<li><a href="https://aka.ms/Fabric-Graph-Blog" target="_blank" rel="noreferrer noopener">Graph in Fabric</a>&nbsp;(preview) is designed to enable organizations to visualize and query relationships that drive business outcomes. Built upon the proven architecture principles of LinkedIn’s graph technology, graph in Fabric can help you reveal connections across customers, partners, and supply chains (see <a href="https://www.youtube.com/watch?v=TFrAAdRdyVc" title="">video</a>)</li>



<li><a href="https://aka.ms/Maps-Docs" target="_blank" rel="noreferrer noopener">Maps in Fabric</a> (preview) can help you bring geospatial context to your agents and operations by transforming enormous volumes of location-based data into interactive, real-time visualizations that drive location-aware decisions and enhance business awareness (see <a href="https://www.youtube.com/watch?v=zdZOrYR049E" title="">video</a>).  <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/map/create-map" title="">More info</a></li>



<li>Released the <a href="https://aka.ms/Fabric-Extensibility-Toolkit" target="_blank" rel="noreferrer noopener">Fabric Extensibility Toolkit</a> into preview—an evolution of the Microsoft Fabric Workload Development Kit, redesigned to help <em>any</em> developer bring their data apps to Fabric for their own organizations, with a simplified architecture and additional automation that drastically streamline development. Developers can now simply build their own Fabric items, and everything else—distribution, user interface, and security—is taken care of for them.  <a href="https://learn.microsoft.com/en-us/fabric/extensibility-toolkit/" title="">More info</a></li>



<li>Introduced the <a href="https://aka.ms/Fabric-MCP" target="_blank" rel="noreferrer noopener">preview of Fabric MCP</a>, a developer-focused Model Context Protocol that enables AI-assisted code generation and item authoring in Microsoft Fabric. Designed for agent-powered development and automation, it streamlines how you build using Fabric’s public APIs with built-in templates and best-practice instructions. It also integrates with tools like Microsoft Visual Studio Code and GitHub Codespaces and is fully open and extensible</li>



<li>Added horizontal tabs for open items, support for multiple active workspaces, and a new object explorer—all designed to make multitasking in Fabric smoother, faster, and more intuitive.  <a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=Monthly-update:category#post-28106-_Toc208595338" title="">More info</a></li>



<li>General availability of an end-to-end <a href="https://blog.fabric.microsoft.com/en-us/blog/migrating-to-fabric-data-warehouse-guide-now-available?ft=All" target="_blank" rel="noreferrer noopener">migration experience natively built into Fabric</a>, enabling Azure Synapse Analytics (data warehouse) customers to transition seamlessly to Microsoft Fabric. The migration experience allows you to migrate both metadata and data from Synapse Analytics and comes with an intelligent assessment, guided support, and AI-powered assistance to minimize the migration effort.  <a href="https://learn.microsoft.com/en-us/fabric/fundamentals/migration" title="">More info</a></li>



<li>You can now add, view and manage multiple lakehouses in a <a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595354" title="">single unified view</a></li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595359">Mirrored database support for Fabric Data Agent</a>: Enables users to directly connect mirrored database artifacts, including Azure Cosmos DB, Azure SQL, Oracle, Snowflake, Databricks, and other databases, using open mirroring. This integration allows users to leverage Data Agent’s Natural Language to SQL (NL2SQL) capability, so users can ask questions in plain English and receive LLM-powered insights across their data estate</li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595360">CI/CD support in Fabric Data Agent</a>: Now supports CI/CD, ALM flow, and Git integration, enhancing management, version control, and collaboration for Data Agent artifacts. These features promote reliable, scalable, and auditable development practices by enabling systematic management of changes, dedicated workspaces for development stages, and broad data source support. Git integration tracks all modifications, supports branching for independent experimentation, and enables controlled merging, improving teamwork and allowing quick reversion if issues arise.  <a href="https://learn.microsoft.com/en-us/fabric/data-factory/cicd-pipelines" title="">More info</a></li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595363">Discover which query examples influenced the Data Agent response</a>: When the Data Agent answers a question, it reviews the <a href="https://learn.microsoft.com/fabric/data-science/data-agent-configurations#data-source-example-queries" target="_blank" rel="noreferrer noopener">example queries you’ve provided</a> and uses them to guide its reasoning. With the latest update, creators can view exactly which example queries were used during a run-step to help shape the agent’s response</li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595364">Download Diagnostics in Data Agent</a>: Download a diagnostics file for any run step in the Data Agent chat canvas—giving you clear visibility into how the agent processed your question behind the scenes. The file includes details like which tools were used, how the question was interpreted, the intermediate reasoning steps, and any errors or fallback logic that occurred.  <a href="https://learn.microsoft.com/en-us/fabric/data-science/evaluate-data-agent" title="">More info</a></li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595368">MERGE Transact-SQL (Preview)</a>: This command blends INSERT, UPDATE, and DELETE operations all into a single statement based on your specified conditions between two tables, improving readability and providing a uniform standard for transformations across your ETL jobs.  <a href="https://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=fabric" title="">More info</a></li>



<li><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=All#post-28106-_Toc208595382">Anomaly detection in Real-time Intelligence (Preview)</a>: Quickly spot unusual patterns or behaviors in streaming data with no coding required. You select an Eventhouse, choose ID fields to monitor, and the system automatically tests models to recommend the best fit for your data. Detected anomalies appear directly in the interface, and you can experiment with models before publishing results to the Real-Time Hub. With instant alerts via Teams or email, this no-code tool empowers both analysts and business users to act fast on real-time insights</li>
</ul>
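<p>The MERGE preview called out above uses standard Transact-SQL semantics. As a minimal sketch (table and column names here are hypothetical, for illustration only), a single statement can upsert staged rows into a target and delete rows that no longer exist in the source:</p>
<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
-- Hypothetical tables: upsert staging rows into a dimension table
MERGE INTO dbo.DimCustomer AS target
USING dbo.StagingCustomer AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name,
               target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, City)
    VALUES (source.CustomerID, source.Name, source.City)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;   -- MERGE statements must end with a semicolon
</pre></div>
<p>Without MERGE, the same logic would take separate UPDATE, INSERT, and DELETE statements, each re-scanning the join between the two tables.</p>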



<p></p>



<p>More info:</p>



<p><a href="https://www.microsoft.com/en-us/microsoft-fabric/blog/2025/09/16/fabcon-vienna-build-data-rich-agents-on-an-enterprise-ready-foundation" title="">FabCon Vienna: Build data-rich agents on an enterprise-ready foundation</a></p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary?ft=Monthly-update:category" title="">Fabric September 2025 Feature Summary</a></p>



<p><a href="https://thedataengineroom.blogspot.com/2025/09/fabcon-vienna-2025-key-announcements.html" title="">Fabcon Vienna 2025 &#8211; Key announcements</a></p>



<p><a href="https://jocelynnhartwig.com/blog/fabcon-2025" title="">FabCon Europe 2025 Recap</a></p>The post <a href="https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/">Announcements from the Microsoft Fabric Community Conference</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/09/announcements-from-the-microsoft-fabric-community-conference-3/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20682</post-id>	</item>
		<item>
		<title>New AI capabilities in SQL Server 2025</title>
		<link>https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/</link>
					<comments>https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/#respond</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 09 Sep 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20610</guid>

					<description><![CDATA[<p>(Side note: Excited to share that the audio version of my book &#8220;Deciphering Data Architectures: Choosing Between a Modern Data Warehouse, Data Fabric, Data Lakehouse, and Data Mesh&#8221; is now accessible for those who prefer this format. The audio reader <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/">New AI capabilities in SQL Server 2025</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p><em>(Side note: Excited to share that the audio version of my book &#8220;Deciphering Data Architectures: Choosing Between a Modern Data Warehouse, Data Fabric, Data Lakehouse, and Data Mesh&#8221; is now accessible for those who prefer this format. The audio reader truly brings the content to life!  Order your copy <a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/B0F5RPDX9B" target="_blank" rel="noopener" title="">here </a>or from most other online bookstores.)</em></p>



<p>I’ve had a number of customers asking about the new AI features in SQL Server 2025, so I wanted to write a quick post summarizing what’s available today. In reality, there are two major capabilities worth highlighting: Copilot in SQL Server Management Studio (SSMS) and support for vector data types with semantic search.</p>



<p>Keep in mind that SQL Server 2025 is still in preview, with Release Candidate (RC0) recently announced. You can find the full list of updates in the official documentation: <a href="https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17" target="_blank" rel="noopener" title="">What&#8217;s new in SQL Server 2025 Preview</a>.</p>



<h3 class="wp-block-heading">Copilot in SQL Server Management Studio (SSMS)</h3>



<p>One of the most exciting new AI features in SQL Server 2025 is the introduction of <strong><a href="https://learn.microsoft.com/en-us/ssms/copilot/copilot-in-ssms-overview" target="_blank" rel="noopener" title="">Copilot in SSMS</a></strong> (requires SSMS 21). This capability allows you to interact with your data using natural language instead of writing T-SQL from scratch. With Copilot, DBAs and developers can quickly perform text and data analytics without needing to move data elsewhere.</p>



<p>Copilot in SSMS lets you:</p>



<ul class="wp-block-list">
<li>Ask natural-language questions about your database and environment</li>



<li>Get help writing and optimizing T-SQL</li>



<li>Explore and analyze data without knowing SQL in detail</li>
</ul>



<p></p>



<p>For example:<br>Type <em>“List all orders from the last 30 days for New York customers”</em> and Copilot will return both the T-SQL query:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
SELECT * 
FROM Orders 
WHERE OrderDate &gt; DATEADD(DAY, -30, GETDATE()) 
  AND City = 'New York';

</pre></div>


<p>and the corresponding results. This makes it easy to learn T-SQL, validate your intent, and accelerate development.</p>



<h5 class="wp-block-heading">Privacy, Security, and Responsible AI</h5>



<p>Copilot in SSMS is built on <a href="https://learn.microsoft.com/en-us/ssms/copilot/use-azure-openai-with-copilot-in-ssms" target="_blank" rel="noopener" title="">Azure OpenAI resources</a>, which you provision in your own subscription. Importantly, it does not retain prompts, responses, or system metadata, and your data is never used to train or retrain AI models. It follows Microsoft’s <a href="https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview" target="_blank" rel="noopener" title="">Responsible AI practices for Azure OpenAI models</a>, with more details available in the <a href="https://learn.microsoft.com/en-us/legal/sql/ssms/transparency-note-copilot" target="_blank" rel="noopener" title="">Transparency Note for Copilot in SQL Server Management Studio</a> and <a href="https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy" target="_blank" rel="noopener" title="">Data, privacy, and security for Azure OpenAI Service</a>.</p>



<h5 class="wp-block-heading">Supported Databases and Permissions</h5>



<p>Copilot works across:</p>



<ul class="wp-block-list">
<li>SQL Server</li>



<li>Azure SQL Database</li>



<li>Azure SQL Managed Instance</li>



<li>SQL Database in Microsoft Fabric</li>
</ul>



<p></p>



<p>It respects your database permissions. If your login doesn’t have access to a table, Copilot won’t be able to run queries against it. For example, if you don’t have permission to query <code>Sales.Orders</code>, a request to select from that table will fail.</p>
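<p>As a sketch of how this plays out (the login and table names below are hypothetical), an explicit DENY on the table blocks both hand-written queries and Copilot-generated ones, since Copilot runs under your own security context:</p>
<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
-- Hypothetical: deny SELECT on a table to a database user
DENY SELECT ON OBJECT::Sales.Orders TO AppReader;

-- If AppReader (or Copilot acting on their behalf) then runs:
--   SELECT * FROM Sales.Orders;
-- SQL Server raises permission error 229, e.g.:
--   The SELECT permission was denied on the object 'Orders',
--   database 'MyDb', schema 'Sales'.
</pre></div>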



<h5 class="wp-block-heading">Beyond Query Writing</h5>



<p>Copilot isn’t just for writing queries—it’s also useful for database and environment exploration. You can ask questions like:</p>



<ul class="wp-block-list">
<li>“What version of SQL Server is this instance running?”</li>



<li>“Which columns store email addresses?”</li>



<li>&#8220;How will changing compatibility mode affect query performance?&#8221;</li>



<li>“What queries have executed most frequently in the last two hours?”</li>



<li>“What’s the difference between a full and log backup?”</li>
</ul>



<p></p>



<p>It can even help with database development tasks, such as creating tables, indexes, or generating sample data.</p>



<h5 class="wp-block-heading">Getting Started</h5>



<p>To use Copilot in SSMS, you’ll need (see <a href="https://learn.microsoft.com/en-us/ssms/copilot/copilot-in-ssms-install" target="_blank" rel="noopener" title="">instructions</a>):</p>



<ol class="wp-block-list">
<li>An Azure OpenAI endpoint and deployment in your subscription (see <a href="https://learn.microsoft.com/en-us/ssms/copilot/use-azure-openai-with-copilot-in-ssms" target="_blank" rel="noopener" title="">create the necessary Azure OpenAI resources</a>). Note that the future SSMS 22 will use your GitHub Copilot account instead of an Azure OpenAI endpoint &#8211; see <a href="https://techcommunity.microsoft.com/blog/sqlserver/what%E2%80%99s-next-for-copilot-in-ssms/4451066" target="_blank" rel="noopener" title="">What’s next for Copilot in SSMS</a>.</li>



<li>The latest version of SQL Server Management Studio with Copilot enabled.</li>
</ol>



<p></p>



<p>Once configured, you’ll be able to start writing Language-Assisted Queries (LAQ)—natural language instructions that Copilot translates into valid T-SQL, returning both the code and results (like charts or tables).</p>



<p>More info on using Copilot in SSMS can be found <a href="https://learn.microsoft.com/en-us/ssms/copilot/copilot-in-ssms-chat" title="">here</a>.</p>



<h3 class="wp-block-heading">Vector Data Types with Semantic Search</h3>



<p>The second major AI capability in SQL Server 2025 is support for <a href="https://learn.microsoft.com/en-us/sql/t-sql/data-types/vector-data-type?view=sql-server-ver17&amp;tabs=csharp" target="_blank" rel="noopener" title="">vector data types</a> and <a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/vector-functions-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener" title="">functions</a>, enabling true <a href="https://en.wikipedia.org/wiki/Semantic_search" target="_blank" rel="noopener" title="">semantic search</a> directly inside the database. With this feature, you can store AI-generated <a href="https://en.wikipedia.org/wiki/Word_embedding" target="_blank" rel="noopener" title="">embeddings</a> as vectors, compare them using functions like <code><a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/vector-search-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener" title="">VECTOR_SEARCH()</a></code>, and generate embeddings inline with <a href="https://learn.microsoft.com/en-us/sql/t-sql/functions/ai-generate-embeddings-transact-sql?view=sql-server-ver17" target="_blank" rel="noopener" title=""><code>AI_GENERATE_EMBEDDINGS()</code></a>.</p>
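<p>At the simplest level, this is just distance math between stored vectors. A minimal sketch using the <code>VECTOR_DISTANCE()</code> function (the three-dimensional values here are illustrative; real embeddings typically have 1,536 or more dimensions):</p>
<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
-- Declare two small vectors and measure how close they are
DECLARE @v1 VECTOR(3) = '[1.0, 2.0, 3.0]';
DECLARE @v2 VECTOR(3) = '[1.0, 2.1, 2.9]';

-- Cosine distance: values near 0 mean the vectors point the same way
SELECT VECTOR_DISTANCE('cosine', @v1, @v2) AS CosineDistance;
</pre></div>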



<p>This unlocks use cases such as:</p>



<ul class="wp-block-list">
<li>Find products similar to this one</li>



<li>Show me documents related to this query</li>



<li>Surface support tickets about login issues</li>
</ul>



<p></p>



<p>Instead of matching just keywords, SQL Server can now search by meaning.</p>



<h5 class="wp-block-heading">How Semantic Search Works</h5>



<ol class="wp-block-list">
<li>Text → Embeddings<br>Text (like descriptions, reviews, or documents) is converted into embeddings — high-dimensional numeric vectors created by an AI model such as Azure OpenAI.</li>



<li>Store in a VECTOR Column<br>SQL Server introduces a new <code>VECTOR</code> column type to persist embeddings.</li>



<li>Similarity Search with VECTOR_SEARCH()<br>Queries compare embeddings based on semantic closeness, not just exact words.</li>
</ol>



<p></p>



<h5 class="wp-block-heading">Example: Finding Similar Products</h5>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
-- Step 1: Create OpenAI model
CREATE EXTERNAL MODEL MyOpenAI_Embedding_Model
WITH (
      LOCATION = 'https://my-azure-openai-endpoint.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15',
      API_FORMAT = 'Azure OpenAI',
      MODEL_TYPE = EMBEDDINGS,
      MODEL = 'text-embedding-ada-002'
);

-- Step 2: Create table with a VECTOR column
CREATE TABLE Products (
    ProductID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Description NVARCHAR(MAX),
    DescriptionEmbedding VECTOR(1536)  -- new in SQL Server 2025, stores AI-generated vector
);

-- Step 3: Create vector index (DiskANN)
CREATE VECTOR INDEX vec_idx_Products_DescriptionEmbedding
ON Products (DescriptionEmbedding)
WITH (
    METRIC = 'cosine',      -- can also use 'dot' or 'euclidean'
    TYPE = 'diskann',       -- algorithm type (only DiskANN supported in preview)
    MAXDOP = 4              -- optional, controls degree of parallelism
)
ON &#x5B;PRIMARY];               -- or your preferred filegroup

-- Step 4: Insert data with embeddings
INSERT INTO Products (ProductID, Name, Description, DescriptionEmbedding)
VALUES (
    1,
    'Wireless Earbuds',
    'Bluetooth earbuds with noise cancellation and 24-hour battery life.',
    AI_GENERATE_EMBEDDINGS('Bluetooth earbuds with noise cancellation and 24-hour battery life.' USE MODEL = MyOpenAI_Embedding_Model)
);

-- Step 5: Run a semantic search
DECLARE @queryVector VECTOR(1536) = 
    AI_GENERATE_EMBEDDINGS('Noise-cancelling wireless headphones' USE MODEL = MyOpenAI_Embedding_Model);

SELECT TOP 5 p.ProductID, p.Name, p.Description, v.distance AS Similarity
FROM VECTOR_SEARCH(
    TABLE = Products AS p,
    COLUMN = DescriptionEmbedding,
    SIMILAR_TO = @queryVector,
    TOP_N = 5,
    METRIC = 'cosine'
) AS v
ORDER BY v.distance;

</pre></div>


<p>This returns the top 5 products whose descriptions are semantically closest to “noise-cancelling wireless headphones” — even if those exact words never appear.</p>



<h5 class="wp-block-heading">Why It Matters</h5>



<p>Traditional Full-Text Search (FTS) matches words, stems, or phrases. That fails when:</p>



<ul class="wp-block-list">
<li>Synonyms are used (earbuds vs. headphones)</li>



<li>Users type casual language (can’t log in vs. authentication error)</li>



<li>Context or intent matters more than literal words</li>
</ul>



<p></p>



<p>With vector search, SQL Server can now match based on meaning.</p>



<p>Example:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
DECLARE @queryVector VECTOR(1536) = 
    AI_GENERATE_EMBEDDINGS('I can''t log into the portal' USE MODEL = MyOpenAI_Embedding_Model);

SELECT TOP 5 t.TicketID, t.Subject, t.Body, v.distance AS Distance
FROM VECTOR_SEARCH(
    TABLE = SupportTickets AS t,
    COLUMN = DescriptionEmbedding,
    SIMILAR_TO = @queryVector,
    TOP_N = 5,
    METRIC = 'cosine'
) AS v
ORDER BY v.distance;

</pre></div>


<p>This surfaces tickets about login problems — even if the phrase “log into the portal” never appears.</p>



<h5 class="wp-block-heading">SQL Server as a Vector Database</h5>



<p>SQL Server 2025 brings capabilities typically found in specialized vector databases like Pinecone, Weaviate, or Milvus, as well as in search services like Azure Cognitive Search (now called Azure AI Search), but with the advantage of being integrated into the same platform enterprises already use for structured data.</p>



<p>That means you can:</p>



<ul class="wp-block-list">
<li>Avoid managing a separate vector database</li>



<li>Add semantic search to catalogs, support tickets, or documents</li>



<li>Build Retrieval-Augmented Generation (RAG) systems with T-SQL and Azure OpenAI</li>
</ul>



<p></p>



<p>And since Copilot in SSMS works with vector search, you can simply type &#8220;Show me electronics that block outside sound&#8221; and SQL Server will translate that into a semantic search using vector comparison.</p>



<p>The bottom line: SQL Server 2025 turns your database into a vector-powered AI engine, enabling smarter, more relevant search and analytics without leaving T-SQL.</p>



<p>SQL Server 2025’s vector-powered AI engine can certainly enhance a simple <code>SELECT</code> statement by making it return more relevant text results. But its real value shines as part of a larger Generative AI (GenAI) architecture, where the database retrieves contextually relevant text for a user’s question and sends both to a large language model (LLM). This approach, known as Retrieval-Augmented Generation (RAG), improves accuracy by grounding the LLM’s response in your own data. I covered this pattern in more detail in my post <em><a href="https://www.jamesserra.com/archive/2024/03/introduction-to-openai-and-llms/" title="">Introduction to OpenAI and LLMs</a></em>.</p>The post <a href="https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/">New AI capabilities in SQL Server 2025</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/09/new-ai-capabilities-in-sql-server-2025/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20610</post-id>	</item>
		<item>
		<title>Azure IoT Operations</title>
		<link>https://www.jamesserra.com/archive/2025/08/azure-iot-operations/</link>
					<comments>https://www.jamesserra.com/archive/2025/08/azure-iot-operations/#respond</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 19 Aug 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Azure Arc]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20561</guid>

					<description><![CDATA[<p>I want to talk about a fairly new product that you may not be aware of: Azure IoT Operations, which GA&#8217;d last November (it was first announced at Ignite in November 2023). Here is an excellent short video showing it <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/08/azure-iot-operations/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/08/azure-iot-operations/">Azure IoT Operations</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>I want to talk about a fairly new product that you may not be aware of: <a href="https://azure.microsoft.com/en-us/products/iot-operations" title="">Azure IoT Operations</a>, which GA&#8217;d last November (it was first announced at Ignite in November 2023).  Here is an excellent short <a href="https://www.youtube.com/watch?v=cw9nhPFE_qE&amp;t=14s&amp;ab_channel=MicrosoftIoTDevelopers" title="">video</a> showing it in action (note: watching it may make you hungry).</p>



<p>Think of Azure IoT Operations as the control center for your digital operations at the edge. It sits directly alongside your machines and sensors—on the factory floor, in warehouses, or across distributed sites—and ensures that data from these environments flows smoothly into the cloud. Built on modern, scalable infrastructure, it connects operational systems with business applications, making it easier to unlock insights, improve efficiency, and respond quickly to changes in your environment.</p>



<p>The real impact comes from bringing together Information Technology (IT) and Operational Technology (OT). For years, IT has handled business data and applications while OT has run the physical equipment. Azure IoT Operations unifies the two. Imagine a factory where equipment performance data is captured in real time and immediately analyzed to predict failures before they happen. You can then use Microsoft Fabric to build real-time dashboards with visualizations that show the health of each asset and trigger alerts when anomalies are detected. These dashboards can be displayed right on the shop floor, giving operators the information they need to take immediate action and prevent costly downtime. The result is lower maintenance costs, higher productivity, and more consistent product quality. By securely connecting these systems to the cloud, companies can also optimize energy usage, reduce waste, and make faster, data-driven decisions. In short, Azure IoT Operations helps organizations modernize operations, cut costs, and gain the agility needed to compete in today’s digital-first economy.</p>



<p>Here is an Azure IoT Operations architecture overview:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="547" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?resize=1024%2C547&#038;ssl=1" alt="" class="wp-image-20563" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?resize=1024%2C547&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?resize=300%2C160&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?resize=768%2C410&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?resize=1536%2C821&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image.png?w=2027&amp;ssl=1 2027w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>At its core, Azure IoT Operations is a set of data services that run on Azure Arc–enabled edge Kubernetes clusters; you can deploy Azure IoT Operations by using the Azure portal or the Azure CLI. These services work together to make it easier to capture, process, and route data from assets on the shop floor to applications in the cloud.</p>



<ul class="wp-block-list">
<li>MQTT broker – An edge-native broker that enables event-driven architectures, allowing devices and applications to exchange messages reliably in real time.</li>



<li>Connector for OPC UA – Handles the complexity of OPC UA communication with servers and devices, making it easier to integrate industrial equipment into the data pipeline.</li>



<li>Data flows – Provide data transformation and contextualization capabilities. They let you enrich and reshape raw messages and then route them to a variety of destinations, including cloud endpoints.</li>



<li>Operations experience – A web-based UI that gives operational technology (OT) teams a unified view to manage assets, devices, and data flows. IT administrators can also use <a href="https://learn.microsoft.com/en-us/azure/azure-arc/site-manager/overview" title="">Azure Arc site manager (preview)</a> to group IoT Operations instances by physical location, making it easier for OT users to find and manage deployments.</li>
</ul>



<p></p>



<p>Together, these services create a consistent, secure, and flexible environment for IT and OT teams to collaborate, ensuring that edge data is not only captured but made meaningful and actionable.</p>



<p>One of the biggest challenges in IoT is balancing the needs of IT and OT. IT teams want control, governance, and security. OT teams—those who actually run the machines on the shop floor or in the field—want simple tools that let them keep operations running without opening tickets for IT every time a change is needed. This is exactly where the Azure IoT Operations experience comes in.</p>



<p>The experience is a web-based interface that provides a unified view into your IoT Operations environment. Instead of sifting through Kubernetes manifests or JSON configs, OT users can log into a simple portal and manage the things that matter to them: assets, data flows, and system health.</p>



<p>Through this experience, you can:</p>



<ul class="wp-block-list">
<li>Manage assets by defining machines, controllers, and sensors so they are represented in a way that makes sense for operations staff.</li>



<li>Configure data flows that determine how telemetry moves from devices, through the MQTT broker, into transformations, and eventually to cloud endpoints like Event Hubs or Fabric.</li>



<li>Apply transformations directly at the edge to contextualize data before sending it on—reducing both latency and bandwidth costs.</li>



<li>Monitor status and health with visibility into system throughput, errors, and connectivity, all in one place.</li>
</ul>



<p></p>



<p>Here is a sample of the overview page in the Azure IoT Operations experience:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="765" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?resize=1024%2C765&#038;ssl=1" alt="" class="wp-image-20589" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?resize=1024%2C765&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?resize=300%2C224&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?resize=768%2C573&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-3.png?w=1200&amp;ssl=1 1200w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>The real value is that it lets OT teams operate independently, while IT still has the governance and security controls they need in the background. IT administrators use Azure Arc to deploy and secure the underlying infrastructure, and they can group multiple IoT Operations instances by physical location using the Azure Arc site manager (preview). But once everything is in place, OT staff can work in the experience every day without needing deep technical expertise.</p>



<p>Here is a sample of building a data flow in the Azure IoT Operations experience:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="691" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?resize=1024%2C691&#038;ssl=1" alt="" class="wp-image-20577" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?resize=1024%2C691&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?resize=300%2C203&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?resize=768%2C518&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/08/image-1.png?w=1200&amp;ssl=1 1200w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>To connect to the cloud from Azure IoT Operations, you can use the following data flow destination endpoints:</p>



<ul class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/azure/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint">Azure Event Grid and other cloud-based MQTT brokers</a></li>



<li><a href="https://learn.microsoft.com/en-us/azure/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint">Azure Event Hubs or Kafka</a></li>



<li><a href="https://learn.microsoft.com/en-us/azure/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint">Azure Data Lake Storage</a></li>



<li><a href="https://learn.microsoft.com/en-us/azure/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint">Microsoft Fabric OneLake</a></li>



<li><a href="https://learn.microsoft.com/en-us/azure/iot-operations/connect-to-cloud/howto-configure-adx-endpoint">Azure Data Explorer</a></li>
</ul>



<p></p>



<p>Microsoft always supports three generally available (GA) versions of Azure IoT Operations at any one time: the latest version, and the two previous minor versions.</p>



<p>Currently, only two GA minor versions are available.&nbsp;<a href="https://azure.microsoft.com/support/plans">Azure support</a>&nbsp;is available for the following versions:</p>



<ul class="wp-block-list">
<li><a href="https://github.com/Azure/azure-iot-operations/releases/tag/v1.2.36">1.2.x</a> (latest preview version)</li>



<li><a href="https://github.com/Azure/azure-iot-operations/releases/tag/v1.1.59">1.1.x</a> (latest GA version)</li>



<li><a href="https://github.com/Azure/azure-iot-operations/releases/tag/v1.0.9">1.0.x</a> (previous minor GA version)</li>
</ul>



<p></p>



<p>Make sure to check out the <a href="https://learn.microsoft.com/en-us/azure/iot-operations/overview-iot-operations#supported-environments" title="">supported environments</a> and <a href="https://learn.microsoft.com/en-us/azure/iot-operations/overview-iot-operations#supported-regions" title="">supported regions</a>.</p>



<p>Note that Azure IoT Operations is different from a product I talked about in a previous blog post called <a href="https://www.jamesserra.com/archive/2022/03/azure-iot-central/" title="">Azure IoT Central</a>. When deciding between Azure IoT Operations and Azure IoT Central, it helps to understand that while both deal with connecting and managing IoT devices, they serve very different purposes. Azure IoT Operations is best thought of as a set of building blocks for creating industrial-strength IoT solutions, particularly in environments that require hybrid architectures and edge processing. It gives you full control over protocols like OPC UA, Modbus, and MQTT, and allows you to design the data flows, monitoring, and integration points exactly as your business needs. This flexibility makes it ideal for complex industrial scenarios where low latency and tight IT/OT integration are essential.</p>



<p>Azure IoT Central, on the other hand, is a fully managed SaaS offering. Instead of building from the ground up, you get a ready-made platform with device provisioning, templates, monitoring, and dashboards already included. It’s designed for speed and simplicity, letting you connect devices and gain insights with very little setup. The tradeoff is that you sacrifice some deep customization and architectural control. For most businesses that need a predictable, subscription-based model and quick time to value, IoT Central is a great fit.</p>



<p>The choice ultimately comes down to how much control and flexibility you need versus how quickly you want to deploy. If you are building a large-scale, industrial IoT platform that requires fine-grained control and edge-heavy workloads, Azure IoT Operations is the right tool. If your priority is getting started fast with minimal overhead, Azure IoT Central will meet your needs. Both approaches are valid—it’s about matching the tool to the problem you’re solving.</p>



<p>More info:</p>



<p><a href="https://www.youtube.com/watch?v=-PHjq_kO060&amp;ab_channel=MicrosoftEvents" title="">Accelerate industrial transformation with Azure IoT operations</a> (Ignite video)</p>



<p><a href="https://www.youtube.com/watch?v=b2QDEscJ4QM&amp;ab_channel=STYAVA" title="">Real time data ingestion with Azure IoT Operations and Microsoft Fabric CWB virtual meet up</a> (video)</p>



<p><a href="https://blogs.perficient.com/2025/05/27/azure-iot-operations-empowering-the-future-of-connectivity-and-automation/" title="">Azure IoT Operations: Empowering the Future of Connectivity and Automation</a></p>



<p><a href="https://blog.techimpulse.dev/posts/aio-1/" title="">Uncovering Azure IoT Operations: A New Era of Industrial IoT</a></p>The post <a href="https://www.jamesserra.com/archive/2025/08/azure-iot-operations/">Azure IoT Operations</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/08/azure-iot-operations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20561</post-id>	</item>
		<item>
		<title>Microsoft Fabric shortcut‑based AI transformations </title>
		<link>https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/</link>
					<comments>https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/#respond</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 23 Jul 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20490</guid>

					<description><![CDATA[<p>A new feature has just been released in Microsoft Fabric that I was so impressed with that I decided to blog about it. Available in public preview is the ability to create a shortcut to a raw text file and <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/">Microsoft Fabric shortcut‑based AI transformations </a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>A new feature has just been released in Microsoft Fabric that I was so impressed with that I decided to blog about it. Available in public preview is the ability to create a shortcut to a raw text file and pull the info every two minutes into a Delta Lake table. While pulling in the text, you can perform AI transformations. This removes the need for complex data‑integration pipelines and significantly reduces time to insight.</p>



<p>Here are the supported AI transformations:</p>



<ul class="wp-block-list">
<li><strong>Text Summary</strong>:  Generates concise summaries from long-form text.</li>



<li><strong>Translate</strong>: Translates text between supported languages.</li>



<li><strong>Sentiment</strong>: Labels text sentiment as positive, negative, or neutral.</li>



<li><strong>PII detection</strong>: Finds and redacts personally identifiable information (names, phone numbers, emails).</li>



<li><strong>Name recognition</strong>: Extracts named entities such as people, organizations, or locations.</li>
</ul>






<p>It&#8217;s very easy to set up: While in Fabric, go to a Lakehouse and under the Files section, select <strong>+ New Shortcut</strong> and create a connection that points to a <em>folder</em> of .txt files in ADLS, Blob Storage, S3, Dataverse, GCS, OneDrive (preview), SharePoint Folder (preview) or OneLake in another Fabric workspace. Initially only .txt files were supported (UPDATE 10/21/25: JSON and Parquet support is now <a href="https://blog.fabric.microsoft.com/en-us/blog/from-files-to-delta-tables-parquet-json-data-ingestion-simplified-with-shortcut-transformations/" title="">available</a>, with Excel coming in the future). Then in the wizard, select the <em>folder</em> and from <strong>Transform</strong>, select one of the five AI transformation options listed:</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image.png?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-20497" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image.png?w=1024&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image.png?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image.png?resize=768%2C432&amp;ssl=1 768w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>Once you finish creating the shortcut, it will take a few minutes for the initial transform to complete. Each table and folder selected will appear as a unique shortcut in the Files section, under which the transformed output will show up in Delta format. The columns in the transformed Delta file/table depend on the AI transform you chose. For example, if you selected Text Summary, the columns would be &#8220;text&#8221; and &#8220;TextSummary&#8221;:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="51" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5.png?resize=1024%2C51&#038;ssl=1" alt="" class="wp-image-20548" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?resize=1024%2C51&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?resize=300%2C15&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?resize=768%2C38&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?resize=1536%2C77&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?resize=2048%2C102&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-5-scaled.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>If you selected Translate, the resulting columns look like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="53" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=1024%2C53&#038;ssl=1" alt="" class="wp-image-20549" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=1024%2C53&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=300%2C16&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=768%2C40&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=1536%2C80&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?resize=2048%2C106&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-6.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>If you selected Sentiment, the resulting columns look like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="170" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1.png?resize=1024%2C170&#038;ssl=1" alt="" class="wp-image-20542" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?resize=1024%2C170&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?resize=300%2C50&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?resize=768%2C127&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?resize=1536%2C255&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?resize=2048%2C340&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-1-scaled.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>If you selected PII detection, it does not create a Delta table.  Instead, a .txt file is created with the PII info replaced with asterisks:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="687" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?resize=1024%2C687&#038;ssl=1" alt="" class="wp-image-20547" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?resize=1024%2C687&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?resize=300%2C201&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?resize=768%2C515&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?resize=1536%2C1031&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-4.png?w=1709&amp;ssl=1 1709w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>If you selected Name recognition, the resulting columns look like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="256" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7.png?resize=1024%2C256&#038;ssl=1" alt="" class="wp-image-20552" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?resize=1024%2C256&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?resize=300%2C75&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?resize=768%2C192&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?resize=1536%2C384&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?resize=2048%2C511&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-7-scaled.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>The engine checks the source folder every&nbsp;two minutes. New, modified, or deleted files are reflected in the Delta table. Use the resulting table in reports, notebooks, or downstream pipelines (if using it for reports, you would next need to use the Load to Tables option on the resulting folder where the Delta table is located and choose the Parquet file type to copy it to the Tables section). The status of the transformation can be observed under&nbsp;<strong>Manage shortcut</strong>:</p>



<figure class="wp-block-image size-full"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-3.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="975" height="773" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-3.png?resize=975%2C773&#038;ssl=1" alt="" class="wp-image-20545" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-3.png?w=975&amp;ssl=1 975w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-3.png?resize=300%2C238&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/07/image-3.png?resize=768%2C609&amp;ssl=1 768w" sizes="auto, (max-width: 975px) 100vw, 975px" /></a></figure>
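The refresh behavior described above amounts to diffing snapshots of the source folder on every polling pass. As a rough illustration only (the engine's internals aren't public, so this is just my sketch of the bookkeeping), a snapshot could map file names to last-modified times, and the diff decides which files need to be (re)transformed or removed:

```python
def diff_snapshots(previous, current):
    """Compare two {filename: modified_time} snapshots of a source folder.

    Returns the files that are new, modified, or deleted since the last
    poll. Illustrative only -- not the actual Fabric engine logic.
    """
    added = [f for f in current if f not in previous]
    modified = [f for f in current if f in previous and current[f] != previous[f]]
    deleted = [f for f in previous if f not in current]
    return added, modified, deleted

# Example: one new file, one edited file, one removed file since last poll
prev = {"a.txt": 100, "b.txt": 200, "c.txt": 300}
curr = {"a.txt": 100, "b.txt": 250, "d.txt": 400}
added, modified, deleted = diff_snapshots(prev, curr)
```

The same three buckets map onto what you see in the Delta output: added and modified files get (re)processed, deleted files have their rows removed.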



<p>Since a shortcut points to a folder, and you can select only one transform format for each folder, you will want to create a folder for each type of transform you will do, and put only the files for that particular transform in it. For example, create a folder called TranslateFiles and put all the files to translate text between languages in that folder, then create a shortcut that uses the <em>Translate</em> transform format. Create another folder called SentimentFiles and put all files to determine the sentiment labels in that folder, then create a shortcut that uses the <em>Sentiment</em> transform format. So if you use all five transform formats, you will wind up with five folders and five shortcuts.</p>
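Because the one-transform-per-folder rule is easy to trip over, if you script the landing of files into the source store it can help to centralize the mapping in one place. A small hypothetical helper (the folder names are my own convention, not a Fabric requirement):

```python
# Map each AI transform to its dedicated source folder, following the
# one-transform-per-folder layout. Folder names are illustrative only.
TRANSFORM_FOLDERS = {
    "TextSummary": "TextSummaryFiles",
    "Translate": "TranslateFiles",
    "Sentiment": "SentimentFiles",
    "PIIDetection": "PIIDetectionFiles",
    "NameRecognition": "NameRecognitionFiles",
}

def folder_for(transform: str) -> str:
    """Return the landing folder for a transform, failing fast on typos."""
    try:
        return TRANSFORM_FOLDERS[transform]
    except KeyError:
        raise ValueError(f"Unknown transform: {transform}") from None
```

A file-upload script would then call `folder_for("Translate")` to decide where a file lands, guaranteeing it ends up under the shortcut with the matching transform.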



<p>Here are some ideas for using this new feature:</p>



<ul class="wp-block-list">
<li><strong>Customer‑feedback sentiment dashboard</strong>: Point a shortcut at yesterday’s review exports, choose&nbsp;<em>Sentiment</em>, and publish a Power&nbsp;BI report that surfaces net sentiment by product, region and time period.</li>



<li><strong>Multilingual support intelligence</strong>: Route global support logs through&nbsp;<em>Translate</em>&nbsp;(to English) followed by optional&nbsp;<em>Text Summary</em> (do this by chaining two shortcuts together), giving managers a single, prioritized list of issues across every market.</li>



<li><strong>Privacy‑first LLM fine‑tuning</strong>: Apply&nbsp;<em>PII detection</em>&nbsp;to call center transcripts to create a compliant dataset that is immediately ready for model training.</li>



<li><strong>Real‑time market signals</strong>: Stream a newswire feed through&nbsp;<em>Name recognition</em>&nbsp;to tag companies, people and locations, enrich your knowledge graph and trigger alerts the moment a relevant story breaks.</li>
</ul>






<p>This new feature allows you to import unstructured data and semi-structured data into Fabric while transforming it on the fly, all without writing any code or building any pipelines!</p>



<p>Also note there is <a href="https://learn.microsoft.com/en-us/fabric/onelake/shortcuts-file-transformations/transformations" title="">another feature</a> if you need to create shortcuts to CSV files to import into a Delta Lake table.</p>



<p>More info:</p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/accelerating-insights-from-unstructured-text-with-ai-powered-onelake-shortcut-transformations/" title="">Accelerating Insights from Unstructured Text with AI Powered OneLake Shortcut Transformations</a></p>



<p><a href="https://learn.microsoft.com/en-us/fabric/onelake/shortcuts-ai-transformations/ai-transformations" title="">AI-powered transforms in OneLake shortcut transformations</a></p>



<p><a href="https://docs.azure.cn/en-us/ai-services/language-service/overview" title="">What is Azure AI Language?</a></p>The post <a href="https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/">Microsoft Fabric shortcut‑based AI transformations </a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/07/microsoft-fabric-shortcut%e2%80%91based-ai-transformations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20490</post-id>	</item>
		<item>
		<title>Becoming a Presenter: My Journey and Tips</title>
		<link>https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/</link>
					<comments>https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 17 Jun 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Presentation]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20463</guid>

					<description><![CDATA[<p>I still remember my very first presentation many years ago. Someone in my team asked me to demo a project I’d been working on, and I was absolutely terrified. Public speaking is famously a top fear for most people, and <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/">Becoming a Presenter: My Journey and Tips</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>I still remember my very first presentation many years ago. Someone in my team asked me to demo a project I’d been working on, and I was absolutely terrified. Public speaking is famously a top fear for most people, and I certainly felt that panic. But I also realized that getting good at presenting could really help my career. So I braced myself, took a deep breath, and did it. It was not nearly as hard as I was expecting, and once I started the presentation my nerves went away.  Over time I just kept presenting at more and more events, and eventually I got comfortable doing it. Today, presenting is a big part of my job and something I truly enjoy. If I can go from <em>scared to death</em> to loving it, you can too!</p>



<h2 class="wp-block-heading">Why Present?</h2>



<p>Why should you even think about presenting if it makes you nervous? Well, speaking in public has a bunch of benefits. I found that preparing a talk really forced me to learn my subject inside-out. I also built my personal brand and network – people started remembering me when I spoke on something I was passionate about. Presenting helped me become a better communicator overall (including at non-technical events like my speech at my daughter&#8217;s wedding), and it’s been critical to my career success. Plus, there’s something really fun about teaching others and seeing their eyes light up when they understand what I am presenting.</p>



<ul class="wp-block-list">
<li><strong>Passion:</strong> If there’s a topic you care about, why not share it? I always focus on areas that excite me, because that energy comes through in my talk.</li>



<li><strong>Learning:</strong> I learned early on that “teaching is learning twice.” By presenting, I forced myself to really understand the material.</li>



<li><strong>Career Boost:</strong> Presenting led to new opportunities for me. (Honestly, I went from an unknown to speaking at big conferences, and my income nearly tripled over a few years.) It can happen to you too with some persistence.</li>
</ul>



<h2 class="wp-block-heading">Finding Topics and Venues</h2>



<p>New presenters often ask: <em>Where do I even present?</em> The good news is, there are many places to start small and work your way up.</p>



<ul class="wp-block-list">
<li><strong>Start Local:</strong> I began with a quick “lunch and learn” at my office and my local user group/Meetup. These are low-pressure ways to share something with peers.</li>



<li><strong>Virtual Meetups:</strong> Online chapters and community streams are great too. Even during the pandemic, I did plenty of talks from my home office.</li>



<li><strong>Community Events:</strong> Look for events like SQLSaturday, local tech conferences, or hackathons. They often have speaker calls or welcome new speakers in “lightning” or “new speaker” tracks.</li>



<li><strong>Larger Conferences:</strong> Eventually I submitted abstracts to bigger conferences. They often encourage new speakers, so don’t be shy about trying.</li>
</ul>



<p>Keep a list of <em>every</em> idea you have for a talk. I use OneNote and even just a simple text file. It doesn’t have to be unique or groundbreaking – presenters often reuse and update topics. For each idea, jot down a working title and a few bullet points on what you’d cover. I even keep a running list of talks I’ve given (title, event, date) on my <a href="https://www.jamesserra.com/presentations/" title="">blog</a>, and upload my slides online after each talk. Seeing that list grow over time was really motivating for me.</p>



<h2 class="wp-block-heading">Writing Your Abstract</h2>



<p>Once you pick a topic and find a venue, you&#8217;ll usually need to write an abstract (a short description of your session) to submit to the conference in hopes of being selected. Here are a few things I do to improve my chances:</p>



<ul class="wp-block-list">
<li><strong>Hit a Pain Point or Hot Topic:</strong> Think about what problems people have right now. For example, if everyone at work is talking about a new tool or feature, that’s a good hook.</li>



<li><strong>Know Your Audience:</strong> Will it be beginners, mid-level, or experts? Tailor your abstract so the reader knows it’s just for them.</li>



<li><strong>Get Feedback:</strong> I often ask a colleague or a mentor to review my abstract. A fresh set of eyes can catch stuff I missed.</li>
</ul>



<p>I try to write the abstract <em>before</em> making slides. It forces me to outline the key points. In fact, I often create a simple outline or “story arc” for the talk first: intro, main points, conclusion. This makes it easier when I actually build the slides.</p>



<h2 class="wp-block-heading">Building the Presentation</h2>



<p>When I sit down to create slides, I follow a few personal rules to keep them effective and engaging:</p>



<ul class="wp-block-list">
<li><strong>One Idea Per Slide:</strong> I try to put a single main idea or image on each slide. This keeps the audience focused. I’ve learned that people only retain maybe 20-30% of what you say, so I try not to overload slides with text.</li>



<li><strong>Less Text, More Graphics:</strong> I try to avoid many words on slides. Instead, I use diagrams, charts, or photos that illustrate the point. A good picture can say more than a paragraph of text.</li>



<li><strong>Tell Stories:</strong> Whenever possible, I add a short story or real-world example from my own experience. Stories are memorable and they keep people awake!</li>



<li><strong>Outline First, Then Slides:</strong> I actually create a bullet-point outline of my talk before bothering with slide design. Once the outline looks solid, I flesh out each bullet into a slide (or two), making sure each has a clear purpose.</li>



<li><strong>Rehearse with Tools:</strong> I use the <a href="https://support.microsoft.com/en-us/office/rehearse-your-slide-show-with-speaker-coach-cd7fc941-5c3b-498c-a225-83ef3f64f07b?ns=POWERPNT&amp;version=90" title="">PowerPoint Speaker Coach</a> or <a href="https://support.microsoft.com/en-us/office/speaker-coach-in-microsoft-teams-meetings-30f50d15-5f62-4e09-b3bf-cadeb806386a" title="">Teams Speaker Coach</a> or other apps (even a simple timer app) to practice my pacing and get feedback. This helps avoid surprises, like running out of time or having too much content.  After you do many presentations, you will likely develop an innate sense of how long your deck will take to present.</li>
</ul>



<p>Above all, I imagine I’m explaining the topic to a smart friend who’s curious but does not know the topic well. This mindset keeps the talk balanced between informative and entertaining.</p>



<h2 class="wp-block-heading">Before I Step Up (Preparation)</h2>



<p>I’m a big believer in not leaving anything to chance before a talk. Here’s my pre-talk checklist:</p>



<ul class="wp-block-list">
<li><strong>Time It:</strong> I wear a watch or use my phone&#8217;s timer, so I can keep track of time without looking at the clock. I also ask the session chair or room monitor for a 10-minute warning. Many of the best clickers (like my <a href="https://www.amazon.com/Logitech-Professional-Presenter-Presentation-Wireless/dp/B002GHBUTU/" title="">Logitech R800</a>) even have a built-in timer or vibration alert.</li>



<li><strong>Backup Plan:</strong> I always bring a copy of my slides on a USB stick <em>and</em> have them on the cloud. If my laptop fails or I can’t get on the Internet, I can quickly pull my slides up. I also shut off any screen savers or auto-updates on my device to avoid interruptions.</li>



<li><strong>Pointer/Clicker:</strong> A wireless clicker with a laser pointer is a lifesaver. It frees me to walk around and not be glued to the keyboard.</li>



<li><strong>ZoomIt (or equivalent):</strong> For technical talks, I often need to highlight parts of the screen. I use <a href="https://learn.microsoft.com/en-us/sysinternals/downloads/zoomit" title="">ZoomIt </a>so I can draw on the screen during a demo.</li>



<li><strong>Health &amp; Dress:</strong> I have a small bottle of water onstage in case my mouth goes dry. I also make sure to wear something at least as professional as the audience – often just a step above what people expect.</li>



<li><strong>Arrive Early:</strong> At least 15 minutes early, if possible. This gives me time to test the projector/mic, make sure my slides look good on the big screen, and to calm my nerves.</li>
</ul>



<p>For virtual talks, I do extra prep too: I use an upgraded <a href="https://www.amazon.com/dp/B01N5UOYC4/" title="">webcam</a> and a powerful mic, and even join from a second device to see what the audience will see. I mute notifications and put my laptop in “Do Not Disturb” mode so I don’t get pop-up distractions. Little things like that help me appear professional and keep the focus on my content.</p>



<h2 class="wp-block-heading">When You&#8217;re Up There</h2>



<p>Once I’m in front of the audience (in-person or on camera), I follow a few “on stage” tips I’ve learned:</p>



<ul class="wp-block-list">
<li><strong>Eye Contact:</strong> I try to look at different people around the room for a few seconds each. This makes it feel more conversational.</li>



<li><strong>Energy and Passion:</strong> I consciously add energy to my voice. I vary my tone and volume a bit – louder on key points, and I pause for effect. I smile and show that I’m enjoying the topic.  I also try to add some humor along the way.</li>



<li><strong>Avoid Fillers:</strong> I watch out for filler words (“um”, “ah”, “you know”, &#8220;like&#8221;). Whenever I feel one coming, I just pause instead. A short silence is much better than a string of “ums”.</li>



<li><strong>Tell Personal Stories:</strong> If there’s a good story or memory related to the slide, I share it. This keeps things human and relatable.</li>



<li><strong>Engage the Audience:</strong> I like to ask a couple of quick, simple questions to the group. It wakes people up and lets me gauge interest. If someone asks a question, I repeat it for everyone and then answer.</li>



<li><strong>Watch the Clock:</strong> I keep an eye on my progress. If I see we’re running late, I might skip or speed up less essential parts.</li>



<li><strong>Stay Humble:</strong> I try to come off as someone friendly, not a know-it-all. If I don’t know an answer to a question, I’ll admit that and say I’ll look into it.</li>
</ul>



<p>Throughout the talk, I use my voice and hands to emphasize points. Walking around is fine, but I’ll usually pause and stand still when making a key statement. That helps people absorb it.</p>



<h2 class="wp-block-heading">After the Talk</h2>



<p>I’ve found the most rewarding part often comes after the talk—when people come up to share their thoughts or ask questions.</p>



<ul class="wp-block-list">
<li><strong>Q&amp;A:</strong> I love hanging around afterward to chat. I put my email on the last slide, and I tell people they can email me if they think of questions later.</li>



<li><strong>Feedback:</strong> If there’s a survey or feedback form, I encourage attendees to fill it out honestly. I read every comment later.</li>



<li><strong>Reflect:</strong> I jot down notes right after the session about what went well and what didn’t.</li>



<li><strong>Share Slides:</strong> I usually mention at the start or end where they can find my slides.</li>
</ul>



<p>And of course, <strong>celebrate</strong> a little. Every presentation I do, big or small, feels like a milestone. I give myself a quick mental high-five for facing the fear and getting through it.</p>



<h2 class="wp-block-heading">Final Thoughts</h2>



<p>If you’ve stuck with me this far, here’s my main message: <strong>Just do it.</strong> Seriously, sign up for a talk (even a tiny one) and go for it. The path to becoming a great presenter <em>does</em> start with that very first talk. You’ll learn more in that single presentation than weeks of reading about it. And you’ll have fun along the way.</p>



<p>So, schedule your first presentation soon. Write the abstract, make the slides, practice a bit, and get it on the calendar. Trust me, the anxiety you feel now will fade after a few minutes into the talk, and you’ll end up feeling proud and energized afterward. The first step is the hardest, but once you take it, you’ll never look back.</p>



<p>Go ahead – take that risk. You’ve got this!</p>The post <a href="https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/">Becoming a Presenter: My Journey and Tips</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/06/becoming-a-presenter-my-journey-and-tips/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20463</post-id>	</item>
		<item>
		<title>Microsoft Build announcements</title>
		<link>https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/</link>
					<comments>https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Thu, 22 May 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20394</guid>

					<description><![CDATA[<p>Once again there were a number of Microsoft Build announcements related to data and AI, and some were very impressive. Below are my favorites. Everything announced at Build can be found in the&#160;Microsoft Build 2025 Book of News. Here you <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/">Microsoft Build announcements</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>Once again there were a number of <a href="https://build.microsoft.com/en-US/home" title="">Microsoft Build</a> announcements related to data and AI, and some were very impressive.  Below are my favorites.</p>



<p>Everything announced at Build can be found in the&nbsp;<a href="https://news.microsoft.com/build-2025-book-of-news/" title="">Microsoft Build 2025 Book of News</a>.</p>



<p>Here you go:</p>



<p><strong>Cosmos DB in Fabric (preview)</strong>: First there was <a href="https://blog.fabric.microsoft.com/en-US/blog/announcing-sql-database-in-microsoft-fabric-public-preview" title="">SQL Database in Fabric</a>, and now Cosmos DB.  You can now store semi-structured NoSQL data in Cosmos DB in Fabric, alongside your relational data in SQL databases, enabling a unified data platform for your applications. This further positions Fabric as a complete data platform to handle all your organizational needs, from operational to analytics and BI. Check out this <a href="https://www.youtube.com/watch?v=whP2vDatcH8&amp;t=326s&amp;ab_channel=MicrosoftFabric" title="">video </a>to see it in action.  You can try&nbsp;<a href="https://aka.ms/fabric-cosmosdb" target="_blank" rel="noreferrer noopener">Cosmos DB in Fabric today</a>&nbsp;or learn more by reading the&nbsp;<a href="https://aka.ms/FabricCosmosDBNoSQLBlog" target="_blank" rel="noreferrer noopener">Cosmos DB in Fabric blog</a>.</p>



<p><strong>Digital twin builder in Fabric (preview)</strong>: A new capability designed to help organizations bridge their physical and digital worlds to create an AI-ready foundation for their operations. It simplifies the creation and management of data-driven digital twins at scale, enabling users to understand system-wide connections and unlock insights and operational efficiency.  <a href="https://blog.fabric.microsoft.com/en-us/blog/digital-twin-builder-in-microsoft-fabric-real-time-intelligence-revolutionizing-digital-twin-creation-and-management" title="">More info</a>.  Here is a <a href="https://www.youtube.com/watch?v=Jy-_g7OrVpo&amp;t=104s&amp;ab_channel=MicrosoftFabric" title="">video </a>demo.</p>



<p><strong>Microsoft 365 Copilot Tuning (available via Early Access Program)</strong>: This is huge!  Offers organizations a new way to unlock the value of fine-tuning without the cost and complexity of other solutions.&nbsp; Now, makers can use the low-code tooling in Microsoft Copilot Studio to take advantage of highly automated fine-tuning “recipes” that can use your enterprise data to train models to assist with domain-specific tasks.&nbsp; Once training is finished, agents,&nbsp;<a href="https://youtu.be/UeA91UF4di4" target="_blank" rel="noreferrer noopener">built&nbsp;in Agent Builder</a>&nbsp;with just a few clicks, can then tap into these fine-tuned, task-specific models and easily integrate them into the Microsoft 365 apps.  <a href="https://techcommunity.microsoft.com/blog/microsoft365copilotblog/introducing-microsoft-365-copilot-tuning/4414762" title="">More info</a> and <a href="https://www.youtube.com/watch?v=aDmNUqSGfeI" title="">video</a>.</p>



<p><strong>Copilot Studio multi-agent orchestration</strong>: Also huge!  Enables agents to exchange data, collaborate on tasks, and divide their work based on each agent’s expertise. For example, multiple agents can collaborate across HR, IT, and marketing to help onboard a new employee.  See this <a href="https://www.youtube.com/watch?v=AagTqh1ctXU" title="">Video</a> and <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/advanced-generative-actions" title="">more info</a>.</p>



<p><strong>Warehouse Snapshots in Microsoft Fabric (preview)</strong>: A new capability that provides a stable, read-only view of your data warehouse at a specific point in time. With Warehouse Snapshots, you can confidently support analytics, reporting, and historical analysis without worrying about the volatility of live data updates. <a href="https://blog.fabric.microsoft.com/en-us/blog/warehouse-snapshots-in-microsoft-fabric-public-preview?ft=All" title="">More info</a>.</p>



<p><strong>Materialized Lake Views in Fabric (preview soon)</strong>: Allows you to build declarative data pipelines using SQL, complete with built-in data quality rules and automatic monitoring of data transformations. In essence, an MLV is a persisted, continuously updated view of your data that simplifies how you implement multi-stage Lakehouse processing, commonly referred to as <a href="https://dataengineering.wiki/Concepts/Data+Architecture/Medallion+Architecture" target="_blank" rel="noreferrer noopener"><em>medallion architecture</em></a>.  <a href="https://blog.fabric.microsoft.com/en-us/blog/announcing-materialized-lake-views-at-build-2025?ft=All" title="">More info</a>.</p>
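<p>To give a feel for the declarative style, here is a sketch of what an MLV definition might look like &#8211; the syntax is based on the preview announcement and may change, and the schema, table, and constraint names here are hypothetical:</p>

```sql
-- Sketch: a silver-layer materialized lake view built from a bronze table,
-- with a data quality rule attached (preview syntax; names are hypothetical).
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.customers
(
    CONSTRAINT valid_email CHECK (email IS NOT NULL) ON MISMATCH DROP
)
AS
SELECT customer_id,
       TRIM(name)   AS name,
       LOWER(email) AS email
FROM   bronze.customers_raw;
```

<p>The idea is that the engine tracks the dependency from the bronze table to the silver view, refreshes it automatically, and applies the quality rule (here, dropping rows with a missing email) &#8211; so you declare the transformation rather than orchestrating it yourself.</p>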



<p><strong>Mirroring for SQL Server in Microsoft Fabric (preview)</strong>: For all in-market versions of SQL Server from&nbsp;SQL Server 2016&nbsp;to&nbsp;SQL Server 2025. <a href="https://blog.fabric.microsoft.com/en-us/blog/22820?ft=All" title="">More info</a>.</p>



<p><strong>Microsoft Fabric SKU estimator</strong>: Predict your capacity needs by entering details about how your team plans to use Fabric.  <a href="https://www.microsoft.com/en-us/microsoft-fabric/capacity-estimator" title="">Use it.</a></p>



<p><strong>Fabric roadmap tool</strong>: Brings the upcoming features all together in one place, with a cleaner interface, real-time updates, and direct integration with the internal planning tool used by the Fabric team. <a href="https://roadmap.fabric.microsoft.com" title="">See it.</a> </p>



<p><strong>Shortcut transformations (preview)</strong>: Introduces the ability to transform data as it’s shortcut into Fabric, including converting the data format into Delta tables or applying AI transformations to unstructured data—such as summarizing text, translating content, or classifying documents. <a href="https://aka.ms/ShortcutTransformations" title="">More info</a>.</p>



<p><strong>Fabric data agent integration with Microsoft Copilot Studio (preview)</strong>: In Fabric, you can create&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/data-science/concept-data-agent" target="_blank" rel="noreferrer noopener">data agents</a>—virtual business analysts that enable you to engage in natural language conversations about your data in OneLake. Building on this capability, Microsoft is bringing these data agents into Copilot Studio. Once you connect your Fabric data agent to your custom agent in Copilot Studio, it can be deployed across Microsoft Teams and Microsoft 365 Copilot to reason over complex datasets, get insights directly from data in OneLake (respecting data access permissions), and take action. Your agents can even automate tasks like sending emails or triggering workflows, making it easier for users to interact with enterprise data and take data-driven actions in context.&nbsp; Check out the <a href="https://www.youtube.com/watch?v=GCtk2HPxZi8" title="">video</a>.</p>



<p><strong>Fabric data agent with Copilot in Power BI (preview soon)</strong>: Previously, you may have had access to multiple resources, but finding the right data to answer specific questions could be challenging. Copilot was limited to the right pane of a single report, allowing questions only about that open report. The new standalone Copilot in Power BI addresses this by streamlining the process.&nbsp;This full-screen Copilot experience, easily accessible from the left navigation of Fabric, can help you find and ask questions about any data you have access to. It serves two purposes. First, you can ask Copilot to find Power BI reports, semantic models, apps (coming soon), and Fabric data agents that you have access to. It will match across many different factors to quickly find the most relevant items (if you already know which resource to use, you can manually add it to the Copilot session and interact with it directly for more relevant results). Second, it allows you to ask natural language questions and receive accurate, relevant answers from your available Fabric resources.</p>

<p>For example, Copilot’s summary capabilities let you quickly identify the most interesting data within your report. Reports can become quite complex, and you can easily spend 30 minutes to a couple of hours combing through all the details. Copilot can sift through the report for you, giving you easy-to-digest overviews of your data. Just ask Copilot to ‘Tell me about trends in sales’ or something more specific, like ‘What should I know about bike sales in Washington?’. You may also have a business question that cannot be answered by the existing report. Typically, you would have to work with an analyst to get a new visual added to the report, which takes time and delays your ability to make a data-driven decision &#8211; especially cumbersome if it’s not something you want to track long term. With the standalone Copilot experience, you can instead ask questions directly against a semantic model, allowing you to quickly find answers to ad hoc questions.  Finally, you can also ask questions against your Fabric data agents.&nbsp; Check out the <a href="https://www.youtube.com/watch?v=UvH4X9XoNAo" title="">video</a>, this <a href="https://blog.fabric.microsoft.com/en-us/blog/extracting-deeper-insights-with-fabric-data-agents-in-copilot-in-power-bi?ft=All" title="">blog post</a>, and this <a href="https://powerbi.microsoft.com/en-us/blog/the-next-era-of-copilot-in-power-bi-chat-with-your-data/" title="">announcement</a>.</p>



<p><strong>Prepare your data for AI</strong>: Power BI is introducing new capabilities to help you get your data ready for natural language experiences with Copilot. The first step to getting the best results from Power BI Copilot is always to have a semantic model that follows best practices. However, models often require more work to be fully prepared for optimal interactions with AI. Now available in Power BI Desktop are tooling features to help you prepare your data for AI (coming soon for the Power BI service). These features allow you to provide more context about your model, help guide Copilot to the right data in the model, and help increase the quality of Copilot output.  <a href="https://learn.microsoft.com/en-us/power-bi/create-reports/copilot-prepare-data-ai" title="">More info</a>.</p>



<p><strong>Dataflow Gen2 parameterization (preview)</strong>: Leveraging&nbsp;<a href="https://learn.microsoft.com/power-query/power-query-query-parameters">query parameters</a>&nbsp;while authoring Dataflows Gen2 has been possible for a long time; however, it was not possible to override the parameter values when refreshing the dataflow. The ability to&nbsp;<a href="https://community.fabric.microsoft.com/t5/Fabric-Ideas/Enable-to-pass-variables-from-pipelines-as-parameters-into-a/idi-p/4499148">pass values from a pipeline into a Dataflow parameter for refresh</a>&nbsp;has been one of the top ideas in the Fabric ideas portal since Dataflow Gen2 was released, and this preview delivers it.</p>



<p><strong>SQL Server 2025 (public preview)</strong>: Microsoft announced the public preview of SQL Server 2025 at Build, positioning it as the AI-ready enterprise database. This release brings native AI integration into the SQL engine, allowing developers to build intelligent applications using familiar T-SQL syntax. Key features include built-in vector search for semantic queries, enhanced support for JSON and regular expressions, and compatibility with popular AI frameworks like LangChain and Semantic Kernel. It also introduces an open-source Python driver and GitHub Copilot integration within SQL Server Management Studio (SSMS) 21. Security and performance improvements include support for Microsoft Entra managed identities, optimized locking, and intelligent query processing. SQL Server 2025 also connects seamlessly with Microsoft Fabric for real-time analytics and supports hybrid deployments via Azure Arc.  <a href="https://www.microsoft.com/en-us/sql-server/blog/2025/05/19/announcing-sql-server-2025-preview-the-ai-ready-enterprise-database-from-ground-to-cloud/" title="">More info</a>.</p>
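<p>As a taste of the native AI integration, here is a minimal T-SQL sketch of the new vector capabilities &#8211; preview syntax that may change, with hypothetical table and column names (a real embedding would have hundreds or thousands of dimensions, not three):</p>

```sql
-- Sketch: built-in vector storage and semantic search in SQL Server 2025
-- (preview syntax; table and column names are hypothetical).
CREATE TABLE dbo.Documents
(
    DocId     INT IDENTITY PRIMARY KEY,
    Content   NVARCHAR(MAX),
    Embedding VECTOR(3)   -- native vector type; 3 dimensions only for brevity
);

-- Rank documents by cosine distance to a query embedding
-- (the embedding itself would come from an AI model).
DECLARE @query VECTOR(3) = CAST('[0.10, 0.20, 0.30]' AS VECTOR(3));

SELECT TOP (5)
       DocId,
       VECTOR_DISTANCE('cosine', Embedding, @query) AS Distance
FROM   dbo.Documents
ORDER BY Distance;   -- smallest distance = most semantically similar
```

<p>The point is that vector search becomes an ordinary T-SQL query &#8211; no separate vector database needed &#8211; which is what makes the LangChain and Semantic Kernel integrations mentioned above practical.</p>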



<p>More info:</p>



<p><a href="https://azure.microsoft.com/en-us/blog/powering-the-next-ai-frontier-with-microsoft-fabric-and-the-azure-data-portfolio" title="">Powering the next AI frontier with Microsoft Fabric and the Azure data portfolio</a></p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/get-to-insights-faster-with-saas-databases-and-chat-with-your-data-experiences?ft=All" title="">Get to insights faster with SaaS databases and “chat with your data”</a></p>



<p><a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/05/19/introducing-microsoft-365-copilot-tuning-multi-agent-orchestration-and-more-from-microsoft-build-2025/" title="">Introducing Microsoft 365 Copilot Tuning, multi-agent orchestration, and more from Microsoft Build 2025</a></p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/fabric-may-2025-feature-summary?ft=All" title="">Fabric May 2025 Feature Summary</a></p>The post <a href="https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/">Microsoft Build announcements</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/05/microsoft-build-announcements/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20394</post-id>	</item>
		<item>
		<title>Deciphering Data Architectures: When to Use a Warehouse, Fabric, Lakehouse, or Mesh</title>
		<link>https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/</link>
					<comments>https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 06 May 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20333</guid>

					<description><![CDATA[<p>As discussed in my blog and book &#8220;Deciphering Data Architectures: Choosing Between a Modern Data Warehouse, Data Fabric, Data Lakehouse, and Data Mesh&#8221; (Amazon), organizations are often challenged with choosing the right data architecture to meet their business goals—especially as <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/">Deciphering Data Architectures: When to Use a Warehouse, Fabric, Lakehouse, or Mesh</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>As discussed in my <a href="https://www.jamesserra.com/" title="">blog</a> and book &#8220;Deciphering Data Architectures: Choosing Between a Modern Data Warehouse, Data Fabric, Data Lakehouse, and Data Mesh&#8221; (<a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/1098150767/" title="">Amazon</a>), organizations are often challenged with choosing the right data architecture to meet their business goals—especially as AI and data-driven decision-making take center stage. To help clarify, here’s a quick review of the four core architectures, followed by guidance on when to use each.  Each architecture includes five stages of data movement &#8211; ingest, store, transform, model, and visualize (described <a href="https://www.jamesserra.com/archive/2024/08/microsoft-fabric-reference-architecture/" title="">here</a>).</p>



<p><strong>Modern Data Warehouse (MDW)</strong><br>A Modern Data Warehouse architecture combines a data lake for storing raw, unstructured, and semi-structured data, with a relational data warehouse for serving structured and curated data to business users. This hybrid architecture offers the best of both worlds: the flexibility and scalability of a data lake with the governance, performance, and usability of a traditional data warehouse​. (<a href="https://www.jamesserra.com/archive/2021/04/modern-data-warehouse-explained/" title="">more info</a>).  Here is what the MDW architecture looks like at a high level:</p>



<figure class="wp-block-image size-large is-resized"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="188" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?resize=1024%2C188&#038;ssl=1" alt="" class="wp-image-20360" style="width:632px;height:auto" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?resize=1024%2C188&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?resize=300%2C55&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?resize=768%2C141&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?resize=1536%2C282&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-1.png?w=1654&amp;ssl=1 1654w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p><strong>Data Fabric</strong><br>Data Fabric is an evolved form of the Modern Data Warehouse, enriched with more technologies to support real-time processing, metadata catalogs, data virtualization, APIs, and governance tools. It creates a unified architecture that allows users to seamlessly access and manage distributed data across various platforms and formats, enhancing scalability, automation, and security​. (<a href="https://www.jamesserra.com/archive/2021/06/data-fabric-defined/" title="">more info</a>).  In purple are the data fabric features:</p>



<figure class="wp-block-image size-large is-resized"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="381" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?resize=1024%2C381&#038;ssl=1" alt="" class="wp-image-20361" style="width:629px;height:auto" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?resize=1024%2C381&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?resize=300%2C112&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?resize=768%2C286&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-2.png?w=1340&amp;ssl=1 1340w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p><strong>Data Lakehouse</strong><br>A Data Lakehouse attempts to merge the capabilities of data lakes and relational data warehouses into one platform, typically by using a transactional storage layer like Delta Lake, Apache Iceberg, or Apache Hudi on top of a data lake. It allows both raw data storage and structured querying in a single repository, enabling cost-efficient analytics and a simplified architecture without a separate relational data warehouse​. (<a href="https://www.jamesserra.com/archive/2023/03/using-a-data-lakehouse/" title="">more info</a>).  This is how Microsoft Fabric operates &#8211; data from lakehouses and warehouses is all stored in Delta Lake format in OneLake &#8211; there is no separate relational storage.  See <a href="https://www.jamesserra.com/archive/2024/08/microsoft-fabric-reference-architecture/" title="">Microsoft Fabric reference architecture</a>.  In the diagram below, Delta Lake is added and the RDW is removed.  Note that the model step is still included, as you want something like a semantic model as the interface to the data rather than a list of folders and files. </p>



<figure class="wp-block-image size-large is-resized"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="382" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?resize=1024%2C382&#038;ssl=1" alt="" class="wp-image-20362" style="width:627px;height:auto" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?resize=1024%2C382&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?resize=300%2C112&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?resize=768%2C286&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-3.png?w=1459&amp;ssl=1 1459w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p><strong>Data Mesh</strong><br>Unlike the Modern Data Warehouse, Data Fabric, and Data Lakehouse—which all rely on a centralized architecture where operational data is ingested into a central system—Data Mesh decentralizes data ownership, allowing each domain to retain its own operational data and take responsibility for building and managing its own analytical data. Rather than depending on a central IT team, domains treat data as a product and create their own analytics pipelines. Importantly, Data Mesh is a conceptual framework, not a technology, and each domain can choose to implement its analytical architecture using a Modern Data Warehouse, Data Fabric, or Data Lakehouse—whichever best fits their specific needs​.  (<a href="https://www.jamesserra.com/archive/2021/02/data-mesh/" title="">more info</a>)</p>



<p>The left side of this diagram shows the centralized approach used by the first three architectures, where operational data is copied to a central location owned by IT, which then creates the analytical data. In a data mesh, data is kept within several domains within a company, such as manufacturing, sales, and suppliers. Each domain has its own mini-IT team that takes its operational data, cleans it, and makes it available as analytical data that it owns, using its own compute and storage infrastructure.  This results in a <em>decentralized</em> architecture where data, people, and infrastructure are scaled out – the more domains you have, the more people and infrastructure you get. </p>



<figure class="wp-block-image size-large is-resized"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="305" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?resize=1024%2C305&#038;ssl=1" alt="" class="wp-image-20363" style="width:621px;height:auto" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?resize=1024%2C305&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?resize=300%2C89&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?resize=768%2C229&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?resize=1536%2C458&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image-4.png?w=1772&amp;ssl=1 1772w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>Here is a very high-level use case for each architecture (in ascending order of cost and complexity):  </p>



<p><strong>Modern Data Warehouse</strong><br>Ideal for organizations handling relatively small volumes of data (typically &lt;1TB), particularly those already familiar with relational data warehouses (RDWs). If your dataset is very small, you may even skip implementing a data lake. This architecture excels at structured reporting and business intelligence scenarios, offering well-established design patterns and a low barrier to adoption. However, its scalability for AI and real-time use cases is limited.</p>



<ul class="wp-block-list">
<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Best for: Traditional BI, reporting, and small-scale analytics</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Advantages: Easier adoption, well-known patterns, minimal learning curve</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Considerations: Limited scalability for AI; less suitable for large, diverse datasets</li>
</ul>






<p><strong>Data Fabric</strong><br>Suited for companies that must integrate and analyze a wide variety of data sources differing in size, speed, and format. It’s also appropriate when modernizing a legacy environment where a full rewrite (e.g., of many stored procedures) would be cost-prohibitive. Data Fabric supports real-time access, federated queries, and AI-driven use cases via a semantic layer. However, it demands strong data governance and integration discipline.</p>



<ul class="wp-block-list">
<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Best for: Real-time integration, federated access, and complex data landscapes</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Advantages: Unified access layer, real-time support, AI-ready semantic modeling</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Considerations: Complex to implement; requires robust governance framework</li>
</ul>






<p><strong>Data Lakehouse</strong><br>Best used as a flexible and cost-efficient solution for combining raw data storage and structured analytics in a single platform. It supports both AI and traditional BI, with transactional layers like Delta Lake, Apache Iceberg, or Apache Hudi providing additional functionality. A good rule of thumb: &#8220;Use it until you can’t.&#8221; When performance or governance needs outgrow the Lakehouse, offload specific datasets to an RDW as needed.</p>



<ul class="wp-block-list">
<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Best for: Unified analytics platforms, scalable AI workloads, and mixed data types</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Advantages: Strong AI support, balance of structure and flexibility, lower cost</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Considerations: Moderate organizational change; governance needs to be layered in</li>
</ul>






<p><strong>Data Mesh</strong><br>Designed for very large, domain-oriented enterprises experiencing major pain points with scalability and central IT bottlenecks. Data Mesh decentralizes control, giving each domain responsibility for managing its own data pipelines and analytics. Each domain can implement its analytical architecture using a Modern Data Warehouse, Data Fabric, or Data Lakehouse—whichever suits their needs. This approach fosters cross-domain AI scalability but requires a high degree of organizational maturity and a cultural shift toward treating data as a product.</p>



<ul class="wp-block-list">
<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Best for: Large enterprises with mature data practices and strong domain ownership</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/2705.png" alt="✅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Advantages: Scales AI across domains, reduces bottlenecks, promotes autonomy</li>



<li><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a0.png" alt="⚠" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Considerations: Long implementation timelines, requires cultural and process change</li>
</ul>






<p><strong><em>Most companies won’t adopt just one architecture. Instead, they’ll blend elements of several, depending on use cases, legacy systems, team capabilities, and AI goals. The key is aligning architecture with both your technical constraints and your organizational maturity.</em></strong></p>



<p>Here is a table from my <a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/1098150767/" title="">book</a> that compares all the architectures, along with RDW and data lake:</p>



<figure class="wp-block-image size-large is-resized"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="946" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?resize=1024%2C946&#038;ssl=1" alt="" class="wp-image-20353" style="width:645px;height:auto" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?resize=1024%2C946&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?resize=300%2C277&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?resize=768%2C709&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/05/image.png?w=1033&amp;ssl=1 1033w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>I have recorded a video describing and comparing all four architectures, which you can view <a href="https://www.youtube.com/watch?v=adGyMQnmc4k&amp;ab_channel=SanDiegoSQLServerUsersGroup" title="">here</a>.  If you want to learn more about these architectures and the concepts behind them, check out my <a href="https://www.amazon.com/Deciphering-Data-Architectures-Warehouse-Lakehouse/dp/1098150767/" title="">book</a>.</p>



<p>Also, on May 14th at 1:00pm ET, Christopher Samulski from Argano and I will be discussing this topic in a webinar, &#8220;Navigating the Human Elements to Modernize your Data for AI Transformation.&#8221; Learn how to build a well-organized, informed, and empowered data team. Plus, we&#8217;re raffling off 5 copies of my book to attendees! Register here: <a href="https://bit.ly/4cyoM46">https://bit.ly/4cyoM46</a>.</p>The post <a href="https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/">Deciphering Data Architectures: When to Use a Warehouse, Fabric, Lakehouse, or Mesh</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/05/deciphering-data-architectures-when-to-use-a-warehouse-fabric-lakehouse-or-mesh/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20333</post-id>	</item>
		<item>
		<title>Announcements from the Microsoft Fabric Community Conference</title>
		<link>https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/</link>
					<comments>https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 09 Apr 2025 15:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20297</guid>

					<description><![CDATA[<p>A ton of new features for Microsoft Fabric were announced at the&#160;Microsoft Fabric Community Conference recently. Here are all the new features that I found most interesting, with some released now and others coming soon: More info: FabCon 2025: Fueling <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/">Announcements from the Microsoft Fabric Community Conference</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>A ton of new features for Microsoft Fabric were announced at the&nbsp;<a href="https://www.fabricconf.com/">Microsoft Fabric Community Conference</a> recently. Here are all the new features that I found most interesting, with some released now and others coming soon:</p>



<ul class="wp-block-list">
<li>With <strong>OneLake security</strong> (preview), you define access once, and Fabric enforces it consistently across all engines (Power BI, Spark notebook, SQL analytics endpoint, Excel, OneLake file explorer, API calls). Data owners can create security roles, grant precise permissions, and control access at the row and column level—for example, restricting Personally Identifiable Information (PII) while keeping other data available. This security propagates automatically, ensuring that whether users query via SQL or build Power BI reports, they only see what they’re authorized to access. OneLake security replaces the existing OneLake data access roles preview feature. OneLake security will be available in public preview within a few months. In the meantime, if you are interested in trying OneLake security and providing feedback, please visit this early access <a href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_BIbobSVbtGoFFUDM3gfGJUNlBWWVpMNDU5NzY5U1NBQVFHOUJPNE5CNS4u" title="">sign-up page</a>.&nbsp; <a href="https://blog.fabric.microsoft.com/en-us/blog/the-next-evolution-of-onelake-security-enters-early-preview?ft=All" title="">More info</a></li>



<li>Copilot and AI capabilities will be enabled for all paid SKUs in Fabric, making these tools accessible to everyone within the coming weeks. With this latest update, customers on F2 and above can use Copilot and AI capabilities, such as Fabric data agents, to streamline workflows, generate insights, and drive impactful decisions.  <a href="https://blog.fabric.microsoft.com/en-us/blog/copilot-and-ai-capabilities-now-accessible-to-all-paid-skus-in-microsoft-fabric" title="">More info</a></li>



<li>Now available is a preview of a migration experience, called <strong>Migration Assistant</strong>, that is natively built into the Fabric UI, enabling Azure Synapse Analytics (data warehouse) customers to transition seamlessly to Microsoft Fabric. With a built-in, intelligent assessment, guided support, and AI-powered assistance, this experience simplifies migration of code and data while helping customers unlock Fabric’s unified data foundation, AI-driven analytics, and enhanced performance—without the complexity of traditional migrations.&nbsp; <a href="https://blog.fabric.microsoft.com/en-us/blog/public-preview-of-migration-assistant-for-fabric-data-warehouse?ft=All" title="">More info</a></li>



<li>Organizations can now use Azure AI Foundry to connect customized conversational agents created in Fabric. AI developers can use Azure AI Agent Service to securely ground AI agent outputs with enterprise knowledge in <em>Fabric data agents (formerly known as AI skills)</em>, so that responses are accurate, relevant, and contextually aware. By combining Fabric’s sophisticated data analysis over enterprise data with Azure AI Foundry’s cutting-edge GenAI technology, businesses can create custom conversational AI agents that leverage domain expertise.  <a href="https://blog.fabric.microsoft.com/en-us/blog/empowering-agentic-ai-by-integrating-fabric-with-azure-ai-foundry" title="">More info</a></li>



<li>The <strong>Variable library</strong> (preview) is a new item type in Microsoft Fabric that allows users to define and manage variables at the workspace level so they can be reused across workspace items. Data pipelines support them already, with notebooks, lakehouse shortcuts, and more coming soon.  <a href="https://learn.microsoft.com/en-us/fabric/cicd/variable-library/variable-library-overview" title="">More info</a></li>



<li>The preview of&nbsp;<strong>User Data Functions&nbsp;</strong>introduces a way for developers to implement and reuse custom business logic in Fabric data science and data engineering workflows, streamlining development and improving efficiency.  <a href="https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/user-data-functions-overview" title="">More info</a></li>



<li>The<strong>&nbsp;</strong>preview of&nbsp;<strong>Command Line Interface&nbsp;(CLI)</strong>&nbsp;in Fabric introduces a new terminal that allows users and admins to execute commands across Fabric using interactive prompts or scripts, enabling a seamless, code-first experience without relying on clicks.&nbsp; <a href="https://blog.fabric.microsoft.com/en-us/blog/introducing-the-fabric-cli-preview?ft=All" title="">More info</a></li>



<li>The general availability of&nbsp;<strong>Tags</strong>, which allow users to better describe the items they own and help enhance the organization and discoverability of data in Fabric.  <a href="https://learn.microsoft.com/en-us/fabric/governance/tags-overview" title="">More info</a></li>



<li>Several enhancements to<strong>&nbsp;</strong><a href="https://learn.microsoft.com/en-us/fabric/data-factory/create-first-dataflow-gen2" target="_blank" rel="noreferrer noopener"><strong>Dataflow Gen2</strong></a><strong>&nbsp;</strong>including the general availability of&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-incremental-refresh" target="_blank" rel="noreferrer noopener"><strong>Incremental Refresh</strong></a>, and the preview to&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-migrate-from-dataflow-gen1" target="_blank" rel="noreferrer noopener"><strong>save a Dataflow Gen1 as a Dataflow Gen2</strong></a>.&nbsp;</li>



<li><a href="https://learn.microsoft.com/en-us/fabric/database/mirrored-database/overview" target="_blank" rel="noreferrer noopener"><strong>Database Mirroring</strong></a>&nbsp;has been enhanced with key new enterprise capabilities, including a new source (Azure Database for PostgreSQL flexible server) and support for connecting to data sources over the&nbsp;<a href="https://learn.microsoft.com/en-us/data-integration/gateway/" target="_blank" rel="noreferrer noopener"><strong>On-premises and Virtual Network Data Gateways</strong></a><strong>.</strong>&nbsp;</li>



<li>The preview of key&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/release-plan/data-factory" target="_blank" rel="noreferrer noopener"><strong>orchestration enhancements</strong></a>&nbsp;is now available, enabling the creation of metadata-driven pipelines that orchestrate Dataflow Gen2 (CI/CD) parameterized invocation from Data Pipelines.</li>



<li>Read more about all the Data Factory enhancements in the&nbsp;<a href="https://aka.ms/FabConDataFactoryBlog" target="_blank" rel="noreferrer noopener">What’s new with Fabric Data Factory</a>.</li>



<li>For real-time intelligence (RTI), the preview of new&nbsp;<strong>eventstream connectors&nbsp;</strong>was announced, which allow users to bring in data from additional non-Microsoft sources, including Weather, Solace PubSub+, ADX Table Streamify, MQTT v5, Event Grid Namespaces, and Confluent with Schema Registry.&nbsp;  <a href="https://blog.fabric.microsoft.com/en-us/blog/unlock-the-power-of-real-time-intelligence-in-the-era-of-ai-why-fabric-real-time-intelligence-is-a-game-changer?ft=All" title="">More info</a></li>



<li>The preview of&nbsp;<a href="https://blog.fabric.microsoft.com/en-us/blog/introducing-autoscale-billing-for-data-engineering-in-microsoft-fabric?ft=All" target="_blank" rel="noreferrer noopener"><strong>Autoscale Billing for Spark</strong></a>&nbsp;helps optimize Spark job costs by offloading Data Engineering workloads to a serverless billing mode. Capacity admins can set a max capacity units (CUs) limit in capacity settings, ensuring Spark jobs use dedicated CUs instead of shared Fabric Capacity.&nbsp;</li>



<li>A new<strong>&nbsp;</strong><a href="https://aka.ms/enhanced-Copilot-in-Fabric-notebooks" target="_blank" rel="noreferrer noopener"><strong>Copilot experience in Fabric notebooks</strong></a>&nbsp;that improves productivity with in-cell interactions, better code generation, and seamless Fabric integration.&nbsp;</li>



<li>The preview of&nbsp;<strong><a href="https://blog.fabric.microsoft.com/en-us/blog/announcing-ai-functions-for-easy-llm-powered-data-enrichment?ft=All" target="_blank" rel="noreferrer noopener">AI functions</a></strong>&nbsp;provides powerful capabilities to apply LLM-powered transformations, such as summarization, classification, and text generation to your OneLake data—all with a single line of code.</li>



<li>A modern get-data experience with&nbsp;<a href="https://aka.ms/OneLake-Catalog-Excel" target="_blank" rel="noreferrer noopener"><strong>OneLake catalog integration in Microsoft Excel</strong></a>&nbsp;(<strong>in Office Insiders Fast)</strong>&nbsp;enables users to explore the OneLake catalog directly from Excel, expanding accessibility beyond the existing Microsoft Teams integration.</li>



<li>The preview of&nbsp;<a href="https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-power-bi-desktop" target="_blank" rel="noreferrer noopener"><strong>Direct Lake semantic models in Power BI</strong>&nbsp;<strong>desktop</strong></a><strong>,&nbsp;</strong>which allows users to build Power BI semantic models for lightning-fast reports that query data directly from OneLake without scheduling refreshes and without data duplication. This feature will also enable users to add in tables from multiple Fabric artifacts in the same Direct Lake semantic model for ultimate reusability of OneLake data.</li>



<li>Coming soon, the preview of&nbsp;<strong><a href="https://www.microsoft.com/en-us/security/blog/2025/03/31/new-innovations-in-microsoft-purview-for-protected-ai-ready-data/#:~:text=Microsoft%20Purview%20for%20Copilot%20in%20Fabric" target="_blank" rel="noreferrer noopener">Microsoft Purview for Copilot in Power BI</a></strong>. The integration will enable discovery of data risks such as sensitive data in user prompts and responses, protect sensitive data with Insider Risk Management to identify and investigate risky AI usage, and govern AI usage with audit, eDiscovery, retention policies, and non-compliant usage detection.&nbsp;</li>



<li><strong><a href="https://blog.fabric.microsoft.com/en-GB/blog/introducing-sql-audit-logs-for-fabric-datawarehouse/" title="">SQL audit logs</a></strong> are now in preview in Microsoft Fabric Data Warehouse! Audit logs provide a detailed record of warehouse activity, capturing essential information such as when events occur, who triggered them, and the T-SQL statement behind the event. This feature is crucial for security and compliance, helping organizations monitor access patterns, detect anomalies, and meet regulatory requirements.</li>



<li>Many new features are coming to help with data warehouse performance: result-set caching, self-managed performance for scale, intelligent workload management, custom SQL pools, data clustering, warehouse snapshots, proactive statistics refresh, and incremental statistics refresh.</li>



<li>New features are coming to shortcuts and mirroring: cross-network connectivity, table discovery, managing shortcut sessions, key value support, and mirroring sources behind a firewall.</li>
</ul>






<p>More info:</p>



<p><a href="https://www.microsoft.com/en-us/microsoft-fabric/blog/2025/03/31/fabcon-2025-fueling-tomorrows-ai-with-new-agentic-capabilities-and-security-innovations-in-fabric/" title="">FabCon 2025: Fueling tomorrow’s AI with new agentic capabilities and security innovations in Fabric</a>&nbsp;</p>



<p><a href="https://www.youtube.com/watch?v=G4SaK0XFyyA&amp;t=3s&amp;ab_channel=LevelUpYourData" title="">Microsoft Fabric Community Conference Recap – #FabCon 2025 Highlights!</a> (video)</p>



<p><a href="https://blog.fabric.microsoft.com/en-us/blog/fabric-march-2025-feature-summary/" title="">Fabric March 2025 Feature Summary</a></p>



<p><a href="https://www.microsoft.com/en-us/security/blog/2025/03/31/new-innovations-in-microsoft-purview-for-protected-ai-ready-data" title="">New innovations in Microsoft Purview for protected, AI-ready data</a></p>The post <a href="https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/">Announcements from the Microsoft Fabric Community Conference</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/04/announcements-from-the-microsoft-fabric-community-conference-2/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20297</post-id>	</item>
		<item>
		<title>Real-Time Intelligence in Microsoft Fabric</title>
		<link>https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/</link>
					<comments>https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 12 Mar 2025 14:00:00 +0000</pubDate>
				<category><![CDATA[Microsoft Fabric]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20209</guid>

					<description><![CDATA[<p>In today’s data-driven world, organizations need the ability to analyze and act on data as it flows in real time. Microsoft Fabric provides a powerful ecosystem for real-time intelligence, enabling businesses to process, store, analyze, and visualize data with minimal <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/">Real-Time Intelligence in Microsoft Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>In today’s data-driven world, organizations need the ability to analyze and act on data as it flows in real time. Microsoft Fabric provides a powerful ecosystem for real-time intelligence, enabling businesses to process, store, analyze, and visualize data with minimal latency. In this blog I will introduce you to the key components that make real-time intelligence possible in Microsoft Fabric, giving you a high-level understanding of how they work together.</p>



<p>Here is a diagram of the components used with Real-Time Intelligence (RTI) in Fabric, followed by a description of each:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="501" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?resize=1024%2C501&#038;ssl=1" alt="" class="wp-image-20218" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?resize=1024%2C501&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?resize=300%2C147&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?resize=768%2C376&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?resize=1536%2C752&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-2.png?w=2048&amp;ssl=1 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h3 class="wp-block-heading">Eventstream: Capturing and Processing Data in Motion</h3>



<p><a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-streams/overview?tabs=enhancedcapabilities" title="">Eventstream</a> is the foundational component for real-time data ingestion in Microsoft Fabric. It allows users to collect and process data from multiple sources, such as IoT devices, applications, and external event hubs. Eventstreams are event listeners that wait for messages to be sent to them (data is pushed). This is very unlike pipelines, notebooks, and other traditional data processing tools, which pull data from their sources.  Eventstream provides built-in connectors, transformation capabilities, and seamless integration with other Fabric components to ensure that data is formatted and routed efficiently. It supports ingestion from various real-time sources such as Azure Event Hubs, Apache Kafka, and Azure IoT Hub (listed on the left in the above diagram and described <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-streams/overview?tabs=enhancedcapabilities#bring-events-into-fabric" title="">here</a>). Additionally, it provides low-latency data streaming while enabling data transformation and enrichment before storage or analysis, such as aggregate, filter, and join (the full list of operations is <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-streams/route-events-based-on-content#supported-operations" title="">here</a>).  Eventstream supports sending data to the destinations listed on the bottom right of the above diagram and described <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-streams/overview?tabs=enhancedcapabilities#route-events-to-destinations" title="">here</a>.  Eventstream combines the functionality of Azure Event Hubs and Azure Stream Analytics.  It is a no-code environment with an authoring canvas that looks like this:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="344" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=1024%2C344&#038;ssl=1" alt="" class="wp-image-20265" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=1024%2C344&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=300%2C101&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=768%2C258&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=1536%2C516&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?resize=2048%2C688&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?w=2360&amp;ssl=1 2360w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-12.png?w=3540&amp;ssl=1 3540w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h3 class="wp-block-heading">Eventhouse: Storing Real-Time Data Efficiently</h3>



<p>Once real-time data is ingested, it needs a place to be stored and accessed quickly. <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/eventhouse" title="">Eventhouse</a> is a high-performance, scalable storage solution designed specifically for event-driven data. It provides structured storage optimized for real-time analytics, ensuring that data remains accessible for downstream processing. Eventhouse is optimized for large-scale event storage and retrieval, supports time-series and event-based analytics, and integrates seamlessly with other Fabric components for querying and reporting.  It can handle up to millions of events per hour.  Eventhouse has the functionality of Azure Data Explorer and contains <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/create-database" title="">KQL databases</a>, which allow you to query billions of rows in just a few seconds.  You can think of Eventhouse as simply managing a group of KQL databases.  An Eventhouse looks like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="521" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=1024%2C521&#038;ssl=1" alt="" class="wp-image-20250" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=1024%2C521&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=300%2C153&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=768%2C391&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=1536%2C782&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?resize=2048%2C1042&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-6.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>And a KQL database looks like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="429" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=1024%2C429&#038;ssl=1" alt="" class="wp-image-20262" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=1024%2C429&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=300%2C126&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=768%2C322&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=1536%2C643&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?resize=2048%2C857&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?w=2360&amp;ssl=1 2360w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-11.png?w=3540&amp;ssl=1 3540w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h3 class="wp-block-heading">Activator: Triggering Automated Actions</h3>



<p>Real-Time Intelligence is not just about monitoring data—it’s about taking action based on insights. <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/data-activator/activator-introduction" title="">Activator</a> enables automated workflows by triggering actions based on data patterns, thresholds, or anomalies detected in the event stream. It provides configurable event-driven triggers, integrates with Power Automate, Azure Functions, and other automation tools, and supports business logic and rules-based processing to automate responses efficiently.  In short, it is a rule-based engine that detects conditions in event streams and triggers responses: Eventstream captures data in motion, Activator monitors that data for conditions that warrant a response, and when a condition is met, an alert can notify users or an action can trigger an automated workflow to address the issue.  For example, you can create an alert so that when the number of available bikes falls below five, an action sends someone a Teams message:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="462" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=1024%2C462&#038;ssl=1" alt="" class="wp-image-20269" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=1024%2C462&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=300%2C135&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=768%2C346&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=1536%2C693&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?resize=2048%2C923&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?w=2360&amp;ssl=1 2360w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-14.png?w=3540&amp;ssl=1 3540w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>
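<p>For readers who think in queries, the condition in this no-code example corresponds to a simple filter over the fields arriving on the stream. A rough KQL sketch of the same rule (the table and column names are hypothetical) would be:</p>



<pre class="wp-block-code"><code>// Alert condition: any station with fewer than five available bikes
BikeStations
| where No_Bikes &lt; 5</code></pre>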



<h3 class="wp-block-heading">KQL Queryset: Analyzing Real-Time Data</h3>



<p>Kusto Query Language (KQL) is a powerful query language designed for fast and efficient data exploration. <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/create-query-set" title="">KQL Queryset</a> allows users to run real-time queries against event data stored in Eventhouse, enabling deep insights and pattern detection. It facilitates high-speed querying for real-time event data, supports aggregations, pattern matching, and anomaly detection, and enables data filtering and transformation for dashboards and reports.  In addition to KQL, users can also leverage T-SQL for querying structured data in real time, making it easier for those familiar with SQL-based analytics to perform real-time analysis within Microsoft Fabric.  The query workspace looks like this:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="492" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=1024%2C492&#038;ssl=1" alt="" class="wp-image-20252" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=1024%2C492&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=300%2C144&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=768%2C369&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=1536%2C738&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?resize=2048%2C984&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?w=2360&amp;ssl=1 2360w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-7.png?w=3540&amp;ssl=1 3540w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>
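<p>To give a flavor of the language, here is a small KQL query of the kind you might run in a queryset, aggregating recent events by hour (the table and column names are hypothetical):</p>



<pre class="wp-block-code"><code>// Count events per hour over the last day and order them chronologically
BikeEvents
| where Timestamp &gt; ago(1d)
| summarize EventCount = count() by bin(Timestamp, 1h)
| order by Timestamp asc</code></pre>



<p>The same pattern—filter, summarize, sort—covers a large share of everyday real-time exploration, and the results can feed directly into a Real-Time Dashboard tile.</p>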



<h3 class="wp-block-heading">Real-Time Dashboard: Visualizing Insights Instantly</h3>



<p>Real-time intelligence is only valuable if decision-makers can interpret the data quickly. Microsoft Fabric enables the creation of <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/dashboard-real-time-create" title="">Real-Time Dashboards</a>, providing live visualizations of streaming data without relying on external tools (Real-Time Dashboards do not use Power BI). These dashboards help organizations monitor KPIs, detect anomalies, and make data-driven decisions in real time. They offer live data visualizations with minimal latency, customizable dashboards with interactive filtering, and integration with event-driven alerts and automated actions.  If you are from the Power BI world, think of Real-Time Dashboards like DirectQuery, but without the need to load data into a semantic model.  It is the equivalent of Azure Data Explorer dashboards.  Here is what a Real-Time Dashboard looks like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="550" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=1024%2C550&#038;ssl=1" alt="" class="wp-image-20254" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=1024%2C550&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=300%2C161&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=768%2C413&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=1536%2C826&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?resize=2048%2C1101&amp;ssl=1 2048w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-8.png?w=2360&amp;ssl=1 2360w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h3 class="wp-block-heading">Real-Time Hub: The Centralized Control Panel</h3>



<p>The <a href="https://learn.microsoft.com/en-us/fabric/real-time-hub/real-time-hub-overview" title="">Real-Time Hub</a> serves as the central interface for managing all Real-Time Intelligence components in Microsoft Fabric. It allows users to configure, monitor, and optimize real-time data processing, ensuring smooth operation across various components. By consolidating management, monitoring, and alerting capabilities into one place, the Real-Time Hub enhances visibility and control over event-driven data, helping organizations make faster, more informed decisions. The Real-Time Hub looks like:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="526" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?resize=1024%2C526&#038;ssl=1" alt="" class="wp-image-20257" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?resize=1024%2C526&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?resize=300%2C154&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?resize=768%2C395&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?resize=1536%2C789&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-9.png?w=1892&amp;ssl=1 1892w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h2 class="wp-block-heading">Power BI and its role in Real-Time Intelligence</h2>



<p>Power BI is a comprehensive business intelligence platform that enables users to create reports and dashboards using a variety of data sources. While Power BI can display real-time data, it is not exclusively designed for real-time analytics. Instead, it supports multiple data connectivity modes, including batch processing, live querying, and streaming data ingestion.</p>



<p>A Real-Time Dashboard in Microsoft Fabric using RTI is specifically built for monitoring live data streams, ensuring near-instant updates as new information arrives. These dashboards are optimized for low-latency event tracking and are commonly used for operational monitoring, alerting, and tracking key performance indicators (KPIs) that change rapidly.</p>



<p>In contrast, creating a real-time dashboard in Power BI involves using streaming datasets or DirectQuery mode. Streaming datasets allow for continuous data updates, but they often lack the ability to perform advanced aggregations and transformations compared to RTI components in Microsoft Fabric. DirectQuery mode enables real-time data querying from sources like Eventhouse, but it can introduce some latency depending on query complexity and source performance.</p>



<p>The main differences between a real-time dashboard in RTI and Power BI include data latency, processing methods, and visualization capabilities. While Power BI is excellent for historical analysis, trend discovery, and batch reporting, a real-time dashboard in RTI is tailored for immediate event-driven insights with minimal delay. By leveraging both, organizations can achieve a balanced approach to real-time intelligence and long-term analytics.  An example Power BI report using Eventhouse data:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="807" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?resize=1024%2C807&#038;ssl=1" alt="" class="wp-image-20266" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?resize=1024%2C807&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?resize=300%2C236&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?resize=768%2C605&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?resize=1536%2C1210&amp;ssl=1 1536w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/03/image-13.png?w=1932&amp;ssl=1 1932w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<h2 class="wp-block-heading">Summary</h2>



<p>In short, here is how all the components work:</p>



<ol start="1" class="wp-block-list">
<li><strong>Eventstream</strong> captures and processes live data from various sources.</li>



<li><strong>Eventhouse</strong> stores the ingested data for quick access and analysis.</li>



<li><strong>Activator</strong> triggers automated responses based on pre-defined rules.</li>



<li><strong>KQL Queryset</strong> enables fast querying and analysis of real-time data.</li>



<li><strong>Real-Time Dashboards</strong> provide instant visibility into key metrics.</li>



<li><strong>Real-Time Hub</strong> acts as the centralized interface for managing all these components.</li>



<li><strong>Power BI</strong> generates historical reports from semantic models built from the stored data in Eventhouse.</li>
</ol>
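<p>The flow above can be sketched in plain Python. This is a conceptual simulation only, not Fabric APIs: the device names, temperature field, and the 30-degree alert threshold are all made up for illustration, and each function stands in for one RTI component:</p>

```python
eventhouse = []  # Eventhouse stand-in: stores the ingested events
alerts = []      # responses fired by the Activator stand-in

def eventstream_ingest(event):
    """Eventstream stand-in: capture a live event and land it in the Eventhouse."""
    eventhouse.append(event)
    activator_check(event)

def activator_check(event):
    """Activator stand-in: fire an automated response when a pre-defined rule matches."""
    if event["temp_c"] > 30:
        alerts.append(f"High temperature on {event['device']}: {event['temp_c']}C")

def kql_queryset_avg_temp(device):
    """KQL Queryset stand-in: aggregate over the stored events for a dashboard tile."""
    temps = [e["temp_c"] for e in eventhouse if e["device"] == device]
    return sum(temps) / len(temps)

eventstream_ingest({"device": "sensor-1", "temp_c": 21.0})
eventstream_ingest({"device": "sensor-1", "temp_c": 35.0})  # triggers the rule

print(kql_queryset_avg_temp("sensor-1"))  # 28.0
print(alerts)
```

<p>The point of the sketch is the division of labor: ingestion, storage, rule-driven action, and querying are separate concerns, which is exactly how the Fabric components split the work.</p>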






<p>You can create a logical copy of the data in a KQL database located in an eventhouse by turning on <strong>OneLake availability</strong>, which makes all the KQL database data available in OneLake in Delta Lake format. That means you can query the data in your KQL database using other Fabric engines such as Direct Lake mode in Power BI, Warehouse, Lakehouse, Notebooks, and more. OneLake availability is surfaced via Lakehouse shortcuts: go into a Lakehouse and create a shortcut to Microsoft OneLake, and you will see tables under the KQL DB if you have enabled &#8220;OneLake availability&#8221;.  If not, no tables will show up. More info at <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-house-onelake-availability">Eventhouse OneLake Availability &#8211; Microsoft Fabric | Microsoft Learn</a>.</p>



<p>Not only can RTI be used to generate reports/dashboards/queries on new data coming from real-time streaming sources such as Internet of Things (IoT) devices, but it can also be used to replace operational reporting on source systems and take some of the load off of those systems. For example, suppose an application uses an inventory database in Azure SQL Database, and Power BI reports run against that database to show up-to-the-second inventory counts. With RTI, the data in the Azure SQL Database can be copied into an eventhouse using eventstream via change data capture (CDC) within milliseconds of it being created, and the Power BI reports can instead be run against the eventhouse to get those inventory counts, reducing the compute load on the application system.  For more info on this, check out <a href="https://blog.fabric.microsoft.com/en-US/blog/operational-reporting-with-microsoft-fabric-real-time-intelligence/" title="">Operational Reporting with Microsoft Fabric Real-Time Intelligence</a>.</p>
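<p>The CDC pattern described above can be sketched in plain Python. This is conceptual only (in a real setup Eventstream's CDC connector does the capture, not custom code): changes to the operational source are recorded in a change feed and applied to a replica, and reports then read the replica instead of the source:</p>

```python
source_inventory = {}     # operational DB stand-in (e.g. Azure SQL Database)
change_feed = []          # CDC stream stand-in (Eventstream's role)
eventhouse_replica = {}   # reporting copy stand-in (Eventhouse's role)

def update_source(item, count):
    """Write to the operational system; the change is captured as it happens."""
    source_inventory[item] = count
    change_feed.append(("upsert", item, count))

def apply_changes():
    """Drain the change feed into the replica, in order."""
    while change_feed:
        op, item, count = change_feed.pop(0)
        if op == "upsert":
            eventhouse_replica[item] = count

update_source("widget", 120)
update_source("widget", 118)   # two sold
apply_changes()
print(eventhouse_replica["widget"])  # 118 -- reports read the replica, not the source
```

<p>The design point: the source system only pays the tiny cost of emitting changes, while all report queries land on the replica.</p>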



<p>Make sure to check out the Real-Time Intelligence <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/" title="">documentation</a> and also what is <a href="https://learn.microsoft.com/en-us/fabric/release-plan/real-time-intelligence" title="">new and planned</a>.  An excellent RTI tutorial where you&#8217;ll learn how to set up and use the main features of Real-Time Intelligence using a sample set of data can be found <a href="https://learn.microsoft.com/en-us/fabric/real-time-intelligence/tutorial-introduction" title="">here</a>.</p>



<p>More info:</p>



<p><a href="https://sandervandevelde.wordpress.com/2024/06/12/microsoft-fabric-rti-eventhouse-and-real-time-dashboards/" title="">Microsoft Fabric RTI: Eventhouse and Real-Time&nbsp;Dashboards</a></p>The post <a href="https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/">Real-Time Intelligence in Microsoft Fabric</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/03/real-time-intelligence-in-microsoft-fabric/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20209</post-id>	</item>
		<item>
		<title>Azure SQL offerings</title>
		<link>https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/</link>
					<comments>https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 16:00:00 +0000</pubDate>
				<category><![CDATA[Azure SQL Database]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20170</guid>

					<description><![CDATA[<p>There are three Azure SQL products with so many different deployment options, service tiers, and compute tiers that it can get quite confusing when choosing the right option for your workload. So, I thought I would write this blog to <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/">Azure SQL offerings</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>There are three Azure SQL products with so many different deployment options, service tiers, and compute tiers that it can get quite confusing when choosing the right option for your workload. So, I thought I would write this blog to help out a bit.</p>



<p>Azure SQL is a cloud-based suite of database services designed to offer flexibility, scalability, and ease of management. It comprises three main products, each catering to different deployment needs and compatibility requirements. Within each of the products are various deployment options, service tiers, and compute tiers:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-20172" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays.jpg?w=1280&amp;ssl=1 1280w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>



<p>Below I expand on the options from the above diagram:</p>



<h2 class="wp-block-heading">Azure SQL Products</h2>



<h3 class="wp-block-heading"><strong>1. SQL Server on a Virtual Machine (VM)</strong></h3>



<p>For organizations needing full control over their SQL Server environment, running SQL Server on an Azure VM is a great option. This Infrastructure-as-a-Service (IaaS) offering provides complete access to the operating system and database engine, enabling users to configure and manage SQL Server as they would on-premises. It is best suited for:</p>



<ul class="wp-block-list">
<li>Lift-and-shift migrations with minimal changes.</li>



<li>Applications requiring full SQL Server features.</li>



<li>Custom configurations and third-party integrations.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal?view=azuresql" title="">Provision SQL Server on Azure VM</a> and <a href="https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-vm-size?view=azuresql" title="">VM size</a>.</li>
</ul>



<h3 class="wp-block-heading"><strong>2. Azure SQL Database</strong></h3>



<p>Azure SQL Database is a fully managed, Platform-as-a-Service (PaaS) database solution that automates maintenance tasks like patching, backups, and scaling. It is ideal for modern cloud applications and is available in two <strong>deployment options</strong>:</p>



<h4 class="wp-block-heading"><strong>Single Database</strong></h4>



<ul class="wp-block-list">
<li>A dedicated, isolated database with predictable performance.</li>



<li>Best for applications that require resource guarantees at the database level.</li>



<li>Supports service tiers: General Purpose, Business Critical, and Hyperscale.</li>



<li>Can use the serverless compute tier (only in the General Purpose and Hyperscale tiers), allowing dynamic scaling of resources based on workload demand.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-overview?view=azuresql" title="">What is a single database in Azure SQL Database?</a></li>
</ul>



<h4 class="wp-block-heading"><strong>Elastic Pool</strong></h4>



<ul class="wp-block-list">
<li>Multiple databases share a set of resources, optimizing cost efficiency.</li>



<li>Best for SaaS applications with variable workloads across multiple databases.</li>



<li>Supports service tiers: General Purpose, Business Critical, and Hyperscale.</li>



<li>Does not support the serverless compute tier.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview?view=azuresql" title="">Elastic pools help you manage and scale multiple databases in Azure SQL Database</a>.</li>
</ul>



<h3 class="wp-block-heading"><strong>3. Azure SQL Managed Instance</strong></h3>



<p>Azure SQL Managed Instance is an instance-scoped deployment option that provides near 100% compatibility with SQL Server while delivering full PaaS benefits. This makes it ideal for organizations looking to modernize their database infrastructure with minimal friction. Key benefits include:</p>



<ul class="wp-block-list">
<li>Native support for SQL Server features like cross-database queries, linked servers, and SQL Agent.</li>



<li>Built-in high availability, automated maintenance, and security.</li>



<li>Supports service tiers: General Purpose and Business Critical.</li>



<li>Does not support the serverless compute tier.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview?view=azuresql" title="">What is Azure SQL Managed Instance?</a></li>
</ul>



<h2 class="wp-block-heading"><strong>Service Tiers in Azure SQL</strong></h2>



<p>Azure SQL Database and Managed Instance offer three service tiers to cater to different workload needs:</p>



<h3 class="wp-block-heading"><strong>1. General Purpose</strong></h3>



<ul class="wp-block-list">
<li>Balanced performance and cost-effective for most applications.</li>



<li>Available in both Azure SQL Database (single and elastic pool) and Azure SQL Managed Instance.</li>



<li>Supports the serverless compute tier (only for single database deployment).</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tiers-sql-database-vcore?view=azuresql" title="">vCore purchasing model &#8211; Azure SQL Database</a>.</li>
</ul>



<h3 class="wp-block-heading"><strong>2. Business Critical</strong></h3>



<ul class="wp-block-list">
<li>Designed for applications requiring high transaction rates and low-latency I/O performance.</li>



<li>Includes built-in high availability with multiple replicas.</li>



<li>Available in Azure SQL Database (single and elastic pool) and Azure SQL Managed Instance.</li>



<li>Does not support the serverless compute tier.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tiers-sql-database-vcore?view=azuresql" title="">vCore purchasing model &#8211; Azure SQL Database</a>.</li>
</ul>



<h3 class="wp-block-heading"><strong>3. Hyperscale</strong></h3>



<ul class="wp-block-list">
<li>Optimized for extremely large databases, supporting up to 128 TB.</li>



<li>Provides rapid scaling of compute and storage independently.</li>



<li>Available for Azure SQL Database (single database and elastic pool deployments).</li>



<li>Supports the serverless compute tier for single database deployment.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale?view=azuresql" title="">Hyperscale service tier</a></li>
</ul>



<h2 class="wp-block-heading"><strong>Serverless Compute Tier</strong></h2>



<p>The serverless compute tier is a cost-effective option that allows automatic scaling of compute resources based on demand. It is available in the General Purpose and Hyperscale service tiers for Azure SQL Database (single database deployment).</p>



<h3 class="wp-block-heading"><strong>Benefits of Serverless Compute Tier:</strong></h3>



<ul class="wp-block-list">
<li>Autoscaling of CPU and memory based on workload fluctuations.</li>



<li>Automatic pausing of the database during inactivity, reducing costs.</li>



<li>Best suited for intermittent, unpredictable workloads that do not require continuous availability.</li>



<li>See <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?view=azuresql&amp;tabs=general-purpose" title="">Serverless compute tier for Azure SQL Database</a>.</li>
</ul>
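<p>To make the billing shape concrete, here is a back-of-the-envelope sketch. The rates below are placeholders I made up, not Azure prices (check the Azure pricing page for real numbers); the point is the structure: compute is billed per vCore-second while the database is active, and only storage is billed while it is auto-paused:</p>

```python
def serverless_monthly_cost(active_hours, avg_vcores, storage_gb,
                            vcore_second_rate=0.000145,   # assumed $/vCore-second
                            storage_gb_rate=0.115):       # assumed $/GB-month
    """Rough serverless cost model: pay for compute only while active."""
    compute = active_hours * 3600 * avg_vcores * vcore_second_rate
    storage = storage_gb * storage_gb_rate
    return round(compute + storage, 2)

# A database busy ~2 hours/day at ~1 vCore, auto-paused the rest of the month:
print(serverless_monthly_cost(active_hours=60, avg_vcores=1.0, storage_gb=50))

# Fully paused all month: only storage is billed.
print(serverless_monthly_cost(active_hours=0, avg_vcores=1.0, storage_gb=50))
```

<p>This is why serverless suits intermittent workloads: a database that is idle most of the time pays close to storage-only cost, while a continuously busy database may be cheaper on the provisioned compute tier.</p>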



<h2 class="wp-block-heading"><strong>Choosing the right Azure SQL option</strong></h2>



<p>When selecting an Azure SQL offering, consider factors like workload requirements, compatibility, cost, and management overhead. Below is a quick reference guide:</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><th>Feature</th><th>SQL Server on VM</th><th>Azure SQL Database (Single)</th><th>Azure SQL Database (Elastic Pool)</th><th>Azure SQL Managed Instance</th></tr><tr><td>Fully Managed (PaaS)</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>Full SQL Server Compatibility</td><td>Yes</td><td>Partial</td><td>Partial</td><td>Nearly Full</td></tr><tr><td>Best for Lift-and-Shift</td><td>Yes</td><td>No</td><td>No</td><td>Yes</td></tr><tr><td>Best for SaaS Apps</td><td>No</td><td>No</td><td>Yes</td><td>No</td></tr><tr><td>Best for Modernization</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>Supports General Purpose</td><td>N/A</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>Supports Business Critical</td><td>N/A</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>Supports Hyperscale</td><td>N/A</td><td>Yes</td><td>Yes</td><td>No</td></tr><tr><td>Supports Serverless Compute</td><td>N/A</td><td>Yes (General Purpose, Hyperscale)</td><td>No</td><td>No</td></tr></tbody></table></figure>
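<p>As a quick sanity check, the table can be encoded as a small decision helper. This is a deliberate simplification of the table for illustration, not official guidance, and the rule order reflects one reasonable prioritization of the criteria:</p>

```python
def choose_azure_sql(needs_os_access=False, needs_full_compat=False,
                     saas_with_many_dbs=False, very_large_db=False):
    """Pick an Azure SQL option from a few coarse workload traits."""
    if needs_os_access:
        # Only IaaS gives you the operating system and full engine control.
        return "SQL Server on VM"
    if needs_full_compat:
        # Near-100% SQL Server compatibility with PaaS benefits.
        return "Azure SQL Managed Instance"
    if very_large_db:
        # Hyperscale scales storage up to 128 TB, independently of compute.
        return "Azure SQL Database (Hyperscale)"
    if saas_with_many_dbs:
        # Many databases sharing one pool of resources.
        return "Azure SQL Database (Elastic Pool)"
    return "Azure SQL Database (Single)"

print(choose_azure_sql(needs_full_compat=True))
print(choose_azure_sql(saas_with_many_dbs=True))
```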



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>Azure SQL offers a diverse set of solutions to accommodate different business and technical requirements. Whether you need full control with SQL Server on a VM, a fully managed single database, a cost-effective elastic pool, or near full SQL Server compatibility with Managed Instance, Azure SQL provides the flexibility to optimize cost, performance, and scalability. Understanding these options and service tiers will help you make an informed decision tailored to your specific workload needs.</p>



<p>This diagram may help with choosing the right option for your use case:</p>



<figure class="wp-block-image size-large"><a href="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?ssl=1"><img data-recalc-dims="1" loading="lazy" decoding="async" width="1024" height="576" src="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?resize=1024%2C576&#038;ssl=1" alt="" class="wp-image-20174" srcset="https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?resize=1024%2C576&amp;ssl=1 1024w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?resize=300%2C169&amp;ssl=1 300w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?resize=768%2C432&amp;ssl=1 768w, https://i0.wp.com/www.jamesserra.com/wp-content/uploads/2025/02/Moving-to-Azure-SQL-from-VM-based-SQL-Azure-Tuesdays2.jpg?w=1280&amp;ssl=1 1280w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></figure>The post <a href="https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/">Azure SQL offerings</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/02/azure-sql-offerings/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20170</post-id>	</item>
		<item>
		<title>Cool AI sites</title>
		<link>https://www.jamesserra.com/archive/2025/01/cool-ai-sites/</link>
					<comments>https://www.jamesserra.com/archive/2025/01/cool-ai-sites/#comments</comments>
		
		<dc:creator><![CDATA[James Serra]]></dc:creator>
		<pubDate>Wed, 29 Jan 2025 16:00:00 +0000</pubDate>
				<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[SQLServerPedia Syndication]]></category>
		<guid isPermaLink="false">https://www.jamesserra.com/?p=20100</guid>

					<description><![CDATA[<p>As I researched and wrote my OpenAI and LLMs blogs (see Introduction to OpenAI and LLMs, Introduction to OpenAI and LLMs – Part 2 and Introduction to OpenAI and LLMs – Part 3, along with a presentation on that topic <span class="excerpt-dots">&#8230;</span> <a class="more-link" href="https://www.jamesserra.com/archive/2025/01/cool-ai-sites/"><span class="more-msg">Continue reading &#8594;</span></a></p>
The post <a href="https://www.jamesserra.com/archive/2025/01/cool-ai-sites/">Cool AI sites</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></description>
										<content:encoded><![CDATA[<p>As I researched and wrote my OpenAI and LLMs blogs (see <a href="https://www.jamesserra.com/archive/2024/03/introduction-to-openai-and-llms/">Introduction to OpenAI and LLMs</a>, <a href="https://www.jamesserra.com/archive/2024/06/introduction-to-openai-and-llms-part-2/">Introduction to OpenAI and LLMs – Part 2</a> and <a href="https://www.jamesserra.com/archive/2025/01/introduction-to-openai-and-llms-part-3/" title="">Introduction to OpenAI and LLMs – Part 3</a>, along with a presentation on that topic that I did for the Toronto Data Professional Community, which you can&nbsp;<a href="https://www.youtube.com/watch?v=kXwtb0oSup0">view</a>&nbsp;and whose&nbsp;<a href="https://torontodpc.ca/resources/70">slides</a> you can download), I found and played with many fascinating AI products and features. I am continually amazed at the progress we are making with AI, especially in the GenAI world, and feel like we are just getting started.  Here are my favorites:</p>



<p><a href="https://www.heygen.com/" title="">HeyGen </a>&#8211; Speak to an avatar live.  Check out their<a href="https://labs.heygen.com/guest/interactive-avatar?tab=public" title=""> demos</a> where you can interact with avatars such as a therapist, fitness coach, and doctor.  They keep adding new ones.  Amazing stuff!</p>



<p><a href="https://openai.com/index/sora/">OpenAI Sora</a> – Create video from text.  My favorite is <a href="https://cdn.openai.com/sora/videos/gold-rush.mp4" title="">Historical footage of California during the gold rush</a>.<br><br><a href="https://openai.com/index/introducing-the-gpt-store/">GPT Store</a> &#8211; Discover and create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.  My favorites are <a href="https://chatgpt.com/g/g-z77yDe7Vu-books" title="">Books</a>, <a href="https://chatgpt.com/g/g-iEHLy1NSw-movies-gpt" title="">Movies</a>, and <a href="https://chatgpt.com/g/g-FGhasb1tZ-therapist-psychologist-fictional-not-real-therapy" title="">Therapist/Psychologist</a>.</p>



<p><a href="https://help.openai.com/en/articles/8400625-voice-mode-faq">ChatGPT advanced voice mode</a> – Have spoken interactive conversations with ChatGPT, where you can <a href="https://nerdschalk.com/how-to-share-screen-with-chatgpt-advanced-voice-mode/">screen share</a> and also <a href="https://nerdschalk.com/how-to-stream-live-video-to-chatgpt-advanced-voice-mode/">share live video</a>.  I had a long conversation with ChatGPT while driving alone to help keep me from getting sleepy &#8211; we discussed the best Yankee teams of all time.  The ChatGPT voice is animated so it&#8217;s like talking to a real person (choose from nine lifelike output voices for ChatGPT, each with its own distinct tone and character).  Screen share can be used to guide you through settings on your computer to troubleshoot a problem, while live video can be used to help you recognize objects or tell you the color of a shirt (great for colorblind people like me).</p>



<p><a href="https://openai.com/index/dall-e-3/">ChatGPT DALL-E 3</a> – Create images from text.  This used to be a separate product but is now built into ChatGPT.  I recently asked it to draw me a picture of a crawfish blowing out a birthday cake for a crawfish boil I did for someone&#8217;s birthday and sent that picture with the birthday invitations.</p>



<p><a href="https://techcommunity.microsoft.com/blog/azure-ai-services-blog/create-a-custom-text-to-speech-avatar-through-self-service/4366021">Azure AI Speech Studio</a> – Create a custom text to speech avatar using the <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/custom-neural-voice" title="">Azure AI Custom Neural Voice</a> and the <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/text-to-speech-avatar/custom-avatar-create" title="">Custom Avatar Self-Service</a> capabilities in <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-studio-overview" title="">Azure AI Speech Studio</a>.  You can use the video and voice of anyone you wish.  I hope to one day create an avatar that looks and sounds like me and then talk to it.  Freaky!</p>



<p><a href="https://www.clonos.io/">Clonos</a> &#8211; Create your own virtual avatar from existing video and sound, and make it say anything.  For some very funny examples in the sports world, see memerunngergpt on Instagram, where they modify videos of sports figures to make them say hilarious things in their own voices (warning: foul language).</p>



<p><a href="https://copilot.microsoft.com/" title="">Microsoft Copilot</a> &#8211; An AI chatbot that is now in many Microsoft products.  Check out a few of my favorites: <a href="https://support.microsoft.com/en-us/office/use-copilot-in-microsoft-teams-meetings-0bf9dd3c-96f7-44e2-8bb8-790bedf066b1" title="">Copilot in Teams</a>, <a href="https://www.jamesserra.com/archive/2024/07/copilot-in-microsoft-fabric/" title="">Copilot in Microsoft Fabric</a>, <a href="https://support.microsoft.com/en-us/windows/welcome-to-copilot-on-windows-675708af-8c16-4675-afeb-85a5a476ccb0" title="">Copilot on Windows</a>, <a href="https://support.microsoft.com/en-us/office/welcome-to-copilot-in-word-2135e85f-a467-463b-b2f0-c51a46d625d1" title="">Copilot in Word</a>, <a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/01/15/copilot-for-all-introducing-microsoft-365-copilot-chat/" title="">Microsoft 365 Copilot&nbsp;Chat</a>.</p>



<p>If you have not used <a href="https://chatgpt.com/" title="">ChatGPT</a>, you need to do so immediately! One way to use it that you may not be aware of is via roleplay, where ChatGPT takes on a persona. This is a great way to learn about a subject. For example, I am reading about the crusades, so I prompted ChatGPT: &#8220;Pretend you are a knight from the first crusade and fought from the very beginning to the very end of the crusade. I will ask you questions, and I want you to answer in the first person. Draw upon historical knowledge and accounts of the first crusade to immerse yourself in the mindset, beliefs, and experiences of such a knight, responding to me as though you were truly from that era&#8221;. Then I was able to ask questions directly to a &#8220;knight&#8221; and get all sorts of great info.  Other amazing personas you can ask it to emulate include &#8220;pretend you are SQL Server and I&#8217;m using SSMS&#8221; and &#8220;pretend you are the game Zork&#8221; (for you old-timers out there like me).</p>



<p>I also used ChatGPT on my iPhone and told it to pretend it was Santa Claus and to talk to my 6-year-old grandson.  I turned on voice mode, which has a seasonal Santa voice, and my grandson had a long and animated conversation with Santa (ChatGPT)!</p>



<p>To see some of the capabilities of OpenAI and ChatGPT, check out the <a href="https://www.youtube.com/@OpenAI">OpenAI YouTube channel</a>, which has demo videos. My favorites are <a href="https://www.youtube.com/watch?v=wfAYBdaGVxs&amp;ab_channel=OpenAI" title="">Interview Prep with GPT-4o</a>, <a href="https://www.youtube.com/watch?v=rKp36MmRlXA&amp;ab_channel=OpenAI" title="">Live demo of GPT-4o&#8217;s vision capabilities</a>, and <a href="https://www.youtube.com/watch?v=XOXMwsq7ACs&amp;ab_channel=OpenAI" title="">Interview roleplay with GPT-4o voice and vision</a>.</p>



<p>A helpful tip on asking questions with ChatGPT: you will get wrong answers sometimes, especially if your question (&#8220;prompt&#8221;) does not have much information.  If you get a wrong answer, tell ChatGPT via a follow-up prompt that the answer is wrong and tell it what the correct answer is.  If it then responds with the correct answer, prompt it with &#8220;change my original prompt to prevent the incorrect answer if I were to ask the question again&#8221;.  You will then get an improved prompt that you can use to ask the question again, or to build upon it.</p>



<p>More info:</p>



<p><a href="https://time.com/7204530/ai-companions/" title="">AI Companions Will Change Our Lives</a></p>The post <a href="https://www.jamesserra.com/archive/2025/01/cool-ai-sites/">Cool AI sites</a> first appeared on <a href="https://www.jamesserra.com">James Serra's Blog</a>.]]></content:encoded>
					
					<wfw:commentRss>https://www.jamesserra.com/archive/2025/01/cool-ai-sites/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		<enclosure url="https://cdn.openai.com/sora/videos/gold-rush.mp4" length="44452469" type="video/mp4" />

		<post-id xmlns="com-wordpress:feed-additions:1">20100</post-id>	</item>
	</channel>
</rss>
