<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:webfeeds="http://webfeeds.org/rss/1.0">

<channel>
	<title>SQLBI</title>
	<atom:link href="https://www.sqlbi.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.sqlbi.com</link>
	<description>Business Intelligence with passion</description>
	<lastBuildDate>Mon, 20 Apr 2026 20:01:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
            <webfeeds:icon>https://www.sqlbi.com/logo/icon.svg</webfeeds:icon>
            <webfeeds:logo>https://www.sqlbi.com/logo/icon.svg</webfeeds:logo>
            <webfeeds:accentColor>ec5d5d</webfeeds:accentColor>
            <webfeeds:related layout="card" target="browser" />
            <webfeeds:analytics id="UA-7095697-2" engine="GoogleAnalytics" />
        	<item>
		<title>Understanding parameter types in DAX user-defined functions (UDF)</title>
		<link>https://www.sqlbi.com/articles/understanding-parameter-types-in-dax-user-defined-functions-udf/</link>
					<comments>https://www.sqlbi.com/articles/understanding-parameter-types-in-dax-user-defined-functions-udf/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 20:00:43 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[UDF]]></category>
		<category><![CDATA[User-defined functions]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=895583</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/parameter-types.png" class="webfeedsFeaturedVisual" /></figure>This article describes the parameter types available in DAX user-defined functions, focusing on the specialized reference types MEASUREREF, COLUMNREF, TABLEREF, and CALENDARREF. In a previous article, Introducing user-defined functions in DAX, we described the syntax for creating user-defined functions, including&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/parameter-types.png" class="webfeedsFeaturedVisual" /></figure><p>This article describes the parameter types available in DAX user-defined functions, focusing on the specialized reference types MEASUREREF, COLUMNREF, TABLEREF, and CALENDARREF.<br />
<span id="more-895583"></span></p>
<p>In a previous article, <a href="https://www.sqlbi.com/articles/introducing-user-defined-functions-in-dax/">Introducing user-defined functions in DAX</a>, we described the syntax for creating user-defined functions, including the two passing modes (VAL and EXPR) and the fundamental parameter types SCALAR and TABLE. In this article, we build on that foundation and focus on the complete type system, with particular attention to the reference types introduced in March 2026 that provide better documentation, stronger validation, and improved IntelliSense support.</p>
<p>Before diving into the new types, let us briefly recap the full picture of parameter types and passing modes available in DAX user-defined functions.</p>
<h2>Parameter types and passing modes</h2>
<p>Each parameter of a user-defined function has two properties: a type, which describes what kind of value the parameter accepts, and a passing mode, which describes how the value is transferred from the caller to the body of the function. The following table summarizes all the valid combinations.</p>
<div style="display: flex;gap: 80px">
<table width="310">
<tbody>
<tr>
<td width="110"><strong>Type</strong></td>
<td width="200"><strong>Passing mode</strong></td>
</tr>
<tr>
<td>ANYVAL</td>
<td>VAL</td>
</tr>
<tr>
<td><strong>SCALAR (*)</strong></td>
<td>VAL / EXPR</td>
</tr>
<tr>
<td>TABLE</td>
<td>VAL / EXPR</td>
</tr>
<tr>
<td>ANYREF</td>
<td>EXPR</td>
</tr>
<tr>
<td>MEASUREREF</td>
<td>EXPR</td>
</tr>
<tr>
<td>COLUMNREF</td>
<td>EXPR</td>
</tr>
<tr>
<td>TABLEREF</td>
<td>EXPR</td>
</tr>
<tr>
<td>CALENDARREF</td>
<td>EXPR</td>
</tr>
</tbody>
</table>
<table width="109">
<tbody>
<tr>
<td width="109"><strong>SCALAR (*) Subtype</strong></td>
</tr>
<tr>
<td>VARIANT</td>
</tr>
<tr>
<td>INT64</td>
</tr>
<tr>
<td>DECIMAL</td>
</tr>
<tr>
<td>DOUBLE</td>
</tr>
<tr>
<td>STRING</td>
</tr>
<tr>
<td>DATETIME</td>
</tr>
<tr>
<td>BOOLEAN</td>
</tr>
<tr>
<td>NUMERIC</td>
</tr>
</tbody>
</table>
</div>
<p>SCALAR and TABLE are the two types that work with both VAL and EXPR. When no passing mode is specified, the default is VAL for both. ANYVAL is a shortcut that forces the VAL passing mode and accepts any scalar value. All the remaining types (ending with “REF”) force the EXPR passing mode. The passing mode keyword can be omitted for these types because only EXPR is valid.</p>
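<p>As a minimal sketch of the declaration syntax – the <em>Local.Example</em> function and its parameter names are hypothetical and not part of the sample model – the following definition combines a SCALAR subtype with the default VAL mode, an explicit SCALAR EXPR, and a TABLE EXPR:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.Example = (
    lowerAmount   : DOUBLE,       -- SCALAR subtype; default passing mode is VAL
    amountExpr    : SCALAR EXPR,  -- scalar expression, evaluated in the function body
    customersExpr : TABLE EXPR    -- table expression, evaluated in the function body
) =&gt;
    COUNTROWS (
        FILTER ( customersExpr, amountExpr &gt; lowerAmount )
    )
</pre>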
<p>The types in the lower part of the Type/Passing Mode table (MEASUREREF, COLUMNREF, TABLEREF, and CALENDARREF) are specializations of ANYREF. They share the same passing mode, but they restrict the kind of expression the caller can provide. These are the types we focus on in the rest of this article.</p>
<h3>ANYREF and its limitations</h3>
<p>ANYREF declares a parameter that accepts any expression and is always passed as an expression. It is the most permissive reference type: the function accepts whatever the caller provides, whether a measure reference, a column reference, a table reference, or an arbitrary DAX calculation. The expression provided is substituted into the body of the function wherever the parameter appears. It is important to highlight that ANYREF accepts any DAX formula as a valid argument: ANYREF should not be interpreted as &#8220;a reference to any existing object&#8221; but rather as &#8220;a reference to any expression&#8221;. Indeed, writing ANYREF without EXPR, or EXPR without ANYREF, has the same meaning and produces the same effect.</p>
<p>This flexibility comes at a cost. Because ANYREF accepts anything, the function author cannot make assumptions about the nature of the expression. Is it a measure that triggers a context transition? Is it a simple column reference? Is it an arbitrary calculation? With ANYREF, the answer could be any of these. The function code must therefore be <a href="https://jumpcloud.com/it-index/what-is-defensive-coding">defensive</a>: because the expression may or may not trigger a context transition on its own, the function author should use an explicit CALCULATE to guarantee consistent behavior whenever a context transition is needed – something that would not be necessary if the argument were guaranteed to be a measure reference.</p>
<p>The lack of specificity also affects the caller’s experience. IntelliSense and other development tools cannot provide meaningful guidance when the parameter accepts just any expression. The developer who calls the function must rely on documentation, or on reading the function body, to understand what is expected.</p>
<p>When a parameter is declared as ANYREF, <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">we recommend</a> using the suffix <em>Expr</em> in the parameter name: for example, <em>amountExpr</em> or <em>targetExpr</em>.</p>
<p>For example, here is a model-dependent function that filters customers whose purchase amount (provided as ANYREF) is greater than a minimum value (<em>lowerAmount</em>):</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; highlight: [2]; title: ; snippet: Function; notranslate">
Local.TopCustomersAnyRefA = ( 
    amountExpr : ANYREF, 
    lowerAmount : DOUBLE 
) =&gt;
    FILTER (
        Customer, 
        amountExpr &gt; lowerAmount
    )
</pre>
<p>The <em>Local.TopCustomersAnyRefA</em> function can be invoked in three different ways, shown in the three versions of the <em>AnyRef-A</em> measure: passing the argument as a measure reference, as an expression, and as an expression embedded in CALCULATE, respectively. The expression used is the same as the one defined in the <em>Total Quantity</em> measure:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Total Quantity = 
SUM ( Sales&#x5B;Quantity] )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
AnyRef-A 1 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersAnyRefA ( &#x5B;Total Quantity], 20 )
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
AnyRef-A 2 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersAnyRefA ( SUM ( Sales&#x5B;Quantity] ), 20 )
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
AnyRef-A 3 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersAnyRefA ( CALCULATE ( SUM ( Sales&#x5B;Quantity] ) ), 20 )
)
</pre>
<p>The second version of <em>AnyRef-A</em>, whose argument is an expression not embedded in CALCULATE, returns the same values as <em>Sales Amount</em> because the <em>Local.TopCustomersAnyRefA</em> function returns all customers: the <em>amountExpr</em> argument is evaluated without a filter on the iterated customer, since the context transition is missing.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-130.png" width="600" /></p>
<p>We can fix the function by embedding the <em>amountExpr</em> parameter in a CALCULATE, which is redundant but harmless when the argument is a measure reference. However, this would prevent providing a column of <em>Customer</em> as the argument for <em>amountExpr</em> – not that it would have been a good idea, but ANYREF does not impose any restriction on the argument:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.TopCustomersAnyRefB = ( 
    amountExpr : ANYREF, 
    lowerAmount : DOUBLE 
) =&gt;
    FILTER (
        Customer, 
        CALCULATE ( amountExpr ) &gt; lowerAmount
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
AnyRef-B 2 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersAnyRefB ( SUM ( Sales&#x5B;Quantity] ), 20 )
)
</pre>
<p>This way, all the versions of the <em>AnyRef-B</em> measure return the same value.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-126.png" width="600" /></p>
<h3>MEASUREREF</h3>
<p>A MEASUREREF parameter accepts only a reference to a measure defined in the semantic model. The caller must provide the name of an existing measure; arbitrary DAX expressions are not accepted.</p>
<p>This restriction has an important semantic implication. A measure reference always triggers a context transition when evaluated in a row context. When we declare a parameter as MEASUREREF, we inform the reader of the function code that a context transition will occur wherever this parameter is used within an iterator. This makes the code easier to think about because the parameter’s behavior is predictable.</p>
<p>With ANYREF, the function author should wrap the parameter in an explicit CALCULATE to guarantee context transition, because the caller might provide an expression that does not trigger it on its own, as we illustrated with the previous examples for ANYREF. With MEASUREREF, CALCULATE is redundant for this purpose, though it causes no harm. The constraint imposed by the MEASUREREF type guarantees the behavior.</p>
<p>When a parameter is declared as MEASUREREF, <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">we recommend</a> using the suffix <em>Measure</em> in the parameter name: for example, <em>salesMeasure</em> or <em>targetMeasure</em>.</p>
<p>For this example, we created a version of the function we used in the previous ANYREF example, this time specifying MEASUREREF as the type of the <em>amountMeasure</em> parameter:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.TopCustomersMeasureRef = ( 
    amountMeasure : MEASUREREF, 
    lowerAmount : DOUBLE 
) =&gt;
    FILTER (
        Customer, 
        amountMeasure &gt; lowerAmount
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
MeasureRef 1 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersMeasureRef ( &#x5B;Total Quantity], 20 )
)
</pre>
<p>There is only one version of the measure we can use: the one that provides <em>Total Quantity</em> as an argument.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-112.png" width="389" /></p>
<p>Indeed, trying to provide an expression as the argument for <em>amountMeasure</em> generates a syntax error:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
MeasureRef 2 = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.TopCustomersMeasureRef ( SUM ( Sales&#x5B;Quantity] ), 20 )
)
</pre>
<p>The declaration of <em>MeasureRef 2</em> would return the following error: <em>An invalid argument type was passed into parameter ‘amountMeasure’ of the user-defined function. Expected ‘MEASUREREF’ but got ‘SCALAR’.</em></p>
<p>The error message mentions SCALAR because the expression <em>SUM ( Sales[Quantity] )</em> could be evaluated before executing the <em>Local.TopCustomersMeasureRef</em> function, in which case its result would be a scalar. However, the important part is that the expected argument should have been a MEASUREREF, and it is not.</p>
<h3>COLUMNREF</h3>
<p>A COLUMNREF parameter accepts only a reference to a column defined in a table in the semantic model. The caller must provide a qualified column reference, such as <em>Sales[Unit Price]</em> or <em>Product[Unit Price]</em>; arbitrary expressions are not accepted.</p>
<p>COLUMNREF is particularly useful when writing model-independent functions. Instead of hardcoding column names in the function body (which would create a dependency on the model structure), we declare the columns as COLUMNREF parameters and let the caller specify which columns to use. This design makes the function portable across models with different table and column names.</p>
<p>COLUMNREF parameters work well in combination with two DAX functions designed for inspecting reference parameters, TABLEOF and NAMEOF:</p>
<ul>
<li><strong>TABLEOF</strong> retrieves the table where a given column is defined: if the caller passes <em>Sales[Unit Price]</em> as the <em>priceColumn</em> parameter, then <em>TABLEOF ( priceColumn )</em> returns the <em>Sales</em> table. This combination allows us to reduce the number of parameters in the function signature: instead of asking the caller for both a table and a column from that table, we can ask for only the column and, from there, derive the table by using TABLEOF.</li>
<li><strong>NAMEOF</strong> returns the name of a column reference as a string, which can be useful for dynamic operations that require the column name in text form.</li>
</ul>
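<p>As a minimal sketch – the <em>Local.QualifiedName</em> function is hypothetical and not part of the sample model – NAMEOF and TABLEOF can be combined to build a diagnostic description of a COLUMNREF argument:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.QualifiedName = ( anyColumn : COLUMNREF ) =&gt;
    -- Concatenates the table name with the column name returned by NAMEOF
    NAMEOF ( TABLEOF ( anyColumn ) ) &amp; &quot; / &quot; &amp; NAMEOF ( anyColumn )
</pre>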
<p>When a parameter is declared as COLUMNREF, <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">we recommend</a> using the suffix <em>Column</em> in the parameter name: for example, <em>priceColumn</em> or <em>dateColumn</em>.</p>
<p>We start with an educational example, <em>SumProduct</em>, which just multiplies two columns row-by-row and sums the result:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
SumProduct = ( 
    quantityColumn: COLUMNREF, 
    priceColumn: COLUMNREF 
) =&gt;
    SUMX (
        TABLEOF ( quantityColumn ),
        quantityColumn * priceColumn
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Total Cost = 
SumProduct ( Sales&#x5B;Quantity], Sales&#x5B;Unit Cost] )
</pre>
<p>The result of the <em>Total Cost</em> measure computed this way is identical to <em>SUMX ( Sales, Sales[Quantity] * Sales[Unit Cost] )</em>. However, this first example is merely educational: it shows that by using TABLEOF, it is possible to obtain the table from a column reference parameter without requiring an additional parameter for the table reference.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-106.png" width="338" /></p>
<p>However, this simple example already shows an important limitation: the formula inside the function assumes that the two columns belong to the same table. If this condition is not met, the error could be misleading. For example, the following measure is not valid and generates a syntax error:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Total Cost Mismatch = 
SumProduct ( Sales&#x5B;Quantity], &#039;Product&#039;&#x5B;Unit Cost] )
</pre>
<p>The syntax error is: <em>A single value for column &#8216;Unit Cost&#8217; in table &#8216;Product&#8217; cannot be determined.</em> This error is not very clear because it is generated by the SUMX function used in <em>SumProduct</em> when it references columns from two different tables. Ideally, we would like to customize the error message by validating that the two arguments belong to the same table. Although the current preview of UDFs has limitations that prevent this validation from working as intended – as we describe shortly – we still want to introduce the validation pattern in this article. We attempt the validation with the following code:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
SumProductSafe = ( 
    quantityColumn: COLUMNREF, 
    priceColumn: COLUMNREF 
) =&gt;
    IF (
        NAMEOF ( TABLEOF ( quantityColumn ) ) == NAMEOF ( TABLEOF ( priceColumn ) ),
        SUMX (
            TABLEOF ( quantityColumn ),
            quantityColumn * priceColumn
        ),
        ERROR ( &quot;All the column references must belong to the same table&quot; )
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Total Cost Safe Mismatch = 
SumProductSafe ( Sales&#x5B;Quantity], &#039;Product&#039;&#x5B;Unit Cost] )
</pre>
<p>In this case, the error message should be: <em>All the column references must belong to the same table.</em> Unfortunately, the current implementation does not support this kind of validation before execution. We hope that Microsoft will support such validation before user-defined functions become generally available, by using the syntax in this example or an equivalent one.</p>
<p>For a more meaningful example, consider a scenario in which a disconnected table named <em>PriceRange</em> defines price ranges in the model (a more complete coverage of this scenario is available in the <a href="https://www.daxpatterns.com/static-segmentation/">Static segmentation</a> DAX pattern).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-90.png" width="303" /></p>
<p>We can create a model-independent function that retrieves the segment corresponding to a specified value. In order to be model-independent, the function exposes all the model dependencies as parameters, which in this case are all column references:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
RangeLookupUnchecked = (
    search        : SCALAR VAL,
    minColumn     : COLUMNREF,
    maxColumn     : COLUMNREF,
    targetColumn  : COLUMNREF
) =&gt;
    SELECTCOLUMNS (
        FILTER ( 
            TABLEOF ( minColumn ),
            minColumn &lt;= search &amp;&amp; maxColumn &gt; search 
        ),
        &quot;@Result&quot;, targetColumn
    )
</pre>
<p>The <em>RangeLookupUnchecked</em> function does not validate that the three columns belong to the same table. An error in the arguments provided to the function might be difficult to interpret. Therefore, we would like to create a safer version of the function that verifies that all the column references do belong to the same table, and returns a specific error otherwise:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
RangeLookup = (
    search        : SCALAR VAL,
    minColumn     : COLUMNREF,
    maxColumn     : COLUMNREF,
    targetColumn  : COLUMNREF
) =&gt;
    IF (
        NAMEOF ( TABLEOF ( minColumn ) ) == NAMEOF ( TABLEOF ( maxColumn ) )
            &amp;&amp; NAMEOF ( TABLEOF ( minColumn ) ) == NAMEOF ( TABLEOF ( targetColumn ) ),
        SELECTCOLUMNS (
            FILTER ( 
                TABLEOF ( minColumn ),
                minColumn &lt;= search &amp;&amp; maxColumn &gt; search 
            ),
            &quot;@Result&quot;, targetColumn
        ),
        ERROR ( &quot;All the column references must belong to the same table&quot; )
    )
</pre>
<p>Currently, the syntax error raised by an invalid column reference occurs before the code that generates the customized error, but we hope this check will become possible in the future. We can also define a version of the function for the <a href="https://www.daxpatterns.com/dynamic-segmentation/">dynamic segmentation</a> pattern, which returns a table and can be used as a CALCULATE filter in a measure (with the same disclaimer: the validation code might not be executed as we would like in the current preview of UDFs):</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
ValuesInSegment = (
    filterColumn  : COLUMNREF,
    minColumn     : COLUMNREF,
    maxColumn     : COLUMNREF,
    targetColumn  : COLUMNREF
) =&gt;
    GENERATE (
        TABLEOF ( targetColumn ),
        FILTER ( 
            VALUES ( filterColumn ),
            IF (
                NAMEOF ( TABLEOF ( minColumn ) ) == NAMEOF ( TABLEOF ( maxColumn ) )
                    &amp;&amp; NAMEOF ( TABLEOF ( minColumn ) ) == NAMEOF ( TABLEOF ( targetColumn ) ),
                minColumn &lt;= filterColumn &amp;&amp; maxColumn &gt; filterColumn,
                ERROR ( &quot;minColumn, maxColumn, and targetColumn arguments must belong to the same table&quot; )
            )
        )
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Segmented Sales = 
CALCULATE ( 
    &#x5B;Sales Amount],
    ValuesInSegment (
        Sales&#x5B;Net Price],
        PriceRange&#x5B;Min Price], PriceRange&#x5B;Max Price], PriceRange&#x5B;Segment]
    )
)
</pre>
<p>The <em>Segmented Sales</em> measure filters <em>Sales Amount</em> by the segment displayed in the visual, whereas the original <em>Sales Amount</em> measure ignores that filter because it comes from a disconnected table.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-82.png" width="299" /></p>
<h3>TABLEREF</h3>
<p>A TABLEREF parameter accepts only a reference to a table defined in the semantic model. The caller must provide the name of an existing table; table expressions such as FILTER or SELECTCOLUMNS are not accepted.</p>
<p>This type is useful when the function needs to operate on a model table and must guarantee that the provided argument is an actual table from the model, not a derived or filtered table expression. By constraining the parameter to a table reference, the function can rely on the table having the full set of columns and relationships defined in the model.</p>
<p>When a parameter is declared as TABLEREF, <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">we recommend</a> using the suffix <em>Table</em> in the parameter name: for example, <em>salesTable</em> or <em>customerTable</em>.</p>
<p>Using TABLEREF is probably uncommon because TABLE EXPR is more flexible and does not impose restrictions on the table expression evaluated inside the function. However, we may want to ensure that the argument is a model table so that we can use functions like ISFILTERED and ISCROSSFILTERED with a valid table argument. For example, the <em>HasRelationships</em> function returns TRUE if <em>sourceTable</em> filters <em>targetTable</em> in the current filter context, meaning that one or more relationships connect the two tables and propagate the filter context:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
HasRelationships = (
    targetTable : TABLEREF,
    sourceTable : TABLEREF
) =&gt; 
    CALCULATE (
        ISCROSSFILTERED ( targetTable ),
        sourceTable,
        REMOVEFILTERS ()
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Product filter Sales = 
HasRelationships ( 
    Sales,
    &#039;Product&#039;
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
PriceRange filter Sales = 
HasRelationships ( 
    Sales,
    PriceRange
)
</pre>
<p>The <em>Product filter Sales</em> and <em>PriceRange filter Sales</em> measures show how to use the <em>HasRelationships</em> function.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-70.png" width="423" /></p>
<p>The example is merely educational. We suggest using TABLEOF whenever possible to reduce the number of parameters, and considering TABLE EXPR instead of TABLEREF to give more flexibility to the developers using a function.</p>
<h3>CALENDARREF</h3>
<p>A CALENDARREF parameter accepts only a reference to a calendar defined in the semantic model. CALENDARREF is designed for calendar-based time intelligence functions.</p>
<p>When a parameter is declared as CALENDARREF, <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">we recommend</a> using the suffix <em>Calendar</em> in the parameter name — for example, <em>dateCalendar</em>.</p>
<p>As an example, we create a <em>DatesPYTD</em> function that applies a previous year-to-date transformation by combining DATESYTD and SAMEPERIODLASTYEAR:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DatesPYTD = ( targetCalendar : CALENDARREF ) =&gt;
    CALCULATETABLE (
        DATESYTD ( targetCalendar ),
        SAMEPERIODLASTYEAR ( targetCalendar )
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
YTD Sales = 
CALCULATE ( 
    &#x5B;Sales Amount],
    DATESYTD ( &#039;Gregorian&#039; )
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
PYTD Sales = 
CALCULATE (
    &#x5B;Sales Amount],
    DatesPYTD ( &#039;Gregorian&#039; )
)
</pre>
<p>The result of <em>PYTD Sales</em> is like <em>YTD Sales</em>, shifted by one year.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image8-58.png" width="448" /></p>
<h2>Why use specific reference types instead of ANYREF</h2>
<p>The new reference types are specializations of ANYREF: they share the same passing mode (EXPR), but they restrict the accepted expressions. A natural question is, “why should we bother with the restriction when ANYREF already works?”. There are two primary reasons.</p>
<p>The first reason is validation. When we use a specific reference type, the engine and IntelliSense can enforce constraints at the point of the function call. If a developer mistakenly passes a column reference to a MEASUREREF parameter, the error is reported immediately with a clear message. If a developer passes a FILTER expression to a TABLEREF parameter, the engine rejects it before the function body executes. With ANYREF, these mistakes would produce confusing errors deep inside the function body or, worse, incorrect results without any error at all.</p>
<p>The second reason is documentation. A function signature is the first thing a developer reads when deciding whether and how to use a function. A parameter declared as MEASUREREF immediately communicates that the function expects a measure, that context transition will occur, and that arbitrary expressions are not accepted. A parameter declared as COLUMNREF communicates that the caller must provide a column from a model table. A parameter declared as ANYREF communicates none of these things; the developer must read the function body to understand what is expected, even though adopting a consistent <a href="https://docs.sqlbi.com/dax-style/dax-naming-conventions#parameters">naming convention for the parameters</a> helps clarify that.</p>
<p>These two reasons reinforce each other. Better documentation reduces the likelihood of mistakes, and stronger validation catches the mistakes that still occur. Together, they make functions easier to use, easier to maintain, and safer to share across models and libraries.</p>
<h2>Conclusions</h2>
<p>The parameter type system in DAX user-defined functions (UDFs) provides a spectrum from the most permissive type (ANYREF) to the most restrictive (MEASUREREF, COLUMNREF, TABLEREF, and CALENDARREF), which are specializations of ANYREF that restrict the accepted expressions to specific categories.</p>
<p>The rule is simple: use the most specific parameter type that satisfies your function’s requirements. If the function expects a measure, use MEASUREREF. If it expects a column, use COLUMNREF. If it expects a model table reference, use TABLEREF. If it expects a calendar, use CALENDARREF. Reserve ANYREF for those cases where the function genuinely needs to be able to accept any kind of expression. The more specific the type, the clearer the intent of the function, the stronger the validation, and the more helpful the development tools become for the developers who use your functions.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/understanding-parameter-types-in-dax-user-defined-functions-udf/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Using visual calculation to highlight an entire row</title>
		<link>https://www.sqlbi.com/tv/using-visual-calculation-to-highlight-an-entire-row/</link>
					<comments>https://www.sqlbi.com/tv/using-visual-calculation-to-highlight-an-entire-row/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Tue, 07 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Power BI]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=895219</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/3fAP2vDNIYQ/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to highlight a row based solely on the maximum value in the last column using visual calculations, a tool that can be used efficiently to format visuals.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/3fAP2vDNIYQ/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to highlight a row based solely on the maximum value in the last column using visual calculations, a tool that can be used efficiently to format visuals.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/using-visual-calculation-to-highlight-an-entire-row/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to navigate the lattice of visual calculations</title>
		<link>https://www.sqlbi.com/tv/how-to-navigate-the-lattice-of-visual-calculations/</link>
					<comments>https://www.sqlbi.com/tv/how-to-navigate-the-lattice-of-visual-calculations/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 20:00:24 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Visual calculations]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=895558</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/how-to-navigate-lattice.jpg" class="webfeedsFeaturedVisual" /></figure>This video explains how to navigate the lattice of visual calculations in DAX. The visual context goes beyond the row context and filter context by adding the concept of levels within the lattice. The video shows how to use COLLAPSEALL&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/how-to-navigate-lattice.jpg" class="webfeedsFeaturedVisual" /></figure><p>This video explains how to navigate the lattice of visual calculations in DAX. The visual context goes beyond the row context and filter context by adding the concept of levels within the lattice. The video shows how to use COLLAPSEALL and EXPAND to move between levels, using COLLAPSEALL as an absolute reference point from which you can reach any desired level.</p>
<p>A key topic is the difference between ROWS/COLUMNS and VALUES of ROWS/COLUMNS. ROWS and COLUMNS always return all values at the current level, ignoring filters, while VALUES of ROWS/COLUMNS respects the active filter context. The video also shows how to create reusable visual calculation functions that encapsulate lattice navigation logic, simplifying otherwise complex code.</p>
<p>These techniques build on the article <a href="https://www.sqlbi.com/articles/using-visual-calculations-to-highlight-an-entire-row/">Using visual calculations to highlight an entire row</a>. We recommend reading the article and watching its companion video before this SQLBI+ video.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/how-to-navigate-the-lattice-of-visual-calculations/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Using visual calculations to highlight an entire row</title>
		<link>https://www.sqlbi.com/articles/using-visual-calculations-to-highlight-an-entire-row/</link>
					<comments>https://www.sqlbi.com/articles/using-visual-calculations-to-highlight-an-entire-row/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 20:00:04 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Visual calculations]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=895191</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image2-125.png" class="webfeedsFeaturedVisual" /></figure>Visual calculations can be used efficiently to format visuals. This article presents an interesting technique to highlight a row based solely on the maximum value in the last column. When it comes to visuals, users may want specific cells&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image2-125.png" class="webfeedsFeaturedVisual" /></figure><p>Visual calculations can be used efficiently to format visuals. This article presents an interesting technique to highlight a row based solely on the maximum value in the last column.<br />
<span id="more-895191"></span></p>
<p>When it comes to visuals, users may want specific cells highlighted in order to spot important information quickly. While browsing the forums, we came across an interesting requirement that can easily be solved with a DAX measure: highlighting an entire row based only on the value in the last column of the visual. In our example, we highlight Wide World Importers because it has the maximum value (71,904.98) in the last year (2026).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-129.png" width="1000" /></p>
<h2>Introducing the measure solution</h2>
<p>The scenario can be easily solved with a regular measure that computes the last visible year, then the values of different brands in that year, and finally searches for the brands with the maximum value (there may be more than just one) and produces “Yellow” for those brands only:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Color =
VAR LastYear =
    CALCULATE ( MAX ( &#039;Date&#039;&#x5B;Year] ), ALLSELECTED () )
VAR BrandsInLastYear =
    CALCULATETABLE (
        ADDCOLUMNS ( VALUES ( &#039;Product&#039;&#x5B;Brand] ), &quot;@Sales&quot;, &#x5B;Sales Amount] ),
        ALLSELECTED ( &#039;Product&#039;&#x5B;Brand] ),
        &#039;Date&#039;&#x5B;Year] = LastYear
    )
VAR MaxValueInLastYear =
    MAXX ( BrandsInLastYear, &#x5B;@Sales] )
VAR BrandsWithMaxValueInLastYear =
    SELECTCOLUMNS (
        FILTER ( BrandsInLastYear, &#x5B;@Sales] = MaxValueInLastYear ),
        &#039;Product&#039;&#x5B;Brand]
    )
VAR CurrentBrand =
    SELECTEDVALUE ( &#039;Product&#039;&#x5B;Brand] )
VAR Result =
    IF ( CurrentBrand IN BrandsWithMaxValueInLastYear, &quot;Yellow&quot; )
RETURN
    Result
</pre>
<p>A good DAX developer writes this code and obtains the desired result. However, the code has a couple of drawbacks worth pointing out:</p>
<ul>
<li>Using a model measure to change the behavior of a visual is a bit overkill. Every visual has specific requirements that necessitate creating multiple measures whose sole purpose is to alter the aesthetics of the report.</li>
<li>The measure works fine if the visual includes the year on the columns and the brand on the rows. Changing the structure of the visual, for example by using <em>Product[Category]</em> on the columns, also requires changing the measure code.</li>
</ul>
<h2>Implementing a visual calculation</h2>
<p>It would be much more convenient to embed the aesthetic changes in a visual calculation, so that the model does not become messy. A visual calculation requires less intuitive code, because it involves navigating the visual lattice using EXPAND and COLLAPSE. On the other hand, the ability to reference ROWS and COLUMNS, and to use nice functions like LAST, simplifies some of the calculations:</p>
<div class="dax-code-title">Visual calculation</div>
<pre class="brush: dax; title: ; snippet: Visual calculation; notranslate">
Color = 
VAR LastYear = 
    CALCULATE ( 
        CALCULATE ( 
            LAST ( &#x5B;Year], COLUMNS ), 
            EXPAND ( COLUMNS ) 
        ),
        COLLAPSEALL ( ROWS COLUMNS ) 
    )
VAR RowsInLastYear = 
    CALCULATETABLE ( 
        CALCULATETABLE ( 
            FILTER ( 
                ROWS COLUMNS,
                &#x5B;Year] = LastYear
            ),
            EXPAND ( ROWS COLUMNS )
        ),
        COLLAPSEALL ( ROWS COLUMNS )
    )
VAR MaxValueInLastYear = MAXX ( RowsInLastYear, &#x5B;Sales Amount] )
VAR BrandsWithMaxValueInLastYear = 
    SELECTCOLUMNS ( 
        FILTER ( RowsInLastYear, &#x5B;Sales Amount] = MaxValueInLastYear ), 
        &quot;Brand&quot;, &#x5B;Brand] 
    )
VAR CurrentBrand = &#x5B;Brand]
RETURN
    IF ( CurrentBrand IN BrandsWithMaxValueInLastYear, &quot;Yellow&quot; )
</pre>
<p>The main advantage of using a visual calculation is that the code lives in the visual rather than in the model. Therefore, if the visual is copied to a different page and reorganized, the visual calculation travels with it and can be adapted to work in the new visual.</p>
<p>However, the solution is not optimal yet. The visual calculation needs to reference the column names in multiple places. For example, using a different measure requires modifying the code. If <em>Category</em> is on the rows rather than <em>Brand</em>, the code needs to be adapted again. The chances of making mistakes are quite high.</p>
<p>In this scenario, functions are king. A good way to think about the scenario is to write a function that takes a table representing the matrix content and returns the name of the brand (or whatever column is required) with the maximum value for the year (or, again, whatever column is required). The visual calculation will only need to build the matrix, pass it down to the function along with the names of the columns to use, and then decide the color to use.</p>
<p>A possible implementation of the function is the following:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
RowHeaderOfLastCol = 
    ( 
        matrix : TABLE, 
        colHeader : COLUMNREF, 
        rowHeader : COLUMNREF, 
        valueExpr : EXPR 
    ) =&gt; 
    VAR LastColHeader = MAXX ( matrix, colHeader )
    VAR RowsInLastCol = 
        FILTER (
            matrix,
            colHeader = LastColHeader
        )
    VAR MaxValueInLastCol = MAXX ( RowsInLastCol, valueExpr )
    VAR RowWithMaxValueInLastCol = 
        FILTER (
            RowsInLastCol, 
            valueExpr = MaxValueInLastCol
        )
    VAR Result = SELECTCOLUMNS ( RowWithMaxValueInLastCol, &quot;rowHeader&quot;, rowHeader )
    RETURN Result
</pre>
<p>As you may notice, the function no longer references <em>Brand</em>, <em>Sales Amount</em>, and <em>Year</em>. The function is, by its nature, generic. It receives a table representing the matrix content and searches for the row headers that contain the maximum value in the last column.</p>
<p>The visual calculation is much simpler, as it only needs to build the table containing the matrix and then invoke the function:</p>
<div class="dax-code-title">Visual calculation</div>
<pre class="brush: dax; title: ; snippet: Visual calculation; notranslate">
Color = 
VAR BestBrands = 
    CALCULATETABLE ( 
        CALCULATETABLE ( 
            RowHeaderOfLastCol ( ROWS COLUMNS, &#x5B;Year], &#x5B;Brand], &#x5B;Sales Amount] ),
            EXPAND ( ROWS COLUMNS )
        ),
        COLLAPSEALL ( ROWS COLUMNS )
    )
VAR CurrentBrand = &#x5B;Brand]
RETURN 
    IF ( CurrentBrand IN BestBrands, &quot;Yellow&quot; )
</pre>
<p>The main advantage is that the reference to the column names appears in just one line: the function call. The remaining part of the visual calculation does not need to be adapted for a different visual, thus making it easier to produce different implementations of the same business logic.</p>
<p>As an example, the following is a visual calculation that highlights rows in a matrix that contains the category at the top level and the brand at the second level:</p>
<pre class="brush: dax; title: ; notranslate">
Color = 
VAR BestBrands = 
    CALCULATETABLE ( 
        CALCULATETABLE ( 
            RowHeaderOfLastCol ( VALUES ( ROWS COLUMNS ), &#x5B;Year], &#x5B;Brand], &#x5B;Sales Amount] ),
            EXPAND ( &#x5B;Year] ),
            EXPAND ( &#x5B;Brand] )
        ),
        COLLAPSE ( ROWS COLUMNS )
    )
VAR CurrentBrand = &#x5B;Brand]
RETURN 
    IF ( CurrentBrand IN BestBrands, &quot;Yellow&quot; )
</pre>
<p>The result is that brands are highlighted locally in the current category.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-125.png" width="1000" /></p>
<h2>Conclusions</h2>
<p>Despite its simplicity, this example shows several useful techniques for managing your code, as well as several non-trivial DAX features: using ROWS COLUMNS to create a table with the content of the matrix, navigating the lattice, and passing columns of the virtual table down to functions.</p>
<p>Going into the many details would take much longer, and that type of content would not be suitable for a short article and its related video. If you are interested in learning more about these topics, we have published a longer video, <a href="https://www.sqlbi.com/tv/how-to-navigate-the-lattice-of-visual-calculations/">How to navigate the lattice of visual calculations</a>, available to all our <a href="https://www.sqlbi.com/p/plus/">SQLBI+</a> subscribers.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/using-visual-calculations-to-highlight-an-entire-row/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SQLBI+ updates in April 2026</title>
		<link>https://www.sqlbi.com/blog/marco/2026/04/06/sqlbi-updates-in-april-2026/</link>
					<comments>https://www.sqlbi.com/blog/marco/2026/04/06/sqlbi-updates-in-april-2026/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 06 Apr 2026 19:58:35 +0000</pubDate>
				<category><![CDATA[SQLBI+]]></category>
		<category><![CDATA[Visual calculations]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=blogpost&#038;p=895577</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/how-to-navigate-lattice.jpg" class="webfeedsFeaturedVisual" /></figure>We released a new session in SQLBI+: How to navigate the lattice of visual calculations: This video explains how to navigate the lattice of visual calculations in DAX, using COLLAPSEALL and EXPAND to move between levels. A key topic is&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/how-to-navigate-lattice.jpg" class="webfeedsFeaturedVisual" /></figure><p>We released a new session in <strong>SQLBI+</strong>:</p>
<ul>
<li><a href="https://www.sqlbi.com/tv/how-to-navigate-the-lattice-of-visual-calculations/"><img decoding="async" class="nozoom" style="margin: 0px 0px 5px 15px;border: none !important" src="https://cdn.sqlbi.com/wp-content/uploads/how-to-navigate-lattice.jpg" alt="How to navigate the lattice of visual calculations" width="128" height="72" align="right" />How to navigate the lattice of visual calculations</a>: This video explains how to navigate the lattice of visual calculations in DAX, using COLLAPSEALL and EXPAND to move between levels. A key topic is the difference between ROWS/COLUMNS and VALUES of ROWS/COLUMNS. The video also shows how to create reusable visual calculation functions that encapsulate lattice navigation logic, simplifying otherwise complex code.</li>
</ul>
<p>Stay tuned for new <a href="https://www.sqlbi.com/p/plus/">SQLBI+</a> content coming in 2026!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/blog/marco/2026/04/06/sqlbi-updates-in-april-2026/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Third Edition of the Mastering DAX Video Course &#8211; unplugged</title>
		<link>https://www.sqlbi.com/blog/marco/2026/04/01/the-third-edition-of-the-mastering-dax-video-course-unplugged/</link>
					<comments>https://www.sqlbi.com/blog/marco/2026/04/01/the-third-edition-of-the-mastering-dax-video-course-unplugged/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 06:30:09 +0000</pubDate>
				<category><![CDATA[Book]]></category>
		<category><![CDATA[DAX]]></category>
		<category><![CDATA[Video course]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=blogpost&#038;p=895316</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/unplugged.jpg" class="webfeedsFeaturedVisual" /></figure>Alberto and I recorded an unplugged session to talk about the new edition of the Mastering DAX Video Course. You can watch it above, but if you prefer a quick read, here are some thoughts. The question we heard the&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/unplugged.jpg" class="webfeedsFeaturedVisual" /></figure><p>Alberto and I recorded an unplugged session to talk about the new edition of the <a href="https://www.sqlbi.com/p/mastering-dax-video-course/">Mastering DAX Video Course</a>. You can watch it above, but if you prefer a quick read, here are some thoughts.</p>
<p>The question we heard the most over the years was some version of &#8220;where do I start?&#8221; People kept telling us that our material was great but too advanced, that you needed another book before ours. That was fair, and it pushed us to rethink the whole approach. This edition starts from zero. Well, almost. We still assume you know Power BI Desktop, but when it comes to DAX, we build everything from the ground up, at a calmer pace, with new ways to explain things like the filter context.</p>
<p>The training ended up being more than 30 hours, which is a lot. But we designed it so that you do not have to finish it. Wherever you stop, you already know something useful. If you reach halfway (chapter 10), you have a solid level for most real-world scenarios. The rest is there to further improve your skills and for when you need it.</p>
<p>We are also quite excited about the new modules on user-defined functions and calendar-based time intelligence. User-defined functions in particular will change the way people write and maintain DAX in the coming years, including how AI tools interact with DAX code. That is a topic we got into during the video, and it is worth hearing Alberto&#8217;s take on it.</p>
<p>A few things we did not include in this edition: optimization and VertiPaq internals are gone. They have <a href="https://www.sqlbi.com/books/optimizing-dax-second-edition/">their own book</a> now. Honestly, if you write good DAX and follow the basics (like filter columns, not tables), you rarely need to optimize anything. DAX is simple, but not easy. It takes practice, and the exercises (coming soon after the training ships) are there to help with that.</p>
<p>The unplugged video has a few other moments that did not make it into this recap: some about what it is like to record 30 hours of technical content in a foreign language, and some that are just fun to watch. What are you waiting for? Watch the video, and enjoy DAX!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/blog/marco/2026/04/01/the-third-edition-of-the-mastering-dax-video-course-unplugged/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mastering DAX 3rd Edition Unplugged</title>
		<link>https://www.sqlbi.com/tv/mastering-dax-3rd-edition-unplugged/</link>
					<comments>https://www.sqlbi.com/tv/mastering-dax-3rd-edition-unplugged/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 06:30:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Unplugged]]></category>
		<category><![CDATA[Video course]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=894110</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/8voXQhK4BXc/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>Alberto and I discuss the third edition of the Mastering DAX Video Course: what changed, why we redesigned it to be more accessible, and what new content we added.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/8voXQhK4BXc/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>Alberto and I discuss the third edition of the Mastering DAX Video Course: what changed, why we redesigned it to be more accessible, and what new content we added.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/mastering-dax-3rd-edition-unplugged/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DAX User-Defined Functions vs Calculation Groups</title>
		<link>https://www.sqlbi.com/tv/dax-user-defined-functions-vs-calculation-groups/</link>
					<comments>https://www.sqlbi.com/tv/dax-user-defined-functions-vs-calculation-groups/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Thu, 26 Mar 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">http://www.sqlbi.com/?post_type=video&#038;p=894335</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/LttGF0D-YBM/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>Should you use DAX user-defined functions (UDF) or calculation groups? Learn when to use either, and how they complement each other in the design of a semantic model in Power BI.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/LttGF0D-YBM/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>Should you use DAX user-defined functions (UDF) or calculation groups? Learn when to use either, and how they complement each other in the design of a semantic model in Power BI.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/dax-user-defined-functions-vs-calculation-groups/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DAX user-defined functions (UDF) vs. calculation groups</title>
		<link>https://www.sqlbi.com/articles/dax-user-defined-functions-udf-vs-calculation-groups/</link>
					<comments>https://www.sqlbi.com/articles/dax-user-defined-functions-udf-vs-calculation-groups/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 20:00:43 +0000</pubDate>
				<category><![CDATA[Analysis Services]]></category>
		<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Tabular]]></category>
		<category><![CDATA[UDF]]></category>
		<category><![CDATA[User-defined functions]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=894313</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/udf-vs-cg.jpg" class="webfeedsFeaturedVisual" /></figure>This article describes the different roles of user-defined functions and calculation groups, explaining when to use either, and how they complement each other in the design of a semantic model. The introduction of user-defined functions (UDFs) in DAX changes the&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/udf-vs-cg.jpg" class="webfeedsFeaturedVisual" /></figure><p>This article describes the different roles of user-defined functions and calculation groups, explaining when to use either, and how they complement each other in the design of a semantic model.<br />
<span id="more-894313"></span></p>
<p>The introduction of user-defined functions (UDFs) in DAX changes the way we think about code reuse. Before UDFs existed, calculation groups were the only mechanism for sharing common logic across multiple calculations. Many developers adopted calculation groups not because they were the ideal tool for code reuse, but because there was no alternative.</p>
<p>Now that user-defined functions are available, it is time to revisit this practice. User-defined functions and calculation groups serve fundamentally different purposes. Understanding the distinction between the two is essential for building well-organized, efficient semantic models.</p>
<h2>Three tools, three purposes</h2>
<p>A semantic model offers three different tools to a DAX developer, and each serves a distinct role:</p>
<ul>
<li><strong>Measures</strong> expose specific calculations to the end user in the semantic model.</li>
<li><strong>Calculation groups</strong> expose common filters or transformations that can be applied to any measure.</li>
<li><strong>User-defined functions</strong> allow a developer to write code once and reuse it everywhere, in measures, calculated columns, security roles, and also calculation groups!</li>
</ul>
<p>The key distinction is the target audience. Measures and calculation groups are visible to the report user; they appear in reports and visuals. On the other hand, user-defined functions are invisible to report users. A function is simply a tool for the developer to arrange code in the best possible way. The report user never directly sees, selects, or interacts with a function.</p>
<p>This separation leads to a clear design principle: the decision to expose a feature as a measure or as a calculation group is a user-facing decision. By contrast, the sharing of business logic is an internal implementation detail of the semantic model.</p>
<h2>When to use calculation groups</h2>
<p>Calculation groups remain the best tool when the goal is to provide the end user with a choice that applies to all measures in a visual. Two scenarios illustrate this well.</p>
<p>The first scenario is a common filter. When users need to select a filter that applies to every measure in a report, a calculation group lets them do so through a single slicer. For example, a calculation group with items like “Current Month” and “Last Quarter” allows the user to choose which period to analyze. The selected calculation item is then applied to all the measures in the visual without requiring the developer to create separate measure variants for each combination.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-128.png" width="600" /></p>
<p>The second scenario is a common transformation. When all the numbers in a visual should be divided by 1,000 or by 1,000,000 for readability, a calculation group applies that transformation to every measure at once. The user simply selects the desired scale factor, and the transformation is applied to all the measures.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-124.png" width="600" /></p>
<p>In both cases, the defining characteristic is the same: the calculation group applies a single transformation to all measures without distinction. This is precisely what makes calculation groups valuable to the report user; the same characteristic limits their flexibility when the goal is something different.</p>
<h2>A practical example: new and returning customers</h2>
<p>Consider a scenario in which the user must choose between analyzing new and returning customers. A “new customer” is one whose first purchase falls within the current filter context; a “returning customer” is one whose first purchase occurred before the current filter context. This is a good candidate for a calculation group because the same segmentation logic applies to every measure in the visual. Whether the user is viewing <em>Sales Amount</em>, <em># Orders</em>, <em>Margin</em>, or <em>Margin %</em>, the filter restricts the data to the same subset of customers.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-111.png" width="600" /></p>
<p>The user selects “New customers&#8221; or “Returning customers” from a slicer, and every measure in the visual is filtered accordingly. This is a user-facing decision: the user chooses which customer segment to analyze. A calculation group is the correct tool for this purpose:</p>
<div class="dax-code-title">Calculation item in Customers Group table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Customers Group; notranslate">
New customers = 
VAR CustomersWithNewDate =
    CALCULATETABLE (
        SUMMARIZECOLUMNS (
            Sales&#x5B;CustomerKey],
            &quot;@NewCustomerDate&quot;, 
                CALCULATE ( MIN ( Sales&#x5B;Order Date] ) )
        ),
        ALLEXCEPT ( Sales, Customer )
    )
VAR NewCustomers =
    FILTER (                              
        CustomersWithNewDate,
        &#x5B;@NewCustomerDate] IN VALUES ( &#039;Date&#039;&#x5B;Date] )
    )
VAR Result =
    CALCULATE (
        SELECTEDMEASURE (),
        NewCustomers
    )
RETURN Result
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Calculation item in Customers Group table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Customers Group; notranslate">
Returning customers = 
VAR MinDate = MIN ( &#039;Date&#039;&#x5B;Date] )
VAR CustomersWithNewDate =
    CALCULATETABLE (
        SUMMARIZECOLUMNS (
            Sales&#x5B;CustomerKey],
            &quot;@NewCustomerDate&quot;, 
                CALCULATE ( MIN ( Sales&#x5B;Order Date] ) )
        ),
        ALLEXCEPT ( Sales, Customer )
    )
VAR ExistingCustomers =
    FILTER (
        CustomersWithNewDate,
        &#x5B;@NewCustomerDate] &lt; MinDate
    )
VAR ReturningCustomers =
    SELECTCOLUMNS (
        ExistingCustomers,
        &quot;CustomerKey&quot;, Sales&#x5B;CustomerKey]
    )
VAR Result =
    CALCULATE (
        SELECTEDMEASURE (),
        KEEPFILTERS ( ReturningCustomers )
    )
RETURN Result
</pre>
<p>This example works well because the calculation item applies the same segmentation logic to every measure without distinction. The user makes a single choice, and the entire visual responds. Please note that a full description of the new and returning customers pattern is available in the <a href="https://www.daxpatterns.com/new-and-returning-customers/">New and returning customers</a> article at <a href="http://www.daxpatterns.com">www.daxpatterns.com</a>.</p>
<h2>Why calculation groups are not ideal for code reuse</h2>
<p>Despite their versatility, calculation groups are not a good tool for sharing code across multiple calculations. The reason is structural: calculation groups, like measures, do not have parameters.</p>
<p>In DAX, the typical way to pass information to shared code in a measure or calculation group is through the filter context. A developer modifies the filter context before calling a measure or applying a calculation item, and the shared code reads the information from that modified context. This approach works, but it is expensive. Transferring information through the filter context may require additional work at query time, thereby degrading performance.</p>
<p>A user-defined function accepts parameters directly. The function is expanded in the query plan, much like macros in languages such as C/C++. Because the parameters are resolved before the code is executed, the result is a more efficient query plan that does not generate additional work at execution time.</p>
<p>The performance difference may be negligible on a small model with a simple calculation. However, as the complexity of the shared logic grows, and especially when the same logic is invoked multiple times within a single query, the overhead of passing parameters through the filter context adds up. Using a function with explicit parameters avoids this overhead.</p>
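<p>The following is a minimal sketch of the difference; the table, column, measure, and function names are hypothetical. The first version reads a threshold from the filter context at query time, so the caller must inject it before evaluation; the second receives the threshold as an explicit argument, resolved when the function call is expanded in the query plan.</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
-- Filter-context approach: the caller must place the threshold
-- in the filter context (e.g., through a disconnected parameter table)
Big Sales = 
VAR MinQty = SELECTEDVALUE ( &#039;Threshold&#039;&#x5B;Min Quantity], 0 )
RETURN
    CALCULATE (
        &#x5B;Sales Amount],
        KEEPFILTERS ( Sales&#x5B;Quantity] &gt;= MinQty )
    )

-- Parameter approach: the threshold is an explicit argument,
-- resolved before the code is executed
BigSales = ( minQty : EXPR ) =&gt;
    CALCULATE (
        &#x5B;Sales Amount],
        KEEPFILTERS ( Sales&#x5B;Quantity] &gt;= minQty )
    )
</pre>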
<h2>User-defined functions for business logic</h2>
<p>User-defined functions are the primary tool for sharing and reusing code within and across models (more details here: <a href="https://www.sqlbi.com/articles/model-dependent-and-model-independent-user-defined-functions-in-dax/">Model-dependent and model-independent user-defined functions in DAX</a>). Whenever a piece of business logic is complex enough to warrant a single definition, especially logic containing parameters that might vary, or rules that should only be defined in one place, a UDF is the appropriate choice.</p>
<p>Let us look at two examples that clarify this.</p>
<p>The first scenario is encapsulating business logic. Consider the customer segmentation from the previous section. The logic that identifies new customers is business logic: it defines what “new” means. If we place that logic in a function, we can reuse it both in the calculation item and in a standalone measure. The business rule is defined once; every consumer of that rule, whether a calculation item or a measure, calls the same function. In our implementation, the business logic lives in three model-independent functions, wrapped by two model-dependent functions that are then referenced in calculation items and measures:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.NewReturning.Absolutes.CustomersWithNewDate = (
    TxCustomerKeyColumn: COLUMNREF, 
    CustomerTable: TABLEREF, 
    TxDateColumn: COLUMNREF 
) =&gt;
    CALCULATETABLE (
        SUMMARIZECOLUMNS (
            TxCustomerKeyColumn,
            &quot;@NewCustomerDate&quot;, 
                CALCULATE ( MIN ( TxDateColumn ) )
        ),
        ALLEXCEPT ( TABLEOF ( TxCustomerKeyColumn ), CustomerTable )
    )
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.NewReturning.Absolutes.NewCustomers = ( 
    TxCustomerKeyColumn: COLUMNREF, 
    CustomerTable: TABLEREF, 
    TxDateColumn: COLUMNREF, 
    DateColumn: COLUMNREF 
) =&gt; 
    FILTER (                              
        DaxPatterns.NewReturning.Absolutes.CustomersWithNewDate ( 
            TxCustomerKeyColumn, 
            CustomerTable, 
            TxDateColumn 
        ),
        &#x5B;@NewCustomerDate] IN VALUES ( DateColumn )
    )
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.NewReturning.Absolutes.ReturningCustomers = (
    TxCustomerKeyColumn: COLUMNREF, 
    CustomerTable: TABLEREF, 
    TxDateColumn: COLUMNREF, 
    DateColumn: COLUMNREF
) =&gt;
    VAR MinDate = MIN ( DateColumn )
    VAR ExistingCustomers =
        FILTER (
            DaxPatterns.NewReturning.Absolutes.CustomersWithNewDate ( 
                TxCustomerKeyColumn, 
                CustomerTable, 
                TxDateColumn 
            ),
            &#x5B;@NewCustomerDate] &lt; MinDate
        )
    VAR ReturningCustomers =
        SELECTCOLUMNS (
            ExistingCustomers,
            &quot;CustomerKey&quot;, TxCustomerKeyColumn
        )
    RETURN ReturningCustomers
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.NewCustomers = () =&gt; 
DaxPatterns.NewReturning.Absolutes.NewCustomers ( 
    Sales&#x5B;CustomerKey], 
    Customer, 
    Sales&#x5B;Order Date], 
    &#039;Date&#039;&#x5B;Date] 
)
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ReturningCustomers = () =&gt; 
DaxPatterns.NewReturning.Absolutes.ReturningCustomers ( 
    Sales&#x5B;CustomerKey], 
    Customer, 
    Sales&#x5B;Order Date], 
    &#039;Date&#039;&#x5B;Date] 
)
</pre>
<p>The calculation items can be defined using much shorter expressions that reference the model-dependent functions:</p>
<div class="dax-code-title">Calculation item in Customers Fx table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Customers Fx; notranslate">
New customers = 
CALCULATE (
    SELECTEDMEASURE (),
    Local.NewCustomers ()
)
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Calculation item in Customers Fx table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Customers Fx; notranslate">
Returning customers = 
CALCULATE (
    SELECTEDMEASURE (),
    Local.ReturningCustomers ()
)
</pre>
<p>The same user-defined functions can be used in measures:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales New Customers = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.NewCustomers ()
)
</pre>
<p>&nbsp;</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales Returning Customers = 
CALCULATE ( 
    &#x5B;Sales Amount],
    Local.ReturningCustomers ()
)
</pre>
<p>The business logic lives in exactly one place. If the definition of “new customer” changes, we update the function, and every consumer reflects the change.</p>
<p>The measures we created help us illustrate the second scenario: side-by-side visuals. Suppose we want a report that shows both the standard <em>Sales Amount</em> and the <em>Sales Amount</em> for new customers in the same visual, as two separate columns, together with two more columns for <em>Margin</em> and <em>Margin %</em>. If we rely solely on a calculation group, the selected calculation item is applied to every measure in the visual: we cannot apply the “new customers” logic to one measure (<em>Sales Amount</em>) while leaving the other two measures (<em>Margin</em> and <em>Margin %</em>) unaffected. By defining a standalone measure that calls the function, we can place both the <em>Sales Amount</em> and the <em>Sales New Customers</em> measures in the same visual without interference.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-105-scaled.png" width="700" /></p>
<p>If you are interested in another practical, step-by-step example of this approach that describes how we transform an existing calculation into a more generic function, read the article <a href="https://www.sqlbi.com/articles/creating-functions-for-the-like-for-like-dax-pattern/">Creating functions for the like-for-like DAX pattern</a>.</p>
<h2>When not to use a function</h2>
<p>One could argue that every single calculation should be written as a UDF, with measures and calculation items serving merely as function calls with the appropriate parameters. We do not suggest going to this extreme. If a measure contains a simple SUM or a straightforward subtraction, there is no benefit in wrapping it inside a function. Moving a simple operation into a function only hides the calculation and makes the code harder to read.</p>
<p>The guideline is pragmatic: use a function when the logic is complex enough to benefit from a single definition, when it contains parameters that might vary, or when the same business rule must be applied in multiple places. For simple calculations, write the code directly in the measure.</p>
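<p>For instance, a hypothetical <em>Local.SalesAmount</em> wrapper (the name is illustrative, not part of the sample model) around a plain SUMX adds a level of indirection without any reuse benefit:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
-- A simple measure is best written directly
Sales Amount = SUMX ( Sales, Sales&#x5B;Quantity] * Sales&#x5B;Net Price] )

-- Wrapping it in a function only hides the calculation (not recommended)
Local.SalesAmount = () =&gt;
    SUMX ( Sales, Sales&#x5B;Quantity] * Sales&#x5B;Net Price] )
Sales Amount = Local.SalesAmount ()
</pre>
<p>There is no parameter that varies and no rule shared across consumers, so the wrapper provides nothing in exchange for the extra indirection.</p>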
<p>For example, the calculation items in the Period calculation group may be simple enough not to require a separate function definition. Note that the actual implementation in the sample file is longer because it needs to simulate a specific “current” date:</p>
<div class="dax-code-title">Calculation item in Period table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Period; notranslate">
Current Year = 
CALCULATE ( 
    SELECTEDMEASURE (),
    PARALLELPERIOD ( &#039;Date&#039;&#x5B;Date], 0, YEAR )
)
</pre>
<div class="dax-code-title">Calculation item in Period table</div>
<pre class="brush: dax; title: ; snippet: Calculation item; table: Period; notranslate">
Last Year = 
CALCULATE ( 
    SELECTEDMEASURE (),
    PARALLELPERIOD ( &#039;Date&#039;&#x5B;Date], -1, YEAR )
)
</pre>
<h2>Conclusions</h2>
<p>User-defined functions and calculation groups are complementary tools that serve different audiences. A calculation group is a feature exposed to report users; it lets them apply a common filter or transformation to all measures in a visual. A user-defined function is invisible to the user; it is a tool for the developer to organize business logic in one place and reuse it wherever needed.</p>
<p>The design of a semantic model benefits from keeping this distinction clear. Decide what the user needs to see and interact with; that determines whether to use a measure or a calculation group. Then decide how to implement the underlying logic efficiently, and consider user-defined functions as a tool available for that implementation.</p>
<p>When business logic is complex or shared across multiple measures, place it in a function. Let measures and calculation items call that function with the appropriate parameters. The result is a semantic model that is easier to maintain, consistent in its business rules, and more efficient in its query plans.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/dax-user-defined-functions-udf-vs-calculation-groups/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mastering DAX video course, 3rd edition</title>
		<link>https://www.sqlbi.com/articles/mastering-dax-video-course-3rd-edition/</link>
					<comments>https://www.sqlbi.com/articles/mastering-dax-video-course-3rd-edition/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 24 Mar 2026 14:00:54 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=894561</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/mastering-dax3-video-course-email.png" class="webfeedsFeaturedVisual" /></figure>The third edition of the Mastering DAX video course is available! The third edition of the Mastering DAX video course is now available! The world has changed a lot since the release of the second edition. While the foundational concepts of DAX have&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/mastering-dax3-video-course-email.png" class="webfeedsFeaturedVisual" /></figure><p>The third edition of the Mastering DAX video course is available!<br />
<span id="more-894561"></span></p>
<p>The <strong>third edition of the </strong><a href="https://www.sqlbi.com/p/mastering-dax-video-course/">Mastering DAX video course</a> is now <strong>available</strong>!</p>
<div class="video-container"><iframe src="https://www.youtube.com/embed/Ra1c8IESSxg?si=jQ9_ssEngZJyAPcE&amp;rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>The world has changed a lot since the release of the second edition. While the foundational concepts of DAX have not changed, DAX has evolved as a language, incorporating user-defined functions, calendar-based time intelligence, and visual calculations over time.</p>
<p>In the meantime, Power BI is now part of the larger Microsoft Fabric ecosystem, and the adoption of Power BI and DAX keeps increasing steadily. LLMs have arrived – whether they are ready to write DAX or not is a big “it depends”, but it is inevitable that more and more DAX code will be written by LLMs in the years to come.</p>
<p>Therefore, <strong>why bother to learn DAX?</strong> Several reasons.</p>
<p>First, because DAX could be way shorter than the prompt required to generate it. Second, to validate the code generated by the machine. Third, because it is fun. Well, we are biased here!</p>
<p><strong>Why a third edition of the video course</strong>, then? Well, in this case too, there are several reasons – the same ones that guided the writing of the third edition of the <a href="https://www.sqlbi.com/books/the-definitive-guide-to-dax-third-edition/">The Definitive Guide to DAX</a> book, which we released in November 2025.</p>
<p><a href="https://www.sqlbi.com/p/mastering-dax-video-course/"><img decoding="async" style="float: right; margin: 0 0 0 10px; border: none !important;" src="https://cdn.sqlbi.com/wp-content/uploads/mdax3-launch-campaign.png" class="nozoom" alt="" width="265" height="160" /></a></p>
<p>First, <strong>new features</strong>: window functions, visual calculations, calendar-based time intelligence, and user-defined functions simply did not exist when we produced the second edition.</p>
<p>Second, to provide a <strong>smoother learning path</strong>. We completely rewrote the book first, then used it as the screenplay for the video course: same structure, same examples, different delivery. The book is precise, definitive, and detailed. The video course is more conversational, where we take the time needed to explain what is going on, often by writing the code step by step. Some will prefer the book, some will choose the video course, and others will use both! However, we start with the basics and introduce the theoretical concepts of the DAX language more gradually, in turn enabling newbies to write simple DAX formulas after the first two modules, rather than overwhelming students with all the theory before writing real-world measures. We know you want to jump into writing code ASAP!</p>
<p>We added many original illustrations to the book, and we use them in the video course as well, together with a digital whiteboard to illustrate abstract concepts more visually. We experimented with this teaching technique in our classroom courses and in certain YouTube videos; the feedback we received confirmed it was a good idea, and we applied it to the video course, too.</p>
<p>Thus, this is a completely new video course. If you were an existing student with an active license for the second edition at the time we released the third edition, you will clearly see the differences because you now have access to both video courses: the second and the third edition. If you are a new student, you will just enjoy the new approach!</p>
<p>One note about the exercises: they are not ready yet; we will add them in April 2026, starting with the first modules. We have already included all the sample files that you can use to follow the demos in the lectures, and you can play with them, too. Expect proper exercises to assess your knowledge after each module in the coming weeks.</p>
<p>Now, a few key metrics about the third edition:<br />
<img loading="lazy" decoding="async" style="float: left; margin: 0 10px 0 0; border: none !important;" src="https://cdn.sqlbi.com/wp-content/uploads/noun-video-6592288.png" alt="" class="nozoom" width="160" height="160" /></p>
<ul>
<li><strong>&gt;30 hours of recorded video lectures</strong>. This is double the duration of the content available in the second edition.</li>
<li><strong>Full coverage of all DAX features</strong> released through December 2025.</li>
<li><strong>Professionally reviewed English subtitles</strong>. Automatic translations to Arabic, Chinese (Standard), Chinese (Traditional), French, German, Italian, Japanese, Korean, Polish, Portuguese (Brazil), Russian, Spanish, Ukrainian.</li>
</ul>
<p>As mentioned above, all students with an active license for the second edition of the Mastering DAX video course automatically received access to the third edition, with the same expiration date.</p>
<p>If you do not have access to the <a href="https://www.sqlbi.com/p/mastering-dax-video-course/">Mastering DAX video course</a> yet, or if you want to extend your current expiration date, take full advantage of our <strong>launch offer</strong> with a <strong>20% discount valid until April 3, 2026</strong>. Moreover, to celebrate the 10-year anniversary of Power BI and of our video course, we are extending the 20% discount to all video courses offered by SQLBI! Hurry and make sure to take advantage of this offer now, or else you’ll have to wait for Black Friday!</p>
<p>If this is your first time enrolling in the Mastering DAX video course, we recommend you just follow the lectures in the proposed order and make sure you gain some experience by practicing your newfound skills on the exercises provided along the way.</p>
<p>As we often say, <a href="https://www.sqlbi.com/blog/alberto/2020/06/20/7-reasons-dax-is-not-easy/">DAX is simple, but it is not easy</a>. We worked hard to make sure this third edition will be enjoyed by both newbies starting from zero knowledge of DAX and DAX professionals who were early adopters of Power BI and have already written thousands of DAX measures. Everyone will be able to level up with this new video course.</p>
<p><strong>Enjoy DAX!</strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/mastering-dax-video-course-3rd-edition/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Creating functions for the like for like DAX pattern</title>
		<link>https://www.sqlbi.com/tv/creating-functions-for-the-like-for-like-dax-pattern/</link>
					<comments>https://www.sqlbi.com/tv/creating-functions-for-the-like-for-like-dax-pattern/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=893411</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/Y43N_vT-jWw/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>Create a DAX model-independent user-defined function (UDF) to easily apply the like-for-like pattern to your data.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/Y43N_vT-jWw/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>Create a DAX model-independent user-defined function (UDF) to easily apply the like-for-like pattern to your data.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/creating-functions-for-the-like-for-like-dax-pattern/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Creating functions for the like-for-like DAX pattern</title>
		<link>https://www.sqlbi.com/articles/creating-functions-for-the-like-for-like-dax-pattern/</link>
					<comments>https://www.sqlbi.com/articles/creating-functions-for-the-like-for-like-dax-pattern/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 20:00:36 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[UDF]]></category>
		<category><![CDATA[User-defined functions]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=893286</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/like-for-like.png" class="webfeedsFeaturedVisual" /></figure>This article offers a comprehensive guide to changing the like-for-like pattern into model-independent functions to enhance flexibility and simplify DAX code. DAX user-defined functions (UDFs) are a powerful tool for improving the quality of your semantic models. DAX authors with&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/like-for-like.png" class="webfeedsFeaturedVisual" /></figure><p>This article offers a comprehensive guide to changing the like-for-like pattern into model-independent functions to enhance flexibility and simplify DAX code.<br />
<span id="more-893286"></span></p>
<p>DAX user-defined functions (UDFs) are a powerful tool for improving the quality of your semantic models. DAX authors with an IT background are accustomed to creating generic code using functions. However, many DAX creators come from other backgrounds, such as statistics, business, and marketing, and they may not yet recognize the immense power that functions bring to the Power BI community.</p>
<p>In this article, we want to show practically, through an example, how to use functions wisely to improve the generalization of code and reduce the complexity of your semantic models, with the goal of sparking curiosity about user-defined functions and – in general – the world of code development.</p>
<p>We use the like-for-like comparison pattern as an example: <a href="https://www.daxpatterns.com/like-for-like-comparison/">https://www.daxpatterns.com/like-for-like-comparison/</a>. We are neither going to describe the pattern code, nor going to evaluate its quality. The goal is to show how to transform a pattern that requires manual intervention into a set of generic functions that greatly simplify the creation of new semantic models.</p>
<h2>Quick analysis of the like-for-like pattern</h2>
<p>The goal of the pattern is to show the sales amount for only the stores that were open across all the years analyzed. In the next report, you will see that several stores (Connecticut, Hawaii, and Idaho) were not open in 2022; therefore, following the pattern, they should not participate in the comparison.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-127.png" width="506" /></p>
<p>Using the code in the pattern, the report removes from the calculation the stores that were not open during the entire selected period.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-123.png" width="506" /></p>
<p>The pattern is based on two items: a table (<em>StoreStatus</em>) containing information on whether a store is open or closed each year, and the <em>Same Store Sales</em> measure, which uses the information in <em>StoreStatus</em> to limit the calculation to stores open for the entire period. Here is the code of the pattern:</p>
<div class="dax-code-title">Calculated table in Sales table</div>
<pre class="brush: dax; title: ; snippet: Calculated table; table: Sales; notranslate">
StoreStatus = 
VAR AllStores =
    CROSSJOIN (
        SUMMARIZE ( Sales, &#039;Date&#039;&#x5B;Year] ),
        ALLNOBLANKROW ( Store&#x5B;StoreKey] )
    )
VAR OpenStores =
    SUMMARIZE (
        Sales,
        &#039;Date&#039;&#x5B;Year],
        Sales&#x5B;StoreKey]
    )
RETURN
    UNION (
        ADDCOLUMNS ( OpenStores, &quot;Status&quot;, &quot;Open&quot; ),
        ADDCOLUMNS ( EXCEPT ( AllStores, OpenStores ), &quot;Status&quot;, &quot;Closed&quot; )
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">  
Same Store Sales = 
VAR OpenStores =
    CALCULATETABLE (
        FILTER (
            ALLSELECTED ( StoreStatus&#x5B;StoreKey] ),     -- Filter the stores
            CALCULATE (                                -- where the Status is
                SELECTEDVALUE ( StoreStatus&#x5B;Status] )  -- always OPEN
            ) = &quot;Open&quot;                                 --
        ),                                             --
        ALLSELECTED ( &#039;Date&#039; )                         -- Over all selected years
    )
VAR FilterOpenStores =
    TREATAS (                 -- Use OpenStores to filter
        OpenStores,           -- Store&#x5B;StoreKey]   
        Store&#x5B;StoreKey]       -- by changing its data lineage
    )
VAR Result =
    CALCULATE (
        &#x5B;Sales Amount],
        KEEPFILTERS ( FilterOpenStores )
    )
RETURN
    Result 
</pre>
<p>If you want to implement the pattern in one of your models, you need to manually adapt the code to make it work with your specific table and column names. For example, it is likely that you want to perform a like-for-like comparison on other entities, like <em>Product</em>, <em>Customer</em>, or any other entity that is relevant to your business. Needless to say, adapting the code requires you to spend some time understanding how it works to avoid any mistakes.</p>
<p>The question is simple: Is there a better way to implement the pattern and reduce the implementation to simpler steps? Thanks to UDFs, the answer is yes.</p>
<h2>The goals of using UDFs</h2>
<p>By using UDFs, we strive to obtain several benefits:</p>
<ul>
<li>Creating functions that can be used in different semantic models with minimal effort.</li>
<li>Centralizing the code, so that subsequent optimizations of the functions provide benefits to all the models using them.</li>
<li>Sharing the functions with the community, to benefit from other people’s ideas, if any.</li>
</ul>
<p>All these benefits can be achieved by simply converting the calculated table code and the measure code into functions, while paying close attention to the key distinction between model-dependent and model-independent functions. If you are not familiar with these terms, you can find more information in this article: <a href="https://www.sqlbi.com/articles/model-dependent-and-model-independent-user-defined-functions-in-dax/">Model-dependent and model-independent user-defined functions in DAX</a>.</p>
<p>Basically, a model-dependent function knows about the structure of tables and columns in your model, whereas a model-independent function is completely agnostic about the structure of the model. A model-independent function needs to receive as parameters all the columns and tables required to perform its calculation.</p>
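<p>A minimal sketch may help (the <em>DaxPatterns.Sample.RowCount</em> and <em>Local.SalesRowCount</em> names are hypothetical, used for illustration only): the model-independent function receives the table as a parameter, while the model-dependent wrapper hard-codes the reference to <em>Sales</em>:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
-- Model-independent: no reference to any object in the model
DaxPatterns.Sample.RowCount = ( anyTable : ANYREF ) =&gt;
    COUNTROWS ( anyTable )

-- Model-dependent: maps the model object to the generic function
Local.SalesRowCount = () =&gt;
    DaxPatterns.Sample.RowCount ( Sales )
</pre>
<p>Only the one-line wrapper needs to change when the code moves to a model with different table names; the generic function is copied as-is.</p>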
<p>At first glance, it may seem as though creating model-independent functions is a waste of time, a geeky thing with no real value. However, we are about to show you the opposite: thinking in terms of model-independent functions is the key to widening your view about functions and producing elegant and reusable code.</p>
<p>We could show the resulting code right away. However, for educational purposes, it is more beneficial to walk through the process of moving from the original pattern to the generic UDF step by step.</p>
<h2>Moving measures and calculated tables into functions</h2>
<p>The first step is to replace the code in both the measure and the calculated table with functions. As you can see from the following code, we just moved the entire code from both the measure and the calculated table into two functions, with no parameters:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.StoreStatus = 
    ( ) =&gt;
    VAR AllStores =
        CROSSJOIN (
            SUMMARIZE ( Sales, &#039;Date&#039;&#x5B;Year] ),
            ALLNOBLANKROW ( Store&#x5B;StoreKey] )
        )
    VAR OpenStores =
        SUMMARIZE (
            Sales,
            &#039;Date&#039;&#x5B;Year],
            Sales&#x5B;StoreKey]
        )
    RETURN
        UNION (
            ADDCOLUMNS ( OpenStores, &quot;Status&quot;, &quot;Open&quot; ),
            ADDCOLUMNS ( EXCEPT ( AllStores, OpenStores ), &quot;Status&quot;, &quot;Closed&quot; )
        )
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.SameStoreSales = 
    () =&gt;
    VAR OpenStores =
        CALCULATETABLE (
            FILTER (
                ALLSELECTED ( StoreStatus&#x5B;StoreKey] ),     -- Filter the stores
                CALCULATE (                                -- where the Status is
                    SELECTEDVALUE ( StoreStatus&#x5B;Status] )  -- always OPEN
                ) = &quot;Open&quot;                                 --
            ),                                             --
            ALLSELECTED ( &#039;Date&#039; )                         -- Over all selected years
        )
    VAR FilterOpenStores =
        TREATAS (                 -- Use OpenStores to filter
            OpenStores,           -- Store&#x5B;StoreKey]   
            Store&#x5B;StoreKey]       -- by changing its data lineage
        )
    VAR Result =
        CALCULATE (
            &#x5B;Sales Amount],
            KEEPFILTERS ( FilterOpenStores )
        )
    RETURN
        Result
</pre>
<p>The function names start with <em>Local</em> to identify them as model-dependent functions. They are model-dependent because throughout the DAX code, we use column and table names that are present in the model. If we did not do that, then moving this code to another model, where table and column names are likely to differ, would invalidate the DAX code.</p>
<h2>Creating model-independent functions</h2>
<p>The next step is to split each of the functions into two: a model-independent function that does not reference any object in the semantic model, and a model-dependent function that calls the model-independent function by passing the required parameters.</p>
<p>This step is highly relevant because it separates model details from business logic. We do so by replacing every column and table name in the UDF with a parameter. The model-independent functions are prefixed with <em>DaxPatterns.LikeForLike</em>, because this is the name we want to use for the library.</p>
<p>Let us start with the <em>StoreStatus</em> function, which computes the table with the open/closed status of each store for each year:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.LikeForLike.StoreStatus = 
    (
        dateYearNumberColumn : ANYREF,
        storeColumn : ANYREF,
        salesTable : ANYREF
    ) =&gt;
    VAR AllStores =
        CROSSJOIN (
            SUMMARIZE ( salesTable, dateYearNumberColumn ),
            ALLNOBLANKROW ( storeColumn )
        )
    VAR OpenStores =
        SUMMARIZE (
            salesTable,
            dateYearNumberColumn,
            storeColumn
        )
    
    RETURN
        UNION (
            ADDCOLUMNS ( OpenStores, &quot;Status&quot;, &quot;Open&quot; ),
            ADDCOLUMNS ( EXCEPT ( AllStores, OpenStores ), &quot;Status&quot;, &quot;Closed&quot; )
        )
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.StoreStatus = ( ) =&gt; 
    DaxPatterns.LikeForLike.StoreStatus ( &#039;Date&#039;&#x5B;Year], Sales&#x5B;StoreKey], Sales )
</pre>
<p>The business logic is entirely included in the <em>DaxPatterns.LikeForLike.StoreStatus</em> function, which receives as parameters the columns and tables required to compute the table. This UDF would work in any model, because it is model-independent. However, for the function to be useful, it must be called with the right set of parameters. This step is accomplished by the <em>Local.StoreStatus</em> function, which just executes the mapping between the model and the model-independent function, without adding any business logic.</p>
<p>The calculated table definition just calls the model-dependent function:</p>
<div class="dax-code-title">Calculated table</div>
<pre class="brush: dax; title: ; snippet: Calculated table; notranslate">
StoreStatus = Local.StoreStatus ()
</pre>
<p>In a very similar way, we split the second function into two. The only additional detail is that the function accepts the measure to compute as an argument. Indeed, measure names such as <em>Sales Amount</em> are model-dependent details and cannot be part of a model-independent function.</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.LikeForLike.ComputeForSameStore = 
    (
        storeStatusKeyColumn : ANYREF,
        storeStatusStatusColumn : ANYREF,
        storeKeyColumn : ANYREF,
        dateTable : ANYREF,
        formulaExpr : EXPR
    ) =&gt; 
    VAR OpenStores =
        CALCULATETABLE (
            FILTER (
                ALLSELECTED ( storeStatusKeyColumn ),          -- Filter the stores
                CALCULATE (                                    -- where the Status is
                    SELECTEDVALUE ( storeStatusStatusColumn )  -- always OPEN
                ) = &quot;Open&quot;                                     --
            ),                                                 -- 
            ALLSELECTED ( dateTable )                          -- Over all selected years
        )
    VAR FilterOpenStores =
        TREATAS (                  -- Use OpenStores to filter
            OpenStores,            -- the dimension store key
            storeKeyColumn         -- by changing its data lineage
        )
    VAR Result =
        CALCULATE (
            formulaExpr,
            KEEPFILTERS ( FilterOpenStores )
        )
    RETURN
        Result
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ComputeForSameStore = 
    (
        formulaExpr : EXPR
    ) =&gt;
    DaxPatterns.LikeForLike.ComputeForSameStore ( 
        StoreStatus&#x5B;StoreKey],
        StoreStatus&#x5B;Status],
        Store&#x5B;StoreKey],
        &#039;Date&#039;,
        formulaExpr
    )
</pre>
<p>The <em>Same Store Sales</em> measure becomes much simpler, because it just needs to call <em>Local.ComputeForSameStore</em> with the right parameter:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">  
Same Store Sales = Local.ComputeForSameStore ( &#x5B;Sales Amount] )
</pre>
<p>The advantage of creating model-independent functions is that their business logic can now be copied to any model. In each model, you can create the <em>Local</em> function to map the corresponding columns and table names, but most of the work needs no refactoring.</p>
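<p>For example, in a hypothetical model where stores are described by a <em>Shop</em> table and a <em>ShopStatus</em> status table (illustrative names, not part of the sample model), only the local mapping function needs to be rewritten; the model-independent function is reused as-is:</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ComputeForSameStore = 
    (
        formulaExpr : EXPR
    ) =&gt;
    DaxPatterns.LikeForLike.ComputeForSameStore ( 
        ShopStatus&#x5B;ShopKey],
        ShopStatus&#x5B;Status],
        Shop&#x5B;ShopKey],
        &#039;Date&#039;,
        formulaExpr
    )
</pre>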
<h2>Creating more generic model-independent functions</h2>
<p>Now that we have the two model-independent functions, we can observe that the business logic of the like-for-like pattern could work not only for stores, but for any entity – for example, it could work for products. If you carefully think about it, the only differences in terms of business logic between products and stores are the column and table names. However, because these details are now parameters of the function, we can create a more generic function that accepts any entity rather than just stores.</p>
<p>While producing this new version of the functions, we also observe that products are neither open nor closed. Stores can be open or closed, but products can be active or inactive. Therefore, we change the terminology in the code, shifting from the semantically meaningful Open and Closed to the more generic terms Active and Inactive.</p>
<p>Despite this looking like a small detail, it is not. Choosing the correct names reflects the clear intention of moving from the particular to the generic. The more generic our functions are, the more reusable they will be.</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.LikeForLike.ComputeForSameEntity = 
    (
        entityStatusKeyColumn : ANYREF,
        entityStatusStatusColumn : ANYREF,
        entityKeyColumn : ANYREF,
        dateTable : ANYREF,
        formulaExpr : EXPR
    ) =&gt; 
    VAR ActiveEntities =
        CALCULATETABLE (
            FILTER (
                ALLSELECTED ( entityStatusKeyColumn ),          -- Filter the entities
                CALCULATE (                                     -- where the Status is
                    SELECTEDVALUE ( entityStatusStatusColumn )  -- always ACTIVE
                ) = &quot;Active&quot;                                    --
            ),                                                  -- 
            ALLSELECTED ( dateTable )                           -- Over all selected years
        )
    VAR FilterActiveEntities =
        TREATAS (                  -- Use ActiveEntities to filter
            ActiveEntities,        -- the dimension entity key
            entityKeyColumn        -- by changing its data lineage
        )
    VAR Result =
        CALCULATE (
            formulaExpr,
            KEEPFILTERS ( FilterActiveEntities )
        )
    RETURN
        Result
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
DaxPatterns.LikeForLike.EntityStatus = 
    (
        dateYearNumberColumn : ANYREF,
        entityColumn : ANYREF,
        transactionTable : ANYREF
    ) =&gt;
    VAR AllEntities =
        CROSSJOIN (
            SUMMARIZE ( transactionTable, dateYearNumberColumn ),
            ALLNOBLANKROW ( entityColumn )
        )
    VAR ActiveEntities =
        SUMMARIZE (
            transactionTable,
            dateYearNumberColumn,
            entityColumn
        )
    RETURN
        UNION (
            ADDCOLUMNS ( ActiveEntities, &quot;Status&quot;, &quot;Active&quot; ),
            ADDCOLUMNS ( EXCEPT ( AllEntities, ActiveEntities ), &quot;Status&quot;, &quot;Inactive&quot; )
        )
</pre>
<h2>Computing like-for-like at the Product level</h2>
<p>Now that the model-independent functions are entirely agnostic about both the model details and the entity details, we can implement the like-for-like pattern at the <em>Product</em> level by just creating two local functions that instantiate the parameters of the model-independent functions appropriately:</p>
<div class="dax-code-title">Calculated table</div>
<pre class="brush: dax; title: ; snippet: Calculated table; notranslate">
ProductStatus = Local.ProductStatus ()
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ProductStatus = 
    () =&gt; 
    DaxPatterns.LikeForLike.EntityStatus ( &#039;Date&#039;&#x5B;Year], Product&#x5B;ProductKey], Sales )
</pre>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ComputeForSameProduct = 
    (
        formulaExpr : EXPR
    ) =&gt;
    DaxPatterns.LikeForLike.ComputeForSameEntity ( 
        ProductStatus&#x5B;ProductKey],
        ProductStatus&#x5B;Status],
        Product&#x5B;ProductKey],
        &#039;Date&#039;,
        formulaExpr
    )
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Same Product Sales = Local.ComputeForSameProduct ( &#x5B;Sales Amount] )
</pre>
<p>Using the <em>Same Product Sales</em> measure, users can easily perform like-for-like comparisons at the <em>Product</em> level rather than at the <em>Store</em> level.</p>
<p>The most relevant point is that there is no need to learn the implementation details of the pattern. To implement it, a user or developer only needs to know how to pass the correct parameters to the model-independent functions, thereby minimizing friction in subsequent implementations.</p>
<p>Finally, if new optimizations or functions are introduced in DAX and the code needs to be revised, it suffices to update the model-independent functions: all the measures that use the model-dependent functions automatically benefit from the new features. The code is centralized in functions that are agnostic to model details, clearly separating the business logic from the local details.</p>
<h2>Conclusions</h2>
<p>User-defined functions are a great feature in DAX. However, they must be used correctly to maximize their benefits for developers. Whenever you develop code, you start with a measure, because it is easy to modify and debug. However, once the code runs fine, you should ask yourself whether it can be moved to a more generic function so that you can use the same logic elsewhere. During this process, try to think in terms of model-dependent and model-independent functions.</p>
<p>Not every measure or piece of DAX code will benefit from this method. Nonetheless, improving your ability to abstract from your context and to reason at a higher level will make your DAX code more elegant and easier to maintain.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/creating-functions-for-the-like-for-like-dax-pattern/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Real estate with Synoptic Panel by OKVIZ</title>
		<link>https://www.sqlbi.com/tv/real-estate-with-synoptic-panel-by-okviz/</link>
					<comments>https://www.sqlbi.com/tv/real-estate-with-synoptic-panel-by-okviz/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Sun, 08 Mar 2026 11:30:00 +0000</pubDate>
				<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=893410</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/QejXegBxb9E/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>Let’s see how companies manage flexible workspace allocation with Synoptic Panel in Power BI, mapping office seats, verifying availability, moving across floors, and connecting the layout to live reservation data using custom SVG floor plans derived from AutoCAD files. Steps&#8230;]]></description>
					<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/QejXegBxb9E/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>Let’s see how companies manage <strong>flexible workspace allocation</strong> with <strong>Synoptic Panel</strong> in Power BI, mapping office seats, verifying availability, moving across floors, and connecting the layout to live reservation data using custom SVG floor plans derived from AutoCAD files.</p>
<p><strong>Steps in the case study:</strong></p>
<ul>
<li>Convert DWG office plans into SVG</li>
<li>Map seat/workstation IDs to data in the model</li>
<li>Manage multi-floor layouts with dynamic map selection</li>
<li>Enable tooltips, interactions, and linked visuals</li>
</ul>
<p><strong>Read the full case study here:</strong><br />
https://okviz.com/usecase/workspace-synoptic-panel/</p>
<p><strong>Try Synoptic Panel:</strong><br />
https://okviz.com/synoptic-panel/</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/real-estate-with-synoptic-panel-by-okviz/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SQLBI+ updates in March 2026</title>
		<link>https://www.sqlbi.com/blog/marco/2026/03/04/sqlbi-updates-in-march-2026/</link>
					<comments>https://www.sqlbi.com/blog/marco/2026/03/04/sqlbi-updates-in-march-2026/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 08:32:28 +0000</pubDate>
				<category><![CDATA[SQLBI+]]></category>
		<category><![CDATA[Visual calculations]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=blogpost&#038;p=893565</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/sqlbi-plus-understanding-visual-calcs.png" class="webfeedsFeaturedVisual" /></figure>We released a new course for SQLBI+ subscribers: Understanding Visual Calculations in DAX. This is not an introduction to visual calculations for users. The goal is to explain the details of implementing visual calculations for experienced DAX developers, including new&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/sqlbi-plus-understanding-visual-calcs.png" class="webfeedsFeaturedVisual" /></figure><p>We released a new course for <a href="https://www.sqlbi.com/p/plus/"><strong>SQLBI+</strong></a> subscribers: <strong><a href="https://www.sqlbi.com/learn/understanding-visual-calculations-in-dax/">Understanding Visual Calculations in DAX</a></strong>. This is not an introduction to visual calculations for users. The goal is to explain the details of implementing visual calculations for experienced DAX developers, including new concepts such as visual shape and visual context.</p>
<p><img loading="lazy" decoding="async" style="float: left; margin: 0 10px 0 0; border: none !important;" src="https://cdn.sqlbi.com/wp-content/uploads/sqlbi-plus-understanding-visual-calcs.png" alt="" width="237" height="134" /></p>
<p>The training is reference material for model and report developers who want to understand how visual calculations work internally, whether for troubleshooting or to build better semantic models that delegate to visual calculations the operations that do not belong in a centralized semantic model, such as most report-specific formatting operations.</p>
<p><img loading="lazy" decoding="async" style="float: right; margin: 0 0 0 0; border: none !important;" src="https://cdn.sqlbi.com/wp-content/uploads/noun-video-6592288.png" alt="" width="150" height="150" /></p>
<p>The course includes over <a href="https://www.sqlbi.com/learn/understanding-visual-calculations-in-dax/">2.5 hours of videos</a> and a <a href="https://www.sqlbi.com/whitepapers/understanding-visual-calculations-in-dax/">white paper</a> in PDF format that should be used as companion content. The first draft of the white paper was released in 2024; we have now finalized and updated the document to align with the latest updates to visual calculation functions. The video course was produced recently and includes more practical examples of the user interface, whereas the white paper has the same structure and content but does not provide user interface instructions. Combining both resources is usually the best idea!</p>
<p>Visual calculations are a Power BI feature that is not available in other client tools, such as Excel. Other content we produced includes an introductory description of visual calculations: the latest book (<a href="https://www.sqlbi.com/books/the-definitive-guide-to-dax-third-edition/">The Definitive Guide to DAX, 3rd edition</a>) and the video course (<a href="https://www.sqlbi.com/p/mastering-dax-video-course/">Mastering DAX</a>; the third edition will be available by the end of March 2026). This SQLBI+ content goes deeper and complements the DAX book and video course.</p>
<p>The course is organized into the following modules:</p>
<ul>
<li>Introducing visual calculations
<ul>
<li>Visual calculations and window functions</li>
</ul>
</li>
<li>Understanding the visual shape
<ul>
<li>Visual calculations are new columns in the virtual table</li>
<li>Understanding densification</li>
</ul>
</li>
<li>Understanding the visual context
<ul>
<li>Understanding EXPAND, COLLAPSE, EXPANDALL, and COLLAPSEALL</li>
<li>Navigating the lattice of the virtual table</li>
<li>Accessing the virtual table through ROWS and COLUMNS</li>
<li>Understanding the unique behaviors of the visual context</li>
<li>Understanding ROWS and COLUMNS used together</li>
<li>Understanding reset and direction</li>
<li>Using CALCULATE in visual calculations</li>
</ul>
</li>
<li>Understanding blank handling</li>
<li>Understanding visual calculation functions
<ul>
<li>Understanding PREVIOUS, NEXT, FIRST, LAST</li>
<li>Understanding LOOKUP, LOOKUPWITHTOTALS, and auto-expand</li>
<li>Understanding RUNNINGSUM</li>
<li>Understanding ISATLEVEL</li>
<li>Understanding MOVINGAVERAGE</li>
<li>Understanding RANGE</li>
</ul>
</li>
<li>Visual calculations and calculation groups</li>
<li>Visual calculation examples
<ul>
<li>Computing the moving average over the last six months</li>
<li>Computing growth over the same period last year</li>
<li>Comparing sales over the average of the siblings</li>
<li>Computing year-to-date</li>
<li>Computing the Pareto / ABC Class</li>
</ul>
</li>
<li>Conclusions</li>
</ul>
<p>Stay tuned for new <a href="https://www.sqlbi.com/p/plus/">SQLBI+</a> content coming later in 2026, and <strong>thank you to all SQLBI+ subscribers</strong> for their support!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/blog/marco/2026/03/04/sqlbi-updates-in-march-2026/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Debug DAX variables using TOJSON and TOCSV</title>
		<link>https://www.sqlbi.com/tv/debug-dax-variables-using-tojson-and-tocsv/</link>
					<comments>https://www.sqlbi.com/tv/debug-dax-variables-using-tojson-and-tocsv/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=893037</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/FW_NrT7lt_g/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to use the TOJSON and TOCSV functions to inspect the content of intermediate table variables when debugging a DAX measure.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/FW_NrT7lt_g/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to use the TOJSON and TOCSV functions to inspect the content of intermediate table variables when debugging a DAX measure.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/debug-dax-variables-using-tojson-and-tocsv/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Debugging DAX variables using TOJSON and TOCSV</title>
		<link>https://www.sqlbi.com/articles/debugging-dax-variables-using-tojson-and-tocsv/</link>
					<comments>https://www.sqlbi.com/articles/debugging-dax-variables-using-tojson-and-tocsv/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 23 Feb 2026 20:00:16 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=892955</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image6-81-scaled.png" class="webfeedsFeaturedVisual" /></figure>This article describes how to use the TOJSON and TOCSV functions to inspect the content of intermediate table variables when debugging a DAX measure. In a previous article, Debugging DAX measures in Power BI, we described several techniques to find errors&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image6-81-scaled.png" class="webfeedsFeaturedVisual" /></figure><p>This article describes how to use the TOJSON and TOCSV functions to inspect the content of intermediate table variables when debugging a DAX measure.<br />
<span id="more-892955"></span></p>
<p>In a previous article, <a href="https://www.sqlbi.com/articles/debugging-dax-measures-in-power-bi/">Debugging DAX measures in Power BI</a>, we described several techniques to find errors in a DAX formula. The most basic approach, one that requires no external tools, is to temporarily change the RETURN statement of a measure so that it returns the value of an intermediate variable instead of the final result. When the variable contains a scalar value such as a number or a string, this is straightforward: you change the RETURN, observe the result in the report, and compare it with your expectations. We see this in the following example:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [6]; title: ; snippet: Measure; table: Sales; notranslate">
Delta Avg 2 =
VAR CurrentValue = &#x5B;Avg Transaction]
VAR ReferenceValue = CALCULATE ( &#x5B;Avg Transaction], ALLSELECTED ( ) )
VAR CurrentDelta = CurrentValue - ReferenceValue
VAR Result = DIVIDE ( CurrentDelta, ReferenceValue )
RETURN ReferenceValue
</pre>
<p>That technique remains fully valid: read the previous article if you are not familiar with it yet.</p>
<p>The situation becomes more complex when the variable you want to inspect contains a table. A measure can only return a scalar value, so you cannot simply return a table variable. In the previous article, we used CONCATENATEX to convert a table into a string by manually specifying which columns to include and how to format them. However, this approach requires writing a specific CONCATENATEX expression for each table you want to inspect, choosing the columns, defining the separator, and adjusting the format every time. This is time-consuming, especially during an active debugging session where you may need to inspect several variables in quick succession.</p>
<p>A full-featured DAX debugger is available in a commercial tool, <a href="https://tabulareditor.com/">Tabular Editor 3</a>, which provides step-by-step execution and variable inspection. Another tool you may find useful is <a href="https://daxstudio.org/">DAX Studio</a>, which is free. However, not all developers have access to these tools, and even those who do sometimes need a quick, lightweight technique that works directly in Power BI without opening another tool.</p>
<p>TOJSON and TOCSV offer exactly that. These two functions convert a table into a string (JSON or CSV format, respectively) without requiring you to specify the columns or the format. You pass the table variable, and the function produces a complete textual representation of its content. The result is a scalar string that a measure can return and that a visual can display in a report.</p>
<p>It is important to highlight that debugging a measure often requires inspecting its value in a specific filter context. For example, you might notice that a matrix shows an incorrect value for a specific cell, such as a particular combination of year and product category, or for one of the subtotals. In that case, displaying the debugging output in a card visual would not be sufficient, because a card only shows the value in the filter context of the visual, forcing you to use external slicers and the filter pane to reproduce the filter combination you want to investigate. A more effective approach is to use the debugging measure directly in a matrix, so that you can inspect the content of the table variable within the filter context where the incorrect result appears. This is a typical scenario: the total does not correspond to the expected value, and you need to see what the intermediate table contains for that specific cell.</p>
<p>This article describes the syntax and practical use of TOJSON and TOCSV for this purpose. We illustrate the technique with several examples and discuss the limitations you should be aware of.</p>
<h2>Inspecting table variables in a measure</h2>
<p>Consider the following scenario. A measure builds an intermediate table in a variable (for example, using ADDCOLUMNS, FILTER, or SUMMARIZE) and then aggregates it to produce a final result, but the numbers are not what you expect. Here is the measure we use in this example, with two errors that are highlighted in the comments:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [13,14,19,20]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX Start = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
        ADDCOLUMNS ( 
            VALUES ( &#039;Date&#039;&#x5B;Date] ),
            &quot;@DayAmount&quot;, &#x5B;Sales Amount] 
        )
    VAR TargetDatesAmount =
        FILTER ( 
            DailySales, 
            -- Error 1: it should be TargetAmount instead of &#x5B;Max Daily Amount]
            &#x5B;@DayAmount] == &#x5B;Max Daily Amount]
                &amp;&amp; NOT ISBLANK ( &#x5B;@DayAmount] ) 
        )
    VAR TargetDates =
        SELECTCOLUMNS ( 
            -- Error 2: it should be TargetDatesAmount instead of DailySales
            DailySales,
            &#039;Date&#039;&#x5B;Date]
        )
    VAR Result =
        IF ( 
            COUNTROWS ( TargetDates ) = 1,
            FORMAT ( TargetDates, &quot;mm/dd/yyyy&quot; ),
            &quot;Too many dates&quot;
        )
    RETURN Result
)
</pre>
<p>The <em>Date MAX Start</em> measure should return the date when the <em>Max Daily Amount</em> was achieved, but it is not working.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-126.png" width="550" /></p>
<p>We suspect that the intermediate table might contain unexpected rows or incorrect values, but we cannot see it directly – in the wild, you will not have the comments saying where the error is! We need a way to peek inside that variable, in the specific filter context where the result is wrong. The following measure inspects the <em>TargetDatesAmount</em> variable by using CONCATENATEX:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10,11,12,13,14]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX ConcatenateX = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Start
-- ...
    RETURN 
        CONCATENATEX ( 
            TOPN ( 10, TargetDatesAmount ), 
            FORMAT ( &#039;Date&#039;&#x5B;Date], &quot;mm/dd/yyyy&quot; )
                &amp; &quot;: &quot; &amp; &#x5B;@DayAmount] &amp; &quot;, &quot;
        )
)
</pre>
<p>The result of <em>Date MAX ConcatenateX</em> shows that the values in <em>TargetDatesAmount</em> are not being filtered.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-122-scaled.png" width="1024" /></p>
<p>We would see the same result if we iterated <em>DailySales</em> instead of <em>TargetDatesAmount</em> in CONCATENATEX, which indicates that the filter is ineffective. As described in the measure comments, we should replace the <em>Max Daily Amount</em> comparison with <em>TargetAmount</em> in the FILTER iteration. Similarly, by changing the CONCATENATEX iteration, we can see that <em>TargetDates</em> does not contain the filtered rows even once we fix the first bug, because it is built from <em>DailySales</em> rather than <em>TargetDatesAmount</em>, as highlighted in the comments. Inspecting the variables helps us identify and fix these errors.</p>
<p>As we described in the introduction, the CONCATENATEX approach works, but it requires writing a custom expression for each table you want to inspect. With TOJSON and TOCSV, you can achieve the same result with a single function call, and there is no need to specify the columns.</p>
<h2>Converting a table to a string with TOCSV</h2>
<p>TOCSV converts a table into a string, formatted as comma-separated values. Because the result is a string, it can be returned by a measure and displayed in a report visual.</p>
<p>The syntax of TOCSV is:</p>
<pre class="brush: dax; title: ; notranslate">
TOCSV ( &lt;Table&gt;, &#x5B;&lt;MaxRows&gt;], &#x5B;&lt;Delimiter&gt;], &#x5B;&lt;IncludeHeaders&gt;] )
</pre>
<p>The first argument is the table to convert. The optional <em>MaxRows</em> parameter controls how many rows are included in the output; its default value is 10. The <em>Delimiter</em> parameter specifies the column separator (the default is a comma), and <em>IncludeHeaders</em> determines whether the first line contains column names (the default is TRUE). To inspect a table variable, we temporarily change the RETURN expression of the measure so that it returns the TOCSV output instead of the original result, thus reducing the code from 5 lines using CONCATENATEX to just one line:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX TOCSV = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Start
-- ...
    RETURN 
        TOCSV ( TargetDatesAmount, 3 )
)
</pre>
<p>The output is a plain text representation of the table content. Each row appears on a separate line, and columns are separated by the chosen delimiter. This is typically enough to verify whether the table contains the expected rows and values. We reduced the output to three rows to limit the vertical space used in the following screenshot.<img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-110.png" width="650" /></p>
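<p>As a hypothetical variation (the argument values are illustrative, not part of the example above), the optional parameters can change the delimiter and suppress the header line:</p>
<pre class="brush: dax; title: ; notranslate">
TOCSV ( TargetDatesAmount, 3, &quot;|&quot;, FALSE )   -- 3 rows, pipe-delimited, no header line
</pre>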
<h2>Converting a table to a string with TOJSON</h2>
<p>TOJSON works similarly to TOCSV, but it produces a JSON-formatted string instead of a string formatted as comma-separated values. The syntax is simpler:</p>
<pre class="brush: dax; title: ; notranslate">
TOJSON ( &lt;Table&gt;, &#x5B;&lt;MaxRows&gt;] )
</pre>
<p>The only optional parameter is <em>MaxRows</em>, which defaults to 10, the same as TOCSV. TOJSON does not support parameters for delimiters or header control because the JSON format has a fixed structure:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX TOJSON = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Start
-- ...
    RETURN 
        TOJSON ( TargetDatesAmount, 3 )
)
</pre>
<p>The JSON output contains three elements: a “header” array with column names, a “rowCount” field indicating the total number of rows in the original table (regardless of <em>MaxRows</em>), and a “data” array with the actual row values.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-104.png" width="981" /></p>
<p>The “rowCount” field in the JSON output is particularly useful: it tells you how many rows the table actually contains, even when only the first few rows are displayed, as is the case in the screenshot above. This information is not available in the TOCSV output.</p>
<h2>Choosing between TOCSV and TOJSON</h2>
<p>Both functions serve the same purpose in a debugging context. The choice between them largely comes down to personal preference and readability.</p>
<p>TOCSV produces a more compact output that is easier to read at a glance, especially for small tables with a few columns. TOJSON produces a more structured output that includes the row count and is easier to parse programmatically. In our experience, TOCSV is more practical for quick visual inspection during debugging. TOJSON is more useful when you need to know the total row count of the original table, or when you plan to copy the output into another tool for further analysis.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-89-scaled.png" width="1024" /></p>
<p>Another difference between the two functions appears when the table is empty: TOCSV always returns a result, showing the column names in that case, whereas TOJSON does not return any result, which makes it hard to inspect the content of an empty table. (Thank you, Jan S, for reporting this!)</p>
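<p>A quick way to observe this behavior is to force an empty table with a filter condition that removes every row (the measure name here is just for illustration). TOCSV returns the header line only, whereas replacing TOCSV with TOJSON in the same expression returns no result:</p>
<pre class="brush: dax; title: ; notranslate">
Debug Empty = TOCSV ( FILTER ( VALUES ( &#039;Date&#039;&#x5B;Date] ), FALSE () ) )
</pre>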
<h2>Understanding the <em>MaxRows</em> limitation</h2>
<p>Both TOCSV and TOJSON default to returning only 10 rows. This is an intentional design choice: converting a large table to a string can produce a very long text, which is both hard to read and potentially expensive to compute. The default limit of 10 rows keeps the output manageable.</p>
<p>For debugging purposes, 10 rows are often sufficient. When we are verifying the structure of an intermediate table (like checking which columns are present, whether the values look correct, and whether unexpected rows appear), the first few rows usually provide enough evidence to identify the problem. We used only 3 rows in the example to maximize the visibility of the screenshots in the article, but we usually keep the default of 10.</p>
<p>However, there are scenarios where 10 rows are not enough. If the issue you are investigating only manifests further down in the table, or if you need to verify the complete content, you can increase the <em>MaxRows</em> parameter.</p>
<p>Be mindful that increasing <em>MaxRows</em> significantly can produce a very long string. A visual may truncate the output, and the measure evaluation can become slower. For most debugging sessions, a value between 10 and 20 is a reasonable range. If you need to inspect a table with hundreds or thousands of rows, consider using DAX Studio or Tabular Editor 3 instead, which are better suited for exploring large datasets.</p>
<p>It is also important to note that the sort order of the rows returned by TOCSV and TOJSON cannot be controlled directly. The functions return rows in whatever order the engine provides, which may not be deterministic. Indeed, the previous example comparing TOCSV and TOJSON outputs shows three days in January (January 3<sup>rd</sup>, 4<sup>th</sup>, and 11<sup>th</sup>) that are not the first three days available in the month. If row order is important for your investigation, you should sort the table explicitly before passing it to TOCSV or TOJSON. For example, you might wrap it in a TOPN expression with the desired sort order:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10,11,12]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX TOCSV Sorted = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Start
-- ...
    RETURN 
        TOCSV ( 
            TOPN ( 3, TargetDatesAmount, &#039;Date&#039;&#x5B;Date], DESC )
        )
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10,11,12]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX TOJSON Sorted = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Start
-- ...
    RETURN 
        TOJSON ( 
            TOPN ( 3, TargetDatesAmount, &#039;Date&#039;&#x5B;Date], DESC )
        )
)
</pre>
<p>This way, the result includes the last three days with sales for each month, even though they are not necessarily sorted within the output produced by TOCSV and TOJSON (you can control the order in CONCATENATEX, which requires additional parameters).<img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-81-scaled.png" width="1024" /></p>
<h2>A practical debugging workflow</h2>
<p>Let us describe a typical debugging workflow that uses TOCSV to inspect intermediate variables.</p>
<p>Suppose we are developing a measure that computes a result through several steps, each stored in a variable. The result is not what we expect. Rather than guessing which step is wrong, we can systematically inspect each table variable by temporarily changing the RETURN statement. In the previous examples, we have seen several approaches to investigating the <em>TargetDatesAmount</em> variable by using CONCATENATEX, TOCSV, and TOJSON. Looking at the content produced, we locate and fix the first error, now testing the following measure:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [13,14]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX Step 2 = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
        ADDCOLUMNS ( 
            VALUES ( &#039;Date&#039;&#x5B;Date] ),
            &quot;@DayAmount&quot;, &#x5B;Sales Amount] 
        )
    VAR TargetDatesAmount =
        FILTER ( 
            DailySales, 
            -- Error 1 fixed
            &#x5B;@DayAmount] == TargetAmount
                &amp;&amp; NOT ISBLANK ( &#x5B;@DayAmount] ) 
        )
    VAR TargetDates =
        SELECTCOLUMNS ( 
            -- Error 2: it should be TargetDatesAmount instead of DailySales
            DailySales,
            &#039;Date&#039;&#x5B;Date]
        )
    VAR Result =
        IF ( 
            COUNTROWS ( TargetDates ) = 1,
            FORMAT ( TargetDates, &quot;mm/dd/yyyy&quot; ),
            &quot;Too many dates&quot;
        )
    RETURN 
        TOCSV ( TargetDatesAmount, 3 )
)
</pre>
<p>With this version of the measure, the <em>TargetDatesAmount</em> variable now has only one row per month and year, which indicates that the filter is working correctly.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-69.png" width="650" /></p>
<p>At this point, if we look at the result, we still see the same incorrect “Too many dates” message we had in the beginning. We must investigate further, so we return TOCSV applied to the next variable, <em>TargetDates</em>:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [10]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX Step 3 = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Step 2
-- ...
    RETURN 
        TOCSV ( TargetDates, 3 )
)
</pre>
<p>The result no longer includes <em>Sales Amount</em> computed for each date. However, we see three dates instead of one in each cell. If we used TOJSON, we would see a larger “rowCount” in each cell. The filter we fixed in <em>TargetDatesAmount</em> does not apply to the next step. Why? By reviewing the code more closely, we notice that we referenced <em>DailySales</em> again instead of <em>TargetDatesAmount</em> when iterating over the table in SELECTCOLUMNS. We fix the reference and test the code again:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [19,20]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX Step 4 = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
        ADDCOLUMNS ( 
            VALUES ( &#039;Date&#039;&#x5B;Date] ),
            &quot;@DayAmount&quot;, &#x5B;Sales Amount] 
        )
    VAR TargetDatesAmount =
        FILTER ( 
            DailySales, 
            -- Error 1 fixed
            &#x5B;@DayAmount] == TargetAmount
                &amp;&amp; NOT ISBLANK ( &#x5B;@DayAmount] ) 
        )
    VAR TargetDates =
        SELECTCOLUMNS ( 
            -- Error 2 fixed
            TargetDatesAmount,
            &#039;Date&#039;&#x5B;Date]
        )
    VAR Result =
        IF ( 
            COUNTROWS ( TargetDates ) = 1,
            FORMAT ( TargetDates, &quot;mm/dd/yyyy&quot; ),
            &quot;Too many dates&quot;
        )
    RETURN 
        TOCSV ( TargetDates, 3 )
)
</pre>
<p>At this point, the <em>TargetDates</em> variable has only one row per cell.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image8-57.png" width="550" /></p>
<p>We can now restore <em>Result</em> as the RETURN expression and see the correct report:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [9]; title: ; snippet: Measure; table: Sales; notranslate">
Date MAX Fixed = 
VAR TargetAmount = &#x5B;Max Daily Amount]
RETURN IF (
    NOT ISBLANK ( TargetAmount ),
    VAR DailySales = 
-- ...
-- skipping implementation that is identical to Date MAX Step 4
-- ...
    RETURN Result
)
</pre>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image9-49.png" width="550" /></p>
<p>To recap, the process is the following: we replace the RETURN expression with TOCSV (or TOJSON) of the variable to inspect. We check the output, confirm whether that variable contains the expected content, and then move on to the next variable. Once we identify the step that produces the incorrect result, we can focus our analysis on that specific part of the measure.</p>
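<p>The workflow can be summarized in a skeleton like the following, where the variable names and expressions are placeholders rather than part of the measures discussed above:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Debug Template = 
VAR Step1 = ADDCOLUMNS ( VALUES ( &#039;Date&#039;&#x5B;Date] ), &quot;@DayAmount&quot;, &#x5B;Sales Amount] )
VAR Step2 = FILTER ( Step1, NOT ISBLANK ( &#x5B;@DayAmount] ) )
VAR Result = COUNTROWS ( Step2 )
RETURN
    -- Temporary lens: inspect the variable under investigation
    TOCSV ( Step2 )
    -- Original expression, to be restored once debugging is complete:
    -- Result
</pre>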
<p>After debugging is complete, we restore the original RETURN expression. The TOCSV or TOJSON call was never part of the measure logic: it was only a temporary lens used to inspect the calculation.</p>
<h2>Conclusions</h2>
<p>TOJSON and TOCSV are simple functions with a specific and very practical use in everyday DAX development: they let us convert a table into a string, so that we can return it from a measure and inspect its content directly in a report visual. This makes them valuable debugging tools when we need to verify the content of intermediate table variables.</p>
<p>The default limit of 10 rows is adequate for most debugging scenarios, but it can be increased when needed. Be mindful that very large outputs can be difficult to read and potentially slow to compute. For large-scale data exploration, dedicated tools such as DAX Studio and Tabular Editor 3 remain the better options.</p>
<p>The debugging technique itself is straightforward: temporarily replace the RETURN expression of your measure with a TOCSV or TOJSON call targeting the variable you want to inspect. Check the output, identify the problem, fix the measure, and restore the original RETURN. It is an effective workflow that requires no external tools and works entirely within Power BI.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/debugging-dax-variables-using-tojson-and-tocsv/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Displaying injury data on the human body by using Synoptic Panel in Power BI</title>
		<link>https://www.sqlbi.com/tv/displaying-injury-data-on-the-human-body-by-using-synoptic-panel-in-power-bi/</link>
					<comments>https://www.sqlbi.com/tv/displaying-injury-data-on-the-human-body-by-using-synoptic-panel-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Thu, 19 Feb 2026 20:37:13 +0000</pubDate>
				<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=886115</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/B7BK8B9LKE4/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to 𝗱𝗶𝘀𝗽𝗹𝗮𝘆 𝗶𝗻𝗷𝘂𝗿𝘆 𝗱𝗮𝘁𝗮 𝗼𝗻 𝗮 𝗵𝘂𝗺𝗮𝗻 𝗯𝗼𝗱𝘆 𝗦𝗩𝗚 𝗺𝗮𝗽 by using 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹 in Power BI. This video shows how to import an SVG, map each body area to fields in your model, and make the visual respond&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/B7BK8B9LKE4/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to 𝗱𝗶𝘀𝗽𝗹𝗮𝘆 𝗶𝗻𝗷𝘂𝗿𝘆 𝗱𝗮𝘁𝗮 𝗼𝗻 𝗮 𝗵𝘂𝗺𝗮𝗻 𝗯𝗼𝗱𝘆 𝗦𝗩𝗚 𝗺𝗮𝗽 by using 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹 in Power BI. This video shows how to import an SVG, map each body area to fields in your model, and make the visual respond to filters and selections.</p>
<p>𝗜𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝗰𝗮𝘀𝗲 𝘀𝘁𝘂𝗱𝘆, 𝘆𝗼𝘂 𝘄𝗶𝗹𝗹 𝗳𝗶𝗻𝗱:<br />
&#8211; How Synoptic Panel binds SVG elements to your dataset<br />
&#8211; How filters update the visualization in real time<br />
&#8211; How to prepare or convert SVG drawings<br />
&#8211; When to automate the mapping process<br />
&#8211; Why this technique applies to many scenarios, such as medical data, safety reporting, ergonomics, and more</p>
<p>𝗙𝘂𝗹𝗹 𝗰𝗮𝘀𝗲 𝘀𝘁𝘂𝗱𝘆 𝗮𝗻𝗱 𝗶𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗱𝗲𝘁𝗮𝗶𝗹𝘀<br />
https://okviz.com/usecase/injuries-synoptic-panel/</p>
<p>𝗧𝗿𝘆 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹<br />
https://okviz.com/synoptic-panel/</p>
<p>Share your thoughts or use cases in the comments: we’d love to see what you build!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/displaying-injury-data-on-the-human-body-by-using-synoptic-panel-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Understanding DATEADD parameters with calendar-based time intelligence</title>
		<link>https://www.sqlbi.com/tv/understanding-dateadd-parameters-with-calendar-based-time-intelligence/</link>
					<comments>https://www.sqlbi.com/tv/understanding-dateadd-parameters-with-calendar-based-time-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">http://www.sqlbi.com/?post_type=video&#038;p=892340</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/FoHku9KsBaw/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>The new DATEADD parameters in DAX add greater flexibility to the calendar-based time intelligence in Power BI. Learn how they work!]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/FoHku9KsBaw/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>The new DATEADD parameters in DAX add greater flexibility to the calendar-based time intelligence in Power BI. Learn how they work!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/understanding-dateadd-parameters-with-calendar-based-time-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Understanding DATEADD parameters with calendar-based time intelligence</title>
		<link>https://www.sqlbi.com/articles/understanding-dateadd-parameters-with-calendar-based-time-intelligence/</link>
					<comments>https://www.sqlbi.com/articles/understanding-dateadd-parameters-with-calendar-based-time-intelligence/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 09 Feb 2026 20:00:13 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Calendar]]></category>
		<category><![CDATA[Time Intelligence]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=892057</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image6-80.png" class="webfeedsFeaturedVisual" /></figure>The new calendar-based time intelligence functions offer greater flexibility than the classic time intelligence functions. This article describes the DATEADD parameters for controlling different granularity shifts. The primary reason to adopt the new calendar-based time intelligence in Power BI is&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image6-80.png" class="webfeedsFeaturedVisual" /></figure><p>The new calendar-based time intelligence functions offer greater flexibility than the classic time intelligence functions. This article describes the DATEADD parameters for controlling different granularity shifts.<br />
<span id="more-892057"></span></p>
<p>The primary reason to adopt the new calendar-based time intelligence in Power BI is its flexibility (read more on <a href="https://www.sqlbi.com/articles/introducing-calendar-based-time-intelligence-in-dax/">Introducing calendar-based time intelligence in DAX</a>). Classic time intelligence functions work out of the box and deliver meaningful results in most scenarios. However, to do so, they make assumptions about the calendar structure and the desired outcomes. Sometimes, the choices are not aligned with the user requirements, and developers need to author their own time intelligence calculations.</p>
<p>The new calendar-based time intelligence functions provide greater flexibility by allowing developers to configure parameters that drive the internal algorithms to meet diverse requirements. Using these parameters requires a precise understanding of the scenario for which they were built, which requires some attention to detail.</p>
<p>Most of the article focuses on understanding the complex scenarios you may encounter when performing time-intelligence calculations. Choosing the right set of arguments is a very simple step if you understand the scenario well, but it may be a frustrating experience if you just try these arguments without having already gained the required knowledge. In this article, we focus on a specific function: DATEADD, which shifts time intervals. Be mindful that DATEADD is internally used by many time intelligence calculations; therefore, the concepts explained for DATEADD apply to other time intelligence functions, such as DATESINPERIOD.</p>
<h2>Comparing months with different lengths</h2>
<p>DATEADD shifts the original selection (the current filter context) back and forth by the specified number of intervals, where an interval can be DAY, WEEK, MONTH, QUARTER, or YEAR. For example, DATEADD ( ‘Gregorian’, -6, MONTH ) shifts the current filter context six months back in time: the interval is MONTH, -6 is the number of months to shift, and <em>Gregorian</em> is the calendar to use.</p>
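<p>As a complete measure, the six-month shift just mentioned can be sketched as follows, following the same pattern as the other measures in this article:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales 6M Ago = 
CALCULATE (
    &#x5B;Sales Amount],
    -- Shift the current filter context six months back on the Gregorian calendar
    DATEADD ( &#039;Gregorian&#039;, -6, MONTH )
)
</pre>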
<p>Let us start investigating the scenario with a simple measure that computes the sales in the previous month, using classic time intelligence:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4]; title: ; snippet: Measure; table: Sales; notranslate">
Sales PM Date = 
CALCULATE (
    &#x5B;Sales Amount],
    DATEADD ( &#039;Date&#039;&#x5B;Date], -1, MONTH )
)
</pre>
<p>Using the measure in a report shows that the sales of January 2024 are reported correctly in February 2024.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-125.png" width="308" /></p>
<p>Although the result looks natural and correct, it requires a deeper understanding. February 2024 contains 29 days, because 2024 is a leap year. January 2024 contains 31 days. The report is currently comparing two periods of different lengths. We are so used to variable-length months that we seldom worry about the fact that we are comparing entities that differ (in this case, in the number of days). For the purposes of this article, the key point is that the two months have different lengths.</p>
<p>The same scenario happens when comparing March 2024 with February 2024 or – in general – whenever we compare two months with a different number of days.</p>
<p>It is important to note that right now, the granularity of the values we are inspecting is the month, and the period used for the shift is also the month. When the granularity of the selection and the granularity of the period used for the shift are the same, numbers are easy to read.</p>
<p>If we expand the matrix at the day level, the selection granularity becomes the day, while the period-to-shift granularity is the month. The measure still reports correct results: each day in February is shifted back to the same day in January. As expected, <em>Sales PM Date</em> on the third of February reports the sales of the third of January.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-121.png" width="320" /></p>
<p>The scenario quickly becomes more complex if, rather than looking at the beginning of the month, we go to the end of February, namely the 29<sup>th</sup> of February.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-109.png" width="324" /></p>
<p>The 29<sup>th</sup> of February correctly shows the value of sales for the 29<sup>th</sup> of January. However, January includes two additional days with sales: the 30<sup>th</sup> and 31<sup>st</sup>, which are not shown in February.</p>
<p>Let us repeat this simple concept: January 2024 has 31 days, with a total Sales Amount of <strong>188,419.28</strong>. February 2024 contains only 29 days; <em>Sales PM Date</em> displays the exact sales amount of the corresponding day in the previous month. Two dates and their corresponding sales (9,534.40 and 9,445.42 for the 30<sup>th</sup> and 31<sup>st</sup> of January) are missing. Still, the total is the same: <strong>188,419.28</strong>. In other words, the total shown is not the sum of the displayed daily rows.</p>
<p>Be mindful: this is not an incorrect behavior. It is a precise choice made by DAX to solve a problem that has no clear and simple solution: comparing two months with a different number of days, and showing the daily values at the same time, requires a compromise because we need to pack the total of $188,419.28 into only 29 rows rather than the original 31.</p>
<p>The problem is not new; it has always been there. Classic time intelligence functions addressed this scenario differently. The same formula to compute the sales in the previous month can be written using the new calendar-based time intelligence:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4]; title: ; snippet: Measure; table: Sales; notranslate">
Sales PM Cal = 
CALCULATE (
    &#x5B;Sales Amount],
    DATEADD ( &#039;Gregorian&#039;, -1, MONTH )
)
</pre>
<p>Quite surprisingly, the result is different.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-103.png" width="421" /></p>
<p>As you can see, on the 29<sup>th</sup> of February, the calendar-based time intelligence calculation returns 19,132.33 rather than 152.51. The difference between these two amounts is the exact sum of the two additional days that exist in January and that do not exist in February. In other words, the selection of one day in February (Feb 29<sup>th</sup>) is transformed into the selection of three days in January (Jan 29<sup>th</sup>, 30<sup>th</sup> and 31<sup>st</sup>).</p>
<p>This behavior is neither better nor worse: it is a different choice, and it is just a default choice that can be modified with additional parameters. The advantage is that the rows sum to the month total; the disadvantage is that day-level comparisons are inaccurate because we are comparing one day against three. As we anticipated, solving the problem requires a compromise, and the calendar-based time intelligence chose a different solution compared to the classic time intelligence.</p>
<p>This is a key difference between calendar-based time intelligence and classic time intelligence: in classic time intelligence, the DAX engine defines the behavior, and there is no way to change it. In calendar-based time intelligence, a set of additional parameters allows developers to specify how to handle time periods of varying lengths.</p>
<h2>Understanding Extension and Truncation</h2>
<p>Now that we have a clearer picture of the scenario, it is time to be more precise and define the rules of the game. DATEADD shifts the original selection (the current filter context) back and forth by the specified number of intervals, where an interval can be DAY, WEEK, MONTH, QUARTER, or YEAR.</p>
<p>In our example, we shifted Feb 29<sup>th</sup> one month back; the original selection is Feb 29<sup>th</sup>, the interval is MONTH, and the shift is -1.</p>
<p>When the original selection is at a finer grain than the interval, as in the examples we have shown so far, DATEADD uses the same-distance-from-parent technique. Because Feb 29<sup>th</sup> is the 29<sup>th</sup> day in February, it is shifted to the 29<sup>th</sup> day in January, which (surprise, surprise!) is the 29<sup>th</sup> of January. This algorithm works fine as long as the periods have the same length. If the periods have different lengths (like months in a Gregorian calendar), there are two possible scenarios, depending on which period is longer.</p>
<ul>
<li><strong>Extension: The resulting period is larger than the original period</strong>. For example, shifting the 29<sup>th</sup> of February one month back may return only one day (the 29th of January) or multiple days (from the 29th to the 31st of January), depending on how the last day of the month is handled by the shift. In other words, the current selection may be extended to a larger selection.</li>
<li><strong>Truncation: The resulting period is shorter than the original period</strong>. For example, shifting the 30<sup>th</sup> of March back one month makes it impossible to find the 30<sup>th</sup> day of February. In this case, it is possible to produce either a blank (there is no 30<sup>th</sup> of February) or to return the last day of February, anchoring the result to the end of the period.</li>
</ul>
<p><em>Extension</em> and <em>Truncation</em> are indeed the two additional parameters of DATEADD to handle these two scenarios.</p>
<p><em>Extension</em> is effective when comparing the current period with a larger one; it changes only how the last day of the current period is shifted. <em>Extension</em> can be one of three values:</p>
<ul>
<li><strong>EXTENDING</strong>: this is the default behavior. If the current selection contains the last day of the interval (as is the case for the 29<sup>th</sup> of February), DATEADD returns all the dates from the 29<sup>th</sup> of January up to the end of January. In this case, a single date is transformed into a selection of multiple dates.</li>
<li><strong>PRECISE</strong>: The 29<sup>th</sup> of February is shifted back to the 29<sup>th</sup> of January. The 30<sup>th</sup> and 31<sup>st</sup> of January are not shown in the report, because there is neither a 30<sup>th</sup> nor a 31<sup>st</sup> of February. This is the same behavior as the classic time intelligence.</li>
<li><strong>ENDALIGNED</strong>: the 29<sup>th</sup> of February is shifted back to the 31<sup>st</sup> of January (the ends of the periods are aligned). In this scenario, neither the 29<sup>th</sup> nor the 30<sup>th</sup> of January is shown. <em>ENDALIGNED</em> can be useful in some fancy scenarios, as we show later.</li>
</ul>
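<p>For example, to obtain the classic time intelligence behavior in the previous-month measure, you can request PRECISE explicitly. The sketch below assumes that <em>Extension</em> is accepted as an additional argument right after the interval, consistent with the DATESINPERIOD example shown later in this article:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales PM Cal Precise = 
CALCULATE (
    &#x5B;Sales Amount],
    -- PRECISE: Feb 29 maps to Jan 29 only, like classic time intelligence
    DATEADD ( &#039;Gregorian&#039;, -1, MONTH, PRECISE )
)
</pre>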
<p>In the following report you can see the dates returned by DATEADD. You can see that the 29<sup>th</sup> of February is shifted back to three different periods, depending on the value used for the <em>Extension</em> argument.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-88.png" width="412" /></p>
<p><em>Truncation</em> is effective when the current period is larger than the resulting period, for example, when moving back one month from March to February. Truncation changes how the last days of the current period (those that do not exist in the previous month) are shifted. <em>Truncation</em> can be one of two values:</p>
<ul>
<li><strong>ANCHORED</strong>: the 29<sup>th</sup>, 30<sup>th</sup>, and 31<sup>st</sup> of March are shifted back to the 29<sup>th</sup> of February. In other words, all the dates of the current month that do not exist in the previous month are routed to the last date of the previous month, possibly repeating the same value multiple times.</li>
<li><strong>BLANKS</strong>: the 30<sup>th</sup> and 31<sup>st</sup> of March result in BLANK, because there is no corresponding date in the previous month. One important note: if ENDALIGNED is specified for <em>Extension</em>, it also applies to <em>Truncation</em>, thus moving the last day to the last day of the previous period.</li>
</ul>
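<p>Similarly, the <em>Truncation</em> behavior can be selected explicitly. The following sketch assumes that <em>Truncation</em> is passed after <em>Extension</em> in the DATEADD argument list:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales PM Cal Blanks = 
CALCULATE (
    &#x5B;Sales Amount],
    -- BLANKS: Mar 30 and Mar 31 return BLANK instead of repeating Feb 29
    DATEADD ( &#039;Gregorian&#039;, -1, MONTH, PRECISE, BLANKS )
)
</pre>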
<p>In the following diagram, you can see the effect of different <em>Truncation</em> and <em>Extension</em> values:</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-80.png" width="390" /></p>
<p>Choosing the correct set of values for <em>Extension</em> and <em>Truncation</em> is up to the developer. There is no “correct” solution: each has its own advantages and disadvantages that need to be discussed with users to produce useful, easily understandable reports.</p>
<h2>A more practical scenario</h2>
<p>We want to compute a moving average over the last six months. We create two versions of the same calculation: one using classic time intelligence and one using calendar-based time intelligence. The two versions yield different results; we need to understand the problem and identify the correct parameter to use. As is often the case, understanding the issue is much harder than fixing the problem.</p>
<p>Here are the two measures:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [7]; title: ; snippet: Measure; table: Sales; notranslate">
Moving Average Calendar =
CALCULATE (
    DIVIDE (
        &#x5B;Sales Amount],
        COUNTROWS ( SUMMARIZE ( Sales, &#039;Date&#039;&#x5B;Year Month Number] ) )
    ),
    DATESINPERIOD ( &#039;Gregorian&#039;, MAX ( &#039;Date&#039;&#x5B;Date] ), -6, MONTH )
)
</pre>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [7]; title: ; snippet: Measure; table: Sales; notranslate">
Moving Average Classic =
CALCULATE (
    DIVIDE (
        &#x5B;Sales Amount],
        COUNTROWS ( SUMMARIZE ( Sales, &#039;Date&#039;&#x5B;Year Month Number] ) )
    ),
    DATESINPERIOD ( &#039;Date&#039;&#x5B;Date], MAX ( &#039;Date&#039;&#x5B;Date] ), -6, MONTH )
)
</pre>
<p>As you can see, the two measures differ only in how they use DATESINPERIOD. When used in a matrix, the two measures provide the same result in most cases, with some noticeable exceptions.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-68.png" width="488" /></p>
<p>Two months in 2023 show a difference. Given the significant increase in sales in Jun 2023, the <em>Classic</em> version appears to produce the correct result. The <em>Calendar</em> version produces a smaller value, which is inconsistent with the increase in sales.</p>
<p>Let us elaborate on what happened in Jun 2023. DATESINPERIOD needs to shift the end of the month back by 6 months. To do so, it moves the provided date back by 6 months, then starts on the next day. The end of June 2023 is the 30<sup>th</sup> of June. Moving the 30<sup>th</sup> of June 6 months back produces the 30<sup>th</sup> of December 2022. DATESINPERIOD considers valid dates from the 31<sup>st</sup> of December 2022 (the day after the 30<sup>th</sup>) through to the 30<sup>th</sup> of June 2023.</p>
<p>As you may already see, there is an error. DATESINPERIOD should start from the first of January, not from the 31<sup>st</sup> of December. However, because June has 30 days and December has 31, DATESINPERIOD (which internally uses DATEADD) returns the wrong start of the shifted period.</p>
<p>The error has two side effects: it adds an extra day and, worse, it counts an extra month (there are seven months from December to June, compared to the six months required by our calculation).</p>
<p>To recap: when a month with 30 days is shifted back to a month with 31 days, our formula adds an extra day and an extra month. We need to make sure that the last day of the month is moved back to the last day of the corresponding month. As it turns out, <em>ENDALIGNED</em> is the algorithm to use. Indeed, DATESINPERIOD accepts the <em>Extension</em> parameter as an additional argument, thus allowing developers to choose between PRECISE and ENDALIGNED.</p>
<p>Changing the measure to the following definition using ENDALIGNED makes the measure compute the correct result:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [7]; title: ; snippet: Measure; table: Sales; notranslate">
Moving Average Calendar = 
CALCULATE (
    DIVIDE (
        &#x5B;Sales Amount],
        COUNTROWS ( SUMMARIZE ( Sales, &#039;Date&#039;&#x5B;Year Month Number] ) )
    ),
    DATESINPERIOD ( &#039;Gregorian&#039;, MAX ( &#039;Date&#039;&#x5B;Date] ), -6, MONTH, ENDALIGNED )
)
</pre>
<h2>Conclusions</h2>
<p>Classic time intelligence functions work out of the box because the complexity of the time intelligence calculations is hidden in the internal logic. Classic time intelligence relies on the assumption that the calendar under consideration is the standard Gregorian calendar.</p>
<p>The new calendar-based time intelligence functions do not rely on calendar knowledge; developers must tag columns and provide the required metadata for these functions to work. As a consequence, some of the automated behavior that simplifies the lives of developers had to be removed, and calendar-based time intelligence calculations may produce unexpected results if parameters are not properly aligned with the report requirements.</p>
<p>Your task, as a wise DAX developer, is to know the details of how these functions work and to supply the correct set of arguments and/or metadata so that the DAX engine produces the correct results.</p>
<p>Calendar-based time intelligence gives developers greater power, which in turn requires more responsibility and control.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/understanding-dateadd-parameters-with-calendar-based-time-intelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Show transaction details on matrix visual in Power BI</title>
		<link>https://www.sqlbi.com/tv/show-transaction-details-on-matrix-visual-in-power-bi/</link>
					<comments>https://www.sqlbi.com/tv/show-transaction-details-on-matrix-visual-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 27 Jan 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">http://www.sqlbi.com/?post_type=video&#038;p=891282</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/v41Zal354hE/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to show information from multiple columns of a business entity or transaction in a single column of a matrix.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/v41Zal354hE/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to show information from multiple columns of a business entity or transaction in a single column of a matrix.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/show-transaction-details-on-matrix-visual-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Show transaction details on the matrix visual in Power BI</title>
		<link>https://www.sqlbi.com/articles/show-transaction-details-on-the-matrix-visual-in-power-bi/</link>
					<comments>https://www.sqlbi.com/articles/show-transaction-details-on-the-matrix-visual-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 26 Jan 2026 20:00:21 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=891019</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/article-c0282.png" class="webfeedsFeaturedVisual" /></figure>This article shows how to create a DAX measure that consolidates information from multiple columns of a business entity or transaction into a single column of a matrix. A common challenge in Power BI reporting is how to display several&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/article-c0282.png" class="webfeedsFeaturedVisual" /></figure><p>This article shows how to create a DAX measure that consolidates information from multiple columns of a business entity or transaction into a single column of a matrix.<br />
<span id="more-891019"></span></p>
<p>A common challenge in Power BI reporting is how to display several pieces of information about a single item (such as sales transactions, product details, or customer details) without dedicating a separate column to each attribute. Using individual columns for each detail can consume space, especially for fields that are often empty. This article explores techniques to consolidate multiple fields from a business entity or transaction into a single column in a matrix visual, thus presenting transaction details in a space-efficient way.</p>
<h2>Dealing with multiple columns</h2>
<p>Presenting detailed information about a business entity often requires displaying values from multiple columns, which can take up too much space in Power BI reports and make them harder to read. Indeed, a direct consequence of having too many columns is that a horizontal scrollbar becomes necessary. Instead of using a separate column for each detail, it would be better to combine relevant fields and show them together in a single column, especially in a matrix visual. This section introduces the challenge and presents approaches to representing multiple attributes more efficiently, leading to the advanced solutions using DAX measures in the next sections.</p>
<h2>Including multiple columns in a table</h2>
<p>To display transaction details across multiple columns in a table, we could create a separate table visual that shows the relevant fields for the selected item. This table must be synchronized with the matrix visual, typically by selecting an order number or unique identifier in the matrix, which then filters the table to display the associated details. In this example, the <em>Order Number</em> selection displays the customer details in a separate table.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-124.png" width="811" /></p>
<p>While this approach enables you to show all the details, it consumes additional space on the report canvas because you need both the matrix and the table visual. To partially address the space issue, we created a visual calculation that collapses multiple columns into a single <em>Details</em> column. This <em>Details</em> column can display concatenated information from several fields, which saves space (as illustrated by the green arrow). However, this solution comes with limitations: the synchronization between the matrix and the table is manual, requiring users to click on the desired item in the matrix. Additionally, the columns used in the visual calculation cannot be hidden in the table; the common workaround is to set their width to 0, which is not ideal. In general, this is not a fully integrated solution within the matrix visual because it requires a separate visual.</p>
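<p>The article does not list the visual calculation used for the <em>Details</em> column. As a minimal sketch, a visual calculation can simply concatenate the fields added to the table visual; the field names below are hypothetical and must match the fields present in the visual:</p>
<div class="dax-code-title">Visual calculation (sketch)</div>
<pre class="brush: dax; title: ; snippet: Visual calculation; notranslate">
Details = &#x5B;City] &amp; &quot;, &quot; &amp; &#x5B;Name] &amp; &quot;, &quot; &amp; &#x5B;Age]
</pre>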
<h2>Including multiple columns in a matrix</h2>
<p>Including multiple detail columns in a matrix visual adds hierarchical levels, making navigation difficult and potentially hurting query performance. The layout can also become unclear, as shown when several customer attributes appear under the order number (152200 in this example).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-120.png" width="348" /></p>
<p>Just adding columns to the matrix is not the way to go.</p>
<h2>Collecting data in a measure</h2>
<p>When presenting a unique business identifier, such as a <em>Product Code</em>, <em>Customer Code</em>, or <em>Order Number</em>, you may want to show extra details in a <em>Details</em> column that, in reality, is a measure appearing after other measures in the matrix. This allows you to display transaction information directly, which eliminates the need for additional drill-down actions. Here is an example of what we want to obtain.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-108.png" width="421" /></p>
<p>To create this example, we defined a prototype measure to validate the design:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Details prototype = 
IF (
    SELECTEDVALUE ( Sales&#x5B;Order Number] ) = 152200, 
    &quot;Description of the customer&quot;
)
</pre>
<p>The <em>Details prototype</em> measure displays a fixed string when a specific order is selected. However, a measure can navigate the internal structure of the model to retrieve the required data, and several different techniques can obtain the same result. The goal of this article is to present several of these techniques, along with considerations regarding the performance and maintainability of each solution.</p>
<p>If you are interested in using the best solution, just skip to the final section. However, it is worth spending time learning the pros and cons of different approaches, because there is always a chance that you encounter scenarios where one of the other alternatives is better for you.</p>
<h2>Details measure – version 0</h2>
<p>This section is purely educational and shows how to explore the individual features we will use in the following complete implementations. The first measure we implement displays the currency and exchange rate for the transaction, both of which are available in the <em>Sales</em> table:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [2,3,7]; title: ; snippet: Measure; table: Sales; notranslate">
Details v0.A = 
VAR CurrencyCode = SELECTEDVALUE ( Sales&#x5B;Currency Code] ) 
VAR ExchangeRate = SELECTEDVALUE ( Sales&#x5B;Exchange Rate] )
RETURN 
    IF (
        NOT ISBLANK ( CurrencyCode ),
        CurrencyCode &amp; &quot;/USD=&quot; &amp; ExchangeRate
    )
</pre>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-102.png" width="400" /></p>
<p>The first issue to address is that the information is also visible when multiple orders share the same currency code. This can be solved by ensuring that only one order is visible: using ISINSCOPE guarantees that the <em>Order Number</em> is also displayed in the matrix visual, not just filtered outside of the visual. By checking that there is a row visible in <em>Sales</em>, we can also use VALUES instead of the more expensive SELECTEDVALUE, which relies on an underlying DISTINCTCOUNT:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [3,4,5]; title: ; snippet: Measure; table: Sales; notranslate">
Details v0.B = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ) &amp;&amp; NOT ISEMPTY ( Sales ),
    VAR CurrencyCode = VALUES ( Sales&#x5B;Currency Code] ) 
    VAR ExchangeRate = VALUES ( Sales&#x5B;Exchange Rate] )
    RETURN CurrencyCode &amp; &quot;/USD=&quot; &amp; ExchangeRate
)
</pre>
<p>The second issue is that adding columns requires writing a much longer expression, which would become more complex if we wanted to filter out blank values and avoid displaying separators in the result when there is no value. For example, we would have to write something like the following to ensure that both <em>Currency Code</em> and <em>Exchange Rate</em> are available:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [7,8]; title: ; snippet: Measure; table: Sales; notranslate">
Details v0.C = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ) &amp;&amp; NOT ISEMPTY ( Sales ),
    VAR CurrencyCode = VALUES ( Sales&#x5B;Currency Code] ) 
    VAR ExchangeRate = VALUES ( Sales&#x5B;Exchange Rate] )
    RETURN 
        IF ( 
            NOT ISBLANK ( CurrencyCode ) &amp;&amp; NOT ISBLANK ( ExchangeRate ),
            CurrencyCode &amp; &quot;/USD=&quot; &amp; ExchangeRate
        )
)
</pre>
<p>However, with more columns to display, the code to write and maintain would be much longer. To simplify this, we could look for an approach that just provides a list of displayed values, with a standard separator between them, to obtain the following result.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-87.png" width="350" /></p>
<p>The code uses a list of values in a table constructor, which CONCATENATEX iterates over to produce the output string. FILTER makes sure to remove empty values, so that the measure returns BLANK if the list is empty (without the filter, we would always have a list of two blank values):</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4,5,6,7,8,9,10,11,12,13,14]; title: ; snippet: Measure; table: Sales; notranslate">
Details v0.D = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX (
        FILTER ( 
            {
                VALUES ( Sales&#x5B;Currency Code] ), 
                VALUES ( Sales&#x5B;Exchange Rate] )
            },
            &#x5B;Value] &lt;&gt; &quot;&quot;
        ),
        &#x5B;Value],
        &quot;, &quot;
    )
)
</pre>
<h2>Details measure – version 1</h2>
<p>Having introduced the method of iterating over a list of values using CONCATENATEX and FILTER, we can now tackle the next challenge: presenting data from other tables, such as the <em>Customer</em> table.</p>
<p>To use the same approach as what we introduced in the previous example, we must propagate the filter from <em>Sales</em> to <em>Customer</em>; this is because the filter context only filters one order, but not the related customer. We can wrap the CONCATENATEX expression in a CALCULATE function where CROSSFILTER enables the filter propagation from <em>Sales</em> to <em>Customer</em>:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4,19,20]; title: ; snippet: Measure; table: Sales; notranslate">
Details V1 = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CALCULATE (
        CONCATENATEX ( 
            FILTER ( 
                { 
                    VALUES ( Customer&#x5B;Country Code] ), 
                    VALUES ( Customer&#x5B;State Code] ), 
                    VALUES ( Customer&#x5B;City] ), 
                    VALUES ( Customer&#x5B;Name] ),  
                    VALUES ( Customer&#x5B;Age] ) 
                },
                &#x5B;Value] &lt;&gt; &quot;&quot; 
            ),
            &#x5B;Value],
            &quot;, &quot;
        ),
        CROSSFILTER ( Sales&#x5B;CustomerKey], Customer&#x5B;CustomerKey], BOTH )
    )
)
</pre>
<p>The result shows customer information for each order.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-79.png" width="500" /></p>
<p>A quick look at the query plan shows that this approach generates 27 storage engine queries, with a maximum of 1,681 rows processed by the formula engine. The execution time is completely irrelevant for this amount of data; it is more interesting to consider that the physical query plan has 254 rows (not displayed in the screenshot), which is another indicator of the query plan’s complexity.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-67.png" width="624" /></p>
<p>We will compare these metrics with those of the following approaches to evaluate which is more efficient in terms of execution time, while balancing performance against the complexity of the code to write and maintain.</p>
<h2>Details measure – version 2</h2>
<p>One issue with using VALUES for each column to display is that the code is verbose. The syntax is simpler (and potentially more efficient) when accessing column values in a row context. In the first complete solution, we create an external row context to access the rows in the required table (<em>Customer</em> or <em>Sales</em>, hopefully only one row!) and we iterate the column values with the inner iterator. In the following measure, an external CONCATENATEX iterates the <em>Customer</em> table, so that the inner CONCATENATEX iterates the values obtained by using simple column references:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4,5,6,7,8,12,13,14,15,16,23,24]; title: ; snippet: Measure; table: Sales; notranslate">
Details V2.A = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX ( 
        CALCULATETABLE (
            Customer,
            CROSSFILTER ( Sales&#x5B;CustomerKey], Customer&#x5B;CustomerKey], BOTH )
        ),
        CONCATENATEX ( 
            FILTER ( 
                { 
                    Customer&#x5B;Country Code], 
                    Customer&#x5B;State Code], 
                    Customer&#x5B;City], 
                    Customer&#x5B;Name], 
                    Customer&#x5B;Age] 
                },
                &#x5B;Value] &lt;&gt; &quot;&quot; 
            ),
            &#x5B;Value],
            &quot;, &quot;
        ),
        &quot; - &quot; // This should never be used as long 
              // as ISINSCOPE ( Sales&#x5B;Order Number] ) is True
    )
)
</pre>
<p>Because the <em>Customer</em> table must be filtered to return only the row corresponding to the selected order, we use CALCULATETABLE and CROSSFILTER to propagate the filter from <em>Sales</em> to <em>Customer</em>. Because we know a filter is active on <em>Sales</em> (thanks to the initial ISINSCOPE), we can be confident that only one customer is filtered, even if the order has multiple rows in <em>Sales</em>.</p>
<p>The outer CONCATENATEX should only iterate one row in <em>Customer</em>. Therefore, the &quot; - &quot; separator should never be used. The result of the <em>Details V2.A</em> measure is identical to <em>Details V1</em>.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image8-56.png" width="550" /></p>
<p>From a performance point of view, this approach reduces the number of storage engine queries (10 instead of 27), the number of rows in the physical execution plan (186 instead of 254), and the maximum number of rows iterated by the formula engine (31 instead of 1,681).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image9-48.png" width="592" /></p>
<p>Despite the nested iterators, this approach should be faster because we only iterate one row in the outer iterator. More importantly, this approach is the best for maintenance, as it requires only a simple list of columns to achieve the required results. Later, we will see that this simple approach also makes it possible to move most of the code into a user-defined function (UDF).</p>
<p>Given the requirements, we wanted to get the details for the entire order. However, if the requirement concerned the details of each order, we could iterate through the <em>Sales</em> table and use RELATED to access columns in related tables. For example, the following measure displays the details of each of the products in the selected order, and it separates each row (and therefore each product) with a new line character:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [5,9,10,11,12,19]; title: ; snippet: Measure; table: Sales; notranslate">
Details V2.B = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX ( 
        Sales,
        CONCATENATEX ( 
            FILTER ( 
                { 
                    Sales&#x5B;Quantity],
                    RELATED ( &#039;Product&#039;&#x5B;Product Name] ), 
                    RELATED ( &#039;Product&#039;&#x5B;Weight] ), 
                    RELATED ( &#039;Product&#039;&#x5B;Weight Unit Measure] )
                },
                &#x5B;Value] &lt;&gt; &quot;&quot; 
            ),
            &#x5B;Value],
            &quot;, &quot;
        ),
        UNICHAR ( 10 )
    )
)
</pre>
<p>Here are the results we can obtain with this measure: for each row of the order, we see the quantity, the product name, the weight (if present), and the weight unit. We did not implement logic to hide the weight unit when the weight amount is not present, as in the first row, related to the WWI Desktop PC1.80 product.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image10-42.png" width="650" /></p>
<p>We make no performance considerations in this case because the example differs from the one we use to compare the approaches. However, we believe it is interesting from an educational perspective to show this additional scenario.</p>
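<p>As a side note, the missing weight-unit logic could be handled with a small change. The following sketch (a hypothetical variant, not included in the sample file) blanks the unit whenever the weight is blank, so that FILTER then removes it from the list, as it does with the other empty values:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [13,14,15,16]; title: ; snippet: Measure; table: Sales; notranslate">
Details V2.C = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX ( 
        Sales,
        CONCATENATEX ( 
            FILTER ( 
                { 
                    Sales&#x5B;Quantity],
                    RELATED ( &#039;Product&#039;&#x5B;Product Name] ), 
                    RELATED ( &#039;Product&#039;&#x5B;Weight] ), 
                    IF (
                        NOT ISBLANK ( RELATED ( &#039;Product&#039;&#x5B;Weight] ) ),
                        RELATED ( &#039;Product&#039;&#x5B;Weight Unit Measure] )
                    )
                },
                &#x5B;Value] &lt;&gt; &quot;&quot; 
            ),
            &#x5B;Value],
            &quot;, &quot;
        ),
        UNICHAR ( 10 )
    )
)
</pre>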
<h2>Details measure – version 3</h2>
<p>The last version has the same structure as version 2, but this time we obtain the row context needed to use column references through SUMMARIZE. This way, we do not have to rely on the bidirectional filter, and we can also combine columns from different tables:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [5,6,7,8,9,10,11,12]; title: ; snippet: Measure; table: Sales; notranslate">
Details V3 = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX ( 
        SUMMARIZE ( 
            Sales, 
            Customer&#x5B;Country Code], 
            Customer&#x5B;State Code], 
            Customer&#x5B;City], 
            Customer&#x5B;Name], 
            Customer&#x5B;Age]  
        ),
        CONCATENATEX ( 
            FILTER ( 
                { 
                    Customer&#x5B;Country Code], 
                    Customer&#x5B;State Code], 
                    Customer&#x5B;City], 
                    Customer&#x5B;Name],  
                    Customer&#x5B;Age] 
                },
                &#x5B;Value] &lt;&gt; &quot;&quot; 
            ),
            &#x5B;Value],
            &quot;, &quot;
        ),
        &quot; - &quot; // This should never be used as long 
               // as ISINSCOPE ( Sales&#x5B;Order Number] ) is True
    )
)
</pre>
<p>The result is the same as for the previous two versions.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image11-31.png" width="500" /></p>
<p>To be honest, this solution is not ideal from a maintenance point of view, because it requires duplicating the list of columns both in SUMMARIZE and in the table constructor used as the FILTER argument. However, we wanted to show this approach because it could be more efficient from a query execution perspective.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image12-22.png" width="598" /></p>
<p>We have a further reduction of the number of storage engine queries (7 instead of 10), a larger number of rows in the physical execution plan (337 instead of 186, the largest number among all the versions of this measure), and the same maximum number of rows iterated by the formula engine (31) as in version 2.</p>
<p>Thus, is this approach worth using? It really depends on the performance impact measured in a real environment. Our suggestion is to refrain from applying potential optimizations that could make the code longer or harder to read and maintain. Therefore, we suggest using the approach described in version 2; implement version 3 only when you can prove that it achieves faster performance and is necessary for the scenario you are facing.</p>
<h2>Details measure – final version</h2>
<p>In this final section, we show how to implement our favorite approach (version 2) by moving the repetitive code into a function, thus making it easier to create other detail functions in your reports. The <em>Local.ConcatenateNonEmpty</em> function receives a table with a list of values to display, and a column reference pointing to the column to use (which is <em>[Value]</em> for table constructors with a single column):</p>
<div class="dax-code-title">Function</div>
<pre class="brush: dax; title: ; snippet: Function; notranslate">
Local.ConcatenateNonEmpty = ( 
    fieldsListTable : TABLE VAL, 
    fieldColumn : ANYREF EXPR 
) =&gt; 
CONCATENATEX ( 
    FILTER ( 
        fieldsListTable,
        fieldColumn &lt;&gt; &quot;&quot; 
    ),
    fieldColumn,
    &quot;, &quot;
)
</pre>
<p>The measure reduces its verbosity by invoking the <em>Local.ConcatenateNonEmpty</em> function to implement the inner CONCATENATEX iteration:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [9,10,11,12,13,14,15,16,17,18]; title: ; snippet: Measure; table: Sales; notranslate">
Details V2.A fx = 
IF (
    ISINSCOPE ( Sales&#x5B;Order Number] ),
    CONCATENATEX ( 
        CALCULATETABLE (
            Customer,
            CROSSFILTER ( Sales&#x5B;CustomerKey], Customer&#x5B;CustomerKey], BOTH )
        ),
        Local.ConcatenateNonEmpty (
            { 
                Customer&#x5B;Country Code], 
                Customer&#x5B;State Code], 
                Customer&#x5B;City], 
                Customer&#x5B;Name], 
                Customer&#x5B;Age] 
            },
            &#x5B;Value]
        ),
        &quot; - &quot; // This should never be used as long 
               // as ISINSCOPE ( Sales&#x5B;Order Number] ) is True
    )
)
</pre>
<p>You can find function-based versions of all the measures in the sample file available for download. The role of the function is just to reduce the amount of code to write, without impacting performance.</p>
<h2>Conclusions</h2>
<p>We have seen several ways to display transaction details in a compact manner, by using DAX measures. Different approaches can result in different levels of performance, but we must balance performance gains with code readability and maintainability. Counter-intuitively, using nested iterators could be ideal in this scenario because the outer row context gives access to column references, without the additional burden of retrieving each value from the filter context. Using a function to hide repetitive tasks can help you reuse this approach across different measures in your reports.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/show-transaction-details-on-the-matrix-visual-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Fixing the “Black Icons” issue in Power BI Desktop: understanding the problem and a temporary workaround</title>
		<link>https://www.sqlbi.com/blog/marco/2026/01/24/fixing-the-black-icons-issue-in-power-bi-desktop-understanding-the-problem-and-a-temporary-workaround/</link>
					<comments>https://www.sqlbi.com/blog/marco/2026/01/24/fixing-the-black-icons-issue-in-power-bi-desktop-understanding-the-problem-and-a-temporary-workaround/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Sat, 24 Jan 2026 13:43:01 +0000</pubDate>
				<category><![CDATA[Bug]]></category>
		<category><![CDATA[Power BI]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=blogpost&#038;p=891486</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/20260124-black-icons-workaround.png" class="webfeedsFeaturedVisual" /></figure>In mid-January 2026, many Power BI Desktop users began noticing a strange visual regression: several icons in the interface—including the Data, Model, and DAX icons—appeared as solid black shapes instead of their normal outlines. The issue appeared suddenly, even on&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/20260124-black-icons-workaround.png" class="webfeedsFeaturedVisual" /></figure><p><img loading="lazy" decoding="async" class="wp-image-891487 alignleft" src="https://cdn.sqlbi.com/wp-content/uploads/20260124-black-icons.png" alt="" width="46" height="211" /><br />
In mid-January 2026, many Power BI Desktop users began noticing a strange visual regression: several icons in the interface—including the Data, Model, and DAX icons—appeared as solid black shapes instead of their normal outlines. The issue appeared suddenly, even on machines where Power BI Desktop had <strong>not</strong> been updated. Because it happened the same week that Power BI Desktop January 2026 was released, this led to understandable confusion.</p>
<p>The root cause was not Power BI Desktop itself but an <strong>unexpected regression in the WebView2 runtime</strong>, one of the external dependencies used to render parts of the UI. Microsoft confirmed the issue and is preparing an official fix.</p>
<p>Until the update is released, there is a safe temporary workaround that restores the proper icons. This is useful if you need to record videos (like me!), run demos, or simply work without visual glitches.</p>
<p>Below is the full procedure.</p>
<h2>Why the issue appeared</h2>
<p>Power BI Desktop uses <a href="https://developer.microsoft.com/en-us/microsoft-edge/webview2/?form=MA13LH"><strong>Microsoft Edge WebView2</strong> </a>to render several UI components. A recent WebView2 update introduced a regression that broke the rendering of some vector-based icons, causing them to show up as filled black shapes.</p>
<p>Even older versions of Power BI Desktop were affected because WebView2 updates independently of Power BI Desktop.</p>
<p>Until the fixed version is published and distributed, affected users will continue to see black icons unless they override WebView2 manually.</p>
<h2>Temporary workaround: pin Power BI Desktop to a previous WebView2 fixed version</h2>
<p>This workaround forces Power BI Desktop to use a known-good WebView2 build rather than the most recent (faulty) one. It requires <strong>no system-wide changes</strong> and is fully reversible.</p>
<blockquote><p><strong>IMPORTANT</strong>: this workaround stops automatic updates of WebView2; <span style="text-decoration: underline;">you should remove this workaround once the bug is fixed</span> by Microsoft. Set a reminder to check for the fix every week.</p></blockquote>
<h3>Step 1 — Download a previous WebView2 Fixed Version Runtime</h3>
<p><img loading="lazy" decoding="async" class="wp-image-891488 alignright" src="https://cdn.sqlbi.com/wp-content/uploads/20240126-webview2.png" alt="" width="223" height="249" /></p>
<ol>
<li>Visit:<br />
<a href="https://developer.microsoft.com/en-us/microsoft-edge/webview2">https://developer.microsoft.com/en-us/microsoft-edge/webview2</a></li>
<li>Locate the <strong>Fixed Version</strong> section.</li>
<li>Choose an earlier version (for example: <em>143.0.3650.139</em>).</li>
<li>Select <strong>x64</strong> and download the package.</li>
</ol>
<h3>Step 2 — Extract the CAB file</h3>
<p>The downloaded file is a CAB package. Extract it into a folder, such as:</p>
<pre><code>C:\Uti\Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64
</code></pre>
<p>(You must extract it; Power BI cannot use the CAB directly.)</p>
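<p>If you prefer the command line, the built-in Windows <code>expand</code> tool can extract the CAB. The following commands are a sketch that assumes the downloaded CAB is in the current folder and uses the same destination path as above; adjust the file name to the version you actually downloaded:</p>
<pre><code>md C:\Uti\Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64
expand Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64.cab -F:* C:\Uti\Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64
</code></pre>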
<h3>Step 3 — Create the <code>WEBVIEW2_BROWSER_EXECUTABLE_FOLDER</code> environment variable</h3>
<p>This step tells Power BI Desktop to load WebView2 from the folder you extracted earlier. If you have never created an environment variable before, follow the full click-by-click process below.</p>
<ol>
<li><strong>Open the Windows System Properties window</strong>
<ul>
<li>Press <strong>Windows Key</strong>, type <strong>“environment variables”</strong>, and select <strong>“Edit the system environment variables”</strong>.<br />
<em>Alternatively:</em>
<ul>
<li>Open <strong>Control Panel</strong> → <strong>System and Security</strong> → <strong>System</strong> → <strong>Advanced system settings</strong>.</li>
</ul>
</li>
</ul>
</li>
<li><strong>Open the Environment Variables dialog</strong>
<ul>
<li>In the <strong>System Properties</strong> window, go to the <strong>Advanced</strong> tab (usually already selected).</li>
<li>Click <strong>Environment Variables…</strong> at the bottom-right (the button indicated by the left arrow in the screenshot below).</li>
</ul>
</li>
<li><strong>Create a new user variable</strong>
<ul>
<li>In the top section (<strong>User variables for &lt;your username&gt;</strong>), click <strong>New…</strong> (the button indicated by the right arrow in the screenshot below).</li>
</ul>
</li>
<li><strong>Enter the variable details</strong>
<ul>
<li><strong>Variable name:</strong>
<pre><code>WEBVIEW2_BROWSER_EXECUTABLE_FOLDER
</code></pre>
</li>
<li><strong>Variable value:</strong><br />
Paste the full path of the folder where you extracted the WebView2 runtime, for example:
<pre><code>C:\Uti\Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64
</code></pre>
</li>
</ul>
</li>
<li>Click <strong>OK</strong> to save, then <strong>OK</strong> again to close Environment Variables, and finally <strong>OK</strong> to close System Properties.</li>
<li>Close Power BI Desktop if it is running, then open it again so the environment variable is applied.</li>
</ol>
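<p>Alternatively, if you are comfortable with the command prompt, the same user variable can be created with the <code>setx</code> command. This is a sketch assuming the destination folder used above; restart Power BI Desktop afterward so the variable is picked up:</p>
<pre><code>setx WEBVIEW2_BROWSER_EXECUTABLE_FOLDER "C:\Uti\Microsoft.WebView2.FixedVersionRuntime.143.0.3650.139.x64"
</code></pre>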
<p>This completes the configuration. Power BI Desktop will now load WebView2 from that specific folder instead of the system runtime.</p>
<p><img loading="lazy" decoding="async" class="wp-image-891490 alignnone" src="https://cdn.sqlbi.com/wp-content/uploads/20240126-environmentvariable.png" alt="" width="1099" height="438" /></p>
<h3>Step 4 — Launch Power BI Desktop</h3>
<p><img loading="lazy" decoding="async" class="size-full wp-image-891492 alignright" src="https://cdn.sqlbi.com/wp-content/uploads/20240126-fixedicons.png" alt="" width="123" height="307" /></p>
<p>Start Power BI Desktop as usual.<br />
The icons should now display correctly.</p>
<h2>Removing the Workaround</h2>
<p>Once Microsoft ships the patched WebView2 release (and the fix propagates), you should:</p>
<ol>
<li><strong>Remove the environment variable</strong><br />
Delete <code>WEBVIEW2_BROWSER_EXECUTABLE_FOLDER</code>.</li>
<li><strong>Delete the temporary folder</strong> you extracted earlier.</li>
</ol>
<p>Power BI Desktop will then return to using the standard system WebView2 runtime.</p>
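<p>If you prefer the command line, the variable can also be removed that way. This sketch assumes it was created as a user variable, which is stored under <code>HKCU\Environment</code>:</p>
<pre><code>reg delete "HKCU\Environment" /v WEBVIEW2_BROWSER_EXECUTABLE_FOLDER /f
</code></pre>
<p>Sign out and back in (or restart Windows) so that running processes pick up the change.</p>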
<h2>Final Notes</h2>
<p>This workaround does not modify your Power BI installation and does not affect Windows globally. It simply redirects Power BI Desktop to use a stable WebView2 runtime until Microsoft finalizes the fix.</p>
<p>If you are preparing tutorials, training content, or video recordings, applying this workaround ensures the UI appears correctly and avoids unnecessary confusion.</p>
<p>Once the official fix is available, remove the override and return to the standard configuration.</p>
<p><strong>DISCLAIMER</strong>: While I wrote a draft of the workaround for internal use, I used ChatGPT to create a longer, more detailed step-by-step description to make it easier to apply.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/blog/marco/2026/01/24/fixing-the-black-icons-issue-in-power-bi-desktop-understanding-the-problem-and-a-temporary-workaround/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Optimizing supermarket product placement with Synoptic Panel in Power BI</title>
		<link>https://www.sqlbi.com/tv/synoptic-panel-05-supermarket/</link>
					<comments>https://www.sqlbi.com/tv/synoptic-panel-05-supermarket/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 20 Jan 2026 11:00:44 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=886113</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/j2AA7dtvyu8/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>This video introduces a 𝘀𝘂𝗽𝗲𝗿𝗺𝗮𝗿𝗸𝗲𝘁 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝗽𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲 by using 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹 in Power BI. We look at how supermarkets can understand 𝘄𝗵𝗲𝗿𝗲 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝘀 𝗮𝗿𝗲 𝗽𝗹𝗮𝗰𝗲𝗱 and 𝗵𝗼𝘄 𝗽𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁 𝗶𝗺𝗽𝗮𝗰𝘁𝘀 𝘀𝗮𝗹𝗲𝘀 𝗮𝗻𝗱 𝗺𝗮𝗿𝗴𝗶𝗻𝘀. Traditional tables and charts fail&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/j2AA7dtvyu8/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>This video introduces a 𝘀𝘂𝗽𝗲𝗿𝗺𝗮𝗿𝗸𝗲𝘁 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝗽𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲 by using 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹 in Power BI. We look at how supermarkets can understand 𝘄𝗵𝗲𝗿𝗲 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝘀 𝗮𝗿𝗲 𝗽𝗹𝗮𝗰𝗲𝗱 and 𝗵𝗼𝘄 𝗽𝗹𝗮𝗰𝗲𝗺𝗲𝗻𝘁 𝗶𝗺𝗽𝗮𝗰𝘁𝘀 𝘀𝗮𝗹𝗲𝘀 𝗮𝗻𝗱 𝗺𝗮𝗿𝗴𝗶𝗻𝘀. Traditional tables and charts fail to accurately reflect the store layout, whereas this approach lets you visualize KPIs directly on the supermarket floor plan.</p>
<p>𝗬𝗼𝘂 𝗰𝗮𝗻:<br />
&#8211; Analyze product placement performance on a real store map<br />
&#8211; Display sales and margin metrics directly on areas and shelves<br />
&#8211; Switch measures (for example, margin %) and instantly spot high- and low-performing zones<br />
&#8211; Drill down from category areas to individual shelves<br />
&#8211; Isolate specific zones and analyze them in detail<br />
&#8211; Work with multiple SVG maps and levels of detail on the same report page</p>
<p>The data shown is fictitious, but the use case reflects real-world scenarios already adopted by customers to improve store layout and commercial decisions.</p>
<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Supermarket product placement 𝘂𝘀𝗲 𝗰𝗮𝘀𝗲: https://okviz.com/usecase/supermarket-products-placement-synoptic-panel/</p>
<p><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f449.png" alt="👉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> 𝗦𝘆𝗻𝗼𝗽𝘁𝗶𝗰 𝗣𝗮𝗻𝗲𝗹 𝘃𝗶𝘀𝘂𝗮𝗹 for Power BI: https://okviz.com/synoptic-panel/</p>
<p>If you build similar analyses with Synoptic Panel, 𝘀𝗵𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝗿𝗲𝘀𝘂𝗹𝘁𝘀 with us or 𝗳𝗼𝗿𝘄𝗮𝗿𝗱 𝘁𝗵𝗶𝘀 𝘃𝗶𝗱𝗲𝗼 to anyone who could benefit from this approach.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/synoptic-panel-05-supermarket/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Performance of limited and regular relationships in Power BI</title>
		<link>https://www.sqlbi.com/tv/performance-of-limited-and-regular-relationships-in-power-bi/</link>
					<comments>https://www.sqlbi.com/tv/performance-of-limited-and-regular-relationships-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Tue, 13 Jan 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">http://www.sqlbi.com/?post_type=video&#038;p=890671</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/rLyirxSWc98/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>Compare the performance of regular versus limited relationships in DAX to understand the implications of using limited relationships in a Power BI semantic model.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/rLyirxSWc98/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>Compare the performance of regular versus limited relationships in DAX to understand the implications of using limited relationships in a Power BI semantic model.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/performance-of-limited-and-regular-relationships-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Analyzing the performance of limited and regular relationships</title>
		<link>https://www.sqlbi.com/articles/analyzing-the-performance-of-limited-and-regular-relationships/</link>
					<comments>https://www.sqlbi.com/articles/analyzing-the-performance-of-limited-and-regular-relationships/#respond</comments>
		
		<dc:creator><![CDATA[Alberto Ferrari]]></dc:creator>
		<pubDate>Fri, 09 Jan 2026 06:00:04 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Optimization]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=889537</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image9-47.png" class="webfeedsFeaturedVisual" /></figure>This article analyzes the performance of regular versus limited relationships in DAX to understand the implications of using limited relationships. A relationship in a semantic model can be either regular or limited. Regular relationships are the most common: a one-to-many&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image9-47.png" class="webfeedsFeaturedVisual" /></figure><p>This article analyzes the performance of regular versus limited relationships in DAX to understand the implications of using limited relationships.<br />
<span id="more-889537"></span></p>
<p>A relationship in a semantic model can be either regular or limited. Regular relationships are the most common: a one-to-many relationship between two tables on the same data island is almost always a regular relationship. If the tables involved in the relationship are stored in different data islands, then the relationship is limited. We analyzed the semantic differences between the two types of relationships in previous articles: <a href="https://www.sqlbi.com/articles/understanding-blank-row-and-limited-relationships/">Understanding blank row and limited relationships</a> and <a href="https://www.sqlbi.com/articles/strong-and-weak-relationships-in-power-bi/">Regular and limited relationships in Power BI</a>.</p>
<p>Relationships between different data islands are the most common case of limited relationships. In that scenario, performance depends on multiple factors, most of which are not under the control of a DAX developer. Indeed, when mixing data from different data islands, the DAX formula engine must act as a bridge between them, resulting in complex execution plans. Besides, when two tables reside in different data islands, only limited relationships can connect them. Therefore, a performance comparison would not make sense, as there are no alternative options to link the tables.</p>
<p>However, a model can have limited relationships in the very special case of two tables stored in the same data island and connected by a many-to-many cardinality relationship. By nature, many-to-many cardinality relationships are limited. While they seem like a convenient way to link two tables when the key used to link them is not unique in either table, many-to-many cardinality relationships are extremely expensive, and a wise data modeler should limit their use strictly to cases where they are absolutely necessary. In this article, we analyze the differences between regular and limited relationships, focusing solely on performance.</p>
<h2>Introduction</h2>
<p>Choosing the most efficient type of relationship is important in data models because relationships are used every time the DAX engine needs to scan a table and group by a related table. A slow relationship slows down every measure that traverses it.</p>
<p>Limited relationships are shown in Power BI by displaying small gaps at the edges of the line that connects two tables. In the following figure, the relationship between <em>Product</em> and <em>Sales</em> is regular (no gaps), whereas the relationship between <em>Sales</em> and <em>Customer</em> is limited (gaps are present).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-123.png" width="800" /></p>
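<p>Besides the gaps shown in the diagram view, you can inspect relationship metadata directly with a DAX query. This is a minimal sketch using the <code>INFO.RELATIONSHIPS</code> information function (available in recent versions of the engine; the exact set of columns returned may vary by version):</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; title: ; snippet: Query; notranslate">
EVALUATE
INFO.RELATIONSHIPS ()
</pre>
<p>The result contains one row per relationship, with columns describing, among other things, the cardinality of each side.</p>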
<p>When Power BI creates a matrix like the following, grouping by <em>Customer[Country]</em> and computing the <em>Sales Amount</em> measure, it needs to scan <em>Sales</em> and group by the columns in <em>Customer</em>, using the relationship to perform the join.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-119.png" width="200" /></p>
<p>To test the difference in performance between the two types of relationships, we execute queries that traverse the relationship using a regular relationship first and a limited relationship next. We are not going to change anything: the relationship being used is the same as the one between <em>Customer</em> and <em>Sales</em>. The only difference will be the type of relationship: either regular or limited.</p>
<h2>Choosing the test database</h2>
<p>A quick heads-up before we move forward with the test: the downloadable demo for this article uses the usual Contoso 10K model, which has around 4,000 rows in the Sales table. Tests executed on such a tiny model are useless, because the execution time is so short that any noticeable difference may depend on external factors. Therefore, the timings shown in this article are related to the same model, but with 1.4 billion rows in the fact table. The conclusions are the same, but the numbers are so much larger that the difference is clear.</p>
<p>As an example, this simple query runs in three milliseconds on the small model, and it runs in 15,000 milliseconds on the large model:</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; highlight: [3]; title: ; snippet: Query; notranslate">
EVALUATE
SUMMARIZECOLUMNS (
    Customer&#x5B;Country],
    &quot;Sales&quot;, &#x5B;Sales Amount]
)
</pre>
<p>However, multiple executions of the same query may vary in timing: it can range from 3 to 7 milliseconds on the small model (more than double the time), whereas it stays around 15,000 milliseconds (+/- 300) on the large model. Bear that in mind when measuring performance: small models are pretty much useless for assessing the quality of your DAX or modeling choices. Always test on large models, with tens of millions of rows on the many side and millions of rows on the one side of the relationship!</p>
<h2>Testing simple measures</h2>
<p>Let us start measuring the baseline: the time required to compute <em>Sales Amount</em> without any relationship being involved. This measures the time needed to scan the <em>Sales</em> table:</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; highlight: [3]; title: ; snippet: Query; notranslate">
EVALUATE
SUMMARIZECOLUMNS (
    &quot;Sales&quot;, &#x5B;Sales Amount]
)
</pre>
<p>Here are the server timings.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-107.png" width="700" /></p>
<p>The interesting number is the storage engine CPU (SE CPU): more than 7 seconds. Because of parallelism, those 7 seconds are reduced to 134 milliseconds of execution waiting time (SE). However, when evaluating performance, we are interested in the SE CPU rather than the SE timings, because SE CPU provides a better picture of the real processing cost of an operation.</p>
<p>We now run the same query, grouping by <em>Customer[Country]</em>, and we evaluate the cost of traversing the relationship:</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; highlight: [3]; title: ; snippet: Query; notranslate">
EVALUATE
SUMMARIZECOLUMNS (
    Customer&#x5B;Country],
    &quot;Sales&quot;, &#x5B;Sales Amount]
)
</pre>
<p>As you can see, the execution time nearly doubled.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-101.png" width="700" /></p>
<p>You can see that the relationship is used within the VertiPaq engine, where there is a join between <em>Sales</em> and <em>Customer</em>.</p>
<p>In all its simplicity, this demo already shows how expensive relationships are. Scanning the <em>Sales</em> table takes 7 seconds, whereas scanning the same <em>Sales</em> table using the relationship to group by <em>Customer[Country]</em> adds another 7 seconds. Be mindful that the cost of a relationship strongly depends on the number of rows in the table on the one-side (<em>Customer</em>, in this case). <em>Customer</em> contains 1.8M rows. Using a smaller table like <em>Product</em> produces a faster execution time:</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; highlight: [3]; title: ; snippet: Query; notranslate">
EVALUATE
SUMMARIZECOLUMNS (
    Product&#x5B;Brand],
    &quot;Sales&quot;, &#x5B;Sales Amount]
)
</pre>
<p>Instead of 15 seconds, using <em>Product</em> as the one-side of the relationship produces a better result.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-86.png" width="700" /></p>
<p>Now that we have a better understanding of the baseline and the main factors to consider in a relationship, let us change the relationship between <em>Customer</em> and <em>Sales</em> from one-to-many to many-to-many, thus obtaining a limited relationship.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-78.png" width="600" /></p>
<p>If we execute the query that groups by <em>Customer[Country]</em> again, we notice two important details:</p>
<ul>
<li>There are now multiple storage engine queries executed in a batch. A many-to-many cardinality relationship cannot be pushed down to the storage engine as a single query.</li>
<li>The execution time is noticeably longer.</li>
</ul>
<p>Here are the server timings.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-66.png" width="700" /></p>
<p>The SE CPU skyrocketed from 15,000 to 138,000 milliseconds. What is most relevant is to understand what this number comprises. We know that out of 15,000 milliseconds, around 7,000 are needed to scan the <em>Sales</em> table. Therefore, the additional cost of the relationship is around 8,000 milliseconds. Similarly, out of 138,000 milliseconds, 7,000 are needed to scan the table, so the additional cost is 131,000 milliseconds. In other words, the many-to-many cardinality relationship makes traversing the relationship roughly 16 times more expensive (131,000 versus 8,000 milliseconds) than a regular relationship.</p>
<p>Be mindful that this price must be paid whenever the table is scanned. In a regular report with dozens of different measures being computed, all these additional costs add up, further slowing the report.</p>
<h2>Testing multiple measures</h2>
<p>An important consideration about many-to-many cardinality relationships is that they cannot be pushed down to the storage engine as simple joins. This can limit the optimization options available to the DAX engine. Let us see an example of this in the context of Fusion optimization. If you are not familiar with Fusion, you can find further information here: <a href="https://www.sqlbi.com/articles/introducing-horizontal-fusion-in-dax/">Introducing horizontal fusion in DAX</a>, and here: <a href="https://www.sqlbi.com/articles/optimizing-fusion-optimization-for-dax-measures/">Optimizing fusion optimization for DAX measures</a>.</p>
<p>The following query needs to compute <em>Sales Amount</em> for male and female customers, using two different columns for the result:</p>
<div class="dax-code-title">Query</div>
<pre class="brush: dax; highlight: [4,5]; title: ; snippet: Query; notranslate">
EVALUATE
SUMMARIZECOLUMNS (
    Customer&#x5B;Country],
    &quot;Male Sales&quot;, CALCULATE ( &#x5B;Sales Amount], Customer&#x5B;Gender] = &quot;Male&quot; ),
    &quot;Female Sales&quot;, CALCULATE ( &#x5B;Sales Amount], Customer&#x5B;Gender] = &quot;Female&quot; )    
)
</pre>
<p>Thanks to Fusion optimization, with a regular relationship, the entire query can be computed through a single storage engine query that retrieves sales grouped by country and gender.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image8-55.png" width="700" /></p>
<p>If the relationship is set to many-to-many cardinality, Fusion is disabled because the VertiPaq engine cannot retrieve the values for both male and female in a single query by using a limited relationship. The same query with a many-to-many cardinality relationship generates these timings.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image9-47.png" width="700" /></p>
<p>The storage engine needs to execute two complex batches: one for male and one for female. Needless to say, the more measures you have that cannot be merged with Fusion, the more complicated the entire plan becomes.</p>
<h2>Conclusions</h2>
<p>Many-to-many cardinality relationships are a great modeling feature, but they significantly slow down every query executed by the DAX engine. A wise DAX developer needs to know the price of many-to-many cardinality relationships and limit their use to only the scenarios where they are strictly necessary.</p>
<p>Specifically, because the price of a relationship is linked to the number of unique values in the column used for the relationship, many-to-many cardinality should be avoided as much as possible with large cardinalities (millions of unique values). At the same time, it may be a good solution for smaller cardinalities (preferably thousands or fewer), where the limited number of unique values involved reduces the additional overhead.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/analyzing-the-performance-of-limited-and-regular-relationships/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What happened in DAX in 2025</title>
		<link>https://www.sqlbi.com/tv/what-happened-in-dax-in-2025/</link>
					<comments>https://www.sqlbi.com/tv/what-happened-in-dax-in-2025/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 30 Dec 2025 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=889968</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/NxM823R3Tow/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>A recap of the news in the DAX and SQLBI world in 2025 and a preview of what is coming in 2026. Happy New Year! All links in blog post: https://www.sqlbi.com/blog/marco/2025/12/29/what-happened-in-the-dax-world-in-2025/]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/NxM823R3Tow/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>A recap of the news in the DAX and SQLBI world in 2025 and a preview of what is coming in 2026.<br />
Happy New Year!</p>
<p>All links in blog post: https://www.sqlbi.com/blog/marco/2025/12/29/what-happened-in-the-dax-world-in-2025/</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/what-happened-in-dax-in-2025/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What happened in the DAX world in 2025</title>
		<link>https://www.sqlbi.com/blog/marco/2025/12/29/what-happened-in-the-dax-world-in-2025/</link>
					<comments>https://www.sqlbi.com/blog/marco/2025/12/29/what-happened-in-the-dax-world-in-2025/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 29 Dec 2025 20:00:35 +0000</pubDate>
				<guid isPermaLink="false">https://www.sqlbi.com/?post_type=blogpost&#038;p=889889</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/dax2025.jpg" class="webfeedsFeaturedVisual" /></figure>It is time for our annual review of what happened in the DAX and SQLBI worlds, and what we should expect in 2026! New DAX features in 2025 This was a big year for DAX, with the public preview that&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/dax2025.jpg" class="webfeedsFeaturedVisual" /></figure><p>It is time for our annual review of what happened in the DAX and SQLBI worlds, and what we should expect in 2026!</p>
<p><span id="more-889889"></span></p>
<h2>New DAX features in 2025</h2>
<p>This was a big year for DAX, with the public preview that started in September for two new features: <a href="https://www.sqlbi.com/articles/introducing-user-defined-functions-in-dax/">user-defined functions (UDFs)</a> and <a href="https://www.sqlbi.com/articles/introducing-calendar-based-time-intelligence-in-dax/">calendar-based time intelligence</a>.</p>
<p>The <strong>user-defined functions are the most significant update to the DAX language since 2015</strong>, when Microsoft introduced <a href="https://www.sqlbi.com/articles/variables-in-dax-2015/">variables</a>. Can you believe that for more than five years we had to write DAX code without variables? Well, the user-defined functions we just mentioned will have an even bigger impact in the long term.</p>
<p>Five years from now, when you open a semantic model and find measures longer than 20 lines with no functions invoked, you will immediately think that such code must be “old”, or that it’s “weird”. It is the same feeling you may have today when you see no variables being used in a 40-line-long measure. More importantly, while you do need to understand variables to declare and use them, you do not need to understand all the details about functions to “consume” them. Therefore, <strong>I expect a proliferation of libraries of any kind</strong>. In large companies, complex algorithms and calculations will be written by a small number of expert developers and used by a large number of data analysts who may not have the same advanced DAX skills.</p>
<p><strong>Calendar-based time intelligence is a fantastic feature</strong> we should have received many years ago. It’s wonderful; it works well, even though it requires a little patience and an initial investment in time and effort. For a new model with non-trivial calendar requirements, it is the way to go. But if you have existing models and reports… <strong>You should not rush to migrate</strong>. There are performance advantages in specific scenarios, but overall, many people may not find the additional investment (in learning and refactoring of existing models and reports) worth the benefits. Expect more tooling coming to help you benefit from this feature, but more about this later.</p>
<h2>New DAX functions in 2025</h2>
<p>The 21 new functions that appeared in 2025 do not introduce new concepts; most are logical consequences of other features released this year. For example, the new Calendar-based time intelligence updated the parameters of all <a href="https://dax.guide/functions/time-intelligence/">time intelligence functions</a> (which now accept a Calendar argument) and included new functions that work with weeks and require a Calendar argument: <a href="https://dax.guide/closingbalanceweek/">CLOSINGBALANCEWEEK</a>, <a href="https://dax.guide/dateswtd/">DATESWTD</a>, <a href="https://dax.guide/endofweek/">ENDOFWEEK</a>, <a href="https://dax.guide/nextweek/">NEXTWEEK</a>, <a href="https://dax.guide/openingbalanceweek/">OPENINGBALANCEWEEK</a>, <a href="https://dax.guide/previousweek/">PREVIOUSWEEK</a>, <a href="https://dax.guide/startofweek/">STARTOFWEEK</a>, <a href="https://dax.guide/totalwtd/">TOTALWTD</a>.</p>
<p>We have several new <a href="https://dax.guide/functions/information/">information functions</a> to investigate the type of an expression, which are helpful in <a href="https://www.sqlbi.com/dax-user-defined-functions-udf/">DAX user-defined functions (UDFs)</a> that receive an argument without type enforcement. Because of the new names and aliases for the data types, we have multiple functions for the same type (e.g., INTEGER and INT64 are really the same). These are <a href="https://dax.guide/iscurrency/">ISCURRENCY</a>, <a href="https://dax.guide/isinteger/">ISINTEGER</a>, <a href="https://dax.guide/isboolean/">ISBOOLEAN</a>, <a href="https://dax.guide/isdatetime/">ISDATETIME</a>, <a href="https://dax.guide/isdecimal/">ISDECIMAL</a>, <a href="https://dax.guide/isdouble/">ISDOUBLE</a>, <a href="https://dax.guide/isint64/">ISINT64</a>, <a href="https://dax.guide/isnumeric/">ISNUMERIC</a>, <a href="https://dax.guide/isstring/">ISSTRING</a>. In the same group, we also have the new <a href="https://dax.guide/info.csdlmetadata/">INFO.CSDLMETADATA</a> and <a href="https://dax.guide/info.dependencies/">INFO.DEPENDENCIES</a> functions.</p>
<p>The only function added to the <a href="https://dax.guide/functions/visual-calculations/">visual calculations</a> is <a href="https://dax.guide/lookupwithtotals/">LOOKUPWITHTOTALS</a>. Finally, <a href="https://dax.guide/externalmeasure/">EXTERNALMEASURE</a> has been documented officially this year, but it had been in use since 2020, as it is used for composite models.</p>
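<p>As an example of how the new information functions can help inside a UDF, here is a minimal sketch. The function name and logic are hypothetical, and the code assumes the preview UDF syntax described in the articles linked above: the parameter is declared without a type, so the function validates it at run time with ISNUMERIC before formatting it.</p>
<pre><code>DEFINE
    FUNCTION ToPercentText = ( x ) =>
        -- Hypothetical helper: returns a formatted string only
        -- when the untyped argument is actually numeric
        IF ( ISNUMERIC ( x ), FORMAT ( x, "0.0%" ) )
EVALUATE
    { ToPercentText ( 0.253 ) }
</code></pre>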
<h2>What SQLBI delivered in 2025</h2>
<p>People are coming back to in-person events. This is a fact. While we don’t have as many local community events as we did before the pandemic, the trend is that they are growing. However, the consumption of technical content is moving in different directions. As we announced last year, we limited our in-person delivery time during 2025 because we had to produce new content:</p>
<ul>
<li>We completed the <strong>third edition of the book</strong>,<a href="https://www.sqlbi.com/books/the-definitive-guide-to-dax-third-edition/"><strong> The Definitive Guide to DAX</strong></a>. It is an entirely new book; we barely reused any parts from the previous edition. Why? Years of experience teaching DAX have given us more ideas for explaining core concepts to both newbies and experienced DAX practitioners. The book also includes many new chapters and covers the latest DAX features, including Calendar-based time intelligence and User-defined functions.</li>
<li>We have already started delivering the updated content of the <a href="https://www.sqlbi.com/training/dax/"><strong>Mastering DAX classroom course</strong></a>, which will have a new form for all deliveries in 2026.</li>
<li>We published <strong>new content</strong> inside <a href="https://www.sqlbi.com/p/plus/"><strong>SQLBI+</strong></a>, our subscription service for advanced content for developers who create semantic models for Power BI, Analysis Services, and Fabric. In 2025, we released two courses inside SQLBI+: <a href="https://www.sqlbi.com/whitepapers/security-in-tabular-semantic-models/">Security in Tabular Semantic Models</a> and <a href="https://www.sqlbi.com/whitepapers/ai-workflows-and-agentic-development-for-power-bi-an-initial-review/">AI Workflows and agentic development for Power BI</a>. Both courses offer a whitepaper along with hours of videos. We also released a new whitepaper, <a href="https://www.sqlbi.com/whitepapers/understanding-summarizecolumns/">Understanding SUMMARIZECOLUMNS</a>, which includes valuable, hard-to-find-elsewhere information that is vital if you plan to query a semantic model from outside Power BI or want to understand the inner workings of this powerful function.</li>
<li>We published more than<strong> <a href="https://www.sqlbi.com/articles/">30 new articles and blog posts</a></strong> at sqlbi.com. These represent high-quality, free learning content on SQLBI, which we produce to support the community and help students who cannot afford the paid content.</li>
<li>We released <strong><a href="https://daxlib.org/">DAX Lib</a></strong>, the first repository for libraries of DAX user-defined functions. It is free, open-source, and driven by community contributions. We are so happy about the enthusiastic adoption of the tool! There are already more than 30 ready-to-use libraries on DAX Lib, for a feature released only three months ago that is still in preview. This is definitely something that will grow a lot over the next year.</li>
<li>Our partner company, Tabular Tools, released <a href="https://www.daxoptimizer.com/free/"><strong>DAX Optimizer Basic</strong></a> for general availability. This is a new version of the full DAX Optimizer service. Everyone can evaluate their DAX code in a semantic model: no email, no login, no password, no installation.</li>
<li>In 2025, the <a href="https://www.youtube.com/c/SQLBI"><strong>SQLBI YouTube channel</strong></a> surpassed <strong>125,000 subscribers</strong>. We continued to publish a video for each corresponding article every other week, plus a few other videos with interviews and other content. That’s another source of free learning for students that may not be ready to invest in a video course just yet.</li>
<li>The <strong><a href="https://www.sqlbi.com/newsletter/">SQLBI Newsletter</a></strong> is now being sent out to <strong>275,000 subscribers</strong>! This year we also celebrated <a href="https://www.sqlbi.com/blog/marco/2025/07/28/a-few-thoughts-about-newsletter-300-and-ai/">300 newsletters</a> with a special issue dedicated to AI in BI, and a new comic strip that is still alive after several months. Will it survive until next year? We hope so!</li>
<li>Our sister company, OKVIZ, released <strong><a href="https://okviz.com/synoptic-panel/">Synoptic Panel v2</a></strong>, which turned out to be a huge hit. There are already 13 <a href="https://okviz.com/synoptic-panel/">use cases</a> showing very different solutions based on this visual: take a look at them; they are inspiring regardless of whether you use custom visuals!</li>
</ul>
<h2>What’s coming in 2026</h2>
<ul>
<li><strong>New Mastering DAX video course</strong>: We are currently recording the new <a href="https://www.sqlbi.com/p/mastering-dax-video-course/">Mastering DAX video course</a> that will be available by the end of March 2026. Everyone with an active license for the current Mastering DAX video course on the day the new content is released will be automatically upgraded.</li>
<li><strong>Classroom courses in 2026</strong>: We will offer more dates in new cities. This spring, we will be back in Houston and New York City, and we’ll go to Baltimore for the first time. We are also going back to Copenhagen, Zürich, and Amsterdam, and for the first time to Berlin. We are still planning the second half of 2026, which will also include other continents (!). Stay tuned, and let us know where you would like us to deliver in-person classes!</li>
<li>We are planning <strong>new content to deliver inside the </strong><a href="https://www.sqlbi.com/p/plus/"><strong>SQLBI+ subscription</strong></a>. We do not have exact dates to share, but time intelligence and relationships are two topics that deserve white papers and video courses.</li>
<li><strong>Regular content</strong>: expect new articles and videos every other week!</li>
<li><strong>Tools</strong>: we will update existing free tools like Bravo for Power BI to support DAX Lib and the new calendar-based time intelligence. There could also be something new. Well, not entirely new, but certainly welcome for those who cannot install external tools. Ok, no more spoilers!</li>
</ul>
<p>Make sure to receive regular updates about SQLBI by registering for <a href="https://www.sqlbi.com/newsletter/">our newsletter</a> and subscribing to our <a href="https://www.youtube.com/c/sqlbi">YouTube channel</a>.</p>
<p>Enjoy DAX and see you in 2026!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/blog/marco/2025/12/29/what-happened-in-the-dax-world-in-2025/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Dynamic pricing model for aircraft seat sales with Synoptic Panel and Power BI</title>
		<link>https://www.sqlbi.com/tv/synoptic-panel-02-seat-purchasing/</link>
					<comments>https://www.sqlbi.com/tv/synoptic-panel-02-seat-purchasing/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 23 Dec 2025 15:30:08 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?post_type=video&#038;p=886116</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/FrUqHSgqLJ0/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to analyze a dynamic pricing model for aircraft seat sales by using Synoptic Panel in Power BI. See how seat-level revenue data, usually shown in a table, can be turned into an interactive plane seat map. By binding data&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/FrUqHSgqLJ0/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to analyze a <strong>dynamic pricing model for aircraft seat sales</strong> by using <strong>Synoptic Panel</strong> in Power BI.<br />
See how seat-level revenue data, usually shown in a table, can be turned into an <strong>interactive plane seat map</strong>. By binding data to an SVG drawing, Synoptic Panel makes it easy to spot high- and low-performing seats, analyze patterns by row or class, and interactively select seats to aggregate results.</p>
<p>The complete use case provides more details about how to implement this solution and obtain the result shown in the video.</p>
<p><strong>Inside the case study, you will find:</strong></p>
<ul>
<li>How seat revenue data is modeled in Power BI</li>
<li>How an SVG plane layout is bound to data</li>
<li>How dynamic coloring highlights revenue and ticket classes</li>
<li>How selections on the visual drive analysis</li>
</ul>
<p><strong>Full case study and implementation details:</strong> <a href="https://okviz.com/usecase/plane-seats-synoptic-panel/">https://okviz.com/usecase/plane-seats-synoptic-panel/</a></p>
<p><strong>Try Synoptic Panel:</strong> <a href="https://okviz.com/synoptic-panel">https://okviz.com/synoptic-panel</a></p>
<p>This is a simple demo, but it reflects how many companies already use Synoptic Panel to visualize complex layouts in a clear, intuitive way.</p>
<p><strong>Share this post</strong> with teams or partners who could benefit!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/synoptic-panel-02-seat-purchasing/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Measuring the impact of promotions on sales in Power BI</title>
		<link>https://www.sqlbi.com/tv/measuring-the-impact-of-promotions-on-sales-in-power-bi/</link>
					<comments>https://www.sqlbi.com/tv/measuring-the-impact-of-promotions-on-sales-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Tue, 16 Dec 2025 11:00:00 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<guid isPermaLink="false">http://www.sqlbi.com/?post_type=video&#038;p=888870</guid>

					<description><![CDATA[<figure><img src="https://i.ytimg.com/vi/fAYsQ7cHX4I/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure>How to analyze the effectiveness of campaigns by separating attributed sales (directly linked to a campaign) from influenced sales (all sales of products participating in campaigns, regardless of attribution). The video describes the data model and DAX measures required.]]></description>
										<content:encoded><![CDATA[<figure><img src="https://i.ytimg.com/vi/fAYsQ7cHX4I/maxresdefault.jpg" class="webfeedsFeaturedVisual" /></figure><p>How to analyze the effectiveness of campaigns by separating attributed sales (directly linked to a campaign) from influenced sales (all sales of products participating in campaigns, regardless of attribution). The video describes the data model and DAX measures required.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/tv/measuring-the-impact-of-promotions-on-sales-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Measuring the impact of promotions on sales in Power BI</title>
		<link>https://www.sqlbi.com/articles/measuring-the-impact-of-promotions-on-sales-in-power-bi/</link>
					<comments>https://www.sqlbi.com/articles/measuring-the-impact-of-promotions-on-sales-in-power-bi/#respond</comments>
		
		<dc:creator><![CDATA[Marco Russo]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 20:00:53 +0000</pubDate>
				<category><![CDATA[DAX]]></category>
		<category><![CDATA[Power BI]]></category>
		<category><![CDATA[Data modeling]]></category>
		<category><![CDATA[DAX Patterns]]></category>
		<guid isPermaLink="false">https://www.sqlbi.com/?p=889276</guid>

					<description><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image7-65.png" class="webfeedsFeaturedVisual" /></figure>This article describes the data model and DAX measures to analyze the effectiveness of campaigns, by separating attributed sales (directly linked to a campaign) from influenced sales (all sales of products participating in campaigns, regardless of attribution). Marketing campaigns drive&#8230;]]></description>
										<content:encoded><![CDATA[<figure><img src="https://cdn.sqlbi.com/wp-content/uploads/image7-65.png" class="webfeedsFeaturedVisual" /></figure><p>This article describes the data model and DAX measures to analyze the effectiveness of campaigns, by separating attributed sales (directly linked to a campaign) from influenced sales (all sales of products participating in campaigns, regardless of attribution).<br />
<span id="more-889276"></span></p>
<p>Marketing campaigns drive sales, but not all sales can be directly attributed to a specific campaign. A customer might see a promotion and later purchase the product without using any discount code or campaign link. How do you measure this broader influence? This article presents data modeling and DAX measures to analyze campaign effectiveness by separating attributed sales (transactions for which we can clearly identify the one campaign that generated the sale) from influenced sales (all sales of products participating in one or more campaigns, regardless of whether the sale can be attributed to one exact campaign). The solution includes several measures using a many-to-many relationship between products and campaigns, combined with dynamic CROSSFILTER manipulation, to provide a complete view of the campaign’s impact.</p>
<p><strong><em>DISCLAIMER</em></strong><em>: The last two sections (“Interpreting the results” and “Other scenarios”) of this article have been written with the support of AI to assess the business scenarios where the data model and the measures described in this article can be used. The text has been manually edited and reviewed by the authors.</em></p>
<h2>Introduction</h2>
<p>Marketing campaigns are designed to drive sales, but measuring their actual impact is challenging. When a customer redeems a discount code or clicks through a promotional link, that sale is clearly attributed to the campaign. But what about the customers who see an advertisement, visit the store a week after the campaign is over, and then purchase the product at full price? The campaign influenced the purchase decision, yet the transaction has no direct link to it.</p>
<p>Traditional attribution models only count directly-tracked conversions, thus underestimating the full value of the campaign. To understand the whole picture, we need to answer several questions:</p>
<ul>
<li>How much revenue came directly from campaign transactions?</li>
<li>How much did campaign products sell in total, regardless of attribution?</li>
<li>Are there products benefiting from campaigns we are not currently analyzing?</li>
<li>How are products that are not part of any campaign performing?</li>
</ul>
<p>We will build a set of measures to identify:</p>
<ul>
<li><strong>Attributed Sales</strong>: Transactions explicitly linked to one campaign.</li>
<li><strong>Influenced Sales</strong>: All sales of products participating in campaigns.</li>
<li><strong>Unattributed Sales</strong>: Campaign product sales without direct attribution.</li>
<li><strong>Organic Sales</strong>: Sales of products that have never participated in any campaign.</li>
</ul>
<p>These measures provide a complete view of campaign effectiveness beyond simple attribution.</p>
<p><strong><em>IMPORTANT</em></strong><em>: For the sake of simplicity, this article does not include the period of validity for a campaign. Depending on your feedback, we may consider including a more complete template in a future article in </em><a href="https://www.daxpatterns.com/"><em>DAX Patterns</em></a><em>. Here, we wanted to focus on the data modeling and DAX code involved in moving from attributed to influenced sales, which can be helpful in situations described in the “Other scenarios” section later in the article.</em></p>
<h2>The data model</h2>
<p>The model is based on the standard Contoso dataset, focusing on the <em>Product</em> and <em>Sales</em> tables for this scenario. We added two additional tables: <em>Campaign</em> and <em>CampaignProducts</em>. The following diagram shows this part of the semantic model.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image1-122.png" width="674" /></p>
<p>The <strong><em>Sales</em></strong> table includes a <em>Campaign</em> column that stores which campaign, if any, was used for each transaction. This column has a direct relationship to the <em>Campaign</em> table, defining campaign attribution at the individual transaction level. When a sale is completed through a campaign (e.g., via a discount code or promotional link), the <em>Sales[Campaign]</em> column stores the campaign identifier. When a sale has no campaign involvement, the column is blank.</p>
<p><strong><em>Campaign</em></strong> is a simple dimension that lists available campaigns. Because the relationship between <em>Campaign</em> and <em>Sales</em> is a regular one-to-many relationship, transactions in <em>Sales</em> with a blank value in <em>Sales[Campaign]</em> (or with a value that does not match any row in <em>Campaign</em>) generate an additional blank row in the <em>Campaign</em> table. This is intentional. An alternative approach would be creating a fictitious &#8220;Not Assigned&#8221; campaign to avoid the blank row, but this would complicate the measures: we would need to reference that specific value whenever we want to isolate unattributed sales. Using the blank row instead allows us to rely on the standard ISBLANK function, making the DAX code easier to maintain.</p>
<p><strong><em>CampaignProducts</em></strong> is a bridge table implementing the many-to-many relationship between <em>Product</em> and <em>Campaign</em>: a product can participate in multiple campaigns, and a campaign can include multiple products. The relationship directions are set to filter from <em>Product</em> to <em>CampaignProducts</em> and from <em>Campaign</em> to <em>CampaignProducts</em>, but not the other way around. This means filtering a campaign does not automatically filter products through the bridge table. The measures we build will selectively activate bidirectional filtering using CROSSFILTER when we need to include all products participating in a campaign, regardless of transaction attribution in <em>Sales</em>.</p>
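<p>For illustration only, the bridge table could contain rows like the following hypothetical example, written here as a calculated table with DATATABLE. In a real model, <em>CampaignProducts</em> would normally be loaded from the source system; the product keys and campaign names below are made up.</p>
<div class="dax-code-title">Calculated table (hypothetical example)</div>
<pre class="brush: dax; title: ; snippet: Calculated table; notranslate">
CampaignProducts =
DATATABLE (
    "ProductKey", INTEGER,
    "Campaign", STRING,
    {
        { 1701, "Promo A" },   -- the same product participates
        { 1701, "Bundle B" },  -- in two different campaigns
        { 1702, "Mailing C" }  -- a campaign can list many products
    }
)
</pre>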
<h2>Building the measures</h2>
<p>In the introduction, we raised several questions about measuring campaign effectiveness. We now build a set of measures to answer them, starting from a simple base measure and progressively adding complexity to isolate different perspectives on sales.</p>
<h3>Sales Amount</h3>
<p><em>Sales Amount</em> is a standard measure in the Contoso sample model we use and serves as the base measure for our analysis. It calculates total revenue by multiplying quantity and net price for each transaction:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Sales Amount = 
SUMX ( 
    Sales, 
    Sales&#x5B;Quantity] * Sales&#x5B;Net Price] 
)
</pre>
<p>Each subsequent measure will modify the filter context around <em>Sales Amount</em> to isolate different types of sales.</p>
<h3>Attributed Sales</h3>
<p><em>Attributed Sales</em> isolates transactions directly linked to a campaign:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4]; title: ; snippet: Measure; table: Sales; notranslate">
Attributed Sales =
CALCULATE (
    &#x5B;Sales Amount],
    KEEPFILTERS ( NOT ISBLANK ( Campaign&#x5B;Campaign] ) )
)
</pre>
<p>The measure filters <em>Sales Amount</em> to include only transactions where the <em>Campaign[Campaign]</em> column is not blank, meaning a campaign was explicitly used for the sale. By checking <em>Campaign[Campaign]</em> rather than <em>Sales[Campaign]</em>, we ensure that transactions assigned to campaigns that do not exist in the <em>Campaign</em> table are treated as unattributed. If you used <em>Sales[Campaign]</em> instead, such data quality issues would produce numbers where this measure plus the following <em>Unattributed Sales</em> measure does not add up to <em>Sales Amount</em>.</p>
<p>We use KEEPFILTERS to preserve any campaign selection made in the report. Without KEEPFILTERS, the filter <em>NOT ISBLANK ( Campaign[Campaign] )</em> would override the existing campaign filter, showing all attributed sales regardless of which campaign is selected. With KEEPFILTERS, the two filters intersect: we see only attributed sales for the selected campaign(s).</p>
<h3>Unattributed Sales</h3>
<p><em>Unattributed Sales</em> captures the opposite: transactions with no campaign attribution:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [6,10,11]; title: ; snippet: Measure; table: Sales; notranslate">
Unattributed Sales = 
IF (
    ISINSCOPE ( Campaign&#x5B;Campaign] ),
    CALCULATE ( 
        &#x5B;Sales Amount],
        KEEPFILTERS ( ISBLANK ( Campaign&#x5B;Campaign] ) )
    ),
    CALCULATE ( 
        &#x5B;Sales Amount],
        REMOVEFILTERS ( Campaign ),
        ISBLANK ( Campaign&#x5B;Campaign] )
    )
)
</pre>
<p>This measure filters for the blank row in the <em>Campaign</em> table to isolate transactions where <em>Sales[Campaign]</em> is blank or corresponds to a non-existing campaign. These are sales that occurred without involving any tracked, valid campaign.</p>
<p>The ISINSCOPE function determines whether <em>Campaign</em> is part of the report&#8217;s grouping hierarchy. When <em>Campaign[Campaign]</em> is in scope – for example when viewing individual campaigns within a brand, even when you look at the data by <em>Campaign[Campaign Name]</em> because it is sorted by <em>Campaign[Campaign]</em> – the measure uses KEEPFILTERS to intersect the blank campaign filter with any existing selection. This returns a value only on the blank row itself.</p>
<p>When <em>Campaign[Campaign]</em> is not in scope (for example, when viewing only at the brand level), the measure uses REMOVEFILTERS to ignore any campaign selection and then filters for the blank campaign. This ensures that <em>Unattributed Sales</em> displays a value at the brand level even when the user has selected specific campaigns in a slicer. Without this logic, selecting one or more campaigns would hide unattributed sales at aggregated levels, making it harder to understand the complete picture.</p>
<p><em>Attributed Sales</em> and <em>Unattributed Sales</em> are complementary. For any given product, their sum equals the total <em>Sales Amount</em>, as shown in the following picture. This makes them helpful for understanding which portion of revenue is directly trackable to campaigns and which is not.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image2-118.png" width="665" /></p>
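<p>If you want to verify this complementarity in your own model, a simple sanity-check measure (the name is ours, not part of the model described in this article) should return zero for every product when no campaign filter is active:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Check Attribution =
&#x5B;Attributed Sales] + &#x5B;Unattributed Sales] - &#x5B;Sales Amount]
</pre>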
<p>The <em>Campaign</em> table can filter all the measures shown so far. However, if you filter a specific campaign, <em>Unattributed Sales</em> is always blank, as shown in the following picture.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image3-106.png" width="775" /></p>
<h3>Total Sales</h3>
<p><em>Total Sales</em> shows the complete revenue, ignoring any campaign filter applied in the report:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4]; title: ; snippet: Measure; table: Sales; notranslate">
Total Sales =
CALCULATE ( 
    &#x5B;Sales Amount],
    REMOVEFILTERS ( Campaign )
)
</pre>
<p>The measure uses REMOVEFILTERS to remove any selection on the <em>Campaign</em> table, ensuring we always see the full sales amount regardless of any selected campaign.</p>
<p>The following report illustrates how <em>Total Sales</em> differs from <em>Sales Amount</em>. The matrix shows <em>Product[Brand]</em> on rows with <em>Campaign[Campaign Name]</em> as a nested hierarchy. Looking at the Contoso brand: at the brand level, both <em>Sales Amount</em> and <em>Total Sales</em> show 581,299.33, which is the total sales for all Contoso products.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image4-100.png" width="717" /></p>
<p>When we expand Contoso to see individual campaigns, the behavior diverges. The blank row (highlighted with a green dashed border) represents unattributed sales:<em> Sales Amount</em> shows 206,609.47, while <em>Total Sales</em> remains at 581,299.33. For the individual campaigns (Promo A, Bundle B, and Mailing C, highlighted with red solid borders), <em>Sales Amount</em> shows only the transactions attributed to each campaign, while <em>Total Sales</em> consistently displays the brand total of 581,299.33.</p>
<p>The report also highlights that <em>Attributed Sales</em> shows the value of the transactions attributed to the selected campaigns; the measure is additive because each transaction can be attributed to only one campaign. <em>Unattributed Sales</em> shows the value you see in <em>Sales Amount</em> when the blank campaign is selected.</p>
<p>This pattern makes <em>Total Sales</em> a helpful reference point. When analyzing a specific campaign, you can compare <em>Attributed Sales</em> against <em>Total Sales</em> to understand what fraction of overall brand revenue came through that campaign. The measure provides a stable context regardless of the filtered campaigns.</p>
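<p>For example, the comparison described above could be packaged as a ratio. The following measure name is hypothetical and is only meant as a sketch; format it as a percentage in the model:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Attribution % =
DIVIDE ( &#x5B;Attributed Sales], &#x5B;Total Sales] )
</pre>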
<h3>Influenced Sales</h3>
<p><em>Influenced Sales</em> is the most complex measure. It captures all sales of products participating in the selected campaigns, regardless of whether each transaction was attributed to a campaign. To understand how it works, it is helpful to look at a report that compares<em> Sales Amount</em> and <em>Influenced Sales</em> for different campaigns.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image5-85.png" width="504" /></p>
<p>Looking at the Contoso brand, at the brand level, <em>Influenced Sales</em> shows 483,345.11, which is lower than <em>Sales Amount</em> (581,299.33). This is because <em>Influenced Sales</em> only includes products that participate in at least one campaign. Contoso products outside any campaign are excluded.</p>
<p>When we expand Contoso to see individual campaigns, the difference between the two measures becomes clear. For Promo A, <em>Sales Amount</em> shows 127,176.13, only the transactions directly attributed to that campaign. <em>Influenced Sales</em> shows 225,859.05, which corresponds to all sales of products participating in Promo A, regardless of whether the transaction was linked to the campaign. The gap between these two values represents potential campaign influence that is not captured by direct attribution.</p>
<p>Notice that the blank row shows no value for <em>Influenced Sales</em>. This is correct: since no campaign is selected for that row, there are no campaign products to evaluate. <em>Total Influenced Sales</em> shows the same value across all rows within a brand. This measure ignores the campaign filter and will be explained later in the article.</p>
<p>The <em>Influenced Sales</em> measure computes the <em>Sales Amount</em> value using the filter propagation described in the following diagram, where <em>Campaign</em> filters <em>Sales</em> by filtering the products (in <em>Product</em>) that belong to the selected campaigns (in <em>CampaignProducts</em>).</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image6-77.png" width="652" /></p>
<p>This is the implementation of the <em>Influenced Sales</em> measure:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [6,7,8]; title: ; snippet: Measure; table: Sales; notranslate">
Influenced Sales =
VAR HasCampaigns = COUNTROWS ( CampaignProducts ) &gt; 0
VAR Result =
    CALCULATE (
        DIVIDE ( &#x5B;Sales Amount] + 0, HasCampaigns ),
        VALUES ( Campaign&#x5B;Campaign] ), -- Forces selected or all campaigns
        CROSSFILTER ( CampaignProducts&#x5B;ProductKey], &#039;Product&#039;&#x5B;ProductKey], BOTH ),
        CROSSFILTER ( Sales&#x5B;Campaign], Campaign&#x5B;Campaign], NONE )
    )
RETURN Result
</pre>
<p>The measure requires two CROSSFILTER modifications to reshape how filters propagate through the model.</p>
<p>First, <em>CROSSFILTER ( CampaignProducts[ProductKey], &#8216;Product'[ProductKey], BOTH )</em> enables bidirectional filtering between <em>CampaignProducts</em> and <em>Product</em>. When a campaign is selected, the filter flows from Campaign → CampaignProducts → Product → Sales, capturing all transactions for products in that campaign.</p>
<p>Second, <em>CROSSFILTER ( Sales[Campaign], Campaign[Campaign], NONE )</em> disables the direct relationship between <em>Sales</em> and <em>Campaign</em>. Without this, the campaign filter would also flow directly to <em>Sales</em> via the <em>Campaign[Campaign]</em> column, limiting results to only attributed transactions. By disabling this path, we ensure the filter reaches <em>Sales</em> only through the <em>Product</em> relationship.</p>
<p><em>VALUES ( Campaign[Campaign] ) </em>enforces a filter based on the selected campaigns. This is necessary when no campaigns are selected: in that case, there would be no filter to propagate from <em>Campaign,</em> therefore all products would be visible. In this condition, for this measure, we want to filter only products that have at least one campaign.</p>
<p>Finally, the DIVIDE pattern handles an edge case: when a product participates in a campaign but has no sales, we want to return zero rather than blank. The expression <em>DIVIDE ( [Sales Amount] + 0, HasCampaigns )</em> returns 0 when <em>Sales Amount</em> is blank and the product belongs to at least one of the selected campaigns; when the product is not present in any of the selected campaigns, the result is blank (because of a division by zero), and the product is removed from the report if all the other measures displayed are also blank. This ensures that products in a campaign with zero sales remain visible in reports when analyzing campaign reach.</p>
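<p>The behavior of this DIVIDE pattern can be observed in isolation with a quick DAX query. The literal values below only simulate the two conditions; TRUE and FALSE stand in for the <em>HasCampaigns</em> variable:</p>
<div class="dax-code-title">DAX query</div>
<pre class="brush: dax; title: ; snippet: Query; notranslate">
EVALUATE
ROW (
    "In campaign, no sales", DIVIDE ( BLANK () + 0, TRUE () ),    -- returns 0
    "Not in selected campaigns", DIVIDE ( 100 + 0, FALSE () )    -- returns BLANK
)
</pre>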
<h3>Total Influenced Sales</h3>
<p><em>Total Influenced Sales</em> extends the previous measure to show influenced sales across all campaigns, ignoring the current campaign selection:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4,5]; title: ; snippet: Measure; table: Sales; notranslate">
Total Influenced Sales =
CALCULATE ( 
    &#x5B;Influenced Sales],
    REMOVEFILTERS ( Campaign ),
    NOT ISBLANK ( Campaign&#x5B;Campaign] )
)
</pre>
<p>The <em>NOT ISBLANK ( Campaign[Campaign] )</em> filter selects all campaigns, overriding any specific campaign filter in the report. The measure then calculates <em>Influenced Sales</em> for this broader selection.</p>
<p>This is useful for comparison: if <em>Total Influenced Sales</em> is significantly higher than <em>Influenced Sales</em> for the selected campaign, it indicates that other campaigns are also driving sales of overlapping products. Analysts can use this to identify cross-campaign effects or to ensure they are not missing important campaign interactions.</p>
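<p>As a rough way to quantify this overlap, you could compute the share of influenced sales that comes from campaigns other than the selected ones. The following measure name is hypothetical and is only a sketch of this idea:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; title: ; snippet: Measure; table: Sales; notranslate">
Other Campaigns Influence % =
DIVIDE (
    &#x5B;Total Influenced Sales] - &#x5B;Influenced Sales],
    &#x5B;Total Influenced Sales]
)
</pre>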
<p>The following report shows the Contoso brand expanded to the product level, with the Mailing C campaign selected. This view reveals how to interpret <em>Influenced Sales</em> and <em>Total Influenced Sales </em>and highlights several interesting scenarios.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image7-65.png" width="875" /></p>
<p><strong>Point 1</strong> (black dashed border) shows products with neither attributed sales nor influenced sales for Mailing C. The Cigarette Lighter Adapter products do not participate in the Mailing C campaign, yet their sales appear in Total Sales (79.97, 491.20, 381.45). <em>Sales Amount</em> is blank because none of these transactions were attributed to Mailing C, and <em>Influenced Sales</em> is also blank because these products are not part of the selected campaign. The difference emerges in <em>Total Influenced Sales</em>: the Black variant shows blank, while the Red and White variants show values matching <em>Total Sales</em>. This distinction occurs because the Red and White variants belong to at least one campaign (just not Mailing C), so their <em>Total Influenced Sales</em> equals their <em>Total Sales</em>. For any single product, if <em>Total Influenced Sales</em> has a value, it will match <em>Total Sales</em>. The Black variant does not participate in any campaign, which is why <em>Total Sales</em> has a value but <em>Total Influenced Sales</em> is blank.</p>
<p><strong>Point 2</strong> (green solid border) shows products where all measures align. The Contoso 16GB Mp5 Player M1600 Blue and Green variants have <em>Sales Amount</em>, <em>Influenced Sales</em>, and <em>Total Influenced Sales</em> all showing the same value. This indicates that these products are exclusively in Mailing C, all their sales were attributed to the campaign, and no other campaigns include them.</p>
<p><strong>Point 3</strong> (red dashed border) shows products in the campaign with zero sales. These products participate in at least one campaign other than Mailing C but have no transactions in <em>Sales</em>. <em>Total Influenced Sales</em> shows 0.00 rather than blank, making these products visible in the report. If the products were also included in the Mailing C campaign, they would show 0.00 also for the <em>Influenced Sales</em> measure. This is the edge case handled by the DIVIDE pattern in the <em>Influenced Sales</em> measure: we want to distinguish between “no sales” (0.00) and “not in campaign” (blank).</p>
<p><strong>Point 4</strong> (blue dashed border) shows products where <em>Influenced Sales</em> exceeds <em>Sales Amount</em>. For the Contoso 16GB New Generation MP5 Player M1650 White, <em>Influenced Sales</em> is 1,765.43, but <em>Sales Amount</em> is only 1,004.47. The Contoso 2G MP3 Player E200 Black shows a similar pattern: <em>Sales Amount</em> is 19.41, while <em>Influenced Sales</em> is 180.33. These gaps reveal the difference between attributed and unattributed sales for products in the selected campaign. When a product belongs to the selected campaign (Mailing C in this example), <em>Total Sales</em>, <em>Influenced Sales</em>, and <em>Total Influenced Sales</em> will always match at the product level, as shown for these highlighted products. However, at the Contoso brand level, the values differ because they aggregate products with different conditions, as illustrated in the previous points.</p>
<h3>Organic Sales</h3>
<p><em>Organic Sales</em> isolates revenue from products that have never participated in any campaign:</p>
<div class="dax-code-title">Measure in Sales table</div>
<pre class="brush: dax; highlight: [4,8,10,15,16]; title: ; snippet: Measure; table: Sales; notranslate">
Organic Sales = 
IF (
    NOT ISINSCOPE ( Campaign&#x5B;Campaign] ),
    VAR ProductsWithoutCampaigns =
        CALCULATETABLE (
            FILTER (
                &#039;Product&#039;,
                ISEMPTY ( RELATEDTABLE ( CampaignProducts ) )
            ),
            REMOVEFILTERS ( Campaign )
        )
    VAR Result = 
        CALCULATE (
            &#x5B;Sales Amount],
            ProductsWithoutCampaigns,
            REMOVEFILTERS ( Campaign )
        )
    RETURN Result
)
</pre>
<p>The measure first identifies products with no campaign participation (<em>ProductsWithoutCampaigns</em>). The CALCULATETABLE expression iterates over <em>Product</em>, keeping only the products with no related rows in <em>CampaignProducts</em>. We apply REMOVEFILTERS on <em>Campaign</em> to ensure this evaluation considers all campaigns, not just those selected in the report. Product sales should be regarded as “organic” only if the product has never appeared in any campaign.</p>
<p>The second CALCULATE then computes<em> Sales Amount</em> for these products only and stores the value in <em>Result</em>. In this case, REMOVEFILTERS on <em>Campaign</em> ensures we ignore any campaign selection.</p>
<p>The initial ISINSCOPE check ensures that the measure returns a value only when <em>Campaign</em> is not part of the report&#8217;s grouping hierarchy. When users filter campaigns through a slicer or other external filter, the measure still displays results because it ignores the campaign selection. However, when <em>Campaign</em> appears as a row or column in the visual, the measure returns a blank value to avoid confusion, since organic products, by definition, do not belong to any campaign.</p>
<p>Conceptually, <em>Organic Sales</em> equals the difference between <em>Total Sales</em> and <em>Total Influenced Sales</em> when using additive measures like <em>Sales Amount</em>, as demonstrated in the following report, where <em>Check Organic</em> is computed using a visual calculation:</p>
<div class="dax-code-title">Visual calculation</div>
<pre class="brush: dax; title: ; snippet: Visual calculation; notranslate">
Check Organic = &#x5B;Total Sales] - &#x5B;Total Influenced Sales]
</pre>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image8-54.png" width="565" /></p>
<p>However, it is a good idea to know how to compute it correctly for any measure, including non-additive ones, using the code shown in the <em>Organic Sales</em> measure.</p>
<p><em>Organic Sales</em> serves as a baseline. These products have no campaign exposure, so their performance reflects purely organic demand. Comparing <em>Organic Sales</em> trends to <em>Influenced Sales</em> for a group of products can help quantify the overall lift generated by campaign activity. For example, consider the following report.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image9-46.png" width="885" /></p>
<p>The report shows all measures together for two campaigns: Bundle B and Mailing C. At the <strong>brand level</strong> (like Contoso), each measure provides a distinct perspective. <em>Total Sales</em> shows overall revenue regardless of campaign filters. <em>Organic Sales</em> isolates products outside any campaign, serving as a baseline for non-promoted performance. <em>Unattributed Sales</em> captures transactions with no campaign link, displayed even though specific campaigns are selected in the slicer. <em>Attributed Sales</em> shows revenue directly tracked to the selected campaigns, while <em>Influenced Sales</em> expands this to include all sales of products participating in those campaigns. <em>Total Influenced Sales</em> provides a reference across all campaigns, which helps identify whether products in the selected campaigns also belong to other campaigns.</p>
<p>Expanding the Contoso brand reveals <strong>campaign-level detail</strong>. <em>Organic Sales</em> and <em>Unattributed Sales</em> display blank at this level because they are not meaningful when a specific campaign is in scope. <em>Total Sales</em> repeats the brand total since it ignores campaign filters. <em>Attributed Sales</em> and <em>Influenced Sales</em> show campaign-specific values, while <em>Total Influenced Sales</em> remains constant and reflects the broader amount influenced by all existing campaigns.</p>
<p>Reading across the columns, analysts can quickly assess campaign performance: the gap between <em>Attributed Sales</em> and <em>Influenced Sales</em> reveals untracked potential impact, <em>Organic Sales</em> provides a control group for comparison, and <em>Total Influenced Sales</em> highlights cross-campaign overlap.</p>
<h2>Interpreting the results</h2>
<p>The measures we built provide different perspectives on campaign effectiveness. Reading them together reveals insights that no single measure could provide on its own.</p>
<h3>How to read the measures together</h3>
<p>Start with <strong><em>Sales Amount</em></strong> as your baseline: this is the total revenue under the current filter context, including any campaign selection. When a campaign is selected, <em>Sales Amount</em> shows only the transactions attributed to that campaign. However, when you combine different measures in the same report, you may want to use <strong><em>Attributed Sales</em></strong> instead of <em>Sales Amount</em> to simplify comparisons with other measures.</p>
<p>Compare <em>Attributed Sales</em> to <strong><em>Influenced Sales</em></strong> to understand the broader reach. <em>Influenced Sales</em> includes all revenue from products participating in the selected campaign, regardless of attribution. If <em>Influenced Sales</em> is significantly higher than <em>Attributed Sales</em>, the campaign products are selling well, but many transactions are not being tracked as campaign sales. This could indicate that customers are exposed to the campaign but complete purchases through other channels.</p>
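<p>If reports need to show this gap explicitly, it can be exposed as a measure of its own. The following is a minimal sketch that assumes the <em>Attributed Sales</em> and <em>Influenced Sales</em> measures defined earlier in this article; the subtraction is meaningful only for additive measures:</p>
<div class="dax-code-title">Hypothetical measure</div>
<pre class="brush: dax; title: ; snippet: Hypothetical measure; notranslate">
Untracked Influenced Sales = &#x5B;Influenced Sales] - &#x5B;Attributed Sales]
</pre>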
<p>Use <strong><em>Total Sales</em></strong> as a reference point. This measure ignores campaign filters entirely, showing the complete revenue picture. Comparing <em>Attributed Sales</em> to <em>Total Sales</em> reveals the fraction of overall revenue directly attributable to campaigns.</p>
<p><strong><em>Total Influenced Sales</em></strong> helps identify cross-campaign effects. If <em>Total Influenced Sales</em> is higher than <em>Influenced Sales</em> for a selected campaign, other campaigns also include some of the same products. This overlap might indicate redundant campaign targeting or deliberate cross-promotion strategies.</p>
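<p>For additive measures, the revenue influenced only by campaigns other than the selected ones can be sketched as the difference between the two measures defined earlier in this article:</p>
<div class="dax-code-title">Hypothetical measure</div>
<pre class="brush: dax; title: ; snippet: Hypothetical measure; notranslate">
Other Campaigns Influenced Sales = &#x5B;Total Influenced Sales] - &#x5B;Influenced Sales]
</pre>
<p>A value greater than zero signals that products outside the selected campaigns are also part of other campaign activity.</p>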
<p>Finally, <strong><em>Organic Sales</em></strong> isolates products outside any campaign activity. This provides a control group for evaluating whether campaigns are genuinely lifting sales or simply capturing demand that would have occurred anyway.</p>
<h3>Example analysis: comparing Attributed vs Influenced</h3>
<p>Consider a scenario where a brand shows the following values for a selected campaign.</p>
<p><img decoding="async" src="https://cdn.sqlbi.com/wp-content/uploads/image10-41.png" width="434" /></p>
<p><em>Attributed Sales</em> represents only 23% of total brand revenue (<em>Total Sales</em>) and 59% of <em>Influenced Sales</em>. This means that 41% of campaign product sales occurred without direct attribution. Several interpretations are possible: customers may have seen the campaign and purchased later without using a promo code, or the campaign may have raised awareness that led to purchases across different channels. The gap between <em>Attributed Sales</em> and <em>Influenced Sales</em> quantifies this untracked potential impact.</p>
<p>If the business goal is to maximize tracked conversions, this gap suggests an opportunity to improve attribution mechanisms. If the goal is brand awareness, the higher <em>Influenced Sales</em> figure may be the more relevant success metric.</p>
<h3>Using <em>Organic Sales</em> as a baseline</h3>
<p><em>Organic Sales</em> measures revenue from products that have never participated in any campaign. This baseline serves two purposes.</p>
<p>First, it provides a comparison point for campaign product performance. If <em>Influenced Sales</em> for campaign products grows faster than <em>Organic Sales</em> over time, campaigns may be generating incremental lift. If both grow at similar rates, the campaigns might simply be capturing existing demand rather than creating new sales.</p>
<p>Second, <em>Organic Sales</em> helps calibrate expectations. Products outside campaigns represent unassisted performance. Comparing per-product averages between campaign and non-campaign products can reveal whether campaign selection criteria are effective. If products without campaigns outperform those with campaigns, the targeting strategy may need revision.</p>
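<p>A per-product average can be sketched as follows. The <em>'Product'&#x5B;ProductKey]</em> column name is an assumption and must be adapted to the actual model; DISTINCTCOUNT considers only the products visible in the current filter context, including products without sales:</p>
<div class="dax-code-title">Hypothetical measure</div>
<pre class="brush: dax; title: ; snippet: Hypothetical measure; notranslate">
Sales per Product = DIVIDE ( &#x5B;Sales Amount], DISTINCTCOUNT ( 'Product'&#x5B;ProductKey] ) )
</pre>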
<h2>Other scenarios</h2>
<p>The pattern described in this article applies to any situation where you need to measure both direct attribution and broader influence. The key requirement is a many-to-many relationship between products (or other entities) and some grouping that may drive sales without explicit tracking. Here are some examples.</p>
<p><strong>Store events or in-store displays.</strong> Retailers often feature products in special displays or promotional events. A customer might notice a product at an endcap display but purchase it later during a regular shopping trip, with no record linking the sale to the display. Comparing attributed sales (transactions during the event) to influenced sales (all sales of featured products) reveals the display’s broader impact.</p>
<p><strong>Email marketing.</strong> Products highlighted in newsletters may generate immediate clicks, but they also build awareness that leads to purchases days or weeks later through other channels. The influenced sales measure captures this delayed effect that click-through attribution misses.</p>
<p><strong>Influencer partnerships.</strong> When influencers promote products, followers often purchase through general channels rather than tracked affiliate links. Measuring total sales of promoted products against attributed affiliate conversions quantifies the influencer’s true reach.</p>
<p><strong>Seasonal or thematic collections.</strong> Retailers group products into collections like &#8220;Summer Essentials&#8221; or &#8220;Back to School.&#8221; These collections drive traffic and awareness, but customers may buy individual items without any collection-level tracking. <em>Influenced Sales</em> for a collection shows its overall contribution to revenue.</p>
<p><strong>B2B account-based marketing.</strong> In B2B contexts, marketing campaigns target specific accounts. Deals closed by sales teams may not carry campaign attribution even when marketing activity influenced the relationship. Comparing the value of attributed deals to total revenue from targeted accounts measures marketing’s broader contribution.</p>
<p><strong>Loyalty program analysis.</strong> Featured products in loyalty programs may see increased purchases from members who never redeem points or rewards for those specific items. <em>Influenced Sales</em> shows total member purchases of featured products, regardless of whether rewards are redeemed.</p>
<h2>Conclusions</h2>
<p>This article presented a pattern for measuring campaign effectiveness beyond direct attribution. The solution relies on a many-to-many relationship between products and campaigns, implemented through a bridge table, combined with the selective use of CROSSFILTER to control how filters propagate through the model.</p>
<p>The article includes several measures that serve different questions:</p>
<ul>
<li><em>Attributed Sales</em> answers “What did the campaign directly generate?”</li>
<li><em>Influenced Sales</em> answers “How did campaign products perform overall?”</li>
<li><em>Organic Sales</em> provides a baseline for products outside campaign activity.</li>
</ul>
<p>You do not have to use all the measures presented; choose only those that meet your requirements!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.sqlbi.com/articles/measuring-the-impact-of-promotions-on-sales-in-power-bi/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
