<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Trifacta – People. Transforming. Data</title>
	<subtitle type="text">A Trifacta Podcast on all things modern data management</subtitle>

	<updated>2023-01-05T23:32:23Z</updated>

	<link rel="alternate" type="text/html" href="https://www.trifacta.com/" />
	<id>https://www.trifacta.com/feed/atom/</id>
	<link rel="self" type="application/atom+xml" href="https://www.trifacta.com/feed/atom/" />

	<generator uri="https://wordpress.org/" version="6.1.1">WordPress</generator>
	<entry>
		<author>
			<name>Trifacta</name>
					</author>

		<title type="html"><![CDATA[What’s New in Designer Cloud 9.7]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-designer-cloud-9-7/" />

		<id>https://www.trifacta.com/?p=62110</id>
		<updated>2023-01-05T23:32:23Z</updated>
		<published>2022-12-24T19:01:39Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[What’s New in 9.7 We’re excited to share our latest capabilities from the Designer Cloud and Google Cloud Dataprep 9.7 release. As always, there’s a wide range of new features to discuss: Restart Plans from Failed Tasks In 9.7, if a plan fails, users are now able to restart the plan from the first point of failure, running only the [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-designer-cloud-9-7/"><![CDATA[<h2><strong>What’s New in 9.7</strong></h2>
<p>We’re excited to share our latest capabilities from the Designer Cloud and Google Cloud Dataprep 9.7 release. As always, there’s a wide range of new features to discuss:</p>
<p><strong>Restart Plans from Failed Tasks</strong></p>
<p>In 9.7, if a plan fails, users are now able to restart the plan from the first point of failure, running only the failed task and its downstream tasks. This means users no longer need to rerun an entire plan in the event of failure, saving resources and allowing for faster execution.</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-62112" src="http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud1.png" alt="" width="673" height="328" data-wp-pid="62112" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud1.png 673w, http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud1-300x146.png 300w" sizes="(max-width: 673px) 100vw, 673px" /></p>
<p><strong>Toggle Histogram Option<br />
</strong></p>
<p>Users can now toggle the Histograms and Data Quality Bar in the Transformer view, allowing for faster performance when conducting quick updates – especially when working with larger data samples. The column histograms can be enabled or disabled from the Transformer page. To activate this feature, it must first be enabled in workspace settings (Experimental Settings &gt; Enable/Disable Data Grid from view options).</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-62113" src="http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud3.png" alt="" width="720" height="273" data-wp-pid="62113" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud3.png 720w, http://s26597.pcdn.co/wp-content/uploads/2022/12/dcloud3-300x114.png 300w" sizes="(max-width: 720px) 100vw, 720px" /></p>
<p><strong>Schema Drift Detection Updates:<br />
</strong></p>
<p>In 9.7, existing column drift detection capabilities have been extended to support the detection of new, removed, or moved columns in delimited source files (such as CSV, TSV, and pipe-delimited files). With schema drift detection enabled, users are always notified about schema changes, and if the corresponding configuration is enabled, jobs are stopped automatically when a schema change is detected.</p>
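<p>Conceptually, this kind of column drift detection boils down to comparing the current file header against the last known schema. The following Python sketch is purely illustrative (it is not Designer Cloud internals; the <code>detect_schema_drift</code> helper is hypothetical), but it shows what “new, removed, or moved columns” means in practice:</p>

```python
def detect_schema_drift(known_columns, current_columns):
    """Report added, removed, and moved columns between two header lists."""
    known_set, current_set = set(known_columns), set(current_columns)
    added = [c for c in current_columns if c not in known_set]
    removed = [c for c in known_columns if c not in current_set]
    # A column has "moved" if it appears in both headers but at a new position.
    shared = [c for c in known_columns if c in current_set]
    moved = [c for c in shared
             if known_columns.index(c) != current_columns.index(c)]
    return {"added": added, "removed": removed, "moved": moved}

print(detect_schema_drift(["id", "name", "email"], ["name", "id", "phone"]))
```

<p>A real implementation would also need to handle renamed columns and type changes, but the add/remove/move comparison above captures the cases the release notes describe.</p>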
<p><strong>Users Can Now Cancel Plan Runs</strong></p>
<p>Users with viewer permission on plans can now cancel plan runs from the Run Details page. This allows users to cancel plans without needing to contact an admin.</p>
<p><strong>Support for Ubuntu 20.04<br />
</strong></p>
<p>With 9.7, Designer Cloud will have support for Ubuntu 20.04 (upgraded from Ubuntu 18.04).</p>
<p><strong>Google Cloud Dataprep Specific Features:</strong></p>
<p>In 9.7, we have additional updates specific to Google Cloud Dataprep:</p>
<ul>
<li><strong>Support for Complex Data Types When Publishing to BigQuery</strong>: Users can now publish Objects and arrays of Objects to BigQuery Structs. Previously, these were published as String values.</li>
<li><strong>In-VPC Support for Conversion Service for Private Data Planes</strong>: 9.7 adds support for running the Conversion Service in containers within Google private data planes.</li>
<li><strong>Default Settings for Dataflow Execution</strong>: We’ve added the ability to create default settings at the workspace level for Dataflow Execution, allowing admins to skip the step of manually assigning settings to each new user. With this update, admins can also lock the ability to override default settings, giving them full control of all execution settings if needed.</li>
</ul>
<p><strong>New connector with 9.7 Release</strong></p>
<p>We continue our journey to help you connect to any data source, enabling additional use cases. With our 9.7 release, we support the following new connector:</p>
<ul>
<li><strong>AlloyDB (Read and Write)</strong></li>
</ul>
<p>For more information, see <a href="https://docs.trifacta.com/display/DP/Early+Preview+Connection+Types">Early Preview Connection Types</a>. You can learn all about our <a href="https://community.trifacta.com/s/article/connectivity-updates">connectivity updates here</a>.</p>
<p>If you haven’t done it already, it’s a great time to <a href="https://www.trifacta.com/start-wrangling/">sign up for a free trial</a> with Designer Cloud. Join us today on your journey to the cloud.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Shrila Senthil</name>
					</author>

		<title type="html"><![CDATA[Easily Send Messages Through Microsoft Teams Using Plans&#8217; HTTP Task]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/easily-integrate-plans-and-send-messages-through-microsoft-teams-using-alteryx-designer-clouds-http-task/" />

		<id>https://www.trifacta.com/?p=61834</id>
		<updated>2022-12-28T19:20:30Z</updated>
		<published>2022-12-07T10:30:05Z</published>
		<category scheme="https://www.trifacta.com/" term="Tips &amp; Tricks" />
		<summary type="html"><![CDATA[Plans offer a variety of features that allow our users to orchestrate data pipelines. One of the key elements within Plans is the ability to connect with external messaging applications so that you can automatically communicate information to other platforms whenever you’d like. Alteryx Designer Cloud has a native integration with Slack, making it easy [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/easily-integrate-plans-and-send-messages-through-microsoft-teams-using-alteryx-designer-clouds-http-task/"><![CDATA[<p><a href="https://docs.trifacta.com/display/DP/Create+a+Plan">Plans</a> offer a variety of features that allow our users to orchestrate data pipelines. One of the key elements within Plans is the ability to connect with external messaging applications so that you can automatically communicate information to other platforms whenever you’d like.</p>
<p>Alteryx Designer Cloud has a <a href="https://help.trifacta.com/en/articles/5552961-how-to-create-a-slack-task-in-your-plan">native integration</a> with Slack, making it easy for you to deliver messages to accessible Slack channels. But have you ever wondered how you can do this with other platforms? We’ll walk you through it! After reading this article, you will be able to use our built-in <a href="https://docs.trifacta.com/display/DP/Create+HTTP+Task">HTTP Task</a> to send messages through Microsoft Teams as well.</p>
<h4><strong>The Challenge: </strong>Let’s imagine you’ve been using Alteryx Designer Cloud to send summaries of your data to your sales department via Slack for a while now. You have recently switched to a different role, and your new team communicates via Microsoft Teams. You now need to send similar Plan updates to specific Teams channels.</h4>
<p>Your first flow takes in data and transforms it into a clean and usable format. It removes duplicate values, calculates averages, identifies max and min points in the data, etc. If the flow runs successfully, it will update our general dashboard with these new values. If it fails, you want it to send the data scientists’ Teams channel all the information from that Plan run they need to understand and fix the problem.</p>
<h4><strong>The Solution: </strong>To use the HTTP Task to connect with Teams, you will need to first obtain the correct endpoint URL. Using <a href="https://learn.microsoft.com/en-us/graph/api/chatmessage-post?view=graph-rest-1.0&amp;tabs=http">this</a> guide, you can see that this is the correct URL format:</h4>
<ul>
<li>POST https://graph.microsoft.com/v1.0/teams/{team-id}/channels/{channel-id}/messages</li>
</ul>
<p>From here, you can obtain your Team and Channel ID in two different ways. The first is to go directly through the Microsoft Teams app, as explained <a href="https://www.sharepointeurope.com/how-to-fetch-the-teams-id-and-channel-id-for-microsoft-teams/">here</a>. The other option is to use Microsoft’s <a href="https://developer.microsoft.com/en-us/graph/graph-explorer">Graph Explorer</a>, where you will input the following links (using “GET”) and copy the <u>correct ID</u> from the Response Preview.</p>
<ul>
<li>Team ID: <a href="https://graph.microsoft.com/v1.0/me/joinedTeams">https://graph.microsoft.com/v1.0/me/joinedTeams</a></li>
<li>Channel ID: <a href="https://graph.microsoft.com/v1.0/teams/%7Bteam-id%7D/channels">https://graph.microsoft.com/v1.0/teams/<strong>{team-id}</strong>/channels</a></li>
</ul>
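<p>For example, once the <code>/me/joinedTeams</code> call returns its JSON response, you can pick out the team’s <code>id</code> by its display name. This Python sketch is illustrative (the <code>find_team_id</code> helper and sample payload are made up for this post), though the <code>value</code> array with <code>id</code>/<code>displayName</code> fields matches the standard Graph response shape:</p>

```python
import json

def find_team_id(joined_teams_json, team_name):
    """Return the id of the team whose displayName matches, or None."""
    data = json.loads(joined_teams_json)
    for team in data.get("value", []):  # Graph wraps results in a "value" array
        if team.get("displayName") == team_name:
            return team["id"]
    return None

# Abbreviated sample of a /me/joinedTeams response:
sample = '{"value": [{"id": "19:abc123", "displayName": "Data Science"}]}'
print(find_team_id(sample, "Data Science"))  # 19:abc123
```

<p>The same approach works for the channels call: list the channels for a team, then match on the channel’s display name.</p>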
<p><em>[<strong>Note</strong>: you need permission to access information from specific chats or channels – if you run into any permission errors, it means you don’t have adequate permissions for the Teams channel you’re using]</em></p>
<p>Your next step is to add headers, as explained in <a href="https://learn.microsoft.com/en-us/graph/api/chatmessage-post?view=graph-rest-1.0&amp;tabs=http">this</a> guide. You can copy your access token from the <a href="https://developer.microsoft.com/en-us/graph/graph-explorer">Graph Explorer</a> and input it as one of the header values: Bearer {access token}</p>
<p>Finally, you must add a body – this is the message that will be sent to your team. Fortunately, you can easily use metadata from the Plans to send them specific information from the run that can help them identify the cause of the problem.</p>
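<p>Putting the pieces together (endpoint URL, Bearer header, and JSON body), the request the HTTP Task sends can be sketched in Python. This is a minimal illustration rather than Designer Cloud code; <code>build_teams_message_request</code> is a hypothetical helper, and only the URL shape and <code>chatMessage</code> body format come from the Microsoft Graph documentation linked above:</p>

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_teams_message_request(team_id, channel_id, access_token, text):
    """Assemble URL, headers, and body for POST /teams/{id}/channels/{id}/messages."""
    url = f"{GRAPH_BASE}/teams/{team_id}/channels/{channel_id}/messages"
    headers = {
        "Authorization": f"Bearer {access_token}",  # token copied from Graph Explorer
        "Content-Type": "application/json",
    }
    # Minimal chatMessage payload: a plain-text message body
    body = json.dumps({"body": {"contentType": "text", "content": text}})
    return url, headers, body

url, headers, body = build_teams_message_request(
    "my-team-id", "my-channel-id", "my-token", "Plan run failed: see run details."
)
print(url)
```

<p>Sending it is then an ordinary HTTPS POST; in Designer Cloud itself, the same URL, headers, and body simply go into the HTTP Task’s configuration fields.</p>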
<p>Once your configuration is saved, hit ‘Test’ in the Response tab and check the response to confirm that your configuration works.</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-62101" src="http://s26597.pcdn.co/wp-content/uploads/2022/12/MicrosoftTeams-image-1.png" alt="" width="488" height="835" data-wp-pid="62101" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/12/MicrosoftTeams-image-1.png 488w, http://s26597.pcdn.co/wp-content/uploads/2022/12/MicrosoftTeams-image-1-175x300.png 175w" sizes="(max-width: 488px) 100vw, 488px" /></p>
<p>That’s it &#8211; Mission accomplished! You’ve now learned how to successfully integrate your Plans with Microsoft Teams using Alteryx Designer Cloud’s HTTP Task! Now, you can automatically have error reports sent to your team instead of having to manually send them this information.</p>
<p>Read more about Plans here: <a href="https://www.trifacta.com/blog/plans-for-data-pipelines/">Plans &#8211; The Command Center for Orchestrating Data Pipelines</a></p>
<p>Read more about our HTTP Task here: <a href="https://docs.trifacta.com/display/DP/Create+HTTP+Task">Create HTTP Task</a></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Paul Warburg</name>
					</author>

		<title type="html"><![CDATA[What’s New in Designer Cloud 9.6]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-6/" />

		<id>https://www.trifacta.com/?p=61816</id>
		<updated>2022-12-14T17:52:58Z</updated>
		<published>2022-11-29T23:28:56Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[What’s New in 9.6 We’re excited to share our latest capabilities from the Designer Cloud and Google Cloud Dataprep 9.6 release. As always, there’s a wide range of new features to discuss:   New Support for Snowflake on Azure For customers that have hosted Designer Cloud on AWS, it is now possible to read and write [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-6/"><![CDATA[<h2><strong>What’s New in 9.6</strong></h2>
<p>We’re excited to share our latest capabilities from the Designer Cloud and Google Cloud Dataprep 9.6 release. As always, there’s a wide range of new features to discuss:</p>
<h3><strong>New Support for Snowflake on Azure</strong></h3>
<p>For customers that have hosted Designer Cloud on AWS, it is now possible to read and write data from Snowflake on Azure using JDBC connectivity (including pushdown support). This gives additional options to our customers who operate in a multi-cloud environment. This update requires Designer Cloud to be hosted on AWS (there is not a fully hosted SaaS offering on Azure at this time).</p>
<h3><strong>Flow Editors Can Now Edit Custom SQL Datasets</strong></h3>
<p>Editors of flows can now be granted edit access to modify SQL queries and custom SQL datasets as needed (previously, editors could only be granted view access). This allows users to tweak SQL as needed without going back to the original owner of a dataset. To access this feature, editors can right-click a SQL dataset in a flow and click “Edit custom SQL”.</p>
<p><img decoding="async" loading="lazy" class="alignnone size-medium wp-image-61818" src="http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-edit-custom-300x250.png" alt="" width="300" height="250" data-wp-pid="61818" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-edit-custom-300x250.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-edit-custom.png 458w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61819" src="http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-code-view-300x129.png" alt="" width="530" height="228" data-wp-pid="61819" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-code-view-300x129.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-code-view-1024x440.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-code-view-768x330.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/11/sql-code-view.png 1100w" sizes="(max-width: 530px) 100vw, 530px" /></p>
<h3><strong>Data Previews Can Now Be Toggled On/Off</strong></h3>
<p>Users now have the option to toggle the data-grid preview on and off when building a recipe, allowing them to make updates without waiting for the data grid to refresh. Toggling the preview off can improve performance and reduce the time needed to reload the grid when generating a preview. This can be particularly useful when applying the same changes to multiple different recipes.</p>
<h3><strong>Job History Page Now Defaults to &#8220;Run by me&#8221;</strong></h3>
<p>The job history page now defaults to “Run by me” rather than “All jobs”. This allows users to more easily access the flows that are most relevant to them and improves the performance of the job history page.</p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61821" src="http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history-300x108.png" alt="" width="625" height="225" data-wp-pid="61821" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history-300x108.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history-1024x367.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history-768x275.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history.png 1536w, http://s26597.pcdn.co/wp-content/uploads/2022/11/run-by-me-job-history-2048x734.png 2048w" sizes="(max-width: 625px) 100vw, 625px" /></p>
<h3><strong>Google Cloud Dataprep Specific Releases:</strong></h3>
<p>In 9.6, we have an additional update specific to Google Cloud Dataprep:</p>
<ul>
<li><strong>Full BigQuery Pushdown Support for Merge Operations:</strong> MERGE operations in Dataprep can now be executed using BigQuery pushdown processing (previously, MERGE operations were only supported via Dataflow). This works for both file-to-table and table-to-table scenarios, allowing for faster loading times and reduced costs when performing merges and bringing in new data records from applications.</li>
</ul>
<h3><strong>New connectors with 9.6 Release</strong></h3>
<p>We continue our journey to help you connect to any data source, enabling additional use cases. With our 9.6 release, we support the following new early-preview/read-only connectors:</p>
<ul>
<li>Microsoft Dataverse</li>
<li>Adobe Analytics</li>
<li>Google Contacts</li>
<li>Workday (added support for OAuth connections)</li>
</ul>
<p>For more information, see <a href="https://docs.trifacta.com/display/DP/Early+Preview+Connection+Types">Early Preview Connection Types</a>. You can learn all about our <a href="https://community.trifacta.com/s/article/connectivity-updates">connectivity updates here</a>.</p>
<p>If you haven’t done it already, it’s a great time to <a href="https://www.trifacta.com/start-wrangling/">sign up for a free trial</a> with Trifacta. Join us today on our journey to the cloud.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Trifacta</name>
					</author>

		<title type="html"><![CDATA[The 4 Factors that Separate “Good” from “Great”]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/the-4-factors-that-separate-good-from-great/" />

		<id>https://www.trifacta.com/?p=61708</id>
		<updated>2022-12-28T19:27:06Z</updated>
		<published>2022-10-19T01:03:37Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Customer Spotlight" />
		<summary type="html"><![CDATA[Originally appeared on Alteryx Community. &#8212; Do you love your job? For most Trifacta Community members, the answer is a resounding “Yes!!” Getting to work with data to answer your business’s most important questions is no doubt an exciting occupation. But … could your “good” job become “great?” Could your “great” job become “amazing??” Let’s [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/the-4-factors-that-separate-good-from-great/"><![CDATA[<p><em>Originally appeared on <a href="https://community.alteryx.com/t5/Analytics/The-4-Factors-that-Separate-Good-from-Great/ba-p/1013453">Alteryx Community</a>.</em></p>
<p>&#8212;</p>
<p>Do you love your job? For most Trifacta Community members, the answer is a resounding “Yes!!” Getting to work with data to answer your business’s most important questions is no doubt an exciting occupation.</p>
<p>But … could your “good” job become “great?” Could your “great” job become “amazing??” Let’s explore.</p>
<h1>What’s slowing you down?</h1>
<p>As a data professional, you likely navigate many challenges throughout your day. For example, perhaps you have to wrangle dozens of different data sources and outputs. Did you know that, according to the International Data Corporation (IDC), the <a href="https://www.alteryx.com/resources/report/idc-state-of-data-science-and-analytics">average analytical process involves 6 inputs and 7 outputs</a>? That’s a lot of data sources and data destinations to keep track of.</p>
<p style="text-align: center;"><img decoding="async" loading="lazy" class="alignnone size-full wp-image-61709 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/data-analyst-funny.gif" alt="" width="266" height="195" data-wp-pid="61709" />Actual footage of a data analyst at work.</p>
<p>Plus, the <a href="https://www.alteryx.com/resources/report/idc-state-of-data-science-and-analytics">IDC finds</a> analysts typically use anywhere from 4 to 7 different tools to get their analytic work done. How many different tools and technologies do you use every day??</p>
<p>In some cases, it’s not just the data or technology that might prove challenging to manage—but also people. For some analytics projects, you may have had to petition repeatedly for help from hard-to-reach (and slow-to-respond!) experts.</p>
<p>All this trouble to get from data to insight can be exhausting, with <a href="https://www.alteryx.com/resources/report/idc-state-of-data-science-and-analytics">IDC reporting</a> that data professionals worldwide spend a full 44% of their workday on unsuccessful data activities. In fact, a brand-new <a href="https://www.alteryx.com/resources/whitepaper/4-ways-to-unlock-transformative-business-outcomes-from-analytic-investments">IDC report</a> shows a whopping 93% of organizations are not fully using the analytics skills of their employees.</p>
<p>So, if you’re tired at the end of your workday, know you’re not alone!</p>
<h1>Making positive change</h1>
<p>You may be asking yourself: “How can I solve these kinds of issues?” Naturally, you’ll find a million-and-one ideas in the pages of this very Community, from the <a href="https://community.alteryx.com/t5/Alteryx-Academy/ct-p/alteryx-academy">Academy</a> to the <a href="https://community.alteryx.com/t5/Discussions/ct-p/ayx-discussions">Discussion Forums</a>!</p>
<p>However, some issues can’t be fully solved with a clever workflow. The truth is that company policies and organizational design may be the root cause of some of your woes. After all, data silos, insufficient privileges, and lack of necessary support for analytic work are often the result of business-level decisions.</p>
<p>Organizational concerns may sound like a murky area for an analyst to explore. After all, you’re a champion of measurable, quantitative data! Happily, these process and situational issues can be broken down into clear, measurable dimensions. So, they don’t have to remain a mysterious, subjective area any longer.</p>
<p style="text-align: center;"><img decoding="async" loading="lazy" class="alignnone size-full wp-image-61710 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/jim-c.gif" alt="" width="500" height="281" data-wp-pid="61710" />Let&#8217;s see what we can discover here.</p>
<h1>Enter the “Analytics Maturity Model”</h1>
<p>The <a href="https://iianalytics.com/">International Institute of Analytics</a> (IIA) has been studying the analytics practices of organizations across the globe for more than a decade. As a result of benchmarking hundreds of businesses, the IIA has created a model that measures how well (or poorly) organizations leverage analytics to drive insights and make decisions. They call it (unsurprisingly) the “Analytics Maturity Model.” It measures businesses along 4 different dimensions:</p>
<ul>
<li><strong>Data Maturity</strong>: Data is the raw ingredient, the foundational element for your analytics strategy. Do the right teams have the right access to the right quality of data?</li>
<li><strong>Organizational Dynamics</strong>: Effective organizations have a clear analytics strategy. How is success defined? What resources, processes and structures have been set up to execute the strategy?</li>
<li><strong>Analytic Team Dynamics</strong>: For analytics success, analytic teams must find the balance between control and freedom. Has analytics leadership identified and prioritized data-driven business opportunities? How well have they orchestrated their teams into action?</li>
<li><strong>Usage and Technology</strong>: The set of tools, techniques, architectures, methods, and practices in use. How well do they connect analytics professionals to the rest of the organization? How well do they help the business realize its analytics strategy?</li>
</ul>
<h1>What Analytics Maturity does for you</h1>
<p>Knowing your organization’s levels of Analytics Maturity across these 4 dimensions may sound abstract, but having a clear scorecard of how things are going today lays the foundation for future improvements. In particular, knowing your company’s current maturity scores helps identify where your company is doing the right things, and where it should work to improve its analytics processes.</p>
<p>It&#8217;s not that kind of assessment, we promise!</p>
<p>The good news is that this isn’t like taking a test, where your company “passes” or “fails.” It’s about measuring your organization’s progress over time. And every business has room to grow—according to the IIA, the average organization today has an analytics maturity score of just 2.2 out of 5.</p>
<h1>How does your business measure up?</h1>
<p>Now you know what an analytics maturity assessment can reveal about your business. Are you ready to find out how your company measures up? Good news: the IIA’s analytics maturity assessment is <a href="https://www.alteryx.com/analyticsmaturity">available free for you today</a>.</p>
<p>More good news: it takes less than 15 minutes to complete. And it’s just multiple-choice questions—no essays required! You can complete a maturity assessment yourself, or you can wow your leadership by sharing it with them as well. Or do both!</p>
<p>With the Analytics Maturity Assessment, you can create the ultimate win-win scenario. You can help improve your company, and your company can help you—perhaps by taking down data silos, streamlining onerous processes, getting you the tools and technologies to meet your needs—the sky’s the limit.</p>
<p><a href="https://www.alteryx.com/analyticsmaturity"><img decoding="async" loading="lazy" class="aligncenter wp-image-61712" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/company-score2-300x79.png" alt="" width="420" height="110" data-wp-pid="61712" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/10/company-score2-300x79.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/10/company-score2-1024x268.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/10/company-score2-768x201.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/10/company-score2.png 1050w" sizes="(max-width: 420px) 100vw, 420px" /></a></p>
<p>Take a look at the potential systemic challenges that could be making your job needlessly difficult. In doing so, you’ll help steer your organization down a path that takes your work life from “good” to “great,” and even to “amazing.”</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Paul Warburg</name>
					</author>

		<title type="html"><![CDATA[What’s New in Designer Cloud Powered by Trifacta 9.5]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-5/" />

		<id>https://www.trifacta.com/?p=61648</id>
		<updated>2022-12-14T17:52:39Z</updated>
		<published>2022-10-14T23:28:24Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[What’s New in 9.5 We’re excited to share our latest capabilities from the Designer Cloud powered by Trifacta and Google Cloud Dataprep 9.5 release. As always, there’s a wide range of new features to discuss: Introducing Designer Cloud powered by Trifacta! For AWS and Azure users, Designer Cloud powered by Trifacta 9.5 officially introduces a [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-5/"><![CDATA[<h2><strong>What’s New in 9.5</strong></h2>
<p>We’re excited to share our latest capabilities from the Designer Cloud powered by Trifacta and Google Cloud Dataprep 9.5 release. As always, there’s a wide range of new features to discuss:</p>
<h3><strong>Introducing Designer Cloud powered by Trifacta!</strong></h3>
<p>For AWS and Azure users, Designer Cloud powered by Trifacta 9.5 officially introduces a new product name and an updated product look. For more information about this change, be sure to read <a href="https://www.trifacta.com/blog/announcing-designer-cloud-powered-by-trifacta/">our full blog post on the rebrand</a>.</p>
<h3><strong>Snowflake Enhancements</strong></h3>
<p>Our 9.5 update brings several enhancements to Designer Cloud&#8217;s Snowflake integration:</p>
<ul>
<li><strong>Snowflake Pushdown Support for Sampling: </strong>For AWS users, pushdown processing can now be used when creating samples from data stored in Snowflake tables. This makes the data preparation process even more seamless by greatly decreasing the time needed to create a data sample based on full scans of a dataset. With 9.5, this pushdown support is available for all sampling techniques other than Clustering and Stratified samples.</li>
<li><strong>Snowflake Pushdown Support for S3 Data Sources: </strong>Users working with S3 data files can now process their workflows up to 12x faster by leveraging Snowflake pushdown. Previously, Snowflake pushdown processing was only available when working with data sourced from Snowflake tables. With 9.5, Snowflake pushdown processing can now be used with data sourced from S3 buckets that is being written to Snowflake tables. This allows for ELT from S3 to Snowflake, and can greatly increase the runtime efficiency of workflows for those using S3 data sources.</li>
<li><strong>Upsert Support for Snowflake Publishing: </strong>Upserts can now be used when publishing to Snowflake, allowing users to add individual rows without processing and replacing entire tables.</li>
<li><strong>Snowflake JDBC Connector (Private Preview):</strong> We’ve implemented a common JDBC framework so that users can connect to Snowflake wherever it is located (in AWS, Azure, or another cloud deployment). This allows users to ingest data from Snowflake on any cloud, as well as making it possible to write data to Snowflake on Azure (using pushdown processing!). This feature is still in private preview. Reach out to your Customer Success Manager or Account Executive to get access.</li>
</ul>
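<p>To make the upsert behavior concrete: an upsert is an insert-or-update keyed on one or more columns. Rows whose key already exists are updated in place, new keys are appended, and everything else is left alone. Below is a minimal, generic Python sketch of that semantic (an illustration only, not Designer Cloud's publishing code; the dict-as-table and the <code>id</code> key are our assumptions):</p>

```python
def upsert(table, rows, key="id"):
    """Insert-or-update `rows` into `table`, a dict keyed on `key`.

    Existing keys are updated in place and new keys are appended,
    unlike a full truncate-and-reload, which rewrites the whole table.
    """
    for row in rows:
        table[row[key]] = row
    return table

# A published table, keyed on id.
orders = {
    1: {"id": 1, "status": "shipped"},
    2: {"id": 2, "status": "pending"},
}

# Publish two rows: one updates id=2, one adds id=3; id=1 is untouched.
upsert(orders, [
    {"id": 2, "status": "shipped"},
    {"id": 3, "status": "pending"},
])
```

<p>After the call, the table holds three rows: only row 2 was rewritten and row 3 added, which is what lets publishing skip the cost of replacing the entire table.</p>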
<h3><strong>Individual Asset Transfer Between Users</strong></h3>
<p>Previously, in cases such as a user leaving a company, admins could bulk transfer all assets from one user to another via API.</p>
<p>In 9.5, the ability to transfer asset ownership has been expanded from admins to individual users. It’s also now possible to transfer the ownership of <i>individual</i> assets between users (as opposed to the bulk transfer of all assets), including all first-class objects such as flows, connections, imported datasets, plans, macros, and UDFs. This allows for helpful use cases, such as transferring developed assets to a centralized operations account for scheduling.</p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61649" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/asset-transfer-300x180.png" alt="" width="653" height="392" data-wp-pid="61649" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/10/asset-transfer-300x180.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/10/asset-transfer-1024x614.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/10/asset-transfer-768x461.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/10/asset-transfer.png 1100w" sizes="(max-width: 653px) 100vw, 653px" /></p>
<p>We’ve also made asset transfer accessible via dropdowns in the UI, making it possible for non-admins to transfer assets without having to write API calls. To top it off, we’ve added a table to record the transfer history of assets, giving admins a record of who has owned which assets over time.</p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61749" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot-300x167.png" alt="" width="697" height="388" data-wp-pid="61749" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot-300x167.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot-1024x571.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot-768x428.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot.png 1536w, http://s26597.pcdn.co/wp-content/uploads/2022/10/bulk-asset-transfer-screenshot-2048x1142.png 2048w" sizes="(max-width: 697px) 100vw, 697px" /></p>
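<p>Conceptually, that transfer-history table behaves like an append-only log of ownership changes. The sketch below is our own generic Python illustration of the idea (class and method names are hypothetical, not the product's API):</p>

```python
from dataclasses import dataclass

@dataclass
class TransferRecord:
    asset_id: str
    from_user: str
    to_user: str

class TransferLog:
    """Append-only record of asset ownership transfers."""

    def __init__(self):
        self._records = []

    def transfer(self, asset_id, from_user, to_user):
        self._records.append(TransferRecord(asset_id, from_user, to_user))

    def history(self, asset_id):
        """Every owner an asset has passed through, in order."""
        owners = []
        for r in self._records:
            if r.asset_id == asset_id:
                if not owners:
                    owners.append(r.from_user)
                owners.append(r.to_user)
        return owners

log = TransferLog()
log.transfer("flow-42", "alice", "bob")        # departing user's flow
log.transfer("flow-42", "bob", "ops-account")  # handed to central ops for scheduling
```

<p>Because records are only ever appended, the full chain of custody for any asset can be reconstructed at any time.</p>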
<h3><strong>Edit Recipes With Datagrid Disabled</strong></h3>
<p>In 9.5, users now have the option to launch the transformer page (recipe view) with the datagrid disabled. This allows users to edit recipe steps without waiting for data samples to load, allowing for faster edits when moving in and out of individual recipes. This is particularly useful when working with large datasets/recipes, or in cases where users find themselves in environments that have poor internet connectivity, causing slower sample loading times.</p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61750" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot-300x166.png" alt="" width="674" height="373" data-wp-pid="61750" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot-300x166.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot-1024x567.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot-768x425.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot.png 1536w, http://s26597.pcdn.co/wp-content/uploads/2022/10/datagrid-disabled-screenshot-2048x1133.png 2048w" sizes="(max-width: 674px) 100vw, 674px" /></p>
<h3><strong>New Flow Parameter Type &#8211; Selector</strong></h3>
<p>In 9.5, we’ve added a new flow parameter type: Selector. The Selector flow parameter allows users to define a parameter based on an enumerated list of values with a single selection option. This can be used to define an override key, where an overridden value applies to all references of the parameter within a flow.</p>
<p><img decoding="async" loading="lazy" class="alignnone wp-image-61751" src="http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-300x169.png" alt="" width="685" height="386" data-wp-pid="61751" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-300x169.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-1024x577.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-768x432.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot.png 1536w, http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-2048x1153.png 2048w, http://s26597.pcdn.co/wp-content/uploads/2022/10/selector-override-screenshot-1200x675-cropped.png 1200w" sizes="(max-width: 685px) 100vw, 685px" /></p>
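<p>In spirit, a Selector parameter is an enumerated value with validation, plus an override that fans out to every reference in the flow. Here is a generic Python sketch of that behavior (our illustration only; the <code>region</code> parameter and S3 paths are made up):</p>

```python
class SelectorParam:
    """A parameter constrained to an enumerated list of values.

    Overriding it changes the value seen by every reference in the
    flow, because all references share this single object.
    """

    def __init__(self, name, choices, default):
        if default not in choices:
            raise ValueError(f"{default!r} is not one of {choices}")
        self.name = name
        self.choices = choices
        self.value = default

    def override(self, value):
        if value not in self.choices:
            raise ValueError(f"{value!r} is not one of {self.choices}")
        self.value = value

region = SelectorParam("region", ["us", "eu", "apac"], default="us")

# Two recipe steps referencing the same parameter.
source_path = lambda: f"s3://sales-{region.value}/raw"
output_path = lambda: f"s3://sales-{region.value}/clean"

region.override("eu")  # a single override applies to all references
```

<p>The enumerated list doubles as validation: a value outside the list is rejected rather than silently propagated through the flow.</p>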
<h3><strong>Flow Import and API Connection Mapping</strong></h3>
<p>When importing flows to a new environment or workspace, users can now specify connections, allowing for more plug-and-play usability. Using this feature, users can simply change their connections, and datasets will be replaced properly without any additional steps.</p>
<h3><strong>Refresh Excel, PDF, and Google Sheets Files</strong></h3>
<p>In 9.5, dataset refresh has been further expanded to include Excel files, PDFs, and Google Sheets. This builds on existing dataset refresh support for relational, delimited, schematized, and JSON files. When the underlying schema for a supported dataset changes, dataset refresh allows users to upload fresh data and refresh their datasets without the need to create a new dataset object and replace it in the flow. Dataset refreshes can be used to address schema changes, or to add or remove columns of data from a dataset. This makes your datasets more durable, reusable objects and helps to avoid workspace clutter and versioning issues.</p>
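<p>The key property of a refresh is that the dataset object keeps its identity: flows hold a reference to the dataset, not to a particular upload, so swapping the data (and schema) in place updates every downstream consumer automatically. A toy Python sketch of that idea (our illustration, not the actual implementation):</p>

```python
class Dataset:
    """A dataset whose contents can be refreshed in place."""

    def __init__(self, name, rows):
        self.name = name
        self.rows = rows

    @property
    def schema(self):
        # Column names of the current upload.
        return sorted(self.rows[0]) if self.rows else []

    def refresh(self, rows):
        self.rows = rows  # same object, new contents

orders = Dataset("orders", [{"id": 1, "total": 9.99}])
flow_inputs = [orders]  # a flow referencing the dataset

# A fresh upload adds a column; the flow still points at the same object.
orders.refresh([{"id": 1, "total": 9.99, "currency": "USD"}])
```

<p>Because the flow never re-binds to a new object, no re-wiring or dataset replacement is needed after the schema change.</p>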
<h3><strong>Speeding Up the Job History Page</strong></h3>
<p>To speed up performance on the job history page, admins now have the option to change the default number of days displayed (180 / 120 / 60 days). Changing this default can reduce page rendering time by up to 20%.</p>
<h3><strong>Enable OAuth for SharePoint</strong></h3>
<p>For enhanced security, users can now leverage OAuth 2.0 connectivity to access SharePoint lists.</p>
<h3><strong>Google Cloud Dataprep Specific Releases:</strong></h3>
<p>In 9.5, we have some additional updates specific to Google Cloud Dataprep:</p>
<ul>
<li><strong>Enable Sort Transform:</strong> Dataprep users can now sort dataset samples in the transformer grid. Samples can be sorted by column in ascending or descending order, or based on the order of rows when the dataset was created. This was an existing feature in Designer Cloud powered by Trifacta on AWS that has been brought to Google Cloud Dataprep.</li>
<li><strong>Enable Service Accounts for In-VPC Batch Jobs and BigQuery Execution:</strong> For Dataprep users, service accounts can now be used to execute transformation jobs within your VPC and within BigQuery. This enhanced security measure removes calls to the Trifacta VPC for credentials and reduces timeouts on longer-running jobs.</li>
</ul>
<h3><strong>New connectors with 9.5 Release</strong></h3>
<p>We continue our journey to help you connect to any data source, enabling additional use cases. With our 9.5 release, we support the following new early-preview/read-only connectors:</p>
<ul>
<li>Workday</li>
<li>Google Calendar</li>
<li>QuickBooks</li>
</ul>
<p>For more information, see <a href="https://docs.trifacta.com/display/DP/Early+Preview+Connection+Types">Early Preview Connection Types</a>. You can learn all about our <a href="https://community.trifacta.com/s/article/connectivity-updates">connectivity updates here</a>.</p>
<p>If you haven’t done it already, it’s a great time to <a href="https://www.trifacta.com/start-wrangling/">sign up for a free trial</a> with Trifacta. Join us today on our journey to the cloud.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Shyam Srinivasan</name>
					</author>

		<title type="html"><![CDATA[Announcing Designer Cloud Powered by Trifacta]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/announcing-designer-cloud-powered-by-trifacta/" />

		<id>https://www.trifacta.com/?p=61570</id>
		<updated>2022-12-13T06:00:17Z</updated>
		<published>2022-09-27T01:00:56Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[Welcome to Designer Cloud Notice anything different about Trifacta? If you haven’t already heard, we’re pleased to share that Trifacta is now Designer Cloud!  Designer Cloud 9.5 introduces a new product name and an updated product look. These small changes provide a glimpse into an exciting future for Trifacta that will bring a variety of [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/announcing-designer-cloud-powered-by-trifacta/"><![CDATA[<p><b>Welcome to Designer Cloud</b></p>
<p><span style="font-weight: 400;">Notice anything different about Trifacta? If you haven’t already heard, we’re pleased to share that Trifacta is now Designer Cloud! </span></p>
<p><span style="font-weight: 400;">Designer Cloud 9.5 introduces a new product name and an updated product look. These small changes provide a glimpse into an exciting future for Trifacta that will bring a variety of new, powerful features you’re sure to love. </span></p>
<p><b>So, what is Designer Cloud?</b><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">In February 2022, Trifacta was acquired by Alteryx. Since then, we’ve been hard at work behind the scenes building something truly extraordinary for our customers. </span></p>
<p><span style="font-weight: 400;">Trifacta has long been well-known for providing the world’s most advanced self-service cloud data engineering platform. Our cloud-first focus has allowed us to build an infrastructure that combines infinite scalability with strong data governance and security. </span><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">At the same time, Alteryx has become a household name in the data community. Well known for its best-in-class Analytics Automation Platform, Alteryx has been making data professionals’ lives easier since 1997. </span></p>
<p><span style="font-weight: 400;">Our vision is to bring together Trifacta’s cloud-first, enterprise-grade capabilities with Alteryx’s best-in-class workflow and canvas in an enhanced, world-class experience we’re calling Designer Cloud powered by Trifacta.  </span></p>
<p><b>Today marks a major step along this journey.</b><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">In Designer Cloud powered by Trifacta 9.5, you will see a new logo as well as a new product look. </span></p>
<p><b>Coming soon</b><span style="font-weight: 400;">, Trifacta users will gain access to an additional interface that brings together elements from the Alteryx Analytics Automation Platform to complement the Trifacta experience, opening up a suite of new, additional capabilities.</span></p>
<p><span style="font-weight: 400;">When it arrives, this new combined offering will allow users with Designer expertise the opportunity to leverage many of Designer’s capabilities when building data workflows in the cloud. The combined cloud platform will serve the needs of entire enterprises, from data analytics teams and IT/technology teams to line of business users. </span></p>
<p><span style="font-weight: 400;">Rest assured, the Trifacta application isn’t going away. As these new features are added, the Trifacta platform will continue to be developed, and users will have the option to continue using Trifacta as they currently do today. </span></p>
<p><b>An End-to-End Portfolio: The Alteryx Analytics Cloud</b><span style="font-weight: 400;"> </span></p>
<p><span style="font-weight: 400;">With these changes, Designer Cloud powered by Trifacta is joining the</span><a href="https://www.alteryx.com/products/alteryx-cloud"> <span style="font-weight: 400;">Alteryx Analytics Cloud</span></a><span style="font-weight: 400;"> &#8211; a suite of cloud analytics tools that will allow users to accelerate their analytics journey like never before. </span></p>
<p><span style="font-weight: 400;">The Alteryx Analytics Cloud includes powerful tools such as</span><a href="https://www.alteryx.com/products/alteryx-machine-learning"> <span style="font-weight: 400;">Alteryx Machine Learning</span></a><span style="font-weight: 400;"> and</span><a href="https://www.alteryx.com/products/auto-insights"> <span style="font-weight: 400;">Alteryx Auto Insights</span></a><span style="font-weight: 400;">. As development of Designer Cloud powered by Trifacta continues, users will also see emerging integrations across this suite of products, making the Alteryx Analytics Cloud a truly unified, end-to-end analytics platform.</span></p>
<p><span style="font-weight: 400;">Ready to get started? Jump into Designer Cloud and take a look around!</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Mark Heinsman</name>
							<uri>https://www.trifacta.com</uri>
						</author>

		<title type="html"><![CDATA[What’s New in Designer Cloud 9.3/9.4]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-3-9-4/" />

		<id>https://www.trifacta.com/?p=61537</id>
		<updated>2022-12-14T03:33:35Z</updated>
		<published>2022-09-01T18:48:39Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[What’s New in Designer Cloud 9.3/9.4 We’re excited to share our latest capabilities as part of the Designer Cloud 9.3/9.4 releases. As always, there’s a wide range of new features to discuss: Designer Cloud?  To reflect our ongoing product development as part of the Alteryx Analytics Cloud, Trifacta is now Designer Cloud. This name change [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-designer-cloud-powered-by-trifacta-9-3-9-4/"><![CDATA[<p class="p1"><b>What’s New in Designer Cloud 9.3/9.4</b></p>
<p class="p1">We’re excited to share our latest capabilities as part of the <b>Designer Cloud 9.3/9.4 releases</b>. As always, there’s a wide range of new features to discuss:</p>
<p class="p1"><b>Designer Cloud? </b></p>
<p class="p1">To reflect our ongoing product development as part of the Alteryx Analytics Cloud, Trifacta is now Designer Cloud. This name change will be reflected in future versions of the product, along with some exciting new features. The name of our product has changed, but it’s still the same great Trifacta experience &#8211; and more!</p>
<p class="p1"><b>What’s new in Designer Cloud 9.4:</b></p>
<p class="p4"><b>JavaScript UDFs Can Now Be Executed With BigQuery Pushdown</b></p>
<p class="p4">In <a href="https://www.trifacta.com/blog/whats-new-in-trifacta-9-0-release/"><span class="s1">Designer Cloud 9.0</span></a>, we announced the ability for users to create custom transformations using JavaScript User-Defined Functions (UDFs). In 9.4, we’re happy to announce that JavaScript UDFs are now <i>generally available</i>, with an update: for Dataprep users, JavaScript UDFs can now be executed with BigQuery Pushdown, allowing for faster and more efficient data transformations across your entire Dataprep job.</p>
<p class="p1"><b>Access Plans from Flow Output Panel</b></p>
<p class="p1">In 9.4, it’s easier than ever to identify and access plans that are triggering jobs. If a job run is triggered from a plan, a link to the plan will now appear next to the job in the flow output panel, allowing users to easily navigate to the plan. This link will take users to the job’s task node within the plan, making it easier to review your jobs in context.</p>
<p class="p2"><b><span class="Apple-converted-space"><img decoding="async" class="alignnone size-full wp-image-61553" src="http://s26597.pcdn.co/wp-content/uploads/2022/09/access-plans-flow-view.png" alt="" width="350" data-wp-pid="61553" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/09/access-plans-flow-view.png 754w, http://s26597.pcdn.co/wp-content/uploads/2022/09/access-plans-flow-view-175x300.png 175w, http://s26597.pcdn.co/wp-content/uploads/2022/09/access-plans-flow-view-596x1024.png 596w" sizes="(max-width: 754px) 100vw, 754px" /></span></b></p>
<p class="p1"><b>Faster Access to Data Quality Information After Job Runs</b></p>
<p class="p1">It’s now easier to identify data quality issues when you run your jobs. If a job successfully completes, but some Data Quality Rules do not pass, or there are column data mismatches, users will now be notified directly in the body of the Job Run notification email, without the need to open any attachments to see the pertinent information.</p>
<p class="p2"><span class="Apple-converted-space"><img decoding="async" class="alignnone size-full wp-image-61555" src="http://s26597.pcdn.co/wp-content/uploads/2022/09/faster-access-to-data-quality-information.png" alt="" width="350" data-wp-pid="61555" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/09/faster-access-to-data-quality-information.png 800w, http://s26597.pcdn.co/wp-content/uploads/2022/09/faster-access-to-data-quality-information-185x300.png 185w, http://s26597.pcdn.co/wp-content/uploads/2022/09/faster-access-to-data-quality-information-632x1024.png 632w, http://s26597.pcdn.co/wp-content/uploads/2022/09/faster-access-to-data-quality-information-768x1244.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></span></p>
<p class="p4"><b>Job and Plan Emails Now Send by Default</b></p>
<p class="p4">We don’t want our users to miss a single important update, so email notifications will now be turned on by default when users run a new Flow or Plan. This will automatically notify users by email on job success or job failure. Existing workspace-level overrides and flow-level overrides have been preserved. Users can change the default at any time by navigating to their workspace settings.</p>
<p class="p4"><span class="Apple-converted-space"><img decoding="async" class="alignnone size-full wp-image-61554" src="http://s26597.pcdn.co/wp-content/uploads/2022/09/email-defaults.png" alt="" width="700" data-wp-pid="61554" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/09/email-defaults.png 936w, http://s26597.pcdn.co/wp-content/uploads/2022/09/email-defaults-300x103.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/09/email-defaults-768x263.png 768w" sizes="(max-width: 936px) 100vw, 936px" /> </span></p>
<p class="p1"><b>Users Can Now Refresh JSON Files</b></p>
<p class="p4">In <a href="https://www.trifacta.com/blog/whats-new-in-trifacta-9-0-release/"><span class="s2">Designer Cloud 9.0</span></a>, we added the ability for users to refresh relational, delimited, and schematized files. In 9.4, we’ve expanded dataset refresh to include JSON files. When the underlying schema for a JSON dataset changes, users can now upload fresh data and refresh their datasets without the need to create a new dataset object and replace it in the flow. Dataset refreshes can be used to address schema changes or to add or remove columns of data. In future releases, dataset refresh will be expanded to support Excel and PDF files.</p>
<p class="p3"><span class="Apple-converted-space"><img decoding="async" class="alignnone size-full wp-image-61552" src="http://s26597.pcdn.co/wp-content/uploads/2022/09/refresh-JSON-datasets.jpg" alt="" width="600" data-wp-pid="61552" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/09/refresh-JSON-datasets.jpg 936w, http://s26597.pcdn.co/wp-content/uploads/2022/09/refresh-JSON-datasets-300x240.jpg 300w, http://s26597.pcdn.co/wp-content/uploads/2022/09/refresh-JSON-datasets-768x615.jpg 768w" sizes="(max-width: 936px) 100vw, 936px" /></span></p>
<p class="p1"><b>What’s New in Designer Cloud 9.3:</b></p>
<p class="p4"><b>Improved Transformer Loading Experience</b></p>
<p class="p4">Users should now notice faster performance when entering the recipe mode/Transformer grid. Rather than waiting for a sample to load, which could block users from taking action for a few seconds upon opening a recipe, sample loading now occurs asynchronously.</p>
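<p class="p4">The idea behind asynchronous sample loading can be sketched in a few lines of Python. This is an illustrative pattern only, not Trifacta’s actual implementation; all function names here are hypothetical:</p>

```python
import asyncio

async def load_sample():
    """Stand-in for a slow sample fetch that previously blocked the grid."""
    await asyncio.sleep(0.05)  # simulate network/compute latency
    return [{"id": 1, "city": "Berlin"}]

async def open_recipe():
    # Start the sample load in the background instead of awaiting it up
    # front, so the editor becomes interactive immediately.
    sample_task = asyncio.create_task(load_sample())
    editor_ready = True  # the grid opens without waiting for the sample
    sample = await sample_task  # the sample fills in once it arrives
    return editor_ready, sample

ready, sample = asyncio.run(open_recipe())
```

<p class="p4">The user-visible effect is the same as described above: the grid is usable right away, and the sample appears when it finishes loading.</p>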
<p class="p1"><b>New Expandable and Collapsible Left Navigation Bar</b></p>
<p class="p1">The left navigation bar has been updated. You can now expand the left nav bar to display full-text options for each menu item, or collapse it to reclaim additional screen area.</p>
<p class="p4"><b>Improved Support for Synapse External Tables</b></p>
<p class="p4">We’ve improved support for Synapse external tables. Users should see improved performance when reading large data files from Synapse external Serverless Pools. Users can also now publish to Synapse external tables.</p>
<p class="p1"><b>Dataprep: Run SQL Jobs in Customer VPC</b></p>
<p class="p1">Dataprep customers can now connect to data sources and ingest, publish, and execute SQL steps directly from their own virtual private cloud (VPC). Design time access to schema, metadata, and sample data can also leverage the customer’s VPC. For more information, see <a href="https://docs.trifacta.com/display/DP/Run+Dataprep+in+Your+VPC"><span class="s1">Run Dataprep in Your VPC</span></a>.</p>
<p class="p1"><b>Dataprep: In-VPC Processing for Photon</b></p>
<p class="p1">In addition to SQL steps, Dataprep users can now execute Trifacta Photon In-Memory jobs within their own virtual private cloud (VPC).</p>
<p class="p1"><b>New connectors with Designer Cloud 9.3/9.4</b></p>
<p class="p1">We continue our journey to help you connect to any data source, enabling additional use cases. With Designer Cloud 9.3/9.4, we support the following new early-preview/read-only connectors:</p>
<ul class="ul1">
<li class="li1">SendGrid</li>
<li class="li1">SAP HANA</li>
<li class="li1">Denodo</li>
<li class="li1">Zoho CRM</li>
<li class="li1">DocuSign</li>
</ul>
<p class="p5"><span class="s3">For more information, see</span><span class="s4"> <a href="https://docs.trifacta.com/display/DP/Early+Preview+Connection+Types"><span class="s1">Early Preview Connection Types</span></a>.</span> <span class="s6">You can learn all about our <a href="https://community.trifacta.com/s/article/connectivity-updates"><span class="s1">connectivity updates here</span></a>.</span></p>
<p class="p1">If you haven’t already, now is a great time to <a href="https://www.trifacta.com/start-wrangling/"><span class="s1">sign up for a free trial</span></a> of Designer Cloud. Join us today on our journey to the cloud.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Yeyre Reiter</name>
					</author>

		<title type="html"><![CDATA[Trifacta Legend June 2022: Huong Do]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/trifacta-legend-june-2022-huong-do/" />

		<id>https://www.trifacta.com/?p=61411</id>
		<updated>2022-12-28T20:06:08Z</updated>
		<published>2022-08-05T15:46:03Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Customer Spotlight" />
		<summary type="html"><![CDATA[&#160; Trifacta Legends recognizes customers every month who are doing groundbreaking work with data using Trifacta. We’re pleased to announce the Trifacta Legends for June 2022: Huong Do from Canopy Huong Do is a product owner at Canopy. She has a strong background in the data space, including ingestion, aggregations, and ETL transformations. She has [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/trifacta-legend-june-2022-huong-do/"><![CDATA[<p><img decoding="async" loading="lazy" class="alignright wp-image-61413 size-medium" src="http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-298x300.png" alt="" width="298" height="300" data-wp-pid="61295" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-298x300.png 298w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-1016x1024.png 1016w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-150x150.png 150w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-768x774.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-1320x1331.png 1320w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do-120x120.png 120w, http://s26597.pcdn.co/wp-content/uploads/2022/08/Huong-Do.png 1460w" sizes="(max-width: 298px) 100vw, 298px" /></p>
<p>&nbsp;</p>
<p>Trifacta Legends recognizes customers every month who are doing groundbreaking work with data using Trifacta.</p>
<p>We’re pleased to announce the <strong>Trifacta Legends for June 2022: Huong Do from Canopy</strong></p>
<p>Huong Do is a product owner at Canopy. She has a strong background in the data space, including ingestion, aggregations, and ETL transformations, and she has spent years developing data products, including the bread and butter of Canopy’s software.</p>
<p>We talked to Huong about her experience building data products at Canopy. She shared some of the challenges Canopy faced, and how they overcame them with Designer Cloud’s self-service solution.</p>
<p><strong>Trifacta</strong>: Huong, can you tell us a little more about yourself?</p>
<p><strong>Huong</strong>: Sure! I graduated from National University of Singapore in 2017, and I have been part of Canopy since then. I’m in charge of designing and delivering automated functions for data ingestion and aggregation. I also help with evaluating business processes, anticipating requirements, and uncovering areas for improvement to develop solutions that help automate processes across the company. Currently, I’m managing data-related products, and I supervise other BAs on delivering requirements and solutions for our team and our clients.</p>
<p><strong>Trifacta</strong>: Awesome! With that, can you tell us a little more about Canopy as a company?</p>
<p><strong>Huong</strong>: Canopy is a cloud-based financial data aggregation and analytics platform. Our main clients are family offices, wealth managers, private banks, trustees, and self-directed traders. Currently, we report on more than $160 billion in assets across our 335 custodians. Our goal is to provide our clients with a more holistic view of their investment portfolios so that they are able to make better investment decisions. Currently, we offer 3 main types of services: data acquisition &amp; aggregation, data cleansing &amp; standardization, and analytics visualization &amp; reporting. And Designer Cloud powered by Trifacta is playing a very important part in our data cleansing and standardization as a service for our clients.</p>
<p><strong>Trifacta</strong>: That’s awesome. I’m glad the product is helping you achieve those goals. Do you mind sharing how you were doing things before, how Designer Cloud has simplified your process of standardization and cleansing, and how Canopy plays a role in the end product that customers touch?</p>
<p><strong>Huong</strong>: So we used to have a multi-layered, time-consuming process, with large dependencies between the analysts and data engineers to do data cleansing and transformation. There was a lot of back-and-forth in communications between the analysts who understand the data and the data engineers who know how to code. As part of the normal process, the analyst had to put the logic down in a way that the developer could understand, then the developer would do the coding before going back to the analyst to do the testing of the transformation and the cleansing.</p>
<p><strong>Trifacta</strong>: What were some of the challenges with that process?</p>
<p><strong>Huong</strong>: The entire process was very human-dependent, and very time-consuming, because there was a lot of back-and-forth communication. It also depended on our development schedule as well. Because of that, we found that it was not effective for us to onboard new data sources quickly, smoothly, or effectively. So we needed to find a better solution and a better way of dealing with these issues so that we could reach a larger market and get in touch with more data providers without dependencies that would hinder us from onboarding more and more data sources in a timely manner for our clients.</p>
<p><strong>Trifacta</strong>: That makes sense! And how did Designer Cloud help solve that back-and-forth problem?</p>
<p><strong>Huong</strong>: Designer Cloud provided us with a very user-friendly platform for our data analysts to own and do the data transformation and cleansing by themselves without the dependencies on the developers who know how to code. So those loops of back-and-forth communication between the data analysts and data engineers are now totally removed, and data analysts can work directly on Designer Cloud to do the cleansing and transformation. And the result of their transformations are shown to them in real-time previews, and that helps to reduce our cleansing and transformation turnaround time, and reduces the turnaround time to onboard new data sources from weeks to hours.</p>
<p><strong>Trifacta</strong>: That’s incredible! What have been some of the benefits Canopy has experienced following this implementation of Designer Cloud?</p>
<p><strong>Huong</strong>: This has created great opportunities for us to reach out to new markets and new data providers with new sources because we are able to shorten the turnaround time of onboarding new data sources. This also allowed us to reallocate our resources to make sure that we use our resources on more impactful or more important features in the products, and it has also helped us with hiring as well, because now we just need one person and not multiple team members in a back-and-forth process.</p>
<p><strong>Trifacta</strong>: That’s awesome! I love the fact that you are able to now scale and go into a broader market. Thanks so much for sharing your story with us, Huong!</p>
<p><strong>Huong</strong>: It’s my pleasure!</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Paul Warburg</name>
					</author>

		<title type="html"><![CDATA[Trifacta Legend May 2022: Mario Truss &#038; Armin Meyer at Seibert Media]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/trifacta-legend-may-2022-mario-truss-armin-meyer-at-seibert-media/" />

		<id>https://www.trifacta.com/?p=61287</id>
		<updated>2022-12-28T20:06:17Z</updated>
		<published>2022-07-01T01:42:26Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Customer Spotlight" />
		<summary type="html"><![CDATA[Trifacta Legends recognizes customers every month who are doing groundbreaking work with data using Trifacta. We’re pleased to announce the Trifacta Legends for May 2022: Mario Truss &#38; Armin Meyer from Seibert Media Mario Truss is a Product Owner of Customer Data Engineering, and Armin Meyer is a Service Owner of Tools &#38; Data at [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/trifacta-legend-may-2022-mario-truss-armin-meyer-at-seibert-media/"><![CDATA[<p><img decoding="async" loading="lazy" class="alignright size-medium wp-image-61295" src="http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-300x300.jpeg" alt="" width="300" height="300" data-wp-pid="61295" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-300x300.jpeg 300w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-150x150.jpeg 150w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-768x768.jpeg 768w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-120x120.jpeg 120w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-600x600.jpeg 600w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends.jpeg 800w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><img decoding="async" loading="lazy" class="alignright size-medium wp-image-61297" src="http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-2-300x298.png" alt="" width="300" height="298" data-wp-pid="61297" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-2-300x298.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-2-150x150.png 150w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-2-120x120.png 120w, http://s26597.pcdn.co/wp-content/uploads/2022/07/may-legends-2.png 512w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><span style="font-weight: 400;">Trifacta Legends recognizes customers every month who are doing groundbreaking work with data using Trifacta.</span></p>
<p><span style="font-weight: 400;">We’re pleased to announce the </span><b>Trifacta Legends for May 2022: Mario Truss &amp; Armin Meyer from Seibert Media</b></p>
<p><span style="font-weight: 400;">Mario Truss is a Product Owner of Customer Data Engineering, and Armin Meyer is a Service Owner of Tools &amp; Data at Seibert Media. Besides being a data nerd, Mario loves music and teaching things to people. </span></p>
<p><span style="font-weight: 400;">Armin has been focused on agile methods for 10+ years, and has been working for the past 3 years to enhance the usage of data, tools and processes at Seibert Media. Aside from working with data, Armin is an avid skier.</span></p>
<p><span style="font-weight: 400;">We talked to Mario &amp; Armin about their experience facilitating data modernization and democratization at Seibert Media. They shared some of the challenges they faced, and how they overcame them with Dataprep’s self-service solution.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">Armin, can you tell us about your business at Seibert Media?</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">We provide some of the best-selling apps in the Atlassian marketplace. Some of our solutions include draw.io, Linchpin, and Agile Hive. We also do consultancy, hosting, and license management for a lot of customers. We are well-known in the German-speaking region. We focus on team collaboration tools, and Mario &amp; I work on the internal data management, data engineering, &amp; business intelligence team. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">Can you help us understand your data engineering journey, what technologies you use to help you achieve your objectives, and why?</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">As a Google Cloud Partner, we focus on doing this in Google Cloud, which is why we use Dataprep by Trifacta. But prior to that, we have always been hands on with our data, and so we had a lot of manual processes to do reporting and controlling. One advantage this brought was that we didn’t have a lot of on-premise things in the data field, so we could go directly to the cloud before it was a “hot” thing. But a disadvantage was that, when you do things manually, you face a lot of problems. It’s a lot of work, you have to regularly pull the data out of the systems, and your reports are static and quickly become obsolete. And everything you do on your reports is very costly. So we wanted to get better at using the data we had.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">What were some of the first steps you took towards eliminating some of these manual processes to make better use of your data?</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">We started with some groundwork, using Kafka to extract data continuously from all of the systems that had relevant data for us. Then we pulled this data to Google BigQuery and systematically started transforming it and processing it with Dataprep. As an output, we sent this to Data Studio. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">And under this system, you were able to automate tasks that were previously quite manual?</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">That’s correct. We were able to replace a lot of manual work, like pulling the data out of the systems, and Dataprep made the data transformations automated in a lot of cases, or at least a lot faster.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s great. What have been some of the business impacts of this shift, both now and going forward?</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">We have a BI team that creates data and does the job for the business people. They raise the questions, and we do the data transformation. The next step where we want to go, and where Dataprep is essential, is to provide self-service capabilities to our analytics and also to our data integration into the other operative systems. So the BI team can now focus on doing things like semantic models of our main data objects, where they model dimensions and common metrics, and then let the users do the rest of the job themselves to get the insights they need.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta:</b><span style="font-weight: 400;"> That’s wonderful. Sounds like you guys have gone a long way towards democratization! Mario, are you able to share any more details on what the end-to-end data engineering process looks like for you at Seibert Media?</span></p>
<p><b>Mario: </b><span style="font-weight: 400;">First we have data sources, which can be APIs, CSV files, or other data. We often have to deal with a lot of different data and a lot of varying data quality. We utilize Apache Kafka to bring that data in, and we load it into our BigQuery data warehouse. That’s just raw data coming from the systems. In almost every use case we use some sort of transformation or enrichment rather than just our raw data. And that’s where Dataprep comes into play. Once we transform our data, we sync it back to our BigQuery data warehouse, and afterwards we facilitate our analytics purposes inside of Google Data Studio. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta:</b><span style="font-weight: 400;"> You mentioned that Dataprep is a key part of this process. What makes Dataprep so essential for your team?</span></p>
<p><b>Mario:</b><span style="font-weight: 400;"> Dataprep makes it possible for people like myself, who don’t have a computer science background, to make ETL transformations to the data and put it into the format that we need in order to use it afterwards. It allows us to make those transformations without code and makes it more accessible to people who maybe aren’t able to write perfect SQL or some other programming language to process data.</span></p>
<p><b>Armin: </b><span style="font-weight: 400;">To add on to that, we started around 3 years ago, and on the first stage we did all of the transformation and processing in BigQuery itself. But what we faced is that you needed really experienced people to do this. And this is when we established Dataprep, which was really a game changer for us, because we could get in people who weren’t so experienced with writing routines and SQL queries. So it’s now much easier to find people to do the job, and it’s quicker to do the job. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">So how have you achieved your goals as a company through this modernization and democratization process?</span></p>
<p><b>Mario: </b><span style="font-weight: 400;">One of our goals as a company is to become a data-driven organization or company. And we believe that we can only become that if non-technical people have some sort of interface to interact with that data. And Dataprep’s low-code, no-code tool makes that possible. The democratization and the transformation of the data is extremely valuable in making the data accessible to business users so that they can get insights. The BI team can try things out by themselves without being dependent on us. And we can streamline the whole process so we don’t have to rely on a multitude of tools.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s wonderful! Mario and Armin, thanks again for sharing your story.</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Paul Warburg</name>
					</author>

		<title type="html"><![CDATA[Trifacta Legend March 2022: Bud Johnson at Healthgrades]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/trifacta-legend-march-2022-bud-johnson-at-healthgrades/" />

		<id>https://www.trifacta.com/?p=60718</id>
		<updated>2022-12-28T20:06:28Z</updated>
		<published>2022-04-20T18:22:27Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Customer Spotlight" />
		<summary type="html"><![CDATA[Trifacta Legends recognizes a customer every month who is doing groundbreaking work with data using Trifacta. We’re pleased to announce the Trifacta Legend for March 2022: Bud Johnson, Sr. Data Operations Manager at Healthgrades.  Bud Johnson is a Sr. Data Operations Manager at Healthgrades, where he and his team help match people with the proper [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/trifacta-legend-march-2022-bud-johnson-at-healthgrades/"><![CDATA[<p><span style="font-weight: 400;"><img decoding="async" loading="lazy" class="alignright size-full wp-image-60720" src="http://s26597.pcdn.co/wp-content/uploads/2022/04/bud-johnson-headshot.jpeg" alt="" width="300" height="300" data-wp-pid="60720" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/04/bud-johnson-headshot.jpeg 300w, http://s26597.pcdn.co/wp-content/uploads/2022/04/bud-johnson-headshot-150x150.jpeg 150w" sizes="(max-width: 300px) 100vw, 300px" />Trifacta Legends recognizes a customer every month who is doing groundbreaking work with data using Trifacta.</span></p>
<p><span style="font-weight: 400;">We’re pleased to announce the </span><b>Trifacta Legend for March 2022: Bud Johnson, Sr. Data Operations Manager at Healthgrades. </b></p>
<p><span style="font-weight: 400;">Bud Johnson is a Sr. Data Operations Manager at Healthgrades, where he and his team help match people with the proper care at the proper time to maximize their healthcare outcomes. Bud is a Trifacta fan whose background is in the data space; he has worked in interactive advertising specifically for over 25 years.</span></p>
<p><span style="font-weight: 400;">We talked to Bud about his experience and insight into interactive advertising. Bud shared some of the challenges he faced, and how he overcame them with automated processes powered by Trifacta.</span></p>
<p><b>Trifacta: </b><span style="font-weight: 400;">Welcome Bud! Do you want to add anything to the introduction?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">So, as you can tell, I’ve been around a while. I took my first computer class utilizing an old IBM 360 mainframe with an acoustic phone coupler in 1975. So I’ve been playing around with this stuff for quite a while, but I went into sales and marketing, and came back to the dark side about 25 years ago, directly in interactive advertising. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s awesome. You’ve probably seen a lot of variations and development in this space.</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">Oh, everything has changed. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s awesome! Bud, can you tell us about Healthgrades, and how we have probably interacted with Healthgrades without even knowing it?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">So, we have the largest health marketplace in the country &#8211; possibly in the world. And our mission is to help match people with the proper care at the proper time to maximize their healthcare outcomes. So we can help you find the correct doctor &#8211; we have ratings from consumers, and we also bring in any public records that we can get. And the area that I work in the most is with internal customers who do interactive pharma advertising online.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">Bud, can you take 30 seconds to explain what interactive advertising is?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">So just think of interactive advertising as advertising on the web. If you’re looking at, let’s say, a dermatologist, you might get an acne ad for somebody who has a good pharmaceutical for that. So an interactive ad can help you find the right doctor, and possibly find the right treatments and medications. If you don’t know interactive advertising, this can be really complex to set up, maintain, and track, especially for our business, being in the healthcare industry, where we have much more government control over what we can and can’t say and do.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">Excellent. So can you tell us about some of the biggest problems you face with interactive advertising, and how you use Trifacta to make your ads more effective and get more timely insights?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">So one of the biggest problems that we have with interactive advertising from the data side is that most of our data is coming in as attachments to emails. Think CSVs and spreadsheets. We have no control over this, and they may come directly from our clients, or from their agencies, or it may be automated from a client’s Google Ad Manager or Google Campaign Manager via APIs or canned reporting. And that data can change with absolutely no notice, but we still attempt to make changes and get data published within the same work day it’s received. So we have to be very nimble. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">So the accuracy and timeliness are absolutely important. Can you walk us through that a bit more?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">Well, our data is used to ensure that we’re on pace for our KPIs. Using our data, we can tell if we are on schedule or behind, and this gives our trafficking teams the ability to go in and tweak campaigns as we go. Now, most of our customers have monthly goals. It would be very hard if we were only able to go in and make changes to their ads 3 times in that month. So frequency and recency of the data are very important &#8211; time is of the essence.</span></p>
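<p>The pacing check Bud describes boils down to comparing the fraction of the monthly goal delivered against the fraction of the month elapsed. A toy sketch of that arithmetic, with hypothetical numbers and function names (not Healthgrades’ actual logic):</p>

```python
def pacing_ratio(delivered, monthly_goal, day_of_month, days_in_month):
    """Fraction of the goal delivered vs. fraction of the month elapsed.
    A ratio above 1.0 means the campaign is ahead of schedule; below 1.0,
    behind, and the trafficking team may want to tweak it."""
    expected_so_far = monthly_goal * day_of_month / days_in_month
    return delivered / expected_so_far

# Halfway through a 30-day month, only 40% of the goal has been delivered:
ratio = pacing_ratio(delivered=40_000, monthly_goal=100_000,
                     day_of_month=15, days_in_month=30)
# ratio == 0.8, i.e. the campaign is running 20% behind pace
```

<p>With frequent data refreshes, this kind of check can be rerun many times a month rather than only a handful.</p>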
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">It sounds like the speed of insight is very important to you. What problems were you facing with your previous process that handled this incoming data?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">So we were using a SaaS offering that was an all-in-one tool. And that tool helped us get out of doing a lot of things by hand and get to a single source of truth. But what we found was that if we brought in something new, as I discussed in the beginning, and if an agency suddenly changed the format of the data coming into us, even something as small as a name on a field, we would have to open a ticket and wait 1-3 days for resolution. And we don’t like our data being unavailable. So we made the choice to change from an all-in-one vendor to something that gave us a lot more speed and flexibility, but more importantly, more control over how the data was ingested, and how we joined the data. We needed to be able to bring in data from a lot of internal and external systems and be able to expand. </span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">And how did Trifacta help to simplify this process for you and your team?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">What we found is that Trifacta really reduced the need for a lot of what we had originally envisioned writing ourselves. We wrote a script to send information from our emails into S3, and Trifacta can immediately come and grab that information from S3 and standardize it. And that’s helpful, because certain types of data coming in from agencies show the same problems again and again. And if we have a recurring class of issues, we can add a check for it into our template and run it against any file coming in, just in case.</span></p>
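<p>The first hop of the pipeline Bud describes, pulling report attachments out of incoming email before handing them off to S3, can be sketched with Python’s standard email library. This is a hypothetical, simplified stand-in; the actual script and the S3 upload step are not shown:</p>

```python
from email.message import EmailMessage

def extract_csv_attachments(message):
    """Collect CSV attachments from a report email, keyed by filename."""
    out = {}
    for part in message.iter_attachments():
        name = part.get_filename()
        if name and name.lower().endswith(".csv"):
            out[name] = part.get_content()
    return out

# Toy message standing in for an agency's daily report email.
msg = EmailMessage()
msg["Subject"] = "Daily campaign report"
msg.set_content("Report attached.")
msg.add_attachment("campaign,impressions\nacne-ad,1200\n",
                   subtype="csv", filename="report.csv")

attachments = extract_csv_attachments(msg)
# attachments["report.csv"] now holds the CSV text, ready to push to S3
```

<p>Once the extracted files land in S3, a downstream tool can pick them up and standardize them, as Bud describes.</p>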
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">So you’ve been able to use Trifacta to help templatize your process and speed up the onboarding of new files. That’s awesome. That’s great value from a technology perspective. What is some of the value you’ve gained from using Trifacta from a business value perspective?</span></p>
<p><b>Bud: </b><span style="font-weight: 400;">One of the big things that we do is leverage Trifacta to bring in data multiple times every day, and we’ve used this to set up business rule enforcement. We can capture data very quickly, and when things aren’t going out properly, we can get an automatic alert sent out to the team. Right now, we have 25 different alerts that we send out every day. And that saves us I don’t even know how much when it comes to avoiding situations where we might have to give out potential refunds if ads aren’t showing where they were paid for. It also helps us avoid lost opportunities, because we can get data in very quickly. To give you an idea, using Trifacta, I was able to create end-to-end flows to bring in 27 new data sources in one week. That would have been impossible with our old SaaS provider. Trifacta is really the hub of our entire system.</span></p>
<p>&nbsp;</p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s amazing. 27 sources in just one week. Thank you again Bud for walking us through your data journey!</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Candy</name>
					</author>

		<title type="html"><![CDATA[Announcing the Designer Cloud 9.2 Release]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-the-trifacta-9-2-release/" />

		<id>https://www.trifacta.com/?p=60702</id>
		<updated>2022-12-14T01:45:56Z</updated>
		<published>2022-04-20T16:42:16Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[A month has passed by and it’s time for our latest software release. It’s our pleasure to share the new capabilities as part of the Designer Cloud 9.2 release. Let’s learn how we can make data work for us.  &#160; Easier iteration on Designer Cloud recipes with Lock/Unlock Column Data Types You can now lock [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-the-trifacta-9-2-release/"><![CDATA[<p><span style="font-weight: 400;">A month has passed by and it’s time for our latest software release. It’s our pleasure to share the new capabilities as part of the </span><b>Designer Cloud 9.2 release</b><span style="font-weight: 400;">. Let’s learn how we can make data work for us. </span></p>
<p>&nbsp;</p>
<p><strong>Easier iteration on Designer Cloud recipes with Lock/Unlock Column Data Types</strong></p>
<p><span style="font-weight: 400;">You can now lock or unlock the data type of columns in your data, allowing you to iterate on the recipe without constantly resetting the data types. This also helps prevent the column’s data type from being re-inferred after subsequent transformations. </span></p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-60716" src="http://s26597.pcdn.co/wp-content/uploads/2022/04/announcing-trifacta-9-2.gif" alt="" width="1357" height="630" data-wp-pid="60716" /></p>
<p><span style="font-weight: 400;">This new capability lets you define whether a column’s data type should be fixed or allowed to be inferred by the system at various points within the transformation workflow. The capability can be initiated from multiple entry points:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A single column, through the Change Type column menu option within the Transformer grid</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A single column, through a recipe step as an option on the Change Column Type transformation</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A single column, through the column data type drop-down in the Data Configuration settings</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Multiple columns, through a recipe step using the new Lock Column Type transformation step</span></li>
</ol>
<p><span style="font-weight: 400;">Learn more from </span><a href="https://community.trifacta.com/s/article/lock-unlock-column-data-type"><span style="font-weight: 400;">this helpful article</span></a><span style="font-weight: 400;">.</span></p>
<p><strong>Publish Arrays to Google BigQuery</strong></p>
<p><span style="font-weight: 400;">An important data type within the cloud data warehouse ecosystem is an array. Cloud data warehouses such as Google BigQuery support these arrays natively and recommend the use of nested and repeated data types to optimize space and reduce the complexity of dataset joins. </span></p>
<p><span style="font-weight: 400;">Now you can publish arrays natively into BigQuery. This is part of our initiative to support a wide range of data types that you can publish directly onto cloud data warehouses such as BigQuery.</span></p>
<p>&nbsp;</p>
<p><strong>Faster data loading with Designer Cloud&#8217;s in-memory engine</strong></p>
<p><span style="font-weight: 400;">You can now experience faster data loading with Designer Cloud&#8217;s in-memory processing engine. With improved caching, only the incremental changes are executed when new steps are added to a recipe.</span></p>
<p><span style="font-weight: 400;">This release also increases the number of output tables that can be cached for a recipe. Caching is configurable on a per-workspace basis.</span></p>
<p>&nbsp;</p>
<p><strong>New connectors with Designer Cloud 9.2</strong></p>
<p><span style="font-weight: 400;">We continue our journey to help you connect to any data source, enabling additional use cases. With Designer Cloud 9.2, we support the following new connectors:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Marketo:</b><span style="font-weight: 400;"> A marketing automation platform that enables marketers to manage personalized multi-channel programs and campaigns for prospects and customers.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>SFTP support for the AWS Cloud:</b><span style="font-weight: 400;"> You can now use the SFTP connector with Designer Cloud on the AWS cloud as well. </span></li>
</ul>
<p><span style="font-weight: 400;">You can learn all about our </span><a href="https://community.trifacta.com/s/article/connectivity-updates"><span style="font-weight: 400;">connectivity updates here</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">If you haven’t done it already, it’s a great time to </span><a href="https://www.trifacta.com/start-wrangling/"><span style="font-weight: 400;">sign up for a free trial</span></a><span style="font-weight: 400;"> with Designer Cloud. Join us today on our journey to the cloud.</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Emily Valla</name>
					</author>

		<title type="html"><![CDATA[The Power of the Merge: Bringing Together The Best of Data and Analytics]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/the-power-of-the-merge-bringing-together-the-best-of-data-and-analytics/" />

		<id>https://www.trifacta.com/?p=60680</id>
		<updated>2022-04-11T20:37:33Z</updated>
		<published>2022-04-11T20:37:33Z</published>
		<category scheme="https://www.trifacta.com/" term="Company" />
		<summary type="html"><![CDATA[I recall an old ad campaign for Reese’s Peanut Butter Cups whose tagline was “Two great tastes that taste great together.”  Two great things coming together to create a new, better thing? That’s what I call the power of the merge. And there are two examples of this power of the merge I want to [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/the-power-of-the-merge-bringing-together-the-best-of-data-and-analytics/"><![CDATA[<p><span style="font-weight: 400;">I recall an old ad campaign for Reese’s Peanut Butter Cups whose tagline was “Two great tastes that taste great together.” </span></p>
<p><span style="font-weight: 400;">Two great things coming together to create a new, better thing? That’s what I call the </span><i><span style="font-weight: 400;">power of the merge</span></i><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">And there are two examples of this power of the merge I want to share:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The merging of two data and analytics powerhouses, Trifacta and Alteryx, to help our customers drive their analytics transformation at scale</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The merging of two critical market studies from Dresner Advisory Services, </span><i><span style="font-weight: 400;">Data Preparation </span></i><span style="font-weight: 400;">and </span><i><span style="font-weight: 400;">Data Integration Pipelines</span></i><span style="font-weight: 400;">, into a single </span><i><span style="font-weight: 400;">2022 Data Engineering Market Study</span></i><span style="font-weight: 400;">, the first of its kind</span></li>
</ol>
<p><span style="font-weight: 400;">It turns out one merger validates the other. I’ll explain.</span></p>
<h2><span style="font-weight: 400;">Trifacta and Alteryx Combine to Help Every Worker Get the Insights They Need From Their Data </span></h2>
<p><a href="https://www.alteryx.com/about-us/newsroom/press-release/alteryx-closes-acquisition-of-trifacta"><span style="font-weight: 400;">The powerful combination of Trifacta and Alteryx </span></a><span style="font-weight: 400;">brings together Trifacta’s game-changing integrations and cloud-native capabilities with Alteryx’s industry-leading analytics solution. Trifacta anchors and accelerates Alteryx’s journey to the cloud and opens new categories of users across IT within large enterprises. Together, Trifacta and Alteryx advance the development of an integrated end-to-end, low code/no code analytics automation platform in the cloud. </span></p>
<p><span style="font-weight: 400;">I couldn’t be more proud of this formidable new entity we’ve created together. No other company is better suited to meet the analytics needs of its customers. We’re going to propel analytics forward. </span></p>
<p><span style="font-weight: 400;">The power of this combination emanates from a single, shared goal of analytics for all: empowering every worker to get the insights they need from their data. We believe data analytics should be accessible to everyone. Trifacta focuses on the democratization of data engineering. Alteryx focuses on the democratization of analytics. When data analytics are accessible to everyone, everyone wins, from IT to lines of business. </span></p>
<h2><span style="font-weight: 400;">Dresner Merges Two Reports to Recognize Data Engineering</span></h2>
<p><span style="font-weight: 400;">Dresner Advisory Services also demonstrated the power of the merge in 2022. This respected industry analyst combined two of its earlier market studies, </span><i><span style="font-weight: 400;">Data Preparation</span></i><span style="font-weight: 400;"> and </span><i><span style="font-weight: 400;">Data Integration Pipelines</span></i><span style="font-weight: 400;">, into a new independent research report debuting this year: the </span><i><span style="font-weight: 400;">2022 Data Engineering Market Study</span></i><span style="font-weight: 400;">. </span></p>
<p><span style="font-weight: 400;">And I’m delighted to announce Dresner ranked Alteryx/Trifacta as the top data engineering vendor in its first-ever data engineering study.</span></p>
<p><span style="font-weight: 400;">Data engineering is recognized as a stand-alone space for the first time in this new study. It explores market requirements and priorities for data orchestration, integration, and transformations including advanced analytics in the data engineering pipeline workflow. </span></p>
<p><span style="font-weight: 400;">Like other Dresner market studies, its research is exhaustive. Dresner surveyed a diverse cross-section of 6,000 organizations worldwide to review data engineering market trends, dig deep into end-user requirements and features, and rank 28 data engineering vendors.</span></p>
<p><span style="font-weight: 400;">I encourage you to read the </span><a href="https://www.trifacta.com/gated-form/dresner-study-2022/"><span style="font-weight: 400;">full study</span></a><span style="font-weight: 400;">, but here are some highlights:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Data engineering is important. </b><span style="font-weight: 400;">Sixty-one percent of respondents indicate data engineering is “critical” or “very important.” It’s clear this technology is becoming an indispensable part of the 21st century business landscape.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Data engineering is popular.</b><span style="font-weight: 400;"> Sixty-three percent of respondents say their organizations use data engineering capabilities today, and 20% have plans to use data engineering tools within the next 12 months.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Data engineering approaches leave room for improvement. </b><span style="font-weight: 400;">Only 20% of respondents rate their current approach to data engineering as highly effective.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Data engineering tools are versatile. </b><span style="font-weight: 400;">Organizations often purchase and use data engineering tools for more than one use case. Some of the most common ones include data integration, cleansing, and building transformation workflows for data warehouses that support dashboards and reporting.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Data engineering tools are used across the organization. </b><span style="font-weight: 400;">They’re no longer the domain of one department or function, validating Alteryx/Trifacta’s goal to democratize data analytics. Interestingly, executive management teams reported using data engineering constantly.</span></li>
</ul>
<p><span style="font-weight: 400;">These highlights validate the need for the combination of Alteryx and Trifacta, and they confirm what we know: The success of your business depends on the success of your analytics program, and the success of your analytics depends on the success of your data engineering. </span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;"><br />
</span><span style="font-weight: 400;">Dresner isn’t the only industry expert to see the premier value of Trifacta’s data engineering capabilities: the Data Breakthrough Awards recently named Trifacta the</span><a href="https://www.prweb.com/releases/trifacta_recognized_for_data_engineering_innovation_in_the_2022_data_breakthrough_awards_program/prweb18570112.htm"><span style="font-weight: 400;"> Data Transformation Solution of the Year</span></a><span style="font-weight: 400;">. And, customers again ranked Trifacta a Leader in Data Preparation, Data Quality, and ETL Tools with 9 different </span><a href="https://www.g2.com/products/trifacta/reviews"><span style="font-weight: 400;">G2 Awards in Spring 2022.</span></a><span style="font-weight: 400;"> And, the power of Trifacta and Alteryx will only increase as we work toward delivering the world’s first data engineering-backed analytics solution. </span></p>
<p>&nbsp;</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Shyam Srinivasan</name>
					</author>

		<title type="html"><![CDATA[How to Sort Data in Google Sheets]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/how-to-sort-data-in-google-sheets/" />

		<id>https://www.trifacta.com/?p=60588</id>
		<updated>2022-12-28T18:53:48Z</updated>
		<published>2022-03-30T18:59:39Z</published>
		<category scheme="https://www.trifacta.com/" term="Tips &amp; Tricks" />
		<summary type="html"><![CDATA[How to Sort Data in Google Sheets You’ve imported your data into Google Sheets—now you need to sort it. Thankfully, there’s an easier way than moving your columns up or down by hand. Google Sheets allows you to automatically sort your data numerically or alphabetically. &#160; In this post, we’ll review how to sort data [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/how-to-sort-data-in-google-sheets/"><![CDATA[<h2>How to Sort Data in Google Sheets</h2>
<p><span style="font-weight: 400;">You’ve imported your data into Google Sheets—now you need to sort it. Thankfully, there’s an easier way than moving your columns up or down by hand. Google Sheets allows you to automatically sort your data numerically or alphabetically. </span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">In this post, we’ll review how to sort data in Google Sheets as well as how to filter data in Google Sheets. Read on to learn more. </span></p>
<p>&nbsp;</p>
<p><b>How to Sort Data in Google Sheets (Alphabetically or Numerically) </b></p>
<p><span style="font-weight: 400;">Any type of text data can be sorted alphabetically in Google Sheets. In this case, we have a list of names that we’d like to sort.  </span></p>
<p>&nbsp;</p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">First, start by selecting your sheet by clicking the blank square in the upper left corner. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Under </span><b>Data</b><span style="font-weight: 400;">, select “Sort Range,” which will prompt a pop-up window to appear. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">To exclude the header row in our sort, we’ll check the box that says “Data has header row.” </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Under “Sort by,” we’ll select the “Name” column that we want to sort. We’ll keep the “A → Z” option selected since we want the names organized in ascending alphabetical order.</span><span style="font-weight: 400;"><br />
<img decoding="async" loading="lazy" class="alignnone size-full wp-image-60589" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.43-PM.png" alt="" width="942" height="612" data-wp-pid="60589" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.43-PM.png 942w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.43-PM-300x195.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.43-PM-768x499.png 768w" sizes="(max-width: 942px) 100vw, 942px" /><br />
</span></li>
</ol>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">This same function also works with numbers. If we select our column of donation amounts, we could choose to organize those amounts from high to low (Z → A) or low to high (A → Z).  </span></p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60599" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.54-PM.png" alt="" width="960" height="592" data-wp-pid="60599" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.54-PM.png 960w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.54-PM-300x185.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.42.54-PM-768x474.png 768w" sizes="(max-width: 960px) 100vw, 960px" /></p>
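<p><span style="font-weight: 400;">The same single-column sorts can be reproduced in code. Here is a minimal sketch in Python with pandas; the column names and values are hypothetical stand-ins for the sheet’s data:</span></p>

```python
# Sketch of the Sheets sorts above using pandas. The "Name" and
# "Donation" columns are hypothetical stand-ins for the sheet's data.
import pandas as pd

df = pd.DataFrame({
    "Name": ["Charlie", "Amelia", "Bud"],
    "Donation": [50, 120, 80],
})

# Alphabetical, ascending (the "A -> Z" option in Sheets)
by_name = df.sort_values("Name")

# Numeric, descending (the "Z -> A" option)
by_amount = df.sort_values("Donation", ascending=False)

print(by_name["Name"].tolist())       # names in alphabetical order
print(by_amount["Donation"].tolist()) # amounts from highest to lowest
```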
<p><b>How to Sort Data in Google Sheets Across Multiple Columns</b></p>
<p><span style="font-weight: 400;">Google Sheets also gives us the option to sort multiple columns at once. For example, we could give our donor names first priority (sorted A → Z) and our donation amounts second priority (sorted Z → A). </span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Setting that up in Google Sheets would look like this: </span></p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60598" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.00-PM.png" alt="" width="966" height="598" data-wp-pid="60598" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.00-PM.png 966w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.00-PM-300x186.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.00-PM-768x475.png 768w" sizes="(max-width: 966px) 100vw, 966px" /></p>
<p><span style="font-weight: 400;">Before setting up this sorting logic, our “Names” column was sorted alphabetically, but there was no sorting preference for the coinciding donation amounts. </span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">In the image below, we can see that donor Amelia made three donations in the month of September, but her donation amounts aren’t organized in any particular order. </span></p>
<p><span style="font-weight: 400;"><br />
<img decoding="async" loading="lazy" class="alignnone size-full wp-image-60597" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.07-PM.png" alt="" width="704" height="508" data-wp-pid="60597" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.07-PM.png 704w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.07-PM-300x216.png 300w" sizes="(max-width: 704px) 100vw, 704px" /><br />
</span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Now, let’s watch how those donation amounts change once we apply our multi-column sorting logic: </span></p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60596" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.12-PM.png" alt="" width="710" height="524" data-wp-pid="60596" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.12-PM.png 710w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.12-PM-300x221.png 300w" sizes="(max-width: 710px) 100vw, 710px" /></p>
<p><span style="font-weight: 400;">Amelia’s name is still listed alphabetically, but her donation values have been reorganized from highest to lowest (Z → A). Theoretically, we could also add a third sorting column, such as the date of the donation; if there were repeat donors with repeat donation amounts, the date of donation would determine their order. </span></p>
<p><span style="font-weight: 400;">Here are more detailed instructions on how to sort data in multiple columns: </span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">First, start by selecting your sheet by clicking the blank square in the upper left corner. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Under </span><b>Data</b><span style="font-weight: 400;">, select “Sort Range,” which will prompt a pop-up window to appear. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">To exclude the header row, we’ll check the box that says “Data has header row.” </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Select the first column that you’d like to sort and whether the values should be listed from high to low or low to high. </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Click “Add another sort column” to add your second column. Repeat until you’ve selected all columns you’d like to sort. </span></li>
</ol>
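<p><span style="font-weight: 400;">The multi-column sort above boils down to a list of sort keys with per-key directions. A rough pandas equivalent, again with invented donor data:</span></p>

```python
# Hypothetical donor data: names get first priority (A -> Z), and
# donation amounts break ties within each donor (Z -> A, high to low).
import pandas as pd

df = pd.DataFrame({
    "Name": ["Amelia", "Bud", "Amelia", "Amelia"],
    "Donation": [25, 60, 100, 50],
})

sorted_df = df.sort_values(
    ["Name", "Donation"],      # first key has priority
    ascending=[True, False],   # A -> Z for names, high to low for amounts
).reset_index(drop=True)

print(sorted_df)
```

A third key, such as a donation date, would simply be appended to both lists.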
<p><b>How to Filter Data in Google Sheets</b><b><br />
</b><span style="font-weight: 400;">Filtering data in Google Sheets is a great way to highlight certain data while removing (without deleting) other data that you aren’t interested in. It’s especially useful if you’re working on a shared document, where you may be looking to answer different questions about the data than your colleagues are. Filtering protects the integrity of the data while allowing you to quickly find the insights that you need. </span></p>
<p><span style="font-weight: 400;">Here’s how to do it: </span></p>
<p><span style="font-weight: 400;">1. Select the column or range of columns where you want to apply your filter.</span></p>
<p><span style="font-weight: 400;">2. In the upper right corner, click on the three dots and select the funnel “Create a filter.”</span></p>
<p><span style="font-weight: 400;"><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60595" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.19-PM.png" alt="" width="574" height="338" data-wp-pid="60595" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.19-PM.png 574w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.19-PM-300x177.png 300w" sizes="(max-width: 574px) 100vw, 574px" /></span></p>
<p>3. Now, the column(s) that you selected will be highlighted in green. Click on the green funnel next to your column name.</p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60594" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.25-PM.png" alt="" width="466" height="444" data-wp-pid="60594" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.25-PM.png 466w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.43.25-PM-300x286.png 300w" sizes="(max-width: 466px) 100vw, 466px" /></p>
<p>4. This will open up a pop-up window where we’ll decide how we want to filter the data. In this case, we’re going to be applying a conditional filter. A conditional filter allows you to apply certain rules; in this case we want to look at every donation amount above $75 so that we can analyze which donor has donated large amounts.</p>
<p><span style="font-weight: 400;"><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60593" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.13-PM.png" alt="" width="624" height="768" data-wp-pid="60593" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.13-PM.png 624w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.13-PM-244x300.png 244w" sizes="(max-width: 624px) 100vw, 624px" /></span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">The resulting data looks like this: </span></p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60592" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.24-PM.png" alt="" width="462" height="602" data-wp-pid="60592" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.24-PM.png 462w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.24-PM-230x300.png 230w" sizes="(max-width: 462px) 100vw, 462px" /></p>
<p><span style="font-weight: 400;">Of course, there’s also the option to simply filter out certain values. For example, if you had a list of products and the states they were purchased in, you may want to filter out certain states to see the product’s popularity by region. Or, you could filter out products to see if they are more popular in certain states. </span></p>
<p><span style="font-weight: 400;">5. You have the option to save any filter you create so that other collaborators can reuse it. Simply return to the funnel in the upper right and click on the drop down arrow where you’ll select “Save as filter view.” </span></p>
<p>&nbsp;</p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60591" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.30-PM.png" alt="" width="962" height="300" data-wp-pid="60591" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.30-PM.png 962w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.30-PM-300x94.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.30-PM-768x240.png 768w" sizes="(max-width: 962px) 100vw, 962px" /></p>
<p><span style="font-weight: 400;">6. To close out of your filter view, click on the funnel once again so that it is deselected. When you want to return to your filter view, go back to the drop down arrow and select the name of your filter (in this case, “Filter 1”). </span></p>
<p><img decoding="async" loading="lazy" class="alignnone size-full wp-image-60590" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM.png" alt="" width="1366" height="428" data-wp-pid="60590" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM.png 1366w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM-300x94.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM-1024x321.png 1024w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM-768x241.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/03/Screen-Shot-2022-03-30-at-12.44.36-PM-1320x414.png 1320w" sizes="(max-width: 1366px) 100vw, 1366px" /></p>
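<p><span style="font-weight: 400;">Under the hood, the conditional filter in step 4 is just a boolean condition on one column. With hypothetical donor data, the “greater than 75” rule looks like this in pandas:</span></p>

```python
# The "greater than 75" conditional filter above, expressed as a
# boolean mask on a hypothetical donations table.
import pandas as pd

df = pd.DataFrame({
    "Name": ["Amelia", "Bud", "Candy", "Amelia"],
    "Donation": [25, 100, 80, 60],
})

# Keep only rows above $75; the other rows are hidden, not deleted.
large = df[df["Donation"] > 75]

print(large["Name"].tolist())
```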
<p><b>Where Sorting or Filtering Data in Google Sheets Can Fall Short </b><b><br />
</b><span style="font-weight: 400;">Sorting or filtering data can be the answer for those trying to find out how to organize data in Google Sheets. But it’s also often used as a means to explore and better understand the contents of data—and this is where these methods can come up short. </span></p>
<p><span style="font-weight: 400;">Though filtering and sorting data in Google Sheets does offer insight into your data’s trends, at the end of the day, you’re still looking at rows and columns, which makes it hard to get a clear picture of the data and any outliers, commonalities, etc. that it may contain. This difficulty only amplifies as the data grows and you’re no longer scanning tens or hundreds of rows—but thousands. </span></p>
<p><span style="font-weight: 400;">Finally, users must remember that sorting or filtering data will do little good if the data isn’t cleaned properly. For example, say you’re filtering for all mentions of “California” in your data. The filter will </span><i><span style="font-weight: 400;">not</span></i><span style="font-weight: 400;"> bring up any misspellings or abbreviations of the word. While there are ways to search for alternative representations, such as using a conditional filter of “Text starts with” or “Text ends with,” this can still be a time-consuming (and ultimately imperfect) process. </span></p>
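<p><span style="font-weight: 400;">To make the “California” example concrete: a plain equality filter misses variant spellings, while a fuzzy match catches many (though not all) of them. A small sketch using Python’s standard library; the variant values are invented for illustration:</span></p>

```python
# A plain filter on "California" misses variants; a fuzzy match
# catches many of them. The variant spellings below are invented.
# Note that abbreviations like "CA" still need an explicit mapping,
# which is the imperfection the text describes.
from difflib import get_close_matches

values = ["California", "Califronia", "CA", "Texas", "california "]

def looks_like_california(value: str) -> bool:
    # Normalize case/whitespace, then fuzzy-match against the target.
    cleaned = value.strip().title()
    return bool(get_close_matches(cleaned, ["California"], cutoff=0.8))

matches = [v for v in values if looks_like_california(v)]
print(matches)
```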
<p>&nbsp;</p>
<p><b>The Alteryx Designer Cloud Data Preparation Platform</b></p>
<p><span style="font-weight: 400;">Though Google Sheets is an excellent tool for simple reporting or analytic tasks, many organizations are adopting data preparation platforms like Designer Cloud in order to prepare big or complex data for analysis—or to simply ensure that data of </span><i><span style="font-weight: 400;">any</span></i><span style="font-weight: 400;"> size is free of errors. </span></p>
<p><span style="font-weight: 400;">The Designer Cloud platform automatically profiles your data and presents the most compelling visual representation of its content. This allows an immediate understanding of the data at a glance—no more searching or filtering through spreadsheets to find trends across your data.</span></p>
<p><span style="font-weight: 400;">It also alerts users to any data quality concerns, such as missing or invalid data, so that these data quality issues don’t slip through to the end analysis. And, since Trifacta is powered by machine learning, the platform is smart enough to recognize what the user is trying to do. If they want to standardize all versions or misspellings of California, for example, the tool will automatically suggest things like “CA” or “Calif.” </span></p>
<p><span style="font-weight: 400;">To be clear, there is no direct competition between Google Sheets and a data preparation platform like Trifacta—they are simply two great tools used for different purposes. In fact, the Designer Cloud technology can be found on the Google Cloud Platform as </span><a href="https://cloud.google.com/dataprep"><span style="font-weight: 400;">Google Cloud Dataprep by Trifacta</span></a><span style="font-weight: 400;">. And while using Cloud Dataprep, it’s easy for users to pull in Google Sheets data to explore, join, and prepare for analytic use. </span></p>
<p><a href="https://www.trifacta.com/start-wrangling/"><b><i>To learn more about Designer Cloud, kick off your 30-day free trial today! </i></b></a></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Shyam Srinivasan</name>
					</author>

		<title type="html"><![CDATA[What&#8217;s New in the Designer Cloud 9.1 Release]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-the-trifacta-9-1-release/" />

		<id>https://www.trifacta.com/?p=60574</id>
		<updated>2022-12-14T03:03:01Z</updated>
		<published>2022-03-29T14:11:23Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[Our software releases and updates come fast and furious. We’re excited to share the latest capabilities as part of the Designer Cloud 9.1 release. As always, we cover a wide range of features related to data engineering. Let’s dive into them. General Availability of SSH Tunneling Connectivity Support Hybrid architectures spanning the cloud and on-premises networks [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-the-trifacta-9-1-release/"><![CDATA[<p>Our software releases and updates come fast and furious. We’re excited to share the latest capabilities as part of the <strong>Designer Cloud 9.1 release</strong>. As always, we cover a wide range of features related to data engineering. Let’s dive into them.</p>
<p><strong>General Availability of SSH Tunneling Connectivity Support</strong></p>
<p>Hybrid architectures spanning the cloud and on-premises networks are common, especially for large enterprises with applications residing both on-premises and in the cloud. To support and strengthen hybrid architectures, we’re excited to announce the <strong>General Availability of connectivity using SSH Tunneling</strong>. This expands on our previous limited preview announcement of this capability with our 8.10 release last year.</p>
<p>To help connect to hosts such as database servers deployed within a private network, you can now enable SSH Tunneling within Designer Cloud. With SSH Tunneling, only the SSH port needs to be open to public networks, and only when needed; you don’t need to whitelist specific IP addresses or open application ports to access these hosts. SSH Tunneling is a widely accepted technology in which all data is encrypted in transit, maintaining a secure transport session.</p>
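<p>To illustrate how a local port forward works in general (this is a conceptual sketch, not Designer Cloud’s implementation; the host names and ports are hypothetical), the equivalent <code>ssh -L</code> command for reaching a private database through a bastion host can be built like this:</p>

```python
def ssh_tunnel_command(ssh_user, ssh_host, local_port, db_host, db_port):
    """Build an `ssh -L` local port-forward command: connections to
    localhost:local_port are carried over the encrypted SSH session to
    ssh_host, which forwards them on to db_host:db_port inside the
    private network. Only the SSH port on ssh_host must be reachable."""
    return (f"ssh -N -L {local_port}:{db_host}:{db_port} "
            f"{ssh_user}@{ssh_host}")

# Example: reach a private Postgres server via a bastion host.
print(ssh_tunnel_command("etl", "bastion.example.com", 5433, "db.internal", 5432))
```

<p>A database client would then connect to <code>localhost:5433</code> as if it were the database itself, with all traffic encrypted through the tunnel.</p>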
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-60573" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-create-connection.png" alt="" width="629" height="623" data-wp-pid="60573" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-create-connection.png 629w, http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-create-connection-300x297.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-create-connection-150x150.png 150w" sizes="(max-width: 629px) 100vw, 629px" /></p>
<p>You can learn more from our <a href="https://docs.trifacta.com/display/AWS/Configure+SSH+Tunnel+Connectivity" target="_blank" rel="noopener noreferrer">technical documentation</a>.</p>
<p><strong>Higher Data Accuracy with Schema Change Detection</strong></p>
<p>Schema refers to the sequence of columns and their data types in a dataset. It is common for schemas to change over time, breaking transformation steps or recipes and potentially corrupting data in downstream applications. This new capability enables you to monitor schema changes in your dataset and helps you identify data sources where the schema has changed; optionally, the job fails when this occurs. Detection works by comparing the current schema of the data source against the schema that was previously stored in the database.</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-60576" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-schema-changes.png" alt="" width="682" height="690" data-wp-pid="60576" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-schema-changes.png 682w, http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-schema-changes-297x300.png 297w" sizes="(max-width: 682px) 100vw, 682px" /></p>
<p>Schema changes are detected if columns are added, removed, or moved. You can configure the jobs to fail if schema changes are detected. This is supported for JDBC, BigQuery, AVRO, and Parquet file formats, with support for additional formats coming in the upcoming releases. Learn more about this new capability with our community article <a href="https://community.trifacta.com/s/article/schema-drift-detection" target="_blank" rel="noopener noreferrer">here</a>.</p>
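<p>Conceptually, the check compares the stored schema against the source’s current schema. Here is a toy sketch of that comparison (the column names and types are illustrative, not Designer Cloud internals):</p>

```python
def detect_schema_changes(stored, current):
    """Compare a previously stored schema against the current source
    schema, both given as ordered lists of (column_name, data_type).
    Flags columns that were added, removed, or moved."""
    stored_names = [name for name, _ in stored]
    current_names = [name for name, _ in current]
    added = [n for n in current_names if n not in stored_names]
    removed = [n for n in stored_names if n not in current_names]
    # A column has "moved" if the relative order of shared columns changed.
    stored_common = [n for n in stored_names if n in current_names]
    current_common = [n for n in current_names if n in stored_names]
    moved = [s for s, c in zip(stored_common, current_common) if s != c]
    return {"added": added, "removed": removed, "moved": moved,
            "changed": bool(added or removed or moved)}

old = [("id", "int"), ("name", "str"), ("amount", "float")]
new = [("name", "str"), ("id", "int"), ("email", "str")]
# A job configured to fail on schema drift would abort when "changed" is True.
print(detect_schema_changes(old, new))
```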
<p><strong>Better Visibility with Sample Job IDs</strong></p>
<p>With Designer Cloud, you can quickly start working with your dataset. This is accomplished by automatically generating a sample from the first set of rows of your dataset. Sample jobs are independent executions; you can always specify the type of sample you wish to create and initiate the job that creates it. Sampling jobs run in the background.</p>
<p>Samples have unique IDs, which previously could be accessed only on the job history page. You can now see these IDs in the familiar Transformer view, making samples easier to identify, and all samples are shown along with their IDs on a single screen. The details of a sample, including its job ID, can be accessed by clicking on the sample’s name.</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-60577" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-sample-details.png" alt="" width="979" height="617" data-wp-pid="60577" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-sample-details.png 979w, http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-sample-details-300x189.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-sample-details-768x484.png 768w" sizes="(max-width: 979px) 100vw, 979px" /></p>
<p>You can learn all about sampling <a href="https://community.trifacta.com/s/article/sampling" target="_blank" rel="noopener noreferrer">here</a>.</p>
<p><strong>Increased flexibility with dataset configurations</strong></p>
<p>Datasets are the foundation for all data pipelines. Datasets often come from different sources containing extraneous columns, complex column names, or other inconsistencies leading to incorrect inference and inaccurate results. You can now overcome these hurdles by updating and preserving metadata configuration that can be reused consistently and applied each time the dataset is used in a new flow.</p>
<p><img decoding="async" loading="lazy" class="aligncenter size-full wp-image-60578" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/trifacta-9-1-dataset-settings.gif" alt="" width="1257" height="655" data-wp-pid="60578" /></p>
<p>With this new capability, you can search for and select a subset of columns to be included in or omitted from flows whenever the dataset is used. You can manually rename columns as part of the reusable dataset and override system-inferred Trifacta data types, saving those settings for ongoing reuse. This is currently supported for relational sources, delimited files, and schematized file formats such as Parquet and Avro.</p>
<p>Learn more about configuration settings <a href="https://community.trifacta.com/s/article/dataset-configuration-settings" target="_blank" rel="noopener noreferrer">here</a>.</p>
<p><strong>Additional security with Customer Managed Encryption Keys (CMEK) for Dataflow</strong></p>
<p>Customer Managed Encryption Keys are created, managed, and stored within the cloud key management service. These keys can be applied to individual objects. When used, data written for the objects scoped by the keys is automatically encrypted when written and decrypted when read.</p>
<p>Now, you can have user-specific CMEKs when using Google Cloud Dataflow. This ensures that any intermediate files created by Dataflow use the CMEKs. This capability is currently in Private Preview; please contact Trifacta support if you would like to use this feature.</p>
<p><strong>New connector with Designer Cloud 9.1</strong></p>
<p>We now have a new connector in Early Preview: <strong>Instagram Ads</strong>, a method of paying for sponsored post content on the Instagram platform. You can learn all about our <a href="https://community.trifacta.com/s/article/connectivity-updates" target="_blank" rel="noopener noreferrer">connectivity updates here</a>.</p>
<p>It’s never too late to <a href="https://www.trifacta.com/start-wrangling/" target="_blank" rel="noopener noreferrer">sign up for a free trial</a> with Designer Cloud. Join us today on our journey to the cloud with Alteryx.</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Paul Warburg</name>
					</author>

		<title type="html"><![CDATA[Trifacta Legend February 2022: Bob Hall at The Home Depot]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/trifacta-legend-february-2022-bob-hall-at-the-home-depot/" />

		<id>https://www.trifacta.com/?p=60537</id>
		<updated>2022-12-28T20:06:37Z</updated>
		<published>2022-03-18T16:25:59Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Customer Spotlight" />
		<summary type="html"><![CDATA[Trifacta Legends recognizes a customer every month who is doing groundbreaking work with data using Trifacta. We’re pleased to announce the Trifacta Legend for February 2022: Bob Hall, Sr. Manager of People Analytics at The Home Depot.  Bob Hall is a Sr. Manager of People Analytics at The Home Depot, where he and his team [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/trifacta-legend-february-2022-bob-hall-at-the-home-depot/"><![CDATA[<p><span style="font-weight: 400;"><img decoding="async" loading="lazy" class="alignright size-medium wp-image-60540" src="http://s26597.pcdn.co/wp-content/uploads/2022/03/bob-300x300.png" alt="" width="300" height="300" data-wp-pid="60540" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/03/bob-300x300.png 300w, http://s26597.pcdn.co/wp-content/uploads/2022/03/bob-150x150.png 150w, http://s26597.pcdn.co/wp-content/uploads/2022/03/bob-768x768.png 768w, http://s26597.pcdn.co/wp-content/uploads/2022/03/bob-600x600.png 600w, http://s26597.pcdn.co/wp-content/uploads/2022/03/bob.png 800w" sizes="(max-width: 300px) 100vw, 300px" />Trifacta Legends recognizes a customer every month who is doing groundbreaking work with data using Trifacta.</span></p>
<p><span style="font-weight: 400;">We’re pleased to announce the <strong>Trifacta Legend for February 2022: Bob Hall, Sr. Manager of People Analytics at The Home Depot.</strong> </span></p>
<p><span style="font-weight: 400;">Bob Hall is a Sr. Manager of People Analytics at The Home Depot, where he and his team help support and drive efficiency for Home Depot’s field associates. Bob is passionate about process innovation and democratizing data to provide actionable and intelligent insights to his organization. During his time at Home Depot, Bob has played a key role in upgrading antiquated processes and replacing them with automation powered by a modern data stack. Bob has a bachelor’s degree in Business Management from the Georgia Institute of Technology.</span></p>
<p><span style="font-weight: 400;">We talked to Bob about his experience and insight into process innovation at The Home Depot. Bob shared some of the challenges he and his team faced and how he overcame those obstacles to create scalable processes using Trifacta’s Data Engineering Cloud. </span></p>
<p><b>Trifacta: </b><span style="font-weight: 400;">First, thank you Bob for being a valued customer of Trifacta. It has been a pleasure to work with you and we look forward to the continued partnership. Can you tell us a little more about yourself and your role at The Home Depot?</span></p>
<p><b>Bob:</b><span style="font-weight: 400;"> Certainly. I am deeply passionate about process innovation. As you can imagine with a huge company like ours, there are some systems that are antiquated and some processes that are antiquated. Home Depot is really invested in making sure we’re staying up to speed with the times, and part of that is how do we make things better for our associates? All of our roles here at the Store Support Center are to support our field organization and our field associates and make their jobs easier. </span></p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s awesome. Can you tell us about one of the great use cases that you’ve delivered with Trifacta? </span></p>
<p><b>Bob: </b><span style="font-weight: 400;">Sure. So we have a Paycard Reconciliation use case. This is one of our foundational use cases. And I say that because this is a use case that we identified very early on that really allowed us to test and learn. </span></p>
<p><span style="font-weight: 400;">So what is Paycard Reconciliation? Basically, as we’re delivering paycards to either terminated associates or people who have elected to get a paycard, we’re trying to ensure that this payment reconciles back financially. This is a very important process that has compliance aspects to it and associate satisfaction is also tied to it. It’s all about reducing errors, reducing fallouts, and ensuring accuracy and compliance. There’s also a big time saving component. Spending a couple hours doing this everyday was not going to be beneficial for our team when they could lend their expertise to something else and drive more value to our associates and customers.</span></p>
<p><b>Trifacta: </b><span style="font-weight: 400;">So what was this process like before you started using Trifacta?</span></p>
<p><b>Bob: </b><span style="font-weight: 400;">We were using a combination of Excel and Access. We would take Workday files and files from our banks that we leverage. In Excel, our associates would spend a couple of hours looking at an Excel spreadsheet and manually reconciling things that were fallouts or outliers. Then, we had to feed this back to the Access database so that we could re-reconcile the next day for the next run. So overall, it was a highly manual process with a lot of steps, pulling in a lot of data from different sources and disparate systems.</span></p>
<p><span style="font-weight: 400;">With Dataprep, until we get to RPA to feed Dataprep, we still have to take those reports and reconcile them from our vendors, our banks, but after that, the process is purely automated. So Dataprep takes them from storage, it does the reconciliation for us, it calls out any fallouts or errors, and then it feeds it back into the Dataprep cycle. We now have daily execution and automation through scheduling functionality in Dataprep. And we now save over 2 hours per run on this per day, saving us a significant amount of time while ensuring consistency and accuracy, which is probably one of the more important things for us. </span></p>
<p><b>Trifacta:</b><span style="font-weight: 400;"> That’s amazing. So when you talk about Trifacta, how would you articulate the value?</span></p>
<p><b>Bob:</b><span style="font-weight: 400;"> What’s the value? We’ve reduced the daily reconciliation process by 2 hours, and really annually by around 520 hours. That’s a huge measure of value for us. It’s also improved the quality of work. No one wants to do manual reconciliations for several hours a day. So we’ve improved that quality for them so that they can hopefully do the same for our associates that interact with our field customers. There’s also risk reduction. We now have full confidence in completeness and accuracy around this process, so that if we’re approached for audits we can now walk everyone through all of the steps. We’ve used Trifacta to scale up operations. Before, we had to manually scale down our scope, but with Dataprep, we can scale up virtually to audit the entire population systemically. </span></p>
<p><b>Trifacta: </b><span style="font-weight: 400;">That’s incredible! Thanks again Bob for walking us through your use case and all the value you’re adding to The Home Depot. We really appreciate it.</span></p>
<p><b>Bob:</b><span style="font-weight: 400;"> You’re welcome!</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Shyam Srinivasan</name>
					</author>

		<title type="html"><![CDATA[What’s New in the Designer Cloud 9.0 Release]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/whats-new-in-trifacta-9-0-release/" />

		<id>https://www.trifacta.com/?p=60231</id>
		<updated>2022-12-14T01:43:07Z</updated>
		<published>2022-02-17T18:19:05Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[It’s been a whirlwind of a New Year for us at Alteryx (formerly Trifacta) with all our big announcements and updates. On that note, our delivery of new capabilities showcasing our innovation in the data engineering space continues at a frenetic pace. 2022 is here and it’s time to get on board the new 9.x software [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/whats-new-in-trifacta-9-0-release/"><![CDATA[<p><span style="font-weight: 400;">It’s been a whirlwind of a New Year for us at </span><span style="font-weight: 400;">Alteryx (formerly Trifacta)</span><span style="font-weight: 400;"> with all our </span><span style="font-weight: 400;">big announcements and updates</span><span style="font-weight: 400;">. On that note, our delivery of new capabilities showcasing our innovation in the data engineering space continues at a frenetic pace. 2022 is here and it’s time to get on board the new 9.x software release train. We have some innovative capabilities on the first release of this train, and we just got started on what is going to be an exciting journey ahead. Let’s dive right in!</span></p>
<p><b>General Availability of SQL-based ELT on Snowflake, the Data Cloud</b></p>
<p><span style="font-weight: 400;">Last year, we launched the private preview of SQL-based ELT on Snowflake with our </span><span style="font-weight: 400;">8.10 release</span><span style="font-weight: 400;"> at AWS re:Invent. We’re excited to announce that this capability is now generally available to all our customers. </span></p>
<p><span style="font-weight: 400;">With full pushdown on Snowflake, the data transformation logic, also known as the data wrangling logic, is converted into SQL, and the transformations are executed directly on Snowflake. During transformation, the data stays within Snowflake, resulting in a secure solution that efficiently uses compute resources in the cloud while delivering a complete data transformation solution within the ELT architecture.</span></p>
<p><img decoding="async" loading="lazy" class="size-full wp-image-60232 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-1.png" alt="" width="512" height="449" data-wp-pid="60232" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-1.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-1-300x263.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></p>
<p><span style="font-weight: 400;"><br />
Read our </span><a href="https://www.trifacta.com/blog/sql-based-elt-with-pushdown-optimization-on-snowflake/"><span style="font-weight: 400;">technical blog post </span></a><span style="font-weight: 400;">to learn more.</span></p>
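<p>To illustrate the idea of pushdown (a toy sketch, not Trifacta’s actual SQL generator; the step format here is invented for illustration), each wrangling step can be compiled into SQL that runs entirely inside the warehouse:</p>

```python
def pushdown_sql(table, steps):
    """Compile a toy list of wrangling steps into one SQL statement.
    Instead of pulling rows out of the warehouse for transformation
    (ETL), the SQL is shipped to the warehouse and executed there
    (ELT pushdown), so the data never leaves it."""
    derived, where = [], []
    for step in steps:
        if step["op"] == "derive":
            derived.append(f"{step['formula']} AS {step['new_col']}")
        elif step["op"] == "filter":
            where.append(step["condition"])
    select = ", ".join(["*"] + derived)
    sql = f"SELECT {select} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

steps = [{"op": "filter", "condition": "amount > 0"},
         {"op": "derive", "formula": "amount * 0.1", "new_col": "tax"}]
print(pushdown_sql("orders", steps))
```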
<p><b>Keep your data fresh and updated with ‘Dataset Schema Refresh’</b></p>
<p><span style="font-weight: 400;">Our newest capability allows you to refresh dataset metadata with the latest source schema. A schema is a skeleton structure representing the logical view of a dataset. This dataset can be a file, a table, or a SQL query in a database. Schemas may apply to relational tables and schematized file formats such as Parquet and Avro.</span></p>
<p><span style="font-weight: 400;">Dataset schemas change over time, and columns may be added to or removed from your dataset. This can produce incorrect results, as headers may no longer align with the underlying data. To avoid this, users previously needed to import new copies of the dataset to keep it in sync. If the same dataset was used in several flows, users needed to manually replace the dataset in each flow to sync with the new version, which is highly error-prone.</span></p>
<p><span style="font-weight: 400;">Now, “Dataset Schema Refresh” enables on-demand updating of your imported dataset schemas to capture changes to columns. This helps reduce the number of duplicate or invalid datasets created from the same source, as well as the challenges of replacing datasets and retaking samples when dataset schemas change. You can initiate multiple refreshes of different datasets concurrently, increasing your efficiency. This capability applies to relational schemas, schema files, and delimited files.</span></p>
<p><img decoding="async" loading="lazy" class="aligncenter wp-image-60233 size-full" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-2.gif" alt="" width="512" height="245" data-wp-pid="60233" /></p>
<p><b><br />
Increased flexibility with data transformations using Javascript User-Defined Functions</b></p>
<p><span style="font-weight: 400;">Today, Designer Cloud supports a wide range of functions to transform your data easily and efficiently. However, some use cases require custom functions. To support them, you can now create custom transformations using Javascript User-Defined Functions (UDFs), also referred to as custom user-defined functions. These functions can be imported into Designer Cloud for use in your recipes.</span></p>
<p><span style="font-weight: 400;">Once a Javascript UDF has been created and added by one user, it becomes searchable and available for all users within the familiar Designer Cloud Transformer interface. You can also combine the UDF with other generally available functions in the Transformer formula builder for further expansion.</span></p>
<p><span style="font-weight: 400;">With the 9.0 release, Javascript UDFs are available in private preview. To enable this capability in your environment, please email </span><a href="mailto:support@trifacta.com"><span style="font-weight: 400;">support@trifacta.com</span></a><span style="font-weight: 400;"> with details of your use case. We look forward to working with you and enabling your specific use cases.</span></p>
<p><b>REST API Connectivity</b></p>
<p><span style="font-weight: 400;">We now support REST API connections on Designer Cloud that provide a generic interface to relational data. REST APIs have gained popularity as they provide a flexible, lightweight way to integrate applications, and have emerged as a common method to connect endpoints in different architectures. </span></p>
<p><span style="font-weight: 400;">Here is a high-level refresher on what APIs and REST APIs are. APIs or Application Programming Interfaces are sets of rules that define how applications can connect to, and communicate with each other. REST APIs communicate via HTTP requests to perform standard database functions such as creating, reading, updating, and deleting records within a resource. Using REST API connections within Designer Cloud, you can now create connections to individual endpoints across hundreds of REST-based applications. This is an import-only connection type. REST API connectivity is available in private preview with the 9.0 release. </span></p>
<p><span style="font-weight: 400;">Click </span><a href="https://docs.trifacta.com/display/AWS/REST+API+Connections" target="_blank" rel="noopener"><span style="font-weight: 400;">here </span></a><span style="font-weight: 400;">for more information.</span></p>
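<p>As a refresher on how the standard database operations map onto HTTP, here is a minimal sketch (the endpoint paths and field names are hypothetical, not a Trifacta or vendor API):</p>

```python
# Map the standard CRUD database operations onto REST/HTTP verbs.
CRUD_TO_HTTP = {
    "create": ("POST",   "/records"),
    "read":   ("GET",    "/records/{id}"),
    "update": ("PUT",    "/records/{id}"),
    "delete": ("DELETE", "/records/{id}"),
}

def build_request(operation, base_url, record_id=None, payload=None):
    """Describe the HTTP request a REST client would issue for a given
    CRUD operation; an HTTP library would then actually send it."""
    method, path = CRUD_TO_HTTP[operation]
    url = base_url + path.format(id=record_id)
    return {"method": method, "url": url, "json": payload}

print(build_request("read", "https://api.example.com", record_id=42))
```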
<p><b>Easy resolution for missing or invalid connections</b></p>
<p><span style="font-weight: 400;">When a flow containing pre or post-SQL scripts does not have a valid connection associated with it, the scripts will fail. This could happen due to moving flows from one environment to another where connections might not exist. It could also be due to users importing flows containing pre or post-SQL publishing actions using private connections that have not been shared with them.</span></p>
<p><span style="font-weight: 400;">We now provide users the ability to resolve missing or invalid connections for pre or post-run SQL scripts by alerting the users on these invalid connections on the flow view canvas as well as on the side panel. When the publishing action is opened, you will see that the current connection is invalid and you will have the option to choose another valid database connection that you have access to.</span></p>
<p><img decoding="async" loading="lazy" class="size-full wp-image-60234 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-3.png" alt="" width="512" height="415" data-wp-pid="60234" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-3.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-3-300x243.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></p>
<p><img decoding="async" loading="lazy" class="size-full wp-image-60235 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-4.png" alt="" width="512" height="264" data-wp-pid="60235" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-4.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Trifacta-9.0-4-300x155.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></p>
<p><b><br />
New connectors with Designer Cloud 9.0</b></p>
<p><span style="font-weight: 400;">We continue our journey to help you connect to any data source, enabling additional use cases. With Designer Cloud 9.0, we support the following new connectors:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Zendesk:</b><span style="font-weight: 400;"> A service-first CRM company that builds software designed to improve customer relationships.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>LinkedIn Ads: </b><span style="font-weight: 400;">A paid marketing tool that offers access to LinkedIn social feed through sponsored posts.</span></li>
</ul>
<p><span style="font-weight: 400;">You can learn all about our </span><a href="https://community.trifacta.com/s/article/connectivity-updates" target="_blank" rel="noopener"><span style="font-weight: 400;">connectivity updates here</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">With our recent acquisition by </span><a href="https://www.alteryx.com" target="_blank" rel="noopener"><span style="font-weight: 400;">Alteryx</span></a><span style="font-weight: 400;">, we are forging ahead in our cloud journey for the best of data engineering and analytics. If you have not done so already, </span><a href="https://www.trifacta.com/start-wrangling/" rel="noopener"><span style="font-weight: 400;">sign up for a free trial</span></a><span style="font-weight: 400;"> today and join us on this exciting ride. Onwards and upwards!</span></p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Yeyre Reiter</name>
					</author>

		<title type="html"><![CDATA[ETL Developer: Key Role in Determining and Supporting Data Systems and Data Storage]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/etl-developer-2/" />

		<id>https://www.trifacta.com/?p=60210</id>
		<updated>2022-12-28T19:34:55Z</updated>
		<published>2022-02-16T18:20:16Z</published>
		<category scheme="https://www.trifacta.com/" term="Tips &amp; Tricks" />
		<summary type="html"><![CDATA[Who Is an ETL Developer? An ETL Developer is an IT specialist, well-versed in software engineering and database development, who designs, develops, automates, and supports complex applications to extract, transform, and load data. ETL stands for “extract, transform, load.” It refers to the 3-step process of preparing raw data so that data analysts and data [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/etl-developer-2/"><![CDATA[<h3><em><strong>Who Is an ETL Developer?</strong></em></h3>
<p><span style="font-weight: 400;">An ETL Developer is an IT specialist, well-versed in software engineering and database development, who designs, develops, automates, and supports complex applications to extract, transform, and load data. ETL stands for “extract, transform, load.” It refers to the 3-step process of preparing raw data so that data analysts and data scientists can use it to gain actionable insights about the business.</span></p>
<p><b>Step 1: Extract</b></p>
<p><span style="font-weight: 400;">Organizations generate massive volumes of data. This data may be stored across multiple systems and in a wide range of different formats. Data must be extracted from cloud environments, CRMs, or other external systems before it can be used in applications or for analytics or machine learning. </span></p>
<p><b>Step 2: Transform</b></p>
<p><span style="font-weight: 400;">After data is extracted and collected, it&#8217;s in a raw state and needs work to make it compatible with defined standards. Transforming data can involve: </span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Cleansing:</b><span style="font-weight: 400;"> removing inconsistencies and missing values </span></li>
<li style="font-weight: 400;" aria-level="1"><b>Standardizing:</b><span style="font-weight: 400;"> bringing datasets into a required format</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Deduplicating:</b><span style="font-weight: 400;"> removing duplicate records</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Verifying:</b><span style="font-weight: 400;"> removing data that can&#8217;t be used and marking aberrations</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Sorting:</b><span style="font-weight: 400;"> organizing data by type</span></li>
</ul>
<p><b>Step 3: Load</b></p>
<p><span style="font-weight: 400;">The final step is to load transformed data into data storage, such as a data warehouse, cloud data warehouse, cloud data lake, or data lakehouse, or into external systems or applications. These systems include automated tools to make data accessible for users, such as business intelligence tools for visualizing and reporting on data.</span></p>
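<p>The three steps above can be sketched end to end in a few lines. This is a minimal illustration, using an in-memory SQLite database to stand in for the warehouse; the column names and rules are invented for the example:</p>

```python
import csv, io, sqlite3

def etl(raw_csv):
    # Extract: pull raw rows out of the source (a CSV string here).
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: cleanse rows with missing values, deduplicate on id,
    # standardize the region to upper case, and sort by id.
    seen, clean = set(), []
    for r in rows:
        if not r["amount"] or r["id"] in seen:
            continue
        seen.add(r["id"])
        clean.append((int(r["id"]), r["region"].upper(), float(r["amount"])))
    clean.sort()
    # Load: write the transformed rows into the destination table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
    db.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
    return db.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()

raw = "id,region,amount\n1,west,10\n2,east,\n1,west,10\n3,south,5\n"
print(etl(raw))  # the rows surviving cleansing and deduplication
```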
<p>&nbsp;</p>
<h3><em><strong>What Are the Responsibilities of an ETL Developer?</strong></em></h3>
<p><span style="font-weight: 400;">ETL Developers must have a big-picture view of their organization’s data needs and environment and are responsible for a wide range of duties and tasks.</span></p>
<p><b>Determining Data Storage and Management Needs</b></p>
<p><span style="font-weight: 400;">ETL Developers figure out the exact storage needs of the organizations they work for. They need a clear, detailed picture of their organization’s current and future data architecture, environment, and needs.</span></p>
<p><b>Designing and Building Data Storage and Management Systems</b></p>
<p><span style="font-weight: 400;">ETL Developers design systems, such as cloud data warehouses, cloud data lakes, or lakehouses, to address their organizations’ data needs and work with development teams to build them.</span></p>
<p><b>Building Data Pipelines</b></p>
<p><span style="font-weight: 400;">ETL Developers create and manage data pipelines—that is, reliable tools and processes that deliver data to end users—to connect to data in different formats and move it between systems.</span></p>
<p><b>Extracting, Transforming, and Loading Data</b></p>
<p><span style="font-weight: 400;">When building data pipelines, the goal of an ETL Developer is to extract data, prepare it, and move it, in full loads and/or incremental data loads, from a source into a destination such as a cloud data warehouse, cloud data lake, data lakehouse, or external application.</span></p>
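<p><span style="font-weight: 400;">The difference between a full load and an incremental load can be sketched as follows; the watermark column and data here are hypothetical and only illustrate the idea:</span></p>

```python
# Illustrative full load vs. incremental load driven by a watermark column
# ("updated_at"). All names and data here are hypothetical.
SOURCE = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 150},
    {"id": 3, "updated_at": 200},
]

def full_load(source):
    """Full load: move every source row into the destination."""
    return list(source)

def incremental_load(source, last_watermark):
    """Incremental load: move only rows changed since the last run."""
    return [row for row in source if row["updated_at"] > last_watermark]

destination = full_load(SOURCE)            # first run moves everything
new_rows = incremental_load(SOURCE, 150)   # a later run moves only newer rows
destination.extend(new_rows)
```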
<p><b>Testing and Troubleshooting</b></p>
<p><span style="font-weight: 400;">ETL Developers perform quality assurance tests to make sure their systems and pipelines are stable and run smoothly. ETL Developers also identify and resolve system problems that may arise within the warehousing system. </span></p>
<p>&nbsp;</p>
<h3><em><strong>How Does Trifacta Help ETL Developers?</strong></em></h3>
<p><span style="font-weight: 400;">Trifacta significantly reduces the time, technical skills, and costs required for ETL Developers to access any type of data, wherever it resides, and automate the process of transforming data and building data pipelines. </span></p>
<p><span style="font-weight: 400;">The Trifacta Data Engineering Cloud helps ETL Developers transform data, ensure quality, and automate data pipelines, making data consumable at any scale. This intelligent, collaborative, self-service data engineering cloud platform helps ETL Developers:</span></p>
<p><b>Connect to data from any source. </b><span style="font-weight: 400;">With universal data connectivity and a self-service architecture, Trifacta makes it fast and easy for ETL Developers to connect data from any source. This makes it easier for ETL Developers to support a wider range of data integration use cases and applications.</span></p>
<p><b>Transform raw data into ready-to-use data. </b><span style="font-weight: 400;">ETL Developers can use Trifacta’s visual interface and predictive data transformation suggestions to greatly reduce the time it takes to detect and resolve complex data patterns and transform them into consumable data across the organization. </span></p>
<p><b>Create real-time previews of transformed data. </b><span style="font-weight: 400;">Trifacta presents automated, visual, and interactive representations of data. ETL Developers can use these previews to explore data more deeply and understand it at its most granular level. Outliers in the data can be automatically identified and flagged for follow-up, helping ETL Developers easily eliminate bad data.</span></p>
<p><b>Build, automate, and deploy data pipelines. </b><span style="font-weight: 400;">With just a few clicks, Trifacta helps ETL Developers build automated data pipelines at scale. With Trifacta, ETL Developers can deploy and manage self-service data pipelines in minutes, not months.</span></p>
<p><span style="font-weight: 400;">Interested in learning how Trifacta can help your ETL Developers reduce the time, technical skills, and costs required to transform data and build data pipelines? </span><a href="https://www.trifacta.com/schedule-a-demo/"><span style="font-weight: 400;">Schedule a demo of Trifacta today</span></a><span style="font-weight: 400;">. </span></p>
<p>&nbsp;</p>
]]></content>
		
			</entry>
		<entry>
		<author>
			<name>Praphull Mishra</name>
					</author>

		<title type="html"><![CDATA[Faster Data Processing with Designer Cloud&#8217;s In-Memory Engine]]></title>
		<link rel="alternate" type="text/html" href="https://www.trifacta.com/blog/faster-data-processing-photon/" />

		<id>https://www.trifacta.com/?p=60071</id>
		<updated>2022-12-14T04:15:34Z</updated>
		<published>2022-02-03T23:32:54Z</published>
		<category scheme="https://www.trifacta.com/" term="Cloud Products" />
		<summary type="html"><![CDATA[Data transformation has always been a demanding task for data processing engines. Distributed data processing engines can be very handy for processing very large datasets, but they aren&#8217;t competitive with running a single process on a single machine when the data fits in memory. Processing these small to medium-sized datasets has been [&#8230;]]]></summary>

					<content type="html" xml:base="https://www.trifacta.com/blog/faster-data-processing-photon/"><![CDATA[<p><span style="font-weight: 300;">Data transformation has always been a demanding task for data processing engines. Distributed data processing engines can be very handy for processing very large datasets, but they aren&#8217;t competitive with running a single process on a single machine when the data fits in memory. Processing these small to medium-sized datasets has been a problem for well-known distributed data processing engines because of their initialization overhead. With this in mind, Photon aims to empower users with a data transformation experience that is intelligent, productive, fast, and efficient when working on small to medium-sized datasets.</span><span style="font-weight: 300;"> First introduced at <a href="https://www.oreilly.com/library/view/strata-hadoop/9781491944608/video244745.html">Strata in 2016</a>, the latest developments in Photon create an enhanced experience for users. </span></p>
<p><span style="font-weight: 300;">Photon is an in-memory, batch data processing engine designed to be fast and efficient for small to medium-sized datasets thanks to its minimal initialization overhead. When you build your recipe in Designer Cloud, you can see the effects of the transformations you are creating in real time. When you want to produce result sets from these transformations, you run a job, which performs a separate set of execution steps on the data. Photon snaps into Designer Cloud&#8217;s Intelligent Execution architecture to run side-by-side with more resource-intensive distributed computing frameworks like </span><a href="https://spark.apache.org/" target="_blank" rel="noopener"><span style="font-weight: 300;">Apache Spark</span></a><span style="font-weight: 300;"> and </span><a href="https://cloud.google.com/dataflow/" target="_blank" rel="noopener"><span style="font-weight: 300;">Google Cloud Dataflow</span></a><span style="font-weight: 300;"> that Designer Cloud supports for big data processing.</span></p>
<p><strong>Why do we need another execution engine?</strong></p>
<p><span style="font-weight: 300;">We set two goals while designing Photon: first, to provide real-time feedback to users as they transform their sample data in the browser, and second, to create a fast and efficient environment for job execution on the complete dataset. As mentioned above, Designer Cloud leverages Google Dataflow and Apache Spark to process very large datasets efficiently in a distributed manner. For small to medium-sized data, Photon’s single-node, in-memory architecture significantly reduces initialization overhead, making it the optimal choice and allowing us to provide users with reduced job execution times and costs. In our internal testing, Photon jobs were 85-95% faster than Google Dataflow jobs. Photon’s lightweight design also allows us to embed it directly in the browser to power Designer Cloud’s real-time transformation UI, which many of our customers love.</span></p>
<p><strong>How does Photon work?</strong></p>
<p><span style="font-weight: 300;">Photon is Trifacta&#8217;s built-in interactive data processing engine that runs in the web browser, providing users with real-time transformations of their datasets.</span></p>
<p><a href="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-1.png" target="_blank" rel="noopener"><img decoding="async" loading="lazy" class="size-full wp-image-60074 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-1.png" alt="" width="512" height="335" data-wp-pid="60074" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-1.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-1-300x196.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></a></p>
<p>&nbsp;</p>
<p><span style="font-weight: 300;">Photon takes the data transformation steps (also known as a Designer Cloud recipe) and converts them into a Protobuf representation, Google&#8217;s language-neutral, platform-neutral, extensible mechanism for serializing structured data. It then interprets the individual transforms and prepares an execution graph in which each node represents a transform to be applied to the data.</span></p>
<p><span style="font-weight: 300;">The data is then streamed through the graph as row batches (contiguous chunks of rows). Each node applies its transform to a batch and feeds the result to the next node, which allows batches to be processed in parallel across the graph.</span></p>
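<p><span style="font-weight: 300;">A toy sketch of this execution model, assuming a simple linear graph of transform nodes; the transforms and data are hypothetical, and the real engine compiles recipes to a Protobuf representation rather than Python functions:</span></p>

```python
# Toy version of the execution-graph model: each node is a transform, and row
# batches flow from one node to the next. Transforms and data are hypothetical.

def drop_empty(batch):
    """Node 1: remove rows whose 'name' field is empty."""
    return [row for row in batch if row["name"]]

def uppercase_name(batch):
    """Node 2: normalize the 'name' field to upper case."""
    return [{**row, "name": row["name"].upper()} for row in batch]

# The "execution graph" here is a simple linear pipeline of transform nodes.
GRAPH = [drop_empty, uppercase_name]

def run(batches, graph):
    """Feed each row batch through every node in the graph, in order."""
    results = []
    for batch in batches:  # in a real engine, batches flow concurrently
        for node in graph:
            batch = node(batch)
        results.extend(batch)
    return results

batches = [[{"name": "ada"}, {"name": ""}], [{"name": "grace"}]]
out = run(batches, GRAPH)
```

<p><span style="font-weight: 300;">The loop above only shows the data path; the point of batching is that a downstream node can start on one batch while upstream nodes work on the next.</span></p>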
<p><a href="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-3.png" target="_blank" rel="noopener"><img decoding="async" loading="lazy" class="size-full wp-image-60072 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-3.png" alt="" width="512" height="106" data-wp-pid="60072" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-3.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-3-300x62.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></a></p>
<p><span style="font-weight: 300;">Photon leverages this mechanism in two ways: while designing the recipe step by step, and while running the entire recipe on a complete dataset.</span></p>
<p><span style="font-weight: 300;">While designing the recipe, Photon runs as a JavaScript module, compiled with the </span><a href="https://emscripten.org/" target="_blank" rel="noopener"><span style="font-weight: 300;">Emscripten toolchain</span></a><span style="font-weight: 300;">, that interacts with the UI. An individual recipe step is sent to Photon, which checks whether a corresponding result table is already present in the previously computed results in the “Photon cache” to avoid unnecessary computation. If not, it executes the step on the data shown in the UI, stores the results in the cache, and returns them to the UI.</span></p>
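<p><span style="font-weight: 300;">The caching behavior can be sketched roughly as follows; the key scheme and cache shape are assumptions for illustration, not Photon&#8217;s actual implementation:</span></p>

```python
# Sketch of a result cache for recipe steps: look up a step's result by a key
# derived from the step and its input; only compute on a miss. The key scheme
# and cache shape are illustrative assumptions.
import json

cache = {}
computations = 0  # counts how often we actually execute a step

def run_step(step_name, transform, rows):
    global computations
    key = json.dumps({"step": step_name, "input": rows}, sort_keys=True)
    if key in cache:       # cache hit: reuse the previous result table
        return cache[key]
    computations += 1      # cache miss: compute, then store
    result = transform(rows)
    cache[key] = result
    return result

rows = [{"qty": 2}, {"qty": 5}]
double = lambda rs: [{"qty": r["qty"] * 2} for r in rs]
first = run_step("double_qty", double, rows)
second = run_step("double_qty", double, rows)  # served from the cache
```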
<p><span style="font-weight: 300;">Photon can also run transformations on the whole dataset when it is chosen as the execution engine within Designer Cloud.</span></p>
<p><a href="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-2.png" target="_blank" rel="noopener"><img decoding="async" loading="lazy" class="size-full wp-image-60073 aligncenter" src="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-2.png" alt="" width="512" height="58" data-wp-pid="60073" srcset="http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-2.png 512w, http://s26597.pcdn.co/wp-content/uploads/2022/02/Photon-2-300x34.png 300w" sizes="(max-width: 512px) 100vw, 512px" /></a></p>
<p><span style="font-weight: 300;">Full-dataset execution runs on fully managed, scalable infrastructure that Designer Cloud operates behind the scenes. Since Photon can run as a standalone executable during job execution, it is easily containerized. This allows us to support Photon job execution directly in the user’s VPC, making job execution faster and more secure by bringing the execution engine to where the data resides.</span></p>
<p><span style="font-weight: 300;">In summary, Photon is ideal for processing small and medium-sized datasets, with a faster and more efficient architecture that avoids the execution overhead typical of many mainstream processing engines. Users get real-time data transformation, making Photon easy to use through Designer Cloud&#8217;s intuitive interface.</span></p>
<p><span style="font-weight: 300;">We would love for you to give it a spin today. Sign up for </span><a href="https://www.trifacta.com/start-wrangling"><span style="font-weight: 300;">our free trial</span></a><span style="font-weight: 300;"> and experience the magic of Photon from Designer Cloud.</span></p>
]]></content>
		
			</entry>
	</feed>
