<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Bit of Technology</title>
	<atom:link href="https://bitoftech.net/feed/" rel="self" type="application/rss+xml" />
	<link>https://bitoftech.net/</link>
	<description>dotnet, Azure, cloud native, and more…</description>
	<lastBuildDate>Wed, 09 Nov 2022 08:28:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://bitoftech.net/wp-content/uploads/2022/10/cropped-BITOFtech-2-32x32.png</url>
	<title>Bit of Technology</title>
	<link>https://bitoftech.net/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">64786678</site>	<item>
		<title>Invoking Dapr Services in Azure Container Apps using gRPC &#8211; Part 2</title>
		<link>https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/</link>
					<comments>https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/#respond</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Wed, 09 Nov 2022 08:23:01 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[grpc]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1560</guid>

					<description><![CDATA[<p>In the previous post, I covered how to deploy a gRPC service to Azure Container Apps and how to create a simple minimal Web API, deployed to Azure Container Apps, that acts as a gRPC client consuming the gRPC service. In this post, I&#8217;ll cover the 2 remaining scenarios, in which I&#8217;ll enable Dapr on the [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/">Invoking Dapr Services in Azure Container Apps using gRPC &#8211; Part 2</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/" target="_blank" rel="noopener">previous post</a>, I covered how to deploy a gRPC service to Azure Container Apps and how to create a simple minimal Web API, deployed to Azure Container Apps, that acts as a gRPC client consuming the gRPC service. In this post, I&#8217;ll cover the 2 remaining scenarios, in which I&#8217;ll enable Dapr on the Client and Service apps and show how we can call the gRPC service using GrpcClient and the DaprClient SDK.</p>
<h2>Invoking Dapr Services in Azure Container Apps using gRPC</h2>
<p>When Dapr is enabled, we can utilize the Dapr service invocation API, which acts similarly to a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, transient error handling, and retries. This is an added value compared to calling services directly using GrpcClient without Dapr. Looking at the diagram below, the two scenarios I&#8217;ll cover use the Dapr Sidecar of each service to call the other over gRPC, while the gRPC client API is exposed &#8220;externally&#8221; over HTTP so I can test it using a traditional REST client.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr.jpg"><img fetchpriority="high" decoding="async" class="alignnone size-large wp-image-1562" src="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr-1024x327.jpg" alt="ACA-Tutorial-grpc-aca-dapr" width="1024" height="327" srcset="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr-1024x327.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr-300x96.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr-768x245.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-dapr.jpg 1427w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>The <a href="https://github.com/tjoudeh/container-apps-grpc-dapr" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub.</h4>
<h3>Scenario 2: Invoke gRPC services via Dapr Sidecar using GrpcClient (Dapr enabled)</h3>
<p>In this scenario, we&#8217;ll see that enabling Dapr on both applications allows us to keep using our own proto services defined in the <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/" target="_blank" rel="noopener">previous post</a> (<a href="https://github.com/tjoudeh/container-apps-grpc-dapr/blob/3a0acd373cd388485c889b4ae9d9a9340ec6573e/Expenses.Grpc.Server/Protos/expense.proto" target="_blank" rel="noopener">expense.proto</a>) without any change to the gRPC service. This means that we can use service invocation to call our existing gRPC services without having to include any Dapr client SDKs or implement custom gRPC services.</p>
<h3>Update gRPC Client API</h3>
<h4>Step 1: Update &#8216;GrpcClient&#8217; Address Configuration</h4>
<p>In order for the gRPC client API to invoke the gRPC service, we need to update the typed gRPC client configuration to use the &#8220;DAPR_GRPC_PORT&#8221; environment variable, which is injected when Dapr is enabled on the client API; its value is the gRPC port that the Dapr sidecar is listening on. Our gRPC client API should use this variable to connect to the Dapr sidecar instead of hardcoding the port value. To make this change, open &#8220;Program.cs&#8221; and add the highlighted lines below:</p><pre class="urvanov-syntax-highlighter-plain-tag">builder.Services.AddGrpcClient&lt;ExpenseSvc.ExpenseSvcClient&gt;(o =&gt;
{
    var islocalhost = builder.Configuration.GetValue("grpc:localhost", false);

    var serverAddress = "";

    if (islocalhost)
    {
        var port = "7029";
        var scheme = "https";
        var daprGRPCPort = Environment.GetEnvironmentVariable("DAPR_GRPC_PORT");

        if (!string.IsNullOrEmpty(daprGRPCPort))
        {
            scheme = "http";
            port = daprGRPCPort;
        }

        serverAddress = string.Format(builder.Configuration.GetValue&lt;string&gt;("grpc:server"), scheme, port);
    }
    else
    {
        serverAddress = builder.Configuration.GetValue&lt;string&gt;("grpc:server");
    }

    o.Address = new Uri(serverAddress);
});</pre><p>Notice that when Dapr is enabled, the value of the environment variable &#8220;DAPR_GRPC_PORT&#8221; will not be empty; it will contain a port number. We use this port number and stop calling the gRPC server address directly, offloading gRPC server service discovery to the gRPC client&#8217;s Dapr Sidecar, as I&#8217;ll show you in the next step.</p>
<h4>Step 2: Inject Metadata headers upon invoking gRPC methods</h4>
<p>Now I need to configure service discovery of the gRPC server by providing the Dapr server App-Id when the gRPC client invokes a method on the server. To do this, we need to inject a &#8220;Metadata&#8221; header similar to the code below, so add the following method to your &#8220;Program.cs&#8221; file:</p><pre class="urvanov-syntax-highlighter-plain-tag">Metadata? BuildMetadataHeader()
{
    //The gRPC port that the Dapr sidecar is listening on
    var daprGRPCPort = Environment.GetEnvironmentVariable("DAPR_GRPC_PORT");

    Metadata? metadata = null;

    if (!string.IsNullOrEmpty(daprGRPCPort))
    {
        metadata = new Metadata();
        var serverDaprAppId = "expenses-grpc-server";
        metadata.Add("dapr-app-id", serverDaprAppId);
        app?.Logger.LogInformation("Calling gRPC server app id '{server}' using dapr sidecar on gRPC port: {daprGRPCPort}", serverDaprAppId, daprGRPCPort);
    }

    return metadata;
}</pre><p>Notice how we are adding an entry to the &#8220;Metadata&#8221; dictionary with a key named &#8220;dapr-app-id&#8221; and a value of &#8220;expenses-grpc-server&#8221;, which is the Dapr App Id of the gRPC service; we are going to set it in the following steps.</p>
<p>Next, we need to call the method <code>BuildMetadataHeader()</code> when calling each gRPC method. I will show below how to do it for the method &#8220;GetExpenseByIdAsync&#8221;; the other methods follow the same pattern. Check the highlighted line below:</p><pre class="urvanov-syntax-highlighter-plain-tag">app.MapGet("/api/expenses/{id}", async (ExpenseSvc.ExpenseSvcClient grpcClient, int id) =&gt;
{
    GetExpenseByIdResponse? response;
    var request = new GetExpenseByIdRequest { Id = id };
    app?.Logger.LogInformation("Calling grpc server (GetExpenseByIdRequest) for id: {id}", id);
    
    response = await grpcClient.GetExpenseByIdAsync(request, BuildMetadataHeader());

    return Results.Ok(response.Expense);

}).WithName("GetExpenseById");</pre><p>With this change to the gRPC client, we are ready to test the gRPC service and client locally using the Dapr CLI.</p>
<h3>Enable Dapr and test the gRPC server and client locally</h3>
<p>Now I&#8217;ll run the gRPC server with Dapr enabled. To do so, navigate to the root folder of the project &#8220;Expenses.Grpc.Server&#8221; and run the command below. If you don&#8217;t have the Dapr CLI installed locally on your machine, you can check my <a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">previous post</a> for more details:</p><pre class="urvanov-syntax-highlighter-plain-tag">dapr run --app-id expenses-grpc-server --app-protocol grpc --app-port 7029 --app-ssl -- dotnet run</pre><p>When using the <code>dapr run</code> command, we are running a Dapr process as a sidecar next to the gRPC server; the properties we have configured are the following:</p>
<ul>
<li>app-id: The unique identifier of the application, used for service discovery. The value of this parameter is &#8220;expenses-grpc-server&#8221;, and it should match what we used in the &#8220;BuildMetadataHeader()&#8221; method.</li>
<li>app-port: This parameter tells Dapr which port your application is listening on. You can get the app port from the &#8220;launchSettings.json&#8221; file in the gRPC service project; I&#8217;m using the https port here.</li>
<li>app-protocol: This is a gRPC service, so we set this to &#8220;grpc&#8221;.</li>
<li>app-ssl: Sets the URI scheme of the app to https and attempts an SSL connection.</li>
</ul>
<p>Next, I will run the gRPC client API with Dapr enabled. To do so, navigate to the root folder of the project &#8220;Expenses.Grpc.Api&#8221; and run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">dapr run --app-id expenses-grpc-api --app-protocol http --app-port 5252 --dapr-http-port 3501 -- dotnet run</pre><p>The properties we have configured for the gRPC client are the following:</p>
<ul>
<li>app-id: Using the value &#8220;expenses-grpc-api&#8221;.</li>
<li>app-protocol: gRPC client API is a standard Web API, so the protocol is HTTP.</li>
<li>dapr-http-port: The HTTP port for Dapr to listen on, setting it to 3501.</li>
</ul>
<p>To test locally, use any REST client to send an HTTP GET request to the endpoint <code>http://localhost:3501/v1.0/invoke/expenses-grpc-api/method/api/expenses/2</code>. Notice how I&#8217;m calling the &#8220;Invoke&#8221; API of the Dapr Sidecar of the gRPC client API with app-id &#8220;expenses-grpc-api&#8221; on port 3501; internally, the gRPC client API will invoke the gRPC server Sidecar over the gRPC protocol and call the method &#8220;GetExpenseById&#8221; in the gRPC service.</p>
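<p>For reference, the same local test can be run from the command line. The sketch below assumes both <code>dapr run</code> commands above are still running with the ports configured there:</p><pre class="urvanov-syntax-highlighter-plain-tag"># Ask the gRPC client API's Dapr sidecar (HTTP port 3501) to invoke the endpoint
curl http://localhost:3501/v1.0/invoke/expenses-grpc-api/method/api/expenses/2</pre>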
<h3>Enable Dapr on the gRPC Server and Client, then deploy the updates to Azure Container Apps</h3>
<p>Now we need to ship the changes done on the gRPC client API to Azure Container Apps and enable Dapr on the gRPC service and client too.</p>
<h4>Step 1: Enable Dapr on the gRPC Server Azure Container App</h4>
<p>We didn&#8217;t make any code changes on the server, so there is no need to build and push a new image, nor to deploy a new revision of the Azure Container App. All we need to do is enable Dapr on the gRPC server Azure Container App; to do so, run the CLI command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp dapr enable `
--name $BACKEND_SVC_NAME  `
--resource-group $RESOURCE_GROUP `
--dal `
--dapr-app-id $BACKEND_SVC_NAME  `
--dapr-app-port 80 `
--dapr-app-protocol grpc `
--dapr-log-level info</pre><p>Notice how I&#8217;m setting the property &#8220;dapr-app-protocol&#8221; to &#8220;grpc&#8221;; this tells the Dapr Sidecar that this service uses the gRPC protocol.</p>
<h4>Step 2: Update gRPC client API and create a new Azure Container App Revision</h4>
<p>To reflect the changes made to the gRPC client API, we need to build and push a new image to the container registry used. Once this is done, we create a new revision using the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
--name $BACKEND_API_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20221102-1 `
--set-env-vars  "grpc__server={0}://localhost:{1}" "grpc__localhost=true"</pre><p>Notice that I&#8217;m setting the environment variables to new values so that when the gRPC client API constructs the gRPC server address, it will use the assigned Dapr gRPC port as described in the previous steps.</p>
<h4>Step 3: Enable Dapr on the gRPC Client API Azure Container App</h4>
<p>Similar to what I&#8217;ve done on the gRPC server, we need to enable Dapr on the gRPC client API too. To do so, run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp dapr enable `
--name  $BACKEND_API_NAME  `
--resource-group $RESOURCE_GROUP `
--dal `
--dapr-app-id $BACKEND_API_NAME  `
--dapr-app-port 80 `
--dapr-app-protocol grpc `
--dapr-log-level info</pre><p>We are setting &#8220;dapr-app-protocol&#8221; to &#8220;grpc&#8221; for communication between the service and its Sidecar, but remember from the <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/" target="_blank" rel="noopener">previous post</a> that the Azure Container App hosting the gRPC Client API uses a &#8220;transport&#8221; of type &#8220;http&#8221;; that&#8217;s why we can invoke it using a standard HTTP request, as we will see in the next step.</p>
<p>With this in place, we can do our final testing via PostMan or any other REST client. Take note of your gRPC client API Container App FQDN and invoke the POST operation to create a new expense; the PostMan result should look like the below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps.jpg"><img decoding="async" class="alignnone size-large wp-image-1557" src="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1024x707.jpg" alt="PostMan-gRPC-client-Container-Apps" width="1024" height="707" srcset="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1024x707.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-300x207.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-768x530.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1536x1061.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps.jpg 1703w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>Note on distributed tracing of Dapr Services:</h4>
<p>If you configured Application Insights when creating the Azure Container Apps Environment by setting the parameter &#8220;--dapr-instrumentation-key&#8221; to the App Insights instrumentation key, you will be able to see the distributed tracing between the gRPC client API and the gRPC server on the App Insights Application Map. This won&#8217;t be available if you are using the first approach (Scenario 1 in the <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/" target="_blank" rel="noopener">previous post</a>). The application map should be similar to the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-scaled.jpg"><img decoding="async" class="alignnone size-large wp-image-1567" src="https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-1024x494.jpg" alt="gRPC-application-map-aca" width="1024" height="494" srcset="https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-1024x494.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-300x145.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-768x370.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-1536x741.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/11/gRPC-application-map-aca-2048x987.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p>
<h3>Scenario 3: Invoke gRPC services via Dapr Sidecar using DaprClient SDK (Dapr enabled)</h3>
<p>Dapr offers a .NET SDK which enables us to invoke gRPC services using an approach different from the standard &#8220;GrpcClient&#8221; used in the previous 2 scenarios. If we are going to use this approach and utilize the methods available in the Dapr .NET SDK, such as the method &#8220;InvokeMethodGrpcAsync&#8221;, we need to build the gRPC service in a different way by implementing the <a href="https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto" target="_blank" rel="noopener">Dapr appcallback service</a>; there is, however, no change to the proto files.</p>
<h3>Create a new gRPC service inheriting from &#8216;AppCallback.AppCallbackBase&#8217;</h3>
<h4>Step 1: Create a new gRPC service</h4>
<p>The first thing to do is to install the NuGet package &#8220;Dapr.AspNetCore&#8221; in the project &#8220;Expenses.Grpc.Server&#8221;, then add a new file named &#8220;ExpenseServiceAppCallBack.cs&#8221; under the folder &#8220;Services&#8221;. This new service replicates the exact behavior of the previous <a href="https://github.com/tjoudeh/container-apps-grpc-dapr/blob/master/Expenses.Grpc.Server/Services/ExpenseService.cs" target="_blank" rel="noopener">ExpensesService</a>. I will paste the content of the file here and go over the important parts of it:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Dapr.AppCallback.Autogen.Grpc.v1;
using Dapr.Client.Autogen.Grpc.v1;
using Google.Protobuf.WellKnownTypes;
using Grpc.Core;

namespace Expenses.Grpc.Server.Services
{
    public class ExpenseServiceAppCallBack : AppCallback.AppCallbackBase
    {

        private readonly ILogger&lt;ExpenseServiceAppCallBack&gt; _logger;
        private readonly IExpensesRepo _expensesRepo;
        
        public ExpenseServiceAppCallBack(IExpensesRepo expensesRepo, ILogger&lt;ExpenseServiceAppCallBack&gt; logger)
        {
            _expensesRepo = expensesRepo;
            _logger = logger;
        }

        public override Task&lt;InvokeResponse&gt; OnInvoke(InvokeRequest request, ServerCallContext context)
        {
            var response = new InvokeResponse();

            switch (request.Method)
            {
                case "GetExpenses":

                    var getExpensesRequestInput = request.Data.Unpack&lt;GetExpensesRequest&gt;();
                    var getExpensesResponseOutput = new GetExpensesResponse();

                    _logger.LogInformation("Getting expenses for owner: {owner}", getExpensesRequestInput.Owner);

                    var filteredResults = _expensesRepo.GetExpensesByOwner(getExpensesRequestInput.Owner);
                    getExpensesResponseOutput.Expenses.AddRange(filteredResults);

                    response.Data = Any.Pack(getExpensesResponseOutput);
                    break;

                case "GetExpenseById":
                   
                    var getExpenseByIdRequestInput = request.Data.Unpack&lt;GetExpenseByIdRequest&gt;();
                    var getExpenseByIdResponseOutput = new GetExpenseByIdResponse();

                    _logger.LogInformation("Getting expense by id: {id}", getExpenseByIdRequestInput.Id);

                    var expense = _expensesRepo.GetExpenseById(getExpenseByIdRequestInput.Id);
                    getExpenseByIdResponseOutput.Expense = expense;

                    response.Data = Any.Pack(getExpenseByIdResponseOutput);
                    break;

                case "AddExpense":

                    var addExpenseRequestInput = request.Data.Unpack&lt;AddExpenseRequest&gt;();
                    var addExpenseResponseOutput = new AddExpenseResponse();

                    _logger.LogInformation("Adding expense for provider {provider} for owner: {owner}", addExpenseRequestInput.Provider, addExpenseRequestInput.Owner);

                    var expenseModel = new ExpenseModel()
                    {
                        Owner = addExpenseRequestInput.Owner,
                        Amount = addExpenseRequestInput.Amount,
                        Category = addExpenseRequestInput.Category,
                        Provider = addExpenseRequestInput.Provider,
                        Workflowstatus = addExpenseRequestInput.Workflowstatus,
                        Description = addExpenseRequestInput.Description
                    };

                    _expensesRepo.AddExpense(expenseModel);
                    addExpenseResponseOutput.Expense = expenseModel;

                    response.Data = Any.Pack(addExpenseResponseOutput);
                    break;
               
                default:
                    break;
            }

            return Task.FromResult(response);
        }
    }
}</pre><p>What I&#8217;ve done here is the following:</p>
<ul>
<li>The service inherits from the abstract class &#8220;AppCallback.AppCallbackBase&#8221;; this is needed as it will be called by the Dapr runtime to invoke gRPC methods.</li>
<li>I&#8217;m overriding the method &#8220;OnInvoke&#8221;, which is called when service invocation happens. This method accepts an input parameter of type &#8220;InvokeRequest&#8221; which contains the following properties:
<ul>
<li>A string property named &#8220;Method&#8221; holds the name of the method which is invoked by the caller. In our case, we are supporting three methods &#8220;GetExpenses&#8221;, &#8220;GetExpenseById&#8221;, and &#8220;AddExpense&#8221;. Those method names should be identical to the names defined in the &#8220;<a href="https://github.com/tjoudeh/container-apps-grpc-dapr/blob/3a0acd373cd388485c889b4ae9d9a9340ec6573e/Expenses.Grpc.Server/Protos/expense.proto#L44" target="_blank" rel="noopener">expense.proto</a>&#8221; file definition.</li>
<li>A &#8220;Data&#8221; property of type &#8220;Google.Protobuf.WellKnownTypes.Any&#8221;; this property holds a serialized protocol buffer message. I&#8217;m calling &#8220;Unpack&#8221; and specifying the expected request input type to deserialize the data into a strongly typed object.</li>
</ul>
</li>
<li>Using the injected &#8220;IExpensesRepo&#8221; I&#8217;m calling the right operation to manipulate the in-memory Expenses list.</li>
<li>Lastly, I&#8217;m generating a response output based on the invoked method and returning the response after &#8220;packing&#8221; the specified message into an &#8220;Any&#8221; message and assigning it to the &#8220;Data&#8221; property of an &#8220;InvokeResponse&#8221; object.</li>
</ul>
<h4>Step 2: Add &#8220;ExpenseServiceAppCallBack&#8221; gRPC service to the routes pipeline</h4>
<p>Now we need to add the &#8220;ExpenseServiceAppCallBack&#8221; gRPC service to the routing pipeline so clients can invoke the operation &#8220;InvokeMethodGrpcAsync&#8221;. To do so, open the file &#8220;Program.cs&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">app.MapGrpcService&lt;ExpenseServiceAppCallBack&gt;();</pre>
<h3>Update gRPC Client API to use DaprClient SDK</h3>
<h4>Step 1: Install DaprClient SDK into gRPC Client API</h4>
<p>Now we need to install the NuGet package &#8220;Dapr.AspNetCore&#8221; in the project &#8220;Expenses.Grpc.Api&#8221;. After it is installed, we need to register the DaprClient in the service collection, so open the Program.cs file and add <code>builder.Services.AddDaprClient();</code></p>
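<p>If you prefer the command line over the NuGet package manager, the package can also be added with the .NET CLI; the sketch below assumes you run it from the root folder of the &#8220;Expenses.Grpc.Api&#8221; project:</p><pre class="urvanov-syntax-highlighter-plain-tag"># Adds the Dapr SDK package reference to the current project
dotnet add package Dapr.AspNetCore</pre>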
<h4>Step 2: Use DaprClient for methods invocations</h4>
<p>Next, I&#8217;m injecting the &#8220;DaprClient&#8221; into each defined route endpoint and calling the method &#8220;InvokeMethodGrpcAsync&#8221;. For example, the updated code of the endpoint &#8220;/api/expenses/{id}&#8221; will look like the below:</p><pre class="urvanov-syntax-highlighter-plain-tag">app.MapGet("/api/expenses/{id}", async (ExpenseSvc.ExpenseSvcClient grpcClient, Dapr.Client.DaprClient daprClient, int id) =&gt;
{
    GetExpenseByIdResponse? response;

    var request = new GetExpenseByIdRequest { Id = id };

    if (builder.Configuration.GetValue("grpc:daprClientSDK", false))
    {
        app?.Logger.LogInformation("DaprClientSDK::Calling grpc server (GetExpenseByIdRequest) for id: {id}", id);
        response = await daprClient.InvokeMethodGrpcAsync&lt;GetExpenseByIdRequest, GetExpenseByIdResponse&gt;("expenses-grpc-server", "GetExpenseById", request);
    }
    else
    {
        app?.Logger.LogInformation("Calling grpc server (GetExpenseByIdRequest) for id: {id}", id);
        response = await grpcClient.GetExpenseByIdAsync(request, BuildMetadataHeader());
    }

    return Results.Ok(response.Expense);

}).WithName("GetExpenseById");</pre><p>Notice in the highlighted line above how I&#8217;m specifying the gRPC server&#8217;s Dapr &#8220;App-Id&#8221; and invoking the method named &#8220;GetExpenseById&#8221; hosted in this server.<br />
<strong>Note:</strong> We can completely remove the reference to the &#8220;GrpcClient&#8221; if we are going to use the &#8220;DaprClient&#8221;, but I kept the code as is and introduced a configuration setting named &#8220;grpc:daprClientSDK&#8221; which is set to &#8220;true&#8221; when using &#8220;DaprClient&#8221;. With this approach, the code in the gRPC client API will keep working whether you decide to use &#8220;GrpcClient&#8221; or &#8220;DaprClient&#8221;.</p>
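<p>To make the configuration keys used throughout this post easier to follow, they could be grouped under a single &#8220;grpc&#8221; section in &#8220;appsettings.json&#8221;. The fragment below is an illustrative sketch based on the keys used in this post, not the exact file from the repository:</p><pre class="urvanov-syntax-highlighter-plain-tag">{
  "grpc": {
    "server": "{0}://localhost:{1}",
    "localhost": true,
    "daprClientSDK": true
  }
}</pre><p>In Azure Container Apps, the same keys are supplied as environment variables using the double-underscore convention, e.g. &#8220;grpc__daprClientSDK=true&#8221;.</p>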
<h4>Step 3: Test the gRPC client using DaprClient SDK locally</h4>
<p>Now I&#8217;ll run both the gRPC server and the gRPC Client API locally with Dapr enabled. To do so, run the two commands below, and don&#8217;t forget to change the directory to the root folder of each project:</p>
<p>Run gRPC Server:</p><pre class="urvanov-syntax-highlighter-plain-tag">dapr run --app-id expenses-grpc-server --app-protocol grpc --app-port 7029 --app-ssl -- dotnet run</pre><p>Run gRPC Client API:</p><pre class="urvanov-syntax-highlighter-plain-tag">dapr run --app-id expenses-grpc-api --app-protocol http --app-port 5252 --dapr-http-port 3501 -- dotnet run</pre><p>To test locally, use any REST client to send an HTTP GET request to the endpoint <code>http://localhost:3501/v1.0/invoke/expenses-grpc-api/method/api/expenses/2</code>. Notice how the logs generated from the gRPC server are using the new &#8220;ExpenseServiceAppCallBack&#8221; gRPC service; the logs should look similar to the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1572" src="https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs-1024x245.jpg" alt="daprclient-logs" width="1024" height="245" srcset="https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs-1024x245.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs-300x72.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs-768x184.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/daprclient-logs.jpg 1088w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
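<p>For reference, the Dapr sidecar invocation URL used in the test above follows Dapr&#8217;s standard HTTP service-invocation pattern. The shell sketch below shows how it is assembled; the port and app id values come from the &#8220;dapr run&#8221; command for the gRPC client API above:</p>

```shell
# Dapr HTTP service invocation URL pattern:
#   http://localhost:<dapr-http-port>/v1.0/invoke/<app-id>/method/<method-path>
DAPR_HTTP_PORT=3501               # from --dapr-http-port in the dapr run command
APP_ID="expenses-grpc-api"        # from --app-id in the dapr run command
METHOD_PATH="api/expenses/2"      # the REST endpoint exposed by the client API
URL="http://localhost:${DAPR_HTTP_PORT}/v1.0/invoke/${APP_ID}/method/${METHOD_PATH}"
echo "$URL"
# Send the request (requires both apps and their sidecars to be running locally):
# curl -s "$URL"
```

Changing the app id or method path in the variables above is all it takes to invoke a different Dapr-enabled service through the same sidecar.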
<h3>Deploy changes to Azure Container Apps</h3>
<p>Lastly, we need to push the changes made to the gRPC server and the gRPC client API to Azure Container Apps and create 2 new revisions. To do so, build 2 new images for both applications and push them to your container registry, then update the Azure Container Apps by running the two CLI commands below:</p>
<p>Update gRPC Server Azure Container App</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
--name $BACKEND_SVC_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20221103-1</pre><p>Update the gRPC Client API Azure Container App; notice how I&#8217;m setting the environment variable &#8220;grpc:daprClientSDK&#8221; to &#8220;true&#8221; to instruct the client API to use the DaprClient SDK, not the GrpcClient.</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
--name $BACKEND_API_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20221103-1 `
--set-env-vars "grpc__daprClientSDK=true"</pre><p>With this in place, you can do your final testing using any REST client. If you open Application Insights and check the request details, you will notice that there are extra properties injected when using the DaprClient SDK for service-to-service invocation over gRPC; check the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1573" src="https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client-1024x592.jpg" alt="dapr-client-vs-grpc-client" width="1024" height="592" srcset="https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client-1024x592.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client-300x173.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client-768x444.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client-1536x888.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/11/dapr-client-vs-grpc-client.jpg 1661w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
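<p>A quick note on the environment variable name used in the CLI command above: .NET configuration treats the double underscore (&#8220;__&#8221;) in environment variable names as the &#8220;:&#8221; hierarchy separator, which is why setting &#8220;grpc__daprClientSDK&#8221; surfaces inside the app as the &#8220;grpc:daprClientSDK&#8221; configuration key. A minimal shell sketch of the mapping:</p>

```shell
# .NET's environment variable configuration provider maps "__" to the ":" section separator,
# so "grpc__daprClientSDK" is read by the app as the configuration key "grpc:daprClientSDK".
ENV_VAR_NAME="grpc__daprClientSDK"
CONFIG_KEY=$(printf '%s' "$ENV_VAR_NAME" | sed 's/__/:/g')
echo "$CONFIG_KEY"
```

The double underscore is used because &#8220;:&#8221; is not a valid character in environment variable names on all platforms.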
<h4>The <a href="https://github.com/tjoudeh/container-apps-grpc-dapr" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub.</h4>
<h3>Conclusion</h3>
<p>In these two posts, I&#8217;ve covered the various scenarios in which we can invoke services synchronously using the GrpcClient or the DaprClient SDK, and host the services on Azure Container Apps.</p>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://docs.dapr.io/operations/configuration/grpc/" target="_blank" rel="noopener">How-To: Configure Dapr to use gRPC</a></li>
<li><a href="https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/" target="_blank" rel="noopener">Service invocation overview</a></li>
<li><a href="https://unsplash.com/photos/qMehmIyaXvY" target="_blank" rel="noopener">Featured Image Credit</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/">Invoking Dapr Services in Azure Container Apps using gRPC &#8211; Part 2</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1560</post-id>	</item>
		<item>
		<title>gRPC Communication In Azure Container Apps &#8211; Part 1</title>
		<link>https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/</link>
					<comments>https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Mon, 07 Nov 2022 00:43:24 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[grpc]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1538</guid>

					<description><![CDATA[<p>In the previous post, we saw how two Azure Container Apps can communicate with each other synchronously without and with Dapr using service-to-service invocation and service discovery over the HTTP protocol. In this post, I will cover how 2 services deployed to Azure Container Apps communicate synchronously over gRPC without using Dapr and then we [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/">gRPC Communication In Azure Container Apps &#8211; Part 1</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the <a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">previous post</a>, we saw how two Azure Container Apps can communicate with each other synchronously without and with <a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr using service-to-service invocation</a> and service discovery over the HTTP protocol.</p>
<p>In this post, I will cover how 2 services deployed to Azure Container Apps communicate synchronously over gRPC without using Dapr, and then we will Daprize the 2 services and utilize the service-to-service invocation features that come with Dapr.</p>
<p>The scenarios I&#8217;ll cover are the following:</p>
<ul>
<li>Scenario 1: Invoke gRPC services deployed to Container Apps using GrpcClient.</li>
<li><a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/" target="_blank" rel="noopener">Scenario 2</a>: Invoke gRPC services deployed to Container Apps via Dapr Sidecar using GrpcClient (<a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/" target="_blank" rel="noopener">Part 2</a>)</li>
<li><a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/" target="_blank" rel="noopener">Scenario 3</a>: Invoke gRPC services deployed to Container Apps via Dapr Sidecar using DaprClient SDK (<a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/" target="_blank" rel="noopener">Part 2</a>)</li>
</ul>
<h2>gRPC Communication In Azure Container Apps</h2>
<p>Basically, what I&#8217;m going to build today is a simple gRPC-enabled service/server that exposes 3 endpoints to manage personal expenses. This gRPC service will be deployed to Azure Container Apps and will be invoked from a simple minimal .NET Web API, which will act as the gRPC client and will be deployed to Azure Container Apps as well.</p>
<h4>The <a href="https://github.com/tjoudeh/container-apps-grpc-dapr" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub.</h4>
<p>The scenarios build on each other, so if you are interested in scenario 3, for example, please consider going through the scenarios in order.</p>
<h3>Scenario 1: Invoke gRPC services using GrpcClient (Dapr disabled)</h3>
<p>In this scenario, I will create the gRPC service and the gRPC client which invokes it, use Postman&#8217;s interactive UI to call the gRPC service, and then deploy both the gRPC service and the client to Azure Container Apps. The scenario should reflect the architecture diagram below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1544" src="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain-1024x413.jpg" alt="grpc-container apps" width="1024" height="413" srcset="https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain-1024x413.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain-300x121.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain-768x310.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/ACA-Tutorial-grpc-aca-plain.jpg 1202w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h3>Create the gRPC Service</h3>
<h4>Step 1: Create the gRPC Service project</h4>
<p>Using VS Code or Visual Studio 2022, I will create a new project named &#8220;Expenses.Grpc.Server&#8221; of the type &#8220;ASP.NET Core gRPC Service&#8221;. If you are using VS Code you can create the project by running 
			<span id="urvanov-syntax-highlighter-69ac93cda1fdd160744431" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">dotnet </span><span class="crayon-r">new</span><span class="crayon-h"> </span><span class="crayon-v">grpc</span><span class="crayon-h"> </span><span class="crayon-o">-</span><span class="crayon-i">o</span><span class="crayon-h"> </span><span class="crayon-v">Expenses</span><span class="crayon-sy">.</span><span class="crayon-v">Grpc</span><span class="crayon-sy">.</span><span class="crayon-v">Server</span></span></span> This will create a new gRPC enabled project with a default &#8220;greet.proto&#8221; file which we are going to update in the next step.</p>
<h4>Step 2: Create a new Proto file</h4>
<p>Now, I will add a new Proto file named &#8220;expense.proto&#8221; under the &#8220;Protos&#8221; folder, which will contain all the RPC methods needed to generate the gRPC server stubs, along with the request inputs and response outputs for those methods. Add the file and paste the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">syntax = "proto3";

option csharp_namespace = "Expenses.Grpc.Server";

message ExpenseModel{
    int32 id =1;
    string provider =2;
    double amount =3; 
    string category = 4;
    string owner = 5;
    int32 workflowstatus = 6;
    optional string description = 7;
}

message GetExpensesRequest {
    string owner = 1;
}

message GetExpensesResponse {
    repeated ExpenseModel expenses = 1;
}

message GetExpenseByIdRequest {
    int32 id = 1;
}

message GetExpenseByIdResponse {
    ExpenseModel expense = 1;
}

message AddExpenseRequest {
    string provider =1;
    double amount =2; 
    string category = 3;
    string owner = 4;
    int32 workflowstatus = 5;
    optional string description = 6;
}

message AddExpenseResponse {
    ExpenseModel expense = 1;
}

service ExpenseSvc {
    rpc GetExpenses(GetExpensesRequest) returns (GetExpensesResponse) {}
    rpc GetExpenseById(GetExpenseByIdRequest) returns (GetExpenseByIdResponse) {}
    rpc AddExpense(AddExpenseRequest) returns (AddExpenseResponse) {}
}</pre><p>What I&#8217;ve done here is straightforward; this Proto file exposes 3 RPC methods, as follows:</p>
<ul>
<li>GetExpenses: Returns a list of &#8220;ExpenseModel&#8221; based on the expense owner input string.</li>
<li>GetExpenseById: Returns a single &#8220;ExpenseModel&#8221; based on the provided expense id.</li>
<li>AddExpense: Adds a new expense to the repository based on the provided &#8220;AddExpenseRequest&#8221;, and returns the saved &#8220;ExpenseModel&#8221; to the caller.</li>
</ul>
<h4>Step 3: Generate gRPC service stubs based on the Proto file</h4>
<p>In order to generate the server stubs, I need to invoke the protocol buffer compiler to generate the code for the target language (.NET in my case). .NET uses the <a href="https://www.nuget.org/packages/Grpc.Tools/" target="_blank" rel="noopener">Grpc.Tools</a> NuGet package with MSBuild to provide automatic code generation of server stubs, so when we build the project or run &#8220;dotnet build&#8221;, the compiler will automatically generate the server stubs based on the Proto file definition.</p>
<p>To enable this, we need to add this new Proto file to the gRPC service project, so open the project file &#8220;Expenses.Grpc.Server.csproj&#8221; and paste the below code:</p><pre class="urvanov-syntax-highlighter-plain-tag">&lt;ItemGroup&gt;
    &lt;Protobuf Include="Protos\expense.proto" GrpcServices="Server" /&gt;
&lt;/ItemGroup&gt;</pre><p>You can remove any default Proto files that come with the template. Notice how I set the &#8220;GrpcServices&#8221; property to &#8220;Server&#8221;, as we are generating server stubs now.</p>
<p>Once added, build the project, and it should generate the server stub code for you.</p>
<h4>Step 4: Create a repository to store data In-Memory</h4>
<p>To keep things simple, I will use a static list to store and manipulate expenses. We will use the same repository in the next post, in scenario 3, once we create the gRPC service implementation in a different way. To create the repository, add a new interface named &#8220;IExpensesRepo.cs&#8221; under a new folder named &#8220;Services&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">namespace Expenses.Grpc.Server.Services
{
    public interface IExpensesRepo
    {
        List&lt;ExpenseModel&gt; GetExpensesByOwner(string owner);
        ExpenseModel? GetExpenseById(int id);
        ExpenseModel AddExpense(ExpenseModel expense);
    }
}</pre><p>Then add a new file named &#8220;ExpensesRepo.cs&#8221; which implements the interface &#8220;IExpensesRepo.cs&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">namespace Expenses.Grpc.Server.Services
{
    public class ExpensesRepo : IExpensesRepo
    {
        private static List&lt;ExpenseModel&gt; _expensesList = new List&lt;ExpenseModel&gt;();

        private void GenerateRandomExpenses()
        {
            if (_expensesList.Count &gt; 0)
            {
                return;
            }

            _expensesList.Add(new ExpenseModel
            {
                Id = 1,
                Provider = "Golds Gym",
                Amount = 290,
                Category = "Fitness Activity",
                Owner = "tjoudeh@mail.com",
                Workflowstatus = 1,
                Description = ""
            });

            _expensesList.Add(new ExpenseModel
            {
                Id = 2,
                Provider = "Adidas",
                Amount = 100,
                Category = "Athletic Shoes",
                Owner = "tjoudeh@mail.com",
                Workflowstatus = 1,
                Description = ""
            });

            _expensesList.Add(new ExpenseModel
            {
                Id = 3,
                Provider = "FreeMind",
                Amount = 25,
                Category = "Yoga Class",
                Owner = "xyz@yahoo.com",
                Workflowstatus = 2,
                Description = ""
            });
        }

        public ExpensesRepo()
        {
            GenerateRandomExpenses();
        }

        public ExpenseModel AddExpense(ExpenseModel expense)
        {
            expense.Id = _expensesList.Max(e =&gt; e.Id) + 1;
            _expensesList.Add(expense);
            return expense;
        }

        public ExpenseModel? GetExpenseById(int id)
        {
            return _expensesList.SingleOrDefault(e =&gt; e.Id == id);
        }

        public List&lt;ExpenseModel&gt; GetExpensesByOwner(string owner)
        {
            var expensesList = _expensesList.Where(t =&gt; t.Owner.Equals(owner, StringComparison.OrdinalIgnoreCase)).OrderByDescending(o =&gt; o.Id).ToList();

            return expensesList;
        }
    }
}</pre><p>What I&#8217;ve done here is simple: it is a service that exposes three methods responsible for listing expenses by owner, getting a single expense by Id, and adding a new expense and storing it in the static list.</p>
<h4>Step 5: Implement the Service based on generated gRPC service stubs</h4>
<p>Now I will add the actual implementation of the Expenses Service by inheriting from the auto-generated stubs. To do so, add a new file named &#8220;ExpenseService.cs&#8221; under the folder named &#8220;Services&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Grpc.Core;

namespace Expenses.Grpc.Server.Services
{
    public class ExpenseService : ExpenseSvc.ExpenseSvcBase
    {

        private readonly ILogger&lt;ExpenseService&gt; _logger;
        private readonly IExpensesRepo _expensesRepo;

        public ExpenseService(IExpensesRepo expensesRepo, ILogger&lt;ExpenseService&gt; logger)
        {
            _logger = logger;
            _expensesRepo = expensesRepo;
            _logger.LogInformation("Invoking Constructor");

        }

        public override Task&lt;GetExpensesResponse&gt; GetExpenses(GetExpensesRequest request, ServerCallContext context)
        {

            _logger.LogInformation("Getting expenses for owner: {owner}", request.Owner);

            var response = new GetExpensesResponse();

            var filteredResults =  _expensesRepo.GetExpensesByOwner(request.Owner);

            response.Expenses.AddRange(filteredResults);

            return Task.FromResult(response);
        }

        public override Task&lt;AddExpenseResponse&gt; AddExpense(AddExpenseRequest request, ServerCallContext context)
        {

            _logger.LogInformation("Adding expense for provider {provider} for owner: {owner}", request.Provider, request.Owner);

            var response = new AddExpenseResponse();

            var expenseModel = new ExpenseModel()
            {
                Owner = request.Owner,
                Amount = request.Amount,
                Category = request.Category,
                Provider = request.Provider,
                Workflowstatus = request.Workflowstatus,
                Description = request.Description
            };

            _expensesRepo.AddExpense(expenseModel);

            response.Expense = expenseModel;

            return Task.FromResult(response);
        }

        public override Task&lt;GetExpenseByIdResponse&gt; GetExpenseById(GetExpenseByIdRequest request, ServerCallContext context)
        {
            _logger.LogInformation("Getting expense by id: {id}", request.Id);

            var response = new GetExpenseByIdResponse();

            var expense = _expensesRepo.GetExpenseById(request.Id);

            response.Expense = expense;

            return Task.FromResult(response);

        }

    }

}</pre><p>What I&#8217;ve done here is the following:</p>
<ul>
<li>The service &#8220;ExpenseService&#8221; is inherited from the abstract class &#8220;ExpenseSvc.ExpenseSvcBase&#8221;, this abstract class is auto-generated by the protocol buffer compiler based on the &#8220;expense.proto&#8221; file definition. The class exists in the following location on my machine &#8220;~\Expenses.Grpc.Server\obj\Debug\net6.0\Protos\ExpenseGrpc.cs&#8221;</li>
<li>I&#8217;m overriding the three methods that are generated by the protocol buffer compiler and exist in the class &#8220;ExpenseSvc.ExpenseSvcBase&#8221;. Those are the methods defined in the &#8220;expense.proto&#8221; file. Each method accepts a &#8220;Request&#8221; input parameter that defines what the request accepts, and a &#8220;ServerCallContext&#8221; parameter that contains the authentication context, headers, etc&#8230; Each method returns a &#8220;Response&#8221; based on the &#8220;expense.proto&#8221; definition. All those Request input and Response output classes are auto-generated by the gRPC tooling for .NET.</li>
<li>For example, if we take a look at the method &#8220;GetExpenses&#8221;, we&#8217;ll notice that it accepts an input request of type &#8220;GetExpensesRequest&#8221; which encapsulates the &#8220;Owner&#8221; string property which I will be passing to the method &#8220;GetExpensesByOwner&#8221; from the repository &#8220;ExpensesRepo&#8221; to return a list of expenses associated with this owner. The response type of this method is a list of expenses that we are adding to the &#8220;Expenses&#8221; property part of the &#8220;GetExpensesResponse&#8221;.</li>
</ul>
<h4>Step 6: Add &#8220;Expenses&#8221; gRPC service to the routes pipeline and Register &#8220;IExpensesRepo&#8221; repository</h4>
<p>Now we need to add the &#8220;ExpenseService&#8221; gRPC service to the routing pipeline so clients can access the three methods, and we need to register the repository &#8220;IExpensesRepo&#8221; as a singleton service so it can be injected into the &#8220;ExpenseService&#8221; constructor. To do so, open the file &#8220;Program.cs&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Expenses.Grpc.Server.Services;

var builder = WebApplication.CreateBuilder(args);

// Additional configuration is required to successfully run gRPC on macOS.
// For instructions on how to configure Kestrel and gRPC clients on macOS, visit https://go.microsoft.com/fwlink/?linkid=2099682

// Add services to the container.

builder.Services.AddSingleton&lt;IExpensesRepo, ExpensesRepo&gt;();

builder.Services.AddGrpc();

var app = builder.Build();

// Configure the HTTP request pipeline.
app.MapGrpcService&lt;ExpenseService&gt;();

app.MapGet("/", () =&gt; "Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
app.Run();</pre><p></p>
<h4>Step 7: (Optional) Enable gRPC service reflection</h4>
<p>In order to call a gRPC service, any gRPC-enabled client tooling needs access to the service&#8217;s Protobuf contract (the .proto file) before it can invoke the service. To simplify this, we can enable <a href="https://github.com/grpc/grpc/blob/master/doc/server-reflection.md" target="_blank" rel="noopener">gRPC reflection</a> on the server, so tools such as Postman (which I&#8217;m going to use for testing in the next step) can use reflection to automatically discover the service contracts. Once gRPC reflection is enabled on the gRPC server, it adds a new gRPC service to the app that clients can call to discover the services.</p>
<p>To do so, add a reference for the NuGet package &#8220;Grpc.AspNetCore.Server.Reflection&#8221; and then open the file &#8220;Program.cs&#8221; and add the highlighted lines below:</p><pre class="urvanov-syntax-highlighter-plain-tag">var builder = WebApplication.CreateBuilder(args);

// code omitted for brevity

builder.Services.AddGrpcReflection();

var app = builder.Build();

// code omitted for brevity

app.MapGrpcReflectionService();
app.MapGet("/", () =&gt; "Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
app.Run();</pre><p>What I&#8217;ve done here is the following:</p>
<ul>
<li>&#8220;AddGrpcReflection&#8221; to register services that enable reflection.</li>
<li>&#8220;MapGrpcReflectionService&#8221; to add a reflection service endpoint.</li>
</ul>
<p>With those changes in place, client apps that support gRPC reflection can call the reflection service to discover services hosted by the server. It is worth noting that when reflection is enabled, it only enables service discovery and doesn&#8217;t bypass server-side security.</p>
<h4>Step 8: Test the gRPC service using Postman</h4>
<p>To start testing the gRPC service locally, run the project &#8220;Expenses.Grpc.Server&#8221; using 
			<span id="urvanov-syntax-highlighter-69ac93cda1fee506514267" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">dotnet </span><span class="crayon-v">run</span></span></span> or from Visual Studio using Kestrel. Take note of the https port, as shown in the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1548" src="https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run-1024x272.jpg" alt="grpc-dotnet-run" width="1024" height="272" srcset="https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run-1024x272.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run-300x80.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run-768x204.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-dotnet-run.jpg 1110w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Open Postman and follow the steps below:</p>
<ol>
<li>Select the &#8220;New&#8221; button and choose &#8220;gRPC Request&#8221;.</li>
<li>Enter the gRPC server&#8217;s hostname and port in the server URL. In my case, it is &#8220;localhost:7029&#8221;. Don&#8217;t include the http or https scheme in the URL. I&#8217;m using the port with https, so I select the padlock next to the server URL to enable TLS in Postman.</li>
<li>Navigate to the &#8220;Service definition&#8221; section, then select server reflection or import the &#8220;expense.proto&#8221; file. In my case, our gRPC service has reflection enabled so I will use the &#8220;server reflection&#8221; approach. When complete, the dropdown list next to the server URL textbox has a list of gRPC methods available.</li>
<li>Navigate to the &#8220;Settings&#8221; section and turn off &#8220;Enable server certificate verification&#8221;; this is only needed when running the gRPC server locally.</li>
<li>To call a gRPC method (in my case, I will test the &#8220;AddExpense&#8221; method), select it from the dropdown, copy the message content below and paste it into the message body textbox, then select &#8220;Invoke&#8221; to send the gRPC call to the server.</li>
</ol>
<p></p><pre class="urvanov-syntax-highlighter-plain-tag">{
    "provider": "Golds Gym 6",
    "amount": 350,
    "category": "Fitness Activity",
    "owner": "tjoudeh@mail.com",
    "workflowstatus": 2,
    "description": "Gym Subscription"
}</pre><p>If all is configured successfully, you should receive back the created expense with an &#8220;Id&#8221; property assigned to it; it should look like the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1550" src="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call-1024x864.jpg" alt="PostMan-gRPC-aspnet-core" width="1024" height="864" srcset="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call-1024x864.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call-300x253.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call-768x648.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-Call.jpg 1179w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
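<p>As an alternative to Postman, a command-line client such as <a href="https://github.com/fullstorydev/grpcurl" target="_blank" rel="noopener">grpcurl</a> can use the same server reflection to discover and invoke the service. This is just a sketch under the assumption that grpcurl is installed and the server is listening on port 7029; the actual invocations are commented out since they require the running server:</p>

```shell
# The same AddExpense payload used in the Postman test above:
REQUEST_BODY='{"provider":"Golds Gym 6","amount":350,"category":"Fitness Activity","owner":"tjoudeh@mail.com","workflowstatus":2,"description":"Gym Subscription"}'
echo "$REQUEST_BODY"

# With the gRPC server running locally (-insecure skips local dev-certificate verification):
# grpcurl -insecure localhost:7029 list                                   # discover services via reflection
# grpcurl -insecure -d "$REQUEST_BODY" localhost:7029 ExpenseSvc/AddExpense
```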
<h3>Create the gRPC Client App</h3>
<h4>Step 1: Create the gRPC Client project</h4>
<p>Now I will add a new minimal Web API which will act as a gRPC client for the gRPC service and will define three REST API endpoints to access the gRPC methods. Using VS Code or Visual Studio 2022, create a new project named &#8220;Expenses.Grpc.API&#8221; of the type &#8220;ASP.NET Core Web API&#8221;; do not forget to uncheck &#8220;Use Controllers&#8221; and &#8220;Configure for HTTPS&#8221;. If you are using VS Code you can create the project by running 
			<span id="urvanov-syntax-highlighter-69ac93cda1ff1899221506" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">dotnet </span><span class="crayon-r">new</span><span class="crayon-h"> </span><span class="crayon-v">webapi</span><span class="crayon-h"> </span><span class="crayon-o">-</span><span class="crayon-v">minimal</span><span class="crayon-h"> </span><span class="crayon-o">-</span><span class="crayon-i">o</span><span class="crayon-h"> </span><span class="crayon-v">Expense</span><span class="crayon-sy">.</span><span class="crayon-v">Grpc</span><span class="crayon-sy">.</span><span class="crayon-v">Api</span><span class="crayon-h"> </span><span class="crayon-o">--</span><span class="crayon-v">no</span><span class="crayon-o">-</span><span class="crayon-v">https</span></span></span> This will create a new minimal Web API.</p>
<h4>Step 2: Add needed NuGet packages and Proto file reference</h4>
<p>I will install the 2 NuGet packages needed for the gRPC client to invoke the gRPC service. To do so, install the package &#8220;<a href="https://www.nuget.org/packages/Grpc.AspNetCore" target="_blank" rel="noopener">Grpc.AspNetCore</a>&#8221; and the package &#8220;<a href="https://www.nuget.org/packages/Grpc.Net.ClientFactory" target="_blank" rel="noopener">Grpc.Net.ClientFactory</a>&#8221;. The package &#8220;Grpc.Net.ClientFactory&#8221; is optional, but I recommend using it as it provides a central location for configuring gRPC client instances and manages the lifetime of the underlying &#8220;HttpClientMessageHandler&#8221;.</p>
<p>Next, I&#8217;ll add a link reference to the &#8220;expense.proto&#8221; Protobuf file so the gRPC client application will be able to automatically generate the gRPC client code to invoke the gRPC service, thanks to the built-in .NET integration between MSBuild and the <a href="https://www.nuget.org/packages/Grpc.Tools/" target="_blank" rel="noopener">Grpc.Tools</a> NuGet package.</p>
<p>Open the file &#8220;Expenses.Grpc.Api.csproj&#8221; and add the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">&lt;ItemGroup&gt;
	&lt;Protobuf Include="..\Expenses.Grpc.Server\Protos\expense.proto" GrpcServices="Client"&gt;
		&lt;Link&gt;Protos\expense.proto&lt;/Link&gt;
	&lt;/Protobuf&gt;
&lt;/ItemGroup&gt;</pre><p>Notice that I set the &#8220;GrpcServices&#8221; property value to &#8220;Client&#8221;; this tells the tooling, at build time, to generate concrete client types based on the &#8220;expense.proto&#8221; file definition. The generated gRPC client code will contain &#8220;client&#8221; methods that map to the gRPC service I&#8217;ve built in the previous steps.</p>
<h4>Step 3: Add environment variables</h4>
<p>To simplify testing the gRPC client both locally and when deployed to Azure Container Apps, I added the configuration values below, which we are going to override with environment variables when deploying to Azure Container Apps. Open the file &#8220;appsettings.json&#8221; and add the configuration below:</p><pre class="urvanov-syntax-highlighter-plain-tag">{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "grpc": {
    "server": "{0}://localhost:{1}",
    "localhost": true,
    "daprClientSDK": false
  }
}</pre><p></p>
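<p>The &#8220;grpc:server&#8221; value is a format template: when running on localhost, the client fills in the scheme and port via string.Format. A shell sketch of the same composition, using the local development values:</p>

```shell
# "{0}://localhost:{1}" filled with scheme=https and port=7029
template="%s://localhost:%s"
scheme="https"
port="7029"
server_address=$(printf "$template" "$scheme" "$port")
echo "$server_address"   # https://localhost:7029
```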
<h4>Step 4: Define API endpoints and invoke gRPC</h4>
<p>Because I&#8217;m using minimal APIs, all service configuration and API endpoints live in the same file, &#8220;Program.cs&#8221;, so I will list the entire code of the file and then explain it thoroughly. Open the file &#8220;Program.cs&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Expenses.Grpc.Server;
using Grpc.Core;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

builder.Services.AddGrpcClient&lt;ExpenseSvc.ExpenseSvcClient&gt;(o =&gt;
{
    var islocalhost = builder.Configuration.GetValue("grpc:localhost", false);

    var serverAddress = "";

    if (islocalhost)
    {
        var port = "7029";
        var scheme = "https";

        serverAddress = string.Format(builder.Configuration.GetValue&lt;string&gt;("grpc:server"), scheme, port);
    }
    else
    {
        serverAddress = builder.Configuration.GetValue&lt;string&gt;("grpc:server");
    }

    o.Address = new Uri(serverAddress);
});

var app = builder.Build();

// Configure the HTTP request pipeline.

app.MapGet("/api/expenses", async (ExpenseSvc.ExpenseSvcClient grpcClient, string owner) =&gt;
{
    GetExpensesResponse? response;
    
    var request = new GetExpensesRequest { Owner = owner };

    app?.Logger.LogInformation("Calling grpc server (GetExpenses) for owner: {owner}", owner);

    response = await grpcClient.GetExpensesAsync(request);
    
    return Results.Ok(response.Expenses);

});

app.MapGet("/api/expenses/{id}", async (ExpenseSvc.ExpenseSvcClient grpcClient, int id) =&gt;
{
    GetExpenseByIdResponse? response;

    var request = new GetExpenseByIdRequest { Id = id };

    app?.Logger.LogInformation("Calling grpc server (GetExpenseByIdRequest) for id: {id}", id);
    
    response = await grpcClient.GetExpenseByIdAsync(request);

    return Results.Ok(response.Expense);

}).WithName("GetExpenseById");

app.MapPost("/api/expenses", async (ExpenseSvc.ExpenseSvcClient grpcClient, ExpenseModel expenseModel) =&gt;
{
    AddExpenseResponse? response;

    var request = new AddExpenseRequest
    {
        Provider = expenseModel.Provider,
        Amount = expenseModel.Amount,
        Category = expenseModel.Category,
        Owner = expenseModel.Owner,
        Workflowstatus = expenseModel.Workflowstatus,
        Description = expenseModel.Description
    };

    app?.Logger.LogInformation("Calling grpc server (AddExpenseRequest) for provider: {provider}", expenseModel.Provider);
    response = await grpcClient.AddExpenseAsync(request);

    return Results.CreatedAtRoute("GetExpenseById", new { id = response.Expense.Id }, response.Expense);
});

app.Run();

internal class ExpenseModel
{
    public string Provider { get; set; } = string.Empty;
    public double Amount { get; set; } = 0.0;
    public string Category { get; set; } = string.Empty;
    public string Owner  { get; set; } = string.Empty;
    public int Workflowstatus  { get; set; } 
    public string Description { get; set; } = string.Empty;
}</pre><p>What I&#8217;ve done here is the following:</p>
<ul>
<li>In the line 
			<span id="urvanov-syntax-highlighter-69ac93cda1ff6291379468" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">builder</span><span class="crayon-sy">.</span><span class="crayon-v">Services</span><span class="crayon-sy">.</span><span class="crayon-v">AddGrpcClient</span><span class="crayon-o">&lt;</span><span class="crayon-v">ExpenseSvc</span><span class="crayon-sy">.</span><span class="crayon-v">ExpenseSvcClient</span><span class="crayon-o">&gt;</span></span></span> I&#8217;ve registered a gRPC client. The generic &#8220;AddGrpcClient&#8221; extension method is used on the WebApplicationBuilder&#8217;s service collection at the app&#8217;s entry point, specifying the gRPC typed client class and the service address. The client class is auto-generated based on the &#8220;expense.proto&#8221; definition. The gRPC client type is registered as transient with dependency injection (DI), so the client can be injected and consumed directly in types created by DI.</li>
<li>The gRPC service address when running locally will be the same address I&#8217;ve used when running the gRPC service on localhost, the address value in my case will be &#8220;https://localhost:7029&#8221;.</li>
<li>I&#8217;ve defined a DTO class named &#8220;ExpenseModel&#8221; which will be used when sending the JSON payload in the POST request body to create a new expense. We could use the message class generated from the &#8220;expense.proto&#8221; definition, but it is always better to use a dedicated DTO when exposing API endpoints.</li>
<li>I&#8217;ve defined two GET endpoints to get expenses by Expense Owner or by Expense Id, and a POST endpoint to allow creating a new expense.</li>
<li>Taking one of the endpoints, GET &#8220;/api/expenses/{id}&#8221;, and analyzing it in detail:
<ul>
<li>I&#8217;ve injected an instance named &#8220;grpcClient&#8221; of the &#8220;ExpenseSvc.ExpenseSvcClient&#8221;. This client contains auto-generated methods based on the &#8220;expense.proto&#8221; definition.</li>
<li>I&#8217;m taking the &#8220;Id&#8221; from the route and using it to create an instance of &#8220;GetExpenseByIdRequest&#8221;.</li>
<li>Then I&#8217;m calling the method &#8220;GetExpenseByIdAsync&#8221; asynchronously as the following 
			<span id="urvanov-syntax-highlighter-69ac93cda1ff7824711897" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">response</span><span class="crayon-h"> </span><span class="crayon-o">=</span><span class="crayon-h"> </span><span class="crayon-e">await </span><span class="crayon-v">grpcClient</span><span class="crayon-sy">.</span><span class="crayon-e">GetExpenseByIdAsync</span><span class="crayon-sy">(</span><span class="crayon-v">request</span><span class="crayon-sy">)</span><span class="crayon-sy">;</span></span></span> and assign the response to an object of type &#8220;GetExpenseByIdResponse&#8221;</li>
<li>Lastly, I&#8217;m returning 200 OK with the returned ExpenseModel to the caller.</li>
</ul>
</li>
<li>The other endpoints are following the same pattern described in the previous point.</li>
</ul>
<h4>Step 5: Test the gRPC client and the gRPC server locally</h4>
<p>With those changes in place, I can run both the gRPC server and the client and test locally. To do so, run the gRPC server first by calling 
			<span id="urvanov-syntax-highlighter-69ac93cda1ff8106782753" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">dotnet </span><span class="crayon-v">run</span></span></span> and then the gRPC client using the same command. I&#8217;ll be using Postman to send a regular HTTP request to one of the client API endpoints. For example, I will send a POST request to the endpoint &#8220;/api/expenses&#8221;; the request will look like the below:</p><pre class="urvanov-syntax-highlighter-plain-tag">POST /api/expenses/ HTTP/1.1
Host: localhost:5252
Content-Type: application/json
Content-Length: 170

{
    "provider": "Hyve8",
    "amount": 350,
    "category": "Gym Subscription",
    "owner": "tjoudeh@mail.com",
    "workflowstatus": 1,
    "description": ""
}</pre><p>Once you send the request, the endpoint should return 201 Created, and you should see logs on the gRPC server indicating that the request from the gRPC client reached the gRPC server and a new expense was created. The logs should look similar to the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1553" src="https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs-1024x118.jpg" alt="grpc-server-logs" width="1024" height="118" srcset="https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs-1024x118.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs-300x35.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs-768x89.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/grpc-server-logs.jpg 1142w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
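<p>If you prefer the command line over Postman, the same request can be sent with curl (assuming the client API is listening on localhost:5252, as in the raw request above):</p>

```shell
# curl equivalent of the Postman POST request shown above
curl -i -X POST http://localhost:5252/api/expenses \
  -H "Content-Type: application/json" \
  -d '{"provider":"Hyve8","amount":350,"category":"Gym Subscription","owner":"tjoudeh@mail.com","workflowstatus":1,"description":""}'
```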
<h3>Deploy the gRPC server and the gRPC client to Azure Container Apps</h3>
<p>We are ready now to deploy both applications to Azure Container Apps and do an end-to-end test.</p>
<h4>Step 1: Add Dockerfile to the gRPC server and client</h4>
<p>To deploy both applications as containers, we need to add a Dockerfile to each app. Ensure you are at the root of the project &#8220;Expenses.Grpc.Server&#8221;, add a file named &#8220;Dockerfile&#8221;, and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Expenses.Grpc.Server/Expenses.Grpc.Server.csproj", "Expenses.Grpc.Server/"]
RUN dotnet restore "Expenses.Grpc.Server/Expenses.Grpc.Server.csproj"
COPY . .
WORKDIR "/src/Expenses.Grpc.Server"
RUN dotnet build "Expenses.Grpc.Server.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Expenses.Grpc.Server.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Expenses.Grpc.Server.dll"]</pre><p>Do the same for the project &#8220;Expenses.Grpc.Api&#8221;: add a file named &#8220;Dockerfile&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Expenses.Grpc.Api/Expenses.Grpc.Api.csproj", "Expenses.Grpc.Api/"]
RUN dotnet restore "Expenses.Grpc.Api/Expenses.Grpc.Api.csproj"
COPY . .
WORKDIR "/src/Expenses.Grpc.Api"
RUN dotnet build "Expenses.Grpc.Api.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Expenses.Grpc.Api.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Expenses.Grpc.Api.dll"]</pre><p>You can use the tooling that comes with Visual Studio or VS Code (the <a href="https://code.visualstudio.com/docs/containers/overview" target="_blank" rel="noopener">Docker Extension</a> should be installed) to generate the Dockerfiles for you.</p>
<h4>Step 2: Build and Push Container Images to a Container Registry</h4>
<p>Now we need to build and push both images to a container registry. In my case, I&#8217;m using ACR; you can follow <a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">this post</a> to see the commands needed to build and push images to ACR. You can use Docker Hub as well.</p>
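<p>As a sketch, both images can be built and pushed in one step with ACR Tasks, run from the solution root (the registry name is an example; replace it with yours):</p>

```shell
ACR_NAME="taskstrackeracr"

# Build each image inside ACR and push it to the registry in one step (no local Docker daemon needed)
az acr build --registry $ACR_NAME --image expenses-grpc-server --file Expenses.Grpc.Server/Dockerfile .
az acr build --registry $ACR_NAME --image expenses-grpc-api --file Expenses.Grpc.Api/Dockerfile .
```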
<h4>Step 3: Create Azure Container Apps Environment</h4>
<p>To deploy an Azure Container App, we need a Container Apps environment to host it; you can follow <a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">this post</a> to see how to create an environment.</p>
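<p>For reference, a minimal sketch of creating the resource group and the environment with the CLI, using the same names as the deployment script in the next step:</p>

```shell
# Create the resource group and the Container Apps environment
az group create --name aca-grpc-rg --location westus
az containerapp env create --name aca-grpc-env --resource-group aca-grpc-rg --location westus
```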
<h4>Step 4: Deploy gRPC server to Azure Container Apps</h4>
<p>Now I&#8217;ll deploy the gRPC server to an Azure Container App, using the CLI to create it. Adjust the variable values (environment, ACR name, etc.) to match your deployment:</p><pre class="urvanov-syntax-highlighter-plain-tag">$RESOURCE_GROUP="aca-grpc-rg"
$LOCATION="westus"
$ENVIRONMENT="aca-grpc-env"
$BACKEND_SVC_NAME="expenses-grpc-server"
$BACKEND_API_NAME="expenses-grpc-api"
$ACR_NAME="taskstrackeracr"

az containerapp create `
  --name $BACKEND_SVC_NAME  `
  --resource-group $RESOURCE_GROUP `
  --environment $ENVIRONMENT `
  --image "$ACR_NAME.azurecr.io/$BACKEND_SVC_NAME" `
  --registry-server "$ACR_NAME.azurecr.io" `
  --target-port 80 `
  --ingress 'external' `
  --transport http2 `
  --min-replicas 1 `
  --max-replicas 1 `
  --cpu 0.25 --memory 0.5Gi `
  --revision-suffix v20221031-1 `
  --query configuration.ingress.fqdn</pre><p>Notice that I&#8217;m setting the &#8220;Transport&#8221; value here to &#8220;http2&#8221;, as this is needed when the deployed container exposes gRPC services. I could set the &#8220;Ingress&#8221; property to &#8220;internal&#8221; to allow only the container apps deployed within the same environment to call the gRPC server (which matches our case), but I kept it &#8220;external&#8221; because I want to test the gRPC server from Postman in the next step.</p>
<h4>Step 5: Test the gRPC server after deploying to Azure Container App</h4>
<p>Similar to what I&#8217;ve done previously when testing the gRPC server locally, I will take the FQDN of the gRPC server and follow the exact steps done in step 8 earlier. If all is working correctly, you should be able to invoke the gRPC service using server reflection successfully. For example, when creating an expense, the Postman results will look like the below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1556" src="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps-1024x875.jpg" alt="PostMan-gRPC-server-Container-Apps" width="1024" height="875" srcset="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps-1024x875.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps-300x256.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps-768x656.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps-1536x1313.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-server-Container-Apps.jpg 1780w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Notice that we can &#8220;Enable server certificate verification&#8221; when testing against a gRPC service deployed to Azure Container Apps, as the certificate used by Container Apps is a trusted certificate, so the Postman client won&#8217;t complain about it.</p>
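<p>As an alternative to Postman, the deployed service can also be invoked from the command line with grpcurl, relying on the same server reflection over TLS on port 443. This is only a sketch: the FQDN and the service/method name are illustrative and depend on your environment and on the package declared in &#8220;expense.proto&#8221;:</p>

```shell
# Call the deployed gRPC server via server reflection (names are illustrative placeholders)
grpcurl -d '{"owner":"tjoudeh@mail.com"}' \
  expenses-grpc-server.<your-environment>.westus.azurecontainerapps.io:443 \
  ExpenseSvc/GetExpenses
```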
<h4>Step 6: Deploy gRPC client to Azure Container Apps</h4>
<p>Now I&#8217;ll deploy the gRPC client API to an Azure Container App, again using the CLI:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp create `
  --name $BACKEND_API_NAME  `
  --resource-group $RESOURCE_GROUP `
  --environment $ENVIRONMENT `
  --image "$ACR_NAME.azurecr.io/$BACKEND_API_NAME" `
  --registry-server "$ACR_NAME.azurecr.io" `
  --target-port 80 `
  --transport http `
  --ingress 'external' `
  --min-replicas 1 `
  --max-replicas 1 `
  --cpu 0.25 --memory 0.5Gi `
  --env-vars "grpc__server=https://expenses-grpc-server.gentleplant-b85581e3.westus.azurecontainerapps.io" "grpc__localhost=false" `
  --query configuration.ingress.fqdn</pre><p>Notice that I&#8217;m keeping the transport as HTTP here, as this is a normal Web API that will invoke the gRPC server. I&#8217;m also setting the &#8220;grpc__server&#8221; environment variable to the address of the gRPC service (the double underscore maps to the nested &#8220;grpc:server&#8221; configuration key) and &#8220;grpc__localhost&#8221; to &#8220;false&#8221;.</p>
<p>With this in place, we can do our final testing via Postman or any other REST client. Take note of your gRPC client API Container App FQDN and invoke the POST operation to create a new expense; the Postman result should look like the below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1557" src="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1024x707.jpg" alt="PostMan-gRPC-client-Container-Apps" width="1024" height="707" srcset="https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1024x707.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-300x207.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-768x530.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps-1536x1061.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/11/PostMan-gRPC-client-Container-Apps.jpg 1703w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>In the <a href="https://bitoftech.net/2022/11/09/invoking-dapr-services-in-azure-container-apps-using-grpc/" target="_blank" rel="noopener">next post</a>, I&#8217;ll be covering how to invoke gRPC services via Dapr Sidecar using GrpcClient and DaprClient (Dapr Client SDK).</p>
<h4>The <a href="https://github.com/tjoudeh/container-apps-grpc-dapr" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://learn.microsoft.com/en-us/aspnet/core/grpc/test-tools?view=aspnetcore-6.0" target="_blank" rel="noopener">Test gRPC services with Postman or gRPCurl in ASP.NET Core</a></li>
<li><a href="https://sahansera.dev/building-grpc-client-dotnet/" target="_blank" rel="noopener">Building a gRPC Client in .NET</a></li>
<li><a href="https://unsplash.com/photos/xuTJZ7uD7PI" target="_blank" rel="noopener">Feature Image Credit</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/">gRPC Communication In Azure Container Apps &#8211; Part 1</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/11/07/grpc-communication-in-azure-container-apps/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1538</post-id>	</item>
		<item>
		<title>Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</title>
		<link>https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/</link>
					<comments>https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Sat, 15 Oct 2022 22:25:00 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Bicep]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Azure Files]]></category>
		<category><![CDATA[Azure Storage]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1510</guid>

					<description><![CDATA[<p>This is the twelfth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the twelfth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li><a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a></li>
<li><a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/" target="_blank" rel="noopener">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a></li>
<li><a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a></li>
<li><a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a></li>
<li><a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a></li>
<li>Azure Container Apps Volume Mounts using Azure Files &#8211; (This Post)</li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Azure Container Apps Volume Mounts using Azure Files</h2>
<p>In the <a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">previous post</a>, we saw how we could store and persist files on an external service such as Azure Blob Storage. Still, there are some cases in which persisting files on the container itself is a must for deploying the service successfully. For example, in the last post, titled &#8220;<a href="https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/" target="_blank" rel="noopener">Deploy Meilisearch into Azure Container Apps</a>&#8221;, I showed that deploying a <a href="https://docs.meilisearch.com/" target="_blank" rel="noopener">Meilisearch</a> instance on Container Apps requires persisting different files (Lightning Memory-Mapped Database files, configuration, etc.) on the container storage, and mounting a volume backed by a file share from <a href="https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction">Azure Files</a> is a requirement for the service to function correctly.</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Storage options for Azure Container Apps</h3>
<p>There are three different types of storage, and Container Apps can utilize any or all of them based on the use case; the <a href="https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=aca-cli" target="_blank" rel="noopener">official documentation</a> covers this thoroughly. In this post, I will be focusing on configuring Container Apps volume mounts using Azure Files as permanent storage, writing files to an Azure Files share and making them accessible to other containers.</p>

<table id="tablepress-7" class="tablepress tablepress-id-7">
<thead>
<tr class="row-1">
	<th class="column-1">Storage type</th><th class="column-2">Description</th><th class="column-3">Usage examples</th>
</tr>
</thead>
<tbody class="row-hover">
<tr class="row-2">
	<td class="column-1"><a href="https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=aca-cli#container-file-system" rel="noopener" target="_blank">Container file system</a></td><td class="column-2">Temporary storage scoped to the local container</td><td class="column-3">Writing a local app cache.</td>
</tr>
<tr class="row-3">
	<td class="column-1"><a href="https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=aca-cli#temporary-storage" rel="noopener" target="_blank">Temporary storage</a></td><td class="column-2">Temporary storage scoped to an individual replica</td><td class="column-3">Sharing files between containers in a replica. For instance, the main app container can write log files that are processed by a sidecar container.</td>
</tr>
<tr class="row-4">
	<td class="column-1"><a href="https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=aca-cli#azure-files" rel="noopener" target="_blank">Azure Files</a></td><td class="column-2">Permanent storage</td><td class="column-3">Writing files to a file share to make data accessible by other systems.</td>
</tr>
</tbody>
</table>
<p>The use case that we will cover today is the following: once the user creates a new task, the Backend API service will capture the raw JSON of the task, create a JSON file, and store it locally on the service&#8217;s storage, which is volume-mounted to an Azure Files share. We&#8217;ll then configure the Frontend Web container app to use the same volume mount and file share, so the files written by the Backend API will be available to the Frontend Web App for download/view from the UI. We will see that creating new revisions or restarting the containers will not have any impact on the persistence of the files; they will always be preserved and accessible by the container apps.</p>
<h3>Updating the Backend API Project</h3>
<h4>Step 1: Save files locally on container storage</h4>
<p>We will now introduce a few changes to the Backend API code base to write files to the container&#8217;s local storage. To do so, open the file named &#8220;TasksStoreManager.cs&#8221; in the project &#8220;TasksTracker.TasksManager.Backend.Api&#8221; and add the below method:</p><pre class="urvanov-syntax-highlighter-plain-tag">private async Task WriteFileAsync(TaskModel taskModel)
 {
     var options = new JsonSerializerOptions()
     {
         WriteIndented = true
     };
     
     var jsonString = System.Text.Json.JsonSerializer.Serialize(taskModel, options);

     var directory = Path.Combine(Directory.GetCurrentDirectory(), "attachments");

     if (!Directory.Exists(directory))
     {
         Directory.CreateDirectory(directory);
     }

     var filePath = Path.ChangeExtension(Path.Combine(directory, taskModel.TaskId.ToString()), ".json");

     _logger.LogInformation("Trying to write file for task with id '{0}' on path {1}", taskModel.TaskId.ToString(),filePath);

     await File.WriteAllTextAsync(filePath, jsonString);
 }</pre><p>What we have done here is simple: we are taking the task&#8217;s JSON content and writing it to a file locally. When we call the method 
			<span id="urvanov-syntax-highlighter-69ac93cda23b5280865352" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">Directory</span><span class="crayon-sy">.</span><span class="crayon-e">GetCurrentDirectory</span><span class="crayon-sy">(</span><span class="crayon-sy">)</span></span></span> while running on Container Apps, it returns the working directory of the service, which is <strong>/app</strong>, so the files are stored on the container storage under the directory <strong>/app/attachments</strong>.</p>
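<p>In other words, on Container Apps the file path for a task resolves as sketched below (the task id is a made-up example):</p>

```shell
# Shell analogue of Path.Combine(workdir, "attachments") plus "<taskId>.json"
workdir="/app"            # working directory of the service inside the container
task_id="0a1b2c3d"        # hypothetical task id
file_path="$workdir/attachments/$task_id.json"
echo "$file_path"         # /app/attachments/0a1b2c3d.json
```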
<p>We need to call the method &#8220;WriteFileAsync&#8221;, so update the method named &#8220;CreateNewTask&#8221; and invoke &#8220;WriteFileAsync&#8221; as the highlighted line below:</p><pre class="urvanov-syntax-highlighter-plain-tag">public async Task&lt;Guid&gt; CreateNewTask(string taskName, string createdBy, string assignedTo, DateTime dueDate)
 {
     //code removed for brevity 
    
     await _daprClient.SaveStateAsync&lt;TaskModel&gt;(STORE_NAME, taskModel.TaskId.ToString(), taskModel);

     _logger.LogInformation("Write task file as json with name: '{0}' to permanent file storage", taskModel.TaskName);

     await WriteFileAsync(taskModel);

     await PublishTaskSavedEvent(taskModel);
	 
     //code removed for brevity 

 }</pre><p></p>
<h3>Updating the Frontend Web App Project</h3>
<h4>Step 1: Download files from the container storage</h4>
<p>Next, we will update the Frontend Web App to read and download the saved JSON task file. We will add a new button on the Edit page responsible for reading the stored file. To do this, open the page named &#8220;Tasks/Edit.cshtml.cs&#8221; in the project named &#8220;TasksTracker.WebPortal.Frontend.Ui&#8221; and add the method below:</p><pre class="urvanov-syntax-highlighter-plain-tag">public IActionResult OnGetDownloadFile(string fileNameWithoutExtension)
 {

     byte[] bytes;
     var fileName = Path.ChangeExtension(fileNameWithoutExtension, ".json");

     var directory = Path.Combine(Directory.GetCurrentDirectory(), "attachments");

     var filePath = Path.Combine(directory, fileName);

     try
     {
         //Read the File data into Byte Array.
         bytes = System.IO.File.ReadAllBytes(filePath);

         //Send the File to Download.
         return File(bytes, "application/octet-stream", fileName);
     }
     catch (FileNotFoundException)
     {
         var result = new NotFoundObjectResult(new { message = "File Not Found" });
         return result;
     }

 }</pre><p>As you can see above, we are reading the tasks&#8217; JSON files from the same storage directory <strong>/app/attachments</strong>. Remember that these are two different services, each with its own local storage; but once we configure both services&#8217; local storage as volumes mounted to the <strong>same</strong> Azure Files share, both services will be able to see the same files under this directory. More about this in the next steps.</p>
<p>Now, we just need to add a link on the Edit screen to download the raw JSON for each task. To do this, open the file named &#8220;Tasks/Edit.cshtml&#8221; and add the highlighted line below:</p><pre class="urvanov-syntax-highlighter-plain-tag">&lt;div class="col-md-4"&gt;
    &lt;form method="post"&gt;
		
	@* code removed for brevity *@
		
        &lt;div class="form-group"&gt;
            &lt;input type="submit" value="Save" class="btn btn-primary" /&gt;
            &lt;a class="btn btn-primary" download href="@Url.Page("Edit", "DownloadFile", new { fileNameWithoutExtension = Model.TaskUpdate!.TaskId })"&gt;Download Raw&lt;/a&gt;
        &lt;/div&gt;

    &lt;/form&gt;
&lt;/div&gt;</pre><p></p>
<h3>Create the Azure File Share</h3>
<p>If you are following along with the tutorial, you can use the same Azure Storage account created earlier to host the Azure File share (skip step 1 below); if not, create a new storage account as shown in step 1 below.</p>
<h4>Step 1: Create a new Storage account</h4>
<p>From a PowerShell console run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$RESOURCE_GROUP="tasks-tracker-rg"
$STORAGE_ACCOUNT="taskstracker"
$LOCATION="eastus"

az storage account create `
  --resource-group $RESOURCE_GROUP `
  --name $STORAGE_ACCOUNT `
  --location "$LOCATION" `
  --kind StorageV2 `
  --sku Standard_LRS `
  --enable-large-file-share `
  --query provisioningState</pre><p></p>
<h4>Step 2: Create Azure Storage File Share</h4>
<p>Now we will create the File Share to which the containers&#8217; local storage/volumes will be mounted; to do so, use PowerShell and run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$STORAGE_ACCOUNT="taskstracker"
$SHARE_NAME = "permanent-file-share"
$RESOURCE_GROUP="tasks-tracker-rg"

az storage share-rm create `
  --resource-group $RESOURCE_GROUP `
  --storage-account $STORAGE_ACCOUNT  `
  --name $SHARE_NAME `
  --quota 1024 `
  --enabled-protocols SMB `
  --output table</pre><p></p>
<h4>Step 3: Link the Storage File Share to Container Apps Environment</h4>
<p>The step below defines the storage mount link from our Container Apps environment to the Azure Storage account:</p><pre class="urvanov-syntax-highlighter-plain-tag">## Get storage account key
$STORAGE_ACCOUNT_KEY=$(az storage account keys list -n $STORAGE_ACCOUNT --query "[0].value" -o tsv)

##Create the storage link in the environment.
$STORAGE_MOUNT_NAME = "permanent-storage-mount"
az containerapp env storage set `
  --name $ENVIRONMENT `
  --access-mode ReadWrite `
  --azure-file-account-name $STORAGE_ACCOUNT `
  --azure-file-account-key $STORAGE_ACCOUNT_KEY `
  --azure-file-share-name $SHARE_NAME `
  --storage-name $STORAGE_MOUNT_NAME `
  --resource-group $RESOURCE_GROUP `
  --output table</pre><p>Basically, the command above creates a link between the Container Apps environment and the file share we created in step 2 using the 
			<span id="urvanov-syntax-highlighter-69ac93cda23c2328827909" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012-black crayon-theme-vs2012-black-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">az </span><span class="crayon-e">storage </span><span class="crayon-v">share</span><span class="crayon-o">-</span><span class="crayon-v">rm</span></span></span> command.</p>
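<p>To double-check that the link was created successfully, you can list the storage definitions of the environment from the same PowerShell console (this assumes the same variables defined in the commands above):</p><pre class="urvanov-syntax-highlighter-plain-tag">## List the storage definitions linked to the Container Apps environment
az containerapp env storage list `
  --name $ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --output table</pre><p>The output should contain an entry named &#8220;permanent-storage-mount&#8221;.</p>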
<h4>Step 4: Update the Backend API service to use the storage mount</h4>
<p>Now we are ready to configure the Backend API service to use this storage mount; to do so, we need to do two things:</p>
<ul>
<li>Define a storage volume</li>
<li>Define a volume mount</li>
</ul>
<p>The &#8220;az containerapp&#8221; CLI has no direct command to update the storage configuration, so we are going to download the YAML definition of the service, modify it manually, and then update the container app using the updated YAML. If you are using ARM/Bicep templates, you can do this directly from there.</p>
<p>But first, we need to push our code changes to ACR. To do this, run the PowerShell command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$BACKEND_API_NAME="tasksmanager-backend-api"
$ACR_NAME="taskstrackeracr"

az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' .</pre><p>Next, we will download the Backend API YAML file so we can make the manual edits. The command below will download a file named &#8220;app-backend-api.yaml&#8221; to the directory you are in:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp show `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --output yaml &gt; app-backend-api.yaml</pre><p>Once the file &#8220;app-backend-api.yaml&#8221; is downloaded, go ahead and open it with VS Code or any editor that supports editing YAML files. We need to do three things in the file:</p>
<ol>
<li>Define a &#8220;volumes&#8221; array at the template level with a single entry/item named &#8220;azure-file-volume&#8221; for the storage linked to the Container Apps environment (&#8220;permanent-storage-mount&#8221;); this volume entry should be of storage type &#8220;AzureFile&#8221;</li>
<li>Define a &#8220;volumeMounts&#8221; array for the container with an entry/item with the volume name &#8220;azure-file-volume&#8221; and a mount path of &#8220;/app/attachments&#8221;. This path should match the path our Backend API writes JSON files to and our Frontend Web App reads JSON files from. You can define more volume mounts if needed to support other storage use cases</li>
<li>Update the &#8220;revisionSuffix&#8221; to a unique value, so the container app will not complain that the revisionSuffix is already in use.</li>
</ol>
<p>Your modified application YAML should look similar to the image below; don&#8217;t forget to save the file <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/10/backendappyaml.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1519" src="https://bitoftech.net/wp-content/uploads/2022/10/backendappyaml.jpg" alt="backendappyaml" width="730" height="894" srcset="https://bitoftech.net/wp-content/uploads/2022/10/backendappyaml.jpg 730w, https://bitoftech.net/wp-content/uploads/2022/10/backendappyaml-245x300.jpg 245w" sizes="auto, (max-width: 730px) 100vw, 730px" /></a></p>
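<p>For reference, the relevant portion of the modified YAML should be shaped like the sketch below; the &#8220;revisionSuffix&#8221; value here is just an example, and the container name and image should stay as they appear in your downloaded file:</p><pre class="urvanov-syntax-highlighter-plain-tag">properties:
  template:
    revisionSuffix: volume-mount-v1   # must be unique per update
    containers:
      - name: tasksmanager-backend-api
        image: taskstrackeracr.azurecr.io/tasksmanager/tasksmanager-backend-api
        volumeMounts:
          - volumeName: azure-file-volume
            mountPath: /app/attachments
    volumes:
      - name: azure-file-volume
        storageName: permanent-storage-mount
        storageType: AzureFile</pre>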
<p>Next, we need to update the container app and create a new revision using the updated YAML file. To do so, run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --yaml app-backend-api.yaml `
  --output table</pre><p></p>
<h4>Step 5: Update the Frontend Web App service to use the storage mount</h4>
<p>Now we have to repeat the steps we performed for the Backend API so that the Frontend Web App container also has its storage volume mounted to the Azure Files share and can read the JSON files written by the Backend API.</p>
<p>Push the code changes to ACR as below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az acr build --registry $ACR_NAME --image "tasksmanager/$FRONTEND_WEBAPP_NAME" --file 'TasksTracker.WebPortal.Frontend.Ui/Dockerfile' .</pre><p>Then download the YAML file of the Frontend Web App:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp show `
  --name $FRONTEND_WEBAPP_NAME `
  --resource-group $RESOURCE_GROUP `
  --output yaml &gt; app-frontend-ui.yaml</pre><p>Manually update the YAML entries to create &#8220;volumes&#8221; and &#8220;volumeMounts&#8221;; the entries will be identical to those used for the Backend API.</p>
<p>Lastly, update the Frontend Web App container by creating a new revision with the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">##Update container app using yaml file
az containerapp update `
  --name $FRONTEND_WEBAPP_NAME `
  --resource-group $RESOURCE_GROUP `
  --yaml  app-frontend-ui.yaml `
  --output table</pre><p>With both changes in place, we can give it a try: save a new task from the UI, which should create a JSON file in the backend, and when you edit the task you should see a download button that allows you to download the raw JSON file from the shared Azure Files share.</p>
<p>You can access both containers by using &#8220;exec&#8221; and navigating to the directory &#8220;/app/attachments&#8221;; you should be able to see the files stored in this local location, which is mounted to Azure Files.</p><pre class="urvanov-syntax-highlighter-plain-tag">##Access Backend API container
az containerapp exec --name $BACKEND_API_NAME --resource-group $RESOURCE_GROUP
ls /app/attachments

##Access Frontend WebApp container
az containerapp exec --name $FRONTEND_WEBAPP_NAME --resource-group $RESOURCE_GROUP
ls /app/attachments</pre><p><a href="https://bitoftech.net/wp-content/uploads/2022/10/exec-result.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1520" src="https://bitoftech.net/wp-content/uploads/2022/10/exec-result.jpg" alt="exec-result" width="772" height="178" srcset="https://bitoftech.net/wp-content/uploads/2022/10/exec-result.jpg 772w, https://bitoftech.net/wp-content/uploads/2022/10/exec-result-300x69.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/10/exec-result-768x177.jpg 768w" sizes="auto, (max-width: 772px) 100vw, 772px" /></a></p>
<p>And you can verify that files are stored in the shared Azure Files Storage from Azure Portal or Azure Storage Explorer.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/10/AzureFilesStorage.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1521" src="https://bitoftech.net/wp-content/uploads/2022/10/AzureFilesStorage.jpg" alt="Azure Files Storage" width="961" height="559" srcset="https://bitoftech.net/wp-content/uploads/2022/10/AzureFilesStorage.jpg 961w, https://bitoftech.net/wp-content/uploads/2022/10/AzureFilesStorage-300x175.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/10/AzureFilesStorage-768x447.jpg 768w" sizes="auto, (max-width: 961px) 100vw, 961px" /></a></p>
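<p>You can also do this check from the CLI by listing the contents of the file share directly; this assumes the same variables used when creating the share and retrieving the account key earlier:</p><pre class="urvanov-syntax-highlighter-plain-tag">## List the files stored in the Azure Files share
az storage file list `
  --share-name $SHARE_NAME `
  --account-name $STORAGE_ACCOUNT `
  --account-key $STORAGE_ACCOUNT_KEY `
  --output table</pre><p>You should see one JSON file per task created from the UI.</p>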
<h3>Update the Bicep template to reflect changes in solution components</h3>
<p>To keep things consistent, we need to update the Bicep template to match the changes we&#8217;ve made using &#8220;az containerapp&#8221;. The changes are on GitHub, so I will just put links to the impacted files with a brief description of what changed:</p>
<ul>
<li>Update on file &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/ad9022aea32864325c5b0dadb10f7c15dc3410f5/deploy/storageAccount.bicep" target="_blank" rel="noopener">storageAccount.bicep</a>&#8220;: Adding the new file service and the file share</li>
<li>Update on file &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/ad9022aea32864325c5b0dadb10f7c15dc3410f5/deploy/acaEnvironment.bicep" target="_blank" rel="noopener">acaEnvironment.bicep</a>&#8220;: Creating environment storages which created the link between Azure File Share and container Apps environment</li>
<li>Update on file &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/ad9022aea32864325c5b0dadb10f7c15dc3410f5/deploy/containerApp.bicep" target="_blank" rel="noopener">containerApp.bicep</a>&#8220;: Creating the &#8220;volumeMounts&#8221; array and the &#8220;volumes&#8221; for all containers</li>
<li>Update on file &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/ad9022aea32864325c5b0dadb10f7c15dc3410f5/deploy/main.bicep" target="_blank" rel="noopener">main.bicep</a>&#8221; by updating the values for some parameters and passing the storage mount name for the container apps module.</li>
</ul>
<h3>Conclusion</h3>
<p>As we saw in this post, we can mount a file share from Azure Files as a volume inside a container, and once this is set up we achieve the following:</p>
<ul>
<li>Files written under the mount location are persisted to the file share.</li>
<li>Files in the share are available via the mount location.</li>
<li>Multiple containers can mount the same file share, including ones that are in another replica, revision, or container app.</li>
<li>All containers that mount the share can access files written by any other container or method.</li>
<li>More than one Azure Files volume can be mounted in a single container.</li>
</ul>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://techcommunity.microsoft.com/t5/fasttrack-for-azure/azure-container-apps-working-with-storage/ba-p/3561853" target="_blank" rel="noopener">Azure Container Apps: working with storage</a></li>
<li><a href="https://unsplash.com/photos/GNyjCePVRs8?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink" target="_blank" rel="noopener">Featured Image Credit</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1510</post-id>	</item>
		<item>
		<title>Deploy Meilisearch into Azure Container Apps</title>
		<link>https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/</link>
					<comments>https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Sun, 09 Oct 2022 02:12:31 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Bicep]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[ARM]]></category>
		<category><![CDATA[Azure Files]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Meilisearch]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1497</guid>

					<description><![CDATA[<p>Last week I was working on a proof of concept solution which includes a service responsible to provide a simple front-facing search component for a hardware tools website. During the research, I stumbled upon various options and I wanted to try deploying Meilisearch on Azure Container Apps as it meets most of the requirements for [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/">Deploy Meilisearch into Azure Container Apps</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>Last week I was working on a proof of concept solution which includes a service responsible for providing a simple front-facing search component for a hardware tools website. During the research, I stumbled upon various options, and I wanted to try deploying <a href="https://www.meilisearch.com/" target="_blank" rel="noopener">Meilisearch</a> on Azure Container Apps as it meets most of the requirements for the search service within the solution.</p>
<h3>Overview of Meilisearch</h3>
<p><a href="https://docs.meilisearch.com/learn/what_is_meilisearch/overview.html" target="_blank" rel="noopener">Meilisearch</a> is a RESTful search API that offers a fast, instant search experience (search as you type). It is designed to cover the vast majority of the needs of small-to-medium businesses, requiring little configuration during installation yet allowing a high degree of customization.<br />
Meilisearch is an <a href="https://github.com/meilisearch/meilisearch" target="_blank" rel="noopener">open-source project</a> built using Rust, with more than 29K stars on GitHub and support for <a href="https://docs.meilisearch.com/learn/what_is_meilisearch/sdks.html" target="_blank" rel="noopener">various SDKs</a>, including <a href="https://github.com/meilisearch/meilisearch-dotnet" target="_blank" rel="noopener">dotnet</a>.</p>
<h3>Meilisearch on Azure Container Apps</h3>
<p>Meilisearch can be deployed on Azure in different ways; for example, the Meilisearch container image can be deployed on <a href="https://docs.meilisearch.com/learn/cookbooks/azure.html" target="_blank" rel="noopener">Azure App Services</a> as per the documentation. In this post I will go over the steps needed to prepare the Bicep template to deploy a Meilisearch container image into Azure Container Apps, and use storage mounts in Azure Container Apps to permanently host the Meilisearch database in <a href="https://learn.microsoft.com/en-us/azure/container-apps/storage-mounts?pivots=aca-cli#azure-files">Azure Files</a>.</p>
<h4>The <a href="https://github.com/tjoudeh/Container-Apps-Meilisearch" target="_blank" rel="noopener">source code</a> used for this post exists on GitHub.</h4>
<h3>Deploying Meilisearch on Azure Container Apps</h3>
<p>In this post, I will go over the Bicep template needed to deploy Meilisearch into Azure Container Apps and any important notes needed for deployment.</p>
<p>You can click on the button below to deploy a Meilisearch instance into Azure Container Apps. This is the final result of the Bicep templates we are going to build:<br />
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Ftjoudeh%2FContainer-Apps-Meilisearch%2Fmaster%2Fdeploy%2Fmain.json" target="_blank" rel="noopener"><br />
<img decoding="async" src="https://aka.ms/deploytoazurebutton" /><br />
</a><br />
If you are new to Bicep, you can check my <a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">previous post</a> on how to deploy Container Apps using Bicep.</p>
<h4>Step 1: Define Azure Storage Resource</h4>
<p>An Azure storage account is needed to create a file share service under it; the file share service will allow us to mount a file share from <a href="https://learn.microsoft.com/en-us/azure/storage/files/" target="_blank" rel="noopener">Azure Files</a> as a volume inside the Meilisearch container. This means that the Meilisearch database and its configuration files written to the container volume location are persisted in the file share. This is durable storage: if the container is restarted or crashes, or a new revision is deployed, the files on the share are not impacted, and the newly provisioned container will find the files on the configured volume.</p>
<p>To create a file share service under Azure storage, create a directory named &#8220;deploy/modules&#8221;, add a new file named &#8220;storage.bicep&#8221; and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('The name of your application')
param applicationName string

@description('The Azure region where all resources in this example should be created')
param location string = resourceGroup().location

@description('A list of tags to apply to the resources')
param resourceTags object

@description('The name of the container to create. Defaults to applicationName value.')
param containerName string = applicationName

@description('The name of the Azure file share.')
param shareName string

@description('The name of storage account')
param storageAccountName string

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  tags: resourceTags
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource blobServices 'Microsoft.Storage/storageAccounts/blobServices@2021-09-01' = {
  name: 'default'
  parent: storageAccount
}

resource storageContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-09-01' = {
  name: containerName
  parent: blobServices
}

resource fileServices 'Microsoft.Storage/storageAccounts/fileServices@2021-09-01' = {
  name: 'default'
  parent: storageAccount
}

resource permanentFileShare 'Microsoft.Storage/storageAccounts/fileServices/shares@2022-05-01' = {
  name: shareName
  parent: fileServices
  properties: {
    accessTier: 'TransactionOptimized'
    enabledProtocols: 'SMB'
    shareQuota: 1024
  }
}

var storageKeyValue = storageAccount.listKeys().keys[0].value

output storageAccountName string = storageAccount.name
output id string = storageAccount.id
output apiVersion string = storageAccount.apiVersion
output storageKey string = storageKeyValue</pre><p>Looking at the code above, notice that we&#8217;ve created a file share with the access tier &#8220;TransactionOptimized&#8221;; you can use &#8220;Premium&#8221; instead, as it is backed by SSD drives and provides low latency. The size of the file share is set to 1024 gigabytes (1TB).</p>
<h4>Step 2: Define Azure Log Analytics Workspace Resource</h4>
<p>Add a new file named &#8220;logAnalyticsWorkspace.bicep&#8221; under the folder &#8220;modules&#8221; and use the code below; the Log Analytics workspace is needed by the Container Apps Environment:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('The name of your Log Analytics Workspace')
param logAnalyticsWorkspaceName string

@description('The Azure region where all resources in this example should be created')
param location string = resourceGroup().location

@description('A list of tags to apply to the resources')
param resourceTags object

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-12-01-preview' = {
  name: logAnalyticsWorkspaceName
  tags: resourceTags
  location: location
  properties: any({
    retentionInDays: 30
    features: {
      searchVersion: 1
    }
    sku: {
      name: 'PerGB2018'
    }
  })
}

var sharedKey = listKeys(logAnalyticsWorkspace.id, logAnalyticsWorkspace.apiVersion).primarySharedKey

output workspaceResourceId string = logAnalyticsWorkspace.id
output logAnalyticsWorkspaceCustomerId string = logAnalyticsWorkspace.properties.customerId
output logAnalyticsWorkspacePrimarySharedKey string = sharedKey</pre><p></p>
<h4>Step 3: Define an Azure Container Apps Environment Resource</h4>
<p>Add a new file named &#8220;acaEnvironment.bicep&#8221; under the folder &#8220;modules&#8221; and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('The name of Azure Container Apps Environment')
param acaEnvironmentName string

@description('The Azure region where all resources in this example should be created')
param location string = resourceGroup().location

@description('A list of tags to apply to the resources')
param resourceTags object

param logAnalyticsWorkspaceCustomerId string

@secure()
param logAnalyticsWorkspacePrimarySharedKey string 

resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
  name: acaEnvironmentName
  location: location
  tags: resourceTags
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsWorkspaceCustomerId
        sharedKey: logAnalyticsWorkspacePrimarySharedKey
      }
    }
  }
}

output acaEnvironmentId string = environment.id</pre><p></p>
<h4>Step 4: Define an Azure Container Apps Environment Storages Resource</h4>
<p>Add a new file named &#8220;acaEnvironmentStorages.bicep&#8221; under the folder &#8220;modules&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('The name of Azure Container Apps Environment')
param acaEnvironmentName string

@description('The name of your storage account')
param storageAccountResName string

@description('The storage account key')
@secure()
param storageAccountResourceKey string 

@description('The ACA env storage name mount')
param storageNameMount string

@description('The name of the Azure file share. Defaults to applicationName value.')
param shareName string

resource environment 'Microsoft.App/managedEnvironments@2022-03-01' existing = {
  name: acaEnvironmentName
}

//Environment Storages
resource permanentStorageMount 'Microsoft.App/managedEnvironments/storages@2022-03-01' = {
  name: storageNameMount
  parent: environment
  properties: {
    azureFile: {
      accountName: storageAccountResName
      accountKey: storageAccountResourceKey
      shareName: shareName
      accessMode: 'ReadWrite'
    }
  }
}</pre><p>This is a key step: it configures a storage definition of type <strong>AzureFile</strong> in the Container Apps Environment. Within this file we have enabled the environment to use the Azure File share service for any Container App under this environment, and we are setting the access mode to &#8220;ReadWrite&#8221; as we need to write and read files from the file share.</p>
<h4>Step 5: Define an Azure Container Apps Resource</h4>
<p>Add a new file named &#8220;containerApp.bicep&#8221; under the folder &#8220;modules&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param containerAppName string
param location string 
param environmentId string 
param containerImage string
param targetPort int
param containerRegistry string
param containerRegistryUsername string
param isPrivateRegistry bool
param registryPassName string
param minReplicas int = 0
param maxReplicas int = 1
@secure()
param secListObj object
param envList array = []
param revisionMode string = 'Single'
param storageNameMount string
param volumeName string
param mountPath string
param resourceTags object
param resourceAllocationCPU string
param resourceAllocationMemory string

resource containerApp 'Microsoft.App/containerApps@2022-06-01-preview' = {
  name: containerAppName
  location: location
  tags: resourceTags
  properties: {
    managedEnvironmentId: environmentId
    configuration: {
      activeRevisionsMode: revisionMode
      secrets: secListObj.secArray
      registries: isPrivateRegistry ? [
        {
          server: containerRegistry
          username: containerRegistryUsername
          passwordSecretRef: registryPassName
        }
      ] : null
      ingress: {
        external: true
        targetPort: targetPort
        transport: 'auto'
        traffic: [
          {
            latestRevision: true
            weight: 100
          }
        ]
      } 
      dapr: null
    }
    template: {
      containers: [
        {
          image: containerImage
          name: containerAppName
          env: envList
          volumeMounts: [
            { 
               mountPath:mountPath
               volumeName:volumeName
            }
          ]
          resources:{
            cpu: json(resourceAllocationCPU)
            memory: resourceAllocationMemory
           }
        }
      ]
      volumes: [
        {
           name: volumeName
           storageName: storageNameMount
           storageType: 'AzureFile'
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

output fqdn string = containerApp.properties.configuration.ingress.fqdn</pre><p>This module is responsible for deploying the actual Meilisearch container image to the Container App; what we have done here is the following:</p>
<ul>
<li>Configuring the ingress of the container app to be external ingress (accepts HTTP requests from the public internet).</li>
<li>Parameterizing the target port of the ingress controller; we will set the value in the next steps.</li>
<li>Parameterizing the Meilisearch container image which will be deployed on this container app; the parameter will hold the Meilisearch image from Docker Hub.</li>
<li>Parameterizing the compute resources (CPU and memory) of the container app.</li>
<li>Parameterizing the secrets and environment variables arrays.</li>
<li>Defining a single storage volume of type <strong>AzureFile</strong> for the container app and parameterizing the volume name and storage name mount.</li>
<li>Defining a single volume mount in the container app and parameterizing the mount path and volume name.</li>
<li>Lastly, outputting the FQDN of the provisioned container app as it will be the URL to access Meilisearch API.</li>
</ul>
<h4>Step 6: Define the Main module for the final deployment</h4>
<p>Lastly, we need to define the main Bicep module which links the other modules together; this is the file referenced from the AZ CLI command when creating all the resources. To do so, under the folder &#8220;deploy&#8221;, add a new file named &#8220;main.bicep&#8221; and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">targetScope = 'subscription'

//Azure Regions which Azure Container Apps available at can be found on this link:
//https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=container-apps&amp;regions=all
@description('The Azure region code for deployment resource group and resources such as westus, eastus, northeurope, etc...')
param location string = 'westus'

@description('The name of your search service. This value should be unique')
param applicationName string = 'meilisearch'

@description('The Container App CPU cores and Memory')
@allowed([
  {
    cpu: '0.25'
    memory: '0.5Gi'
  }
  {
    cpu: '0.5'
    memory: '1.0Gi'
  }
  {
    cpu: '0.75'
    memory: '1.5Gi'
  }
  {
    cpu: '1.0'
    memory: '2.0Gi'
  }
  {
    cpu: '1.25'
    memory: '2.50Gi'
  }
  {
    cpu: '1.5'
    memory: '3.0Gi'
  }
  {
    cpu: '1.75'
    memory: '3.5Gi'
  }
  {
    cpu: '2.0'
    memory: '4.0Gi'
  }
])
param containerResources object = {
  cpu: '1.0'
  memory: '2.0Gi'
}

@maxLength(4)
@description('The environment of deployment such as dev, test, stg, prod, etc...')
param deploymentEnvironment string = 'dev'

@secure()
@description('The Master API Key used to connect to Meilisearch instance')
@minLength(32)
param meilisearchMasterKey string = newGuid()


var resourceGroupName = '${applicationName}-${deploymentEnvironment}-rg'
var logAnalyticsWorkspaceResName = '${applicationName}-${deploymentEnvironment}-logs'
var environmentName = '${applicationName}-${deploymentEnvironment}-env'
var storageAccountName  = '${take(applicationName,14)}${deploymentEnvironment}strg'

var shareName = 'meilisearch-fileshare'
var storageNameMount = 'permanent-storage-mount'

var meilisearchImageName = 'getmeili/meilisearch:v0.29'
var meilisearchAppPort = 7700
var dbMountPath = '/data/meili'
var volumeName = 'azure-file-volume'

var defaultTags = {
  environment: deploymentEnvironment
  application: applicationName
}

resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: resourceGroupName
  location: location
  tags: defaultTags
}

module storageModule 'modules/storage.bicep' = {
  scope: resourceGroup(rg.name)
  name: '${deployment().name}--storage'
  params: {
    storageAccountName: storageAccountName
    location: rg.location
    applicationName: applicationName
    containerName: applicationName
    shareName: shareName
    resourceTags: defaultTags
  }
}

module logAnalyticsWorkspace 'modules/logAnalyticsWorkspace.bicep' = {
  scope: resourceGroup(rg.name)
  name: '${deployment().name}--logAnalyticsWorkspace'
  params: {
    logAnalyticsWorkspaceName: logAnalyticsWorkspaceResName
    location: rg.location
    resourceTags: defaultTags
  }
}

module environment 'modules/acaEnvironment.bicep' = {
  scope: resourceGroup(rg.name)
  name: '${deployment().name}--acaenvironment'
  params: {
    acaEnvironmentName: environmentName
    location: rg.location
    logAnalyticsWorkspaceCustomerId: logAnalyticsWorkspace.outputs.logAnalyticsWorkspaceCustomerId
    logAnalyticsWorkspacePrimarySharedKey: logAnalyticsWorkspace.outputs.logAnalyticsWorkspacePrimarySharedKey
    resourceTags: defaultTags
  }
}

module environmentStorages 'modules/acaEnvironmentStorages.bicep' = {
  scope: resourceGroup(rg.name)
  name: '${deployment().name}--acaenvironmentstorages'
  dependsOn:[
    environment
  ]
  params: {
    acaEnvironmentName: environmentName
    storageAccountResName: storageModule.outputs.storageAccountName
    storageAccountResourceKey: storageModule.outputs.storageKey
    storageNameMount: storageNameMount
    shareName: shareName
  }
}

module containerApp 'modules/containerApp.bicep' = {
  scope: resourceGroup(rg.name)
  name: '${deployment().name}--${applicationName}'
  dependsOn: [
    environment
  ]
  params: {
    containerAppName: applicationName
    location: rg.location
    environmentId: environment.outputs.acaEnvironmentId
    containerImage: meilisearchImageName
    targetPort: meilisearchAppPort
    minReplicas: 1
    maxReplicas: 1
    revisionMode: 'Single'
    storageNameMount: storageNameMount
    mountPath: dbMountPath
    volumeName: volumeName
    resourceTags: defaultTags
    resourceAllocationCPU: containerResources.cpu
    resourceAllocationMemory: containerResources.memory
    secListObj: {
      secArray: [
        {
          name: 'meili-master-key-value'
          value: meilisearchMasterKey
        }
      ]
    }
    envList: [
      {
        name: 'MEILI_MASTER_KEY'
        secretRef: 'meili-master-key-value'
      }
      {
        name: 'MEILI_DB_PATH'
        value: dbMountPath
      }
    ]
  }
}

output containerAppUrl string = containerApp.outputs.fqdn</pre><p>What we have done is the following:</p>
<ul>
<li>Defined a set of parameters so the end user can control the deployment of the Meilisearch instance. The parameters are as follows:
<ul>
<li>Location: The Azure region code (&#8220;westus&#8221;, &#8220;northeurope&#8221;, &#8220;australiacentral&#8221;, etc&#8230;). This should be a region where Azure Container Apps and Azure Storage are available; you can check Azure Container Apps availability on this <a href="https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=storage,container-apps&amp;regions=all" target="_blank" rel="noopener">link</a>.</li>
<li>Application Name: the name of the Meilisearch service. This name will be part of the FQDN and will be used to name the resource group, storage account, Container Apps environment, and Log Analytics workspace.</li>
<li>Container Resources: Container App CPU and Memory. Read <a href="https://learn.microsoft.com/en-us/azure/container-apps/containers#configuration" target="_blank" rel="nofollow noopener">here</a> to understand more about those CPU/Memory combinations. The limits are soft limits and you can request to increase the quota by submitting a <a href="https://azure.microsoft.com/support/create-ticket/" target="_blank" rel="noopener">support request</a>.</li>
<li>Deployment Environment: Used to identify deployment resources (&#8220;dev&#8221;, &#8220;stg&#8221;, &#8220;prod&#8221;, etc&#8230;) and tag them with the selected environment; this has nothing to do with the capacity or performance of the provisioned resources. This is useful if you are deploying multiple Meilisearch instances under the same subscription for dev/test scenarios.</li>
<li>Meilisearch Master Key: This is the Master API Key used with the Meilisearch instance; the minimum length is 32 characters. The recommendation is to generate a strong key; if one is not provided, the deployment template will generate a GUID as the Master API Key.</li>
</ul>
</li>
<li>Defined a set of variables to be passed to the child modules to provision the needed resources. The variables are:
<ul>
<li>Share Name: the name of the file share which will be created under Azure Files. This name is also passed to the Container Apps environment to configure the storage definition of type <strong>AzureFile</strong> in the Container Apps environment.</li>
<li>Storage Name Mount: The name of the storage mount associated with the Meilisearch container.</li>
<li>Meilisearch Image Name: The Meilisearch Docker image &#8220;<a href="https://hub.docker.com/r/getmeili/meilisearch/tags" target="_blank" rel="noopener">getmeili/meilisearch:v0.29</a>&#8221; hosted on Docker Hub; tag &#8216;v0.29&#8217; is the latest at the time of writing this post.</li>
<li>Meilisearch App Port: This is the port the Container App listens on for incoming requests; Meilisearch uses port 7700. When ingress is enabled on the Container App, the ingress endpoint is exposed on port 443.</li>
<li>DB Mount Path: The path &#8220;/data/meili&#8221; is used; it represents the volume inside the Container App mounted to the Azure file share. You can change the path if needed, but the same path must be set in an environment variable named &#8220;<strong>MEILI_DB_PATH</strong>&#8221;.</li>
<li>Volume Name: Logical name used to define the volume used in the Container App.</li>
</ul>
</li>
<li>Notice how we are storing the &#8220;Meilisearch Master Key&#8221; securely in the Container Apps secrets and using a &#8220;secretRef&#8221; in the environment variables to reference the secret. The name of the environment variable which stores the Master API Key must be &#8220;<strong>MEILI_MASTER_KEY</strong>&#8221;.</li>
<li>Lastly, we are returning the provisioned Container App FQDN as a deployment output.</li>
</ul>
<h4>Step 7: Deploy Meilisearch Resources using Azure CLI</h4>
<p>Now we are ready to deploy the resources using the Azure CLI. To do so, open a PowerShell console and use the script below; don&#8217;t forget to set actual values for the placeholders.</p><pre class="urvanov-syntax-highlighter-plain-tag">az deployment sub create `
  --template-file ./main.bicep `
  --location WestUS `
  --parameters '{ \"meilisearchMasterKey\": {\"value\":\"YOUR_MASTER_KEY\"}, \"applicationName\": {\"value\":\"YOUR_APP_NAME\"}, \"deploymentEnvironment\": {\"value\":\"dev\"}, \"location\": {\"value\":\"westus\"} }'</pre><p>Note: You can use the &#8220;<strong>Deploy to Azure Button</strong>&#8221; highlighted above to deploy the resources.</p>
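<p>As an alternative to inlining escaped JSON, the same parameter values can be supplied through a Bicep/ARM parameters file and referenced with --parameters '@main.parameters.json'. The file name &#8220;main.parameters.json&#8221; below is an assumption; any name works:</p>

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "meilisearchMasterKey": { "value": "YOUR_MASTER_KEY" },
    "applicationName": { "value": "YOUR_APP_NAME" },
    "deploymentEnvironment": { "value": "dev" },
    "location": { "value": "westus" }
  }
}
```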
<p>If everything completes successfully, you should see the resource group and the 4 resources below created under the selected subscription, as in the image below. To get the Container App FQDN, you can navigate to the Container App, or you can get it from the deployment output tab.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/10/ResourceGroup.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1504" src="https://bitoftech.net/wp-content/uploads/2022/10/ResourceGroup.jpg" alt="Meilisearch Azure Container Apps" width="587" height="511" srcset="https://bitoftech.net/wp-content/uploads/2022/10/ResourceGroup.jpg 587w, https://bitoftech.net/wp-content/uploads/2022/10/ResourceGroup-300x261.jpg 300w" sizes="auto, (max-width: 587px) 100vw, 587px" /></a></p>
<h4>Step 8: Test Deployed Meilisearch Instance</h4>
<p>To test the deployed Meilisearch instance, I&#8217;ve created a <a href="https://github.com/tjoudeh/Container-Apps-Meilisearch/tree/master/Meilisearch.Poc/Meilisearch.Console" target="_blank" rel="noopener">console application</a> that uses the <a href="https://github.com/meilisearch/meilisearch-dotnet" target="_blank" rel="noopener">dotnet Meilisearch SDK</a>; it creates an index named &#8220;movies&#8221; and indexes 40K documents from a JSON file (included in the source code), which you can also download from <a href="https://docs.meilisearch.com/movies.json" target="_blank" rel="noopener">this link</a>.</p>
<p>The console application contains the below code:</p><pre class="urvanov-syntax-highlighter-plain-tag">using System.Text.Json;

namespace Meilisearch.Console
{
    public class Movie
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Poster { get; set; }
        public string Overview { get; set; }
        public IEnumerable&lt;string&gt; Genres { get; set; }
    }

    internal class Program
    {
        static async Task Main(string[] args)
        {

            MeilisearchClient client = new MeilisearchClient("https://&lt;fqdn&gt;.&lt;location&gt;.azurecontainerapps.io", "&lt;MASTER API KEY&gt;");
            var options = new JsonSerializerOptions
            {
                PropertyNameCaseInsensitive = true
            };

            string jsonString = await File.ReadAllTextAsync(@"movies.json");
            var movies = JsonSerializer.Deserialize&lt;IEnumerable&lt;Movie&gt;&gt;(jsonString, options);

            var index = client.Index("movies");

            var newSettings = new Settings
            {
                FilterableAttributes = new string[] { "genres" },
                SortableAttributes = new string[] { "title" },
            };

            await index.UpdateSettingsAsync(newSettings);

            await index.AddDocumentsAsync&lt;Movie&gt;(movies,"id");
        }
    }
}</pre><p>What the console application does is the following:</p>
<ul>
<li>Instantiating MeilisearchClient by passing the URL of the Container App and the Master API Key.</li>
<li>Reading the JSON content from the file.</li>
<li>Updating the settings and attributes of an index named &#8220;movies&#8221;.</li>
<li>Lastly, adding the documents (movies) into the index named &#8220;movies&#8221;.</li>
</ul>
<p>Meilisearch provides a <a href="https://docs.meilisearch.com/learn/cookbooks/postman_collection.html#import-the-collection" target="_blank" rel="noopener">Postman collection</a> that contains all the endpoints to configure your deployed Meilisearch instance and perform searches as well. The Meilisearch Postman collection can be downloaded from <a href="https://docs.meilisearch.com/postman/meilisearch-collection.json" target="_blank" rel="noopener">this link.</a></p>
<p>To verify that the documents were added successfully to the &#8220;movies&#8221; index, you can issue the HTTP GET request below and you will see a list of movies returned.</p><pre class="urvanov-syntax-highlighter-plain-tag">GET /indexes/movies/documents
Host: &lt;FQDN&gt;.&lt;Location&gt;.azurecontainerapps.io
Authorization: Bearer &lt;MASTER API KEY&gt;</pre><p></p>
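<p>Beyond listing documents, you can run a search query against the index over HTTP. The request below is a sample (the query text, filter value, and limit are arbitrary); filtering on genres works because the console application registered &#8220;genres&#8221; as a filterable attribute:</p>

```
POST /indexes/movies/search
Host: &lt;FQDN&gt;.&lt;Location&gt;.azurecontainerapps.io
Authorization: Bearer &lt;MASTER API KEY&gt;
Content-Type: application/json

{
  "q": "batman",
  "filter": "genres = Action",
  "limit": 5
}
```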
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://github.com/cmaneu/meilisearch-on-azure" target="_blank" rel="noopener">Meilisearch on Azure</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/">Deploy Meilisearch into Azure Container Apps</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/10/09/deploy-meilisearch-into-azure-container-apps/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1497</post-id>	</item>
		<item>
		<title>Monitor Microservices App using Azure Managed Grafana</title>
		<link>https://bitoftech.net/2022/09/29/monitor-microservices-app-using-azure-managed-grafana/</link>
					<comments>https://bitoftech.net/2022/09/29/monitor-microservices-app-using-azure-managed-grafana/#respond</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Thu, 29 Sep 2022 01:16:17 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[Managed Grafana]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Grafana]]></category>
		<category><![CDATA[KEDA]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1460</guid>

					<description><![CDATA[<p>This post is inspired by the amazing article Monitoring Azure Container Apps With Azure Managed Grafana written by Paul Yu. In his post, Paul walks us through provisioning Azure Managed Grafana instance and Container Apps using Terraform AzAPI provider and how we can add Container Apps dashboards into the Managed Grafana instance. In this post, [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/29/monitor-microservices-app-using-azure-managed-grafana/">Monitor Microservices App using Azure Managed Grafana</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<blockquote><p>This post is inspired by the amazing article <a href="https://paulyu.dev/article/monitoring-azure-container-apps-with-azure-managed-grafana/" target="_blank" rel="noopener">Monitoring Azure Container Apps With Azure Managed Grafana</a> written by <a href="https://twitter.com/pauldotyu" target="_blank" rel="noopener">Paul Yu.</a></p></blockquote>
<p>In his post, Paul walks us through provisioning Azure Managed Grafana instance and Container Apps using Terraform AzAPI provider and how we can add Container Apps dashboards into the Managed Grafana instance.</p>
<p>In this post, I&#8217;ll walk you through how to monitor a simple microservices app in Grafana, consisting of 2 Azure Container Apps, Azure Service Bus, and Azure Storage. I will put the system under some load so we can see how the Container Apps scale based on KEDA scaling rules and how these metrics are reflected beautifully in Managed Grafana dashboards.</p>
<h4>The <a href="https://github.com/tjoudeh/ACA.Grafana.Demo" target="_blank" rel="noopener">source code</a> is available on GitHub. A detailed tutorial on Container Apps can be <a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">accessed here</a>.</h4>
<p>The simple Microservices App components are shown below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1461" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture-1024x625.jpg" alt="Azure Managed Grafana Architecture" width="1024" height="625" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture-1024x625.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture-300x183.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture-768x469.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture-1536x937.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/AzureManagedGrafanaArchitecture.jpg 1631w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h3>An overview of Azure Managed Grafana</h3>
<p>Grafana is an open and composable observability and data visualization platform that permits you to visualize metrics, logs, and traces from multiple sources. Grafana is also available as <a href="https://learn.microsoft.com/en-us/azure/managed-grafana/overview" target="_blank" rel="noopener">Azure Managed Grafana</a>, a managed service that is optimized for the Azure environment that enables us to run Grafana natively within the Azure cloud platform.</p>
<p>As we will see in this post Azure Managed Grafana will allow us to bring together all our telemetry data in one place. It can access a wide variety of data sources, including our data stores in Azure as it has built-in support for <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/" target="_blank" rel="noopener">Azure Monitor</a> and <a href="https://learn.microsoft.com/en-us/azure/data-explorer/" target="_blank" rel="noopener">Azure Data Explorer</a>.</p>
<h3>Setting up Azure Managed Grafana</h3>
<p>We will now start by creating an Azure Managed Grafana workspace using the Azure CLI.</p>
<h4>Step 1: Create a new resource group</h4>
<p>We need to create a resource group that will contain our resources. Open a PowerShell terminal and use the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$RESOURCE_GROUP="grafana-poc-rg"
$LOCATION="eastus"

az group create `
  --name $RESOURCE_GROUP `
  --location $LOCATION</pre><p></p>
<h4>Step 2: Create an Azure Managed Grafana workspace</h4>
<p>Run the command below to create an Azure Managed Grafana instance with a <strong>system-assigned managed identity</strong>; this is the default authentication method for Azure Managed Grafana instances:</p><pre class="urvanov-syntax-highlighter-plain-tag">$GRAFANA_NAME = "grafana-aca-poc"

az grafana create `
--name $GRAFANA_NAME `
--resource-group $RESOURCE_GROUP</pre><p>When running the command above, you must log in to the Azure CLI with a user that has the <strong>Owner Role</strong> assigned on the subscription. This is needed because when a Grafana instance is created, Azure Managed Grafana grants it the <strong>Monitoring Reader</strong> role for all Azure Monitor data and Log Analytics resources within the subscription.<br />
With this in place, any resource provisioned in the subscription will inherit the <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/roles-permissions-security#monitoring-reader" target="_blank" rel="noopener">Monitoring Reader</a> role (can read all monitoring data) from the subscription.</p>
<p>Once the deployment is complete, you&#8217;ll see a note in the command-line output stating that the instance was successfully created. Take note of the <strong>endpoint</strong> URL listed in the CLI output. Based on the resource naming and location we&#8217;ve picked, it will be similar to the following URL: <strong>https://grafana-aca-poc-&lt;randomstring&gt;.eus.grafana.azure.com/</strong></p>
<p>Now we are ready to log in to the Managed Grafana instance and start defining our dashboards; we will do this after we deploy our simple microservices app.</p>
<h3>Deploy microservices application resources</h3>
<p>In the steps below, we will deploy the application components listed here; if you need thorough details on how those components are set up, you can visit the corresponding links:</p>
<ul>
<li>Azure Service Bus namespace, a topic, and a subscription as a service broker.</li>
<li>Azure Storage Account to store processed messages.</li>
<li>Azure Container Apps Environment</li>
<li>2 Dapr components (<a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">State Management API</a>) and (<a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Pub/Sub API</a>)</li>
<li>Azure Container Apps to host <a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Backend Processor/API</a> with internal ingress</li>
<li>Azure Container Apps to host Frontend API with external ingress</li>
</ul>
<h4>Step 1: Deploy Azure Service Bus and Azure Storage Account</h4>
<p>Use the commands below to deploy the Service Bus and Storage account:</p><pre class="urvanov-syntax-highlighter-plain-tag">$NamespaceName="shipmentsservices"
$TopicName="shipmentstopic"
$TopicSubscription="shipments-processor-subscription"

##Create Service Bus namespace
az servicebus namespace create `
--resource-group $RESOURCE_GROUP `
--name $NamespaceName `
--location $LOCATION

##Create a topic under namespace
az servicebus topic create `
--resource-group $RESOURCE_GROUP `
--namespace-name $NamespaceName `
--name $TopicName

##Create a topic subscription
az servicebus topic subscription create `
  --resource-group $RESOURCE_GROUP `
  --namespace-name $NamespaceName `
  --topic-name $TopicName `
  --name $TopicSubscription

##Create Storage Account
$STORAGE_ACCOUNT_NAME = "shipmentstore"
az storage account create `
--name $STORAGE_ACCOUNT_NAME `
--resource-group $RESOURCE_GROUP `
--location $LOCATION `
--sku Standard_LRS `
--kind StorageV2</pre><p></p>
<h4>Step 2: Deploy Azure Container App Environment</h4>
<p>Now we will create the ACA environment and associate 2 Dapr components, <strong>pubsub-servicebus</strong> and <strong>shipmentsstatestore</strong>; the contents of the Dapr component files <a href="https://github.com/tjoudeh/ACA.Grafana.Demo/blob/52a1d5dedc3211547a20e2443866dc49400011e6/components/pubsub-svcbus.yaml" target="_blank" rel="noopener">pubsub-svcbus.yaml</a> and <a href="https://github.com/tjoudeh/ACA.Grafana.Demo/blob/52a1d5dedc3211547a20e2443866dc49400011e6/components/statestore-tablestorage.yaml" target="_blank" rel="noopener">statestore-tablestorage.yaml</a> can be found at the links.</p>
<p>Creating the ACA environment will create a Log Analytics workspace by default, which will contain all the metrics, telemetry, and logs produced by any container apps deployed within this environment. Since the Managed Grafana instance has the <strong>Monitoring Reader</strong> role by default, container apps metrics will be visible on Grafana dashboards.</p><pre class="urvanov-syntax-highlighter-plain-tag">$ENVIRONMENT="orders-services-aca-env"
az containerapp env create `
--name $ENVIRONMENT `
--resource-group $RESOURCE_GROUP `
--location $LOCATION

## Set Dapr Components
az containerapp env dapr-component set `
--name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
--dapr-component-name pubsub-servicebus `
--yaml '.\components\pubsub-svcbus.yaml'

az containerapp env dapr-component set `
--name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
--dapr-component-name shipmentsstatestore `
--yaml '.\components\statestore-tablestorage.yaml'</pre><p></p>
<h4>Step 3: Deploy Azure Container App Backend Processor</h4>
<p>We will now deploy the backend API, which is responsible for processing messages published to Service Bus and storing the processed messages in Table Storage. This API is configured to use the KEDA Service Bus scaler to autoscale up to 10 instances if the number of messages in the Service Bus topic exceeds 10. This service also exposes an internal endpoint to allow the frontend API to invoke it over HTTP using Dapr service-to-service invocation. To read more about KEDA autoscaling, you can visit my <a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">previous post</a>.</p>
<p>I&#8217;m building and pushing the Docker images to Azure Container Registry; you can use the same approach or another registry such as Docker Hub. The source code of the backend processor can be found <a href="https://github.com/tjoudeh/ACA.Grafana.Demo/tree/master/Backend.Processor" target="_blank" rel="noopener">here</a>.</p>
<p>To deploy the Backend processor, use the commands below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$BACKEND_SVC_NAME="shipments-backend-processor"
$ACR_NAME="taskstrackeracr"
## Create Backend App
az containerapp create `
 --name $BACKEND_SVC_NAME  `
 --resource-group $RESOURCE_GROUP `
 --environment $ENVIRONMENT `
 --registry-server "$ACR_NAME.azurecr.io" `
 --image "$ACR_NAME.azurecr.io/$BACKEND_SVC_NAME" `
 --cpu 0.50 --memory 1.0Gi `
 --target-port 80 `
 --ingress 'internal' `
 --secrets "svcbus-connstring=&lt;Conn string&gt;" `
 --enable-dapr `
 --dapr-app-id $BACKEND_SVC_NAME `
 --dapr-app-port 80 `
 --min-replicas 1 `
 --max-replicas 10 `
 --scale-rule-name "queue-length-autoscaling" `
 --scale-rule-type "azure-servicebus" `
 --scale-rule-metadata "topicName=shipmentstopic" `
		       "subscriptionName=shipments-processor-subscription" `
		       "namespace=shipmentsservices" `
		       "messageCount=10" `
		       "connectionFromEnv=svcbus-connstring" `
 --scale-rule-auth "connection=svcbus-connstring"</pre><p></p>
<h4>Step 4: Deploy Azure Container App Frontend API</h4>
<p>This API is very simple: it exposes a single external API endpoint which we are going to load test using K6. This endpoint reads data from Azure Table Storage by invoking an internal endpoint in the Backend Processor. An HTTP autoscaling rule is configured to trigger if a container app replica receives 10 or more concurrent HTTP requests, and it can scale up to a maximum of 10 instances as per the configuration.</p>
<p>I&#8217;m building and pushing the Docker images to Azure Container Registry; you can use the same approach or another registry. The source code of the Frontend API can be found <a href="https://github.com/tjoudeh/ACA.Grafana.Demo/tree/master/Frontend.Api" target="_blank" rel="noopener">here</a>.</p>
<p>To deploy the Frontend API, use the commands below:</p><pre class="urvanov-syntax-highlighter-plain-tag">$FRONTEND_WEBAPP_NAME="shipments-frontend-api"
## Create Frontend App
az containerapp create `
--name $FRONTEND_WEBAPP_NAME  `
--resource-group $RESOURCE_GROUP `
--environment $ENVIRONMENT `
--registry-server "$ACR_NAME.azurecr.io" `
--image "$ACR_NAME.azurecr.io/$FRONTEND_WEBAPP_NAME" `
--cpu 0.25 --memory 0.5Gi `
--target-port 80 `
--ingress 'external' `
--enable-dapr `
--dapr-app-id  $FRONTEND_WEBAPP_NAME `
--dapr-app-port 80 `
--min-replicas 1 `
--max-replicas 10 `
--scale-rule-name "http-requests-rule" `
--scale-rule-http-concurrency 10 `
--scale-rule-type "http"</pre><p></p>
<h3>Generate Load to scale up the Container Apps</h3>
<p>In order to see some realistic dashboard data in Grafana, we need to simulate some load on the microservices app. To do so, we need to publish a large number of messages to Service Bus and simulate high traffic from the internet on the Frontend API. To achieve this, we do the following:</p>
<h4>Step 1: Publish a large number of messages to Service Bus Topic</h4>
<p>I&#8217;ve created a sample console application that asks the user how many messages to publish to the topic and sends them in batches of 25 messages. You can provide, for example, 10000 messages, and the console application will publish them in seconds. This makes the topic fill up quickly, so the KEDA Service Bus scaler monitoring the topic will trigger auto-scaling for the Backend Processor and start spinning up more replicas to cope with this sudden number of messages in the topic. The <a href="https://github.com/tjoudeh/ACA.Grafana.Demo/tree/master/Client.Publisher" target="_blank" rel="noopener">source code</a> of this console application can be found at the link.</p>
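<p>The batching behavior of the publisher can be sketched as follows (a generic JavaScript sketch with hypothetical names, not the actual console application code): split the requested number of messages into groups of 25, each group becoming one publish batch.</p>

```javascript
// Hypothetical sketch of the publisher's batching: chunk messages into
// groups of 25 so each group can be sent as one Service Bus batch.
function toBatches(messages, batchSize = 25) {
  const batches = [];
  for (let i = 0; i < messages.length; i += batchSize) {
    batches.push(messages.slice(i, i + batchSize));
  }
  return batches;
}

// e.g. 10000 messages split into batches of 25 yields 400 batches
const messages = Array.from({ length: 10000 }, (_, i) => ({ shipmentId: i }));
const batches = toBatches(messages);
```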
<h4>Step 2: Generate HTTP traffic load to scale up Container App Frontend API</h4>
<p>I will be using an open source tool named <a href="https://k6.io/" target="_blank" rel="noopener">K6</a> to simulate virtual user traffic to the Frontend API. The traffic we generate should trigger the Frontend API&#8217;s HTTP auto-scaling rule and start spinning up more replicas to handle the sudden number of concurrent HTTP requests.</p>
<p>Using the tool is very simple: you create a JavaScript file and then invoke it from the K6 CLI. To install the K6 CLI, follow the installation instructions at this <a href="https://k6.io/docs/getting-started/installation/" target="_blank" rel="noopener">link</a>. On Windows, there is an <a href="https://k6.io/docs/getting-started/installation/#windows" target="_blank" rel="noopener">.msi package</a> too, which makes installation very simple.</p>
<p>The content of the JavaScript file will contain the following:</p><pre class="urvanov-syntax-highlighter-plain-tag">import http from 'k6/http';
import {sleep} from 'k6';

const baseUrl = 'https://shipments-frontend-api.happysea-573aaf45.eastus.azurecontainerapps.io';

  export let options = {
        vus: 100,
        duration: '180s'
  };

export default function () {
    var idfrom = getRandomInt(1,10000);
    http.get(`${baseUrl}/api/shipments?idfrom=${idfrom}`);
    sleep(1);
  }

function getRandomInt(min, max) {
    min = Math.ceil(min);
    max = Math.floor(max);
    return Math.floor(Math.random() * (max - min) + min); // The maximum is exclusive and the minimum is inclusive
  }</pre><p>This script will use 100 virtual users for a duration of 3 minutes and will keep sending GET requests to the public endpoint <strong>/api/shipments?idfrom=&lt;RandomId&gt;</strong>.</p>
<p>To invoke the script, open the command line and use the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">k6 run script.js</pre><p>By now we&#8217;ve generated a load of data on every component within our simple microservices app. Let&#8217;s start configuring the Grafana dashboards and exploring the metrics.</p>
<h3>Configure Managed Grafana Dashboards</h3>
<h4>Step 1: Login to Managed Grafana Instance</h4>
<p>Now we&#8217;ll use the Managed Grafana endpoint URL to log in to Grafana. To get the endpoint URL, you can use the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az grafana show `
--name $GRAFANA_NAME `
--query properties.endpoint</pre><p>Open the URL in the browser and log in with a corporate account, as Managed Grafana doesn&#8217;t support login with personal Microsoft accounts at the time of writing this post.</p>
<p>After you log in successfully, we need to <strong>Import</strong> a couple of Grafana dashboards. Grafana has more than 40 Azure dashboards built by the community, some of them by Microsoft. You can explore the Azure dashboards by clicking on this <a href="https://grafana.com/grafana/dashboards/?category=azure">link.</a></p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1464" src="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport-1024x601.jpg" alt="Azure Managed Grafana Import" width="1024" height="601" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport-1024x601.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport-300x176.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport-768x451.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/GrafanaImport.jpg 1061w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>In our case, we are interested in the following 4 dashboards:</p>
<ul>
<li>Single Azure Container App view: <a href="https://grafana.com/grafana/dashboards/16592-azure-container-apps-container-app-view/" target="_blank" rel="noopener">https://grafana.com/grafana/dashboards/16592-azure-container-apps-container-app-view/</a></li>
<li>Azure Container Apps Aggregate view: <a href="https://grafana.com/grafana/dashboards/16591-azure-container-apps-aggregate-view/" target="_blank" rel="noopener">https://grafana.com/grafana/dashboards/16591-azure-container-apps-aggregate-view/</a></li>
<li>Azure Storage account: <a href="https://grafana.com/grafana/dashboards/14469-azure-insights-storage-accounts/" target="_blank" rel="noopener">https://grafana.com/grafana/dashboards/14469-azure-insights-storage-accounts/</a></li>
<li>Azure Service Bus: <a href="https://grafana.com/grafana/dashboards/10533-azure-service-bus/" target="_blank" rel="noopener">https://grafana.com/grafana/dashboards/10533-azure-service-bus/</a></li>
</ul>
<p>Importing the dashboards is a simple process: after you click on the Import menu item, you will be redirected to the page below.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaLoadDashboard.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1465" src="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaLoadDashboard.jpg" alt="Managed Grafana Load Dashboard" width="940" height="871" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GrafanaLoadDashboard.jpg 940w, https://bitoftech.net/wp-content/uploads/2022/09/GrafanaLoadDashboard-300x278.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GrafanaLoadDashboard-768x712.jpg 768w" sizes="auto, (max-width: 940px) 100vw, 940px" /></a></p>
<p>Provide the URL of the dashboard or the ID of the dashboard and click on the Load button. Once the metadata of the dashboard is loaded, you need to set the Datasource to <strong>Azure Monitor</strong> as in the image below, and then click on Import:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1466" src="https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport-948x1024.jpg" alt="Container App Managed Grafana Import" width="948" height="1024" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport-948x1024.jpg 948w, https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport-278x300.jpg 278w, https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport-768x830.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ACAGrafanaImport.jpg 1031w" sizes="auto, (max-width: 948px) 100vw, 948px" /></a></p>
<p>You need to follow the same steps for the remaining 3 dashboards (Azure Service Bus, Azure Storage, and the Container Apps aggregate view).</p>
<p>After you import the 4 dashboards, you can navigate to the Dashboards menu item, select <strong>Browse</strong> and you should see the 4 dashboards imported successfully as in the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1467" src="https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards-1024x570.jpg" alt="Managed Grafana Dashboards" width="1024" height="570" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards-1024x570.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards-300x167.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards-768x428.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards-1536x856.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ManagedGrafanaDashboards.jpg 1786w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>You can now select any of the dashboards and start seeing the beautiful visualization of metrics in one place. Let&#8217;s check a snapshot of each dashboard taken while the system was under load:</p>
<h4>Azure Service Bus Dashboard</h4>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1468" src="https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-1024x769.jpg" alt="Grafana Service Bus" width="1024" height="769" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-1024x769.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-300x225.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-768x577.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-1536x1154.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ServiceBusGrafana-2048x1539.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Pay attention to the panel titled <strong>Service Bus &#8211; Messages in Queue / Topic</strong>; it shows the spike of messages when the console application was publishing a load of messages. There are many other useful panels you can explore.</p>
<h4>Azure Storage Dashboard</h4>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1469" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-1024x891.jpg" alt="" width="1024" height="891" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-1024x891.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-300x261.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-768x669.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-1536x1337.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageGrafana-2048x1783.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>This dashboard has very useful panels showing, for example, the size of incoming data while the Backend Processor was writing processed messages to Table Storage, and the size of outgoing data when the same processor was serving read requests from Table Storage.</p>
<h4>Backend Processor Azure Container App Dashboard</h4>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1470" src="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-1024x944.jpg" alt="Container Apps Grafana" width="1024" height="944" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-1024x944.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-300x277.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-768x708.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-1536x1416.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsBackEndGrafana-2048x1888.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Check the panel titled <strong>Replica Count</strong> and notice how 10 replicas were provisioned at the same time the Service Bus was receiving a load of messages. There are many other useful panels, such as Memory and Network, which are not shown in this snapshot.</p>
<h4>Frontend API Azure Container App Dashboard</h4>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1471" src="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-1024x921.jpg" alt="Container Apps Grafana" width="1024" height="921" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-1024x921.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-300x270.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-768x691.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-1536x1382.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsFrontEndGrafana-2048x1842.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Take a look at the panel titled <strong>Max Request Count</strong>, which gives an indication of the number of inbound HTTP requests and shows how the Container App started to scale out when there was a sudden increase in the HTTP request count.</p>
<h4>Azure Container Apps Aggregated View</h4>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1472" src="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView-1024x574.jpg" alt="Container Apps Aggregated View" width="1024" height="574" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView-1024x574.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView-300x168.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView-768x431.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView-1536x861.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppsAggregatedView.jpg 1787w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>This is a simple dashboard that will be useful if you have multiple Container Apps or Container Apps Environments; it acts as an entry point to access each Container App.</p>
<h3>Summary</h3>
<p>As we saw in this post, Managed Grafana lets you bring all your telemetry data together in one place. You can customize dashboards the way you prefer, and you can rely on community dashboards to help you get started. To learn more about customization, visit the <a href="https://grafana.com/docs/grafana/latest/dashboards/" target="_blank" rel="noopener">Grafana documentation.</a></p>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://unsplash.com/photos/jOqJbvo1P9g?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink" target="_blank" rel="noopener">Featured Image Credit</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/managed-grafana/quickstart-managed-grafana-cli" target="_blank" rel="noopener">Create an Azure Managed Grafana instance using the Azure CLI</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/managed-grafana/how-to-authentication-permissions?tabs=azure-portal" target="_blank" rel="noopener">Set up Azure Managed Grafana authentication and permissions</a></li>
<li><a href="https://grafana.com/grafana/dashboards/16592-azure-container-apps-container-app-view/" target="_blank" rel="noopener">Azure Container Apps &#8211; Container App View dashboard</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/09/29/monitor-microservices-app-using-azure-managed-grafana/">Monitor Microservices App using Azure Managed Grafana</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/29/monitor-microservices-app-using-azure-managed-grafana/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1460</post-id>	</item>
		<item>
		<title>Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</title>
		<link>https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/</link>
					<comments>https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/#respond</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Thu, 22 Sep 2022 02:30:42 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[KEDA]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1430</guid>

					<description><![CDATA[<p>This is the eleventh part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the eleventh part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li><a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a></li>
<li><a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/" target="_blank" rel="noopener">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a></li>
<li><a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a></li>
<li><a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a></li>
<li>Azure Container Apps Auto Scaling with KEDA &#8211; (This Post)</li>
<li><a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/" target="_blank" rel="noopener">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a></li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Azure Container Apps Auto Scaling with KEDA</h2>
<p>In this post, we will explore how to configure auto-scaling rules in Container Apps. In my opinion, the <strong>Auto Scaling</strong> feature is one of the key features of any <strong>Serverless</strong> hosting platform: you want your application to respond dynamically to increased workload demand in order to maintain your system&#8217;s availability and performance.<br />
Container Apps support horizontal scaling (<strong>Scaling Out</strong>) by adding more replicas (new instances of the Container App) and splitting the workload across multiple replicas to process the work in parallel. When demand decreases, Container Apps will <strong>Scale In</strong> by removing the unutilized replicas according to your configured scaling rule. With this approach, you pay only for the replicas provisioned during the period of increased demand, and you can also configure the scaling rule to scale to <strong>Zero</strong> replicas, which means that no charges are incurred when your Container App is scaled to zero.</p>
<p>Azure Container Apps supports the following scaling triggers:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/container-apps/scale-app#http" target="_blank" rel="noopener">HTTP traffic</a>: Scaling based on the number of concurrent HTTP requests to your revision.</li>
<li><a href="https://learn.microsoft.com/en-us/azure/container-apps/scale-app#cpu" target="_blank" rel="noopener">CPU</a> or <a href="https://learn.microsoft.com/en-us/azure/container-apps/scale-app#memory" target="_blank" rel="noopener">Memory</a> usage: Scaling based on the amount of CPU utilized or memory consumed by a replica.</li>
<li>Azure Storage Queues: Scaling based on the number of messages in Azure Storage Queue.</li>
<li>Event-driven using <a href="https://keda.sh/" target="_blank" rel="noopener">KEDA</a>: Scaling based on event triggers, such as the number of messages in an Azure Service Bus Topic or the number of blobs in an Azure Blob Storage container.</li>
</ul>
<p>As I covered in the initial posts, Azure Container Apps utilizes different open source technologies, and KEDA is one of them, used to enable event-driven autoscaling. KEDA is installed by default when you provision your Container App, so we do not need to worry about installing it; all we need to focus on is enabling and configuring our Container App scaling rules.</p>
<p>In this post, I will be focusing on event-driven autoscaling using KEDA.</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>An overview of KEDA</h3>
<p><a href="https://keda.sh/" target="_blank" rel="noopener">KEDA</a> stands for Kubernetes Event-Driven Autoscaler. It is an open-source project initially started by <a href="https://cloudblogs.microsoft.com/opensource/2019/05/06/announcing-keda-kubernetes-event-driven-autoscaling-containers/" target="_blank" rel="noopener">Microsoft and Red Hat</a> to allow any Kubernetes workload to benefit from the event-driven architecture model. Prior to KEDA, horizontally scaling a Kubernetes deployment was achieved through the Horizontal Pod Autoscaler (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" target="_blank" rel="noopener">HPA</a>). The HPA relies on resource metrics such as Memory and CPU to determine when additional replicas should be deployed. Within any enterprise application, there will be other external metrics we want to scale our application on, such as the lag of a Kafka topic, the length of an Azure Service Bus queue, or metrics obtained from a Prometheus query. KEDA exists to fill this gap: it provides a framework for scaling based on events, in conjunction with HPA scaling based on CPU and Memory, and offers more than <a href="https://keda.sh/docs/2.8/scalers/" target="_blank" rel="noopener">50 scalers</a> to pick from based on your business need.</p>
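<p>To make the HPA mechanism above concrete, here is a minimal sketch of the proportional formula the HPA uses to compute a desired replica count from a current metric value and a target value. This is an illustration only (it ignores the HPA&#8217;s stabilization window and tolerance band), not code used anywhere in this tutorial:</p>

```javascript
// Simplified sketch of the HPA scaling formula:
// desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
function desiredReplicas(currentReplicas, currentMetricValue, targetMetricValue) {
  return Math.ceil(currentReplicas * (currentMetricValue / targetMetricValue));
}

// 2 replicas each observing an average of 25 messages, with a target of
// 10 messages per replica, scale out to 5 replicas:
console.log(desiredReplicas(2, 25, 10)); // 5
```

KEDA&#8217;s role is to feed external metrics (queue lengths, topic lag, and so on) into this same mechanism so that scaling is no longer limited to CPU and Memory.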
<h3>Configure Scaling Rule in the Backend Background Processor Project</h3>
<p>We need to configure our Backend Background Processor service named &#8220;tasksmanager-backend-processor&#8221; to scale out and increase the number of replicas based on the number of messages in the Topic named &#8220;tasksavedtopic&#8221;. If our service is under a huge workload and a single replica is not able to cope with the number of messages on the topic, we need the Container App to spin up more replicas to parallelize the processing of messages on this topic.</p>
<p>So our requirements for scaling the backend processor are as follows:</p>
<ul>
<li>For every 10 messages on the Azure Service Bus Topic, scale out by one replica.</li>
<li>When there are no messages on the topic, scale in to Zero replicas.</li>
<li>The maximum number of replicas should not exceed 5.</li>
</ul>
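<p>The three rules above can be sketched as a small function. This is a rough illustration of the scaler&#8217;s arithmetic (not code used by the app): one replica per 10 messages, scaling to zero when the topic is empty, and never exceeding 5 replicas:</p>

```javascript
// Approximate mapping from Service Bus topic backlog to replica count
// for the rules above (messageCount = 10, max replicas = 5, min = 0).
function replicasForBacklog(messages, messagesPerReplica = 10, maxReplicas = 5) {
  if (messages <= 0) return 0; // scale in to Zero when the topic is empty
  const needed = Math.ceil(messages / messagesPerReplica);
  return Math.min(needed, maxReplicas); // respect the replica cap
}

console.log(replicasForBacklog(0));   // 0
console.log(replicasForBacklog(25));  // 3
console.log(replicasForBacklog(200)); // 5 (capped)
```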
<p>To achieve this, we will start by looking into the <a href="https://keda.sh/docs/2.8/scalers/azure-service-bus/" target="_blank" rel="noopener">KEDA Azure Service Bus scaler</a>. This specification describes the 
			<span id="urvanov-syntax-highlighter-69ac93cda2aa0947582619" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">azure</span><span class="crayon-o">-</span><span class="crayon-v">servicebus</span></span></span>  trigger for an Azure Service Bus Queue or Topic. Let&#8217;s take a look at the YAML file below, which contains a template for the KEDA specification:</p><pre class="urvanov-syntax-highlighter-plain-tag">triggers:
- type: azure-servicebus
  metadata:
    # Required: queueName OR topicName and subscriptionName
    queueName: functions-sbqueue
    # or
    topicName: functions-sbtopic
    subscriptionName: sbtopic-sub1
    # Optional, required when pod identity is used
    namespace: service-bus-namespace
    # Optional, can use TriggerAuthentication as well
    connectionFromEnv: SERVICEBUS_CONNECTIONSTRING_ENV_NAME # This must be a connection string for a queue itself, and not a namespace level (e.g. RootAccessPolicy) connection string [#215](https://github.com/kedacore/keda/issues/215)
    # Optional
    messageCount: "5" # Optional. Count of messages to trigger scaling on. Default: 5 messages
    cloud: Private # Optional. Default: AzurePublicCloud
    endpointSuffix: servicebus.airgap.example # Required when cloud=Private</pre><p>Let&#8217;s review the parameters:</p>
<ul>
<li>The property 
			<span id="urvanov-syntax-highlighter-69ac93cda2aa5284385813" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">type</span></span></span>  is set to &#8220;azure-servicebus&#8221;, each KEDA scaler specification file has a unique type.</li>
<li>One of the properties 
			<span id="urvanov-syntax-highlighter-69ac93cda2aa7186666311" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">queueName</span></span></span>  or 
			<span id="urvanov-syntax-highlighter-69ac93cda2aa9614970001" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">topicName</span></span></span> should be provided, in our case, it will be &#8220;topicName&#8221; and we will use the value &#8220;tasksavedtopic&#8221;.</li>
<li>The property 
			<span id="urvanov-syntax-highlighter-69ac93cda2aab891894070" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">subscriptionName</span></span></span>  will be set to &#8220;tasksmanager-backend-processor&#8221;. This represents the subscription associated with the topic; it is not needed if we are using queues.</li>
<li>The property 
			<span id="urvanov-syntax-highlighter-69ac93cda2aad219754278" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">connectionFromEnv</span></span></span>  will be set to reference a secret stored in our Container App. We will not use the Azure Service Bus shared access policy (connection string) directly; the shared access policy will be stored in the Container App secrets, and the secret will be referenced here. Please note that the Service Bus shared access policy needs to be of type <strong>Manage</strong>; this is required for KEDA to be able to get metrics from Service Bus and read the number of messages in the queue or topic.</li>
<li>The property 
			<span id="urvanov-syntax-highlighter-69ac93cda2aaf075161507" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-v">messageCount</span></span></span>  is used to decide when scaling out should be triggered, in our case, it will be set to 10.</li>
<li>The property 
			<span id="urvanov-syntax-highlighter-69ac93cda2ab1889865656" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">cloud </span><span class="crayon-v">represents</span></span></span> the name of the cloud environment that the service bus belongs to.</li>
</ul>
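<p>Putting the parameter values from the bullets above together, the KEDA trigger for our backend processor would look roughly like the following. This is a sketch for illustration; Azure Container Apps maps this KEDA template to its own scale-rule schema rather than consuming this YAML directly, and the secret name <strong>svcbus-connstring</strong> matches the Container App secret we create in Step 1 below:</p>

```yaml
triggers:
- type: azure-servicebus
  metadata:
    topicName: tasksavedtopic
    subscriptionName: tasksmanager-backend-processor
    messageCount: "10"
    # Resolved from the Container App secret rather than a raw connection string
    connectionFromEnv: svcbus-connstring
```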
<p>Note about authentication: KEDA scaler for Azure Service Bus supports different authentication mechanisms such as <a href="https://learn.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity" target="_blank" rel="noopener">Pod Managed Identity</a>, <a href="https://azure.github.io/azure-workload-identity/docs/" target="_blank" rel="noopener">Azure AD Workload Identity</a>, and shared access policy (connection string). When using KEDA with Azure Container Apps, at the time of writing this post, the only supported authentication mechanism is Connection Strings.</p>
<p>Azure Container Apps has its own proprietary schema that the KEDA scaler template maps to when defining a custom scale rule. You can define this scaling rule via Container Apps <a href="https://learn.microsoft.com/en-us/azure/container-apps/azure-resource-manager-api-spec?tabs=arm-template#container-app-examples" target="_blank" rel="noopener">ARM templates</a>, a <a href="https://learn.microsoft.com/en-us/azure/container-apps/azure-resource-manager-api-spec?tabs=yaml#container-app-examples" target="_blank" rel="noopener">yaml manifest</a>, the Azure CLI, or the Azure Portal. In this post, I will cover how to do it from the Azure Portal and the Azure CLI.</p>
<h4>Step 1: Create a new secret in Container App</h4>
<p>Let&#8217;s now create a secret named &#8220;svcbus-connstring&#8221; in our Container App named &#8220;tasksmanager-backend-processor&#8221;. This secret will contain the value of the Azure Service Bus shared access policy (connection string) with the &#8220;Manage&#8221; policy. To do so, run the first command below in the Azure CLI to get the connection string, and then add the secret using the second command:</p><pre class="urvanov-syntax-highlighter-plain-tag">##List Service Bus Access Policy RootManageSharedAccessKey
$RESOURCE_GROUP = "tasks-tracker-rg"
$NamespaceName = "tasksTracker"
az servicebus namespace authorization-rule keys list `
  --resource-group $RESOURCE_GROUP `
  --namespace-name $NamespaceName `
  --name RootManageSharedAccessKey `
  --query primaryConnectionString `
  --output tsv

##Create a new secret named 'svcbus-connstring' in the backend processor container app
az containerapp secret set `
 --name $BACKEND_SVC_NAME `
 --resource-group $RESOURCE_GROUP `
 --secrets "svcbus-connstring=&lt;Connection String from Service Bus&gt;"</pre><p></p>
<h4>Step 2: Create a Custom Scaling Rule from Azure CLI</h4>
<p>Now we are ready to add a new custom scaling rule that matches the business requirements. To do so, run the below Azure CLI command:</p>
<p><strong>Note: </strong>I had to update the <code>az containerapp</code> extension in order to create a scaling rule from the CLI. To update it, run the following command: <code>az extension update --name containerapp</code></p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
  --name $BACKEND_SVC_NAME `
  --resource-group $RESOURCE_GROUP `
  --min-replicas 0 `
  --max-replicas 5 `
  --scale-rule-name "queue-length" `
  --scale-rule-type "azure-servicebus" `
  --scale-rule-auth "connection=svcbus-connstring" `
  --scale-rule-metadata "topicName=tasksavedtopic" `
                          "subscriptionName=tasksmanager-backend-processor" `
                          "namespace=tasksTracker" `
                          "messageCount=10" `
                          "connectionFromEnv=svcbus-connstring"</pre><p>What we have done is the following:</p>
<ul>
<li>Setting the minimum number of replicas to zero means that this Container App can be scaled in to zero replicas if there are no new messages on the topic.</li>
<li>Setting the maximum number of replicas to 5 means that this Container App will not exceed 5 replicas regardless of the number of messages on the topic.</li>
<li>Setting a friendly name for the scale rule, &#8220;queue-length&#8221;, which will be visible in the Azure Portal.</li>
<li>Setting the scale rule type to &#8220;azure-servicebus&#8221;; this is important to tell KEDA which type of scaler our Container App is configuring.</li>
<li>Setting the authentication mechanism to type &#8220;connection&#8221; and indicating which secret reference will be used, in our case &#8220;svcbus-connstring&#8221;.</li>
<li>Setting the &#8220;metadata&#8221; dictionary of the scale rule; these match the metadata properties in the KEDA template we discussed earlier.</li>
</ul>
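<p>For completeness, the same rule expressed in the Container Apps yaml manifest schema would look roughly like the sketch below (based on the CLI parameters above; consult the Container Apps yaml spec before relying on the exact property names):</p><pre class="urvanov-syntax-highlighter-plain-tag">scale:
  minReplicas: 0
  maxReplicas: 5
  rules:
  - name: queue-length
    custom:
      type: azure-servicebus
      metadata:
        topicName: tasksavedtopic
        subscriptionName: tasksmanager-backend-processor
        namespace: tasksTracker
        messageCount: "10"
      auth:
      - secretRef: svcbus-connstring
        triggerParameter: connection</pre>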
<p>Once you run this command, the custom scale rule will be created, and we can navigate to the Azure Portal to see the details.</p>
<h4>Step 3: Create a Custom Scaling Rule from the Azure Portal</h4>
<p>Select your Container App named &#8220;tasksmanager-backend-processor&#8221; and navigate to the tab named &#8220;Scale&#8221;. Select your minimum and maximum replica count, then click on &#8220;Add Scale Rule&#8221;, set the values as shown below, and click on Save. This will create a new revision of the Container App with the scale rule applied.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppScaleRule.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-1435 " src="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppScaleRule-259x300.jpg" alt="Container App Scale Rule" width="505" height="585" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppScaleRule-259x300.jpg 259w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppScaleRule-768x890.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ContainerAppScaleRule.jpg 866w" sizes="auto, (max-width: 505px) 100vw, 505px" /></a></p>
<h4>Step 4: Run an end-to-end test and generate a load of messages</h4>
<p>Now we are ready to test our Azure Service Bus scaling rule. To generate a load of messages, we can use the Service Bus Explorer under our Azure Service Bus namespace: navigate to Azure Service Bus, select your topic/subscription, and then select &#8220;Service Bus Explorer&#8221;.</p>
<p>The message structure our backend processor expects is shown in the JSON below. Copy this message, click on the &#8220;Send messages&#8221; button, paste the message content, set the content type to &#8220;application/json&#8221;, check the &#8220;Repeat Send&#8221; checkbox, select 500 messages with an interval of 5 ms between them, and click &#8220;Send&#8221; when you are ready.</p><pre class="urvanov-syntax-highlighter-plain-tag">{
    "data": {
        "isCompleted": false,
        "isOverDue": false,
        "taskAssignedTo": "temp@mail.com",
        "taskCreatedBy": "Readiness Prob",
        "taskCreatedOn": "2022-08-18T12:45:22.0984036Z",
        "taskDueDate": "2022-08-19T12:45:22.0983978Z",
        "taskId": "6a051aeb-f567-40dd-a434-39927f2b93c5",
        "taskName": "Health Readiness Task"
    }
}</pre><p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1437" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-1024x518.jpg" alt="Azure Service Bus Messages Load" width="1024" height="518" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-1024x518.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-300x152.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-768x388.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-1536x777.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/AzureServiceBusMessagesLoad-2048x1036.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>Step 5: Verify that multiple replicas are created</h4>
<p>If everything is set up correctly, 5 replicas will be created based on the number of messages we generated on the topic. There are various ways to verify this:</p>
<ul>
<li>You can look at the &#8220;Live Metrics&#8221; within Application Insights; you will see instantly that there are 5 replicas of &#8220;tasksmanager-backend-processor&#8221; provisioned to work in parallel and consume the messages:</li>
</ul>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1438" src="https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas-1024x367.jpg" alt="App insight Replicas" width="1024" height="367" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas-1024x367.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas-300x107.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas-768x275.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AppInsightsReplicas.jpg 1527w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<ul>
<li>You can verify this from the Container App&#8217;s &#8220;Console&#8221; tab, where you will see those replicas in the drop-down list:</li>
</ul>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole.png"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1439" src="https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole-1024x386.png" alt="Container Apps Console Replicas " width="1024" height="386" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole-1024x386.png 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole-300x113.png 300w, https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole-768x290.png 768w, https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole-1536x579.png 1536w, https://bitoftech.net/wp-content/uploads/2022/09/ReplicaConsole.png 2042w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<ul>
<li>You can use the below Azure CLI command to list the names of replicas:</li>
</ul>
<p></p><pre class="urvanov-syntax-highlighter-plain-tag">##Query Number &amp; names of Replicas
  az containerapp replica list `
    --name $BACKEND_SVC_NAME `
    --resource-group $RESOURCE_GROUP `
    --query [].name</pre><p></p>
<h4>Note about KEDA Scale In:</h4>
<p>Container Apps implements the <a href="https://keda.sh/docs/2.8/concepts/scaling-deployments/#scaledobject-spec" target="_blank" rel="noopener">KEDA ScaledObject</a> with the following default settings:</p>
<ul>
<li>pollingInterval: 30 seconds. This is the interval to check each trigger on. By default, KEDA will check each trigger source on every ScaledObject every 30 seconds.</li>
<li>cooldownPeriod: 300 seconds. The period to wait after the last trigger is reported active before scaling in the resource back to 0. By default, it’s 5 minutes (300 seconds).</li>
</ul>
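<p>In KEDA terms, the defaults above correspond to the following fields on the ScaledObject spec; the snippet below is shown only to illustrate where these values live, since Container Apps manages this object for you:</p><pre class="urvanov-syntax-highlighter-plain-tag">apiVersion: keda.sh/v1alpha1
kind: ScaledObject
spec:
  pollingInterval: 30   # seconds between trigger checks (default)
  cooldownPeriod: 300   # seconds to wait before scaling in to 0 (default)
  minReplicaCount: 0
  maxReplicaCount: 5</pre>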
<p>Currently, there is no way to override these values, but there is an <a href="https://github.com/microsoft/azure-container-apps/issues/388" target="_blank" rel="noopener">open issue</a> on the Container Apps repo that the product group is tracking; 5 minutes might be a long period to wait for instances to be scaled in after they finish processing messages.</p>
<p>That&#8217;s it for now. Hopefully, you will find the KEDA framework easy to work with when scaling your Container Apps.</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://azure.github.io/Cloud-Native/blog/11-scaling-container-apps/#translating-keda-templates-to-azure-templates" target="_blank" rel="noopener">Scaling Container Apps</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/container-apps/scale-app" target="_blank" rel="noopener">Set scaling rules in Azure Container Apps</a></li>
<li><a href="https://keda.sh/docs/2.8/concepts/scaling-deployments/#scaledobject-spec" target="_blank" rel="noopener">Scaling Deployments, StatefulSets &amp; Custom Resources</a></li>
<li><a href="https://unsplash.com/photos/jOqJbvo1P9g?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditShareLink" target="_blank" rel="noopener">Featured Image Credit</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1430</post-id>	</item>
		<item>
		<title>Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</title>
		<link>https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/</link>
					<comments>https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/#respond</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Fri, 16 Sep 2022 01:37:47 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Bicep]]></category>
		<category><![CDATA[IaC]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1404</guid>

					<description><![CDATA[<p>This is the tenth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the tenth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li><a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a></li>
<li><a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/" target="_blank" rel="noopener">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a></li>
<li><a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a></li>
<li>Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; (This Post)</li>
<li><a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a></li>
<li><a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/" target="_blank" rel="noopener">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a></li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps</h2>
<p>In the <a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">previous post</a>, we used GitHub Actions to continuously build and deploy the 3 Azure Container Apps after any code commits to a specific branch. In this post, we will define the proper process to automate infrastructure provisioning by creating the scripts/templates that provision the resources; this practice is known as IaC (Infrastructure as Code).</p>
<p>Once we have this in place, IaC deployments will benefit us in key ways such as:</p>
<ol>
<li>Increased confidence in deployments: IaC ensures consistency and reduces human error in resource provisioning.</li>
<li>Avoidance of configuration drift: IaC is an idempotent operation, which means it provides the same result each time it&#8217;s run.</li>
<li>Provisioning of new environments: during the lifecycle of the application you might need to run penetration testing or load testing for a short period of time in a totally isolated environment. With IaC in place, it is a matter of executing the scripts to recreate an environment identical to the production one.</li>
<li>When you provision resources from the Azure Portal, many processes are abstracted; in our case, think of creating an Azure Container Apps Environment from the portal: behind the scenes it creates a log analytics workspace and associates it with the environment. IaC helps provide a better understanding of how Azure works and how to troubleshoot issues that might arise.</li>
</ol>
<h3>ARM Templates in Azure</h3>
<p>ARM templates are files that define the infrastructure and configuration for your deployment. A template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it.</p>
<p>Within Azure there are 2 ways to create IaC: we can use <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview" target="_blank" rel="noopener">JSON ARM templates</a> or <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep" target="_blank" rel="noopener">Bicep</a> (a domain-specific language). From my personal experience, I&#8217;ve used JSON ARM templates in real-world scenarios, and they tend to be complex to manage and maintain, especially as the project grows and the number of components and dependencies increases.</p>
<p>As for Bicep, this is my first time using it. So far I have found it easier to work with, and you can be more productive compared to ARM templates. It is worth mentioning that Bicep code is eventually compiled into ARM templates; this process is called &#8220;transpilation&#8221;.</p>
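<p>If you are curious about the transpilation step, you can see the ARM JSON that a Bicep file compiles into by running the command below, assuming the Bicep CLI is installed through the Azure CLI (the file name here is illustrative); it emits a .json file next to the .bicep file:</p><pre class="urvanov-syntax-highlighter-plain-tag">az bicep build --file main.bicep</pre>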
<p>If you want to learn more about Bicep, I highly recommend checking the Microsoft Learn path <a href="https://docs.microsoft.com/en-us/training/paths/fundamentals-bicep/" target="_blank" rel="noopener">Fundamentals of Bicep</a> and the post <a href="https://zimmergren.net/getting-started-azure-bicep/" target="_blank" rel="noopener">Getting started with Azure Bicep</a> by <a href="https://github.com/zimmergren" target="_blank" rel="noopener">Tobias Zimmergren</a>.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1412" src="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l-1024x632.jpg" alt="Bicep Container Apps ARM" width="1024" height="632" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l-1024x632.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l-300x185.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l-768x474.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-Bicep-l.jpg 1109w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Creating the Infrastructure as Code using Bicep</h3>
<p>Let&#8217;s get started by defining the Bicep modules needed to create the infrastructure code. What we want to achieve by the end of this post is a new resource group containing all the resources and configuration (connection strings, secrets, env variables, Dapr components, etc.) we used to build our solution; we should end up with a new resource group that contains the resources below.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/BicepResources.jpg"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1407" src="https://bitoftech.net/wp-content/uploads/2022/09/BicepResources.jpg" alt="Azure Bicep Resources" width="934" height="733" srcset="https://bitoftech.net/wp-content/uploads/2022/09/BicepResources.jpg 934w, https://bitoftech.net/wp-content/uploads/2022/09/BicepResources-300x235.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/BicepResources-768x603.jpg 768w" sizes="auto, (max-width: 934px) 100vw, 934px" /></a></p>
<h4>Step 1: Add the needed extension to VS Code Or Visual Studio</h4>
<p>If you are using VS Code, you need to install an extension named &#8220;<a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep" target="_blank" rel="noopener">Bicep</a>&#8221;. If you are using Visual Studio, the product group announced the release of a Bicep extension for Visual Studio version 17.3 and higher; you can get it from <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep">here</a>. Both extensions simplify building Bicep files by offering IntelliSense, validation, a listing of all available resource types, etc.</p>
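<p>If you prefer the command line, the VS Code extension can also be installed as shown below, assuming the <code>code</code> CLI is on your path (the extension id matches the marketplace listing linked above):</p><pre class="urvanov-syntax-highlighter-plain-tag">code --install-extension ms-azuretools.vscode-bicep</pre>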
<h4>Step 2: Define an Azure Log Analytics Workspace</h4>
<p>Add a new folder named &#8220;deploy&#8221; on the root project directory then add a new file named &#8220;logAnalyticsWorkspace.bicep&#8221;, use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param logAnalyticsWorkspaceName string
param location string = resourceGroup().location

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: logAnalyticsWorkspaceName
  location: location
  properties: any({
    retentionInDays: 30
    features: {
      searchVersion: 1
    }
    sku: {
      name: 'PerGB2018'
    }
  })
}

output workspaceResourceId string = logAnalyticsWorkspace.id
output logAnalyticsWorkspaceCustomerId string = logAnalyticsWorkspace.properties.customerId
//Do not use outputs to return secrets, we will use a different way
//var sharedKey = listKeys(logAnalyticsWorkspace.id, logAnalyticsWorkspace.apiVersion).primarySharedKey
//output logAnalyticsWorkspacePrimarySharedKey string = sharedKey</pre><p>This module takes 2 input parameters, &#8220;logAnalyticsWorkspaceName&#8221; and &#8220;location&#8221;; the location defaults to the resource group&#8217;s location. Bicep has a function named resourceGroup() from which you can get the location, yet you can override this default value by providing the location when calling this module.</p>
<p>This module outputs 2 parameters: the first is &#8220;workspaceResourceId&#8221; and the second is &#8220;logAnalyticsWorkspaceCustomerId&#8221;. Both outputs are needed as input for the Azure Container Apps Environment and the Application Insights resource, which we will provision next.</p>
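<p>To give an idea of how this module will be consumed, a minimal sketch from an orchestrating Bicep file could look like the following (the file and deployment names here are illustrative; we will wire up the real main template later in the series):</p><pre class="urvanov-syntax-highlighter-plain-tag">// main.bicep (sketch)
param logAnalyticsWorkspaceName string

module logAnalyticsWorkspaceModule 'logAnalyticsWorkspace.bicep' = {
  name: 'logAnalyticsWorkspaceDeployment'
  params: {
    logAnalyticsWorkspaceName: logAnalyticsWorkspaceName
  }
}

// consume the outputs declared in the module
var workspaceResourceId = logAnalyticsWorkspaceModule.outputs.workspaceResourceId</pre>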
<p><strong>Important note</strong>: You can see that I left commented code showing how to return the secret/shared key using outputs. This was the initial approach I followed, but it turned out to be insecure: those outputs are visible at the deployment level (when viewing the Resource Group&#8217;s Deployments section) in the Azure Portal in plain text, so anyone with access to the resource group can see those keys and secrets.<br />
I will be using a different approach to return secrets from modules; keep reading to see how. You can read more about the 2 possible approaches in the <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/scenarios-secrets#avoid-outputs-for-secrets" target="_blank" rel="noopener">official documentation</a>.<br />
In my personal opinion, I wish there were a &#8220;@secure&#8221; attribute we could put on output parameters to keep things modular. There is a <a href="https://github.com/Azure/bicep/issues/2163" target="_blank" rel="noopener">GitHub issue</a> open for this, and hopefully the product group will add it.</p>
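<p>As a preview of the approach we will use instead, the consumer of the secret can reference the resource with the &#8220;existing&#8221; keyword and read the key at the point of use, so the key never travels through deployment outputs (a sketch, assuming the workspace name is passed in as a parameter):</p><pre class="urvanov-syntax-highlighter-plain-tag">param logAnalyticsWorkspaceName string

// Reference the already-deployed workspace without redeploying it
resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: logAnalyticsWorkspaceName
}

// Read the shared key where it is needed instead of via a module output
var sharedKey = logAnalyticsWorkspace.listKeys().primarySharedKey</pre>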
<h4>Step 3: Define an Azure Application Insights resource</h4>
<p>Add a new file named &#8220;appInsights.bicep&#8221; under the folder &#8220;deploy&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param location string = resourceGroup().location
param workspaceResourceId string 
param appInsightsName string

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: appInsightsName
  location: location
  kind: 'web'
  properties: {
    Application_Type: 'web'
    WorkspaceResourceId:workspaceResourceId
  }
}

//Do not use output params to pass keys for other resources
//output appInsightsInstrumentationKey string = appInsights.properties.InstrumentationKey</pre><p>This module accepts 3 parameters: &#8220;location&#8221;, &#8220;workspaceResourceId&#8221;, and &#8220;appInsightsName&#8221;. We are associating the log analytics workspace with this Application Insights resource by setting the property &#8220;WorkspaceResourceId&#8221;.</p>
<h4>Step 4: Define an Azure Container Apps Environment resource</h4>
<p>Add a new file named &#8220;acaEnvironment.bicep&#8221; under the folder &#8220;deploy&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param acaEnvironmentName string
param location string = resourceGroup().location
@secure()
param instrumentationKey string
param logAnalyticsWorkspaceCustomerId string
@secure()
param logAnalyticsWorkspacePrimarySharedKey string 

resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
  name: acaEnvironmentName
  location: location
  properties: {
    daprAIInstrumentationKey:instrumentationKey
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsWorkspaceCustomerId
        sharedKey: logAnalyticsWorkspacePrimarySharedKey
      }
    }
  }
}

output acaEnvironmentId string = environment.id</pre><p>This module accepts 5 input parameters, mainly those needed to associate the log analytics workspace with the Environment and to set the Application Insights key for Dapr distributed call tracing and telemetry. The module outputs the Container App Environment Id, which will be used when defining the 3 Azure Container Apps. More about the &#8220;<a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/data-types#secure-strings-and-objects" target="_blank" rel="noopener">@secure</a>&#8221; attribute below.</p>
<h4>Step 5: Define Azure CosmosDB resources</h4>
<p>We will start defining the supporting resources needed for our solution, we will start defining the module needed to provision Azure Cosmos Account, Database, and Container, to do so, add a new file named &#8220;cosmosdb.bicep&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('Cosmos DB account name, max length 44 characters, lowercase')
param accountName string 

@description('Location for the Cosmos DB account.')
param location string = resourceGroup().location

@description('The primary replica region for the Cosmos DB account.')
param primaryRegion string

@description('The default consistency level of the Cosmos DB account.')
@allowed([
  'Eventual'
  'ConsistentPrefix'
  'Session'
  'BoundedStaleness'
  'Strong'
])
param defaultConsistencyLevel string = 'Session'

@description('The name for the database')
param databaseName string = 'tasksmanagerdb'

@description('The name for the container')
param containerName string = 'taskscollection'

var accountName_var = toLower(accountName)
var consistencyPolicy = {
  Eventual: {
    defaultConsistencyLevel: 'Eventual'
  }
  ConsistentPrefix: {
    defaultConsistencyLevel: 'ConsistentPrefix'
  }
  Session: {
    defaultConsistencyLevel: 'Session'
  }
  BoundedStaleness: {
    defaultConsistencyLevel: 'BoundedStaleness'
    maxStalenessPrefix: 100000
    maxIntervalInSeconds: 300
  }
  Strong: {
    defaultConsistencyLevel: 'Strong'
  }
}
var locations = [
  {
    locationName: primaryRegion
    failoverPriority: 0
    isZoneRedundant: false
  }
]

resource databaseAccount 'Microsoft.DocumentDB/databaseAccounts@2021-01-15' = {
  name: accountName_var
  kind: 'GlobalDocumentDB'
  location: location
  properties: {
    consistencyPolicy: consistencyPolicy[defaultConsistencyLevel]
    locations: locations
    databaseAccountOfferType: 'Standard'
  }
}

resource accountName_databaseName 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-01-15' = {
  parent: databaseAccount
  name: databaseName
  properties: {
    resource: {
      id: databaseName
    }
  }
}

resource accountName_databaseName_containerName 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-01-15' = {
  parent: accountName_databaseName
  name: containerName
  properties: {
    resource: {
      id: containerName
      partitionKey: {
        paths: [
          '/partitionKey'
        ]
        kind: 'Hash'
      }
    }
    options: {
      autoscaleSettings: {
        maxThroughput: 4000
      }
    }
  }
}

output documentEndpoint string = databaseAccount.properties.documentEndpoint
// Do not use output param for returning cosmos db master key
//var primaryMasterKeyValue = listKeys(accountName_resource.id, accountName_resource.apiVersion).primaryMasterKey
//output primaryMasterKey string = primaryMasterKeyValue</pre><p>In this file, we are creating an Azure Cosmos account, database, and collection. I&#8217;m defaulting the names of the database and collection to the same names we&#8217;ve previously used in the tutorial, but we are going to use a different Cosmos account. For full details on the Cosmos DB Bicep module parameters, you can check them <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/sql/quick-create-bicep?toc=%2Fazure%2Fazure-resource-manager%2Fbicep%2Ftoc.json&amp;tabs=CLI" target="_blank" rel="noopener">here</a>.</p>
<h4>Step 6: Define Azure Service Bus resource</h4>
<p>Add a new file named &#8220;serviceBus.bicep&#8221; under the folder &#8220;deploy&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">@description('The location where we will deploy our resources to. Default is the location of the resource group')
param location string = resourceGroup().location

@description('The name of the service bus namespace')
param serviceBusName string

var topicName = 'tasksavedtopic'

resource serviceBus 'Microsoft.ServiceBus/namespaces@2021-11-01' = {
  name: serviceBusName
  location: location
  sku: {
    name: 'Standard'
  }
}

resource topic 'Microsoft.ServiceBus/namespaces/topics@2021-11-01' = {
  name: topicName
  parent: serviceBus
}

//var listKeysEndpoint = '${serviceBus.id}/AuthorizationRules/RootManageSharedAccessKey'
//var sharedAccessKey = '${listKeys(listKeysEndpoint, serviceBus.apiVersion).primaryKey}'
//var connectionStringValue = 'Endpoint=sb://${serviceBus.name}.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=${sharedAccessKey}'
//output connectionString string = connectionStringValue</pre><p>This module provisions the Azure Service Bus namespace and a topic named &#8220;tasksavedtopic&#8221; by default. Note that the connection string of the service bus is not exposed as an output parameter (the commented code); we will obtain it in a different way from the main module.</p>
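<p>As a hedged illustration of that approach (the authorization rule name and property path below follow the standard Service Bus ARM schema, but verify them against your deployment), the main module can reference the provisioned namespace with the <code>existing</code> keyword and read the connection string on demand:</p><pre class="urvanov-syntax-highlighter-plain-tag">// In main.bicep: strongly typed reference to the already-provisioned namespace
resource serviceBusResource 'Microsoft.ServiceBus/namespaces@2021-11-01' existing = {
  name: serviceBusName
}

// Read the root connection string on demand instead of exposing it as an output
var serviceBusConnectionString = listKeys('${serviceBusResource.id}/AuthorizationRules/RootManageSharedAccessKey', serviceBusResource.apiVersion).primaryConnectionString</pre>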
<h4>Step 7: Define Azure Storage Account resource</h4>
<p>Add a new file named &#8220;storageAccount.bicep&#8221; under the folder &#8220;deploy&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param storageAccountName string
param location string = resourceGroup().location

var externalTasksQueueName = 'external-tasks-queue'

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource storageQueues 'Microsoft.Storage/storageAccounts/queueServices@2021-09-01' = {
  name: 'default'
  parent: storageAccount
}

resource external_queue 'Microsoft.Storage/storageAccounts/queueServices/queues@2021-09-01' = {
  name: externalTasksQueueName
  parent: storageQueues
}

//var storageAccountKeyValue = storageAccount.listKeys().keys[0].value
//output storageAccountKey string = storageAccountKeyValue</pre><p>This module provisions an Azure Storage Account and a queue named &#8220;external-tasks-queue&#8221;. As with the Service Bus module, the account key is intentionally not exposed as an output parameter.</p>
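<p>The same pattern applies here. A minimal sketch (variable names are illustrative) of how the main module can read the account key without an output parameter:</p><pre class="urvanov-syntax-highlighter-plain-tag">// In main.bicep: reference the storage account with 'existing'
resource storageAccountResource 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: storageAccountName
}

// Read the primary key on demand via the listKeys() function
var storageAccountKey = storageAccountResource.listKeys().keys[0].value</pre>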
<h4>Step 8: Define Azure Container Apps module</h4>
<p>Now we will create a reusable Bicep module which will be used to define the 3 Azure Container Apps in our solution, so add a new file named &#8220;containerApp.bicep&#8221; and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">param containerAppName string
param location string = resourceGroup().location
param environmentId string 
param containerImage string
param targetPort int
param isExternalIngress bool
param containerRegistry string
param containerRegistryUsername string
param isPrivateRegistry bool
param enableIngress bool 
param registryPassName string
param minReplicas int = 0
param maxReplicas int = 1
@secure()
param secListObj object
param envList array = []
param revisionMode string = 'Single'
param useProbes bool = false

resource containerApp 'Microsoft.App/containerApps@2022-03-01' = {
  name: containerAppName
  location: location
  properties: {
    managedEnvironmentId: environmentId
    configuration: {
      activeRevisionsMode: revisionMode
      secrets: secListObj.secArray
      registries: isPrivateRegistry ? [
        {
          server: containerRegistry
          username: containerRegistryUsername
          passwordSecretRef: registryPassName
        }
      ] : null
      ingress: enableIngress ? {
        external: isExternalIngress
        targetPort: targetPort
        transport: 'auto'
        traffic: [
          {
            latestRevision: true
            weight: 100
          }
        ]
      } : null
      dapr: {
        enabled: true
        appPort: targetPort
        appId: containerAppName
        appProtocol: 'http'
      }
    }
    template: {
      containers: [
        {
          image: containerImage
          name: containerAppName
          env: envList
          probes: useProbes ? [
            {
              type: 'Readiness'
              httpGet: {
                port: 80
                path: '/api/health/readiness'
                scheme: 'HTTP'
              }
              periodSeconds: 240
              timeoutSeconds: 5
              initialDelaySeconds: 5
              successThreshold: 1
              failureThreshold: 3
            }
          ] : null
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: maxReplicas
      }
    }
  }
}

output fqdn string = enableIngress ? containerApp.properties.configuration.ingress.fqdn : 'Ingress not enabled'</pre><p>This reusable Azure Container Apps module accepts input parameters for almost all Azure Container App properties, and we will use it to provision all 3 container apps. The module outputs the fully qualified domain name (FQDN) of the container app when the &#8220;enableIngress&#8221; parameter is set to true.</p>
<p>Notice how boolean parameters such as &#8220;useProbes&#8221; and &#8220;enableIngress&#8221; conditionally control which parts of the template are provisioned.</p>
<p>The parameter &#8220;envList&#8221; is an array that accepts a list of environment variable objects; we will see how those values are passed next.</p>
<p>The parameter &#8220;secListObj&#8221; is an object decorated with the &#8220;<a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/data-types#secure-strings-and-objects" target="_blank" rel="noopener">@secure</a>&#8221; attribute. This attribute can be applied to string and object parameters that might contain secret values; when it is used, Azure won&#8217;t make the parameter values available in the deployment logs nor on the terminal if you are using the Azure CLI.</p>
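<p>To make this concrete, here is a minimal sketch of the caller side (the secret entry shown is an assumption for illustration). Since @secure() cannot be applied to an array parameter directly, the array is wrapped in an object with a single &#8220;secArray&#8221; property, which the module then feeds into the container app&#8217;s &#8220;secrets&#8221; configuration:</p><pre class="urvanov-syntax-highlighter-plain-tag">// In main.bicep: build an object wrapping the secrets array
var secretsObject = {
  secArray: [
    {
      name: 'registry-password'
      value: containerRegistryPassword
    }
  ]
}
// ...then pass it to the module parameter: secListObj: secretsObject</pre>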
<h4>Step 9: Define the Main module for the solution</h4>
<p>Lastly, we need to define the main Bicep module which links all the other modules together; this is the file referenced from the Azure CLI command or the GitHub action when creating the entire set of resources. To do so, add a new file named &#8220;main.bicep&#8221;. I will not list the file content here as it is more than 500 lines, but I will go over it step by step; you can always get the file &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/db823092cc0df4dfe15f34ce23c51be785da823f/deploy/main.bicep" target="_blank" rel="noopener">main.bicep</a>&#8221; from the GitHub repo.</p>
<p>What we have added to the file is the following:</p>
<ul>
<li>We have defined many parameters to make the deployment flexible, and we default most of them, so there is no need to provide values for the defaulted ones. The only mandatory parameters are &#8220;<span class="pl-smi">containerRegistryPassword</span>&#8221; and &#8220;<span class="pl-smi">sendGridApiKey</span>&#8220;.</li>
<li>We used the attribute &#8220;@secure&#8221; on the properties &#8220;<span class="pl-smi">containerRegistryPassword</span>&#8221; and &#8220;<span class="pl-smi">sendGridApiKey</span>&#8221; as those are secrets and should not be logged or visible when typing them in Azure CLI.</li>
<li>We are calling the modules defined earlier (&#8220;cosmosdb.bicep&#8221;, &#8220;serviceBus.bicep&#8221;, &#8220;storageAccount.bicep&#8221;, &#8220;logAnalyticsWorkspace.bicep&#8221;, and &#8220;appInsights.bicep&#8221;) by bundling them in a single module named &#8220;primaryResources.bicep&#8221;.</li>
<li>We want to get the secrets, keys, and connection strings from the resources without using output parameters, so we use the <code>existing</code> keyword to obtain a strongly typed reference to the pre-created resource, and then call the listKeys() function to obtain the key or connection string.<br />
For example, we get a reference to the Application Insights resource by providing its name as an input, then use the resource named &#8220;appInsightsResource&#8221; to read the InstrumentationKey by calling &#8220;<span class="pl-smi">appInsightsResource</span>.<span class="pl-smi">properties</span>.<span class="pl-smi">InstrumentationKey</span>&#8220;.</li>
<li>Looking at how we are creating the Azure Container App &#8220;Backend API App&#8221; using the module &#8220;containerApp.bicep&#8221; we notice the following:
<ul>
<li>We are setting the &#8220;dependsOn&#8221; array to help Bicep understand the dependencies between components; this is only needed because we use the &#8220;existing&#8221; keyword, as Bicep infers dependencies automatically otherwise.</li>
<li>We are passing various mandatory parameters for Azure Container Apps, we are getting the value of &#8220;environmentId&#8221; from the output parameter defined in ACA Environment.</li>
<li>We are passing the needed information to access the ACR to pull the right image for each ACA. We will pass the registry password from Azure CLI.</li>
<li>We are passing the property &#8220;secListObj&#8221; by adding the secrets to a dictionary named &#8220;secArray&#8221;. We did this because the @secure attribute is only applicable to strings and objects, so wrapping an array inside an object is a workaround. Thanks to <a href="https://ochzhen.com/blog/pass-array-and-numbers-as-secure-params" target="_blank" rel="noopener">Alex</a> for the hint.</li>
</ul>
</li>
<li>Looking at one of the Dapr components, for example, &#8220;<span class="pl-smi">stateStoreDaprComponent</span>&#8221; you will notice the following:
<ul>
<li>The component name is prefixed with the ACA environment name, &#8220;${<span class="pl-smi">environmentName</span>}/statestore&#8221;, following the <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/child-resource-name-type" target="_blank" rel="noopener">(Parent/Child) relation</a> in Bicep.</li>
<li>Setting &#8220;secrets&#8221;, &#8220;metadata&#8221; and &#8220;scopes&#8221; dictionaries of this component in a consistent way among all other Dapr components.</li>
</ul>
</li>
</ul>
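<p>Putting these points together, below is a condensed, illustrative sketch of the &#8220;existing&#8221; references and the state store Dapr component (resource names, API versions, and metadata keys here are indicative only; refer to the full main.bicep on GitHub for the exact definitions):</p><pre class="urvanov-syntax-highlighter-plain-tag">// Strongly typed references to pre-created resources
resource appInsightsResource 'Microsoft.Insights/components@2020-02-02' existing = {
  name: appInsightsName
}

resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-01-15' existing = {
  name: cosmosDbAccountName
}

var appInsightsKey = appInsightsResource.properties.InstrumentationKey

// Dapr state store component named with the (Parent/Child) convention
resource stateStoreDaprComponent 'Microsoft.App/managedEnvironments/daprComponents@2022-03-01' = {
  name: '${environmentName}/statestore'
  properties: {
    componentType: 'state.azure.cosmosdb'
    version: 'v1'
    secrets: [
      {
        name: 'cosmoskey'
        value: cosmosDbAccount.listKeys().primaryMasterKey
      }
    ]
    metadata: [
      {
        name: 'url'
        value: cosmosDbAccount.properties.documentEndpoint
      }
      {
        name: 'masterKey'
        secretRef: 'cosmoskey'
      }
      {
        name: 'database'
        value: databaseName
      }
      {
        name: 'collection'
        value: containerName
      }
    ]
    scopes: [
      'tasksmanager-backend-api'
    ]
  }
}</pre>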
<h4>Step 10: Calling the main.bicep file from Azure CLI and deploy the Infrastructure</h4>
<p>Now we are ready to start the actual deployment by calling <code>az deployment group create</code>. Open the PowerShell console and use the commands below; don&#8217;t forget to replace the &#8220;containerRegistryPassword&#8221; and SendGrid API key with your values. You need to create an empty resource group first; I&#8217;m using a resource group named &#8220;tasks-bicep-rg&#8221;:</p><pre class="urvanov-syntax-highlighter-plain-tag">az group create `
    --name "tasks-bicep-rg" `
    --location "eastus"  

az deployment group create `
   --resource-group "tasks-bicep-rg" `
   --template-file ./deploy/main.bicep `
   --parameters '{ \"containerRegistryPassword\": {\"value\":\"XXXX\"}, \"sendGridApiKey\": {\"value\":\"SG.XXXXX\"} }'</pre><p>Azure CLI will take the Bicep module and start creating the deployment in the resource group &#8220;tasks-bicep-rg&#8221;.</p>
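<p>Once the deployment completes, you can also inspect the deployment outputs from the CLI; by default, the deployment name matches the template file name (&#8220;main&#8221;):</p><pre class="urvanov-syntax-highlighter-plain-tag">az deployment group show `
   --resource-group "tasks-bicep-rg" `
   --name "main" `
   --query properties.outputs</pre>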
<h4>Step 11: Verify the final results</h4>
<p>If the deployment succeeded, you should see all the resources created under the resource group. You can also navigate to the &#8220;Deployments&#8221; tab to verify the deployed ARM templates; it should look like the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1418" src="https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1-1024x709.jpg" alt="Bicep Deployed Resources" width="1024" height="709" srcset="https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1-1024x709.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1-300x208.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1-768x532.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/DeployedResources-1.jpg 1322w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>That&#8217;s it for now. Bicep is quite a powerful DSL for managing your infrastructure; I highly recommend checking the references below to get more familiar with Bicep and IaC.</p>
<h4>References:</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/training/paths/fundamentals-bicep/" target="_blank" rel="noopener">Fundamentals of Bicep</a></li>
<li><a href="https://www.thorsten-hans.com/how-to-deploy-azure-container-apps-with-bicep/" target="_blank" rel="noopener">How to deploy Azure Container Apps with Bicep by Thorsten Hans</a></li>
</ul>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<p>The post <a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1404</post-id>	</item>
		<item>
		<title>Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</title>
		<link>https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/</link>
					<comments>https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Tue, 13 Sep 2022 15:33:51 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Continuous Deployment]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[CD]]></category>
		<category><![CDATA[CI]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1391</guid>

					<description><![CDATA[<p>This is the ninth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the ninth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li><a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a></li>
<li><a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/" target="_blank" rel="noopener">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a></li>
<li>Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; (This Post)</li>
<li><a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a></li>
<li><a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a></li>
<li><a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/" target="_blank" rel="noopener">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a></li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Continuous Deployment for Azure Container Apps using GitHub Action</h2>
<p>In the previous posts, you noticed that with every change in the source code of any of the 3 services, we have to build the application locally, create a new image and push it to ACR by calling a CLI command manually, and then use another CLI command, <code>az containerapp update</code>, to create a new revision of the subject Container App and publish it.</p>
<p>This is a tedious process and might create issues if multiple developers are working on the same repository and keep updating the remote resources manually from their machines.</p>
<p>To improve this process, we will rely on GitHub Actions to automate the build and deployment of the 3 services. The flow will be as follows:</p>
<ol>
<li>A code change is done on one of the projects, and a commit is pushed to a certain branch of our GitHub repository.</li>
<li>GitHub action is triggered which updates the container image in the container registry. We will have a dedicated GitHub action for each project.</li>
<li>Once the container is updated in the registry, Azure Container Apps creates a new revision based on the updated container image.</li>
</ol>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1394" src="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions-1024x244.jpg" alt="Azure Container Apps GitHub Actions" width="1024" height="244" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions-1024x244.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions-300x72.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions-768x183.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActions.jpg 1090w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Configure GitHub actions to connect to Azure</h3>
<p>We need to allow our GitHub repository (GitHub actions) to connect to the Azure subscription and the resource group which hosts our 3 services. There are <a href="https://docs.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux" target="_blank" rel="noopener">multiple ways</a> to do this, but we are going to use the <a href="https://docs.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux#create-a-service-principal-and-add-it-as-a-github-secret" target="_blank" rel="noopener">Azure login action with a service principal secret</a> approach.</p>
<p>To do so, run the command below from the Azure CLI; you need to replace the Subscription ID and Resource Group values to match your deployment.</p><pre class="urvanov-syntax-highlighter-plain-tag">$SERVICE_PRINCIPAL_NAME="TasksTrackerSP"
$SUBSCRIPTION_ID="&lt;use your subscription id value&gt;"
$RESOURCE_GROUP="tasks-tracker-rg"

az ad sp create-for-rbac `
  --name "$SERVICE_PRINCIPAL_NAME" `
  --role "contributor" `
  --scopes "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP" `
  --sdk-auth</pre><p>After running this command, the output will be similar to the JSON below, you need to copy this JSON as we are using it in the next step.</p><pre class="urvanov-syntax-highlighter-plain-tag">{
  "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
  "clientSecret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}</pre><p></p>
<h3>Add Action Secrets to GitHub Repository</h3>
<h4>Step 1: Create credentials for Azure</h4>
<p>Now we need to create a couple of secrets that will be used by the GitHub actions. To do so, from the GitHub portal, navigate to your repository, select &#8220;Settings&#8221;, select &#8220;Secrets&#8221;, then &#8220;Actions&#8221;, and click on &#8220;New repository secret&#8221;.</p>
<p>As shown in the image below, name your secret &#8220;TASKSMANAGER_AZURE_CREDENTIALS&#8221; and paste the JSON from the previous step. You need to include the properties &#8220;clientId&#8221;, &#8220;clientSecret&#8221;, &#8220;subscriptionId&#8221;, and &#8220;tenantId&#8221;; the other properties in the JSON can be eliminated.</p><pre class="urvanov-syntax-highlighter-plain-tag">{ 
  "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx", 
  "clientSecret": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", 
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx", 
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
}</pre><p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-1395" src="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret-1024x871.jpg" alt="GitHub Action Secret" width="574" height="488" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret-1024x871.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret-300x255.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret-768x653.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecret.jpg 1521w" sizes="auto, (max-width: 574px) 100vw, 574px" /></a></p>
<h4>Step 2: Create credentials for Azure Container Registry</h4>
<p>We need to create 2 more GitHub action secrets to store the ACR username and password to allow us to build, push, and pull images inside the GitHub action.</p>
<p>To get the username and password for the ACR, run the command below in the Azure CLI (you can get them from the Azure Portal too).</p><pre class="urvanov-syntax-highlighter-plain-tag">az acr credential show -n $ACR_NAME</pre><p>As in the step above, create two new secrets named &#8220;TASKSMANAGER_REGISTRY_USERNAME&#8221; and &#8220;TASKSMANAGER_REGISTRY_PASSWORD&#8221;, put in the right values, and save them. You should now have 3 action secrets, as in the image below:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets.jpg"><img loading="lazy" decoding="async" class="alignnone wp-image-1396" src="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets-1024x704.jpg" alt="GitHub Action Secrets" width="576" height="396" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets-1024x704.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets-300x206.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets-768x528.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionSecrets.jpg 1109w" sizes="auto, (max-width: 576px) 100vw, 576px" /></a></p>
<h3>Create GitHub Action for Backend Web API service</h3>
<p>We will now define the first GitHub action, for the project named &#8220;TasksTracker.TasksManager.Backend.Api&#8221;. We will have a dedicated GitHub action for each service, so if a change happens in one of the services, there is no need to trigger the actions and redeploy the unchanged ones.</p>
<p>Next, create a new folder named &#8220;.github/workflows&#8221; in the root project directory, add a new file named &#8220;build-deploy-backend-api.yaml&#8221;, and use the content below:</p><pre class="urvanov-syntax-highlighter-plain-tag">name: tasksmanager-backend-api deployment

# When this action will be executed
on:
  # Automatically trigger it when detected changes in repo
  push:
    branches: 
      [ dev ]
    paths:
    - 'TasksTracker.TasksManager.Backend.Api/**'
    - '.github/workflows/build-deploy-backend-api.yaml'

  # Allow manual trigger
  workflow_dispatch:      

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout to the branch
        uses: actions/checkout@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Log in to container registry
        uses: docker/login-action@v1
        with:
          registry: taskstrackeracr.azurecr.io
          username: ${{ secrets.TASKSMANAGER_REGISTRY_USERNAME }}
          password: ${{ secrets.TASKSMANAGER_REGISTRY_PASSWORD }}

      - name: Build and push container image to registry
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: taskstrackeracr.azurecr.io/tasksmanager-backend-api:${{ github.sha }} , taskstrackeracr.azurecr.io/tasksmanager-backend-api:latest
          file: TasksTracker.TasksManager.Backend.Api/Dockerfile
          context: ./.


  deploy:
    runs-on: ubuntu-latest
    needs: build
    
    steps:
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.TASKSMANAGER_AZURE_CREDENTIALS }}


      - name: Deploy to containerapp
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az config set extension.use_dynamic_install=yes_without_prompt
            az containerapp registry set -n tasksmanager-backend-api -g tasks-tracker-rg --server taskstrackeracr.azurecr.io --username  ${{ secrets.TASKSMANAGER_REGISTRY_USERNAME }} --password ${{ secrets.TASKSMANAGER_REGISTRY_PASSWORD }}
            az containerapp update -n tasksmanager-backend-api -g tasks-tracker-rg --image taskstrackeracr.azurecr.io/tasksmanager-backend-api:${{ github.sha }}</pre><p>Let&#8217;s review what we added to this yaml file:</p>
<ul>
<li>Line 1: Setting the name &#8220;tasksmanager-backend-api deployment&#8221;  for the GitHub action.</li>
<li>Lines 4-8: Configure the action to be triggered when a change happens on the branch &#8216;dev&#8217;; if you are using a different branch name, you can update this value.</li>
<li>Lines 9-10: Specify that any change to the files under the path &#8220;TasksTracker.TasksManager.Backend.Api&#8221; will trigger this action. This is very important so that only changes to this specific service trigger the action; we want to avoid, for example, changes in the UI project triggering it.</li>
<li>Line 11: Specify that any change on the yaml file &#8220;build-deploy-backend-api.yaml&#8221; for this action will trigger the action.</li>
<li>Line 14: Enables a manual trigger for the action (this might be needed to trigger the action without any code change).</li>
<li>Lines 16-19: Specify that the latest Ubuntu agent will be used to run the action.</li>
<li>Lines 20-40: Specify the steps we actually want to happen within the build job of our GitHub action:
<ul>
<li>Lines 21-22: Check out our repository so our GitHub action can access it.</li>
<li>Lines 24-25: Set up Docker Buildx; this action uses BuildKit under the hood.</li>
<li>Lines 27-32: Login to Azure Container Registry using the Username and Password created in GitHub actions secrets.</li>
<li>Lines 34-40: Action to build Docker image and push to ACR, notice that we are creating 2 tags when pushing, one with tag &#8220;latest&#8221; and the other one will contain the &#8220;Commit SHA that triggered the workflow&#8221;.</li>
</ul>
</li>
<li>Lines 48-51 Login to Azure subscription using the credentials stored in GitHub action secrets.</li>
<li>Lines 54- 60: Use Azure CLI to do the actual deployment/update for the Azure Container App and deploy a new revision.</li>
</ul>
<p><strong>Note:</strong> you need to do the same and create 2 more GitHub Actions workflows, one for the project &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/583187f502cdd20afebd82e1b5a0474bd9350607/.github/workflows/build-deploy-backend-processor.yaml" target="_blank" rel="noopener">TasksTracker.Processor.Backend.Svc</a>&#8221; and one for the project &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/583187f502cdd20afebd82e1b5a0474bd9350607/.github/workflows/build-deploy-frontend.yaml" target="_blank" rel="noopener">TasksTracker.WebPortal.Frontend.Ui</a>&#8221;; the workflow files are available at the links.</p>
<p>With this in place, you can commit your work and watch the GitHub Action run. If everything is configured correctly, you should see the results in the GitHub Actions workflows tab, and the Azure Container App should be updated with a new revision.</p>
<p><strong>Note</strong>: Azure Container Apps supports setting up this <a href="https://docs.microsoft.com/en-us/azure/container-apps/github-actions-cli?tabs=bash" target="_blank" rel="noopener">continuous deployment</a> from the Azure Portal or via the Azure CLI command &#8220;<a href="https://docs.microsoft.com/en-us/cli/azure/containerapp/github-action?view=azure-cli-latest" target="_blank" rel="noopener">az containerapp github-action</a>&#8221;. I preferred to do it manually from scratch to understand what is happening under the hood; in addition, with the built-in approach you need to create a GitHub PAT to allow Azure to access GitHub and set up the GitHub Actions workflows, which is another secret you have to maintain <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
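<p>For completeness, the built-in approach looks roughly like the hypothetical sketch below; the values are placeholders, and you should check the linked documentation page for the full and current list of options:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp github-action add `
  --name tasksmanager-backend-api `
  --resource-group tasks-tracker-rg `
  --repo-url "https://github.com/&lt;owner&gt;/&lt;repo&gt;" `
  --branch dev `
  --registry-url taskstrackeracr.azurecr.io `
  --registry-username &lt;ACR username&gt; `
  --registry-password &lt;ACR password&gt; `
  --service-principal-client-id &lt;client id&gt; `
  --service-principal-client-secret &lt;client secret&gt; `
  --service-principal-tenant-id &lt;tenant id&gt; `
  --token &lt;GitHub PAT&gt;</pre>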
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1400" src="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns-1024x708.jpg" alt="Azure Container Apps GitHub Action Runs" width="1024" height="708" srcset="https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns-1024x708.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns-300x207.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns-768x531.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/GitHubActionRuns.jpg 1333w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>In the <a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">next post</a>, we will use Bicep templates to create the entire environment and resources from scratch; this will be useful if you want to provision a new test or staging environment, for example. Stay tuned!</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<p>The post <a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1391</post-id>	</item>
		<item>
		<title>Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</title>
		<link>https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/</link>
					<comments>https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Fri, 09 Sep 2022 01:00:36 +0000</pubDate>
				<category><![CDATA[Application Insights]]></category>
		<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1362</guid>

					<description><![CDATA[<p>This is the eighth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the eighth part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li><a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/" target="_blank" rel="noopener">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a></li>
<li>Azure Container Apps Monitoring and Observability with Application Insights &#8211; (This Post)</li>
<li><a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a></li>
<li><a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a></li>
<li><a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a></li>
<li><a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/" target="_blank" rel="noopener">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a></li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Azure Container Apps Monitoring and Observability with Application Insights</h2>
<p>When building a microservices application that consists of many distributed services which communicate with each other across different processes and use different infrastructure services (different message brokers, different databases, different 3rd party components, etc&#8230;), having a mechanism to observe and trace from end to end is essential.</p>
<p>The good news is that Azure Container Apps provides various built-in monitoring and observability features that help in understanding how the various services are operating and performing. Those features will help you monitor and diagnose the state of your distributed services, improve performance, and respond to critical problems.</p>
<p>The Azure Container Apps <a href="https://docs.microsoft.com/en-us/azure/container-apps/observability" target="_blank" rel="noopener">official documentation page</a> talks thoroughly about observability features, so I will not cover them here.<br />
From my personal experience while building this tutorial and during the development and test phases, the 2 features I relied on most were the <a href="https://docs.microsoft.com/en-us/azure/container-apps/log-streaming" target="_blank" rel="noopener">Log streaming</a> and <a href="https://docs.microsoft.com/en-us/azure/container-apps/log-monitoring?tabs=bash" target="_blank" rel="noopener">Azure Monitor Log Analytics</a>.</p>
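<p>As a quick taste of Azure Monitor Log Analytics: console logs from container apps land in a custom table that can be queried with KQL. A hypothetical query to pull the latest console logs for the Backend API might look like the sketch below (the table and column names follow the Log Analytics destination convention for Container Apps, but verify them against your own workspace):</p><pre class="urvanov-syntax-highlighter-plain-tag">ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "tasksmanager-backend-api"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
| take 50</pre>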
<p>The official documentation page doesn&#8217;t cover how to configure Azure Container Apps and the Azure Container Apps Environment with <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview" target="_blank" rel="noopener">Application Insights</a>; in addition, Azure Container Apps <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/codeless-overview#supported-environments-languages-and-resource-providers" target="_blank" rel="noopener">does not support auto-instrumentation</a> for Application Insights. So in this post, I will be focusing on how we can integrate Application Insights into our microservices application.</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Application Insights overview</h3>
<p>Application Insights is an offering from Azure Monitor that will help us monitor all Azure Container Apps under the same Container Apps Environment and collect <strong>telemetry</strong> about the services within the solution, as well as understand service usage and users&#8217; engagement via integrated analytics tools.<br />
What I mean by <strong>telemetry</strong> here is the data collected to observe our application; it can be broken into three categories:</p>
<ol>
<li>Distributed Tracing: provides insights into the traffic between services involved in distributed transactions, think of when the Frontend Web App talks with the Backend Api App to insert or retrieve data. An application map of how calls are flowing between services is very important for any distributed application.</li>
<li>Metrics: provide insights into the performance of a service and its resource utilization, think of the CPU and Memory utilization of the Backend Background processor, and how we can understand when we need to increase the number of replicas.</li>
<li>Logging: provides insights into how code is executing and if errors have occurred.</li>
</ol>
<p>So let&#8217;s get started and add Application Insights to our solution.</p>
<h3>Provision a Workspace-based Application Insights Instance</h3>
<p>We need to create one single Application Insights instance to be used for all the services&#8217; telemetry within our application. To create the instance, use the Azure CLI script below:</p><pre class="urvanov-syntax-highlighter-plain-tag">## Install the application-insights extension for the CLI
  az extension add -n application-insights
  
  ## Workspace created when creating the Azure Container Apps Env
  $workspace = "/subscriptions/&lt;SubscriptionId&gt;/resourcegroups/&lt;ResourceGroup&gt;/providers/microsoft.operationalinsights/workspaces/&lt;WorkSpaceName&gt;"
  
  ## Create the Application Insights instance
  $ai = $(az monitor app-insights component create -g $RESOURCE_GROUP -l $LOCATION --app taskstracker-ai --workspace $workspace) | ConvertFrom-Json
  
  ## Get the APPINSIGHTS INSTRUMENTATIONKEY
  $APPINSIGHTS_INSTRUMENTATIONKEY = $($ai.InstrumentationKey)</pre><p>We are creating here a <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-workspace-resource" target="_blank" rel="noopener">Workspace-based Application Insights</a> resource, as the &#8220;Classic&#8221; one will be out of support in 2024. We are going to use the same workspace we created when we provisioned the Azure Container Apps Environment in this <a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">previous post</a>.</p>
<p>Next, we need to update the Azure Container Apps Environment we created in this <a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">previous post</a>. If you remember, we used the CLI command
			<span id="urvanov-syntax-highlighter-69ac93cda3130624348018" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-e">az </span><span class="crayon-e">containerapp </span><span class="crayon-e">env </span><span class="crayon-i">create</span></span></span>  (<a href="https://docs.microsoft.com/en-us/cli/azure/containerapp/env?view=azure-cli-latest#az-containerapp-env-create" target="_blank" rel="noopener">link for command properties</a>) as the below script to create the Environment:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp env create `
  --name $ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --location $LOCATION</pre><p>You can see that we didn&#8217;t provide the property 
			<span id="urvanov-syntax-highlighter-69ac93cda3133213722864" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-o">--</span><span class="crayon-i">dapr</span><span class="crayon-cn">-instrumentation</span><span class="crayon-cn">-key</span></span></span> when creating it, and unfortunately we can&#8217;t update this from the Azure CLI, there is an <a href="https://github.com/microsoft/azure-container-apps/issues/293" target="_blank" rel="noopener">open issue</a> on GitHub asking to add support for updating the Azure Container Apps Environment via the CLI.<br />
So my recommendation is to set the property &#8220;dapr-instrumentation-key&#8221; when creating the ACA Environment, as it is the property responsible for setting the instrumentation key of the Application Insights instance used by Dapr to export service-to-service communication telemetry; without it, you will not be able to see the relationships between microservices.</p>
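<p>For reference, if you are creating the environment from scratch, a sketch of the creation command with the instrumentation key set could look like the below (assuming you captured the key in the variable $APPINSIGHTS_INSTRUMENTATIONKEY as in the provisioning script earlier):</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp env create `
  --name $ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --location $LOCATION `
  --dapr-instrumentation-key $APPINSIGHTS_INSTRUMENTATIONKEY</pre>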
<p>In our case we need to do the update, and to do it we are going to use a Bicep/ARM template. In a future post I&#8217;m going to cover in full detail how we can use Bicep to provision every component in the solution, including external components such as Azure Storage, Cosmos DB, Service Bus, etc&#8230; It will be a detailed post showing how Bicep simplifies creating Infrastructure as Code (IaC) and enables you to create and redeploy the entire solution, with all the configuration we have done manually, by executing one command or running a single GitHub Action.</p>
<p>For this post, I will keep the description minimal, so follow along with me to create a simple Bicep file used to update the Azure Container Apps Environment. To do so, from VS Code add a new folder named &#8220;deploy&#8221; at the root folder of our solution &#8220;TasksTracker.ContainerApps/deploy&#8221;, and then add a new file named &#8220;updateacaEnvironment.bicep&#8221;.<br />
You need to install a VS code extension named <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep" target="_blank" rel="noopener">Bicep</a>, so go ahead and install it as we are going to use it heavily in the next post.<br />
Now open the file named &#8220;updateacaEnvironment.bicep&#8221; and paste the code below</p><pre class="urvanov-syntax-highlighter-plain-tag">param location string = resourceGroup().location
param environmentName string = 'tasks-tracker-containerapps-env'
param appInsightsName string = 'taskstracker-ai'
param logAnalyticsWorkspaceName string ='workspace-taskstrackerrgRItW'

resource appInsights 'Microsoft.Insights/components@2020-02-02' existing = {
    name: appInsightsName
}

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: logAnalyticsWorkspaceName
}

resource environment 'Microsoft.App/managedEnvironments@2022-03-01' = {
  name: environmentName
  location: location
  properties: {
    daprAIInstrumentationKey:appInsights.properties.InstrumentationKey
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: logAnalyticsWorkspace.properties.customerId
        sharedKey: listKeys(logAnalyticsWorkspace.id, logAnalyticsWorkspace.apiVersion).primarySharedKey
      }
    }
  }
}</pre><p>What we&#8217;ve done here is simple: we get a reference to the existing Application Insights and Log Analytics Workspace instances, then we update the existing Azure Container Apps Environment. See how we are setting the property &#8220;daprAIInstrumentationKey&#8221; on the highlighted line above and providing the Application Insights instrumentation key.</p>
<p>To execute this file, use the PowerShell terminal and run the command below:</p><pre class="urvanov-syntax-highlighter-plain-tag">## Ensure you are in the directory "~/TasksTracker.ContainerApps"
az deployment group create `
  --resource-group $RESOURCE_GROUP `
  --template-file ./deploy/updateacaEnvironment.bicep</pre><p></p>
<h3>Installing Application Insights SDK into the 3 Microservices apps</h3>
<p>Now we need to add the Application Insights SDK to the 3 services we have. This is an identical operation for each service, so I will describe how to do it for the Backend API, and you can do the same for the remaining services.</p>
<h4>Step 1: Install the Application Insights SDK using NuGet</h4>
<p>To add the SDK, open the file &#8220;TasksTracker.TasksManager.Backend.Api.csproj&#8221; and add the below NuGet reference</p><pre class="urvanov-syntax-highlighter-plain-tag">&lt;ItemGroup&gt;
    &lt;!--Other packages are removed for brevity--&gt;
    &lt;PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.21.0" /&gt;
  &lt;/ItemGroup&gt;</pre><p></p>
<h4>Step 2: Set RoleName property for all the services</h4>
<p>Next, on each project, we will add a new file named &#8220;AppInsightsTelemetryInitializer.cs&#8221; in the root directory of the project. So add a file named &#8220;AppInsightsTelemetryInitializer.cs&#8221; under the project &#8220;TasksTracker.TasksManager.Backend.Api&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

namespace TasksTracker.TasksManager.Backend.Api
{
    public class AppInsightsTelemetryInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
            {
                //set custom role name here
                telemetry.Context.Cloud.RoleName = "tasksmanager-backend-api";
            }
        }
    }
}</pre><p>The only difference between each file in the 3 projects is the RoleName property value. This property will be used by Application Insights to identify the components on the application map, and it will also be useful, for example, when we need to filter on all the warning logs generated from the Backend API project; in that case we will filter on the value &#8220;tasksmanager-backend-api&#8221;.<br />
You can check the RoleName value used in the project &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/52577bacd53745afc6fdb1639e932439fe67870b/TasksTracker.WebPortal.Frontend.Ui/AppInsightsTelemetryInitializer.cs" target="_blank" rel="noopener">TasksTracker.WebPortal.Frontend.Ui</a>&#8221; and project &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/52577bacd53745afc6fdb1639e932439fe67870b/TasksTracker.Processor.Backend.Svc/AppInsightsTelemetryInitializer.cs" target="_blank" rel="noopener">TasksTracker.Processor.Backend.Svc</a>&#8220;.</p>
<p>Next, we need to register this &#8220;AppInsightsTelemetryInitializer&#8221; class. To do this, open the file &#8220;Program.cs&#8221; and add the code highlighted below (lines 5-9). Don&#8217;t forget that you need to do the same for the remaining 2 projects.</p><pre class="urvanov-syntax-highlighter-plain-tag">//Code removed for brevity 

builder.Services.AddControllers();

builder.Services.AddApplicationInsightsTelemetry();

builder.Services.Configure&lt;TelemetryConfiguration&gt;((o) =&gt; {
    o.TelemetryInitializers.Add(new AppInsightsTelemetryInitializer());
});

var app = builder.Build();

//Code removed for brevity</pre><p></p>
<h4>Step 3: Set the Application Insights instrumentation key in appsettings.json file</h4>
<p>Now we need to set the Application Insights instrumentation key so the projects are able to send telemetry data to the AI instance. To do this, open the file &#8220;appsettings.json&#8221; and paste the code below; we are going to fill this value via secrets and environment variables once we redeploy the Container Apps and create new revisions.</p><pre class="urvanov-syntax-highlighter-plain-tag">{
  "ApplicationInsights": {
    "InstrumentationKey": ""
  } 
}</pre><p>With this step, we have completed the changes on the projects. Let&#8217;s now deploy the changes and create new revisions.</p>
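<p>A quick note on why this empty value works: the default ASP.NET Core configuration providers treat a double underscore in an environment variable name as a section separator, so an environment variable shaped like the below will override the &#8220;ApplicationInsights:InstrumentationKey&#8221; setting at runtime:</p><pre class="urvanov-syntax-highlighter-plain-tag">ApplicationInsights__InstrumentationKey=&lt;AI Instrumentation Key Here&gt;</pre>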
<h3>Deploy the 3 services to Azure Container Apps and create new Revisions</h3>
<h4>Step 1: Add the Application Insights instrumentation key as a secret</h4>
<p>Let&#8217;s create a secret named &#8220;appinsights-key&#8221; on each Container App which contains the value of the AI instrumentation key. Remember that we can obtain this value from the Azure Portal by going to the AI instance we created, or from the Azure CLI as we did above. To create the secrets, use the PowerShell console and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp secret set `
--name $BACKEND_API_NAME `
--resource-group $RESOURCE_GROUP `
--secrets "appinsights-key=&lt;AI Key Here&gt;"

az containerapp secret set `
--name $FRONTEND_WEBAPP_NAME `
--resource-group $RESOURCE_GROUP `
--secrets "appinsights-key=&lt;AI Key Here&gt;"

az containerapp secret set `
--name $BACKEND_SVC_NAME `
--resource-group $RESOURCE_GROUP `
--secrets "appinsights-key=&lt;AI Key Here&gt;"</pre><p></p>
<h4>Step 2: Build new images and push them to ACR</h4>
<p>As we have done previously, we need to build and push the 3 apps&#8217; images to ACR so they are ready to be deployed to Azure Container Apps. To do so, continue using the same PowerShell console and paste the code below (make sure you are in the directory &#8220;TasksTracker.ContainerApps&#8221;):</p><pre class="urvanov-syntax-highlighter-plain-tag">## Build Backend API on ACR and Push to ACR
az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' . 

## Build Backend Service on ACR and Push to ACR
az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_SVC_NAME" --file 'TasksTracker.Processor.Backend.Svc/Dockerfile' .

## Build Frontend Web App on ACR and Push to ACR
az acr build --registry $ACR_NAME --image "tasksmanager/$FRONTEND_WEBAPP_NAME" --file 'TasksTracker.WebPortal.Frontend.Ui/Dockerfile' .</pre><p></p>
<h4>Step 3: Deploy new revisions of the 3 services to Azure Container Apps and set a new environment variable</h4>
<p>As we&#8217;ve done multiple times, we need to update the Azure Container App hosting each of the 3 services with a new revision so our code changes are available for end users. To do so, run the below PowerShell script, and notice how we used the property
			<span id="urvanov-syntax-highlighter-69ac93cda313d794164810" class="urvanov-syntax-highlighter-syntax urvanov-syntax-highlighter-syntax-inline  crayon-theme-vs2012 crayon-theme-vs2012-inline urvanov-syntax-highlighter-font-monaco" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important;"><span class="crayon-pre urvanov-syntax-highlighter-code" style="font-size: 12px !important; line-height: 15px !important;font-size: 12px !important; -moz-tab-size:4; -o-tab-size:4; -webkit-tab-size:4; tab-size:4;"><span class="crayon-o">--</span><span class="crayon-v">set</span><span class="crayon-o">-</span><span class="crayon-v">env</span><span class="crayon-o">-</span><span class="crayon-v">vars</span></span></span> to set new environment variable named &#8220;ApplicationInsights__InstrumentationKey&#8221; and its value is a secret reference coming from the secret &#8220;appinsights-key&#8221; we added in step 1.</p><pre class="urvanov-syntax-highlighter-plain-tag">az containerapp update `
--name $BACKEND_API_NAME  `
--resource-group $RESOURCE_GROUP `
--image taiseerjoudeh/tasksmanager-backend-api-repo:latest `
--revision-suffix v20220908-1 `
--cpu 0.25 --memory 0.5Gi `
--set-env-vars "ApplicationInsights__InstrumentationKey=secretref:appinsights-key" `
--min-replicas 1 `
--max-replicas 2

az containerapp update `
--name $FRONTEND_WEBAPP_NAME  `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20220908-1 `
--cpu 0.25 --memory 0.5Gi `
--set-env-vars "ApplicationInsights__InstrumentationKey=secretref:appinsights-key" `
--min-replicas 1 `
--max-replicas 1

az containerapp update `
--name $BACKEND_SVC_NAME `
--resource-group $RESOURCE_GROUP `
--revision-suffix v20220908-1 `
--cpu 0.25 --memory 0.5Gi `
--set-env-vars "ApplicationInsights__InstrumentationKey=secretref:appinsights-key" `
--min-replicas 1 `
--max-replicas 5</pre><p>With those changes in place, you should start seeing telemetry coming into the Application Insights instance we provisioned. Let&#8217;s review the key Application Insights dashboards and panels in the Azure Portal.</p>
<h4>Distributed Tracing via Application Map</h4>
<p>The Application Map will help us spot any performance bottlenecks or failure hotspots across all the services of our distributed microservices application. Each node on the map represents an application component (service) or its dependencies and has a health KPI and alerts status.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1369" src="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights-1024x998.jpg" alt="Distributed Tracing Application Insights" width="1024" height="998" srcset="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights-1024x998.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights-300x292.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights-768x749.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsights.jpg 1109w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Looking at the image above, you will see for example how the Backend API with cloud RoleName &#8220;tasksmanager-backend-api&#8221; depends on the Cosmos DB instance, showing the number of calls and the average time to service these calls. The application map is interactive, so you can select a service/component and drill down into details. For example, when I drill down into the Dapr State node to understand how many times my Backend API invoked the Dapr sidecar state service to save/delete state, I see results similar to the below image:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1371" src="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails-1024x816.jpg" alt="Distributed Tracing Application Insights Details" width="1024" height="816" srcset="https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails-1024x816.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails-300x239.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails-768x612.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/DistributedTracingApplicationInsightsDetails.jpg 1199w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>Monitor the production application using Live Metrics</h4>
<p>This is one of my favorite monitoring panels. It provides you with a near real-time (1-second latency) status of your entire distributed application: we can see performance and failure counts, watch exceptions and traces as they happen, and see live servers (replicas in our case) along with their CPU and memory utilization and the number of requests they are handling.<br />
These live metrics provide very powerful diagnostics for our production microservices application. Check the image below to see the server names and some requests coming into the system.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1372" src="https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights-1024x495.jpg" alt="Live Metrics Application Insights" width="1024" height="495" srcset="https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights-1024x495.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights-300x145.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights-768x372.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights-1536x743.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/LiveMetricsApplicationInsights.jpg 1759w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>Logs search using Transaction Search</h4>
<p>Transaction search in Application Insights helps us find and explore individual telemetry items, such as exceptions, web requests, or dependencies, as well as any log traces and events that we&#8217;ve added to the application.<br />
For example, if I want to see all events of type &#8220;Request&#8221; for the cloud RoleName &#8220;tasksmanager-backend-api&#8221; in the past 24 hours, I can use the transaction search dashboard. Notice how the filters are set and how the results are displayed; you can drill down on each result for more details and for the telemetry captured before and after it. This is a very useful feature when troubleshooting exceptions and reading logs.<br />
<a href="https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1373" src="https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-1024x683.jpg" alt="Transaction Search Application Insights.jpg" width="1024" height="683" srcset="https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-1024x683.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-300x200.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-768x512.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-1536x1024.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/TransactionSearchApplicationInsights-2048x1366.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
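<p>The same filtered view can be reproduced from the command line with a Kusto (KQL) query. This is a sketch, not part of the tutorial: the Application Insights resource name and resource group below are hypothetical, and it assumes the &#8220;application-insights&#8221; Azure CLI extension is installed.</p>

```shell
# Hypothetical resource names; replace with your own.
APP_NAME="taskstracker-ai"
RESOURCE_GROUP="tasks-tracker-rg"

# KQL equivalent of the Transaction Search filters:
# requests for a single cloud role name over the past 24 hours.
KQL='requests
| where cloud_RoleName == "tasksmanager-backend-api"
| where timestamp > ago(24h)
| order by timestamp desc'

# Requires the CLI extension: az extension add --name application-insights
# "|| true" lets the sketch run even where az is unavailable or not signed in.
az monitor app-insights query \
  --app "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --analytics-query "$KQL" || true
```

<p>The same query can also be pasted directly into the Logs panel of the Application Insights resource in the Azure Portal.</p>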
<h4>Failures and Performance Panels</h4>
<p>The Failures panel allows us to view the frequency of failures across different operations, helping us focus our efforts on those with the highest impact.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1374" src="https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights-1024x730.jpg" alt="Failures Application Insights" width="1024" height="730" srcset="https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights-1024x730.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights-300x214.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights-768x548.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights-1536x1095.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/FailuresApplicationInsights.jpg 1562w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><br />
The Performance panel displays performance details for the different operations in our system. By identifying the operations with the longest duration, we can diagnose potential problems or better target our ongoing development to improve the overall performance of the system.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights.jpg"><img loading="lazy" decoding="async" class="alignnone size-large wp-image-1375" src="https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights-1024x728.jpg" alt="Performance Application Insights" width="1024" height="728" srcset="https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights-1024x728.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights-300x213.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights-768x546.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights-1536x1093.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/PerformanceApplicationInsights.jpg 1566w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a><br />
In the next post, we will implement continuous integration and deployment to automate the process of building images, pushing them to ACR, and creating new revisions.</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/architecture/dapr-for-net-developers/observability" target="_blank" rel="noopener">The Dapr observability building block</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-overview" target="_blank" rel="noopener">OpenTelemetry Overview</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-map?tabs=net" target="_blank" rel="noopener">Application Map: Triage distributed applications</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/tutorial-performance" target="_blank" rel="noopener">Find and diagnose performance issues with Azure Application Insights</a></li>
<li><a href="https://pixabay.com/photos/network-server-system-2402637/" target="_blank" rel="noopener">Featured image credit by Bethany Drouin</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/">Azure Container Apps Monitoring and Observability with Application Insights &#8211; Part 8</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1362</post-id>	</item>
		<item>
		<title>Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</title>
		<link>https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/</link>
					<comments>https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/#comments</comments>
		
		<dc:creator><![CDATA[Taiseer Joudeh]]></dc:creator>
		<pubDate>Mon, 05 Sep 2022 01:45:03 +0000</pubDate>
				<category><![CDATA[ASP.NET 6]]></category>
		<category><![CDATA[Azure Container Apps]]></category>
		<category><![CDATA[Dapr]]></category>
		<category><![CDATA[Microservices]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Azure Storage]]></category>
		<category><![CDATA[Microservice]]></category>
		<guid isPermaLink="false">https://bitoftech.net/?p=1338</guid>

					<description><![CDATA[<p>This is the seventh part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are: Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1 Deploy backend API Microservice to Azure Container Apps &#8211; Part 2 Communication between Microservices in Azure Container Apps &#8211; Part 3 [&#8230;]</p>
<p>The post <a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This is the seventh part of Building Microservice Applications with Azure Container Apps and Dapr. The topics we’ll cover are:</p>
<ul>
<li><a href="https://bitoftech.net/2022/08/25/tutorial-building-microservice-applications-azure-container-apps-dapr/" target="_blank" rel="noopener">Tutorial for building Microservice Applications with Azure Container Apps and Dapr &#8211; Part 1</a></li>
<li><a href="https://bitoftech.net/2022/08/25/deploy-microservice-application-azure-container-apps/" target="_blank" rel="noopener">Deploy backend API Microservice to Azure Container Apps &#8211; Part 2</a></li>
<li><a href="https://bitoftech.net/2022/08/25/communication-microservices-azure-container-apps/" target="_blank" rel="noopener">Communication between Microservices in Azure Container Apps &#8211; Part 3</a></li>
<li><a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">Dapr Integration with Azure Container Apps &#8211; Part 4</a></li>
<li><a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">Azure Container Apps State Store With Dapr State Management API &#8211; Part 5</a></li>
<li><a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">Azure Container Apps Async Communication with Dapr Pub/Sub API &#8211; Part 6</a></li>
<li>Azure Container Apps with Dapr Bindings Building Block &#8211; (This Post)</li>
<li><a href="https://bitoftech.net/2022/09/09/azure-container-apps-monitoring-and-observability-with-application-insights-part-8/" target="_blank" rel="noopener">Azure Container Apps Monitoring and Observability with Application Insights – Part 8</a></li>
<li><a href="https://bitoftech.net/2022/09/13/continuous-deployment-for-azure-container-apps-using-github-actions-part-9/" target="_blank" rel="noopener">Continuous Deployment for Azure Container Apps using GitHub Actions &#8211; Part 9</a></li>
<li><a href="https://bitoftech.net/2022/09/16/use-bicep-to-deploy-dapr-microservices-apps-to-azure-container-apps-part-10/" target="_blank" rel="noopener">Use Bicep to Deploy Dapr Microservices Apps to Azure Container Apps &#8211; Part 10</a></li>
<li><a href="https://bitoftech.net/2022/09/22/azure-container-apps-auto-scaling-with-keda-part-11/" target="_blank" rel="noopener">Azure Container Apps Auto Scaling with KEDA &#8211; Part 11</a></li>
<li><a href="https://bitoftech.net/2022/10/16/azure-container-apps-volume-mounts-using-azure-files/" target="_blank" rel="noopener">Azure Container Apps Volume Mounts using Azure Files &#8211; Part 12</a></li>
<li><em>Integrate Health probes in Azure Container Apps &#8211; Part 13</em></li>
</ul>
<h2>Azure Container Apps with Dapr Bindings Building Block</h2>
<p>In this post, we are going to extend the backend background processor service named &#8220;ACA-Processor Backend&#8221; which we created in the <a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">previous post</a>. We will rely on <a href="https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/" target="_blank" rel="noopener">Dapr Input and Output Bindings</a> to achieve the following three scenarios:</p>
<ul>
<li>Trigger a process on the &#8220;ACA-Processor Backend&#8221; based on a <strong>configurable interval schedule</strong>. This implements a background worker that wakes up at a regular interval, checks whether created tasks are overdue, marks them as overdue, and stores the updated state in Azure Cosmos DB.</li>
<li>Trigger a process on the &#8220;ACA-Processor Backend&#8221; based on a <strong>message sent to a specific Azure Storage Queue</strong>. This is a fictitious scenario: we will assume that this Azure Storage Queue belongs to an external system, that external clients can submit tasks to it, and that our &#8220;ACA-Processor Backend&#8221; is configured to trigger a certain process when a new message is received.</li>
<li>From the &#8220;ACA-Processor Backend&#8221; service we will <strong>invoke an external resource</strong> to store the content of each incoming task from the external queue as a JSON blob file in Azure Blob Storage.</li>
</ul>
<p>Let&#8217;s take a look at the high-level architecture diagram below to understand the flow of input and output bindings in Dapr:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class="alignnone wp-image-1340 size-large" src="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding-1024x367.jpg" alt="Dapr Bindings Input and Output" width="1024" height="367" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding-1024x367.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding-300x107.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding-768x275.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-SimpleBinding.jpg 1517w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<p>Let&#8217;s assume that there is an external system (outside of our Tasks Tracker microservice application) that needs to integrate with our Tasks Tracker application. This external system publishes messages to an Azure Storage Queue containing information about tasks that need to be stored and maintained in our Tasks Tracker application, so our system needs to react when a message is added to the queue.<br />
To achieve this in a simple way, and without writing a lot of plumbing code to access the Azure Storage Queue, our system will expose an event handler (aka Input Binding) that receives and processes the messages arriving on the storage queue. Once the processing of a message completes and we store the task in Cosmos DB, our system will trigger an event (aka Output Binding) that invokes a fictitious external service which stores the content of the message in an Azure Blob Storage container.</p>
<p>Note: When I started looking at the Dapr Bindings Building Block, I noticed a lot of similarities with the Pub/Sub Building Block we covered in the <a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">previous post</a>. But remember that the Pub/Sub Building Block is meant for async communication between services <strong>within your solution</strong>, while the Bindings Building Block has a wider scope: it mainly focuses on connectivity and interoperability across different systems, disparate applications, and services outside the boundaries of your own application. Visit this link for the full list of <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/" target="_blank" rel="noopener">supported bindings</a>.</p>
<h3>Overview of Dapr Bindings Building Block</h3>
<p>Let&#8217;s take a look at the detailed Dapr Binding Building Block architecture diagram that we are going to implement in this post to fulfill the use case we discussed earlier:</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class="alignnone wp-image-1342 size-large" src="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding-1024x866.jpg" alt="Dapr Bindings Azure Storage" width="1024" height="866" srcset="https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding-1024x866.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding-300x254.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding-768x649.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/ACA-Tutorial-DetailedBinding.jpg 1430w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>Looking at the diagram we notice the following:</p>
<ul>
<li>In order to receive events and data from the external resource (Azure Storage Queue), our &#8220;ACA-Processor Backend&#8221; service needs to register a public endpoint that will become an event handler.</li>
<li>The binding between the external resource and our service is configured using the &#8220;Input Binding Configuration&#8221; YAML file. The Dapr sidecar of the background service reads the configuration and subscribes to the external resource defined in it; in our case, a specific Azure Storage Queue.</li>
<li>When a message is published to the storage queue; the input binding component running in the Dapr sidecar picks it up and triggers the event.</li>
<li>The Dapr sidecar invokes the endpoint (the event handler defined in the &#8220;ACA-Processor Backend&#8221; service) configured for the binding. In our case, it is an endpoint that can be reached by sending a POST operation to <code>http://localhost:3502/ExternalTasksProcessor/Process</code>, and the request body content will be the JSON payload of the message published to the Azure Storage Queue.</li>
<li>When the event is handled in our &#8220;ACA-Processor Backend&#8221; and the business logic completes, the endpoint needs to return an HTTP 200 OK response to acknowledge that processing is complete. If the event handling does not complete, or an error occurs, the endpoint should return an HTTP 400 or 500 status code.</li>
<li>In order to enable our service &#8220;ACA-Processor Backend&#8221; to trigger an event that invokes an external resource, we need to use the &#8220;Output Binding Configuration Yaml&#8221; file to configure the relation between our service and the external resource (Azure Blob Storage) and how to connect to it.</li>
<li>Once the Dapr sidecar reads the binding configuration file, our service can trigger an event that invokes the output binding API on the Dapr sidecar, in our case, the event will be creating a new blob file containing the content of the message we read from the Azure Storage Queue.</li>
<li>With this in place, our &#8220;ACA-Processor Backend&#8221; service is ready to invoke the external resource by sending a POST operation to the endpoint 
			<code>http://localhost:3502/v1.0/bindings/ExternalTasksBlobstore</code> with the JSON payload below, or we can use the Dapr client SDK to invoke this output binding and store the file in Azure Blob Storage:<br />
<pre class="urvanov-syntax-highlighter-plain-tag">{
  "data": {
        "taskName": "Health Readiness Task",
        "taskAssignedTo": "tayseer_joudeh@hotmail.com",
        "taskCreatedBy": "tjoudeh@bitoftech.net",
        "taskDueDate": "2022-08-19T12:45:22.0983978Z"
  },
  "operation": "create"
}</pre>
</li>
</ul>
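<p>The output binding call described in the last bullet can be sketched with plain curl against the Dapr sidecar HTTP API. This is a sketch, assuming the sidecar listens on port 3502 (as in this series) and the binding component is named &#8220;externaltasksblobstore&#8221;; the blob file name passed in &#8220;metadata&#8221; is hypothetical.</p>

```shell
# Sketch of invoking a Dapr output binding over plain HTTP.
# 3502 is the Dapr sidecar HTTP port used throughout this series;
# the binding name must match the component configuration file.
PAYLOAD='{
  "operation": "create",
  "metadata": { "blobName": "health-readiness-task.json" },
  "data": {
    "taskName": "Health Readiness Task",
    "taskAssignedTo": "tayseer_joudeh@hotmail.com",
    "taskCreatedBy": "tjoudeh@bitoftech.net",
    "taskDueDate": "2022-08-19T12:45:22.0983978Z"
  }
}'

# The call only succeeds when a Dapr sidecar is running locally;
# "|| true" keeps the sketch from aborting otherwise.
curl -s -X POST "http://localhost:3502/v1.0/bindings/externaltasksblobstore" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```

<p>Later in this post we will trigger the same operation from C# using the Dapr client SDK instead of raw HTTP.</p>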
<p>Let&#8217;s now update our backend background processor project and define the input and output binding configuration files and event handlers.</p>
<p>To proceed with this tutorial, we need to provision the fictitious external service (an Azure Storage Account) so it can start receiving messages published to a queue, and we will use the same storage account to store blob files as an external event. To do so, run the PowerShell script below to create the Azure Storage Account and list its access keys.</p><pre class="urvanov-syntax-highlighter-plain-tag">$STORAGE_ACCOUNT_NAME = "&lt;replace with unique storage name&gt;"
  
az storage account create `
--name $STORAGE_ACCOUNT_NAME `
--resource-group $RESOURCE_GROUP `
--location $LOCATION `
--sku Standard_LRS `
--kind StorageV2
  
# list azure storage keys
az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT_NAME</pre><p></p>
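<p>The component files we will create shortly assume a queue named &#8220;external-tasks-queue&#8221; and a blob container named &#8220;externaltaskscontainer&#8221;. A minimal sketch of creating both with the Azure CLI (assuming you are signed in and the storage account above has been created):</p>

```shell
# Names match the Dapr component files used later in this post.
STORAGE_ACCOUNT_NAME="<replace with your storage account name>"
QUEUE_NAME="external-tasks-queue"
CONTAINER_NAME="externaltaskscontainer"

# "|| true" keeps the sketch from aborting where az is
# not installed or not signed in.
az storage queue create \
  --name "$QUEUE_NAME" \
  --account-name "$STORAGE_ACCOUNT_NAME" || true

az storage container create \
  --name "$CONTAINER_NAME" \
  --account-name "$STORAGE_ACCOUNT_NAME" || true
```

<p>You can equally create the queue and container from the Azure Portal or Azure Storage Explorer, as we do later in this post.</p>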
<h3>Updating the Backend Background Processor Project</h3>
<h4>Step 1: Create an event handler (API endpoint) to respond to messages published to Azure Storage Queue</h4>
<p>Let&#8217;s add an endpoint that will be responsible for handling the event when a message is published to the Azure Storage Queue; this endpoint will receive the messages published from the external service. To do so, add a new controller named &#8220;ExternalTasksProcessorController.cs&#8221; under the &#8220;Controllers&#8221; folder and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">using Dapr.Client;
using Microsoft.AspNetCore.Mvc;

namespace TasksTracker.Processor.Backend.Svc.Controllers
{
    [Route("ExternalTasksProcessor")]
    [ApiController]
    public class ExternalTasksProcessorController : ControllerBase
    {
        private readonly ILogger&lt;ExternalTasksProcessorController&gt; _logger;
        private readonly DaprClient _daprClient;

        public ExternalTasksProcessorController(ILogger&lt;ExternalTasksProcessorController&gt; logger,
                                                DaprClient daprClient)
        {
            _logger = logger;
            _daprClient = daprClient;
        }

        [HttpPost("process")]
        public async Task&lt;IActionResult&gt; ProcesseTaskAndStore([FromBody] TaskModel taskModel)
        {
            try
            {
                _logger.LogInformation("Started processing external task message from storage queue. Task Name: '{0}'", taskModel.TaskName);

                taskModel.TaskId = Guid.NewGuid();
                taskModel.TaskCreatedOn = DateTime.UtcNow;

                //Dapr SideCar Invocation (save task to a state store)
                await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/tasks", taskModel);

                _logger.LogInformation("Saved external task to the state store successfully. Task name: '{0}', Task Id: '{1}'", taskModel.TaskName, taskModel.TaskId);

		      //ToDo: code to invoke external binding and store queue message content into a blob file in Azure Storage

                return Ok();
            }
            catch (Exception)
            {
                throw;
            }
        }
    }
}</pre><p>What we have added here is simple: we defined an action method named &#8220;ProcesseTaskAndStore&#8221; which can be reached by sending an HTTP POST operation to the endpoint &#8220;ExternalTasksProcessor/Process&#8221;. This action method accepts the TaskModel in the request body as a JSON payload; this is what will be received from the external service (Azure Storage Queue). Within this action method, we store the received task in Cosmos DB using the Dapr state store API covered in <a href="https://bitoftech.net/2022/08/29/azure-container-apps-state-store-with-dapr-state-management-api/" target="_blank" rel="noopener">this post</a>, and then we return 200 OK to acknowledge that the message was processed successfully and can be removed from the external service&#8217;s queue.</p>
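<p>Before wiring up the Dapr input binding, we can sanity-check the event handler on its own by POSTing a sample payload directly to it, exactly as the Dapr sidecar will later do. This is a sketch: the local port 5071 is an assumption, so replace it with the port your service actually listens on.</p>

```shell
# Simulate the Dapr sidecar delivering a queue message to the event handler.
# 5071 is a hypothetical local port for the ACA-Processor Backend service.
BODY='{
  "taskName": "Task from External System",
  "taskAssignedTo": "tayseer_joudeh@hotmail.com",
  "taskCreatedBy": "tjoudeh@bitoftech.net",
  "taskDueDate": "2022-08-19T12:45:22.0983978Z"
}'

# "|| true" keeps the sketch from aborting when the service is not running.
curl -s -X POST "http://localhost:5071/ExternalTasksProcessor/Process" \
  -H "Content-Type: application/json" \
  -d "$BODY" || true
```

<p>A 200 OK response confirms the handler can deserialize the payload and save the task before we add the binding configuration.</p>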
<h4>Step 2: Create Dapr Input Binding Component file</h4>
<p>Now we need to create the component configuration file that describes how our backend background processor will start handling events coming from the external service (Azure Storage Queues). To do so, add a new file named &#8220;dapr-bindings-in-storagequeue.yaml&#8221; under the &#8220;components&#8221; folder and paste in the below:</p><pre class="urvanov-syntax-highlighter-plain-tag">apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: externaltasksmanager
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: storageAccount
    value: "taskstracker"
  - name: storageAccessKey
    value: "" 
  - name: queue
    value: "external-tasks-queue"
  - name: decodeBase64
    value: "true"
  - name: route
    value: /externaltasksprocessor/process</pre><p>The full specification of the YAML file for Azure Storage Queues can be found at <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/storagequeues/" target="_blank" rel="noopener">this link</a>, but let&#8217;s go over the configuration we have added here:</p>
<ul>
<li>The type of binding is &#8220;bindings.azure.storagequeues&#8221;.</li>
<li>The name of this input binding is &#8220;externaltasksmanager&#8221;.</li>
<li>We are setting the &#8220;storageAccount&#8221; name, &#8220;storageAccessKey&#8221; value, and the &#8220;queue&#8221; name. Those properties will describe how the event handler we added can connect to the external service. You can create any queue you prefer on the Azure Storage Account we created to simulate an external system.</li>
<li>We are setting the &#8220;route&#8221; property to the value &#8220;/externaltasksprocessor/process&#8221;, which is the path of the endpoint we have just added, so POST requests are sent to it.</li>
<li>We are setting the property &#8220;decodeBase64&#8221; to &#8220;true&#8221; as the message queued in the Azure Storage Queue is Base64 encoded.</li>
</ul>
<h4>Step 3: Create Dapr Output Binding Component file</h4>
<p>Now we need to create the component configuration file that describes how our backend background processor will invoke the external service (Azure Blob Storage) and create a JSON file containing the content of the message received from the Azure Storage Queue. To do so, add a new file named &#8220;dapr-bindings-out-blobstorage.yaml&#8221; under the &#8220;components&#8221; folder and paste in the below:</p><pre class="urvanov-syntax-highlighter-plain-tag">apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: externaltasksblobstore
spec:
  type: bindings.azure.blobstorage
  version: v1
  metadata:
  - name: storageAccount
    value: "taskstracker"
  - name: storageAccessKey
    value: "" 
  - name: container
    value: "externaltaskscontainer"
  - name: decodeBase64
    value: false</pre><p>The full specification of the YAML file for Azure Blob Storage can be found at <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/blobstorage/" target="_blank" rel="noopener">this link</a>, but let&#8217;s go over the configuration we have added here:</p>
<ul>
<li>The type of binding is &#8220;bindings.azure.blobstorage&#8221;.</li>
<li>The name of this output binding is &#8220;externaltasksblobstore&#8221;. We will use this name when we use the Dapr SDK to trigger the output binding.</li>
<li>We are setting the &#8220;storageAccount&#8221; name, &#8220;storageAccessKey&#8221; value, and the &#8220;container&#8221; name. These properties describe how our backend background service will connect to the external service and create a blob file. We will assume that a container named &#8220;externaltaskscontainer&#8221; has already been created on the external service; all the JSON blob files we create will live under this container.</li>
<li>We are setting the property &#8220;decodeBase64&#8221; to &#8220;false&#8221; as we don&#8217;t want to Base64 encode the file content; we need to store the content as is.</li>
</ul>
<h4>Step 4: Use Dapr client SDK to invoke the output binding</h4>
<p>Now we need to invoke the output binding using the .NET SDK. To do so, open the file named &#8220;ExternalTasksProcessorController.cs&#8221; and update the action method code as below:</p><pre class="urvanov-syntax-highlighter-plain-tag">[HttpPost("process")]
        public async Task&lt;IActionResult&gt; ProcesseTaskAndStore([FromBody] TaskModel taskModel)
        {
            try
            {
                _logger.LogInformation("Started processing external task message from storage queue. Task Name: '{0}'", taskModel.TaskName);

                taskModel.TaskId = Guid.NewGuid();
                taskModel.TaskCreatedOn = DateTime.UtcNow;

                //Dapr SideCar Invocation (save task to a state store)
                await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/tasks", taskModel);

                _logger.LogInformation("Saved external task to the state store successfully. Task name: '{0}', Task Id: '{1}'", taskModel.TaskName, taskModel.TaskId);

                //Invoke the output binding to store the queue message content as a blob file in Azure Storage
                IReadOnlyDictionary&lt;string, string&gt; metaData = new Dictionary&lt;string, string&gt;()
                {
                    { "blobName", $"{taskModel.TaskId}.json" },
                };

                await _daprClient.InvokeBindingAsync("externaltasksblobstore", "create", taskModel, metaData);

                _logger.LogInformation("Invoked output binding 'externaltasksblobstore' for external task. Task name: '{0}', Task Id: '{1}'", taskModel.TaskName, taskModel.TaskId);

                return Ok();
            }
            catch (Exception)
            {
                throw;
            }
        }</pre><p>Looking at the code above, you will see that we are calling the method &#8220;InvokeBindingAsync&#8221;, passing the binding name &#8220;externaltasksblobstore&#8221; defined in the configuration file; the second parameter, &#8220;create&#8221;, is the operation we want to perform against the external blob storage. You can, for example, also delete or get the content of a certain file. For the full list of supported operations on Azure Blob Storage, <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/blobstorage/#binding-support" target="_blank" rel="noopener">visit this link</a>.</p>
<p>Notice how we are setting the name of the file stored in the external service: we want the file name to match the task identifier, so all we need to do is pass the key &#8220;blobName&#8221; with the file name as the value in the &#8220;metaData&#8221; dictionary.</p>
<h4>Step 5: Test Dapr bindings locally</h4>
<p>Now we are ready to give this an end-to-end test on our dev machines. To do so, run the three applications together using the Debug and Run button in VS Code. You can read how we configured the three apps to run together in <a href="https://bitoftech.net/2022/08/29/dapr-integration-with-azure-container-apps/" target="_blank" rel="noopener">this post</a>.</p>
<p>Open <a href="https://azure.microsoft.com/en-us/products/storage/storage-explorer/#overview" target="_blank" rel="noopener">Azure Storage Explorer</a>; if you don&#8217;t have it, you can install it from <a href="https://azure.microsoft.com/en-us/products/storage/storage-explorer/#overview" target="_blank" rel="noopener">here</a>. Log in to your Azure subscription, navigate to the storage account we already created, and create a queue using the same name you used in the Dapr input binding configuration file.</p>
<p>The content of the message that the Azure Storage Queue expects should be as below, so try to queue a new message using the tool, as shown in the image below:</p><pre class="urvanov-syntax-highlighter-plain-tag">{
  "taskName": "Task from External System",
  "taskAssignedTo": "tayseer_joudeh@hotmail.com",
  "taskCreatedBy": "tjoudeh@bitoftech.net",
  "taskDueDate": "2022-08-19T12:45:22.0983978Z"
}</pre><p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageQueue.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class="alignnone wp-image-1346" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageQueue-651x1024.jpg" alt="Azure Storage Queue" width="374" height="589" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageQueue-651x1024.jpg 651w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageQueue-191x300.jpg 191w, https://bitoftech.net/wp-content/uploads/2022/09/AzureStorageQueue.jpg 763w" sizes="auto, (max-width: 374px) 100vw, 374px" /></a></p>
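<p>One detail worth noting: the input binding configuration sets &#8220;decodeBase64&#8221; to &#8220;true&#8221;, so the message body placed on the queue must be Base64-encoded (Azure Storage Explorer can do this encoding for you when queuing the message). An illustrative Python sketch of what the external system would enqueue:</p>

```python
import base64
import json

# The JSON payload an external system wants to enqueue
payload = {
    "taskName": "Task from External System",
    "taskAssignedTo": "tayseer_joudeh@hotmail.com",
    "taskCreatedBy": "tjoudeh@bitoftech.net",
    "taskDueDate": "2022-08-19T12:45:22.0983978Z",
}

# Base64-encode the body because the binding is configured with decodeBase64: "true";
# Dapr decodes it back to the original JSON before delivering it to our endpoint
message_body = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")
print(message_body[:20] + "...")
```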
<p>If all is configured correctly, you should see a JSON file created as a blob in the Azure Storage container named &#8220;externaltaskscontainer&#8221;, based on your configuration.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class="alignnone wp-image-1347" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage-1024x278.jpg" alt="Azure Blob Storage Container Apps" width="807" height="219" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage-1024x278.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage-300x81.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage-768x208.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage-1536x416.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/AzureBlobStorage.jpg 1549w" sizes="auto, (max-width: 807px) 100vw, 807px" /></a></p>
<h4>Step 6: Create Input and Output Binding Component files matching Azure Container Apps Specs</h4>
<p>Go ahead and add a new file named &#8220;containerapps-bindings-in-storagequeue.yaml&#8221; under the folder &#8220;aca-components&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">componentType: bindings.azure.storagequeues
version: v1
metadata:
- name: storageAccount
  value: "taskstracker"
- name: storageAccessKey
  secretRef: storagekey
- name: queue
  value: "external-tasks-queue"
- name: decodeBase64
  value: "true"
- name: route
  value: /externaltasksprocessor/process        
secrets:
- name: storagekey
  value: "&lt;value&gt;"
scopes:
- tasksmanager-backend-processor</pre><p>The properties of this file match the ones used in the Dapr component-specific file; it is a component of type &#8220;bindings.azure.storagequeues&#8221;.<br />
The only difference is that we are using &#8220;secretRef&#8221; when setting the &#8220;storageAccessKey&#8221;, and we will set the actual value from the Azure Portal after we add this Dapr input binding component to the Azure Container Apps environment.</p>
<p>Let&#8217;s add a new file named &#8220;containerapps-bindings-out-blobstorage.yaml&#8221; under the folder &#8220;aca-components&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">componentType: bindings.azure.blobstorage
version: v1
metadata:
- name: storageAccount
  value: "taskstracker"
- name: storageAccessKey
  secretRef: storagekey
- name: container
  value: "externaltaskscontainer"
- name: decodeBase64
  value: "false"
- name: publicAccessLevel
  value: "none"    
secrets:
- name: storagekey
  value: "&lt;value&gt;"
scopes:
- tasksmanager-backend-processor</pre><p>The properties of this file match the ones used in the Dapr component-specific file; it is a component of type &#8220;bindings.azure.blobstorage&#8221;.<br />
The only difference is that we are using &#8220;secretRef&#8221; when setting the &#8220;storageAccessKey&#8221;, and we will set the actual value from the Azure Portal after we add this Dapr output binding component to the Azure Container Apps environment.</p>
<p>With those changes in place, we are ready to rebuild the backend background processor container image, update the Azure Container Apps environment, and redeploy a new revision. But first I want to add one small piece and introduce a special type of input binding: <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/cron/" target="_blank" rel="noopener">Cron Jobs</a>. So let&#8217;s do that before deploying.</p>
<h3>Overview of Cron Input Binding</h3>
<p>The Cron binding is a special type of input binding: it doesn&#8217;t subscribe to events coming from an external system; instead, it can be used to trigger application code in our service periodically, based on a configurable interval. For example, if we want certain code to scan all the tasks in the system every 4 hours and mark the ones that are overdue, the Cron binding is well suited for this.</p>
<h4>Step 1: Add Cron binding configuration</h4>
<p>The first step in configuring the Cron binding is to add a component file that describes which code needs to be triggered and at which interval. To do so, add a new file named &#8220;dapr-scheduled-cron.yaml&#8221; under the folder &#8220;components&#8221; and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ScheduledTasksManager
  namespace: default
spec:
  type: bindings.cron
  version: v1
  metadata:
  - name: schedule
    value: "@every 10m"
scopes:
- tasksmanager-backend-processor</pre><p>What we have done here is the following:</p>
<ul>
<li>Added new input binding of type &#8220;bindings.cron&#8221;</li>
<li>Provided the name &#8220;ScheduledTasksManager&#8221; for this binding. This means an HTTP POST endpoint at the route &#8220;/ScheduledTasksManager&#8221; should be added, as it will be invoked whenever the job is triggered on the configured Cron interval.</li>
<li>Set the interval for this Cron job to trigger every 10 minutes. For full details and available options on how to set this value, visit the <a href="https://docs.dapr.io/reference/components-reference/supported-bindings/cron/#schedule-format" target="_blank" rel="noopener">Cron binding documentation</a>.</li>
</ul>
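<p>For reference, the &#8220;schedule&#8221; value accepts both @every shortcuts and cron expressions, per the Cron binding documentation linked above; a few illustrative values:</p>

```yaml
- name: schedule
  value: "@every 10m"     # every 10 minutes (the value used above)
# value: "@hourly"        # shortcut: once an hour
# value: "0 30 * * * *"   # cron expression: every hour, on the half hour
```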
<h4>Step 2: Add Cron binding configuration matching Azure Container Apps Specs</h4>
<p>Now we will add a new file named &#8220;containerapps-scheduled-cron.yaml&#8221; under the folder &#8220;aca-components&#8221;; this file will be used when we update the Azure Container Apps environment to enable this binding. Use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">componentType: bindings.cron
version: v1
metadata:
- name: schedule
  value: "@every 6h"
# Application scopes  
scopes:
- tasksmanager-backend-processor</pre><p>Note that the name of the binding is not part of the file metadata; we are going to set the name of the binding to the value &#8220;ScheduledTasksManager&#8221; when we update the Azure Container Apps environment. Note also that the schedule here is every 6 hours, rather than the 10-minute interval we used locally.</p>
<h4>Step 3: Add the endpoint which will be invoked by Cron binding</h4>
<p>As we saw in the previous steps, the Cron job configuration is very simple. We now need to add an endpoint that accepts a POST request when the Cron job is triggered; to do so, add a new file named &#8220;ScheduledTasksManagerController.cs&#8221; under the project &#8220;TasksTracker.Processor.Backend.Svc&#8221; and use the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">namespace TasksTracker.Processor.Backend.Svc.Controllers
{
    [Route("ScheduledTasksManager")]
    [ApiController]
    public class ScheduledTasksManagerController : ControllerBase
    {
        private static string STORE_NAME = "periodicjobstatestore";
        private static string WATERMARK_KEY = "PeriodicSvcWatermark";
        private readonly ILogger&lt;ScheduledTasksManagerController&gt; _logger;
        private readonly DaprClient _daprClient;

        public ScheduledTasksManagerController(ILogger&lt;ScheduledTasksManagerController&gt; logger,
                                                DaprClient daprClient)
        {
            _logger = logger;
            _daprClient = daprClient;
        }

        [HttpPost]
        public async Task CheckOverDueTasksJob()
        {
            var currentWatermark = DateTime.UtcNow;

            _logger.LogInformation($"ScheduledTasksManager::Timer Services triggered at: {currentWatermark}");

            var overdueTasksList = new List&lt;TaskModel&gt;();

            var waterMark = await _daprClient.GetStateAsync&lt;DateTime&gt;(STORE_NAME, WATERMARK_KEY);
            _logger.LogInformation($"ScheduledTasksManager::reading watermark from state store, watermark value: {waterMark}");

            var tasksList = await _daprClient.InvokeMethodAsync&lt;List&lt;TaskModel&gt;&gt;(HttpMethod.Get, "tasksmanager-backend-api", $"api/overduetasks?waterMark={waterMark}");
            _logger.LogInformation($"ScheduledTasksManager::completed query state store for tasks, retrieved tasks count: {tasksList.Count()}");

            foreach (var taskModel in tasksList)
            {
                if (currentWatermark.Date &gt; taskModel.TaskDueDate.Date)
                {
                    overdueTasksList.Add(taskModel);
                }
            }

            if (overdueTasksList.Count &gt; 0)
            {
                _logger.LogInformation($"ScheduledTasksManager::marking {overdueTasksList.Count()} as overdue tasks");
                await _daprClient.InvokeMethodAsync(HttpMethod.Post, "tasksmanager-backend-api", $"api/overduetasks/markoverdue", overdueTasksList);
            }

            _logger.LogInformation($"ScheduledTasksManager::storing watermark to state store, watermark value: {currentWatermark}");
            await _daprClient.SaveStateAsync(STORE_NAME, WATERMARK_KEY, currentWatermark);
        }
    }
}</pre><p>Let&#8217;s highlight what we have added to this controller:</p>
<ul>
<li>A new action method named &#8220;CheckOverDueTasksJob&#8221; contains the business logic that will be triggered by the Cron job configuration at the configured interval.</li>
<li>I&#8217;m using a watermark value that holds the timestamp of the last run of this job. The watermark is stored in a persistent store (in my case, as a blob file) using the Dapr State Management API on Azure Blob Storage; you could use an output binding for this as well. This is the beauty of Dapr building blocks: they are flexible, and you can choose different ways to achieve the same thing. We will add the state management Dapr component file in the next step.</li>
<li>I have added 2 methods named &#8220;GetTasksByTime&#8221; and &#8220;MarkOverdueTasks&#8221; to the interface &#8220;ITasksManager&#8221; in project &#8220;TasksTracker.TasksManager.Backend.Api&#8221;. Those methods are called via the Dapr Service Invocation Building Block. You can see the complete code of method &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/667bfeb28a9ed68e5221344d5fbbaf986ab2c580/TasksTracker.TasksManager.Backend.Api/Services/TasksStoreManager.cs#L121" target="_blank" rel="noopener">GetTasksByTime</a>&#8221; and method &#8220;<a href="https://github.com/tjoudeh/TasksTracker.ContainerApps/blob/667bfeb28a9ed68e5221344d5fbbaf986ab2c580/TasksTracker.TasksManager.Backend.Api/Services/TasksStoreManager.cs#L162" target="_blank" rel="noopener">MarkOverdueTasks</a>&#8221; by visiting the links.</li>
</ul>
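<p>The overdue check above can be sketched in a few lines (illustrative Python, not the app&#8217;s C# code; the dictionary keys simply mirror the TaskModel property names): compare each task&#8217;s due date, by calendar day, against the timestamp of the current run.</p>

```python
from datetime import datetime, timezone

def find_overdue(tasks, current_watermark):
    """Return the tasks whose due date (compared by calendar day)
    falls strictly before the current run's date."""
    return [t for t in tasks if current_watermark.date() > t["taskDueDate"].date()]

now = datetime(2022, 9, 5, tzinfo=timezone.utc)
tasks = [
    {"taskName": "write part 7", "taskDueDate": datetime(2022, 9, 1, tzinfo=timezone.utc)},
    {"taskName": "record demo",  "taskDueDate": datetime(2022, 9, 6, tzinfo=timezone.utc)},
]

overdue = find_overdue(tasks, now)
print([t["taskName"] for t in overdue])  # ['write part 7']
```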
<h4>Step 4: Add Azure Blob Storage State Store Components to store and retrieve Watermark</h4>
<p>Add a new file named &#8220;dapr-statestore-blobstorage-periodic.yaml&#8221; under the folder &#8220;components&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: periodicjobstatestore
spec:
  type: state.azure.blobstorage
  version: v1
  metadata:
  - name: accountName
    value: "taskstracker"
  - name: accountKey
    value: ""
  - name: containerName
    value: "periodicjobcontainer"</pre><p>Next, we need to add a component file that matches Azure Container Apps Specs, so add a new file named &#8220;containerapps-statestore-blobstorage-periodic.yaml&#8221; under the folder &#8220;aca-components&#8221; and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">componentType: state.azure.blobstorage
version: v1
metadata:
- name: accountName
  value: "taskstracker"
- name: accountKey
  secretRef: storagekey
- name: containerName
  value: "periodicjobcontainer"
secrets:
- name: storagekey
  value: ""
scopes:
- tasksmanager-backend-processor</pre><p>What we have done here should be familiar by now: we have added a state store component file of type &#8220;state.azure.blobstorage&#8221;. This component file describes to our service how it will store blob files on the storage account named &#8220;taskstracker&#8221; under the container &#8220;periodicjobcontainer&#8221;. We will set the actual &#8220;storagekey&#8221; secret value from the Azure Portal after we update the Azure Container Apps environment.</p>
<h3>Deploy the Backend Background Processor and the Backend API Projects to Azure Container Apps</h3>
<h4>Step 1: Build the Backend Background Processor and the Backend API App images and push them to ACR</h4>
<p>As we have done previously, we need to build both app images and push them to ACR so they are ready to be deployed to Azure Container Apps. To do so, continue using the same PowerShell console and paste the code below (make sure you are in the directory &#8220;TasksTracker.ContainerApps&#8221;):</p><pre class="urvanov-syntax-highlighter-plain-tag">az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_API_NAME" --file 'TasksTracker.TasksManager.Backend.Api/Dockerfile' .

az acr build --registry $ACR_NAME --image "tasksmanager/$BACKEND_SVC_NAME" --file 'TasksTracker.Processor.Backend.Svc/Dockerfile' .</pre><p></p>
<h4>Step 2: Add 4 Dapr Components to Azure Container Apps Environment</h4>
<p>We need to run the commands below to add the 4 component files we have defined in this post; to do so, use the same PowerShell console and paste the code below:</p><pre class="urvanov-syntax-highlighter-plain-tag">##Input binding component for Azure Storage Queue
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name externaltasksmanager `
  --yaml '.\aca-components\containerapps-bindings-in-storagequeue.yaml'

##Output binding component for Azure Blob Storage
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name externaltasksblobstore `
  --yaml '.\aca-components\containerapps-bindings-out-blobstorage.yaml'
 
##Cron binding component
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name scheduledtasksmanager `
  --yaml '.\aca-components\containerapps-scheduled-cron.yaml'

##State Store component for Azure Blob Storage
az containerapp env dapr-component set `
  --name $ENVIRONMENT --resource-group $RESOURCE_GROUP `
  --dapr-component-name periodicjobstatestore `
  --yaml '.\aca-components\containerapps-statestore-blobstorage-periodic.yaml'</pre><p>Once the components are added to the Azure Container Apps environment, don&#8217;t forget to set the secret values referenced by &#8220;secretRef&#8221; from the Azure Portal, where needed. It will be similar to what we have done previously in this <a href="https://bitoftech.net/2022/09/02/azure-container-apps-async-communication-with-dapr-pub-sub-api-part-6/" target="_blank" rel="noopener">post</a>.</p>
<h4>Step 3: Deploy new revisions of the Backend API and Backend Background Processor to Azure Container Apps</h4>
<p>As we&#8217;ve done multiple times, we need to update the Azure Container Apps hosting the Backend API &amp; Backend Background Processor with a new revision so our code changes are available to end users. To do so, run the PowerShell script below:</p><pre class="urvanov-syntax-highlighter-plain-tag">## Update Backend API App container app and create a new revision
az containerapp update `
  --name $BACKEND_API_NAME `
  --resource-group $RESOURCE_GROUP `
  --revision-suffix v20220829-1 `
  --cpu 0.25 --memory 0.5Gi `
  --min-replicas 1 `
  --max-replicas 2

## Update Backend Background Processor container app and create a new revision
az containerapp update `
  --name $BACKEND_SVC_NAME `
  --resource-group $RESOURCE_GROUP `
  --revision-suffix v20220829-1 `
  --cpu 0.25 --memory 0.5Gi `
  --min-replicas 1 `
  --max-replicas 5</pre><p>With those changes in place and deployed, open the log streams of the container app hosting the &#8220;ACA-Processor-Backend&#8221; from the Azure Portal and check the logs generated after queuing a message into the Azure Storage Queue as an external system. You should see logs similar to the ones below.</p>
<p><a href="https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs.jpg" target="_blank" rel="noopener"><img loading="lazy" decoding="async" class="alignnone wp-image-1349 size-large" src="https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs-1024x462.jpg" alt="Azure Container Apps Logs" width="1024" height="462" srcset="https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs-1024x462.jpg 1024w, https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs-300x135.jpg 300w, https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs-768x346.jpg 768w, https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs-1536x693.jpg 1536w, https://bitoftech.net/wp-content/uploads/2022/09/AzureContainerAppsLogs.jpg 1780w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></a></p>
<p>That&#8217;s it for now; this post turned out to be a bit lengthy <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> stay tuned and happy coding!</p>
<h4>The <a href="https://github.com/tjoudeh/TasksTracker.ContainerApps" target="_blank" rel="noopener">source code</a> for this tutorial is available on GitHub. You can check the <a href="https://tasksmanager-frontend-webapp.agreeablestone-8c14c04c.eastus.azurecontainerapps.io/" target="_blank" rel="noopener">demo application</a> too.</h4>
<h3>Follow me on Twitter <a style="color: #ed702b;" title="Taiseer Joudeh Twitter" href="http://twitter.com/tjoudeh" target="_blank" rel="noopener">@tjoudeh</a></h3>
<h4>References:</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/architecture/dapr-for-net-developers/bindings" target="_blank" rel="noopener">The Dapr bindings building block</a></li>
</ul>
<p>The post <a href="https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/">Azure Container Apps with Dapr Bindings Building Block &#8211; Part 7</a> appeared first on <a href="https://bitoftech.net">Bit of Technology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://bitoftech.net/2022/09/05/azure-container-apps-with-dapr-bindings-building-block/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1338</post-id>	</item>
	</channel>
</rss>
