<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0"><channel><title>Surinder Bhomra's RSS</title><description><![CDATA[A Web and Software Developer based in Oxford delivering CMS and custom build solutions to a wide variety of high profile clients.]]></description><link>https://www.surinderbhomra.com</link><generator>GatsbyJS</generator><lastBuildDate>Tue, 31 Mar 2026 17:50:43 GMT</lastBuildDate><xhtml:meta xmlns:xhtml="http://www.w3.org/1999/xhtml" content="noindex" name="robots"/><item><title><![CDATA[Is AI Coding The False Messiah?]]></title><link>https://www.surinderbhomra.com/Blog/2026/03/29/Is-AI-Coding-False-Messiah</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2026/03/29/Is-AI-Coding-False-Messiah</guid><pubDate>Sun, 29 Mar 2026 19:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently started using Claude Code for my development work, and I have to admit, it is without a doubt the most powerful AI coding assistant I have ever tried. It sets an entirely new precedent for what agentic AI can achieve. The sheer competence of it caught me off-guard. You connect it to your project, give it a prompt, and it effortlessly navigates through all the files it has access to, producing working logic in seconds.&lt;/p&gt;
&lt;p&gt;Watching it complete tasks in minutes that would normally take hours leaves you constantly wanting more, pulling you into a different mindset where you realise that truly anything is now possible.&lt;/p&gt;
&lt;p&gt;This exact power has triggered a profound doubt. When AI can handle the intricate architecture and the tedious implementation with little to no thought required on my part, the following question emerges: What use is there for me as a developer?&lt;/p&gt;
&lt;p&gt;It reminds me of a scene from the very first episode of the popular 80s classic, &lt;a href="https://en.wikipedia.org/wiki/Knight_Rider_(1982_TV_series)" target="_blank" rel="noopener noreferrer"&gt;Knight Rider&lt;/a&gt;. During one of his initial exchanges with Devon Miles after test-driving KITT, Michael shares his unease over the car's terrifying level of autonomy:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Michael:&lt;/strong&gt;&lt;/em&gt; Oh, great. You mean it can decide to take off and go for gas, or a car wash. Just like that? Well, that would be terrific if I happened to be working under it.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Devon:&lt;/strong&gt;&lt;/em&gt; It wouldn't do anything to harm you, I assure you.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Like Michael, we are sitting behind the wheel of a machine that seems fully capable of driving itself. AI might not "harm" us in the physical sense, but if we aren't careful, letting it take over entirely can quietly crush the craft we have worked on for years.&lt;/p&gt;
&lt;p&gt;For the past couple of years, the developer world has been swept up in the era of vibecoding — the practice of letting Large Language Models (LLMs) do the heavy lifting while we sit back and play the role of high-level orchestrators. But as tools like Claude Code, Gemini and Copilot continue to push the boundaries of autonomy, I've come to a quiet realisation: AI is actively eroding our ability to code for ourselves.&lt;/p&gt;
&lt;h2&gt;Getting High On AI&lt;/h2&gt;
&lt;p&gt;The constant influx of new capabilities is not just exhausting to keep up with; it slowly chips away at the parts of the job we genuinely love. When the machine does all the thinking, we stop being developers and become prompt-aholics. Writing a story to get the perfect output becomes our only real skill, leaving the underlying code as a black box we no longer care to understand.&lt;/p&gt;
&lt;p&gt;I am concerned about the potential loss of the muscle memory we have acquired over the years: the ability to patiently read through class libraries, trace logic through multiple files, and meticulously debug a stubborn issue. Like a drug-addict on crack cocaine waiting for the next hit, we are trading our skill of problem-solving for a quick hit of instantly generated code.&lt;/p&gt;
&lt;h2&gt;Delusions of Grandeur&lt;/h2&gt;
&lt;p&gt;AI coding agents are remarkably good at writing units of code that look perfect in isolation. If you need a specific algorithm or a standard component, AI will provide a clean, consistent snippet that perfectly matches your prompt. But here is the danger: LLMs are the ultimate yes-men. They will rarely push back and tell you that the feature you are building is fundamentally flawed, or that the architecture you are releasing is absolute rubbish.&lt;/p&gt;
&lt;p&gt;This creates a dangerous divide. An already experienced developer can look at the generated code, question its validity, and make an informed judgement call on whether it is genuinely acceptable for the production environment. They have the hard-earned scars to know when a shortcut will result in technical debt.&lt;/p&gt;
&lt;p&gt;Conversely, novice developers or non-technical managers wielding these tools can quickly fall victim to delusions of grandeur. Just because they can suddenly spin up a functioning web app or a complex API in an afternoon, they begin to believe they possess senior-level engineering prowess. However, software engineering is not just about stringing together functioning isolated components; it's about cohesive architecture, long-term maintainability, and understanding how a change in one area of an application affects the entire system.&lt;/p&gt;
&lt;p&gt;When you blindly stitch together AI-generated code for months on end without that seasoned oversight, the results aren't going to be pretty. To quote a fellow developer I know, it becomes "AI slop". You eventually wake up to a codebase filled with inefficiencies, repetitive patterns, and short-sighted design choices. AI was consistent with the immediate prompt, but it failed entirely to maintain the long-term context of the project's evolution.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;So, what is the solution? It certainly isn't a total retreat back to the analogue days of manual coding. The precedent Claude Code has set proves that AI is far too valuable a tool to discard. The answer lies in finding a fine balance.&lt;/p&gt;
&lt;p&gt;We must stop treating AI as an outsourced developer that writes our code from start to finish, and start treating it as a brilliant, if occasionally short-sighted, peer reviewer. When AI offers a solution, we shouldn't just hit "accept" and move on. We need to dissect it. We must judge if its suggestion truly fits the broader architecture, learn from the new techniques it introduces, and actively verify its logic.&lt;/p&gt;
&lt;p&gt;This approach keeps you engaged in the "why" and "how" of the code rather than just the "what". AI cannot be allowed to act as a substitute for human reasoning. It is there to assist, not to take the steering wheel completely.&lt;/p&gt;
&lt;p&gt;We are the engineers; AI is the co-pilot. By finding this balance, we maintain what is required to actually learn and grow, ensuring we build software that stands the test of time without losing the joy of the craft itself.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[5 Years of Investing]]></title><link>https://www.surinderbhomra.com/Blog/2026/02/20/5-Years-Investing</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2026/02/20/5-Years-Investing</guid><pubDate>Fri, 20 Feb 2026 07:30:00 GMT</pubDate><content:encoded>&lt;p&gt;I can still remember the very first stock I invested in. It was the Vanguard S&amp;#x26;P 500 Accumulating index fund (VUAG), followed by some other random and very bad stock picks. In fact, it has been 5 years to the day. How time flies!&lt;/p&gt;
&lt;p&gt;If I could tell my novice investor self to do just one thing, that would be to start my investment journey by sticking to a single index fund and not act foolhardy in the quest to try and make a load of money by picking individual stocks.&lt;/p&gt;
&lt;p&gt;It's quite easy to have a false sense of confidence investing in a passive fund, like the S&amp;#x26;P 500 (where you aren't doing anything complicated), when you see you're in the green making a profit. The human condition is wired to always want more. We're never happy with what we have and convince ourselves that we could accelerate growth by picking overhyped assets just because everyone else seems to be getting rich quickly. The moment we step outside the safety of a simple fund to chase bigger returns, we end up fighting our own psychology.&lt;/p&gt;
&lt;p&gt;Instead of trying to figure out how to get rich quickly, focus on how to avoid losing money. Warren Buffett's two famous rules of investing are: 1) Don't lose money, and 2) Don't forget rule number one.&lt;/p&gt;
&lt;p&gt;Trying to pick individual winning companies is incredibly difficult, and the odds are stacked against the average investor unless you really do your due diligence and ask the right questions.&lt;/p&gt;
&lt;p&gt;The Bessembinder Study looked at 26,000 companies between 1926 and 2016, where researcher Hendrik Bessembinder found that just over 4% (about 1,000 companies) delivered all of the stock market's returns. It is much simpler to own the racetrack, like an index fund, instead of trying to pick the winning horse.&lt;/p&gt;
&lt;p&gt;Even though I started out investing in the S&amp;#x26;P 500, which solely tracks top American companies, you might want to consider a global index that embraces both developed and emerging markets. Thanks to rising US tariffs and protectionist trade policies, the rest of the world is actively reducing its reliance on America, making global investing incredibly timely. Today, around 85% of global trade flows through alternative channels, and the US's share of world trade has hit its lowest point since 2014.&lt;/p&gt;
&lt;p&gt;You may think having a single index is quite boring. A boring portfolio can end up being the most fruitful one! If I take my wife's investment portfolio, which I manage, it simply consists of three funds, and year-to-date she has already made around a 4.8% return. This might not sound like much, but when you consider the S&amp;#x26;P 500 returns 8-10% annually, she is already hitting half those returns and we're only 2 months into this year.&lt;/p&gt;
&lt;p&gt;Interestingly, a &lt;a href="https://finance.yahoo.com/news/archives-praise-dead-investors-060000235.html" target="_blank" rel="noopener noreferrer"&gt;Fidelity study from 2013&lt;/a&gt; looked at which investors had the best-performing portfolios over a decade. The top performers had one thing in common: they were dead. They couldn't tinker, panic, or second-guess themselves. A "nice" morbid fact to put your investment mindset into perspective.&lt;/p&gt;
&lt;p&gt;Even though my portfolio is bigger in monetary and position terms, it hasn't reached the heights my wife's has this year so far, as I am more bullish on specific sectors and companies from looking at the financials. I may not be making much of a return at this moment in time, but if everything goes to plan, the returns should outpace my wife's. I enjoy the investing process and looking for potential. In whatever decisions I make, in the back of my mind I always remember what Eugene Fama famously said:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Money is like soap. The more you handle it, the less you'll have.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So, I am always treading carefully by doing my due diligence and investing with conviction!&lt;/p&gt;
&lt;p&gt;The overarching lesson I've learnt is that complexity is not a prerequisite for success. If you are just starting out, do yourself a massive favour and begin with the absolute basics. Investing regularly into a single, globally diversified index fund, regardless of whether the market is up or down, is more than enough to get the compounding snowball rolling while you learn the ropes.&lt;/p&gt;
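&lt;p&gt;To make the compounding snowball concrete, here is a rough back-of-the-envelope sketch in JavaScript. The monthly amount, the 8% annual return, and the 20-year horizon are purely illustrative figures, not a forecast:&lt;/p&gt;

```javascript
// Future value of a fixed monthly contribution at a given annual return,
// compounded monthly. Illustrative only; real market returns are not smooth.
function futureValue(monthlyContribution, annualRate, years) {
  const monthlyRate = annualRate / 12;
  const months = years * 12;
  return monthlyContribution * ((Math.pow(1 + monthlyRate, months) - 1) / monthlyRate);
}

// e.g. £200 a month at an assumed 8% annual return for 20 years
console.log(Math.round(futureValue(200, 0.08, 20)));
```

&lt;p&gt;Of the final figure, £48,000 is the money actually paid in; the rest is the compounding doing the heavy lifting.&lt;/p&gt;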
&lt;p&gt;You can always expand your horizons later, branching into individual stocks or specific sectors only once you have built a solid foundation of knowledge and genuinely enjoy the process of analysing the financials. Until then, there is absolutely no shame in keeping things boring. In fact, for the vast majority of people, a simple, hands-off approach isn't just fine—it is the optimal strategy for building long-term wealth. Start simple, stay consistent, and let the market do the heavy lifting.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[High-Efficiency Image Uploads Through Client-side Compression]]></title><link>https://www.surinderbhomra.com/Blog/2026/02/16/High-Efficiency-Image-Uploads</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2026/02/16/High-Efficiency-Image-Uploads</guid><pubDate>Mon, 16 Feb 2026 21:00:00 GMT</pubDate><content:encoded>&lt;p&gt;An image doesn't just consist of pixels. It is a container full of additional information and structural inefficiencies hidden from the naked eye. When you capture a photo with a smartphone or a professional DSLR, the resulting file is almost always significantly larger than it needs to be for the web.&lt;/p&gt;
&lt;p&gt;This "image bloat" generally falls into two categories: Informational and Structural.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Informational:&lt;/strong&gt; EXIF (Exchangeable Image File Format) data is metadata stored within the image header that includes GPS coordinates, camera serial numbers, and date-time stamps. While useful for photographers, this data adds unnecessary kilobytes to every upload and can even pose a privacy risk.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Structural:&lt;/strong&gt; This is mainly down to resolution overkill, where modern cameras capture images at 12MP or higher - perfect for a billboard print, but massive for a website. Another cause is sensor noise, where the digital sensor captures random variations in colour that the human eye can't distinguish, yet the file's compression algorithm works overtime to preserve.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Hidden Cost of Raw Uploads&lt;/h2&gt;
&lt;p&gt;When it comes to the process of uploading images online, we are often causing unnecessary strain on the end-user's connection as well as our own servers. By forcing a browser to transmit raw, unoptimised files, we create a high probability of failure through multiple, redundant requests.&lt;/p&gt;
&lt;p&gt;Every time a user on an unstable connection attempts to push a 10MB high-resolution photo, they are essentially gambling with the connection's stability. If that connection blips at 95%, the request fails, and the server is left with garbage data it can't use. The user is forced to start the entire process over again. This cycle doesn't just waste bandwidth; it inflates server CPU usage as the server struggles to manage timed-out threads, and it increases the physical storage costs for data that the user never actually intended to be so large.&lt;/p&gt;
&lt;h2&gt;Real-World Scenario&lt;/h2&gt;
&lt;p&gt;I encountered this exact bottleneck while developing a valuation form. In this scenario, users were required to upload multiple high-quality photos of their assets for appraisal. On paper, this sounds simple. However, in the real world, users aren't always sitting on high-speed fibre-optic broadband. They are often out in the field, where the connection could be unstable.&lt;/p&gt;
&lt;p&gt;What was required was the ability for images to be compressed on the user's device before the upload process even starts. I found a JavaScript library that was worth a try: &lt;a href="https://www.npmjs.com/package/browser-image-compression" target="_blank" rel="noopener noreferrer"&gt;browser-image-compression&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;How The Client-Side Compression Works&lt;/h2&gt;
&lt;p&gt;This library works by leveraging the browser's internal Canvas API and Web Workers to perform a digital reconstruction of the image. When a file is processed, it is drawn onto an invisible canvas at a set resolution, which instantly strips away bloat like EXIF metadata and GPS coordinates, then re-encodes the pixels using lossy compression algorithms to discard high-frequency noise.&lt;/p&gt;
&lt;p&gt;Because this entire magical process happens on a background thread (a Web Worker), the image is crunched down to a fraction of its original size without freezing the user interface, ensuring the new, lean file is ready for a faster upload.&lt;/p&gt;
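&lt;p&gt;As a rough sketch of how this can be wired up, the snippet below follows the option names from the browser-image-compression README (maxSizeMB, maxWidthOrHeight, useWebWorker). The element ID is an illustrative choice of my own; the 1MB cap is the "sweet spot" discussed below:&lt;/p&gt;

```javascript
// Small helper to report the size saving as a percentage.
function reductionPercent(originalBytes, compressedBytes) {
  return Math.round((1 - compressedBytes / originalBytes) * 100);
}

// Guarded so the sketch can also be run outside a browser.
if (typeof document !== 'undefined') {
  // imageCompression comes from the browser-image-compression bundle.
  const input = document.getElementById('photo-input'); // illustrative element ID

  input.addEventListener('change', async (event) => {
    const file = event.target.files[0];
    if (!file) return;

    const options = {
      maxSizeMB: 1,           // the 1MB cap discussed in the Results section
      maxWidthOrHeight: 1920, // plenty of resolution for the web
      useWebWorker: true,     // compress on a background thread
    };

    const compressed = await imageCompression(file, options);
    console.log('Reduced by ' + reductionPercent(file.size, compressed.size) + '%');
    // Upload `compressed` instead of `file` from here.
  });
}
```

&lt;p&gt;An 8MB photo landing at around 900KB works out to roughly an 89% reduction.&lt;/p&gt;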
&lt;h2&gt;Results&lt;/h2&gt;
&lt;p&gt;The difference in upload performance was night and day. Images that were originally 8–10MB were now being compressed to approximately 900KB. It is worth noting that the compression could have been even more aggressive; however, we capped the maximum size at 1MB, as we felt that was the perfect "sweet spot" for maintaining high visual quality in this scenario.&lt;/p&gt;
&lt;p&gt;By hitting that 900KB mark, we effectively reduced the data transfer requirements by 90%!&lt;/p&gt;
&lt;h2&gt;Demo&lt;/h2&gt;
&lt;p&gt;To see these performance gains in action, I have put together a &lt;a href="https://jsfiddle.net/sbhomra/3hsx01ne/" target="_blank" rel="noopener noreferrer"&gt;live demo&lt;/a&gt; where you can upload your own high-resolution photos and see the real-time reduction in "image bloat" without any loss in visual quality.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/Client-side/BrowserImageCompressionDemo.png?tr=w-500#center" alt="Browser Image Compression - Demo"&gt;&lt;/p&gt;
&lt;iframe width="100%" height="500" src="//jsfiddle.net/sbhomra/3hsx01ne/embedded/js,html/?menuColor=FFF" frameborder="0" loading="lazy" allowtransparency="true" allowfullscreen="true"&gt;&lt;/iframe&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Implementing client-side compression isn't just a nice-to-have feature. It is a fundamental shift in how we handle user data and server resources. By moving the processing to the user's device, we achieve three major wins:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; Small files don't just upload faster; they succeed more often. By reducing an 8MB file to 900KB, you remove the timeout risk that plagues users on unstable connections.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Privacy by Default:&lt;/strong&gt; Because the library reconstructs the image on a canvas, sensitive EXIF data and GPS coordinates are stripped before they ever reach your cloud storage. This reduces your liability and protects your users.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure Savings:&lt;/strong&gt; The backend no longer needs to spend expensive CPU cycles stripping metadata or resizing massive blobs. You save on bandwidth, processing power, and long-term storage costs.&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Stockmantics - A Personal AI Project]]></title><link>https://www.surinderbhomra.com/Blog/2026/01/03/Stockmantics</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2026/01/03/Stockmantics</guid><pubDate>Sat, 03 Jan 2026 13:20:00 GMT</pubDate><content:encoded>&lt;p&gt;In my previous &lt;a href="/Blog/2025/12/24/Dream-A-Little-Bigger-Darling"&gt;post&lt;/a&gt; discussing my foray into the world of AI, I mentioned working on a personal project called "Stockmantics". But what exactly is &lt;a href="https://www.stockmantics.com" target="_blank" rel="noopener noreferrer"&gt;Stockmantics&lt;/a&gt;, and why did I decide to build it?&lt;/p&gt;
&lt;p&gt;Stockmantics started because I needed a project where I could apply my AI knowledge to a real-world problem. In the end, I didn't have to look further than my own hobbies.&lt;/p&gt;
&lt;p&gt;Aside from coding, I’ve become heavily invested (pun intended) in the stock market. It all started shortly after COVID, when there was so much buzz online about people putting money into companies and index funds. Seeing the returns made by those who invested at the right time (during the lockdown of March 2020) opened my eyes to a potential new income stream. I didn't want to miss out on the fun, so I decided to learn the ropes of an area I knew nothing about. I just didn't expect it to turn into a full-time hobby.&lt;/p&gt;
&lt;p&gt;However, unlike most hobbies, I considered this one fraught with danger; one must err on the side of caution. After all, real money is at stake, and acting foolhardy or investing incorrectly can lead to significant losses.&lt;/p&gt;
&lt;h2&gt;The Requirement&lt;/h2&gt;
&lt;p&gt;When I became more confident in my investment strategy and the type of trader I wanted to be, I found one aspect consistently time-consuming: finding an easy-to-read daily digest in one place. I was tired of hopping from website to website or subscribing to endless newsletters just to get a clear picture.&lt;/p&gt;
&lt;p&gt;So, with the help of AI, I decided to build a tool that would do this for me, and Stockmantics was born. My requirements were as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Market Snapshot:&lt;/strong&gt; A quick look at key indices (S&amp;#x26;P 500, FTSE 100, NASDAQ, Commodities, etc.).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Daily Summary:&lt;/strong&gt; A single, concise sentence summarising what happened that day.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Global News:&lt;/strong&gt; Key events from the USA, Europe, and Asia.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Crypto Updates:&lt;/strong&gt; High-level developments in cryptocurrency, focusing on the majors.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Investor Action:&lt;/strong&gt; A conclusion based on the day's news, suggesting what an investor should look out for.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Smart Glossary:&lt;/strong&gt; Tooltipped definitions for stock market, investment, and economic terms to assist novice investors (and provide a constant refresher for myself).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Social-Media Integration:&lt;/strong&gt; Automatic posting to X, highlighting key stories from the day's article.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My philosophy for this personal project is simple: if it assists my own needs, that is a big win in itself. If someone else finds my method of digesting the day's financial news useful, that will be the icing on the cake. I decided early on that the success of Stockmantics would not be measured by visitor numbers or X followers, but by what I learnt during the development process and whether it truly works for me.&lt;/p&gt;
&lt;h2&gt;Application Architecture&lt;/h2&gt;
&lt;p&gt;The application architecture is based on the following Microsoft technologies:&lt;/p&gt;
&lt;h3&gt;ASP.NET Core Razor Pages&lt;/h3&gt;
&lt;p&gt;The website is a relatively small and simple application that consisted of the following pages:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Homepage&lt;/li&gt;
&lt;li&gt;Article Listing&lt;/li&gt;
&lt;li&gt;Article&lt;/li&gt;
&lt;li&gt;Generic Content (for About/Terms/Disclaimer pages)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;A CMS wasn't needed as all content and data would be served from Azure Storage Tables. All there is from a content-management perspective is an authenticated "Article Management" area, where content generated by Gemini could be overridden when required.&lt;/p&gt;
&lt;h3&gt;Azure Storage Tables&lt;/h3&gt;
&lt;p&gt;I actively decided to use Azure Storage Tables over a SQL database to store all of the Stockmantics data, as there was no relational element between the tables. It also provided a lower-cost alternative and a quicker route to development.&lt;/p&gt;
&lt;p&gt;List of tables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Article&lt;/li&gt;
&lt;li&gt;MarketSnapshot&lt;/li&gt;
&lt;li&gt;SocialShare&lt;/li&gt;
&lt;li&gt;StockmarketGlossary&lt;/li&gt;
&lt;li&gt;AppSetting&lt;/li&gt;
&lt;/ul&gt;
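&lt;p&gt;For a flavour of how flat these entities are, here is a hypothetical sketch of an Article row. The property names are illustrative rather than the real Stockmantics schema, and the write shown in the comment assumes the @azure/data-tables package:&lt;/p&gt;

```javascript
// Azure Table entities are flat key/value rows addressed by partitionKey + rowKey.
// These property names are illustrative, not the real Stockmantics schema.
function toArticleEntity(article) {
  return {
    partitionKey: 'Article',              // one partition per table; no relations needed
    rowKey: String(article.publishedUtc), // unique per daily article
    title: article.title,
    slug: article.slug,
    body: article.body,
  };
}

// Persisting it with the @azure/data-tables SDK would look roughly like:
//   const { TableClient } = require('@azure/data-tables');
//   const client = TableClient.fromConnectionString(connectionString, 'Article');
//   await client.createEntity(toArticleEntity(article));
```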
&lt;h3&gt;Azure Blob&lt;/h3&gt;
&lt;p&gt;For images that may be used in article content.&lt;/p&gt;
&lt;h3&gt;Azure Functions&lt;/h3&gt;
&lt;p&gt;All the grunt work of getting the data is done by timer-triggered Azure Functions that fire shortly after the US markets open (around midday GMT) in order to capture the most up-to-date goings-on in the market.&lt;/p&gt;
&lt;p&gt;A breakdown of the Azure Functions is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generate News Article&lt;/strong&gt; - queries stock market APIs and news feeds, sending the results to the Gemini API to construct an article tailored to my requirements. The article is then stored in the Article table with related attributes and additional metadata suited to being served in a webpage.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generate Social Posts&lt;/strong&gt; - extracts 10 key facts from the generated news article to be transformed into tweets. The day's generated tweets are stored until pushed to social media platforms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Market Snapshot&lt;/strong&gt; - uses the Yahoo Finance API to return the market price and percentage change for the core market indices. These values are then passed to the Gemini API's "Grounding with Google Search" to provide sentiment and the reasons behind the change in price.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Post To X&lt;/strong&gt; - publishes a tweet every 15 minutes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Post To Bluesky&lt;/strong&gt; - publishes a post every 15 minutes.&lt;/li&gt;
&lt;/ul&gt;
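&lt;p&gt;As an illustration of the timer-trigger pattern, here is a minimal sketch of the article-generation function using the Azure Functions v4 JavaScript model. The function name, the helper, and the exact schedule are my own hypothetical choices; the NCRONTAB expression below fires at 12:00 UTC on weekdays:&lt;/p&gt;

```javascript
// NCRONTAB format: second minute hour day month day-of-week (1-5 = Monday to Friday).
const WEEKDAY_NOON_UTC = '0 0 12 * * 1-5';

// Belt-and-braces guard mirroring the "weekdays only" running of the service.
function isMarketDay(date) {
  return ![0, 6].includes(date.getUTCDay()); // skip Sunday (0) and Saturday (6)
}

// Registration is wrapped so the sketch stays runnable outside the Functions host.
try {
  const { app } = require('@azure/functions');
  app.timer('generateNewsArticle', {
    schedule: WEEKDAY_NOON_UTC,
    handler: async (timer, context) => {
      if (!isMarketDay(new Date())) return;
      context.log('Pulling market feeds and calling Gemini...');
      // 1. Query stock market APIs and news feeds.
      // 2. Send the raw data to the Gemini API to construct the article.
      // 3. Store the result in the Article table.
    },
  });
} catch (err) {
  // '@azure/functions' is not installed; running as a plain script.
}
```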
&lt;h2&gt;The Chosen AI Engine&lt;/h2&gt;
&lt;p&gt;It was always going to be a choice between Google Gemini and OpenAI. I was already familiar with both LLMs (Large Language Models), having casually thrown stock market queries at them—among other things—long before this project was even a glint in my eye. Ultimately, my decision hinged on two key factors:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;API:&lt;/strong&gt; The ease of use and the reliability of the endpoints in returning structured data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost Factor:&lt;/strong&gt; Being unfamiliar with the specific pricing structures of LLMs, I needed to estimate the cost per API call and project my monthly expenditure based on token usage. The &lt;a href="https://gptforwork.com/tools/openai-chatgpt-api-pricing-calculator" target="_blank" rel="noopener noreferrer"&gt;OpenAI GPT API Pricing Calculator&lt;/a&gt; provided an excellent breakdown of costs across all major AI providers.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I concluded that Google Gemini was the best fit for Stockmantics, primarily because the model I intended to use (gemini-2.5-flash) offered the most competitive pricing. The cost for one million input and output tokens works out to approximately $0.37, compared to OpenAI's $2.00.&lt;/p&gt;
&lt;p&gt;Furthermore, I felt that Gemini held a slight edge over OpenAI. Google might have been late to the AI party, but it has certainly made up for lost time with impressive speed. Gemini also had a card up its sleeve that I only discovered during development: Grounding with Google Search. This feature allows the model to access real-time information from the web, ensuring that the data returned is current rather than limited to a training cut-off date.&lt;/p&gt;
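&lt;p&gt;For illustration, a grounded request might be shaped as follows. The prompt builder is hypothetical (the model expands on raw figures we supply rather than being trusted to find the prices itself), and the commented SDK call is an untested sketch based on the @google/genai package:&lt;/p&gt;

```javascript
// Hypothetical prompt builder: feed the model concrete numbers to expand upon.
function buildSnapshotPrompt(indexName, price, changePercent) {
  return 'The ' + indexName + ' closed at ' + price + ' (' + changePercent + '%). ' +
    'Explain the likely reasons behind this move and the overall market sentiment.';
}

// Calling gemini-2.5-flash with Grounding with Google Search enabled would look
// roughly like this (untested sketch):
//   const { GoogleGenAI } = require('@google/genai');
//   const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
//   const response = await ai.models.generateContent({
//     model: 'gemini-2.5-flash',
//     contents: buildSnapshotPrompt('FTSE 100', 8312.5, -0.4),
//     config: { tools: [{ googleSearch: {} }] }, // real-time web grounding
//   });
//   console.log(response.text);
```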
&lt;h2&gt;Misjudging the Machine: Data is King!&lt;/h2&gt;
&lt;p&gt;I was initially under the impression that I could simply ask the likes of OpenAI or Gemini to collate the day's stock market news, which I could then format to my liking. However, this proved to be a mistake. When dealing with fast-moving financial news, I found the results hit-and-miss. The models would frequently return information that was out of date or cite entirely incorrect market prices (even when using Grounding with Google Search).&lt;/p&gt;
&lt;p&gt;At this point, I realised I needed to take a step back and reassess my approach. It became clear that without a reliable, accurate data feed, this application would be of no use to man nor beast.&lt;/p&gt;
&lt;p&gt;The solution had to start with raw data, which the LLM could then use as its base to expand upon. For this, I found financial data pulled from the likes of Yahoo Finance feeds, amongst other finance-related news feeds, to be invaluable.&lt;/p&gt;
&lt;h2&gt;Lengthy Vetting Period&lt;/h2&gt;
&lt;p&gt;The transition from a proof-of-concept to the final version of Stockmantics required a lengthy vetting period, which continued for weeks after the release to live. The raw output from the LLM was rarely perfect on the first try, leading to many iterations of refinement. My focus was on four key areas:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Structure &amp;#x26; Flow:&lt;/strong&gt; Tweaking the system instructions to ensure the output was digestible, preventing the model from generating dense, unreadable paragraphs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sector Balance:&lt;/strong&gt; Ensuring the article provided a holistic view of the market, rather than fixating solely on volatile tech stocks or the "Magnificent Seven".&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Glossary Precision:&lt;/strong&gt; Fine-tuning the tooltips to provide definitions that were accessible to novices without losing technical accuracy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Geopolitical Neutrality:&lt;/strong&gt; Ensuring that reports on world affairs, which often drive market sentiment, were delivered with an objective and balanced tone.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;What I learnt from this process is that while anyone can write a basic AI prompt, getting the granular nuances right takes a significant amount of time. It is less about coding and more about the art of communication; you have to learn how to speak the model's language to get the consistent, high-quality output you need. Even now, I find myself still making ongoing tweaks for further improvement.&lt;/p&gt;
&lt;p&gt;If you compare the very &lt;a href="https://www.stockmantics.com/news/google-avoids-breakup-as-gold-hits-record-highs-amid-market-jitters-1756908265" target="_blank" rel="noopener noreferrer"&gt;first article&lt;/a&gt; published against one of the &lt;a href="https://www.stockmantics.com/news/geopolitical-tensions-rise-as-us-intervenes-in-venezuela-tech-stocks-diverge-1767618050" target="_blank" rel="noopener noreferrer"&gt;more recent&lt;/a&gt;, I am hoping a vast difference will be noticed.&lt;/p&gt;
&lt;h2&gt;Breakdown of Costs&lt;/h2&gt;
&lt;p&gt;One of my main priorities was to keep the running costs of this project tight, and I think things ended up being quite good value. Here is a monthly breakdown:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Website and Domain: £6.25&lt;/li&gt;
&lt;li&gt;Azure Services (Functions/Blob Storage/Tables): £1.10&lt;/li&gt;
&lt;li&gt;Google Gemini API: £4.00&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So we're looking at around £11.35 in total monthly costs. Not bad. The Google Gemini cost is the only item I expect to fluctuate, based on the varied number of tokens utilised for each daily article.&lt;/p&gt;
&lt;p&gt;NOTE: Google Gemini and Azure services are only used on weekdays, when the stock markets are open. So the costs are based on a 5-day week.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I am unsure what the long-term future holds for Stockmantics. Its lifespan ultimately depends on ongoing costs, maintenance effort, and whether I continue to find it useful for my own needs. However, for now, it serves a valuable purpose beyond just financial news: I have a robust, live application that acts as the perfect test bed for experimenting with new AI features and expanding my technical skillset.&lt;/p&gt;
&lt;p&gt;Fortunately, thanks to various architectural decisions and efficiency improvements, the running costs are currently sustainable, and the site itself is very low maintenance—touch wood! I foresee that further development will only be required if the external APIs change. I have already paid for a year's worth of web hosting until October 2026 and will reassess things closer to that date.&lt;/p&gt;
&lt;p&gt;If you got this far, thank you for taking the time to read through the development process. If you are interested in seeing the final result, you can find all the links to Stockmantics below:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.stockmantics.com" target="_blank" rel="noopener noreferrer"&gt;Website - www.stockmantics.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://x.com/Stockmantics" target="_blank" rel="noopener noreferrer"&gt;X - Stockmantics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://bsky.app/profile/stockmantics.com" target="_blank" rel="noopener noreferrer"&gt;Bluesky - stockmantics.com&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Learning from Algorithms Instead of People]]></title><link>https://www.surinderbhomra.com/Blog/2025/12/28/Learning-From-Algorithms-Instead-Of-People</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2025/12/28/Learning-From-Algorithms-Instead-Of-People</guid><pubDate>Sun, 28 Dec 2025 17:45:00 GMT</pubDate><content:encoded>&lt;p&gt;What happens when we remove the human from a fact or a piece of information? Does it change how we perceive it? This thought came to mind when I was questioning if community-based sites, such as &lt;a href="/Notes/2025/12/26/Stackoverflow-Still-Relevant"&gt;Stackoverflow are still relevant&lt;/a&gt; and made an open-ended remark that Generative AI has now starved us of knowing the person behind the knowledge.&lt;/p&gt;
&lt;p&gt;Historically, we have accepted knowledge through some form of testimony: we believe something based on what a person has told us, evaluating their character, their knowledge and, most importantly, their honesty. With AI, there is no "person" to trust. You cannot evaluate the AI's moral character or life experience because it has none.&lt;/p&gt;
&lt;p&gt;To demonstrate this point, let's take the following statement about the US economy:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The stock market is the highest it's ever been. We have the greatest economy in the history of our country.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you heard this from Donald Trump (the above statement has been said multiple times by him), you would likely question it immediately. We are familiar with his rhetorical style in how he often bends the truth or prioritises hyperbole over precision. Our scepticism is triggered by the source.&lt;/p&gt;
&lt;p&gt;However, if you asked a financial analyst, you would get a more nuanced response:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;While the market did hit record numbers (which happens naturally due to inflation), the rate of growth was not actually the 'greatest in history'. At the three-year mark, the market was up roughly 45% under Trump, compared to 53% under Obama and 57% under Clinton.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When we remove the human source, we lose this critical context. By stripping away the "who", we put the accuracy of the "what" in jeopardy. AI operates by taking the insights that required years of research and lived experience, strips them of their author, and repackages them only to regurgitate them with its own bias for our instant consumption. I rarely see the likes of ChatGPT or Gemini offer true attribution to the human behind the data for our own vetting.&lt;/p&gt;
&lt;p&gt;I am all too aware of this from my experience building one of my own projects with AI focusing on the stock market and economy, where the data can be subjective and context-dependent. An example of this is trying to provide the reasoning behind changes in key indices and commodities. The reasoning behind a change in value often hides a dozen competing narratives. When I built my application, I realised that if the AI chooses one narrative over another without telling me why or who championed it, it isn't just summarising the truth; it is effectively editing it.&lt;/p&gt;
&lt;p&gt;Now, I don't want this post to come across as negative towards AI, as that would be pretty hypocritical after my glowing take on how I use the technology in my &lt;a href="/Blog/2025/12/24/Dream-A-Little-Bigger-Darling"&gt;previous post&lt;/a&gt;. It has simply made me more conscious that while the knowledge AI presents doesn't necessarily lack meaning, it might lack soul. We get the answer, but we miss the human condition that made the answer necessary in the first place.&lt;/p&gt;
&lt;p&gt;We have to acknowledge that AI is an incredible tool for gathering information, but it should be the starting point, not the finish. Use it to broaden your search, but go to people to deepen your understanding.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[You mustn't be afraid to dream a little bigger, darling.]]></title><link>https://www.surinderbhomra.com/Blog/2025/12/24/Dream-A-Little-Bigger-Darling</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2025/12/24/Dream-A-Little-Bigger-Darling</guid><pubDate>Wed, 24 Dec 2025 12:30:00 GMT</pubDate><content:encoded>&lt;p&gt;The title of this post isn't just a great line from Inception; it's a directive. Eames telling Arthur to expand their constructed reality beyond mere imitation and take bigger risks has been replaying in the back of my mind lately. It felt like the only appropriate way to break the radio silence after such a long hiatus and offer a glimpse into my current mindset. While I haven't been navigating multiple levels of a subconscious dream state, this past year has been about breaking free from self-imposed limitations. I've been pushing beyond my day-to-day coding endeavors to invest time into the very thing dominating our headlines: Artificial Intelligence!&lt;/p&gt;
&lt;p&gt;It is a technology moving at such breakneck speed that you can't just dip a toe in; you have to dive in headfirst and swim, trusting that you'll emerge on the other side a wiser man. Failing to observe the shift in an industry like mine, in my view, is career suicide. With platforms and services releasing their own form of AI tools—some I deem more successful than others—I needed to find my own way in. As programmers, we can no longer afford the luxury of being so tunnel-visioned, clinging rigidly to our area of expertise while the landscape changes around us.&lt;/p&gt;
&lt;p&gt;The thought of getting any footing in the world of AI filled me with dread. This could be down to setting the bar of expectation too high. I knew I was never going to be the type of person to build some deep learning engine from scratch, as you really need the "street smarts" of an AI Engineer to do that. Instead, learning to use the AI tools and frameworks already readily available, such as machine learning libraries and the APIs provided by ChatGPT and Gemini, would give me the step up I needed.&lt;/p&gt;
&lt;h2&gt;The Journey To Discovery&lt;/h2&gt;
&lt;p&gt;My journey began not with complex neural networks, but with the fundamentals of machine learning (via ML.NET). It was a learning curve, requiring me to rethink how I approached problem-solving. But as the concepts started to click, the potential for a specific use case suddenly became undeniable. I started small, experimenting with a simple concept that could be of tangible value, where I could predict future pricing of used cars based on historical data and their individual attributes.&lt;/p&gt;
&lt;p&gt;Not too far along from this, I started working on my very own side-project in another area I am very passionate about: stocks and trading. I developed a website called &lt;a href="https://www.stockmantics.com" target="_blank" rel="noopener noreferrer"&gt;Stockmantics&lt;/a&gt; that takes in the day's stock and trading news to produce a daily digest in a format that is beneficial to me. My own one-stop shop for the day's trading news, without having to read the many different newsletters I had relied on previously. I used AI to serve my own needs in a way that could also help others. It's a beast of a project that I am incredibly proud of, and I plan to do a write-up on it next year. But for now, suffice it to say that it taught me more about the practical pipelines of AI than any tutorial ever could.&lt;/p&gt;
&lt;p&gt;One of the final AI projects I worked on at the tail end of the year was a proof-of-concept that revolved around vision search. I wanted to see if I could build a system capable of scanning a client's database to find visually similar items based on nothing but an uploaded image, with the ability to detect what the image consisted of. The addition of metadata attribution working alongside the image search resulted in accurate results that surpassed my own expectations.&lt;/p&gt;
&lt;p&gt;If Asimov had his Three Laws to govern the behaviour of robots, I had my three specific applications, each a critical stepping stone that shaped my understanding of where I could integrate AI and the future possibilities—endless? Rather than just being the end user, I was building something of my own creation. I was able to see AI from a different perspective, which resulted in a newfound appreciation. It ended up being a really rewarding experience, far from what I am normally used to developing, and this is just the start.&lt;/p&gt;
&lt;h2&gt;Final Thoughts&lt;/h2&gt;
&lt;p&gt;I've come to view AI not as a competitor, or a full human replacement, but as a tireless, low-cost assistant ready to help take the smallest seed of an idea and grow it into a tangible reality, at a speed I never thought possible. It bridges the gap between theory and fruition, allowing me to truly dream a little bigger.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[GatsbyJS Plugin: Smoothscroll with Offset]]></title><link>https://www.surinderbhomra.com/Blog/2024/11/26/GatsbyJS-Smoothscroll-Offset-Plugin</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/11/26/GatsbyJS-Smoothscroll-Offset-Plugin</guid><pubDate>Tue, 26 Nov 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I've been using the &lt;a href="https://www.npmjs.com/package/gatsby-plugin-smoothscroll" target="_blank" rel="noopener noreferrer"&gt;gatsby-plugin-smoothscroll&lt;/a&gt; plugin in the majority of GatsbyJS builds to provide a nice smooth scrolling effect to a HTML element on a page. Unfortunately, it lacked the capability of providing an offset scroll to position, which is useful when a site has a fixed header or navigation.&lt;/p&gt;
&lt;p&gt;I decided to take the &lt;code&gt;gatsby-plugin-smoothscroll&lt;/code&gt; plugin and simplify it so that it would not require a dependency on &lt;a href="https://www.npmjs.com/package/smoothscroll-polyfill" target="_blank" rel="noopener noreferrer"&gt;polyfilled smooth scrolling&lt;/a&gt; as this is native to most modern browsers. The plugin just contains a helper function that can be added to any &lt;code&gt;onClick&lt;/code&gt; event with or without an offset parameter.&lt;/p&gt;
&lt;h2&gt;Usage&lt;/h2&gt;
&lt;p&gt;The plugin contains a &lt;code&gt;smoothScrollTo&lt;/code&gt; helper function that can be imported onto the page:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;// This could be in your `pages/index.js` file.

import smoothScrollTo from "gatsby-plugin-smoothscroll-offset";
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;smoothScrollTo&lt;/code&gt; function can then be used within an &lt;code&gt;onClick&lt;/code&gt; event handler:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-html"&gt;&amp;#x3C;!-- Without offset --&gt;
&amp;#x3C;button onClick={() =&gt; smoothScrollTo("#some-id")}&gt;My link without offset&amp;#x3C;/button&gt;

&amp;#x3C;!-- With offset of 80px --&gt;
&amp;#x3C;button onClick={() =&gt; smoothScrollTo("#some-id", 80)}&gt;My link with offset&amp;#x3C;/button&gt;
&lt;/code&gt;&lt;/pre&gt;
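&lt;p&gt;For context, smooth scrolling with an offset needs only native browser APIs. The following is a minimal sketch of how such a helper could work — an illustration under my own assumptions, not the plugin's actual source:&lt;/p&gt;

```javascript
// Pure calculation: the document position to scroll to.
// elementTop is the element's position relative to the viewport
// (as returned by getBoundingClientRect), scrollY is the current
// scroll position, and offset accounts for a fixed header.
function computeScrollTop(elementTop, scrollY, offset) {
  return elementTop + scrollY - (offset || 0);
}

// DOM wrapper relying on the browser's native smooth scrolling.
function smoothScrollTo(selector, offset) {
  const element = document.querySelector(selector);
  if (!element) return;

  const top = computeScrollTop(
    element.getBoundingClientRect().top,
    window.scrollY,
    offset
  );

  window.scrollTo({ top, behavior: "smooth" });
}
```

&lt;p&gt;Keeping the position maths separate from the DOM call also makes the helper trivial to unit test.&lt;/p&gt;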
&lt;h2&gt;Demo&lt;/h2&gt;
&lt;p&gt;A demonstration of the plugin in use can be found by navigating to my &lt;a href="/Blog/Archive"&gt;Blog Archive&lt;/a&gt; page and clicking on any of the category links.&lt;/p&gt;
&lt;p&gt;Prior to this plugin, the category list header would be covered by the sticky navigation.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/JAM-Stack/GatsbyJSSmoothscrollPluginOffetDemoBefore.png?tr=w-600#center" alt="Smooth Scrolling without Offset"&gt;&lt;/p&gt;
&lt;p&gt;Now that an offset of 80px can be set, the category list header is now visible.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/JAM-Stack/GatsbyJSSmoothscrollPluginOffetDemoAfter.png?tr=w-600#center" alt="Smooth Scrolling with Offset"&gt;&lt;/p&gt;
&lt;h2&gt;Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/gatsby-plugin-smoothscroll-offset" target="_blank" rel="noopener noreferrer"&gt;NPM Package&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/SurinderBhomra/gatsby-plugin-smoothscroll-offset" target="_blank" rel="noopener noreferrer"&gt;Github repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[In-Specie Transfer From Freetrade To Trading 212]]></title><link>https://www.surinderbhomra.com/Blog/2024/11/22/In-Specie-Transfer-Freetrade-To-Trading-212</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/11/22/In-Specie-Transfer-Freetrade-To-Trading-212</guid><pubDate>Fri, 22 Nov 2024 20:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I woke up yesterday morning to a serendipitous discovery that all my stock positions had successfully been transferred from Freetrade to Trading 212. There really is nothing more rewarding than seeing all investments under one stockbroker with a nice five-figure number staring back at you.&lt;/p&gt;
&lt;p&gt;Since I started investing in stocks at the start of 2022, the only stockbroker app available to me was &lt;a href="https://magic.freetrade.io/join/surinder/df0d4f0e" target="_blank" rel="noopener noreferrer"&gt;Freetrade&lt;/a&gt;, and it made my introduction to investing in hand-picked stocks very straightforward. But as my portfolio grew, so did my requirements, and when Trading 212 opened its doors to new sign-ups (after I had been on a very long waiting list), I decided to see if the grass was truly greener on the other side... and it was.&lt;/p&gt;
&lt;p&gt;Trading 212 had what Freetrade didn't:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;An active community of like-minded users commenting on their views and insights against each stock.&lt;/li&gt;
&lt;li&gt;5.2% interest on held cash (5.17% as of today).&lt;/li&gt;
&lt;li&gt;Introduction of a Cash ISA.&lt;/li&gt;
&lt;li&gt;Ability to view stock graphs in detailed view with the ability to annotate specific trendlines.&lt;/li&gt;
&lt;li&gt;Free use of a Stocks and Shares ISA.&lt;/li&gt;
&lt;li&gt;Lower FX rates.&lt;/li&gt;
&lt;li&gt;Fractional shares on ETFs.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Unfortunately for Freetrade, I just couldn't see a future where they could provide the features I needed in addition to free use of the service. I was being charged £60 per year for the privilege of a Stocks and Shares ISA - free on Trading 212.&lt;/p&gt;
&lt;p&gt;Even though I explored Trading 212 when it became available last year, I decided to only start investing at the start of the 2024 tax year to avoid any ISA-related tax implications from utilising two Stocks and Shares (S&#x26;S) ISAs. This is now a moot point, as you are able to pay into two different S&#x26;S ISAs as long as you do not exceed the yearly £20k limit.&lt;/p&gt;
&lt;h2&gt;Planning The Move&lt;/h2&gt;
&lt;p&gt;I am currently seven months into using Trading 212 for investing, but it was only in October that I felt I was in a position to transfer all my stock holdings from Freetrade. Why such a long wait?&lt;/p&gt;
&lt;p&gt;The wait was primarily down to not really understanding the correct route to transferring my portfolio without eating into the current year's tax-free allocation, whilst retaining the average stock price per holding. I also had concerns over transferring such a large sum of money; it's not something that should be taken lightly.&lt;/p&gt;
&lt;p&gt;I am hoping this post will provide some clarity through my experience in transferring my portfolio to Trading 212, even if it is tailored more towards what I experienced in moving away from Freetrade.&lt;/p&gt;
&lt;h2&gt;In-Specie Transfer&lt;/h2&gt;
&lt;p&gt;&lt;em&gt;In-specie&lt;/em&gt; wasn't a term I was familiar with prior to researching how I could move my stock portfolio to another platform.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;'In specie' is a Latin term meaning 'in the actual form'. Transferring an asset 'in specie' means to transfer the ownership of that asset from one person/company/entity to another person/company/entity in its current form, that is without the need to convert the asset to cash.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Before in-specie transfers, the only way to move from one stock broker to another was to sell all your holdings as cash to then reinvest again within the new brokerage. The main disadvantages of doing this is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Time out of the market creating more risk to price fluctuations.&lt;/li&gt;
&lt;li&gt;Potential loss due to the difference between the sell and buy prices.&lt;/li&gt;
&lt;li&gt;Additional brokerage fees when repurchasing the same assets with a new provider.&lt;/li&gt;
&lt;li&gt;Loss of tax efficiency if you have a large portfolio that might wipe out or exceed the yearly tax-free allocation.&lt;/li&gt;
&lt;li&gt;Missed dividend payouts.&lt;/li&gt;
&lt;li&gt;Taking losses on selling stocks that haven't made a profit.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I've noticed over the last couple of years that in-specie transfers have become more widely supported amongst the smaller stock brokers (the ones you and I are more likely to use), such as Freetrade, Trading 212 and InvestEngine, which makes moving from one platform to another a much simpler process.&lt;/p&gt;
&lt;p&gt;Even though the process has become simpler, it is still time-consuming, as transfer completion can take anywhere between four and six weeks depending on the coordination between both stock platforms.&lt;/p&gt;
&lt;h2&gt;My In-Specie Transfer Timeline&lt;/h2&gt;
&lt;p&gt;My own in-specie transfer took a little longer than I'd hoped: around six weeks, with the key milestones dated below.&lt;/p&gt;
&lt;h3&gt;12/10/24&lt;/h3&gt;
&lt;p&gt;Initiated the transfer process in Trading 212 by selecting the stocks I wanted to transfer. You can select specific stocks or your whole portfolio. I based my transfer on selecting all my holdings and specifying the average stock price as I want to retain my position.&lt;/p&gt;
&lt;h3&gt;23/10/24&lt;/h3&gt;
&lt;p&gt;Freetrade emailed to confirm that a transfer request had been received and to ask that my portfolio be put in order to allow the process to move smoothly, which entailed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Paying a £17 fee for each US holding in my account.&lt;/li&gt;
&lt;li&gt;Rounding up any fractional shares, as shares in a fractional state cannot be transferred. For one of my holdings, I decided to purchase slightly more to round up the total rather than sell down, as this particular stock is in the negative.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;12/11/24&lt;/h3&gt;
&lt;p&gt;Three weeks had passed and I hadn't heard anything from either party, so I contacted Trading 212 support to report the delay and ask if any reason could be provided. I didn't get a reply, but the next day, things started ticking along. Maybe this gave them the 'kick' they needed?&lt;/p&gt;
&lt;h3&gt;13/11/24&lt;/h3&gt;
&lt;p&gt;Trading 212 completed arrangements with Freetrade and were now in a position to start the actual transfer, which would take place over a two-week period.&lt;/p&gt;
&lt;h3&gt;21/11/24&lt;/h3&gt;
&lt;p&gt;I woke up to find all stocks had been transferred whilst maintaining my average stock price. There is still one minor job awaiting completion: transfer of a small amount of cash. The most important job had been done and I could now rest easy.&lt;/p&gt;
&lt;h2&gt;Next steps&lt;/h2&gt;
&lt;p&gt;Once the small amount of cash has been transferred, I plan on cancelling my yearly Freetrade Standard plan expiring in June 2025. By the time the transfer has been completed, I will have an outstanding 6 months left on my subscription that I can get refunded (minus a £5 admin fee).&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Rendering DataAnnotation Attributes For Forms In Umbraco]]></title><link>https://www.surinderbhomra.com/Blog/2024/11/14/Umbraco-Rendering-DataAnnotation-Attributes</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/11/14/Umbraco-Rendering-DataAnnotation-Attributes</guid><pubDate>Thu, 14 Nov 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When developing custom forms in Umbraco using ASP.NET Core’s Tag Helpers and DataAnnotations, I noticed that display names and validation messages weren’t being rendered for any of the fields.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Required(ErrorMessage = "The 'First Name' field is required.")]
[Display(Name = "First Name")]
public string? FirstName { get; set; }

[Required(ErrorMessage = "The 'Last Name' field is required.")]
[Display(Name = "Last Name")]
public string? LastName { get; set; }

[Required(ErrorMessage = "The 'Email Address' field is required.")]
[Display(Name = "Email Address")]
public string? EmailAddress { get; set; }

&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This was quite an odd issue that (if I'm honest!) took me quite some time to resolve as I followed my usual approach to building forms — an approach I’ve used many times in Umbraco without any issues. The only difference in this instance was that I was using an Umbraco form wrapper.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@using (Html.BeginUmbracoForm&amp;#x3C;ContactFormController&gt;("Submit"))
{
    &amp;#x3C;fieldset&gt;
        &amp;#x3C;!-- Form fields here --&gt;
    &amp;#x3C;/fieldset&gt;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I must have been living under a rock, as I have never come across this in all my years working with Umbraco. It could be down to the fact that the forms I have developed in the past didn't rely so heavily on .NET's DataAnnotation attributes.&lt;/p&gt;
&lt;p&gt;The only solution available to remedy this problem was to install a Nuget package (currently in beta) that has kindly been created by &lt;a href="https://www.nuget.org/packages/Dyfort.Umbraco.DictionaryMetadataProvider/1.0.0-beta1" target="_blank" rel="noopener noreferrer"&gt;Dyfort&lt;/a&gt;, which resolves the display name and validation attributes for in-form rendering.&lt;/p&gt;
&lt;p&gt;The Nuget package works in Umbraco 10 onwards. I've personally used it in version 13 without any problems. Until there is an official Umbraco fix, this does the job nicely and comes highly recommended if you encounter similar issues.&lt;/p&gt;
&lt;p&gt;I was introduced to another platform while working on a small integration project: Nexudus. Nexudus is a comprehensive system designed specifically for managing coworking spaces, shared workspaces and flexible offices, whilst incorporating the features you’d expect from a customer relationship management platform.&lt;/p&gt;
&lt;p&gt;For one part of this integration, newsletter subscribers needed to be stored in Nexudus through a statically-generated site built on Astro, hosted in Netlify. The only way to pass subscriber data to Nexudus is through their &lt;a href="https://developers.nexudus.com/reference/getting-started-with-your-api-1" target="_blank" rel="noopener noreferrer"&gt;API platform&lt;/a&gt;, which posed an opportunity to build this integration using Netlify serverless functions.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://developers.nexudus.com/reference/newslettersubscriber" target="_blank" rel="noopener noreferrer"&gt;Newsletter Subscriber API documentation&lt;/a&gt; provides a good starting point for sending through subscriber details and assigning to specific newsletter groups. However, one issue arose during integration whereby the endpoint would error if a user was already subscribed within Nexudus, even if it was a subscription for different group.&lt;/p&gt;
&lt;p&gt;It would seem that how Nexudus deals with existing subscribers requires a separate update process, as using the &lt;a href="https://developers.nexudus.com/reference/add-newslettersubscriber" target="_blank" rel="noopener noreferrer"&gt;Add Newsletter API endpoint&lt;/a&gt; alone does not take into consideration changes to subscription groups. It would be more straightforward if the Mailchimp API approach was taken, whereby the same user email address can be assigned to multiple mailing lists through a &lt;a href="https://mailchimp.com/developer/marketing/api/list-members/add-member-to-list/" target="_blank" rel="noopener noreferrer"&gt;single API endpoint&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When developing the Netlify serverless function, I put in additional steps that allow existing subscribers to be added to new subscription groups through the following process:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Look up the subscriber by email address.&lt;/li&gt;
&lt;li&gt;If a subscriber is not found, a new record is created.&lt;/li&gt;
&lt;li&gt;If a subscriber is found, update the existing record by passing through any changed values by the record ID.&lt;/li&gt;
&lt;li&gt;For an updated record, the new group ID needs to be sent along with the group IDs the user is already assigned to.&lt;/li&gt;
&lt;/ol&gt;
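&lt;p&gt;The branching in the steps above boils down to a small decision: either create a new record, or merge the new group ID into the existing subscriber's groups before updating. A minimal sketch of that logic follows — note that the field names (Id, Email, GroupIds) are my own illustrative assumptions, not the exact Nexudus schema:&lt;/p&gt;

```javascript
// Decide which request to send to Nexudus for a given subscriber.
// `existing` is the record returned by the email lookup, or null when
// no subscriber was found. Field names (Id, Email, GroupIds) are
// illustrative assumptions, not the exact Nexudus schema.
function buildSubscriberRequest(existing, email, newGroupId) {
  if (!existing) {
    // Step 2: no record found, so create a new subscriber.
    return {
      method: "POST",
      payload: { Email: email, GroupIds: [newGroupId] },
    };
  }

  // Steps 3 and 4: update the existing record by its ID, sending the
  // new group ID alongside the groups the user already belongs to.
  const groupIds = existing.GroupIds.includes(newGroupId)
    ? existing.GroupIds
    : existing.GroupIds.concat(newGroupId);

  return {
    method: "PUT",
    payload: { Id: existing.Id, Email: email, GroupIds: groupIds },
  };
}
```

&lt;p&gt;Keeping this decision as a pure function means the serverless handler only has to perform the lookup, call the function, and fire off the resulting request.&lt;/p&gt;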
&lt;p&gt;A Github repository has been created containing the aforementioned functionality that can be found here: &lt;a href="https://github.com/SurinderBhomra/nexudus-netlify-functions" target="_blank" rel="noopener noreferrer"&gt;nexudus-netlify-functions&lt;/a&gt;. I may add other Nexudus API endpoints that I have been working on to this repo going forward.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From Pen To Screen: Rediscovering The Process of Writing]]></title><link>https://www.surinderbhomra.com/Blog/2024/10/11/Rediscovering-The-Process-Of-Writing</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/10/11/Rediscovering-The-Process-Of-Writing</guid><pubDate>Fri, 11 Oct 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In a world filled with technological innovation that fulfils the majority of one's every need, one can sometimes end up feeling all too sterile, especially around the creative-led tasks that should invoke something more visceral.&lt;/p&gt;
&lt;p&gt;It’s only a matter of time before many of us start to feel a void from relying on multifunctional devices that have become deeply intertwined with every part of our lives. Loosening even a small thread of this technological dependence can bring a profound sense of focus.&lt;/p&gt;
&lt;p&gt;One aspect I felt I had to change was my approach to writing as I started getting the feeling that the process was becoming all too sterile and monotonous. I had the urge to go back to a more tactile method of publishing content by starting the process with good old-fashioned pen and paper.&lt;/p&gt;
&lt;p&gt;One thing that became noticeably apparent when returning to this method of curating content is that the real world is far less forgiving, requiring the brain to relearn how to organise thoughts for long-form writing. In the early stages of drafting blog posts by hand, my pages were cluttered with crossed-out sentences and scribbled words. It became evident that I was really reliant on the forgiving nature of writing apps where blocks of text could easily be moved around.&lt;/p&gt;
&lt;p&gt;However, with each blog post I wrote by hand, my brain has managed to think further ahead where it previously lacked forethought and I regularly experienced &lt;a href="/Blog/2019/09/05/Writers-Block-and-The-Difficulties-of-Blogging"&gt;writer's block&lt;/a&gt;. The posts I've published throughout September were all curated by first compiling a basic outline, which was then expanded into longer form on paper. This is probably how I managed to increase my output during the month. I can only attribute this to the lack of visual distractions creating a more kinesthetic environment for thoughts to gestate.&lt;/p&gt;
&lt;p&gt;My approach to writing has changed over the years since I have been blogging and I am reminded of how I used to assimilate ideas from a post I wrote back in 2015: &lt;a href="/Blog/2015/06/01/Pen-Paper-Productivity"&gt;Pen + Paper = Productivity&lt;/a&gt;. It is here where I said something profound that has been lost on me:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Paper has no fixed structure that you are forced to conform to, which makes processing your own thoughts very easy. Unfortunately, software for note-taking has not advanced nearly as fast. It's still all too linear and fixed.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It's been nine years since that post was written, and while technology has advanced to the point of offering the convenience of writing on tablets, which I’ve done for a while using my own &lt;a href="/Blog/2019/06/30/My-Essential-iPad-Accessories-and-Applications"&gt;Apple iPad and Apple Pencil&lt;/a&gt; — it simply doesn’t compare, no matter how much we try to mimic the experience with "paperlike" screen protectors.&lt;/p&gt;
&lt;p&gt;Even though technology helps us accomplish things faster, it comes at the cost of not being in the moment. Sometimes, the journey is more meaningful than the destination, and we don’t always need to rely on technology simply because it’s there.&lt;/p&gt;
&lt;p&gt;Does going back to basics make the publishing process longer? Surprisingly, not as much as you’d think. I was pleasantly surprised to discover that after everything is written down on paper, the final steps are mostly mechanical — typing it up on my laptop, running a spell and grammar check, adding an image, and finally hitting the publish button.&lt;/p&gt;
&lt;p&gt;When handwriting long-form content, the process needs to be as easy and frictionless as possible by investing in a good quality writing instrument. To quote Emmert Wolf: &lt;em&gt;An artist is only as good as his tools&lt;/em&gt;. Using a better pen has encouraged me to write more, especially compared to the fatigue I felt with a Bic Crystal, which I find more suited to casual note-taking.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Who knows, maybe this new approach will even improve the overall legibility of my handwriting — it really has deteriorated since I left school. Most likely the result of many years of programming. I don't think I will ever stop relying on my wife to write birthday and greeting cards anytime soon.&lt;/p&gt;
&lt;p&gt;I’d forgotten just how satisfying the experience of handwriting blog posts can be. It’s a bit like channelling the spirit of &lt;a href="https://en.wikipedia.org/wiki/Bob_Ross" target="_blank" rel="noopener noreferrer"&gt;Bob Ross&lt;/a&gt;, layering words like brushstrokes that gradually form paragraphs into passages. When you're done, you can sit back and admire the canvas of carefully crafted marks you’ve created.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[PowerShell: Output A List of Recently Created Or Modified Files]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/27/Powershell-Output-Recently-Created-Modified-Files</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/27/Powershell-Output-Recently-Created-Modified-Files</guid><pubDate>Fri, 27 Sep 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At times there is a need to get a list of files that have been updated. This could be for the following reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Audit compliance&lt;/strong&gt; to maintain records of application changes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backup verification&lt;/strong&gt; to confirm the right files were backed up.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Verification of changed files&lt;/strong&gt; to confirm which files were added, modified, or deleted during an update.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security checks&lt;/strong&gt; to ensure that there have been no unauthorised or suspicious files changed or installed through hacking.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Troubleshooting issues&lt;/strong&gt; after a new application release, where a list of changed files can help identify the source of problems.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Based on the information I found online, I put together a PowerShell script that was flexible enough to meet the needs of the above scenarios, as I encountered one of them this week. I'll let you guess the scenario I faced.&lt;/p&gt;
&lt;p&gt;At its core, the following PowerShell script uses the &lt;code&gt;Get-ChildItem&lt;/code&gt; command to list all files recursively across all sub-folders, ordered by creation date descending, with the addition of a handful of optional parameters.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Get-ChildItem -Path C:\My-Path -Recurse -Include *.png | 
			Sort-Object -Property CreationTime -Descending | 
			Select-Object -First 5 CreationTime,LastWriteTime,FullName | 
			Export-Csv "file-list.csv" -NoTypeInformation
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Breakdown of the parameters used:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter/Object&lt;/th&gt;
&lt;th&gt;Detail&lt;/th&gt;
&lt;th&gt;Is Optional&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;-Path&lt;/td&gt;
&lt;td&gt;The folder path to where files need to be listed.&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-Recurse&lt;/td&gt;
&lt;td&gt;Get files from the path and its subdirectories&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-Include&lt;/td&gt;
&lt;td&gt;Filter the file output through a path element or pattern. This only works when the "Recurse" parameter is present.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Select&lt;/td&gt;
&lt;td&gt;Limit the number of results returned and specify the fields to output.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sort-Object&lt;/td&gt;
&lt;td&gt;Specify field and sort order.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Export-Csv&lt;/td&gt;
&lt;td&gt;Export the list of files to a CSV file.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;If the files need to be sorted by last modified date, the &lt;code&gt;Sort-Object&lt;/code&gt; property needs to be set to "LastWriteTime".&lt;/p&gt;
&lt;p&gt;When the script is run, you'll see the results rendered in the following way:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CreationTime        LastWriteTime       FullName
------------        -------------       --------
25/05/2023 20:33:44 25/05/2023 20:33:44 X:\Downloads\synology\Screenshot 2023-05-25 at 20.33.38.png
16/05/2023 14:18:21 16/05/2023 14:18:21 X:\Downloads\synology\Screenshot 2023-05-16 at 14.18.15.png
&lt;/code&gt;&lt;/pre&gt;
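&lt;p&gt;As a variation on the same approach (an illustrative sketch, not part of the original script), the date fields can also be used to filter the results, for example listing only files modified within the last seven days, sorted by last modified date:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative variant: PNG files modified within the last 7 days,
# newest first, limited to the top 10 results.
Get-ChildItem -Path C:\My-Path -Recurse -Include *.png | 
			Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) } | 
			Sort-Object -Property LastWriteTime -Descending | 
			Select-Object -First 10 CreationTime,LastWriteTime,FullName | 
			Export-Csv "recently-modified.csv" -NoTypeInformation
&lt;/code&gt;&lt;/pre&gt;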
&lt;h2&gt;Further Information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://powershellfaqs.com/sort-files-by-date-in-powershell/" target="_blank" rel="noopener noreferrer"&gt;How to Sort Files by Date in PowerShell?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/get-childitem?view=powershell-7.4" target="_blank" rel="noopener noreferrer"&gt;PowerShell - Get-ChildItem&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Umbraco 13: Get Dropdown Value From A Custom Member Type]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/24/Umbraco-13-Get-Dropdown-Value-From-Custom-Member-Type</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/24/Umbraco-13-Get-Dropdown-Value-From-Custom-Member-Type</guid><pubDate>Tue, 24 Sep 2024 18:35:00 GMT</pubDate><content:encoded>&lt;p&gt;I've been working with custom functionality for registering and authenticating external site users in Umbraco 13 using its Members feature.&lt;/p&gt;
&lt;p&gt;A custom Member Type was created so I could add field properties to store all member registration data. This consisted of Textboxes, Textareas and Dropdown fields.&lt;/p&gt;
&lt;p&gt;Getting values for fields in code is very straightforward, but I encountered issues when dealing with fields that consist of preset values, such as a Dropdown list of titles (Mr/Mrs/Ms/etc.).&lt;/p&gt;
&lt;p&gt;Based on the Umbraco documentation for working with a &lt;a href="https://docs.umbraco.com/umbraco-cms/v/13.latest-lts/fundamentals/backoffice/property-editors/built-in-umbraco-property-editors/dropdown" target="_blank" rel="noopener noreferrer"&gt;Dropdown field&lt;/a&gt;, I should be able to get the selected value through this one line of code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@if (Model.HasValue("title"))
{
    &amp;#x3C;p&gt;@(Model.Value&amp;#x3C;string&gt;("title"))&amp;#x3C;/p&gt;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When working with custom properties from a Member Type, the approach is different. &lt;code&gt;GetValue()&lt;/code&gt; is the only accessor available to us to output a value - something we are already accustomed to when working in Umbraco.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;IMember? member = memberService.GetByEmail("johndoe@gmail.com");
string title = member.Properties["title"].GetValue()?.ToString(); // Output: "[\"Mr\"]"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, the value is returned as a serialized array. This is also the case when using the typed &lt;code&gt;GetValue()&lt;/code&gt; accessor on the property:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;IMember? member = memberService.GetByEmail("johndoe@gmail.com");
string title = member.GetValue&amp;#x3C;string&gt;("title"); // Output: "[\"Mr\"]"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/Umbraco/Umbraco13MemberDropDownPropertyDebug.png?tr=w-600#center" alt="Umbraco 13 - Dropdown Value From Custom Member Type Property"&gt;&lt;/p&gt;
&lt;p&gt;The only way to get around this was to create a custom extension method to deserialize the string array so the value alone could be output:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public static class MemberPropertyExtensions
{
    /// &amp;#x3C;summary&gt;
    /// Gets the selected value of a Dropdown property.
    /// &amp;#x3C;/summary&gt;
    /// &amp;#x3C;param name="property"&gt;&amp;#x3C;/param&gt;
    /// &amp;#x3C;returns&gt;&amp;#x3C;/returns&gt;
    public static string? GetSelectedDropdownValue(this IProperty property)
    {
        if (property == null)
            return string.Empty;

        string? value = property.GetValue()?.ToString();

        if (string.IsNullOrEmpty(value))
            return string.Empty;

        string[]? propertyArray = JsonConvert.DeserializeObject&amp;#x3C;string[]&gt;(value);

        return propertyArray?.FirstOrDefault();
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It's a simple but effective solution. Now our original code can be updated by adding our newly created &lt;code&gt;GetSelectedDropdownValue()&lt;/code&gt; method to the property:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;IMember? member = memberService.GetByEmail("johndoe@gmail.com");
string title = member.Properties["title"].GetSelectedDropdownValue();
&lt;/code&gt;&lt;/pre&gt;
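&lt;p&gt;As an aside, if a Dropdown property were configured to allow multiple selections, the same deserialisation approach would extend naturally. A hypothetical companion method (a sketch, not something taken from the project) could return all selected values:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/// &amp;#x3C;summary&gt;
/// Hypothetical sketch: gets all selected values of a multi-select Dropdown property.
/// &amp;#x3C;/summary&gt;
public static IEnumerable&amp;#x3C;string&gt; GetSelectedDropdownValues(this IProperty property)
{
    // The stored value is a JSON string array, e.g. "[\"Mr\",\"Dr\"]".
    string? value = property?.GetValue()?.ToString();

    if (string.IsNullOrEmpty(value))
        return Enumerable.Empty&amp;#x3C;string&gt;();

    return JsonConvert.DeserializeObject&amp;#x3C;string[]&gt;(value) ?? Array.Empty&amp;#x3C;string&gt;();
}
&lt;/code&gt;&lt;/pre&gt;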
&lt;h2&gt;Useful Information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="/Blog/2021/12/01/Umbraco-Setting-A-DropDownList-Value-Programmatically"&gt;Umbraco: Setting A DropDownList Value Programmatically&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Changing Tracks: Moving from Spotify to YouTube Music]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/20/Moving-From-Spotify-To-YouTube-Music</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/20/Moving-From-Spotify-To-YouTube-Music</guid><pubDate>Fri, 20 Sep 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I've had a Spotify music membership for as long as I can remember. Many other subscriptions held throughout my life have come and gone, but Spotify has stood the test of time.&lt;/p&gt;
&lt;p&gt;A seed of doubt was planted when Spotify began raising the prices of their plans multiple times over a short period of time, beginning in April 2021. Even then, I was relatively unconcerned; it was annoying, but I felt content knowing there were no better music providers that could compete with what Spotify provided. Spotify made music very accessible to me in every way.&lt;/p&gt;
&lt;p&gt;During the first price hike, I trialled Apple Music during a brief period of insanity only to quickly come running back to the safety of Spotify.&lt;/p&gt;
&lt;p&gt;The penny dropped in May 2024, during the third price hike, when I began to question whether my Spotify usage was worth paying £11.99 per month. Although I do listen to music, I occasionally go through periods where I only listen to podcasts, which are freely available online and on podcasting platforms.&lt;/p&gt;
&lt;h2&gt;First Steps To Considering YouTube Music As A Viable Replacement&lt;/h2&gt;
&lt;p&gt;Before making any hasty decisions, I audited all the subscriptions both my wife and I use to see if there was any possibility of making cost savings... Just like a Conservative party government imposing austerity measures, except my actions wouldn't lead to a Liz Truss level economic crisis.&lt;/p&gt;
&lt;p&gt;It was then that I discovered my wife's YouTube Premium subscription, which she had purchased through the Apple App Store for an absurdly high price. A word to the wise: never buy subscriptions through Apple's App Store, because Apple charges a commission on top. My wife was paying around £18 per month compared to £12.99 if purchased directly from the YouTube website.&lt;/p&gt;
&lt;p&gt;I digress...&lt;/p&gt;
&lt;p&gt;This was enough to get me thinking about upgrading to the Family tier that included:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ad-free videos&lt;/li&gt;
&lt;li&gt;Offline downloads&lt;/li&gt;
&lt;li&gt;YouTube Music&lt;/li&gt;
&lt;li&gt;Add up to 5 members to the subscription&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All this for £19.99 per month. At this price, we would make savings by moving away from our individual YouTube and Spotify plans. I was already sold on ad-free videos (those advertisements are so annoying!), and if I could be persuaded to subscribe to YouTube Music, this would end up being a very cost-effective option.&lt;/p&gt;
&lt;p&gt;The writing was on the wall. My Spotify days were numbered. I looked into what was involved (if possible) in migrating all my playlists over to YouTube Music.&lt;/p&gt;
&lt;h2&gt;Requirements and Initial Thoughts of YouTube Music&lt;/h2&gt;
&lt;p&gt;Prior to carrying out any form of migration, I opted for a 30 day free trial of YouTube Music as I wanted to see if it met as many key requirements as possible.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Requirement Met?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Availability of all songs from artists I listen to including the obscure ones&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Podcasts&lt;/td&gt;
&lt;td&gt;Big yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Native MacOS app&lt;/td&gt;
&lt;td&gt;Room for improvement&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ability to cast music to my speakers on my network&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quality new music suggestions&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Overall, YouTube Music met the majority of my requirements. As expected, it does take a little while to familiarise oneself with the interface, but there are similarities when compared with Spotify.&lt;/p&gt;
&lt;h3&gt;YouTube Music - The Extension of YouTube&lt;/h3&gt;
&lt;p&gt;YouTube Music is really an extension of YouTube in how it is able to pull in specific YouTube content, whether that is music videos, podcasts or shows. All the audio-related content in video form you would normally view on YouTube is encompassed here. In most cases, this is an advantage; however, the one aspect where the lines between music and video get blurred is the auto-generated "Liked music" playlist.&lt;/p&gt;
&lt;p&gt;You may find the "Liked music" playlist is already prefilled with videos you have liked on YouTube. If YouTube Music deems a liked video as music, it will also be shown here, which isn't necessarily accurate. For example, it automatically listed a &lt;a href="https://www.youtube.com/watch?v=uty2zd7qizA&amp;#x26;list=LL&amp;#x26;index=79" target="_blank" rel="noopener noreferrer"&gt;Breaking Bad Parody video&lt;/a&gt; I liked from 9 years ago. If you prefer your randomly liked videos to stay in solely in YouTube, you have to manually disable the "Show your liked music from YouTube" feature in the settings.&lt;/p&gt;
&lt;h3&gt;The Music Catalog and New Music Recommendations&lt;/h3&gt;
&lt;p&gt;The music catalog size is on par with Spotify, and there hasn't been a time when a track wasn't available. In fact, there were 3-4 tracks in my Spotify playlist that were no longer accessible, but this was not the case on YouTube Music, which was a surprise.&lt;/p&gt;
&lt;p&gt;During times when I am searching for new music, I have found the recommendation algorithm far better than Spotify's, and after a couple of weeks of using YouTube Music it had compiled some really good personalised mixes - something that will only get better with time. Due to its link with YouTube, I was recommended even more live performances, remixes and cover tracks.&lt;/p&gt;
&lt;p&gt;What surprised me the most is a feature I didn't even know I needed: the Offline Mixtape. There are times when I don't actually know what tracks I want to listen to on the road, and the Offline Mixtape compiles a list of tracks consisting of a combination of my liked songs and similar tracks for added variation - all automatically synchronised to my devices.&lt;/p&gt;
&lt;h3&gt;Podcasts&lt;/h3&gt;
&lt;p&gt;All the podcasts I listen to on Spotify were easy to find on YouTube Music. There is the added benefit of playing a podcast as audio or video (if the podcast offers this format), which is a nice touch. I was also recommended new types of podcasts that I would never have been exposed to based on what I listen to. I am sure (and correct me if I am wrong) Spotify didn't make recommendations as visible as what I am seeing in YouTube Music, where podcasts are categorised. For example, the categories offered to me are: Wealth, Finances, Health, Mysteries, etc.&lt;/p&gt;
&lt;h3&gt;Lack of Native Desktop App&lt;/h3&gt;
&lt;p&gt;The lack of a native desktop app detracts from my otherwise glowing review of YouTube Music. I was surprised to find that there isn't one, given that this is the norm among other music providers.&lt;/p&gt;
&lt;p&gt;Chrome does allow you to install it as a Progressive Web App, which is better than nothing, but it just doesn't feel integrated enough. I keep accidentally closing the YouTube Music app on macOS by clicking the "close" button when all I want to do is hide the window.&lt;/p&gt;
&lt;p&gt;It can also be laggy at times, especially when Chromecasting to a smart speaker. When I change tracks, my speaker takes a few seconds to catch up.&lt;/p&gt;
&lt;p&gt;Overall, it's good but not great. It doesn't have the same polish as the Spotify app, but it's definitely manageable. The lack of a native desktop app has not dissuaded me from using it, and if needed, I can always use the YouTube Music app on my Pixel or iPad.&lt;/p&gt;
&lt;h2&gt;The Migration&lt;/h2&gt;
&lt;p&gt;After a satisfactory trial period using YouTube Music, I looked for ways to move all my Spotify playlists. There are many options through online services and software that can aid the migration process, which can be used for free (sometimes with limitations) or at a cost.&lt;/p&gt;
&lt;p&gt;After carrying out some research on the various options available to me, I opted for a free CLI tool built in Python: &lt;a href="https://github.com/linsomniac/spotify_to_ytmusic" target="_blank" rel="noopener noreferrer"&gt;spotify_to_ytmusic&lt;/a&gt;. It received a lot of positive feedback in a &lt;a href="https://www.reddit.com/r/YoutubeMusic/comments/1193llv/comment/kefl8ms/?utm_source=share&amp;#x26;utm_medium=web3x&amp;#x26;utm_name=web3xcss&amp;#x26;utm_term=1&amp;#x26;utm_content=share_button" target="_blank" rel="noopener noreferrer"&gt;Reddit post&lt;/a&gt;, where users were able to migrate thousands of songs spanning multiple playlists with ease. The only disadvantage with free options that provide unlimited migration is that they aren't necessarily straightforward for the average user, and some technical acumen is required.&lt;/p&gt;
&lt;p&gt;Installing, setting up, and familiarising yourself with the CLI commands of the spotify_to_ytmusic application is the only part that takes some time. But once you have generated API keys in both Spotify and Google and followed the instructions detailed in the GitHub repo, the migration process itself doesn't take long at all.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;When I told one of my coworkers that I had switched to YouTube Music, I received a sceptical look and a response to confirm I am of sane mind. This exemplifies how we have simply accepted Spotify as the only acceptable music platform, blinded to alternatives.&lt;/p&gt;
&lt;p&gt;YouTube Premium, which includes YouTube Music in one package, is an extremely good deal. Not only can you watch YouTube videos ad-free, but you also get a music library comparable to Spotify at a similar price.&lt;/p&gt;
&lt;p&gt;If you have been questioning whether YouTube Music is worth a try, question no more and make the move.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Google Maps Distance Matrix API - Outputting More Than 25 Destinations]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/13/Google-Maps-Distance-Matrix-25-Requests-Limit</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/13/Google-Maps-Distance-Matrix-25-Requests-Limit</guid><pubDate>Fri, 13 Sep 2024 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The Google Maps Distance Matrix API gives us the capability to calculate travel distance and time between multiple locations across different modes of transportation, such as driving, walking, or cycling. This is just one of the many APIs Google provides to allow us to get the most out of location and route related data.&lt;/p&gt;
&lt;p&gt;I needed to use the Google Distance Matrix API (GDMA) to calculate the distance of multiple points of interest (destinations) from a single origin. The dataset of destinations consisted of sixty to one hundred rows of data containing the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Title&lt;/li&gt;
&lt;li&gt;Latitude&lt;/li&gt;
&lt;li&gt;Longitude&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This dataset would need to be passed to the GDMA as destinations in order to get information on how far each item was from the origin. One thing that came to light during integration was that the API is limited to outputting only 25 items of distance data per request.&lt;/p&gt;
&lt;p&gt;The limit posed by the GDMA would be fine for the majority of use-cases, but in my case it posed a small problem, as I needed to pass the whole dataset of destinations to ensure all points of interest were ordered by the shortest distance.&lt;/p&gt;
&lt;p&gt;The only way I could get around the limits posed by the GDMA was to batch my requests 25 destinations at a time. The dataset I would be passing would never exceed 100 items, so I was fairly confident this would be an adequate approach. However, I cannot be 100% certain what the implications of such an approach would be if you were dealing with thousands of destinations.&lt;/p&gt;
&lt;p&gt;The code below demonstrates a small sample-set of destination data that will be used to calculate distance from a single origin.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/*
	Initialise the application functionality.
*/
const initialise = () =&gt; {
	const destinationData = [
                    {
                      title: "Wimbledon",
                      lat: 51.4273717,
                      long: -0.2444923,
                    },
                    {
                      title: "Westfields Shopping Centre",
                      lat: 51.5067724,
                      long: -0.2289425,
                    },
                    {
                      title: "Sky Garden",
                      lat: 51.3586154,
                      long: -0.9027887,
                    }
                  ];
                  
	getDistanceFromOrigin("51.7504091", "-1.2888729", destinationData);
}

/*
	Processes a list of destinations and outputs distances closest to the origin.
*/
const getDistanceFromOrigin = (originLat, originLong, destinationData) =&gt; {
  const usersMarker = new google.maps.LatLng(originLat, originLong);
  let distanceInfo = [];
  
  if (destinationData.length &gt; 0) {
  	// Split destinations into batches of 25.
    const destinationBatches = chunkArray(destinationData, 25);

    // Make a call to Google Maps in batches.
    const googleMapsRequestPromises = destinationBatches.map(batch =&gt; googleMapsDistanceMatrixRequest(usersMarker, batch));

    // Iterate through all the asynchronous promises returned by the Google Maps batch requests.
    Promise.all(googleMapsRequestPromises).then(responses =&gt; {
      const elements = responses.flatMap(item =&gt; item.rows).flatMap(item =&gt; item.elements);

      // Set the distance for each destination in the destination data.
      elements.forEach(({ distance, status }, index) =&gt; {
        if (status === "OK") {
          destinationData[index].distance = distance.text;
          destinationData[index].distance_value = distance.value;
        }
      });
      
      renderTabularData(destinationData.sort((a, b) =&gt; (a.distance_value &gt; b.distance_value ? 1 : -1)));
    })
    .catch(error =&gt; {
      console.error("Error calculating distances:", error);
    });
  }
}

/*
	Outputs tabular data of distances.
*/
const renderTabularData = (destinationData) =&gt; {
	let tableHtml = "";
  
    tableHtml = `&amp;#x3C;table&gt;
                    &amp;#x3C;tr&gt;
                        &amp;#x3C;th&gt;No.&amp;#x3C;/th&gt;
                        &amp;#x3C;th&gt;Destination Name&amp;#x3C;/th&gt;
                        &amp;#x3C;th&gt;Distance&amp;#x3C;/th&gt;
                    &amp;#x3C;/tr&gt;`;

	if (destinationData.length === 0) {
        tableHtml += `&amp;#x3C;tr&gt;
                        &amp;#x3C;td colspan="3"&gt;No data&amp;#x3C;/td&gt;
                    &amp;#x3C;/tr&gt;`;
  }
  else {
        destinationData.map((item, index) =&gt; {
  		        tableHtml += `&amp;#x3C;tr&gt;
                                &amp;#x3C;td&gt;${index+1}&amp;#x3C;/td&gt;
                                &amp;#x3C;td&gt;${item.title}&amp;#x3C;/td&gt;
                                &amp;#x3C;td&gt;${item.distance}&amp;#x3C;/td&gt;
                            &amp;#x3C;/tr&gt;`;
            });
  }
  
  tableHtml += `&amp;#x3C;/table&gt;`;
  
  document.getElementById("js-destinations").innerHTML = tableHtml;
}

/*
	Queries Google API Distance Matrix to get distance information.
*/
const googleMapsDistanceMatrixRequest = (usersMarker, destinationBatch) =&gt; {
  const distanceService = new google.maps.DistanceMatrixService();
  let destinationsLatLong = [];
  
  if (destinationBatch.length === 0) {
  	return;
  }
  
  destinationBatch.forEach((item) =&gt; {
    destinationsLatLong.push({
      lat: parseFloat(item.lat),
      lng: parseFloat(item.long),
    });
  });
  
  const request = 
        {
          origins: [usersMarker],
          destinations: destinationsLatLong,
          travelMode: "DRIVING",
        };

  return new Promise((resolve, reject) =&gt; {
    distanceService.getDistanceMatrix(request, (response, status) =&gt; {
      if (status === "OK") {
        resolve(response);
      } 
      else {
        reject(new Error(`Unable to retrieve distances: ${status}`));
      }
    });
  });
};

/*
	Takes an array and resizes to specified size.
*/
const chunkArray = (array, chunkSize) =&gt; {
  const chunks = [];

  for (let i = 0; i &amp;#x3C; array.length; i += chunkSize) {
    chunks.push(array.slice(i, i + chunkSize));
  }

  return chunks;
}

/*
	Load Google Map Distance Data.
*/
initialise();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;getDistanceFromOrigin()&lt;/code&gt; and &lt;code&gt;googleMapsDistanceMatrixRequest()&lt;/code&gt; functions are the key pieces: they take the list of destinations, batch them into chunks of 25 and return a tabular list of data. This code can be expanded further to render each destination as a pin on an embedded Google Map, since we have the longitude and latitude points.&lt;/p&gt;
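&lt;p&gt;Stripped of the Google Maps specifics, the batch-and-merge pattern at the core of these two functions can be reduced to a small sketch. Here, a mock request function (a stand-in for the real Distance Matrix call, not part of the demo) resolves one result per destination, and the batched responses are merged back into a single list:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Split an array into fixed-size batches (25 mirrors the GDMA limit).
const chunkArray = (array, chunkSize) =&gt; {
  const chunks = [];
  for (let i = 0; i &amp;#x3C; array.length; i += chunkSize) {
    chunks.push(array.slice(i, i + chunkSize));
  }
  return chunks;
};

// Stand-in for the real Distance Matrix request - resolves one
// result element per destination in the batch.
const mockRequest = (batch) =&gt;
  Promise.resolve(batch.map((item) =&gt; ({ title: item.title, status: "OK" })));

const destinations = Array.from({ length: 60 }, (_, i) =&gt; ({ title: `Place ${i + 1}` }));
const batches = chunkArray(destinations, 25);

// 60 destinations become batches of 25, 25 and 10. Promise.all preserves
// batch order, so the merged results line up with the original array.
Promise.all(batches.map(mockRequest)).then((responses) =&gt; {
  const allResults = responses.flat();
  console.log(batches.length, allResults.length); // 3 60
});
&lt;/code&gt;&lt;/pre&gt;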
&lt;p&gt;The full working demo can be found via the following link: &lt;a href="https://jsfiddle.net/sbhomra/ns2yhfju" target="_blank" rel="noopener noreferrer"&gt;https://jsfiddle.net/sbhomra/ns2yhfju/&lt;/a&gt;. To run this demo, a Google Maps API key needs to be provided, which you will be prompted to enter on load.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The Silent Blogger]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/05/The-Silent-Blogger</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/05/The-Silent-Blogger</guid><pubDate>Thu, 05 Sep 2024 22:15:00 GMT</pubDate><content:encoded>&lt;p&gt;Everyone has different reasons for blogging. It could be for professional development, knowledge exchange, documenting a personal journey, or just as a form of self-expression. My motive for blogging includes a small portion of each of these reasons, with one major difference: you have to find me.&lt;/p&gt;
&lt;p&gt;I don't go out of my way to promote this small portion of the internet web-sphere that I own. In the past, I experimented with syndicating articles to more prominent blogging media platforms and communities, but it didn't fulfil my expectations or bring any further benefits.&lt;/p&gt;
&lt;p&gt;I've observed that my demeanour mirrors my approach to blogging, in that I don't feel the need to go to excessive lengths to disclose my accomplishments or the problems I've solved. This could be due to my age, as I am more comfortable just being myself. I have nothing to prove to anyone.&lt;/p&gt;
&lt;p&gt;The 13th-century poet Rumi once said:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In silence, there is eloquence. Stop weaving and see how the pattern improves.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This quote implies that silence is the source of clarity that allows thoughts to develop naturally and emerge.&lt;/p&gt;
&lt;p&gt;Ever since I stopped the pursuit of recognition and a somewhat futile attempt to force my written words onto others, the natural order has allowed this blog to grow organically. Readers who find me through keyword searches have brought better interaction and monetisation (through my &lt;a href="https://www.buymeacoffee.com/surinderbhomra" target="_blank" rel="noopener noreferrer"&gt;Buy Me A Coffee page&lt;/a&gt;). Fortunately, since I've made an effort to make this blog as SEO-friendly as possible, my posts appear to perform fairly well across search engines.&lt;/p&gt;
&lt;p&gt;No longer do I stress over feeling the need to write blog posts using the "carrot and stick" approach just to garner more readership. I have found I benefit from blogging about the things that interest me. It's quality over quantity.&lt;/p&gt;
&lt;p&gt;If you have got this far in this very random admission of silent blogging, you're probably thinking: So what's your point?&lt;/p&gt;
&lt;p&gt;I suppose what I'm trying to say is that it's okay to blog without the expectation of having to promote every single post to the world in the hope of some recognition. Previously, this was my way of thinking, and I've since realised that I was blogging (for the most part) for the wrong reasons. In one of my posts written in &lt;a href="/Blog/2019/09/05/Writers-Block-and-The-Difficulties-of-Blogging"&gt;2019&lt;/a&gt;, I wrote about my pursuit to be in the same league as the great bloggers I idolised:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I look at my blogging heroes like Scott Hanselman, Troy Hunt, Mosh Hamedani and Iris Classon (to name a few) and at times ponder if I will have the ability to churn out great posts on a regular basis with such ease and critical acclaim as they do.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I've learnt not to be so hard on myself and to lessen my expectations. When you trade your expectations for appreciation, your whole world changes; even though a sense of achievement feels great, it's far more important to enjoy what you're doing (roughly paraphrasing Tony Robbins here).&lt;/p&gt;
&lt;p&gt;This new perspective has reaffirmed my belief that I have always enjoyed blogging, but being a silent blogger provides a sense of freedom.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Kentico 13: Cookiebot Blocking Page Builder Scripts]]></title><link>https://www.surinderbhomra.com/Blog/2024/09/01/Kentico-13-Cookiebot-Page-Builder-Scripts</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/09/01/Kentico-13-Cookiebot-Page-Builder-Scripts</guid><pubDate>Sun, 01 Sep 2024 20:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Cookiebot was added to a Kentico 13 site a few weeks ago, resulting in unexpected issues with pages that contained Kentico forms, which led me to believe there was a potential conflict with Kentico Page Builder's client-side files.&lt;/p&gt;
&lt;p&gt;As all Kentico developers are aware, the Page Builder CSS and JavaScript files are required for managing the layout of pages built with widgets, as well as for the creation and use of Kentico forms. These consist of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PageBuilderStyles - consisting of CSS files declared in the &lt;code&gt;&amp;#x3C;head&gt;&lt;/code&gt; section of the page code.&lt;/li&gt;
&lt;li&gt;PageBuilderScripts - consisting of JavaScript files declared before the closing &lt;code&gt;&amp;#x3C;/body&gt;&lt;/code&gt; tag.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this case, the issue resided with Cookiebot blocking scripts that are generated in code as an extension method or as a Razor Tag Helper.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;html&gt;
&amp;#x3C;body&gt;
    ...
    &amp;#x3C;!-- Extension Method --&gt;
    @Html.Kentico().PageBuilderScripts()    
    ...
    &amp;#x3C;!-- Razor Tag Helper --&gt;
    &amp;#x3C;page-builder-scripts /&gt;
    ...
&amp;#x3C;/body&gt;
&amp;#x3C;/html&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Depending on the cookie consent given, Kentico Forms either failed on user submission or did not fulfil a specific action, such as conditional form element visibility or validation.&lt;/p&gt;
&lt;p&gt;The first thing that came to mind was that I needed to configure the Page Builder scripts to be ignored by Cookiebot. Cookiebot shouldn't hinder any key site functionality as long as you have configured the consent options correctly to disable cookie blocking for specific client-side scripts via the &lt;code&gt;data-cookieconsent&lt;/code&gt; attribute:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;script data-cookieconsent="ignore"&gt;
    // This JavaScript code will run regardless of cookie consent given.
&amp;#x3C;/script&gt;

&amp;#x3C;script data-cookieconsent="preferences, statistics, marketing"&gt;
    // This JavaScript code will run if consent is given to one or all of the options set in the data-cookieconsent attribute.
&amp;#x3C;/script&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Of course, it goes without saying that &lt;code&gt;data-cookieconsent="ignore"&lt;/code&gt; should be used sparingly - only in situations where the script must execute regardless of consent and you have employed alternative ways of ensuring that cookies are only set after consent has been obtained.&lt;/p&gt;
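&lt;p&gt;One such alternative is Cookiebot's manual blocking approach, where a script is given a &lt;code&gt;type="text/plain"&lt;/code&gt; and a consent category, and Cookiebot only activates it once matching consent has been given. A minimal sketch (the external script URL is purely illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;!-- Held back by Cookiebot until "statistics" consent is given --&gt;
&amp;#x3C;script type="text/plain" data-cookieconsent="statistics"&gt;
    // Cookie-setting analytics initialisation goes here.
&amp;#x3C;/script&gt;

&amp;#x3C;!-- External scripts swap "src" for "data-src" so the browser does not load them prematurely --&gt;
&amp;#x3C;script type="text/plain" data-cookieconsent="marketing" data-src="https://example.com/tracker.js"&gt;&amp;#x3C;/script&gt;
&lt;/code&gt;&lt;/pre&gt;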
&lt;p&gt;But how can the Page Builder scripts generated by Kentico be modified to include the cookie consent attribute?&lt;/p&gt;
&lt;p&gt;If I am being honest, the approach I have taken to resolve this issue does not sit quite right with me, as I feel there is a better solution out there that I just haven't been able to find...&lt;/p&gt;
&lt;p&gt;Inside the &lt;code&gt;_Layout.cshtml&lt;/code&gt; file, I added a conditional statement that checks whether the page is in edit mode. If true, the Page Builder scripts render normally using the generated output from the Tag Helper. Otherwise, all the scripts the Tag Helper would generate are output manually with the &lt;code&gt;data-cookieconsent&lt;/code&gt; attribute assigned.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;#x3C;html&gt;
&amp;#x3C;body&gt;
    ... 
    ...
    @if (Context.Kentico().PageBuilder().EditMode)
    {
        &amp;#x3C;page-builder-scripts /&gt;
    }
    else
    {
        &amp;#x3C;script src="/_content/Kentico.Content.Web.Rcl/Scripts/jquery-3.5.1.js" data-cookieconsent="ignore"&gt;&amp;#x3C;/script&gt;
        &amp;#x3C;script src="/_content/Kentico.Content.Web.Rcl/Scripts/jquery.unobtrusive-ajax.js" data-cookieconsent="ignore"&gt;&amp;#x3C;/script&gt;
        &amp;#x3C;script type="text/javascript" data-cookieconsent="ignore"&gt;
            window.kentico = window.kentico || {};
            window.kentico.builder = {};
            window.kentico.builder.useJQuery = true;
        &amp;#x3C;/script&gt;
        &amp;#x3C;script src="/Content/Bundles/Public/pageComponents.min.js" data-cookieconsent="ignore"&gt;&amp;#x3C;/script&gt;
        &amp;#x3C;script src="/_content/Kentico.Content.Web.Rcl/Content/Bundles/Public/systemFormComponents.min.js" data-cookieconsent="ignore"&gt;&amp;#x3C;/script&gt;
    }
&amp;#x3C;/body&gt;
&amp;#x3C;/html&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;After the modifications were made, all Kentico Forms were once again fully functional. However, the main disadvantage of this approach is that issues may arise when new hotfixes or major versions are released as the hard-coded script references will require checking.&lt;/p&gt;
&lt;p&gt;If anyone can suggest a better approach to integrating a cookie compliance solution or making modifications to the page builder script output, please leave a comment.&lt;/p&gt;
&lt;h2&gt;Useful Information&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.kentico.com/13/developing-websites/page-builder-development/creating-pages-with-editable-areas" target="_blank" rel="noopener noreferrer"&gt;Kentico 13 - Creating pages with editable areas&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.kentico.com/13/developing-websites/developing-xperience-applications-using-asp-net-core/bundling-static-assets-of-builder-components" target="_blank" rel="noopener noreferrer"&gt;Kentico 13 - Bundling static assets of builder components&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://support.cookiebot.com/hc/en-us/articles/360009063660-Disable-automatic-cookie-blocking-for-a-specific-script" target="_blank" rel="noopener noreferrer"&gt;Cookiebot - Disable automatic cookie blocking for a specific script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://support.cookiebot.com/hc/en-us/articles/4405978132242-Manual-cookie-blocking" target="_blank" rel="noopener noreferrer"&gt;Cookiebot - Manual cookie blocking&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content:encoded></item><item><title><![CDATA[Side Hustling With UserTesting.com]]></title><link>https://www.surinderbhomra.com/Blog/2024/08/24/Side-Hustling-With-UserTesting</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/08/24/Side-Hustling-With-UserTesting</guid><pubDate>Sat, 24 Aug 2024 10:00:00 GMT</pubDate><content:encoded>&lt;span class="text-sm"&gt;
    Banner Image by: &lt;a href="https://www.freepik.com/free-vector/people-with-magnet-attracting-money-flat-vector-illustration-tiny-men-woman-using-laptop-developing-strategy-increase-income-taking-fast-loan-business-investment-finance-concept_28480777.htm#fromView=search&amp;page=1&amp;position=38&amp;uuid=0548b78f-c0df-4fad-856b-4126db577c89"&gt;pch.vector on Freepik&lt;/a&gt;
&lt;/span&gt;
&lt;p&gt;I've been looking out for a side hustle to supplement my monthly stock and shares investment contribution - trying to make up for lost time in the years I did not invest. As it was my first foray into the world of side hustling, I wanted to ease myself into things. So it was important for it to be flexible enough to work around office/personal hours and not require too much time.&lt;/p&gt;
&lt;p&gt;During the COVID era, I kept note of some side hustles I was planning to try out but never got around to doing so. Forgetfulness also played a part, and I was only reminded when I came across one of my notes from July 2021 stored in Evernote.&lt;/p&gt;
&lt;p&gt;Now was as good a time as any to try out one of them: UserTesting.com.&lt;/p&gt;
&lt;h2&gt;What Is UserTesting?&lt;/h2&gt;
&lt;p&gt;UserTesting.com provides a platform for businesses to get feedback on their products and services. Anyone can apply to be a contributor and provide feedback consisting of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accessibility&lt;/li&gt;
&lt;li&gt;Usability&lt;/li&gt;
&lt;li&gt;Live conversations with businesses&lt;/li&gt;
&lt;li&gt;Pre-release platform feature review&lt;/li&gt;
&lt;li&gt;Competitor benchmarking tests&lt;/li&gt;
&lt;li&gt;A/B testing to compare different versions of a product or feature&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Before becoming an active contributor, UserTesting requires some basic information as part of the registration process and the completion of a practice test.&lt;/p&gt;
&lt;h2&gt;Acing The Practice Test&lt;/h2&gt;
&lt;p&gt;UserTesting will provide a test scenario to prove you're a legitimate person and can demonstrate good communication and analytical thinking. It sets the standard expected when carrying out real tests.&lt;/p&gt;
&lt;p&gt;The test itself is not complicated but you should be prepared to clearly think out loud so there is an understanding of your thought process as you're undertaking various tasks. It's always a good idea before performing a task to read the question out loud so your interpretation of what is being asked is clear. Most importantly, be honest in what you're reviewing.&lt;/p&gt;
&lt;p&gt;At the end of the test, provide a conclusion and thank them for their time in this opportunity.&lt;/p&gt;
&lt;p&gt;The fact that UserTesting.com forces users to take an assessment beforehand demonstrates the credibility of the service and sets the standard for the type of businesses they work with.&lt;/p&gt;
&lt;p&gt;UserTesting will respond to your practice test within 2-3 days, provide feedback and let you know if you will be accepted as a contributor.&lt;/p&gt;
&lt;h2&gt;What To Expect From The Real Test?&lt;/h2&gt;
&lt;p&gt;After completing the practice test, I didn't get real tests immediately. It took a good couple of weeks for them to start trickling in. Even then, I didn't qualify to take part in some tests as I didn't have experience in the required area of expertise.&lt;/p&gt;
&lt;p&gt;Tests are performed on Windows, Mac, Android or iOS devices. There might be a requirement to provide feedback using a specific device. Access to a microphone and sharing your screen is a strict prerequisite. Some do ask for a face recording as well, but I decided to refuse tests that requested this.&lt;/p&gt;
&lt;p&gt;Tests vary in length and payout:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Short tests - $4&lt;/li&gt;
&lt;li&gt;10-20 minute tests - $10&lt;/li&gt;
&lt;li&gt;30-minute test - $30&lt;/li&gt;
&lt;li&gt;60-minute test - $60&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The 60-minute tests are always a live conversation directly with the business and are scheduled in advance.&lt;/p&gt;
&lt;h2&gt;The Type of Tests I've Contributed To&lt;/h2&gt;
&lt;p&gt;I have been quite lucky with the tests offered to me, as they tend to relate to the tech industry. Providing feedback for businesses such as Microsoft, Salesforce, GitHub, GitLab and Amazon has been insightful.&lt;/p&gt;
&lt;p&gt;Other tests have revolved around the sectors of AI, website accessibility, pre-release platform updates and cloud hosting.&lt;/p&gt;
&lt;h2&gt;Payout&lt;/h2&gt;
&lt;p&gt;This is the part you have all been waiting for. How much money have I made since starting at the beginning of June?&lt;/p&gt;
&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/GIFs/JerryMaguireShowMeTheMoney.gif#center" alt="Jerry Maguire - Show Me The Money" width="600" /&gt;
&lt;p&gt;I completed twenty tests, consisting mostly of $10 tests, one $60 test and a handful of $4 tests, totalling $232. Each test is paid out within two weeks to your linked PayPal account. Not bad for an ad-hoc side hustle.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://ik.imagekit.io/surinderbhomra/Blog/Finance/UserTestingPayout-082024.png?tr=w-800#center" alt="UserTesting.com Payout - August 2024"&gt;&lt;/p&gt;
&lt;p&gt;Twenty tests over the span of three months is not a lot, and my contribution could have been higher. But when taking into consideration that this side hustle is only pursued outside of working hours and that some tests do not apply to my expertise, it's not so bad.&lt;/p&gt;
&lt;p&gt;The majority of tests offered will be worth $10. Some may question whether they're even worth doing, to which I say: Yes! A $10 test can take anywhere between 5-15 minutes to complete on average. Compared to the hourly UK National Minimum Wage of £11.44, it's not bad - $10 equates to around £7.60. Easy money!&lt;/p&gt;
&lt;p&gt;The more you contribute, the higher the chance of being offered more tests, provided your feedback rating is good. There are some damn interesting ones as well.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Don't apply to UserTesting with the expectation of mass riches as you will be sorely disappointed. Think of it as petty cash to count towards a little "fun money".&lt;/p&gt;
&lt;p&gt;Apart from the monetisation aspect of using UserTesting, I feel I am getting an early insight into where certain industry sectors are going, including my own, which is almost as valuable as the payout itself.&lt;/p&gt;
&lt;p&gt;There will be some days or even weeks when there will be no applicable tests. Just stick with it as all it takes is a handful of 30 or 60-minute tests (which can be hard to come by) to get a nice chunk of change for the month.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Addressing The Lack of Kentico Content]]></title><link>https://www.surinderbhomra.com/Blog/2024/08/03/Addressing-The-Lack-of-Kentico-Content</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/08/03/Addressing-The-Lack-of-Kentico-Content</guid><pubDate>Sat, 03 Aug 2024 19:30:00 GMT</pubDate><content:encoded>&lt;p&gt;I spoke to one of my developer friends a while back and as conversations go with someone tech-minded, it's a mixture of talking about code, frameworks, and platforms entwined with the more life-centric catch-up.&lt;/p&gt;
&lt;p&gt;Both having been in the tech industry for over 15 years, we discussed the "old ways" and what we did back then that we don't do now, which led to Kentico - a platform that we used to talk about all the time, where we'd try and push the boundaries to create awesome websites in the hopes of winning the coveted site of the month or year award. It occurred to us that it's not something we talk much about anymore. Almost as if overnight it vanished from our consciousness.&lt;/p&gt;
&lt;p&gt;Looking through the archive of postings, it's evident I haven't published anything Kentico-related in a long time, with my most recent being in September 2020. Despite the lack of Kentico content on my site, it remains a key player in the list of CMS platforms that I work with. The only difference is that the share of Kentico projects is smaller when compared to the pre-2020 era.&lt;/p&gt;
&lt;p&gt;In this post, I discuss my thoughts as to the reason behind my lack of Kentico-related output.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This post consists of my viewpoints alone.&lt;/p&gt;
&lt;h2&gt;Licensing Cause and Effect&lt;/h2&gt;
&lt;p&gt;A contributing factor was the substantial shift in their licensing model sometime in 2020. Moving to an annual subscription at an increased cost and ditching the base license created somewhat of a barrier to entry for small to mid-sized clients who just needed a reliable CMS platform with customisability. Someone like myself, who could provide Kentico solutions in a freelance capacity, was instantly priced out.&lt;/p&gt;
&lt;p&gt;I understand why Kentico needed to reassess its price structure. They offer one of the best .NET CMSs and to stay at the top, an increase in revenue is required to drive the business forward. In all honesty, I believe we had a good run on the old licensing model for over ten years, and it was only a matter of time until a pricing review was required.&lt;/p&gt;
&lt;p&gt;It's just a hard sell to pitch a CMS with a £10,000 price tag before any development has even started.&lt;/p&gt;
&lt;p&gt;In light of this, it's only natural to look for alternatives that align with your own business strategy and development needs. The time originally spent developing Kentico has now been reallocated to alternative CMS platforms.&lt;/p&gt;
&lt;h2&gt;A Stable Well-Rounded Platform&lt;/h2&gt;
&lt;p&gt;Kentico is a mature product with many out-of-the-box capabilities (that get better with every release), which indirectly contributed to my lack of blogging on the subject. I usually only blog about a platform when I find useful workarounds or discover an issue that I was able to resolve.&lt;/p&gt;
&lt;p&gt;This is truly a compliment and testament to Kentico's build quality. There is no need to write about something that is already well-documented and written by active users of the community.&lt;/p&gt;
&lt;h2&gt;Reassessing The Kentico Offering&lt;/h2&gt;
&lt;p&gt;Kentico is still offered whenever possible. Both clients and developers alike have confidence in the platform. Clients enjoy the interface and security. Developers appreciate the customisability, clear architecture, quick hot fixing, and consistency between editions.&lt;/p&gt;
&lt;p&gt;The only question we now have to ask ourselves is whether Kentico is the right platform for the client's requirements. Prior to the change in licensing, you would be scoffed at for asking such a question. Kentico would be the front-runner before considering anything else.&lt;/p&gt;
&lt;p&gt;Nowadays, Kentico would only be put forward to a client if they had large-scale requirements where cheaper CMS offerings fall short, allowing the licensing costs to be justified.&lt;/p&gt;
&lt;p&gt;I was recently involved in an e-commerce project that ticked all the boxes in line with the client's priorities, making it an ideal use case for a Kentico build:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enterprise-level security&lt;/li&gt;
&lt;li&gt;Industry-standard compliance&lt;/li&gt;
&lt;li&gt;All-in-one solution consisting of content management, e-commerce, and marketing automation&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Ability to handle large sets of data&lt;/li&gt;
&lt;li&gt;Advanced customisability&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In my view, if a client is not too concerned about the above, then alternatives will be used and additional development will be carried out to fill in any gaps.&lt;/p&gt;
&lt;h2&gt;The Alternatives&lt;/h2&gt;
&lt;p&gt;The CMS sphere is rife with offerings and we are spoilt for choice. I have whittled these down to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Umbraco&lt;/li&gt;
&lt;li&gt;Kentico&lt;/li&gt;
&lt;li&gt;Prismic&lt;/li&gt;
&lt;li&gt;Dato&lt;/li&gt;
&lt;li&gt;HubSpot&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In my view, this variety of CMSs covers all price points, technologies and levels of customisability.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;I would always jump at the chance to develop in Kentico as I know a large complex website can be built with almost infinite customisation. But we can't help but notice there is a lot of competition out there, each provider offering a range of features across different architectures and price ranges.&lt;/p&gt;
&lt;p&gt;Based on my own experience, the demand for fully featured CMS platforms with a large hosting footprint is declining with the advent of more API-driven (also known as headless) content delivery that works alongside other microservices.&lt;/p&gt;
&lt;p&gt;Investing in the Kentico eco-system (including its headless variant, Kontent) is always worth considering. It may just not be something I will be writing about consistently here as it requires a more corporate-level type of clientele.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[CSS Zen Garden - The Road To Enlightenment]]></title><link>https://www.surinderbhomra.com/Blog/2024/06/14/CSS-Zen-Garden-The-Road-To-Enlightenment</link><guid isPermaLink="false">https://www.surinderbhomra.com/Blog/2024/06/14/CSS-Zen-Garden-The-Road-To-Enlightenment</guid><pubDate>Fri, 14 Jun 2024 22:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Who remembers &lt;a href="https://csszengarden.com/" target="_blank" rel="noopener noreferrer"&gt;CSS Zen Garden&lt;/a&gt;? I do. As if it was yesterday... I remember first laying eyes on a simplistic but visually stunning webpage demonstrating what all websites in the future could look like.&lt;/p&gt;
&lt;p&gt;CSS Zen Garden broke the norm of the websites we were used to seeing at the time - crowded blocky tabular-based layouts that lacked personality. It was a revelation and a turning point in web design standards!&lt;/p&gt;
&lt;p&gt;As described within the content of every CSS Zen design, its ethos was clear:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Road To Enlightenment&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Littering a dark and dreary road lay the past relics of browser-specific tags, incompatible DOMs, broken CSS support, and abandoned browsers.&lt;/p&gt;
&lt;p&gt;We must clear the mind of the past. Web enlightenment has been achieved thanks to the tireless efforts of folk like the W3C, WaSP, and the major browser creators.&lt;/p&gt;
&lt;p&gt;The CSS Zen Garden invites you to relax and meditate on the important lessons of the masters. Begin to see with clarity. Learn to use the time-honored techniques in new and invigorating fashion. Become one with the web.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;CSS Zen Garden paved &lt;em&gt;The Road To Enlightenment&lt;/em&gt; for me in an entirely different manner - deciding my career.&lt;/p&gt;
&lt;p&gt;Having completed my degree in Information Systems at university in 2006, I was at a crossroads as to which IT field I should specialise in. University seems to prepare you for anything apart from how to apply yourself when you exit the final doors of education into the real world.&lt;/p&gt;
&lt;p&gt;CSS Zen Garden changed the trajectory of my career. Originally, I considered entering the field of Consulting before changing my mindset and garnering interest as a Web Developer instead. This has had a lasting effect. Even after 18 years, I am still involved in Web Development, though I tend to focus more on backend functionality rather than look and feel.&lt;/p&gt;
&lt;p&gt;The CSS Zen Garden community spawned a variety of other designs from talented Web Developers with a design flair. But the design that started it all, &lt;a href="https://csszengarden.com/001/" target="_blank" rel="noopener noreferrer"&gt;001&lt;/a&gt;, will always hold a special place in my heart. The unforgettable Japanese elements - the Itsukushima Shrine, water lilies, and a blossom tree.&lt;/p&gt;
&lt;p&gt;All the designs have stood the test of time and even through the age of modern web browsers and high-resolution screens, they still present a timeless look that fills me with nostalgia.&lt;/p&gt;</content:encoded></item></channel></rss>