<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ben stopford</title>
	<atom:link href="http://www.benstopford.com/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.benstopford.com</link>
	<description>Gently Flexing the Grid</description>
	<lastBuildDate>
	Wed, 24 Sep 2025 18:16:28 +0000	</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.1.19</generator>
	<item>
		<title>Technical Writing and its &#8216;Hierarchy of Needs&#8217;</title>
		<link>http://www.benstopford.com/2022/02/02/technical-writing-and-its-hierarchy-of-needs/</link>
				<pubDate>Wed, 02 Feb 2022 08:52:13 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5327</guid>
				<description><![CDATA[<p>Technical writing is hard to do well and it’s also a bit different from other types of writing. While good technical writing has no strict definition, I do think there is a kind of ‘hierarchy of needs’ that defines it. I&#8217;m not sure this is complete or perfect but I find categorizing to be useful. [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2022/02/02/technical-writing-and-its-hierarchy-of-needs/">Technical Writing and its &#8216;Hierarchy of Needs&#8217;</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[
<p>Technical writing is hard to do well and it’s also a bit different from other types of writing. While good technical writing has no strict definition, I do think there is a kind of ‘hierarchy of needs’ that defines it. I&#8217;m not sure this is complete or perfect but I find categorizing to be useful. </p>



<p><strong>L1 &#8211;  Writing Clearly</strong></p>



<p>The author writes in a way that accurately represents the information they want to convey. Sentences have a clear purpose. The structure of the text flows from point to point. </p>



<p><strong>L2 &#8211; Explaining Well (Logos in rhetorical theory) </strong></p>



<p>The author breaks their argument down into logical blocks that build on one another to make complex ideas easier to understand. When done well, this almost always involves (a) short, inline examples to ground abstract ideas and (b) a concise and logical flow through the argument which does not repeat itself (other than for rhetorical effect) or flip-flop between points.</p>



<p><strong>L3 &#8211; Style</strong></p>



<p>The author uses different turns of phrase, switches in person, different grammatical structures, humor, etc. to make their writing more interesting to read. Good style keeps the reader engaged. You know it when you see it, as the ideas flow more easily into your mind. Really good style even evokes an emotion of its own. By contrast, an author can write clearly and explain well, but in a way that feels monotonous or even boring. </p>



<p><strong>L4 &#8211; Evoking Emotion (Pathos in rhetorical theory)</strong> </p>



<p>I think this is the most advanced and also the most powerful level, particularly where it inspires the reader to take action based on your words through an emotional argument. To take an example, Martin Kleppmann’s &#8216;Turning the Database Inside Out&#8217; inspired a whole generation of software engineers to rethink how they build systems. Tim or Kris&#8217; humor works in a different but equally effective way. Other appeals include establishing a connection with the reader, grounding in a subculture that the author and reader belong to, establishing credibility (ethos), highlighting what they are missing out on (FOMO), and influencing through a knowing and opinionated command of the content. There are many more.</p>



<p>The use of pathos (sadly) doesn’t always imply logos; logical fallacies appear even in technical writing. Writing is so much more powerful if both are used together. </p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2022/02/02/technical-writing-and-its-hierarchy-of-needs/">Technical Writing and its &#8216;Hierarchy of Needs&#8217;</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Designing Event Driven Systems &#8211; Summary of Arguments</title>
		<link>http://www.benstopford.com/2018/10/04/designing-event-driven-systems-summary-arguments/</link>
				<pubDate>Thu, 04 Oct 2018 13:09:17 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5065</guid>
				<description><![CDATA[<p>This post provides a terse summary of the high-level arguments addressed in my book. Why Change is Needed Technology has changed: Partitioned/Replayable logs provide previously unattainable levels of throughput (up to Terabit/s), storage (up to PB) and high availability. Stateful Stream Processors include a rich suite of utilities for handling Streams, Tables, Joins, Buffering of late events (important [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/10/04/designing-event-driven-systems-summary-arguments/">Designing Event Driven Systems &#8211; Summary of Arguments</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>This post provides a terse summary of the high-level arguments addressed in <a href="https://www.confluent.io/designing-event-driven-systems">my book</a>.</p>
<h3><span style="font-weight: 400;">Why Change is Needed</span></h3>
<p><i><span style="font-weight: 400;">Technology has changed:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Partitioned/Replayable logs provide previously unattainable levels of throughput (up to Terabit/s), storage (up to PB) and high availability.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Stateful Stream Processors include a rich suite of utilities for handling Streams, Tables, Joins, Buffering of late events (important in asynchronous communication) and state management. These tools interface directly with business logic. Transactions tie streams and state together efficiently.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Kafka Streams and KSQL are DSLs which can be run as standalone clusters, or embedded into applications and services directly. The latter approach makes streaming an API, interfacing inbound and outbound streams directly into your code. </span></li>
</ul>
<p><i><span style="font-weight: 400;">Businesses need asynchronicity:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Businesses are a collection of people, teams and departments performing a wide range of functions, backed by technology. Teams need to work asynchronously with respect to one another to be efficient. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Many business processes are inherently asynchronous, for example shipping a parcel from a warehouse to a user’s door. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">A business may start as a website, where the front end makes synchronous calls to backend services, but as it grows, the web of synchronous calls tightly couples services together at runtime. Event-based methods reverse this, decoupling systems in time and allowing them to evolve independently of one another. </span></li>
</ul>
<p><i><span style="font-weight: 400;">A message broker has notable benefits:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">It flips control of routing, so a sender does not know who receives a message, and there may be many different receivers (pub/sub). This makes the system pluggable, as the producer is decoupled from the potentially many consumers. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Load and scalability become a concern of the broker, not the source system.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">There is no requirement for backpressure. The receiver defines their own flow control.</span></li>
</ul>
<p><i><span style="font-weight: 400;">Systems still require Request-Response:</span></i></p>
<ul>
<li>Whilst many systems are built entirely event-driven, request-response protocols remain the best choice for many use cases. The rule of thumb is: use request-response for intra-system communication, particularly queries or lookups (customers, shopping carts, DNS); use events for state changes and inter-system communication (changes to business facts that are needed beyond the scope of the originating system).</li>
</ul>
<p><i><span style="font-weight: 400;">Data-on-the-outside is different:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">In service-based ecosystems the data that services share is very different to the data they keep inside their service boundary. Outside data is harder to change, but it has more value in a holistic sense.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">The events services share form a journal, or ‘Shared Narrative’, describing exactly how your business evolved over time. </span></li>
</ul>
<p><i><span style="font-weight: 400;">Databases aren’t well shared: </span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Databases have rich interfaces that couple them tightly with the programs that use them. This makes them useful tools for data manipulation and storage, but poor tools for data integration. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Shared databases form a bottleneck (performance, operability, storage etc.). </span></li>
</ul>
<p><i><span style="font-weight: 400;">Data Services are still “databases”: </span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">A database wrapped in a service interface still suffers from many of the issues seen with shared databases (The Integration Database Antipattern). Either it provides all the functionality you need (becoming a homegrown database) or it provides a mechanism for extracting that data and moving it (becoming a homegrown replayable log). </span></li>
</ul>
<p><i><span style="font-weight: 400;">Data movement is inevitable as ecosystems grow:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">The core datasets of any large business end up being distributed to the majority of applications.  </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Messaging moves data from a tightly coupled place (the originating service) to a loosely coupled place (the service that is using the data). Because this gives teams more freedom (operationally, data enrichment, processing), it tends to be where they eventually end up.</span></li>
</ul>
<h3><span style="font-weight: 400;">Why Event Streaming</span></h3>
<p><i><span style="font-weight: 400;">Events should be 1st Class Entities: </span></i></p>
<ul>
<li style="font-weight: 400;"><i><span style="font-weight: 400;">Events are two things: (a) a notification and (b) a state transfer. </span></i><span style="font-weight: 400;">The former leads to stateless architectures, the latter to stateful architectures. Both are useful. </span></li>
<li style="font-weight: 400;"><i><span style="font-weight: 400;">Events become a Shared Narrative describing the evolution of the business over time: </span></i><span style="font-weight: 400;">When used with a replayable log, service interactions create a journal that describes everything a business does, one event at a time. This journal is useful for audit, replay (event sourcing) and debugging inter-service issues. </span></li>
<li style="font-weight: 400;"><i><span style="font-weight: 400;">Event-Driven Architectures move data to wherever it is needed: </span></i><span style="font-weight: 400;">Traditional services are about isolating functionality that can be called upon and reused. Event-Driven architectures are about moving data to code, be it a different process, geography, disconnected device etc. Companies need both. The larger and more complex a system gets, the more it needs to replicate state. </span></li>
</ul>
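<p>A minimal sketch of the notification/state-transfer distinction above (plain Python; the class and field names are illustrative, not from the book):</p>

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical event shapes illustrating the two roles an event can play.

@dataclass
class OrderCreatedNotification:
    """(a) Notification: carries only a reference. A stateless consumer
    must call back to the owning service for the details."""
    order_id: str

@dataclass
class OrderCreated:
    """(b) State transfer: carries the full fact. A stateful consumer
    can build its own view of the data without calling back."""
    order_id: str
    customer_id: str
    amount: float

def apply_event(view: Dict[str, float], event: OrderCreated) -> Dict[str, float]:
    """A stateful consumer materialises a local view directly from events."""
    view[event.order_id] = event.amount
    return view

view: Dict[str, float] = {}
apply_event(view, OrderCreated("o1", "c9", 42.0))
```

Both shapes are useful: the notification keeps consumers thin, while the state-transfer form is what lets views be rebuilt by replay.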
<p><i><span style="font-weight: 400;">Messaging is the most decoupled form of communication:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Coupling relates to a combination of (a) data, (b) function and (c) operability.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Businesses have core datasets: these provide a base level of unavoidable coupling.  </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Messaging moves this data from a highly coupled source to a loosely coupled destination which gives destination services control.</span></li>
</ul>
<p><i><span style="font-weight: 400;">A Replayable Log turns ‘Ephemeral Messaging’ into ‘Messaging that Remembers’:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Replayable logs can hold large, “Canonical” datasets where anyone can access them.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">You don’t ‘query’ a log in the traditional sense. You extract the data and create a view, in a cache or database of your own, or you process it in flight. The replayable log provides a central reference. This pattern gives each service the “slack” they need to iterate and change, as well as fitting the ‘derived view’ to the problem they need to solve. </span></li>
</ul>
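<p>The &#8216;extract the data and create a view&#8217; idea can be sketched in a few lines (the log contents and key scheme here are illustrative):</p>

```python
# A minimal sketch of a 'derived view' over a replayable log.
# The log is just an ordered list of (key, value) events.

log = [
    ("customer:1", {"name": "Ann"}),
    ("customer:2", {"name": "Bo"}),
    ("customer:1", {"name": "Anne"}),  # a later event supersedes the earlier one
]

def derive_view(log):
    """Replay the log from the start to build a key -> latest-value view.
    Each service runs this independently, shaping the view to its own needs."""
    view = {}
    for key, value in log:
        view[key] = value              # last-writer-wins per key
    return view

view = derive_view(log)
```

Because the log is retained, the view can be thrown away and re-derived at any time, which is the "slack" referred to above.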
<p><i><span style="font-weight: 400;">Replayable Logs work better at keeping datasets in sync across a company:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Data that is copied around a company can be hard to keep in sync. The different copies have a tendency to slowly diverge over time. Use of messaging in industry has highlighted this. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">If messaging ‘remembers’, it’s easier to stay in sync. The back-catalogue of data, the source of truth, is readily available.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Streaming encourages derived views to be frequently re-derived. This keeps them close to the data in the log. </span></li>
</ul>
<p><i><span style="font-weight: 400;">Replayable logs lead to Polyglot Views:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">There is no one-size-fits-all in data technology. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Logs let you have many different data technologies, or data representations, sourced from the same place.</span></li>
</ul>
<p><em>In Event-Driven Systems the Data Layer isn&#8217;t static:</em></p>
<ul>
<li>In traditional applications the data layer is a database that is queried. In event-driven systems the data layer is a stream processor that prepares and coalesces data into a single event stream for ingest by a service or function.</li>
<li>KSQL can be used as a data preparation layer that sits apart from the business functionality. KStreams can be used to embed the same functionality into a service.</li>
<li>The streaming approach removes shared state (for example a database shared by different processes) allowing systems to scale without contention.</li>
</ul>
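<p>As a rough sketch of this kind of data-preparation layer (plain Python standing in for KSQL or Kafka Streams; the names and fields are illustrative):</p>

```python
# Coalesce an order stream with a customer table into a single enriched
# event stream, ready for ingest by a downstream service or function.

orders    = [{"order_id": "o1", "customer_id": "c1", "amount": 10.0}]
customers = {"c1": {"customer_id": "c1", "name": "Ann"}}  # table-like lookup

def enriched_order_stream(orders, customers):
    """The 'data layer' as a stream processor: join each order event
    with the customer table and emit one combined event."""
    for order in orders:
        customer = customers.get(order["customer_id"], {})
        yield {**order, "customer_name": customer.get("name")}

events = list(enriched_order_stream(orders, customers))
```

The consuming service then sees a single prepared stream rather than querying a shared database, which is what removes the contention mentioned above.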
<p><i><span style="font-weight: 400;">The ‘Database Inside Out’ analogy is useful when applied at cross-team or company scales:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">A streaming system can be thought of as a database turned inside out: a commit log and a set of materialized views, caches and indexes created in different datastores or in the streaming system itself. This leads to two benefits. </span>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Data locality is used to increase performance: data is streamed to where it is needed, in a different application, a different geography, a different platform, etc. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Data locality is used to increase autonomy: Each view can be controlled independently of the central log. </span></li>
</ul>
</li>
<li style="font-weight: 400;"><span style="font-weight: 400;">At company scales this pattern works well because it carefully balances the need to centralize data (to keep it accurate), with the need to decentralise data access (to keep the organisation moving). </span></li>
</ul>
<p><i><span style="font-weight: 400;">Streaming is a State of Mind: </span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Databases, request-response protocols and imperative programming lead us to think in blocking calls and command-and-control structures. Thinking of a business solely in this way is flawed. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">The streaming mindset starts by asking “what happens in the real world?” and “how does the real world evolve in time?” The business process is then modelled as a set of continuously computing functions driven by these real-world events. </span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Request-response is about displaying information to users. Batch processing is about offline reporting. Streaming is about everything that happens in between. </span></li>
</ul>
<p><i><span style="font-weight: 400;">The Streaming Way:</span></i></p>
<ul>
<li style="font-weight: 400;"><span style="font-weight: 400;">Broadcast events</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Cache shared datasets in the log and make them discoverable.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Let users manipulate event streams directly (e.g., with a streaming engine like KSQL)</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Drive simple microservices or FaaS, or create use-case-specific views in a database of your choice</span></li>
</ul>
<p><span style="font-weight: 400;">The various points above lead to a set of broader principles that summarise the properties we expect in this type of system:</span></p>
<h3><span style="font-weight: 400;">The WIRED Principles</span></h3>
<p><b>Windowed</b><span style="font-weight: 400;">:</span> <span style="font-weight: 400;">Reason accurately about an asynchronous world.</span></p>
<p><b>Immutable</b><span style="font-weight: 400;">: </span><span style="font-weight: 400;">Build on a replayable narrative of events.</span></p>
<p><b>Reactive</b><span style="font-weight: 400;">:</span> <span style="font-weight: 400;">Be asynchronous, elastic &amp; responsive.</span></p>
<p><b>Evolutionary</b><span style="font-weight: 400;">: </span><span style="font-weight: 400;">Decouple. Be pluggable. Use canonical event streams.</span></p>
<p><b>Data-Enabled</b><span style="font-weight: 400;">: </span><span style="font-weight: 400;">Move data to services and keep it in sync.</span></p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/10/04/designing-event-driven-systems-summary-arguments/">Designing Event Driven Systems &#8211; Summary of Arguments</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>REST Request-Response Gateway</title>
		<link>http://www.benstopford.com/2018/06/07/rest-request-response-gateway/</link>
				<comments>http://www.benstopford.com/2018/06/07/rest-request-response-gateway/#comments</comments>
				<pubDate>Thu, 07 Jun 2018 10:43:58 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Kafka/Confluent]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5050</guid>
				<description><![CDATA[<p>This post outlines how you might create a Request-Response Gateway in Kafka using the good old correlation ID trick and a shared response topic. It&#8217;s just a sketch. I haven&#8217;t tried it out. A REST Gateway provides an efficient Request-Response bridge to Kafka. This is in some ways a logical extension of the REST Proxy, wrapping [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/06/07/rest-request-response-gateway/">REST Request-Response Gateway</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>This post outlines how you might create a Request-Response Gateway in Kafka using the good old correlation ID trick and a shared response topic. It&#8217;s just a sketch. I haven&#8217;t tried it out.</p>
<p>A REST Gateway provides an efficient Request-Response bridge to Kafka. This is in some ways a logical extension of the REST Proxy, wrapping the concepts of both a request and a response.</p>
<p><em>What problem does it solve?</em></p>
<ul>
<li>Allows you to contact a service, and get a response back, for example:
<ul>
<li>to display the contents of the user&#8217;s shopping basket</li>
<li>to validate and create a new order.</li>
</ul>
</li>
<li>Access many different services, with their implementation abstracted behind a topic name.</li>
<li>Simple Restful interface removes the need for asynchronous programming front-side of the gateway.</li>
</ul>
<p>So you may wonder: <strong><em>Why not simply expose a REST interface on a Service directly?</em></strong> The gateway lets you access many different services, and the topic abstraction provides a level of indirection in much the same way that service discovery does in a traditional request-response architecture. So backend services can be scaled out, instances taken down for maintenance, etc., all behind the topic abstraction. In addition, the Gateway can provide observability metrics, etc., in much the same way as a service mesh does.</p>
<p>You may also wonder: <strong><em>Do I really want to do request-response in Kafka?</em></strong> For commands, which are typically business events that have a return value, there is a good argument for doing this in Kafka. The command is a business event and is typically something you want a record of. For queries it is different: there is little benefit to using a broker, as there is no need for broadcast and no need for retention, so this offers little value over a point-to-point interface like an HTTP request. So in the latter case we wouldn&#8217;t recommend this approach over, say, HTTP, but it is still useful for advocates who want a single transport and value that over the redundancy of using a broker for request-response (and yes, these people exist).</p>
<p>This pattern can also be extended to be a <strong><a class="external-link" href="https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar" rel="nofollow">sidecar</a></strong> rather than a gateway (although the number of response topics could potentially become an issue in an architecture with many sidecars).</p>
<p>&nbsp;</p>
<p><img class="aligncenter size-large wp-image-5074" src="http://www.benstopford.com/wp-content/uploads/2018/06/Screen-Shot-2018-10-26-at-01.28.00-1024x581.png" alt="" width="1024" height="581" srcset="http://www.benstopford.com/wp-content/uploads/2018/06/Screen-Shot-2018-10-26-at-01.28.00-1024x581.png 1024w, http://www.benstopford.com/wp-content/uploads/2018/06/Screen-Shot-2018-10-26-at-01.28.00-300x170.png 300w, http://www.benstopford.com/wp-content/uploads/2018/06/Screen-Shot-2018-10-26-at-01.28.00-768x436.png 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></p>
<p><strong>Implementation </strong></p>
<p>Above, we have a gateway running three instances and three services: Orders, Customer and Basket. Each service has a dedicated request topic that maps to that entity. There is a single response topic dedicated to the Gateway.</p>
<p>The gateway is configured to support different services, each taking one request topic and one response topic.</p>
<p>Imagine we POST an Order and expect confirmation back from the Orders service that it was saved. This works as follows:</p>
<ul>
<li>The HTTP request arrives at one node in the Gateway. It is assigned a correlation ID.</li>
<li>The correlation ID is derived so that it hashes to a partition of the response topic owned by this gateway node (we need this to route the response back to the correct instance). Alternatively, a random correlation ID could be assigned and the request forwarded to the gateway node that owns the corresponding partition of the response topic.</li>
<li>The request is tagged with a unique correlation ID and the name of the gateway response topic (each gateway has a dedicated response topic) then forwarded to the Orders Topic. The HTTP request is then parked in the webserver.</li>
<li>The Orders Service processes the request and replies on the supplied response topic (i.e. the response topic of the REST Gateway), including the correlation ID as the key of the response message. When the REST Gateway receives the response, it extracts the correlation ID key and uses it to unblock the outstanding request so it responds to the user HTTP request.</li>
</ul>
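<p>The correlation-ID routing above can be sketched as follows (partition counts, names and the hash are illustrative; a real gateway would use the Kafka client&#8217;s partitioner and an asynchronous webserver, and this post is itself only a sketch):</p>

```python
import uuid
import zlib

RESPONSE_PARTITIONS = 12  # partitions on this gateway's response topic

def partition_for(correlation_id: str) -> int:
    """Deterministic mapping from correlation ID to a response-topic
    partition (crc32 stands in for Kafka's key partitioner)."""
    return zlib.crc32(correlation_id.encode()) % RESPONSE_PARTITIONS

def new_correlation_id(owned_partitions: set) -> str:
    """Generate IDs until one hashes to a partition this gateway node
    owns, so the response routes back to the same instance."""
    while True:
        cid = str(uuid.uuid4())
        if partition_for(cid) in owned_partitions:
            return cid

# Parked HTTP requests, keyed by correlation ID.
pending = {}

def on_request(owned_partitions: set, request) -> str:
    cid = new_correlation_id(owned_partitions)
    pending[cid] = request  # park the HTTP request in the webserver
    # ...forward (cid, response-topic name, request) to the Orders topic...
    return cid

def on_response(cid: str, payload):
    """Called when a message arrives on an owned response partition:
    pop the parked request and complete it with the payload."""
    request = pending.pop(cid)
    return request, payload
```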
<p>Exactly the same process can be used for GET requests, although providing streaming GETs will require some form of batch markers or similar, which would be awkward for services to implement, probably necessitating a client-side API.</p>
<p>If partitions move whilst requests are outstanding, those requests will time out. We could work around this, but it is likely acceptable for an initial version.</p>
<p>This is very similar to the way the <a class="external-link" href="https://github.com/confluentinc/kafka-streams-examples/blob/4.0.0-post/src/main/java/io/confluent/examples/streams/microservices/OrdersService.java" rel="nofollow">OrdersService works in the Microservice Examples</a>.</p>
<p><strong>Event-Driven Variant</strong></p>
<p>When using an event-driven architecture via <a class="external-link" href="https://www.confluent.io/blog/build-services-backbone-events/" rel="nofollow">event collaboration</a>, responses aren&#8217;t based on a correlation ID; they are based on the event state. For example, we might submit orders, then respond once they are in a state of VALIDATED. The most common way to implement this is with CQRS.</p>
<p><strong>Websocket Variant</strong></p>
<p>Some users might prefer a websocket so that the response can trigger action rather than polling the gateway. Implementing a websocket interface is slightly more complex as you can&#8217;t use the queryable state API to redirect requests in the same way that you can with REST. There needs to be some table that maps (RequestId-&gt;Websocket(Client-Server)) which is used to &#8216;discover&#8217; which node in the gateway has the websocket connection for some particular response.</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/06/07/rest-request-response-gateway/">REST Request-Response Gateway</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
							<wfw:commentRss>http://www.benstopford.com/2018/06/07/rest-request-response-gateway/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
							</item>
		<item>
		<title>Slides from Craft Meetup</title>
		<link>http://www.benstopford.com/2018/05/09/slides-craft-meetup/</link>
				<pubDate>Wed, 09 May 2018 17:47:11 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5047</guid>
				<description><![CDATA[<p>The slides for the Craft Meetup can be found here.</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/05/09/slides-craft-meetup/">Slides from Craft Meetup</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>The slides for the Craft Meetup can be found <a href="http://benstopford.com/uploads/CraftMeetup.pdf">here</a>.</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/05/09/slides-craft-meetup/">Slides from Craft Meetup</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Book: Designing Event Driven Systems</title>
		<link>http://www.benstopford.com/2018/04/27/book-designing-event-driven-systems/</link>
				<comments>http://www.benstopford.com/2018/04/27/book-designing-event-driven-systems/#comments</comments>
				<pubDate>Fri, 27 Apr 2018 12:13:14 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Top4]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5042</guid>
				<description><![CDATA[<p>I wrote a book: Designing Event Driven Systems PDF EPUB MOBI (Kindle)</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/04/27/book-designing-event-driven-systems/">Book: Designing Event Driven Systems</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>I wrote a book: Designing Event Driven Systems</p>
<p><a href="https://www.confluent.io/designing-event-driven-systems">PDF</a></p>
<p><a href="http://benstopford.com/uploads/deds.epub">EPUB</a></p>
<p><a href="http://benstopford.com/uploads/deds.mobi">MOBI</a> (Kindle)</p>
<p><a href="https://www.confluent.io/designing-event-driven-systems"><img class="alignnone" src="https://www.confluent.io/wp-content/uploads/Designing-Event-Driven-Systems_Confluent_FINALCOVER-1.jpg" alt="" width="208" height="311" /></a></p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/04/27/book-designing-event-driven-systems/">Book: Designing Event Driven Systems</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
							<wfw:commentRss>http://www.benstopford.com/2018/04/27/book-designing-event-driven-systems/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
							</item>
		<item>
		<title>Building Event Driven Services with Kafka Streams (Kafka Summit Edition)</title>
		<link>http://www.benstopford.com/2018/04/23/5037/</link>
				<pubDate>Mon, 23 Apr 2018 15:32:02 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5037</guid>
				<description><![CDATA[<p>The Kafka Summit version of this talk is more practical and includes code examples which walk through how to build a streaming application with Kafka Streams. Building Event Driven Services with Kafka Streams, from Ben Stopford</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/04/23/5037/">Building Event Driven Services with Kafka Streams (Kafka Summit Edition)</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>The Kafka Summit version of this talk is more practical and includes code examples which walk through how to build a streaming application with Kafka Streams.</p>
<p><iframe src="//www.slideshare.net/slideshow/embed_code/key/v7WGyDg6WhxDt0" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> </p>
<div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/benstopford/building-event-driven-services-with-kafka-streams" title="Building Event Driven Services with Kafka Streams" target="_blank" rel="noopener noreferrer">Building Event Driven Services with Kafka Streams</a> </strong> from <strong><a href="https://www.slideshare.net/benstopford" target="_blank" rel="noopener noreferrer">Ben Stopford</a></strong> </div>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/04/23/5037/">Building Event Driven Services with Kafka Streams (Kafka Summit Edition)</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Slides for NDC &#8211; The Data Dichotomy</title>
		<link>http://www.benstopford.com/2018/01/19/slides-fo-ndc-data-dichotomy/</link>
				<pubDate>Fri, 19 Jan 2018 09:38:06 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=5031</guid>
				<description><![CDATA[<p>NDC London 2017 &#8211; The Data Dichotomy- Rethinking Data and Services with Streams from Ben Stopford When building service-based systems, we don’t generally think too much about data. If we need data from another service, we ask for it. This pattern works well for whole swathes of use cases, particularly ones where datasets are small [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/01/19/slides-fo-ndc-data-dichotomy/">Slides for NDC &#8211; The Data Dichotomy</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><iframe src="https://www.slideshare.net/slideshow/embed_code/key/2UIHnMKv2JuZnH" width="427" height="356" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> </p>
<div style="margin-bottom:5px"> <strong> <a href="https://www.slideshare.net/benstopford/ndc-london-2017-the-data-dichotomy-rethinking-data-and-services-with-streams" title="NDC London 2017 - The Data Dichotomy- Rethinking Data and Services with Streams" target="_blank">NDC London 2017 &#8211; The Data Dichotomy- Rethinking Data and Services with Streams</a> </strong> from <strong><a href="https://www.slideshare.net/benstopford" target="_blank">Ben Stopford</a></strong> </div>
<section class="preamble"><p>When building service-based systems, we don’t generally think too much about data. If we need data from another service, we ask for it. This pattern works well for whole swathes of use cases, particularly ones where datasets are small and requirements are simple. But real business services have to join and operate on datasets from many different sources. This can be slow and cumbersome in practice.</p>
</section>
<section class="body video-container"><p>These problems stem from an underlying dichotomy. Data systems are built to make data as accessible as possible—a mindset that focuses on getting the job done. Services, instead, focus on encapsulation—a mindset that allows independence and autonomy as we evolve and grow. But these two forces inevitably compete in most serious service-based architectures.</p>
<p>Ben Stopford explains why understanding and accepting this dichotomy is an important part of designing service-based systems at any significant scale. Ben looks at how companies make use of a shared, immutable sequence of records to balance data that sits inside their services with data that is shared, an approach that allows the likes of Uber, Netflix, and LinkedIn to scale to millions of events per second.</p>
<p>Ben concludes by examining the potential of stream processors as a mechanism for joining significant, event-driven datasets across a whole host of services and explains why stream processing provides much of the benefits of data warehousing but without the same degree of centralization.</p>
</section>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2018/01/19/slides-fo-ndc-data-dichotomy/">Slides for NDC &#8211; The Data Dichotomy</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Handling GDPR: How to make Kafka Forget</title>
		<link>http://www.benstopford.com/2017/12/04/handling-gdpr-make-kafka-forget/</link>
				<comments>http://www.benstopford.com/2017/12/04/handling-gdpr-make-kafka-forget/#comments</comments>
				<pubDate>Mon, 04 Dec 2017 23:19:35 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=4990</guid>
				<description><![CDATA[<p>If you follow the press around Kafka you’ll probably know it’s pretty good at tracking and retaining messages, but sometimes removing messages is important too. GDPR is a good example of this as, amongst other things, it includes the right to be forgotten. This begs a very obvious question: how do you delete arbitrary data [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/12/04/handling-gdpr-make-kafka-forget/">Handling GDPR: How to make Kafka Forget</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p style="text-align: left;"><span style="font-weight: 400;">If you follow the press around Kafka you’ll probably know it’s pretty good at tracking and retaining messages, but sometimes removing messages is important too. </span><a href="http://www.itpro.co.uk/it-legislation/27814/what-is-gdpr-everything-you-need-to-know-8"><span style="font-weight: 400;">GDPR</span></a><span style="font-weight: 400;"> is a good example of this as, amongst other things, it includes </span><a href="https://en.wikipedia.org/wiki/Right_to_be_forgotten"><span style="font-weight: 400;">the right to be forgotten</span></a><span style="font-weight: 400;">. This begs a very obvious question: how do you delete arbitrary data from Kafka? It’s an immutable log after all.</span><span id="more-4990"></span></p>
<p style="text-align: left;"><span style="font-weight: 400;">As it happens Kafka is a pretty good fit for GDPR as, along with the right to be forgotten, users also have the right to request a copy of their personal data. Companies are also required to keep detailed records of what data is used for — a requirement where recording and tracking the messages that move from application to application is a boon.</span></p>
<h3 style="text-align: left;"><strong>How do you delete (or redact) data from Kafka?</strong></h3>
<p style="text-align: left;"><span style="font-weight: 400;">The simplest way to remove messages from Kafka is to simply let them expire. By default Kafka will keep data for a week (log.retention.hours=168) and you can tune this as required. There is also an Admin API that lets you </span><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore%28%29+API+in+AdminClient"><span style="font-weight: 400;">delete messages explicitly</span></a><span style="font-weight: 400;"> if they are older than some specified time or offset. But what if we are keeping data in the log for a longer period of time, say for <a href="https://www.confluent.io/blog/messaging-single-source-truth/">Event Sourcing</a> use cases or as a </span><a href="https://www.thoughtworks.com/radar/techniques/event-streaming-as-the-source-of-truth"><span style="font-weight: 400;">source of truth</span></a><span style="font-weight: 400;">? For this you can make use of </span><a href="https://kafka.apache.org/documentation/#compaction"><span style="font-weight: 400;">Compacted Topics</span></a><span style="font-weight: 400;">, which allow messages to be explicitly deleted or replaced by key. </span></p>
<p style="text-align: left;"><span style="font-weight: 400;">Data isn’t removed from Compacted Topics in the same way as say a relational database. Instead Kafka uses a mechanism closer to those used by Cassandra and HBase where records are marked for removal then later deleted when the compaction process runs. </span></p>
<p style="text-align: left;"><span style="font-weight: 400;">To make use of this you configure the topic to be compacted and then send a delete event (by sending a null message, with the key of the message you want to delete). When compaction runs the message will be deleted forever. </span></p>
<pre class="wp-code-highlight prettyprint linenums:1">//Create a record in a compacted topic in Kafka
producer.send(new ProducerRecord<>(CUSTOMERS_TOPIC, "Donald Trump", "Job: Head of the Free World, Address: The White House"));
//Mark that record for deletion (a 'tombstone') when compaction runs
producer.send(new ProducerRecord<>(CUSTOMERS_TOPIC, "Donald Trump", null));</pre>
<p style="text-align: left;"><span style="font-weight: 400;">If the key of the topic is something other than the CustomerId then you need some process to map the two. So for example if you have a topic of Orders, then you need a mapping of Customer-&gt;OrderId held somewhere. Then to &#8216;forget&#8217; a customer simply look up their Orders and either explicitly delete them, or alternatively redact any customer information they contain. You can do this in a KStreams job with a State Store or alternatively roll your own. </span></p>
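<p style="text-align: left;">As a sketch of that mapping (untested; the class and method names are hypothetical, and in practice a KStreams State Store would play this role), a minimal in-memory Customer-&gt;OrderId index might look like this:</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory Customer->OrderId index, populated as the
// Orders topic is consumed.
public class CustomerOrderIndex {
    private final Map<String, List<String>> ordersByCustomer = new HashMap<>();

    // Call for each (orderId, customerId) pair read from the Orders topic.
    public void record(String orderId, String customerId) {
        ordersByCustomer.computeIfAbsent(customerId, c -> new ArrayList<>()).add(orderId);
    }

    // The Order keys to tombstone (send with a null value) to 'forget' a customer.
    public List<String> ordersFor(String customerId) {
        return ordersByCustomer.getOrDefault(customerId, Collections.emptyList());
    }
}
```

<p style="text-align: left;">Forgetting a customer is then a loop over ordersFor(customerId), sending a null-valued record for each key.</p>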
<p style="text-align: left;"><span style="font-weight: 400;">There is a more unusual case where the key (which Kafka uses for ordering) is completely different to the key you want to be able to delete by. Let’s say that, for some reason, you need to key your Orders by ProductId. This wouldn’t be fine-grained enough to let you delete Orders for individual customers so the simple method above wouldn&#8217;t work. You can still achieve this by using a key that is a composite of the two: [ProductId][CustomerId] then using a custom partitioner in the Producer (see the </span><a href="https://kafka.apache.org/documentation/#producerconfigs"><span style="font-weight: 400;">Producer Config</span></a><span style="font-weight: 400;">: “partitioner.class”) which extracts the ProductId and uses only that subsection for partitioning. Then you can delete messages using the mechanism discussed earlier using the [ProductId][CustomerId] pair as the key. </span></p>
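<p style="text-align: left;">A minimal sketch of that partitioning logic (untested; the class name, the &#8216;|&#8217; separator and the hash are illustrative, and a real implementation would live in an org.apache.kafka.clients.producer.Partitioner registered via &#8220;partitioner.class&#8221;):</p>

```java
// Illustrative partitioner logic for a "productId|customerId" composite key.
// Assumes the separator is always present and never occurs inside a ProductId.
public class ProductIdPartitioner {

    // Extract the ProductId portion of the composite key.
    static String productId(String compositeKey) {
        return compositeKey.substring(0, compositeKey.indexOf('|'));
    }

    // Choose a partition from the ProductId alone, so every record for a
    // product lands in the same partition regardless of CustomerId.
    static int partitionFor(String compositeKey, int numPartitions) {
        return Math.abs(productId(compositeKey).hashCode() % numPartitions);
    }
}
```

<p style="text-align: left;">Because only the ProductId feeds the hash, a tombstone keyed [ProductId][CustomerId] is routed to the same partition as the original message, which is what compaction needs to remove it.</p>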
<h3 style="text-align: left;"><strong>What about the databases that I read data from or push data to?</strong></h3>
<p style="text-align: left;"><span style="font-weight: 400;">Quite often you’ll be in a pipeline where Kafka is moving data from one database to another using </span><a href="https://www.confluent.io/product/connectors/"><span style="font-weight: 400;">Kafka Connectors</span></a><span style="font-weight: 400;">. In this case you need to delete the record in the originating database and have that propagate through Kafka to any Connect Sinks you have downstream. If you’re using CDC this will just work: the delete will be picked up by the source Connector, propagated through Kafka and deleted in the sinks. If you’re not using a CDC enabled connector you’ll need some custom mechanism for managing deletes.</span></p>
<h3 style="text-align: left;"><strong>How long does Compaction take to delete a message?</strong></h3>
<p style="text-align: left;"><span style="font-weight: 400;">By default compaction will run periodically and won&#8217;t give you a clear indication of when a message will be deleted. Fortunately you can tweak the settings for stricter guarantees. </span>The best way to do this is to configure the compaction process to run continuously, then add a rate limit so that it doesn&#8217;t affect the rest of the system unduly:</p>
<pre class="wp-code-highlight prettyprint linenums:1"># Ensure compaction runs continuously with a very low cleanable ratio
log.cleaner.min.cleanable.ratio = 0.00001 
# Set a limit on compaction so there is bandwidth for regular activities
log.cleaner.io.max.bytes.per.second=1000000 
</pre>
<p style="text-align: left;">Setting the cleanable ratio to 0 would make compaction run continuously. A small, positive value is used here, so the cleaner doesn’t execute if there is nothing to clean, but will kick in quickly as soon as there is. A sensible value for the log cleaner max I/O is [max I/O of disk subsystem] x 0.1 / [number of compacted partitions]. So say this computes to 1MB/s then a topic of 100GB will clean removed entries within 28 hours. Obviously you can tune this value to get the desired guarantees.</p>
<p style="text-align: left;"><span style="font-weight: 400;">One final consideration is that partitions in Kafka are made from a set of files, called segments, and the latest segment (the one being written to) isn&#8217;t considered for compaction. This means that a low throughput topic might accumulate messages in the latest segment for quite some time before rolling, and compaction kicking in. To address this we can force the segment to roll after a defined period of time. For example log.roll.hours=24 would force segments to roll every day if it hasn&#8217;t already met its size limit. </span></p>
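<p style="text-align: left;">These broker settings also have per-topic equivalents, so you can tighten compaction for just the topics that need it. An illustrative set of topic-level overrides (values are examples, not recommendations):</p>

```
# Topic-level overrides (illustrative values)
cleanup.policy=compact
min.cleanable.dirty.ratio=0.00001
# segment.ms is the topic-level analogue of log.roll.hours; 86400000 ms = 24h
segment.ms=86400000
```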
<h3 style="text-align: left;"><strong>Tuning and Monitoring</strong></h3>
<p style="text-align: left;"><span style="font-weight: 400;">There are a number of configurations for tuning the compactor (see properties log.cleaner.* in the </span><a href="https://kafka.apache.org/documentation/#brokerconfigs"><span style="font-weight: 400;">docs</span></a><span style="font-weight: 400;">) and the compaction process publishes </span><a href="https://issues.apache.org/jira/browse/KAFKA-1327?focusedCommentId=13956822&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13956822"><span style="font-weight: 400;">JMX metrics</span></a><span style="font-weight: 400;"> regarding its progress. Finally you can actually set a topic to be both compacted and have an expiry (an undocumented </span><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-71%3A+Enable+log+compaction+and+deletion+to+co-exist"><span style="font-weight: 400;">feature</span></a><span style="font-weight: 400;">) so data is never held longer than the expiry time. </span></p>
<h3 style="text-align: left;"><b>In Summary</b></h3>
<p style="text-align: left;">Kafka provides immutable topics where entries are expired after some configured time, compacted topics where messages with specific keys can be flagged for deletion and the ability to propagate deletes from database to database with CDC enabled Connectors.</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/12/04/handling-gdpr-make-kafka-forget/">Handling GDPR: How to make Kafka Forget</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
							<wfw:commentRss>http://www.benstopford.com/2017/12/04/handling-gdpr-make-kafka-forget/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
							</item>
		<item>
		<title>What could academia or industry do (short or long term) to promote more collaboration?</title>
		<link>http://www.benstopford.com/2017/10/14/academia-industry-short-long-term-promote-collaboration/</link>
				<pubDate>Sat, 14 Oct 2017 13:00:47 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=4970</guid>
				<description><![CDATA[<p>I did a little poll of friends and colleagues about this question. Here are some of the answers which I found quite thought provoking: I&#8217;m a recovering academic from many years ago.  I feel like I have some perspective on graduate/research departments in computer science, even though I am sure things have changed a little [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/10/14/academia-industry-short-long-term-promote-collaboration/">What could academia or industry do (short or long term) to promote more collaboration?</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p><em>I did a little poll of friends and colleagues about this question. Here are some of the answers which I found quite thought provoking:</em></p>
<p><span id="more-4970"></span></p>
<hr />
<p>I&#8217;m a recovering academic from many years ago.  I feel like I have some perspective on graduate/research departments in computer science, even though I am sure things have changed a little since I was in grad school.</p>
<p>One problem I saw is that a ton of the research done in Universities in computer science (outside areas like quantum computing, etc) lags behind industry.  A lot of graduate students in Software Engineering worked on projects that capable companies had already solved or that a senior industry developer could solve in a few weeks.</p>
<p>I also see a lot of graduate student projects where they end up &#8220;building a tool&#8221; except the tool ends up being something nobody would ever use.</p>
<p>Every single one of those kinds of projects destroys the credibility of academics with industry.</p>
<p>A victory for academics seems to be publication or assembling statistical evidence for an assertion.  I get it but nobody in industry cares about those things.  Nobody.  Change your goalposts and align them with industry if you want to collaborate with industry.</p>
<p>I also think there is huge overlap between graduate student research and startups.  Let&#8217;s say I&#8217;m 24 years old, and I think I have an idea to change the world with technology.  Instead of doing it at the University for an M.Sc I can just get some investment and build a startup (even without a business plan sometimes).</p>
<p>If academics want collaboration they need to be brutally honest with themselves and get more focused while facing where they sit today.  The software being written inside Universities often sucks. The research often moves too slowly.  Startups are the innovators.  The kinds of evidence and assertions being &#8220;proven&#8221; in academia are mostly uninteresting.  The outputs like publications are only read by other academics.</p>
<p>It might hurt but if you want credibility, cancel some of that crap. Work in the future, not in the past, understand your strengths and weaknesses and play to your strengths, change your goals to deliver outputs that are really consumable&#8230;</p>
<p>It&#8217;s a lot to ask, so I don&#8217;t see any of that happening&#8230;</p>
<hr />
<p>My company engages quite a lot with academia, and even runs an Institute partly for this. The following is a bit of a brain-dump.</p>
<p>Within the institute we employ an academic-in-residence (Carlota Perez). This is to explicitly support and sponsor work that we think is valuable and should be completed. In this case, to help her finish her second book. The institute also runs a fellowship programme. This is broadly defined to attract individuals with ideas and talent to offer them a network and opportunities, supported by a stipend. We explicitly define this quite broadly to allow people who may not want to start businesses to find value.</p>
<p>Obviously we&#8217;re interested in finding people who want to start businesses, but we keep that distinct from the fellowship to allow more far-reaching visions space to grow, at least a little. If fellows do want to found a business, and are capable of it, then we draw them into and support them in that.</p>
<p>We&#8217;re looking to participate more in academic-industry think-tanks, and other bodies. We individually connect to people in these bodies, and in academia, a lot in workshops we run. Mostly to generate ideas and explore spaces.</p>
<p>Finally, we read a lot of papers.</p>
<p>In our view, this is a start, but not enough. We are doing a little to sponsor the development of ideas within academia, via Carlota Perez, and we&#8217;re allowing people to start research projects in the fellowship. But we want to help with more execution and scale. We&#8217;ve tried to partner with some universities, but we find that they&#8217;re not commercially-focused enough to support us in raising the capital to actually execute with. They want to provide ideas, we provide execution, and capital appears by magic. We need a bit more than that.</p>
<hr />
<p>I was affiliated with [Top UK University] for a time and here is my top-2 list of difficulties:</p>
<p>&#8211; IP: the university makes it really hard to separate the IP between work done during the collaboration vs work done in the day job (industry). The amount of paperwork is typical of a bureaucratic institution. A turn-off for many people (why bother).</p>
<p>&#8211; IP again: this is slightly tangential to the original question and is more related to a different kind of industry-academia collaboration, one where the prof does a startup while in academia. [Top UK University] for example had a policy that 50% of the equity of the startup belonged to [Top UK University]. That number is huge. Prevents other VCs from investing in the startup. Guarantees that basically no one will do a serious startup. A more comparable number in leading US universities like Stanford is 2-5%. There were creative ways around that, but it was a grey area legally. Again, why would one bother going through the hoops. It&#8217;s easier to just not deal with academia at all.</p>
<hr />
<p>My suggestion would be that industry and academia need to develop more understanding of, and respect for, each other&#8217;s needs and incentives. To put it bluntly, the career demands are very different: industry people need to ship products that customers care about, while academics need to publish papers in good venues. With those different incentives come different timelines for working (industry thinks about shipping quickly and long-term maintenance; academia thinks about big ideas for the future, but doesn&#8217;t care about the code once the paper is published), different prioritisation of aspects of the work (e.g. testing), etc. Of course those are over-simplified caricatures, but I hope you get the idea.</p>
<p>I don&#8217;t think one is better than the other &#8212; they are just different, and for a collaboration to be productive, I think there needs to be mutual understanding and empathy for these different needs. People who have only worked in one of the two may get frustrated with people from the other camp, feeling that they just &#8220;don&#8217;t get what&#8217;s important&#8221; (because indeed different things are important).</p>
<hr />
<p>Caveat:  I’m still affiliated with various academic advisory boards so am somewhat biased by the progress we’re making. A few personal comments / observations:</p>
<p>&#8211; Academia has shifted slightly to focus more on &#8220;impact&#8221;, not just papers.</p>
<p>&#8211; The points made about  have always been particularly troublesome for working with [Top UK University] due to the [Top UK University] Innovations licensing arrangements but I think as that arrangement expires there’s recognition that companies can’t keep sinking massive grants into Universities unless they’re philanthropic without new creative commercial ways of working.</p>
<p>&#8211; Linked to the above two points one of the frustrations for industry is that a low TRL development that appears to be 80% of the commercial offer realised in a Uni can be achieved in 20% of the time but the other “20%” productisation to commercial fruition / TRL7 will be 800% of the industry partners production costs and associated time etc&#8230; This should be reflected in the engagement and IP position but isn’t really.</p>
<p>&#8211; Academia is only just recognising that it must adjust to collaborate or risk being outcompeted where “Quantum compute” or “fundamental battery tech”, etc. research groups are appearing in bigger tech companies.</p>
<p>Caveat – my subjective view out of ignorance from the fringes: The EPSRC Industrial Strategy Challenge Fund and Prosperity Partnerships are a massive opportunity and yet the ISCF Waves that have appeared appear to have done so with limited industrial awareness, formal structure and engagement. So those that have been engaged have been at the table more likely through personal relationships, etc. So this needs more publicity and more formality… There also needs to be a clear understanding of Innovate UK, the Catapults’ and Research Councils’ roles.</p>
<hr />
<p>I&#8217;m not sure I have a great answer to this but I think it&#8217;s an interesting question. In the distributed systems world academia plays an important role, but there is always a divide. Things that I think might be useful:<br />
&#8211; Doing more to reach the audience in industry. The best example of this I&#8217;ve seen is https://blog.acolyer.org/.<br />
&#8211; Partnering to study why things work well in practice rather than in theory. For example there is much the wider community can learn from the internal design decisions made by key open source components that run in the real world. So in my field the design decisions made building Kafka, Cassandra, Zookeeper, HBase could use further study which would be useful for the next iteration of technologies.<br />
&#8211; Making it easier for industrial practitioners to play a role in academia. I know a few people that do this, but I&#8217;m not entirely sure how it works; I feel it could be done more.</p>
<p>Finally some comments on twitter here: https://twitter.com/benstopford/status/917991118058459138</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/10/14/academia-industry-short-long-term-promote-collaboration/">What could academia or industry do (short or long term) to promote more collaboration?</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
		<item>
		<title>Delete Arbitrary Messages from a Kafka Topic</title>
		<link>http://www.benstopford.com/2017/10/06/delete-kafka-arbitrary-messages-trick/</link>
				<pubDate>Fri, 06 Oct 2017 07:19:46 +0000</pubDate>
		<dc:creator><![CDATA[ben]]></dc:creator>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Kafka/Confluent]]></category>

		<guid isPermaLink="false">http://www.benstopford.com/?p=4964</guid>
				<description><![CDATA[<p>I&#8217;ve been asked a few times about how you can delete messages from a topic in Kafka. So for example, if you work for a company and you have a central Kafka instance, you might want to ensure that you can delete any arbitrary message due to say regulatory or data protection requirements or maybe [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/10/06/delete-kafka-arbitrary-messages-trick/">Delete Arbitrary Messages from a Kafka Topic</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></description>
								<content:encoded><![CDATA[<p>I&#8217;ve been asked a few times about how you can delete messages from a topic in Kafka. So for example, if you work for a company and you have a central Kafka instance, you might want to ensure that you can delete any arbitrary message due to, say, regulatory or data protection requirements, or maybe simply in case something gets corrupted.</p>
<p>A potential trick to do this is to use a combination of (a) a compacted topic, (b) a custom <a href="https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/Partitioner.html">partitioner</a>, and (c) a pair of <a href="https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/ProducerInterceptor.html">interceptor</a>s.</p>
<p>The process would be as follows:</p>
<ul>
<li>Use a producer <a href="https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/ProducerInterceptor.html">interceptor</a> to add a GUID to the end of the key before it is written.</li>
<li>Use a custom <a href="https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/producer/Partitioner.html">partitioner</a> to ignore the GUID for the purposes of partitioning.</li>
<li>Use a compacted topic so you can then delete any individual message you need via producer.send(key+GUID, null)</li>
<li>Use a consumer <a href="https://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/clients/consumer/ConsumerInterceptor.html">interceptor</a> to remove the GUID on read.</li>
</ul>
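<p>The key handling in the steps above might be sketched like this (untested; the class name and separator are hypothetical, and the separator must never occur in real keys):</p>

```java
import java.util.UUID;

// Sketch of the key rewriting the interceptors would perform.
public class GuidKeyCodec {
    static final char SEP = '|';

    // ProducerInterceptor.onSend: append a GUID so every write gets a unique key.
    static String appendGuid(String key) {
        return key + SEP + UUID.randomUUID();
    }

    // Used by both the custom Partitioner (so partitioning ignores the GUID)
    // and ConsumerInterceptor.onConsume (so the application never sees it).
    static String stripGuid(String storedKey) {
        return storedKey.substring(0, storedKey.lastIndexOf(SEP));
    }
}
```

<p>Deleting one specific message is then a null-valued send keyed by that message&#8217;s full key+GUID; the partitioner routes it, via the stripped key, to the same partition as the original.</p>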
<p>Two caveats: (1) Log compaction does not touch the most recent segment, so values will only be deleted once the first segment rolls. This essentially means it may take some time for the &#8216;delete&#8217; to actually occur. (2) I haven&#8217;t tested this!</p>
<p>The post <a rel="nofollow" href="http://www.benstopford.com/2017/10/06/delete-kafka-arbitrary-messages-trick/">Delete Arbitrary Messages from a Kafka Topic</a> appeared first on <a rel="nofollow" href="http://www.benstopford.com">ben stopford</a>.</p>
]]></content:encoded>
										</item>
	</channel>
</rss>
