<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Musings of an Anonymous Geek</title>
	<atom:link href="http://protocolostomy.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://protocolostomy.com</link>
	<description>Made with only the finest 1&#039;s and 0&#039;s</description>
	<lastBuildDate>Tue, 10 Feb 2026 16:58:27 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
<site xmlns="com-wordpress:feed-additions:1">2259962</site>	<item>
		<title>AI Means Changing Your Sales Approach</title>
		<link>https://protocolostomy.com/2026/02/10/ai-means-changing-your-sales-approach/</link>
					<comments>https://protocolostomy.com/2026/02/10/ai-means-changing-your-sales-approach/#respond</comments>
		
		<dc:creator><![CDATA[jonesy]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 16:58:04 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[ai]]></category>
		<guid isPermaLink="false">https://protocolostomy.com/?p=1079</guid>

					<description><![CDATA[<p>Three weeks ago, I made an appointment to get a demo of a product I already knew I wanted to buy, because I had used it at another employer. No matter, the site would not let me sign us up until I got a demo and spoke to a sales drone. Last week, before that...</p>
The post <a href="https://protocolostomy.com/2026/02/10/ai-means-changing-your-sales-approach/">AI Means Changing Your Sales Approach</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></description>
					<content:encoded><![CDATA[<p>Three weeks ago, I made an appointment to get a demo of a product I already knew I wanted to buy, because I had used it at another employer. No matter, the site would not let me sign us up until I got a demo and spoke to a sales drone. Last week, before that demo could actually happen, I used Claude to build an internal tool good enough that I just canceled the demo. It&#8217;s an internal security awareness training program, now managed with an audit log in Google Sheets, a quiz in Google Forms, and auto-generated completion certificates via an Apps Script attached to the form submission action. It took maybe 3-4 hours to put together, and so far it hasn&#8217;t failed or hiccuped once. </p>



<p>Yesterday morning, I priced out a solution for doing backups of the repositories in our GitHub organization. It was the classic cable TV problem: It was expensive, and the only way to get the features I wanted was to upgrade to a plan that would have me paying for all kinds of stuff I don&#8217;t need just to get the one thing I do. Yesterday afternoon, I started testing a homegrown solution that Claude helped me build, complete with a docker image, python code, terraform code to deploy it all, a README suitable for anyone either using or managing the service, etc. </p>



<p>It&#8217;s already a new world. It&#8217;s not just coming, it&#8217;s here, and companies need to start thinking hard not only about what their customers want and need, but about what they&#8217;re competing with. I&#8217;m not sure it&#8217;s possible to understand what you&#8217;re competing with if you don&#8217;t have an internal AI capability. Just six months ago it might&#8217;ve still been true that the bar for building &#8220;good enough&#8221; solutions internally was too high for a lot of customers. The list of products for which that&#8217;s still true is shrinking rapidly!  </p>The post <a href="https://protocolostomy.com/2026/02/10/ai-means-changing-your-sales-approach/">AI Means Changing Your Sales Approach</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></content:encoded>
					
					<wfw:commentRss>https://protocolostomy.com/2026/02/10/ai-means-changing-your-sales-approach/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1079</post-id>	</item>
		<item>
		<title>What AI Portends</title>
		<link>https://protocolostomy.com/2026/02/10/what-ai-portends/</link>
					<comments>https://protocolostomy.com/2026/02/10/what-ai-portends/#respond</comments>
		
		<dc:creator><![CDATA[jonesy]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 15:59:43 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Big Ideas]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Opinion]]></category>
		<category><![CDATA[Productivity]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[future]]></category>
		<category><![CDATA[history]]></category>
		<guid isPermaLink="false">https://protocolostomy.com/?p=1077</guid>

					<description><![CDATA[<p>It&#8217;s funny how humans have the same conversations over and over again across generations. When I was a kid in the 70s, the old folks talked about kids watching too much TV and it turning their brains to mush. In the 90&#8217;s parents talked about kids being obsessed with Gameboys and consoles and it turning...</p>
The post <a href="https://protocolostomy.com/2026/02/10/what-ai-portends/">What AI Portends</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></description>
					<content:encoded><![CDATA[<p>It&#8217;s funny how humans have the same conversations over and over again across generations. When I was a kid in the 70s, the old folks talked about kids watching too much TV and it turning their brains to mush. In the &#8217;90s, parents talked about kids being obsessed with Gameboys and consoles and it turning their brains to mush. In the 2010s, parents talked about kids staring at phones all day and it turning their brains to mush. Same conversation. Still the world turns. </p>



<p>We do the same thing in the technology space. In the 80s, there were folks talking about computers taking all of the jobs. In the 90s, there were folks talking about the internet taking all of the jobs. In the 2010s people talked about AWS taking all of the jobs. Today we&#8217;re talking about AI taking all of the jobs. Still the world turns. </p>



<p>Whether kids&#8217; brains were ever turned to mush is, perhaps, debatable. But regarding various technologies taking all of the jobs, we have data on that, and it&#8217;s provably false. Well, at least it&#8217;s false from a raw employment perspective. It&#8217;s absolutely true that certain <em>roles</em> ceased to exist, but the people in those roles migrated to other, possibly brand new roles. </p>



<p>That&#8217;s not to say there weren&#8217;t other impacts on people&#8217;s lives &amp; careers, though. Those who saw computers and decided not to learn about them, or about how they could be adopted in their work, eventually came to be seen as complacent, or in the way, or behind the times. Those folks (and I&#8217;m speaking from my own observations of family members way back in the day) eventually had trouble finding jobs, and when they did find work, it was with a company that was still doing everything manually &#8211; because, for example, the owner found it cheaper to hire people than to migrate to computers, since computers put filing clerks in very low demand. Sounds like a great place. </p>



<p>Something similar happened when AWS adoption was building steam back in 2008-2010. There were only a few services back then. The concept of a VPC was brand new, for example, as I recall. I thought it was great, and I was getting real work done with these new tools: I could write code to deploy an EC2 instance, store files in S3, and so on. At that time I was a member of a lot of system administration groups and Linux user groups, and at one meeting someone took an informal poll showing that fewer than half of the folks in the room wrote code on a regular basis. </p>



<p>That really shook me. I was friends with almost everyone in the room. I also could plainly see that AWS was getting bigger faster &#8211; not slowing down, or plateauing. Within the next meeting or two I put together a talk that more or less begged everyone in attendance to learn to code in whatever language fit their brain. To take their shell scripts and port them to Python, or Perl, or Ruby, or anything. Any language would do. &#8220;The reality is&#8221;, I said, &#8220;that Amazon is creating APIs to allow developers to do your job.&#8221; A couple of smirks, a couple of knowing nods of agreement &#8211; from coders. &#8220;It&#8217;s not that there won&#8217;t be any jobs,&#8221; I continued, &#8220;it&#8217;s that the jobs that are left are going to be the ones we all hate doing now, like changing printer toner.&#8221; </p>



<p>It has been nearly 20 years since I gave that talk. LinkedIn is a thing. There were probably 30 people in attendance at that talk, all locals, and the people who smirked are verifiably either not working in technology <em>at all</em>, or working in tiny one-man shops that require a physical presence and have them changing printer toner. </p>



<p>So, I tell you all of that so I can tell you this: I don&#8217;t think AI is anything to panic about. There are realities, though, that history tells us are likely coming. What history also tells us is that the folks who wind up landing on their feet are those who learn to adopt the new technology. The ones who wind up having a difficult time staying afloat are those who are complacent in their roles, who think that AI could never replace them, or who try to foster anti-AI cultures, or the like. And, like computers, and the internet, and AWS, it&#8217;s not that there won&#8217;t be any jobs. It&#8217;s just that you might not want the jobs left in the wake of AI. </p>The post <a href="https://protocolostomy.com/2026/02/10/what-ai-portends/">What AI Portends</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></content:encoded>
					
					<wfw:commentRss>https://protocolostomy.com/2026/02/10/what-ai-portends/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1077</post-id>	</item>
		<item>
		<title>Auditing Your Data Migration To ClickHouse Using ClickHouse Local</title>
		<link>https://protocolostomy.com/2024/01/21/auditing-your-data-migration-to-clickhouse-using-clickhouse-local/</link>
					<comments>https://protocolostomy.com/2024/01/21/auditing-your-data-migration-to-clickhouse-using-clickhouse-local/#respond</comments>
		
		<dc:creator><![CDATA[jonesy]]></dc:creator>
		<pubDate>Sun, 21 Jan 2024 15:04:23 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://protocolostomy.com/?p=1072</guid>

					<description><![CDATA[<p>I&#8217;ve been developing a quick and dirty data migration routine to get terabytes of data stored in AWS S3 as parquet files into our ClickHouse Cloud cluster. I&#8217;m really happy that I took some time to read up on the clickhouse local command, which is included in any installation of ClickHouse. Not only was this...</p>
The post <a href="https://protocolostomy.com/2024/01/21/auditing-your-data-migration-to-clickhouse-using-clickhouse-local/">Auditing Your Data Migration To ClickHouse Using ClickHouse Local</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve been developing a quick and dirty data migration routine to get terabytes of data stored in AWS S3 as parquet files into our ClickHouse Cloud cluster. I&#8217;m really happy that I took some time to read up on the <code>clickhouse local</code> command, which is included in any installation of ClickHouse.</p>



<p>Not only was this tool instrumental in getting the data migrated, but it also allowed me to very easily craft a way to quickly compare numbers between the source and destination to make sure everything I expected to be migrated was actually migrated. </p>



<h2 class="wp-block-heading">The Mess I Made</h2>



<p>It&#8217;s important to know that engineers with 25+ years of experience do dumb things sometimes. It&#8217;s also important to know that having decades of experience does not make you immune to fatigue, and really nobody should be working from 7AM until midnight. It&#8217;s not heroic. It&#8217;s unhealthy and problematic on a bunch of different levels. With that in mind, here&#8217;s the dopey stuff I did after working too many hours, too late at night:</p>



<p>Initially, after I migrated a subset of the data, I did a manual, hacky consistency check using command line tools: querying each source in different terminal windows &amp; eyeballing the output. I wanted a nicer way to view that data, so I created a one-liner using the <code>awk</code>, <code>paste</code>, and <code>column</code> commands. That looked like this:</p>



<pre class="wp-block-code"><code>paste count_clicks.txt count_clicks_parquet.txt | column -t | awk '
NR==1{
  printf(
    "%12s %18s %18s %12s %12s \n", 
    "date", 
    "clickhouse_count", 
    "warehouse_count", 
    "diff", 
    "pct-diff"
  )
}
{
  printf(
    "%12s %18d %18d %12d %12.4f%%\n", 
    $1, 
    $2, 
    $4, 
    $4-$2, 
    (100-($2/$4)*100)
  )
}'</code></pre>



<p>A quick overview of what&#8217;s happening there:</p>



<ul class="wp-block-list">
<li>NR==1 means the first record (NR==2 would mean the second record, etc). If the record number is one, awk will output what&#8217;s in the first set of curly braces. The <code>printf</code> function takes a format spec in the first argument. My format spec lays out 5 columns of either 12 or 18 characters. All of those columns will hold strings, hence the &#8216;s&#8217; in <code>%12s</code>. Then I have a bunch of hard-coded strings, which become the column headers. If you forget to put <code>NR==1</code> in there, the headers will print on every row. Ask me how I know! </li>



<li>The second set of curly braces specs out what will be printed in the rest of the rows. In this case, I have column widths that match up with those of the column headers, and then I have the columns in the output of the earlier parts of the command pipeline: 
<ul class="wp-block-list">
<li>$1 is the date column</li>



<li>$2 is the count from the ClickHouse data source</li>



<li>$4 is the count from the S3 (warehouse) data source</li>



<li>$4-$2 shows the difference between the two data sources, and </li>



<li>The last column shows the percent difference between the two sources</li>
</ul>
</li>
</ul>



<p>The output looks something like this:</p>



<pre class="wp-block-code"><code>        date   clickhouse_count    warehouse_count         diff     pct-diff
  2023-12-01               1471               1545           74      4.7896%
  2023-12-02               1665               1700           35      2.0588%
  2023-12-03               4496               4537           41      0.9037%
  2023-12-04               1650               1705           55      3.2258%
  2023-12-05               1154               1237           83      6.7098%
  2023-12-06               2777               2865           88      3.0716%
  2023-12-07               9244               9293           49      0.5273%</code></pre>



<p>The data here is made up to give an idea of what the output looks like. </p>



<p>So, I had this issue where my data audit showed a mismatch. I did a little work, very late at night, and went to bed thinking I had straightened it all out. When I woke up the next day, my well-rested brain and eyes caught a problem: I had copied the output from my queries of the two sources into two separate files, mis-labeled the data, and then compared it against data in a completely different window, and&#8230; well, it was a mess, and I hadn&#8217;t fixed anything. </p>



<p>I was up too late working for sure, but I also had a messy process. I should&#8217;ve and could&#8217;ve done better. </p>



<h2 class="wp-block-heading">Fresh Eyes, Fresh (and better) Ideas</h2>



<p>Revisiting my work from the night before was painful. As soon as I looked at my process at a high level (by scrolling through my terminal window history) I almost immediately said &#8220;this is insane. ClickHouse Local should be able to query both sources. I shouldn&#8217;t need to copy/paste and introduce levels of indirection that leave room for errors like this.&#8221; </p>



<p>I was right. Using ClickHouse Local, you can query a ClickHouse Cloud instance using the <code>remoteSecure</code> function, and query the S3 data using the <code>s3</code> function, which also lets me pass in the file format as an argument. So there&#8217;s support for my data sources and formats.</p>



<p>On top of that, ClickHouse supports Common Table Expressions (CTEs), so I can craft a query where I name the output from two separate sub select statements (one to each data source), and then write a third <code>SELECT</code> that references the two named result sets as if they were tables. </p>



<p>Below, <code>s3_count</code> and <code>ch_count</code> are named result sets. The last <code>SELECT</code> queries those two named result sets. </p>



<pre class="wp-block-code"><code>clickhouse local --query "
WITH s3_count AS (
  SELECT 
    toDate(time) AS day, 
    count() AS num_events 
  FROM s3('https://s3-endpoint/2023/12/**/*.parquet', 'Parquet')  
  GROUP BY day 
  ORDER BY day
), 
ch_count AS (
  SELECT 
    toDate(time) AS day, 
    count() AS num_events 
  FROM remoteSecure('clickhouse-instance-hostname:9440', 'db.tablename', 'clickhouse-user', 'clickhouse-password') 
  WHERE toYYYYMM(time) = '202312' 
  GROUP BY day ORDER BY day
) 
SELECT 
  s3.day AS date, 
  s3.num_events AS s3_count,  
  ch.num_events AS ch_count, 
  ch_count - s3_count AS diff  
FROM s3_count AS s3 
LEFT JOIN ch_count AS ch 
ON s3.day = ch.day;"</code></pre>



<p>The output has a couple of quirks: possibly because I&#8217;m querying multiple sources, or because of the CTEs, the output columns are not ordered according to my query, and there is no header line to tell me which column is which. It looks like this:</p>



<pre class="wp-block-code"><code>2023-02-01	40	2778	2738
2023-02-02	43	4413	4370
2023-02-03	26	7024	6998
2023-02-04	54	3079	3025</code></pre>
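<p>One thing that should help with the missing headers (an assumption on my part &#8211; I didn&#8217;t go back and re-run this at the time): <code>clickhouse local</code> defaults to a tab-separated output format with no header row, but you can append an explicit <code>FORMAT</code> clause to the final <code>SELECT</code>. Something like:</p>



<pre class="wp-block-code"><code>-- tail end of the same query as above, with an explicit output format;
-- PrettyCompact prints a header row, TabSeparatedWithNames stays script-friendly
FROM s3_count AS s3 
LEFT JOIN ch_count AS ch 
ON s3.day = ch.day
FORMAT PrettyCompact;</code></pre>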



<h2 class="wp-block-heading">Conclusion</h2>



<p>So, this is not something I&#8217;d paste as-is into a slide deck and present to an executive team. However, <code>clickhouse local</code> in this case did give me a (relatively) quick way to verify the consistency (and quantify the inconsistency) between my data sources. Once the data was moved, I wasted probably an hour with <code>awk</code> and friends, but was able to recover the next morning and throw together the <code>clickhouse local</code> solution in maybe just another hour of reading docs, debugging the query, and running test queries. Hope this helps. </p>The post <a href="https://protocolostomy.com/2024/01/21/auditing-your-data-migration-to-clickhouse-using-clickhouse-local/">Auditing Your Data Migration To ClickHouse Using ClickHouse Local</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></content:encoded>
					
					<wfw:commentRss>https://protocolostomy.com/2024/01/21/auditing-your-data-migration-to-clickhouse-using-clickhouse-local/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1072</post-id>	</item>
		<item>
		<title>ClickHouse Cheat Sheet 2024</title>
		<link>https://protocolostomy.com/2024/01/17/clickhouse-cheat-sheet-2024/</link>
					<comments>https://protocolostomy.com/2024/01/17/clickhouse-cheat-sheet-2024/#respond</comments>
		
		<dc:creator><![CDATA[jonesy]]></dc:creator>
		<pubDate>Thu, 18 Jan 2024 04:24:19 +0000</pubDate>
				<category><![CDATA[Database]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[clickhouse]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[servers]]></category>
		<category><![CDATA[sql]]></category>
		<guid isPermaLink="false">https://protocolostomy.com/?p=1071</guid>

					<description><![CDATA[<p>For the past 4 months, ClickHouse has been my life, full time. I&#8217;ve been vetting it for production use and learning all about it in the process. Since my memory has always been notoriously poor, I take a lot of notes (in fact, that was the original reason for this blog&#8217;s existence). So, while I&#8217;d...</p>
The post <a href="https://protocolostomy.com/2024/01/17/clickhouse-cheat-sheet-2024/">ClickHouse Cheat Sheet 2024</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></description>
										<content:encoded><![CDATA[<p>For the past 4 months, ClickHouse has been my life, full time. I&#8217;ve been vetting it for production use and learning all about it in the process. Since my memory has always been notoriously poor, I take a lot of notes (in fact, that was the original reason for this blog&#8217;s existence). </p>



<p>So, while I&#8217;d love to have time to do some longer-form writing about ClickHouse, what I have now in terms of notes could be helpful to probably a lot of people as ClickHouse gains in popularity by the minute. </p>



<p>If there are specific topics within the realm of ClickHouse that you want to see covered, you can certainly let me know. You should know, though, that ClickHouse was operationally complex enough that we just this week opened a ClickHouse Cloud account, so my operational knowledge will probably start to atrophy beginning with the 24.x releases (up to this week, I built and ran various testing clusters myself, in AWS EC2 and EKS). </p>



<p>So, without further ado, here&#8217;s the Cheat Sheet. </p>



<h2 class="wp-block-heading">NUMBER_OF_COLUMNS_DOESNT_MATCH</h2>



<p>If you get seemingly inexplicable ‘NUMBER_OF_COLUMNS_DOESNT_MATCH’ errors whenever you’re doing a select (by itself or as part of an INSERT…SELECT, or whatever), and it seems like the column numbers <em>do</em> match, remove the parentheses around whatever follows SELECT. So, instead of <code>SELECT (x, y, z) FROM foo</code> it should be <code>SELECT x, y, z FROM foo</code>.</p>



<h2 class="wp-block-heading">Remote and RemoteSecure Table Functions</h2>



<p>If you have a ClickHouse server you manage, or an instance in their cloud offering, you can use <code>clickhouse local</code> and the remote table functions to move data into your server from some other source. </p>



<p>For example, I have a cloud instance. I also have an event warehouse in S3, where events are stored in Parquet files. I can use the following command to do an <code>INSERT...SELECT</code> into my (remote) cloud database from the data in the Parquet files in S3:</p>



<p><code>clickhouse local --verbose --query "INSERT INTO TABLE FUNCTION remoteSecure('my-cloud-host:9440', 'mydb.mytable', 'myclouduser', 'mycloudpassword') (col1, col2, col3) SELECT col1, col2, col3 FROM s3('https://s3-endpoint/events/some-event/2023/**/*.parquet', 'Parquet');" </code></p>



<p>As a bonus trick, note the wildcard use in the <code>s3</code> function. The supported wildcards are similar to those used in the bash shell (and maybe zsh &#8211; I&#8217;m not as familiar). Read all about those <a href="https://clickhouse.com/docs/en/engines/table-engines/integrations/s3#wildcards-in-path">here</a>. </p>



<p>Also, note that <code>remoteSecure</code> is just a secure version of the <code>remote</code> function! The syntax is the same! On cloud instances I believe only <code>remoteSecure</code> is supported, which makes sense. </p>



<h2 class="wp-block-heading">ClickHouse Can&#8217;t Parse Some Date Formats</h2>



<p>I&#8217;ve had issues with:</p>



<ul class="wp-block-list">
<li>Dates with a timezone offset going into a DateTime column, and </li>



<li>Dates with microsecond precision going into a DateTime column</li>
</ul>



<p>There are multiple solutions to this, but one I&#8217;ve used that could work depending on your requirements is to wrap the column with the <code>parseDateTimeBestEffortOrZero</code> function, like this:</p>



<p><code>INSERT INTO events (source, timestamp) SELECT source, parseDateTimeBestEffortOrZero(timestamp) FROM mytable</code></p>



<p>By default, ClickHouse has a fast, cheap, and simplified time parsing algorithm. The &#8216;BestEffort&#8217; functions support a wide array of formats, including ISO 8601, RFC 822, and more. See <a href="https://clickhouse.com/docs/en/sql-reference/functions/type-conversion-functions#parsedatetime32besteffort">here</a>. </p>



<p>Here&#8217;s how it deals with a couple of formats I had to deal with:</p>



<pre class="wp-block-code"><code>SELECT parseDateTimeBestEffortOrZero('2011-11-04 00:05:23.283+00:00') AS time

┌────────────────time─┐
│ 2011-11-04 00:05:23 │
└─────────────────────┘

SELECT parseDateTimeBestEffortOrZero('2023-10-26T10:11:18.964768') AS time

┌────────────────time─┐
│ 2023-10-26 10:11:18 │
└─────────────────────┘</code></pre>



<p>In my case, I didn&#8217;t require the additional information lost by using the function, so it worked fine. If you need to preserve it, there are more options than I can cover here, but look at DateTime64 datatype for storing subsecond precision, and look at all of the other <a href="https://clickhouse.com/docs/en/sql-reference/functions/type-conversion-functions">datetime parsing functions available on this page</a>! </p>
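<p>For instance, if you need the subsecond part, the 64-bit variant of the same &#8216;BestEffort&#8217; function takes a precision argument and returns a DateTime64 (a sketch &#8211; check the docs for your release):</p>



<pre class="wp-block-code"><code>-- precision of 6 keeps microseconds in the resulting DateTime64(6)
SELECT parseDateTime64BestEffort('2023-10-26T10:11:18.964768', 6) AS time</code></pre>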



<h2 class="wp-block-heading">My ClickHouse Server Is Out of Disk!</h2>



<p>AFAIK this will only apply to self-hosted instances. If your server&#8217;s disk is full, you have a couple of options:</p>



<ul class="wp-block-list">
<li>EBS volumes can be extended &amp; filesystem resized without data loss or downtime (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/modify-ebs-volume-on-instance.html">source</a>)</li>



<li>A new, bigger disk can be attached to the EC2 instance. Then, add the old and new disks to a new storage policy in the clickhouse server’s config.xml file. Then, move all of the data to the new disk. Once done, the old disk can be removed from the storage policy. (<a href="https://github.com/ClickHouse/ClickHouse/issues/12632#issuecomment-715922823">source</a>) </li>
</ul>



<h2 class="wp-block-heading">Making Wide Tables Without Using NULLable Columns</h2>



<p>NULLable columns can impact performance in clickhouse and their best practices docs explicitly recommend avoiding them. Instead, you can create default values. This is how I was able to combine what were separate tables for each event type in our system into a single ‘events’ table, even though these events don’t all follow the exact same schema. Here&#8217;s an example excerpted from a <code>CREATE TABLE</code> statement:</p>



<pre class="wp-block-code"><code>    `headers` Map(String, String) DEFAULT map('00000', '00000'),
    `client_id` UUID DEFAULT '00000000-0000-0000-0000-000000000000',
    `destination` String DEFAULT '-',</code></pre>



<h2 class="wp-block-heading">The Infamous &#8220;TOO MANY PARTS&#8221; Error</h2>



<p>If you get a “TOO MANY PARTS” error, either in the server log or in a client when doing an insert query to the server, run this query. For each table, it’ll tell you how many parts were created and the average rows per part, in ten-minute buckets for today.</p>



<pre class="wp-block-code"><code>SELECT
  toStartOfTenMinutes(event_time) AS time,
  concat(database, '.', table) AS table,
  count() AS new_parts,
  round(avg(rows)) AS avg_rows_per_part 
FROM system.part_log
WHERE (event_date >= today()) AND (event_type = 'NewPart')
GROUP BY time, table
ORDER BY time ASC, table ASC;</code></pre>



<p>If you see thousands of parts created but the average rows per part is, say, 4, that’s unhealthy. Remember that ClickHouse is optimized to perform fewer, bigger inserts, and performs poorly when doing a high volume of very small inserts, because all of the inserts create parts that have to be merged, and merges are resource intensive. The fewer inserts, the fewer parts, the fewer merges, the happier the server is.</p>
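<p>If you can&#8217;t easily batch on the client side, one lever worth knowing about is asynchronous inserts, where the server buffers small inserts into bigger parts for you. This is a sketch, not something from my own setup &#8211; check the settings docs for your version:</p>



<pre class="wp-block-code"><code>-- per-session example; these can also be set per-user or in a settings profile
SET async_insert = 1;           -- buffer small inserts server-side
SET wait_for_async_insert = 1;  -- only ack the INSERT once the buffer is flushed
INSERT INTO events (source, timestamp) VALUES ('web', now());</code></pre>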



<h2 class="wp-block-heading">Moving Internal Data Off The Root Volume</h2>



<p>ClickHouse stores internal log tables, tables from the <code>system</code> database, and metadata, all under <code>/var/lib/clickhouse</code> by default. Also by default, Amazon Linux puts <code>/var/</code> on the root volume. I wanted it off of the root volume, and thought it made sense to put it on the volume where all of the other data was stored. </p>



<p>This is possible, but it&#8217;s not a documented process and it took me a little debugging to get it done. Here&#8217;s what I did:</p>



<p>First, in config.xml (or wherever you keep this part of the server config), at the top level, set the following paths to whatever you want (I used <code>/data/lib/clickhouse</code>): </p>



<pre class="wp-block-code"><code>&lt;path>/data/lib/clickhouse/&lt;/path>
&lt;tmp_path>/data/lib/clickhouse/tmp/&lt;/tmp_path>
&lt;user_files_path>/data/lib/clickhouse/user_files/&lt;/user_files_path>
&lt;format_schema_path>/data/lib/clickhouse/format_schemas/&lt;/format_schema_path>

&lt;user_directories>
    &lt;local_directory>
        &lt;path>/data/lib/clickhouse/access/&lt;/path>
    &lt;/local_directory>
&lt;/user_directories></code></pre>



<p>Also in config.xml, make sure there isn’t a metadata path associated with an existing disk in your <code>&lt;storage_configuration></code>. It would likely be pointing at <code>/var/lib</code> by default.</p>



<p>At this point I copied everything to the new location: <code>cp -R /var/lib/clickhouse /data/lib/.</code></p>



<p>In addition, you need to make 100% sure that the <code>clickhouse</code> user has ownership and full permissions over everything you&#8217;ve now moved that it had access to before. I did this: <code>cd /data/lib; chown -R clickhouse:clickhouse clickhouse</code> and that worked fine for me. </p>



<h2 class="wp-block-heading">ClickHouse Silently Fails To Start</h2>



<p>There have been plenty of times while learning about ClickHouse that it failed to start, but it was always very good about putting an error in the error log (by default, <code>/var/log/clickhouse-server/clickhouse-server.err.log</code>). One time, it failed to start, and there was no error in the log, and nothing to see using <code>journalctl</code> or <code>systemctl status</code> either. </p>



<p>If this happens, you should go find the command systemd (or your init script) uses to start it, copy it, and run it verbatim yourself, and you&#8217;ll likely see the missing error. In my case, ClickHouse failed to start because it was unable to access its pid file. That&#8217;s weird, because ClickHouse creates the pid file itself. </p>
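<p>On systemd-based distros, finding that command is quick (assuming the stock unit name):</p>



<pre class="wp-block-code"><code>systemctl cat clickhouse-server | grep ExecStart
# copy the printed command and run it verbatim, as the same user the unit runs as</code></pre>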



<p>After some spelunking around I figured out that there was, in fact, a permissions issue (it&#8217;s always permissions or DNS, folks!). What&#8217;s weird is that the permissions issue was in a directory on a completely separate physical disk, but it was causing this failure. So if this happens to you, go look at permissions under your <code>/var/lib/clickhouse</code> directory (or wherever you&#8217;ve pointed it). The user that ClickHouse runs as should own everything recursively underneath it. </p>



<h2 class="wp-block-heading">Age-based Tiered Storage In ClickHouse</h2>



<p>When I first read about tiered storage in ClickHouse, the example they used seemed really strange to me: they configured ClickHouse to move data to another volume when the first volume was 80% full. When I think of tiered storage I immediately think of hot/cold storage volumes, and aging out data to cold storage after some specific period of time. You can totally do that with ClickHouse. </p>



<p>ClickHouse uses ‘storage policies’ to describe the disks available to it &amp; the volumes those disks make up. Using a storage policy (and the also-cool TTL clause), you can configure multiple volumes, ‘hot’ and ‘cold’, for example, and then ClickHouse will migrate the data between volumes for you. It can also recompress data or do other operations at the same time. Here’s what I did:</p>



<p>First, set up a <code>&lt;storage_configuration></code> w/ the hot &amp; cold disks:</p>



<pre class="wp-block-code"><code>     &lt;disks>
        &lt;hot>
            &lt;type>local&lt;/type>
            &lt;path>/data/&lt;/path>
        &lt;/hot>
        &lt;cold>
           &lt;type>s3&lt;/type>
           &lt;endpoint>https://mys3endpoint&lt;/endpoint>
        &lt;/cold>
    &lt;/disks></code></pre>



<p>Now, add the hot/cold policy:</p>



<pre class="wp-block-code"><code>&lt;policies>
    &lt;tiered_storage> &lt;!-- policy name --> 
        &lt;volumes>
            &lt;hot_volume> &lt;!-- volume name -->
                &lt;disk>hot&lt;/disk>
            &lt;/hot_volume>
            &lt;cold_volume> &lt;!-- volume name -->
                 &lt;disk>cold&lt;/disk>
            &lt;/cold_volume>
        &lt;/volumes>
    &lt;/tiered_storage>
&lt;/policies></code></pre>



<p>Now, when you create a new table, you can reference the <code>tiered_storage</code> policy in the <code>SETTINGS</code>, and use a <code>TTL</code> clause to tell ClickHouse what to move and when to move it.  For example:</p>



<pre class="wp-block-code"><code>CREATE TABLE default.foo
(
   source_id UUID,
   timestamp DateTime
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(timestamp)
TTL timestamp + toIntervalDay(45) TO VOLUME 'cold_volume'
SETTINGS storage_policy = 'tiered_storage';</code></pre>



<p>ClickHouse will use the order of the volumes in the storage policy to determine priority, so by default, because we defined &#8216;hot_volume&#8217; first, newly-inserted rows will land on the &#8216;hot&#8217; disk. However, after <code>timestamp + toIntervalDay(45)</code>, the data will be moved to &#8216;cold_volume&#8217; (the S3-backed &#8216;cold&#8217; disk). Note that the <code>TO VOLUME</code> clause references the volume name from the policy, not the disk name. </p>
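<p>To verify the policy is actually doing its job, you can ask ClickHouse where each data part currently lives. This query against the <code>system.parts</code> system table is a sketch assuming the example table above; <code>disk_name</code> should show <code>hot</code> or <code>cold</code> per part:</p>

```sql
-- Show which disk each active part of default.foo is stored on.
SELECT name, partition, disk_name
FROM system.parts
WHERE database = 'default' AND table = 'foo' AND active;
```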



<h2 class="wp-block-heading">Hope This Helps! </h2>



<p>ClickHouse is a very deep product, and each individual feature has a lot of nuance to it, as does the overall behavior of the service in general. This is obviously not the totality of what I learned, but if you&#8217;re just starting out with ClickHouse, I hope this saves you some of the hours I spent getting this knowledge. Good luck! </p>The post <a href="https://protocolostomy.com/2024/01/17/clickhouse-cheat-sheet-2024/">ClickHouse Cheat Sheet 2024</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></content:encoded>
					
					<wfw:commentRss>https://protocolostomy.com/2024/01/17/clickhouse-cheat-sheet-2024/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1071</post-id>	</item>
		<item>
		<title>User Activation With Django and Djoser</title>
		<link>https://protocolostomy.com/2021/05/06/user-activation-with-django-and-djoser/</link>
					<comments>https://protocolostomy.com/2021/05/06/user-activation-with-django-and-djoser/#respond</comments>
		
		<dc:creator><![CDATA[jonesy]]></dc:creator>
		<pubDate>Thu, 06 May 2021 14:25:09 +0000</pubDate>
				<category><![CDATA[Django]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://protocolostomy.com/?p=1063</guid>

					<description><![CDATA[<p>Depending on the project, Django and Djoser can go really well together. Django provides such an enormous feature set as a foundation, and such a modular platform, that tools like Djoser can provide enormous value while still staying out of the way of the rest of your application. At the same time, the whole solution...</p>
The post <a href="https://protocolostomy.com/2021/05/06/user-activation-with-django-and-djoser/">User Activation With Django and Djoser</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></description>
										<content:encoded><![CDATA[<p>Depending on the project, Django and Djoser can go really well together. Django provides such an enormous feature set as a foundation, and such a modular platform, that tools like Djoser can provide enormous value while still staying out of the way of the rest of your application. At the same time, the whole solution (Django, Djoser, and  any other reusable Django app I&#8217;ve ever seen), top to bottom, is Just Python™.  That means you can almost always find the right place to hook in your own code, without having to take over responsibility for an entire solution. </p>



<p>Djoser is a library that integrates nicely into a Django REST Framework project to provide API endpoints for things like user registration, login and logout, password resets, etc. It also integrates pretty seamlessly with Django-SimpleJWT to enable JWT support, and will expose JWT create, refresh, and verify endpoints if support is turned on. It&#8217;s pretty sweet. </p>



<p>Well&#8230;. most of the time. </p>



<p>The only real issues I&#8217;ve had with Djoser always root from one of two assumptions the project makes: </p>



<ol class="wp-block-list"><li>That you&#8217;re puritanical in your adherence to REST principles at every turn, and </li><li>That you&#8217;re building a Single Page Application (SPA)</li></ol>



<p>That first one is easily forgivable: if you&#8217;re going to be an opinionated solution, it&#8217;s best to be consistent and strict. The minute you fall off of that wagon, everything starts to devolve into murkiness. It doesn&#8217;t seem like a big leap to say that a lot of developers prefer an API that is clear and consistent over one that is vague and inconsistent. </p>



<p>As for the second assumption, it honestly doesn&#8217;t get in the way very often, but on a recent project, it bit me pretty hard. On this project, I had to leverage Djoser in my Django project&#8217;s user activation flow. </p>



<h2 class="wp-block-heading">User Registration and User Activation</h2>



<p>User activation happens as part of the user registration process in my case (and, I suspect, most cases). At a high level, the registration flow goes like this: </p>



<ol class="wp-block-list"><li>a POST is sent to the server requesting that a given username or email be given a user account, using the given password. </li><li>the server generates a token of some kind, uses that token to generate a verification link, and sends an account activation email containing the link. </li><li>the end user opens the email and clicks the link</li><li>magic</li><li>the user is activated, and may or may not get an email confirming that their account is ready to go</li></ol>



<p>All of this is straightforward until you get to step 4: &#8216;magic&#8217;. Big surprise, right? Also perhaps unsurprising is that this is where Djoser&#8217;s assumptions make life difficult if you&#8217;re not building a Single Page Application, and/or are not a REST purist. </p>



<h2 class="wp-block-heading">Djoser User Registration</h2>



<p>Before getting to activation, you have to register. For completeness, it&#8217;s worth pointing out that Djoser provides an endpoint that takes a POST request with the desired username, email, and password to kick things off. It integrates nicely with the rest of your application without any real work to do other than adding a urlpattern that&#8217;s given to you to your urls.py file. </p>
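<p>For reference, wiring Djoser&#8217;s registration endpoint (and the rest of its user endpoints) into a project is roughly this; the <code>auth/</code> prefix is my own choice here, not anything Djoser requires:</p>

```python
# urls.py -- sketch of mounting Djoser's routes; assumes Django + Djoser installed.
from django.urls import include, path

urlpatterns = [
    path("auth/", include("djoser.urls")),      # registration, activation, etc.
    path("auth/", include("djoser.urls.jwt")),  # JWT endpoints, if using SimpleJWT
]
```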



<p>In my case, I&#8217;m using a custom user model and I changed the USERNAME_FIELD to &#8216;email&#8217;, and as a result, Djoser accepts just the email and password fields by default, because it&#8217;s leaning on the base Django functionality for as much as possible, which is smart and makes everyone&#8217;s life easier. </p>



<p>When this POST request comes in, password validators and any other things you have set up to happen at user creation time will happen, including the creation of a user record. However, the <code>is_active</code> flag will be <code>False</code> for that record. Then it generates an encoded uid (from the record it created) and a verification token, uses them along with the value of ACTIVATION_URL to form a confirmation link, puts that in an email to the email address used to register, and sends it. And that leads us to&#8230;</p>



<h2 class="wp-block-heading">Djoser User Activation</h2>



<p>First, let&#8217;s have a look at the default value for Djoser&#8217;s ACTIVATION_URL setting. This setting determines the URL that will be emailed to the person who is trying to register a new account. The default value is:</p>



<pre class="wp-block-preformatted">'#/activate/{uid}/{token}'</pre>



<p>This is a front end URL with placeholders for the uid and token values. It gets assembled into an account registration verification link that looks like this: </p>



<p><code>http://localhost:8000/#/activate/Mw/am6c7b-85f2acbaf4691e9cc6c891bbc4fd7754</code></p>
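<p>As an aside, you can decode what&#8217;s in that link: the <code>Mw</code> uid is just the user&#8217;s primary key run through URL-safe base64 (Django&#8217;s <code>urlsafe_base64_encode</code> under the hood). Here&#8217;s a standalone sketch of the link assembly, with the token value made up for illustration:</p>

```python
import base64

def make_uid(pk: int) -> str:
    # Mimics Django's urlsafe_base64_encode(force_bytes(pk)):
    # URL-safe base64 of the primary key, with padding stripped.
    return base64.urlsafe_b64encode(str(pk).encode()).rstrip(b"=").decode()

def build_activation_link(domain: str, activation_url: str, uid: str, token: str) -> str:
    # ACTIVATION_URL is a template with {uid} and {token} placeholders.
    return f"http://{domain}/" + activation_url.format(uid=uid, token=token)

# A user with pk=3 encodes to 'Mw', matching the uid in the example link above.
link = build_activation_link("localhost:8000", "#/activate/{uid}/{token}",
                             make_uid(3), "sometoken")
print(link)  # http://localhost:8000/#/activate/Mw/sometoken
```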



<p>Up to this point in my project, I was gleefully following along with what Djoser seems to be making easy for me. Then I clicked the above link and everything crashed. Why? Because Djoser does not have a back end view to handle the front end URL that is the default ACTIVATION_URL. </p>



<p>Here&#8217;s the explanation from one of the project maintainers: <a href="https://github.com/sunscrapers/djoser/issues/14" target="_blank" rel="noreferrer noopener">https://github.com/sunscrapers/djoser/issues/14</a>  </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>I suppose you directly use an url to <a href="https://github.com/sunscrapers/djoser/blob/master/djoser/views.py#L173">activation view</a> which <a href="https://github.com/sunscrapers/djoser/blob/master/djoser/utils.py#L39">expects POST request</a>. When you open a link to this view in browser it makes a GET request which is simply not working.</p><p>Our assumption is that GET requests should not change the state of application. That&#8217;s why the activation view expects POST in order to affect user model. Moreover it&#8217;s REST API so if you open one of the endpoints in your browser it displays JSON response which is not something for regular user.</p><p>If you&#8217;re working on single page application you need to create a new screen with separate url that generates POST request to your REST API.</p><p>If you really want to have view that activates user on GET request then you need to implement your own view, but remember to provide reasonable html response.</p></blockquote>



<p>To boil this down, my understanding from this is that:</p>



<ol class="wp-block-list"><li>Djoser devs know that the ACTIVATION_URL is going to be used to create a link that is sent to someone via email.</li><li>Djoser devs know that, when you click a link in an email, the result is a GET request to the back end. </li><li>Djoser devs have provided a view for user activation that only supports POST requests in spite of this fact.</li></ol>



<p>This is immensely frustrating. Their implementation seems to sacrifice having a product that works out of the box for the sake of REST purity, and maybe an assumption that all developers are only creating SPAs. What&#8217;s more, searching around for solutions turns up lots of confused people. The trending solution I found was one where you write your own view that <em>does</em> accept a GET request, and then, inside that view, in the back end code, make a POST request to run the code in Djoser&#8217;s UserActivationView! </p>



<p>This all just felt way too&#8230; wrong for me. The back end should not make an HTTP request to itself. Perhaps what I did wasn&#8217;t 100% perfect either, but I&#8217;d be interested in a dialog that could shed more light on why things are the way they are, and how to properly and effectively deal with it. </p>



<h2 class="wp-block-heading" id="my-workaround">My Workaround</h2>



<p>First, if you&#8217;re just here for the code, here&#8217;s the view I created and the urlpattern that maps to it. </p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
from djoser.views import UserViewSet
from rest_framework import status
from rest_framework.response import Response

class ActivateUser(UserViewSet):
    def get_serializer(self, *args, **kwargs):
        serializer_class = self.get_serializer_class()
        kwargs.setdefault(&#039;context&#039;, self.get_serializer_context())

        # this line is the only change from the base implementation.
        kwargs&#x5B;&#039;data&#039;] = {&quot;uid&quot;: self.kwargs&#x5B;&#039;uid&#039;], &quot;token&quot;: self.kwargs&#x5B;&#039;token&#039;]}

        return serializer_class(*args, **kwargs)

    def activation(self, request, uid, token, *args, **kwargs):
        super().activation(request, *args, **kwargs)
        return Response(status=status.HTTP_204_NO_CONTENT)
</pre></div>


<p>My Djoser ACTIVATION_URL in settings.py is:</p>



<pre class="wp-block-preformatted">'accounts/activate/{uid}/{token}'</pre>



<p>And then the urlpattern used to map requests to that url looks like this: </p>



<pre class="wp-block-preformatted">path('accounts/activate/&lt;uid&gt;/&lt;token&gt;', ActivateUser.as_view({'get': 'activation'}), name='activation'),</pre>



<h2 class="wp-block-heading">What&#8217;s Actually Happening In My Workaround</h2>



<p>Djoser leans on Django and Django Rest Framework for a lot of its functionality. In order to support a large number of URLs while duplicating the least amount of code, Djoser utilizes Django Rest Framework&#8217;s &#8216;ViewSet&#8217; concept, which lets you map an &#8216;action&#8217; to a method in a single class. So, instead of having separate views for &#8220;UserRegistration&#8221;, &#8220;UserActivation&#8221;, &#8220;UserPasswordChange&#8221;, and all of the other things that can happen to a user, Djoser just has one class called &#8220;UserViewSet&#8221; (at <code>djoser.views.UserViewSet</code>). </p>



<p>UserViewSet.activation is a method that takes a POST request containing the UID and token values, validates them, and (assuming validation passes) sends a signal that, in my application, sets the <code>is_active</code> flag on the user to <code>True</code> and sends the new user an email letting them know their account is now active. &#8220;It&#8217;s all perfect except for the POST!&#8221; I thought. But I wasn&#8217;t happy with solutions that have code in the back end going back out to the internet to trigger other code on the back end. I wasn&#8217;t going to accept sending a POST request to trigger another view. </p>



<p>So, step one was to create my own view, inheriting from UserViewSet, and then allowing that view to accept a GET request, because you&#8217;ll recall that our mission is to handle the user clicking the link in their email to activate their account. Aside from accepting a GET request, I don&#8217;t really want my code to do anything at all. Just call super().activation and get out of the way! </p>



<p>Now, the base implementation forces POST-only by decorating it with an <code>@action</code> decorator. The first argument to the decorator is a list of HTTP methods supported, and only <code>post</code> is listed. Great! So just don&#8217;t decorate that method, map it to a GET in <code>urls.py</code>, and you&#8217;re all set!</p>



<p>Sadly, it was not quite that easy. Since UserViewSet.activation only supports a POST request, it also assumes that what it needs is already in <code>request.data</code>. But when our GET request comes in, the <code>uid</code> and <code>token</code> values will be in <code>kwargs</code>. Making things more difficult, the <code>request</code> object here is Django Rest Framework&#8217;s <code>Request</code> object, and its <code>data</code> attribute is not settable (I think it&#8217;s a property defined with no setter, but don&#8217;t quote me). So, I can&#8217;t just overwrite <code>request.data</code> and move on. Now what? </p>
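<p>That &#8220;not settable&#8221; behavior is easy to reproduce in plain Python. This toy class (not DRF&#8217;s real <code>Request</code>, which is far more involved) shows why assigning to a getter-only property blows up:</p>

```python
class FakeRequest:
    # Toy stand-in for DRF's Request: 'data' has a getter but no setter.
    def __init__(self, payload):
        self._payload = payload

    @property
    def data(self):
        return self._payload

req = FakeRequest({})
try:
    req.data = {"uid": "Mw", "token": "sometoken"}  # same idea as overwriting request.data
except AttributeError:
    print("nope -- data is read-only, just like on DRF's Request")
```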



<p>So, we need to find a way to get data into <code>request.data</code> so that when I call <code>super().activation()</code>, it can act on that data. In looking at the code for <code>UserViewSet.activation</code> I found this: </p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
    @action(&#x5B;&quot;post&quot;], detail=False)
    def activation(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
</pre></div>


<p>That&#8217;s not the whole method, but in the entire method, the only reference to <code>request.data</code> is on that very first line. Since we already said I can&#8217;t just shim in a line and overwrite <code>request.data</code>, let&#8217;s instead have a look at this <code>get_serializer</code> method! </p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
    def get_serializer(self, *args, **kwargs):
        &quot;&quot;&quot;
        Return the serializer instance that should be used for validating and
        deserializing input, and for serializing output.
        &quot;&quot;&quot;
        serializer_class = self.get_serializer_class()
        kwargs.setdefault(&#039;context&#039;, self.get_serializer_context())
        return serializer_class(*args, **kwargs)
</pre></div>


<p>Notice that it doesn&#8217;t explicitly have a <code>data</code> parameter defined in the method&#8217;s signature. That means any reference to <code>data</code> would have to be in <code>kwargs</code>. That means we can set <code>kwargs['data']</code> inside of this method to whatever we want. The updated method to make that happen only adds a single line:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; highlight: [4]; title: ; notranslate">
    def get_serializer(self, *args, **kwargs):
        serializer_class = self.get_serializer_class()
        kwargs.setdefault(&#039;context&#039;, self.get_serializer_context())
        kwargs&#x5B;&#039;data&#039;] = {&quot;uid&quot;: self.kwargs&#x5B;&#039;uid&#039;], &quot;token&quot;: self.kwargs&#x5B;&#039;token&#039;]}

        return serializer_class(*args, **kwargs)
</pre></div>


<p>That&#8217;s it. You just effectively replaced <code>request.data</code>. </p>
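<p>If the kwargs plumbing feels hand-wavy, here&#8217;s a dependency-free sketch of the same trick: a keyword argument that isn&#8217;t named in a method&#8217;s signature lands in <code>**kwargs</code>, so setting <code>kwargs['data']</code> before forwarding is equivalent to the caller having passed <code>data=...</code> in the first place. The class and values are made up for illustration:</p>

```python
class FakeSerializer:
    # Stand-in for a DRF serializer: just records the kwargs it was built with.
    def __init__(self, *args, **kwargs):
        self.init_kwargs = kwargs

def get_serializer(url_kwargs, *args, **kwargs):
    kwargs.setdefault("context", {"view": "ActivateUser"})
    # The one added line: inject data captured from the URL.
    kwargs["data"] = {"uid": url_kwargs["uid"], "token": url_kwargs["token"]}
    return FakeSerializer(*args, **kwargs)

s = get_serializer({"uid": "Mw", "token": "sometoken"})
print(s.init_kwargs["data"])  # {'uid': 'Mw', 'token': 'sometoken'}
```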



<h2 class="wp-block-heading">One More Time</h2>



<p>This is a lot. Let&#8217;s review what happened. </p>



<p>First, the mission: </p>



<ul class="wp-block-list"><li>Support a GET request that happens when the user clicks the account activation link in their email. </li></ul>



<p>Next, the problem:</p>



<ul class="wp-block-list"><li>There&#8217;s code to handle user activation, but it doesn&#8217;t support a GET request. </li></ul>



<p>Then, the workaround:</p>



<ul class="wp-block-list"><li>Create our own view that inherits from the Djoser UserViewSet to handle the incoming GET request. </li><li>Override the <code>activation</code> method to accept the <code>uid</code> and <code>token</code> parameters coming in on the URL and remove the <code>@action</code> decorator that only allowed HTTP POST. </li><li>Override the <code>get_serializer</code> method to insert the <code>uid</code> and <code>token</code> values into its <code>kwargs['data']</code>. </li><li>Define a urlpattern that maps a get request to our ACTIVATION_URL to our newly-created view. </li><li>Profit</li></ul>



<p>If you don&#8217;t recall any of the above steps from the discussion, <a href="#my-workaround">scroll back up to see my code</a> in the My Workaround section. </p>



<h2 class="wp-block-heading">But Maybe I&#8217;m Wrong!</h2>



<p>I&#8217;m wrong a lot. Maybe you know better. I&#8217;d be happy to see a better alternative solution. I&#8217;d also love to have a better understanding of the logic Djoser is using, because I admittedly just don&#8217;t get that. As a developer, I&#8217;m far more comfortable moving from APIs back towards the operating system and infrastructure services than I am moving into front end frameworks (though I do have to do that sometimes). So, if you understand how a &#8216;front end url&#8217; is sent via email and then expected to somehow be intercepted by a front end that then sends a POST to the back end, please point me to some docs! </p>



<p></p>The post <a href="https://protocolostomy.com/2021/05/06/user-activation-with-django-and-djoser/">User Activation With Django and Djoser</a> first appeared on <a href="https://protocolostomy.com">Musings of an Anonymous Geek</a>.]]></content:encoded>
					
					<wfw:commentRss>https://protocolostomy.com/2021/05/06/user-activation-with-django-and-djoser/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1063</post-id>	</item>
	</channel>
</rss>
