<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AFP548</title>
	<atom:link href="https://www.afp548.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.afp548.com</link>
	<description>Covering Apple IT</description>
	<lastBuildDate>Wed, 09 Nov 2022 03:10:25 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>
	<item>
		<title>Couch to 50k (CPE&#8217;s)</title>
		<link>https://www.afp548.com/2022/11/07/couch-to-50k-cpes/</link>
					<comments>https://www.afp548.com/2022/11/07/couch-to-50k-cpes/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Tue, 08 Nov 2022 00:55:42 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387693</guid>

					<description><![CDATA[I’m going to ping pong between a few topics here, so to give everyone guideposts, here are the three main topics I’d like to stay with you, dear reader: 1. With 50k people in the MacAdmins Slack, it’s fine if you don’t want to be a Client Platform Engineer, but [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I’m going to ping pong between a few topics here, so to give everyone guideposts, here are the three main topics I’d like to stay with you, dear reader:<br />
1. With 50k people in the MacAdmins Slack, it’s fine if you don’t want to be a Client Platform Engineer, but <a href="https://knowyourmeme.com/memes/be-a-lot-cooler-if-you-did">~be a lot cooler if you did~</a> it’s a better time than ever to become one<br />
2. If current CPE&#8217;s aren&#8217;t doing explicit outreach to under-represented-in-tech groups and investing in the growth of the motivated people out there, our standing in our organizations (and the future of our speciality) is the lesser for it<br />
3. To grow/find those people, I have used the same interview questions which can be reverse-engineered into a bunch of advice for those still feeling like they have skills to acquire before being able to call themselves a CPE<br />
<img decoding="async" src="https://www.afp548.com/wp-content/uploads/2022/09/IMG_3398.jpeg" alt="Irrashai"></p>
<blockquote><p> What&#8217;s in a name?<br />
As a definition of what *Client Platform Engineering* is, we&#8217;d be the team that &#8216;manages the life cycle&#8217; of workstation platforms. Day-to-day it means less helping end users in real time and fewer interactions with locked-down &#8216;appliance&#8217;-style mobile devices; it&#8217;s definitely about making sure computers get patches and secure configurations, and that groups can be identified so an appropriate environment is maintained on applicable devices.</p></blockquote>
<h2>Is CPE For You?</h2>
<p>Starting at the beginning, why am I trying to grow the talent pool of CPE’s, and saying it’s both a great time to be one and ok if you don’t want to do this work (forever)? Years ago the MacAdmin community would lose talent to Linux sysadminery/devops-y folk. Then fluency in Continuous Integration/Continuous Delivery supporting the building of iOS apps became widespread. Around the same time securing Corp meant pulling the people who know how to patch and inspect configs on the fleet (we even have the highest count of CPUs) into Security-dedicated roles. Now there’s enough of a gap in tooling from the MDM perspective that vendors are recognizing you don’t get better without people from the trenches with problem space expertise. Opportunities abound! (And we can get compensated accordingly &#8211; it&#8217;s valuable work.)</p>
<p>Speaking for myself, I’ve enjoyed making this my career. But one of my favorite quotes from a manager, heard in passing, was ‘this isn’t going to be your last job, so I’m hoping we can help you improve in the way you want to evolve’. My personality means I’ve followed Mac blogs since it was hard to blog; I’m a lifer for this work. But as a devops mindset and its topics crept into how MacAdmins get their job done, we (should be able to have) increased who can do this job. Tech workers are kept afloat in seemingly all economic conditions and are finally getting salaries that show proper respect for the breadth of skill we display across various parts of the IT ‘stack’. Apple’s continuous churn ~being courageous~ ‘innovating’ breaks things and hands us QA tasks for the disruption it causes, meaning we also have to continuously learn and stay on top of change, more than almost any other tech discipline (besides, I guess, JavaScript frameworks).</p>
<h2>Be You, Too</h2>
<p>I, for one, don&#8217;t need people who religiously follow the blogs or Apple’s mercurial shifts, but I do want folks who know how important gitops and code review and visualization of metrics are and yeah, also happen to be able to deliver operating system/workstation platform lifecycle management. If it’s a given this will not be your last job, I have no problem eventually losing teammates to other departments and fields that use osquery or devtools scripting or containerized GitLab CI/CD or web-facing cloud service delivery, as long as we get the mutual benefit of our time together in <em>enough</em> of the core CPE disciplines.</p>
<p>At (several, failed) Dropbox interviews I sensed I wasn’t respected because I hadn’t gone to university, and likewise at Google due to my lack of ‘real’ Computer Science bona fides (although they were still super nice about it). Some employers train interviewers on bias and explicitly call out ensuring qualified candidates get the consideration they deserve. At most interviews I can still fake a ‘culture fit’ for pampered-people havens because I’m male-identifying and cis and have worked in places with unconscionable practices, with really bad personalities that provided cover for my own toxicity. I want us all to realize we should grow people who aren’t like the archetypes we can be perceived as having benefitted from in our careers. When we do that, our service delivery will get feedback from the widest customer base possible because our members will be that much more approachable. Any drop-off of interest from the already slim subset of people who care about our IT product hits our respect as peers in an engineering organization, and the dynamism of our team can only be assisted by showing better representation of the customer population as a whole.</p>
<p>The <a href="https://www.macadmins.org/about-the-mac-admins-foundation">MacAdmins Foundation</a> recognizing the need for a mentorship program (see <a href="https://macadmins.slack.com/archives/C03RUF2NHU6">#maf-mentorship</a>) is a great space to watch, and I hope we can all find resources at our current employers to get experience both being mentored and mentoring so that we can get REALLY GOOD at investment in people as a community. (And I guess therapy/diversity training to unlearn some of the toxic aspects of how we had previously thought it was acceptable to operate would be a Good Thing<img src="https://s.w.org/images/core/emoji/15.0.3/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;" />.)</p>
<h2>It&#8217;s Dangerous To Go Alone, Bring This</h2>
<p>Now we come to what I hope is AS REMEMBERED as the preceding topics in this post, which is the things someone with my career ‘bent’ would like to see people choose as aspects they can work on if they feel their skill set doesn’t yet match what a CPE team member should be expected to have. And admittedly it’s an unreasonably broad range of skills, but at this point I have well-trodden, go-to topics I’d like to see a passing familiarity with, and links to preferred resources to consult can hopefully get folks started.</p>
<p>Not just any scripting language, python is what I need people to be relatively fluent in, and Google’s <a href="https://developers.google.com/edu/python">online python class</a> was what really challenged me but also provided a great set of resources to come back to about ‘what was that <a href="https://developers.google.com/edu/python/introduction">super basic thing</a> again’. Being able to tool around in the interpreter is way more cool than fish shell prompt bling IMO. All of us who live or die by shell scripting know you hit a line length or complexity where it’s the wrong tool for the job, and I certainly learned by copy-pasting and modifying others’ code, but you need to reason with and solve problems from scratch ‘on your own’ with a language that intends to give you one good-enough common-case way to navigate a complex control flow. As a community, we benefit from a truly immensely valuable turn of events from over 15 years ago: <a href="https://managingosx.wordpress.com/">Greg Neagle</a> wanted to learn python, worked with Google engineers to add features to Munki, and writes well-commented, as-straightforward-as-possible code with compact functions and not much sprawl in either number of library files or line length. Listing all the functions in <a href="https://github.com/wdas/reposado">Reposado</a> and tracing the logic control flow (for a PSU MacAdmins <a href="https://www.youtube.com/watch?v=PrSXU1v10KI">presentation</a>) was one of the best things I did for my understanding of solving non-trivial problems (in reality just tangentially related to the MacAdmin space) in python.</p>
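<p>The Reposado exercise above is easy to reproduce against any codebase. A minimal sketch using the standard library&#8217;s inspect module, pointed at the stdlib json module as a stand-in target (since Reposado itself isn&#8217;t assumed to be installed):</p>

```python
import inspect
import json  # stand-in target; point this at a Reposado library file instead


def list_functions(module):
    """Enumerate every function a module defines, with its signature."""
    return [
        (name, str(inspect.signature(obj)))
        for name, obj in inspect.getmembers(module, inspect.isfunction)
    ]


# print an index of functions you can then trace through by hand
for name, sig in list_functions(json):
    print(name + sig)
```

<p>Swapping the target module for one of Reposado&#8217;s gives you the same kind of index to trace the control flow through.</p>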
<p>You may find other training resources that stick with you (I also liked the <a href="https://www.coursera.org/learn/interactive-python-1">Coursera python class</a> and, like the Google resource above, regularly frequent the <a href="https://pymotw.com/">pymotw</a> reference series), and hopefully you come across approachable problems to take apart. And THEN you should definitely put your flag in the ground and share what you’ve come up with; open source has been a huge part of what CPE needs at any kind of scale. I think running a high-ish visibility project has helped Greg find peers in the community that solve problems in different ways; code review is a hard discipline to be immersed in and we all start somewhere &#8211; reporting issues and writing documentation is super helpful and valuable! Similar to how I benefitted from having to present on the topic of Reposado and my blogging about various things, being able to reason about a stance you’re taking in text is a hugely valuable skill. Greg wrote literal magazine articles on MacAdmin topics, and as the length of the subscriptions in MacAdmins Slack’s blog-feed list and the infrequency of articles on AFP548 may indicate… there are always readers in this space looking for cogent content and ways to reason about the challenges this work presents.</p>
<h2>Show &#8216;Em What You Got</h2>
<p>Contribution to open source online has to get slid through the perilous slot of version control, so yet another side benefit of sharing code is that it proves you can slay the git dragon. (VS Code will never get me, but the war was long ago lost for mercurial; good riddance svn etc.) API wrangling and glue code will never stop being a well-worn topic, so if you’re in search of where to show prowess, python modules that handle web and ‘process’ (read = wrapping command line tools to automate things) interactions are what you’ll be seeing for most of your career. (The built-in standard library should be preferred, but if a problem-space-specific pip module is trustworthy enough, investigate it and come up with your own qualification criteria!) If you can do some or most of that &#8216;glue&#8217; in a language like Objective-C or Go or Swift, all the better, but these days pseudocode python when designing solutions feels like a common-enough language for design and discussion.</p>
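<p>That &#8216;process&#8217; glue usually reduces to one well-behaved wrapper. A hedged sketch (the names are mine, not from any particular project) that runs a tool, returns stdout, and fails loudly, demoed against the python interpreter itself so it stays self-contained:</p>

```python
import subprocess
import sys


def run_tool(argv):
    """Run a command line tool, return its stdout, and fail loudly on error."""
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{argv!r} failed: {result.stderr.strip()}")
    return result.stdout.strip()


# self-contained demo: shell out to the interpreter running this script
print(run_tool([sys.executable, "-c", "print('ok')"]))
```

<p>The same shape wraps profiles, security_authorizationdb, or whatever binary you&#8217;re automating; the point is that errors surface instead of silently returning empty strings.</p>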
<p>From a similar web and platform perspective, I want to interview people who know <em>enough</em> about how IP subnetting and DNS records work, and the basic moving parts, to troubleshoot connectivity &#8211; I have no clue why this is so often overlooked. Getting at log files is another live-or-die skill for some; for me, just being able to process large volumes of text across various formats (json, xml, etc.) to do basic analysis was a fun task that helped in other problem spaces over the years.</p>
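<p>As a toy version of that text-processing task, here&#8217;s a sketch tallying statuses out of json records; the field names and values are fabricated for illustration:</p>

```python
import json
from collections import Counter

# fabricated log lines, stand-ins for whatever your fleet actually emits
log_lines = [
    '{"host": "mac-01", "check": "filevault", "status": "OK"}',
    '{"host": "mac-02", "check": "filevault", "status": "FAILED"}',
    '{"host": "mac-03", "check": "filevault", "status": "OK"}',
]

# parse each record and count how many passed vs. failed
tally = Counter(json.loads(line)["status"] for line in log_lines)
print(tally)
```

<p>The same pattern scales from three lines to millions with a generator over a real file handle instead of a list.</p>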
<p>If I can be opinionated and brief (ever), don’t learn Jenkins, learn GitLabCI/CD or GitHub Actions and understand convergence time periods and document warranty’d behavior and engage with the pitfalls in both the dependency stack and network links that connect your automation from start to finish. (Also internalize why you should strive to never curl -k, ffs.) Likewise, document your entire DEP deploy process, all the things from an out-of-the-box Mac to the warranty’d Standard Operating Environment. Automation is a contract, put your name at the top of the wiki/notion page and own that you are responsible for what you said your engineering delivers. (It’s ok if the world has shifted before the ink is dry: the time stamp on the page is as much versioning as you need to be held to &#8211; just at least do it once for yourself/future you.)</p>
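<p>On the curl -k point, the python analogue is an ssl context with verification switched off; the default context already does the right thing, so this sketch just shows the difference:</p>

```python
import ssl

# the default context verifies both the certificate chain and the hostname
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)

# the `curl -k` equivalent -- never ship this outside a throwaway lab:
# ctx.check_hostname = False
# ctx.verify_mode = ssl.CERT_NONE
```
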
<h2>Get Through The Door</h2>
<p>More opinions! One (pdf) page for a resume SHOULD BE enough &#8211; anyone will help you whittle things down and format it well enough if you ask them, and a LinkedIn or Google Sites page with your entire history can fill in the rest of the details if you want to be verbose. Phrase achievements from your time with previous employers in quantifiable terms: budget and/or time and/or complexity, impact-wise. Include stuff you have working knowledge of at least maintaining; don&#8217;t list alphabet soup. Do check the recruiter boxes in a job post! I want you to get past the recruiter pulse check, and you&#8217;ll save them time if you include tangible experience with the tools we&#8217;re asking for familiarity with. Go ahead and tell me you have PHP or ColdFusion or Novell eDirectory or VectorWorks ~or trapeze~ or whatever other things may be considered inapplicable/passé experience, as long as you delivered real value with it! Recruiters may not care, but I&#8217;ll be intrigued that you met the business challenges where they needed to be solved; demonstrable, broad excellence in anything is a great indication you can be good at Just CPE Thangs<img src="https://s.w.org/images/core/emoji/15.0.3/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;" />.</p>
<p>In recent years &#8216;service delivery&#8217; (meaning hosting Crypt or an MDM or monitoring or inventory or CloudFunction/Lambda/glue-y stuffs or whatever other platforms end up not being solely SaaS nowadays) has meant ‘the cloud’, and for some that has also meant Terraform (or whatever platform-specific way of delivering your needed infra to your cloud is). My recent dalliances with compliance also bring us back to the fundamental need and unique problem space configuration management tools like Puppet, Chef, SaltStack and Ansible address. I cannot overstate the importance of approaching high-stakes infra or config delivery with that mindset: you must get as much of your setup into repeatable code as possible, and at least understand the off-ramp one must take when the workflow around basic scripts ain’t gonna cut it anymore. You’ll definitely be leveling up past where I’m at when you can reason about where ‘data’ and/or config-specific boundaries ‘naturally’ lie and can be dealt with separately in Infra As Code and server-full (or, even cooler, serverless+metrics) config mgmt deployments.</p>
<h1><img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f647.png" alt="🙇" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h1>
<p>Ok, pardon, that may have been a bit of a firehose. I write like I talk, and I’m a recovering New Yorker. Hopefully the whole audience sees things in this they can get value from, and I’m glad to discuss things further in the comments or Slack or elsewhere online! (No I do not has a WoolyMammothdon.) Best of luck!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2022/11/07/couch-to-50k-cpes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Zentral for ~Observability~ ~Governance~ Compliance</title>
		<link>https://www.afp548.com/2022/09/29/zentral-for-observability-governance-compliance/</link>
					<comments>https://www.afp548.com/2022/09/29/zentral-for-observability-governance-compliance/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Thu, 29 Sep 2022 13:54:56 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<category><![CDATA[auditing]]></category>
		<category><![CDATA[CIS]]></category>
		<category><![CDATA[client platform engineer]]></category>
		<category><![CDATA[compliance]]></category>
		<category><![CDATA[CPE]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[gitops]]></category>
		<category><![CDATA[governance]]></category>
		<category><![CDATA[grafana]]></category>
		<category><![CDATA[jmespath]]></category>
		<category><![CDATA[Munki]]></category>
		<category><![CDATA[observability]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[osquery]]></category>
		<category><![CDATA[prometheus]]></category>
		<category><![CDATA[zentral]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387667</guid>

					<description><![CDATA[You’ve watched my MDOYVR presentation, but instead of being able to draw an owl, you’re concerned about standing up an osquery query distribution stack in production (unfortunately not what we’ll cover in this post, sorry!) and actually doing the job of visualizing the data. ‘Observability’ and ‘governance’ are my favorite [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>You’ve watched my <a href="https://www.youtube.com/watch?v=HVycnWM_4lA">MDOYVR presentation</a>, but instead of being able to draw an owl, you’re concerned about standing up an osquery query distribution stack in production (unfortunately not what we’ll cover in this post, sorry!) and actually doing the job of visualizing the data.</p>
<p>‘Observability’ and ‘governance’ are my favorite buzzwords of late, because the Reporting Structure Above incentivizes us Client Platform Engineers to display the slog of busywork we ship when preparing to be audited, as it immediately turns into proof of a compliant state. In being rigorous, we mix in things that prove we&#8217;re doing our <em>actual</em> job, and the org validates that every effort is worthwhile &#8211; or the dashboards wouldn&#8217;t need to be looked at. I won’t spend too many characters on ‘feelings’ about this whole CIS thing (I think I got enough of that out of my system), so in this post we&#8217;ll elaborate on the client, the server, and the data moving parts of actually shipping those checks.</p>
<p><img decoding="async" src="https://i.imgflip.com/6u0i11.jpg" alt="Outkast reference" /></p>
<p>As we use Zentral to get the job done, it&#8217;s what I&#8217;ll be referencing a bunch in this post. Bundling Grafana/Prometheus/OpenSearch behind its osquery service gave us flexible options at an enterprise scale, but that&#8217;s admittedly a lot of stack if you&#8217;re looking at setting up the ~dozen or so core moving parts (our Terraform apply is ~250 AWS resources between all the alarms and backups and secure infra configs and tweaks and customizations we&#8217;ve built up).</p>
<p>Unfortunately if you were hoping for a clean shot where you wouldn’t need to be a subject matter expert on various parts of the systems you need to audit AND the osquery server&#8217;s moving parts… ur in teh rong biz/are perhaps in need of reminding what that &#8216;engineer&#8217; part refers to when the whole world has done gone SRE/DevOps-y for a while now. Fortunately, we&#8217;re all lucky to be in the biz with flexible, rock-solid tools like Zentral and the open source communities helping smooth over the rough spots that undoubtedly occur with the various agents that can send data through it. Even our relatively lean team got a long way towards telling a good story in graphs (lies, damn lies, statistics, etc.)&#8230; at least until the delayed and new/future tech debt comes a callin’ <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f605.png" alt="😅" class="wp-smiley" style="height: 1em; max-height: 1em;" />.</p>
<p>Zooming back in on the examples referenced in my presentation, jmespath and osquery are the two formats you can write ‘compliance checks’ in. The ‘pass/fail’ data for these can then be surfaced as ‘filters’ a.k.a. widgets on the Zentral inventory view. You can even include a collection of them at a RESTful URL to easily point folks at. But… Zentral has a GUI only a developer could love <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f605.png" alt="😅" class="wp-smiley" style="height: 1em; max-height: 1em;" />. Just as vital to its dependability as the backend (leveraging cloud primitives like subscriptions and message queues) and the frontend ‘stores’ (open/elasticsearch/splunk/datadog/snowflake…), the accompanying Prometheus and Grafana stack I referred to lets us tap into and slice and dice the data for even easier consumption &#8211; the kind of access folks usually need logging infra and vendor-specific specialization to get at.</p>
<p>Jmespath is a way of querying json data (in our implementation from events earmarked to also get streamed into prometheus buckets) which has usage syntax not unlike jq. Zentral implements common-case, low-hanging fruit/golden-path ways of querying e.g. the presence of mobile configuration profiles delivered to a device (as inventoried by the ‘monolith’ wrapper custom to Zentral in case you’re hosting munki somewhere else &#8211; CERTAIN overpriced MDM’s cannot efficiently return full inventory data about what profiles are installed on a device… no comment.)  But you should keep in mind jmespath is not a full scripting language, you may soon realize how shallow you dip into jq when parsing stuff. Many of the ‘sources’ you can plug in as modules to Zentral (Jamf/Munki/Puppet being the most notable) have a section in each inventory record with the last recorded value for ‘extra facts’ &#8211; that custom/arbitrary inventory data can have simpler jmespath queries run and stop smoke from coming out of your ears. (I’m leaving out some specifics and details about which facts you may benefit from, but… you know where to find us in the Zentral community if you need blanks filled in <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" />.)</p>
<p>Helpfully, you can use the DevTool built into Zentral to take real collected data for a device and confirm that the jmespath check gives you the result you expect, and conversely devices that are in the opposite state are parsed correctly. Besides ‘scoping’ a check to a source or an operating system platform, tags also allow you to take other groupings you can sync into Zentral and apply arbitrary constraints as needed. For example, cloud-hosted Ubuntu systems wouldn’t need full disk encryption and screensaver locks, nor do macOS-based conference room and signage ‘appliance’ Mac Mini’s, so identifying those via tags or sites ensures the checks only run on applicable devices. Here’s an example of confirming the presence of a mobile configuration profile with WSOne or Jamf:</p>
<p><code>contains(keys(@), `profiles`) &amp;&amp; contains(profiles[*].uuid, `E81C883A-6938-4F43-8E28-10428836FB2B`)</code></p>
<p>Breakinitdown:</p>
<ul>
<li>First, confirm which source sends the inventory information for the top-level key you&#8217;re going to follow down the tree for (e.g. on Mac, munki gathers profile information)</li>
<li>Then confirm it&#8217;s present (as there may be a slow convergence period e.g. right after DEP bootstrap) by using &#8216;contains&#8217; and the key name wrapped in backticks: <code>contains(keys(@), `profiles`)</code></li>
<li>As mentioned, you can customize this for other top-level keys, under <code>extra_facts</code>. Note those would be source-specific, so e.g. you’d choose Jamf if you need EAs</li>
<li>Next, continue evaluating with a logical and, &amp;&amp;</li>
<li>Then enumerate the uuids across all nested profiles with a glob/asterisk in square brackets after the top-level key, and then dot-index into the value we want to check: profiles[*].uuid</li>
<li>Nest that inside a contains() and follow the profiles[*].uuid above with a comma and then the value we&#8217;re checking for wrapped in backticks</li>
</ul>
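<p>jmespath isn&#8217;t in python&#8217;s standard library, so as a sanity check here&#8217;s the same logic reproduced in plain python against a fabricated inventory snippet (the record shape is a simplified stand-in, not Zentral&#8217;s exact schema):</p>

```python
# fabricated inventory record, shaped like the tree the jmespath expression walks
inventory = {
    "profiles": [
        {"uuid": "E81C883A-6938-4F43-8E28-10428836FB2B", "removal_allowed": False},
    ]
}

target_uuid = "E81C883A-6938-4F43-8E28-10428836FB2B"

# equivalent of: contains(keys(@), `profiles`) and contains(profiles[*].uuid, `<uuid>`)
passed = "profiles" in inventory and target_uuid in [
    p["uuid"] for p in inventory["profiles"]
]
print("OK" if passed else "FAILED")
```

<p>The first clause guards against the key being absent entirely, which is why the real expression checks keys(@) before indexing into profiles.</p>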
<p>To reiterate a point I made in the presentation, I find these types of ‘is profile present’ checks… naïve, because you’re trusting that the responsible framework(s) did in fact apply the change/restriction and it’s not pending a reboot or transition to a specific ‘happy state’. So to contrast the previous example, here’s how you’d check that the condition puppet tests for (via a fact, so it can run idempotently) is in the applicable compliant state. We use funny logic that ends up making enough sense that expressing a boolean in text doesn’t drive us nuts:</p>
<p><code>contains(keys(puppet_node.extra_facts), `indeed_nfc_disabled`) &amp;&amp; puppet_node.extra_facts.indeed_nfc_disabled == 'OK'</code></p>
<p>(We’d always see the puppet_node in the root of the json inventory representation, so we’re going two nodes down the tree to make sure we have the extra_fact populated in the first place, otherwise Zentral would give us a third state, ‘UNKNOWN’ which obvs isn’t ideal.)</p>
<p>To explain how you&#8217;d write SQL (osquery&#8217;s engine is SQLite) to check compliance, it&#8217;s best to start with another naïve example &#8211; this checks the hash of one file and is wrapped with keywords Zentral looks for when parsing its boolean result:</p>
<p><code>SELECT sha256,<br />
CASE<br />
WHEN sha256 = 'omgwtfbbqrandomchars00112233445566' THEN 'OK'<br />
ELSE 'FAILED'<br />
END ztl_status<br />
FROM hash<br />
WHERE path = '/path/to/file';</code></p>
<p>To explain the syntax in use here even if you’re not SQLite/osquery-savvy:</p>
<ul>
<li>Starting with SELECT and then sha256 as the &#8216;key&#8217;/column in the result whose value we&#8217;re trying to evaluate</li>
<li>Then CASE, so we enter the sqlite &#8216;logic flow&#8217; where pass/fail criteria can be assigned and different paths can be followed as a result</li>
<li>Then WHEN sha256 = &#8216;omgwtfbbqrandomchars00112233445566&#8217; THEN &#8216;OK&#8217; ← which designates the &#8216;pass&#8217; case if the osquery hash table says the result of checking that file is as we expect</li>
<li>ELSE &#8216;FAILED&#8217; (one possible keyword Zentral is looking for in the ‘failure’ state)</li>
<li>END ztl_status ← this renames the ad hoc key/column in our results to ztl_status, which is how Zentral knows where to read a value from; the END closes the CASE statement</li>
<li>FROM hash ← tells it what table we&#8217;re getting the sha256 of the file from</li>
<li>WHERE path = &#8216;/path/to/file&#8217;; ← these are our sql-isms that stretch how you think a database works: it&#8217;s not something statically stored that we&#8217;re retrieving, the lookup happens &#8216;at time of query&#8217; on the device in question to determine the sha256 of the stated path</li>
</ul>
<p>And to be more rigorous, here&#8217;s an advanced example of a query for the NTP config, which SHOULD be parsed and applied as expected on most recent macOS versions… <img src="https://s.w.org/images/core/emoji/15.0.3/72x72/1f605.png" alt="😅" class="wp-smiley" style="height: 1em; max-height: 1em;" /> (which… we never got the red team to confirm does anything worthwhile? But we felt there wouldn’t be much collateral damage/downside to making this the default, and the GUI is still accessible to confirm SOMETHING we consider accurate is being applied):</p>
<p><code>with compliance_check as (<br />
select sha256,<br />
case when sha256 = 'omgwtfbbqrandomchars00112233445566' then 'OK' else 'FAILED' end ztl_status<br />
from hash<br />
where path = '/path/to/conf'<br />
)<br />
select compliance_check.sha256,<br />
coalesce(compliance_check.ztl_status, 'FAILED') ztl_status<br />
from (select 1 as x) x<br />
left join compliance_check on x.x = 1;</code></p>
<p>For osquery via sqlite, &#8216;join&#8217;s allow us to pull results from multiple tables to correlate or otherwise mash up. Dot-notation is used with those table names to map the &#8216;field&#8217;/column label either to return it in the subsequent results or to match on it when sewing tables together. You can additionally create new ad hoc tables to work with, or wrap the output of a select statement looking up values in a table into a newly named one.</p>
<p>Zentral provided code to cover the case when the thing we&#8217;re auditing may actually not be present and therefore not return a result, whereas we&#8217;d prefer to fail the check in that context instead of letting it be unknown, to improve on the outcome of the v1 query above. To explain what&#8217;s going on in the v2 example code provided:</p>
<ul>
<li>First we&#8217;re overriding the name of the table &#8216;nesting&#8217; the results of the query in parentheses using the &#8216;as&#8217; keyword to make it &#8216;compliance_check&#8217;</li>
<li>We&#8217;re then requesting two columns: sha256, and the result of the case statement, labeled &#8216;ztl_status&#8217;</li>
<li>From that new &#8216;metatable&#8217;, we say we&#8217;d like to end up with its sha256 column and a default of failed for ztl_status (via coalesce)</li>
<li>In the process/same line we&#8217;re also re-applying the ztl_status label (since the way we&#8217;re asking for it would be the column header otherwise)</li>
<li>To &#8216;trick&#8217; the query into returning/generating/allowing an empty/NULL result (when, as in this case, the &#8216;where path&#8217; isn&#8217;t found and the query would normally return no rows) we make a SECOND, new table to join against with one key/value, x = 1, and re-label that single column &#8216;x&#8217;.</li>
<li>Left joining excludes that new table&#8217;s content, but now we can return an empty sha256 and FAILED for that case from the right side&#8217;s table. Think of it like a Venn diagram where our join only cares about the values distinctly in the ‘compliance_check’… metatable</li>
</ul>
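<p>Because osquery&#8217;s dialect is SQLite, you can watch that fail-closed behavior anywhere sqlite runs. A sketch that stands in a throwaway table for osquery&#8217;s hash table (same made-up digest as above), first with the audited path missing, then present:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
# stand-in for osquery's `hash` table; deliberately left empty so the
# audited path starts out "missing"
con.execute("create table hash (path text, sha256 text)")

query = """
with compliance_check as (
  select sha256,
  case when sha256 = 'omgwtfbbqrandomchars00112233445566'
       then 'OK' else 'FAILED' end ztl_status
  from hash
  where path = '/path/to/conf'
)
select compliance_check.sha256,
       coalesce(compliance_check.ztl_status, 'FAILED') ztl_status
from (select 1 as x) x
left join compliance_check on x.x = 1
"""

# path absent: the v1 query would return zero rows, but the left join +
# coalesce still yields a row that fails closed
print(con.execute(query).fetchone())  # (None, 'FAILED')

# once the file exists with the expected digest, the same query passes
con.execute(
    "insert into hash values ('/path/to/conf', 'omgwtfbbqrandomchars00112233445566')"
)
print(con.execute(query).fetchone())
```

<p>Real osquery computes the hash at query time rather than reading stored rows, but the join semantics are identical.</p>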
<p>And now that we’ve broken that down, to bring it all home, here’s the true not-so-secret sauce of the setup: compliance checks status change events get written out to Prometheus metrics, which Grafana can dynamically pull in via variables to a self-maintaining dashboard! The one we use reflects the raw numbers alongside percentages as a time series trend a.k.a. a burndown chart (if we had a threshold, which you can also trivially represent in Grafana as an arbitrary horizontal line, to work towards getting the fleet under). Looks impressive on slides! Get yours today!</p>
<p>Zentral actually has more exciting visualizations, but I’ll round out this article underlining the governance benefits of using this tool for proof when being audited: on Windows and Linux via puppet we check in the code, on Mac we check in the Munki pkg with the metadata and payload that made the change, then with Terraform to manage the compliance check or gitops to manage the osquery pack we verify that the job got done. And THAT is how you apply rigor to your processes so you can own the ‘engineer’ in your title as Client Platform Engineers, if I do say so myself. I hope you agree and get to work with tooling that enables rigor like this!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2022/09/29/zentral-for-observability-governance-compliance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Code Wranglers Speak In Tongues (Generally)</title>
		<link>https://www.afp548.com/2020/04/30/code-wranglers-speak-in-tongues-generally/</link>
					<comments>https://www.afp548.com/2020/04/30/code-wranglers-speak-in-tongues-generally/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Thu, 30 Apr 2020 13:51:00 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<category><![CDATA[bash]]></category>
		<category><![CDATA[ffs]]></category>
		<category><![CDATA[gist]]></category>
		<category><![CDATA[git]]></category>
		<category><![CDATA[github]]></category>
		<category><![CDATA[gitlab]]></category>
		<category><![CDATA[mossad]]></category>
		<category><![CDATA[pastebin]]></category>
		<category><![CDATA[path]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[style]]></category>
		<category><![CDATA[sysadmin oath]]></category>
		<category><![CDATA[tls]]></category>
		<category><![CDATA[venv]]></category>
		<category><![CDATA[yolo]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387632</guid>

					<description><![CDATA[It is commonly said that python tends to be at least the second-best tool to reach for when approaching a task, whereas some of us get by just fine with shell. The lack of a compilation step for scripting languages tends to help us iterate and ship good-enough solutions quickly, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>It is commonly said that python tends to be at least the second-best tool to reach for when approaching a task, whereas some of us get by just fine with shell. The lack of a compilation step for scripting languages tends to help us iterate and ship good-enough solutions quickly, and I&#8217;ll be touching on specific style points for both later. For how sysadmins commonly interact with either language, for glue code or tasks to automate, some guidelines I tend to focus on are equally applicable across both.</p>
<p>To even be able to look at the style used (instead of extracting the code from whatever prod system it&#8217;s running on&#8230;) we need to get it in a more share-able format at least. Although we&#8217;ve already touched on git, you don&#8217;t actually need to know proper care and feeding of a repo to let others see your code at whatever state it&#8217;s in; sites like Pastebin have been around forever, GitLab has a snippets concept, and GitHub gists can even accept multiple files in a single page. GitHub then tracks versions for you mimicking a simplified repo. Some of these sites accept anonymous posts, and some links can be shared &#8216;privately&#8217; (for anyone with the link, not even limited to logged-in users with an account) without being discoverable/indexed by search engines &#8211; handy when wanting to share (<em>ahem</em> hopefully sanitized) log files or debug output, as a more general-purpose usage example.</p>
<h2>It&#8217;s all downhill from here</h2>
<p>Now before I go further, don&#8217;t get me wrong &#8211; my pet peeves are not important, and neither is my opinion. (Until it is, intuition is a real thing dedicated practitioners might develop over time. If at all, ever.) Don&#8217;t take it personally if you blissfully do the things I&#8217;m warning against, or ignore the things I&#8217;m advocating for in your code, even though I obviously consider these important enough to write (at length!) about. Maybe I&#8217;m just &#8216;owning the brand&#8217; of being a snob/elitist, but as with anything in life, you either complete tasks where you only care about the goal/&#8217;it appears to do the thing&#8217;, or you show care about the process and let it influence how you operate (on code). All of this is for as much as you both find relevant, personally, and like, remember/take the time/effort to do. Real scientists <strike>publish</strike> ship.</p>
<p>But, except for XML, use spaces, not tabs. Or be wrong, up to you. :sarcmark:</p>
<h2>Trust no one</h2>
<p>Taking it from the top, she bangs! <code>#!'s</code>! Pay attention to how you&#8217;re finding an interpreter in whatever execution environment it is that the code is supposed to/is intended to be running in, especially if you&#8217;re pointing to a path that&#8217;s actually a symlink. (Like /bin/sh, which actually points to&#8230; /bin/bash&#8230;) With python 3, using an env shebang is definitely a risky click if not exactly YOLOMODE. (Unless you&#8217;re in a venv, natch.) It&#8217;s more reliable to be as explicit/exact as you can; at the least, feel free to explain what e.g. version of python and what 3rd party modules you&#8217;re expecting (if any) in comments or associated docs. Keep in mind when contributing to other projects that they have probably considered this fundamental piece carefully; sometimes you may actually not want to specify an interpreter, for example in &#8216;library&#8217;-like modules loaded by other code. Reasonable people can disagree, but it&#8217;s house/maintainer rules. Or you can go fork yourself. (That might sound aggressive; I mean that you&#8217;re more than welcome to show the world how it&#8217;s done in your own repo &#8211; I&#8217;m happy as long as others can see how you&#8217;re solving a problem at all, if you&#8217;re kind enough to share.)</p>
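<p>To make that concrete, here&#8217;s one hedged sketch of failing fast when the interpreter the shebang found isn&#8217;t what you wrote the script against &#8211; the minimum version here is just an illustrative example:</p>

```python
import sys

# Hypothetical minimum this script was tested against -- adjust to taste.
REQUIRED = (3, 6)

def check_interpreter(required=REQUIRED):
    """Bail loudly if the resolved interpreter is older than expected."""
    if sys.version_info < required:
        raise SystemExit(
            f"need python >= {'.'.join(map(str, required))}, "
            f"got {sys.version.split()[0]} at {sys.executable}"
        )
    return sys.executable  # the exact interpreter that actually ran us

print(check_interpreter())
```

<p>Cheap insurance for when an env shebang resolves to some ancient system python on a box you&#8217;ve never seen.</p>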
<p>Continuing with not trusting PATHs, anytime you&#8217;re essentially shelling out or leveraging a binary, when I&#8217;m the reviewer, I&#8217;m going to expect you to use the full path. (Again, mind symlinks: if you weren&#8217;t already aware, /var, /tmp, and /etc are symlinks into /private on macOS.) This is definitely a style point, but instead of assuming the contents of a defined path are what you expect, (or worse, assuming that e.g. launchd or the agent running this code knows what paths/lookup order has the version you need) doing this is (again) potentially better/more defensive by being explicit/exact. With Apple SIP-protecting some defined paths you may find it less important, and that&#8217;s ok, but they&#8217;re also requiring us to ship our own interpreters more and more. (Targeting a SIP-protected path in your shebang is probably good practice for as long as those paths can be relied on :sweat_smile:)</p>
<p>I usually check for the complete path with <code>which &lt;binaryname&gt;</code>, but also consider the output of <code>type &lt;binaryname&gt;</code> &#8211; if it&#8217;s not a shell builtin, it may not be portable to *nix, on the off chance that&#8217;s important for you to know/handle. And don&#8217;t take it for granted that Apple won&#8217;t move stuff around or delete it on a whim/in a point release. (Yes it was a major OS release, but we all recall how telnet got pulled.)</p>
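<p>In python, a small sketch of resolving the full path up front instead of trusting PATH at call time (&#8216;env&#8217; is just an illustrative binary name):</p>

```python
import shutil

# Resolve a binary to its absolute path once, defensively, rather than
# letting whatever PATH the agent/launchd inherited decide at runtime.
def full_path_or_die(binary):
    """Like `which`, but raises instead of silently returning None."""
    path = shutil.which(binary)
    if path is None:
        raise FileNotFoundError(f"{binary} not found on PATH")
    return path

env_path = full_path_or_die("env")
print(env_path)  # now shell-outs can use the exact path, not a bare name
```

<p>It&#8217;s the same &#8216;be explicit&#8217; move as hardcoding /usr/bin/whatever, just with a loud failure mode when the assumption breaks.</p>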
<p>If you&#8217;re talking to an API or expecting specific schema for a data structure your code parses, consider putting your assumption of that criteria in comments/docstrings, a sanitized example is even more straightforward. That leaves a paper trail/gives context for why you operated on it in a specific way (and what was valid input to your program at some point in time), with the bonus feature that people can take that mockup and replicate your experience/confirm your assumptions (even without access to the same system). Same for links to API docs or other references that were used to understand the system being interacted with. It&#8217;s ideal if someone other than just the creator can follow the references and see the &#8216;shadow&#8217; inputs operated on by the design, this really shortens ramping-up time to reason about whether the decisions made were the best way to approach the problem.</p>
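<p>A quick sketch of what that might look like &#8211; the payload shape and field names here are made up purely for illustration, not any real API&#8217;s schema:</p>

```python
# Bake a sanitized example of the expected payload into the docstring so
# reviewers (and future-you) can replicate without access to the real system.
def parse_device(payload):
    """Extract the fields we care about from a device record.

    Sanitized example of the (hypothetical) upstream API response:
        {"serial": "C02XXXXXXXXX", "os_version": "13.0.1", "mdm_enrolled": true}
    """
    return {
        "serial": payload["serial"],
        "compliant": payload.get("mdm_enrolled", False),
    }

sample = {"serial": "C02XXXXXXXXX", "os_version": "13.0.1", "mdm_enrolled": True}
print(parse_device(sample))  # → {'serial': 'C02XXXXXXXXX', 'compliant': True}
```

<p>That sample doubles as valid test input, which is exactly the &#8216;replicate my experience&#8217; paper trail described above.</p>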
<h2>Easiest computer to throw out a window: yours (ideally into the ocean)</h2>
<p>You don&#8217;t run random code from the internet without at least attempting to confirm it does what you need, I hope/presume? The same goes with boilerplate/scaffolding/auto-generated stuff that bootstrapping tools may &#8216;crap up&#8217; your repo with. Some of it can certainly be helpful when you&#8217;re getting started, but it&#8217;s all code you&#8217;ll end up owning, and you want to strike a good balance of &#8216;I think I have to include at least this amount of bloat to ship&#8217; with &#8216;now that I understand enough of the moving parts, let&#8217;s strip this back to focus on what we actually need/use&#8217;. Convention is a good thing to follow when collaborating with a larger community that imposes boilerplate on us or wants us to grapple with what they consider best practices, but otherwise let&#8217;s leave it as close to brass tacks as possible so we don&#8217;t have trouble remembering where we actually started making the code do things. Unused functions that are mostly stubbed out or exercise nothing should be purged until activated. Ideally, everything we commit to git should have a reason to exist <em>that we can explain</em>.<br />
To mix all the metaphors, code both rots and is flammable, don&#8217;t have too much of it in one place. Teach a person to code, they&#8217;re on fire for a lifetime, amirite?</p>
<p>As part of these style nitpicks, this may not be as valuable/important to you/your team, but consider removing extra/extraneous comments, verbose manual logging, or debug echo/print() statements (unless gated by arguments/a flag and/or logging level) &#8211; ideally debug would stick around in something like integration tests that help you validate assumptions/confirm serviceability, but elsewhere in your code it could leak secrets or otherwise add unnecessary overhead/bloat to sift through when put into service/deployed to &#8216;prod&#8217;.</p>
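<p>One way to gate that chatter in python, sketched with a hypothetical logger name:</p>

```python
import logging

# Debug chatter gated by logging level instead of bare print()s, so it can
# be flipped on while diagnosing and stays quiet (and secret-free) in prod.
logger = logging.getLogger("myscript")  # hypothetical script/logger name
logger.setLevel(logging.INFO)           # flip to logging.DEBUG when debugging

def fetch_token():
    token = "s3cr3t"  # stand-in value, not a real secret
    logger.debug("raw token: %s", token)  # only ever emitted at DEBUG level
    logger.info("token fetched, length %d", len(token))  # safe to keep around
    return token

fetch_token()
```

<p>Same code path in dev and prod; only the level (and thus what could leak) changes.</p>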
<h2>Can&#8217;t believe I have to say this</h2>
<p>I think besides being overtly&#8230; &#8216;particular&#8217;/borderline-cargo-culty in points above, I&#8217;ve been a benevolent <strike>rubocop</strike> dictator. (All the metaphors!) Time for bad cop.<br />
I don&#8217;t know who needs to hear this (jk, you unconscionable no-good-niks, you know who you are)&#8230;</p>
<p>Log.</p>
<p>It&#8217;s usually not hard and will save your butt later, it&#8217;s certainly cheaper and more practical than a time machine or &#8216;god mode&#8217;. Put whatever output you generate somewhere discoverable, e.g. in /private/var/log for system-wide or ~/Library/Logs, and check out the python and bash follow-up post for examples. If you then have those logs shipped/aggregated, know what value it will actually provide to put stuff at different &#8216;levels&#8217; &#8211; &#8216;info&#8217;, &#8216;warning&#8217;, &#8216;error&#8217;, &#8216;debug&#8217;, etc., are those labels too numerous and too open to interpretation for your team? Maybe just assume only qualified individuals even look at/read this stuff anyway, so shove anything helpful in there and let your log ingest system discard whatever you don&#8217;t need, maybe? Consider writing json blobs if appropriate (or some other parseable format, just throw us a bone with UTC timestamps and delimiters or something, especially if you want to quickly turn the corner from shipped data to metrics). Got state you want to track locally? Consider shoving stuff into a database format, primarily if you are generating data that would benefit from ledger-style operations. Just leave a record somewhere, optimized for retrieval!<br />
For more &#8216;vanilla&#8217; logs, rotate and purge using system-native facilities (logrotate for Linux, apple system logging conf&#8217;s on Mac, etc. Don&#8217;t be like vanilla Docker.) Or don&#8217;t purge! If you don&#8217;t think it&#8217;ll be more than a couple hundred MBs in your average computer lifespan, maybe who cares, maybe disk is cheap and plentiful enough and this historical data local is helpful to have? Hey, if you break it you get to keep the pieces. (Or Apple will just helpfully dump it for you on OS point upgrades, surprise!)</p>
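<p>A minimal sketch of the &#8216;throw us a bone&#8217; kind of log line &#8211; JSON with a UTC timestamp, field names invented for illustration:</p>

```python
import json
from datetime import datetime, timezone

# One parseable JSON record per line, UTC timestamp included, so whatever
# ingests it later can turn the corner from shipped data to metrics.
def log_line(event, **fields):
    record = {"ts": datetime.now(timezone.utc).isoformat(), "event": event, **fields}
    return json.dumps(record)  # in real use, append this to a file under /var/log

line = log_line("munki_run", status="success", pkgs_installed=3)
print(line)
```

<p>Every line stands alone, so a log shipper (or a desperate grep at 2am) can parse it without context.</p>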
<p>Continuing, ffs, go out of your way to trust (and confirm your code validates) TLS certs when transiting the series of tubes. It&#8217;s worth it not just because it encrypts bits on the move, it also assists in validating the host you&#8217;re talking to. Another side effect is, when it&#8217;s wrong and due to our inspection it gets fixed, we&#8217;re being part of the solution. Calling protecting people&#8217;s privacy a side effect is almost forgetting <a href="https://www.usenix.org/system-administrators-code-ethics">the oath</a> we all took as sysadmins.</p>
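<p>In python at least, the stdlib defaults already do the right thing &#8211; the sin is opting out of them; a tiny sketch:</p>

```python
import ssl

# The default context enables both hostname checking and certificate
# verification. The thing to avoid is verify=False / unverified-context
# style shortcuts that turn all of that off.
ctx = ssl.create_default_context()
print(ctx.check_hostname)  # hostname checking is on
print(ctx.verify_mode)     # certificates are required, not optional
# e.g. urllib.request.urlopen(url, context=ctx) will refuse a bad cert
```
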
<p>Are you pulling random dookie over the network to then pass to processes running as root? Checksum it. It&#8217;s literally the least you could do. Don&#8217;t let you and me get joined at the MacMule. Otherwise you may be able to join that startup working on Privesc-as-a-Service.</p>
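<p>It really is the least you could do; a sketch (the expected digest would of course come from a trusted channel, not shipped alongside the download):</p>

```python
import hashlib

# Verify a sha256 before handing downloaded bytes to anything privileged.
def verify_sha256(data, expected_hex):
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch: {actual} != {expected_hex}")
    return data

payload = b"some downloaded installer bytes"          # stand-in download
good = hashlib.sha256(payload).hexdigest()            # trusted reference digest
verify_sha256(payload, good)                          # passes silently
```

<p>Fail closed: a mismatch raises before anything root-y ever touches the bytes.</p>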
<p>And finally, to wrap up Things You Should Already Know: don&#8217;t put api keys, passwords, or secrets-in-general in plaintext or trivially-reversible formats. Options are available, but for crissakes obfuscation shouldn&#8217;t be one of them &#8211; truffle hogs look for that stuff, you&#8217;re not being clever, herr hax0r. Private repos are not an excuse; what happens in obscurity ends up dehydrated on a casino rooftop in Vegas. Environment variables? Decoupling a credentials file? Those are (arguably) Less Turrible, but Still Not Great<img src="https://s.w.org/images/core/emoji/15.0.3/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;" />, but again I&#8217;m not in security and can&#8217;t tell what your threat model is (although hopefully it&#8217;s not <a href="https://vimeo.com/95066828">Mossad</a>). Is SAML hard? Yes. Should we all find ways to integrate with it when appropriate? It&#8217;s often the only game in town, take yer lumps. Using libraries written/evaluated by crypto pros? Yes pls. I&#8217;d add that PKI (e.g. client certs signed by a trusted CA) is another potential avenue to handle authentication/identification which you can therefore leverage as an ingredient when at least authorizing access, but don&#8217;t listen to me! I&#8217;m a rando on the interwebs. (They say, ~1800 words later.)</p>
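<p>A sketch of that &#8216;Less Turrible&#8217; env-var option &#8211; the variable name is hypothetical, and the point is to fail loudly rather than fall back to anything hardcoded:</p>

```python
import os

# Read a secret from the environment; refuse to run rather than fall back
# to a default baked into the source. "MYAPP_API_KEY" is a made-up name.
def get_api_key(var="MYAPP_API_KEY"):
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} not set; refusing to run without it")
    return key

# Demo only -- in real life your secrets tooling/launch environment sets this.
os.environ["MYAPP_API_KEY"] = "example-token"
print(len(get_api_key()))  # log metadata about the secret, never the secret
```
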
<p>Tune in for our next installment where we get into the actual programming language slingin&#8217; specifics!<br />
Got other general coding &#8216;best practices&#8217; thoughts? Use your tweet horn to holler at me!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2020/04/30/code-wranglers-speak-in-tongues-generally/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What we talk about when we talk about code wranglin&#8217;</title>
		<link>https://www.afp548.com/2020/04/20/what-we-talk-about-when-we-talk-about-code-wranglin/</link>
					<comments>https://www.afp548.com/2020/04/20/what-we-talk-about-when-we-talk-about-code-wranglin/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Mon, 20 Apr 2020 13:23:00 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387605</guid>

					<description><![CDATA[Part Two, Git Recommendations Revisited. (Part One is Here) Since we&#8217;re talking code, we should come to grips with the ungainly topic of version control itself before going much further, which in 2020 still means git. On this site (~6 years ago! with not the most generous title) we went [&#8230;]]]></description>
										<content:encoded><![CDATA[<h2><em>Part Two, Git Recommendations Revisited. (Part One is <a href="https://www.afp548.com/2020/04/13/code-wranglin-part-one/">Here</a>)</em></h2>
<p>Since we&#8217;re talking code, we should come to grips with the ungainly topic of version control itself before going much further, which in 2020 still means git. On this site (~6 years ago! with not the most generous <a href="https://www.afp548.com/2014/08/05/look-less-silly-on-github/">title</a>) we went over what could be considered a common-enough basic workflow and tips for git beginners; this time around I&#8217;m going to point out a more team-focused workflow and newer developments or caveats about that post:</p>
<p>Originally I encouraged forking into your user&#8217;s namespace/account instead of potentially piling up branches, or having to be explicitly added to the destination repo so you can create them there. This helped folks who often forgot to click the box to delete their short-lived branches on merge, which the web tools can even clean up in the (remote) forked space for you. I recommended this guidance as good hygiene, but there&#8217;s one caveat: automated testing/CI/CD runners may only be available on the main repo/project, so if it&#8217;s configured that way then sending a pull/merge request could trigger tests to be run on the remote/&#8217;userspace&#8217; repo, and it could complain/not work. If the runner can&#8217;t be configured for individual users it&#8217;s no big deal to branch &#8216;locally&#8217; to the repo, especially if common &#8216;guard rails&#8217; like approvals/protected branches are in place. If you haven&#8217;t run into that as an issue, the overarching guidance remains. (Or if it&#8217;s too obscure to understand what I&#8217;m warning against, like, y&#8217;know, git is weird, please follow up with us in the comments or on the Twitters.)</p>
<p>One of the first times I really started to hate git&#8217;s user experience was squashing commits or trying to not include updates/&#8217;catch up&#8217; commits in a fork/branch in my user account/space. You lose no &#8216;internet cool&#8217; points if you delete your fork entirely in the web GUI and start over from wherever the main branch you&#8217;d previously forked off of is at now, but there&#8217;s also <a href="https://rick.cogley.info/post/update-your-forked-repository-directly-on-github/">this post</a> about catching up your fork in a way that people have had success with.</p>
<p>If your team tracks work in a bug tracking/ticketing system, it may provide good context and extra metadata (or just be convention) to prefix your branch names and/or commit messages with the related identifier. (Some Jira-specific integrations I&#8217;ve seen actually rely on it and can provide context/feed metrics.) No, the code is not the documentation; unlike my obscure elusive magical unicorn comments, even the most verbose/perfectly-written comments are probably not enough context&#8230; This is just to say when an out-of-band communication method is preferred for a project, please consider using it. Wikis similarly exist for a reason.</p>
<h2>Quick Tangent</h2>
<p>Definitely don&#8217;t be like me and call &#8216;everything my beginners mind gets tripped up on&#8217; a &#8216;frequently asked question&#8217;, although these getting started questions could use recording somewhere in my opinion. Imagine someone new to your team, new to the language, the framework, new to coding/version control, and help them in a discoverable way like you would have appreciated!</p>
<p>While we&#8217;re talking docs, something that previous teams I worked with in sysadminery aspired to (or at least took the time to think about actually doing eventually) is straight-up test plans &#8211; formal documentation on how to exercise the code, how to pull together the moving parts or get access to applicable test data/accounts/systems. If it&#8217;s a trickier deployment, what&#8217;s the (set of) success metric(s), rollback steps to follow, etc. This may actually be coupled with the idea that your design document (something else we&#8217;ll cover in a subsequent post) led to these decisions. Side benefit, it can become a preview of potential maintenance tasks to support a measurable <a href="https://landing.google.com/sre/sre-book/chapters/service-level-objectives/">service-level objective</a>. If we want to get buzz-wordy. (I promise, no story points or other stuff I haven&#8217;t actually been required to show &#8216;agility&#8217; with! Got a reason why scrummy work is helpful/horrible? Holler at us on the Twitters or in the comments!)</p>
<p>Back to the-actual-name-is-Linus&#8217;s-Monster, there&#8217;s all sorts of opinions on styles and conventions you could use for the commits themselves, and the first reference I recall coming across was <a href="https://chris.beams.io/posts/git-commit/">this one</a>. The gist is, code diffs already explicitly say what changed, so you&#8217;re encouraged to think about it from a workflow perspective: the specifics are less important than <em>why</em> it changed over time: imagine scanning the commit history to notice when a &#8216;shift&#8217; occurred, when intent or functionality shifted. Issue tracker/ticket notes and the full body/description of a commit are where you can provide additional context. Git was designed to use email as its primary communication method, all of this back-and-forth should be considered written communication and approached accordingly. Or maybe you don&#8217;t want to think about it too much, all of this is to say record more of what you were trying to achieve (to use a Neagle-ism), less of how you mechanically did it. (Typo commits can get rolled up and squashed anyway.)</p>
<p>Another less commonly discussed topic is how often to commit, what or how often to squash, and I guess we can relate it to why we&#8217;re putting this under &#8216;revision control&#8217; in the first place. Just like <em>why</em> you backup your files &#8211; it&#8217;s for the restore, right? Only your team knows how much value seeing the code&#8217;s evolution over time actually has, keep in mind tags and &#8216;topic&#8217; branches that you use while working on a specific release/feature/issue may provide a good-enough record of signposts in time that will still be around in retrievable history, even if every iteration isn&#8217;t in the primary branch those events are formulated in. This is all to say I personally tend to not see value in making granular commits e.g. per file changed, nor does every commit from the topic branch require preserving when merging; I often squash in the GUI and not worry what iterations were involved. It&#8217;s almost as if the mainline history should consist only of merge &#8216;events&#8217; and the state of commits in branches don&#8217;t matter until after it gets eyes on/potentially altered in a review. (&#8216;Draft&#8217;/WIP/Work In Progress designations are similarly used to conserve on commits that need to get merged with a topic branch into the &#8216;main&#8217; branch.)</p>
<p><a href="https://twitter.com/FiloSottile/status/1248110291751231488">Maybe you don&#8217;t need to gpg/pgp-sign commits</a> (via <a href="https://twitter.com/wikiwalk">@groob</a>). Maybe you work in &#8216;security&#8217; and want to show you&#8217;re tall enough to ride/can attest to managing some extra sort of complication for benefits you perceive as important. I&#8217;m basically doing it for the <strike>&#8216;gram</strike> internet cool points fo sho, via a <a href="https://krypt.co/">system/app</a> that may already be abandonware (like <a href="https://gpgtools.org/">GPGTools</a> tends to look like at times with slower new-OS-release adoption).</p>
<p>In web GUIs/systems like Gitlab and GitHub, committing tweaks to the branch you sent a pull/merge request from will automagically update the changelist/merge/pull request accordingly. Just in case that&#8217;s news to some. Also specific to those webapps, reviewers can make code change suggestions to be accepted/applied by the author right there in the web interface while commenting on lines/reviewing, which is often ideal &#8211; it&#8217;s certainly more collaborative, since the reviewer can spell out exactly how they expect requested changes to be implemented.</p>
<p>In our next post we&#8217;ll get into semi language-agnostic style generalities, stay tuned!<br />
Got other <strike>hot goss</strike>git tips/code review thoughts? Use your tweet horn to holler at me!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2020/04/20/what-we-talk-about-when-we-talk-about-code-wranglin/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Code Wranglin&#8217;, A Series</title>
		<link>https://www.afp548.com/2020/04/13/code-wranglin-part-one/</link>
					<comments>https://www.afp548.com/2020/04/13/code-wranglin-part-one/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Mon, 13 Apr 2020 13:31:00 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<category><![CDATA[Chelsea Troy]]></category>
		<category><![CDATA[client platform engineer]]></category>
		<category><![CDATA[code]]></category>
		<category><![CDATA[code review]]></category>
		<category><![CDATA[coding]]></category>
		<category><![CDATA[contributing]]></category>
		<category><![CDATA[Daniel Kahneman]]></category>
		<category><![CDATA[demGoogs]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[macadmin]]></category>
		<category><![CDATA[Princess Bride]]></category>
		<category><![CDATA[review]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<category><![CDATA[Thinking Fast and Slow]]></category>
		<category><![CDATA[yolo]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387589</guid>

					<description><![CDATA[Welcome to a series about wranglin&#8217; cats code. Some of us are inclined to care about titles, and while I&#8217;m fine being a MacAdmin, Client Platform &#8216;Engineer&#8216; is a more recent term recruiters want to plug into LinkedIn. That engineer part is meant to differentiate us from the apes sysadmins, [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Welcome to a series about wranglin&#8217; <strike>cats</strike> code. Some of us are inclined to care about titles, and while I&#8217;m fine being a MacAdmin, Client Platform &#8216;<strong>Engineer</strong>&#8216; is a more recent term recruiters want to plug into LinkedIn. That engineer part is meant to differentiate us from <strike>the apes</strike> sysadmins, which should already have been on a path to automation domination and therefore probably sling code like we might aspire to&#8230; Anyways, being even tangentially related to Software Engineering as a discipline has the connotation that we&#8217;ll show our code to someone eventually and want people to collaborate on it, or give it a sniff before it blows things up in &#8216;prod&#8217;. When we do that with others often enough, seeing various code suggestions and workflows, it starts to feel like it would be good for everyone to collaborate and standardize on a minimally-viable-process so us code-butchers can ship better scripts &amp; stuff.</p>
<p>It&#8217;s not helpful to be opinionated AND inconsistent, so I wanted to&#8230; codify some guidelines, and an approach that the folks in my team (and hopefully the community at large) can use as a starting point to collaborate on/argue about in the service of getting us all on the same page regarding style/design &#8216;quality&#8217;/etc. Even open source projects/loosely affiliated groupings of individuals shipping code may benefit from writing this down, but I didn&#8217;t have a good example of how to approach the task for us sysadmin-y types in particular. (If you have references you&#8217;ve found/like for any of the topics in this series, please share on Twitter or in the comments!) It turns out this is a big enough thing for several posts, so I&#8217;m breaking it up and will cover the scripting languages MacAdmins commonly use later on. &#8216;Code reviews&#8217; as a concept may be new to some, so that&#8217;s where I&#8217;ll start in this post.</p>
<h2>Why We&#8217;re Here</h2>
<p>The point of code review is not to &#8216;rubber stamp&#8217; as a mindless formality, nor is it to just confirm it kindof looks like a familiar file extension the author wants to spread blame for prior to shipping into prod. In more rigorous environments, it is also not a given that the reviewer will fully grasp the problem the code is attempting to solve, or be able to exercise all parts/functions in the code, or safely run it against &#8216;production&#8217; data/live systems to 100% validate it,[1]</p>
<p>the point of code review is to have someone else spot-check the code, to&#8230; y&#8217;know, read it, every line, and to make sure it passes some degree of muster for maintainability/whole-team support/adoption that could achieve something almost like endorsement when it ships. The reviewer should display empathy and hopefully have TIL (today I learned) moments, they should reason about the design used and offer alternatives, constructively argue when individual nitpicks clash and help the team collaborate on and grow the style guide with things that are important enough to at least bury hatchets about,[2]</p>
<p>and the changes proposed in the review can be followed through/integrated into the code before/after shipping if/when we have the time/see the value. Because rhetoric is all well and good, but as they say, real artists ship.</p>
<ol>
<li>BTW, &#8216;correctness&#8217; by the original author is not taken for granted, there&#8217;s ways to help prove it to the reviewer we&#8217;ll get into later, but it&#8217;s the author&#8217;s name in <code>git blame</code> who is considered primarily responsible.</li>
<li>Sometimes a style guide contains headstones for your litany of hangups that got sacrificed for team unity/sanity/productivity.</li>
</ol>
<p>The statement right before these sort-of footnotes is important to expand on, because developers/traditional &#8216;engineers&#8217; have different incentives and expectations set on them than sysadmins who ship code. Many startups can get years down the road before the YOLO/eff-it-ship-it attitude starts losing its luster and editing code on the live production instance actually causes enough of an outage that business processes are interrupted. Or maybe you get interested in verification processes when you&#8217;re getting audited/acquired, or the lack of rigor just becomes unseemly. For organizations that don&#8217;t value computers, or have low-enough standards, or a high-enough tolerance for pain, maybe that time never comes. For MacAdmins, we know that on the technology treadmill there are constant demands to ship product and anticipate issues. Most (if not all) of our tasks are not purely intellectual exercises, and our managers are incentivized to help us ship better than &#8216;working prototypes&#8217; more often than not. At least, after a company&#8217;s culture evolves to a point that it reaches a certain scale (or plateau) and decides to emphasize stability over velocity.</p>
<p>Ideally, ultimately, we hope things ship with increasing <strong>reliability</strong>. And in the process of getting muscle memory going through a process, we hope reviewers understand both what was written and why, agree that the implementation should ship, potentially help introduce efficiencies, and, where the REAL value comes from, (which is the reason management encourages us to do this in the first place,) reduce/prevent bugs/unwanted side effects.</p>
<p>Opposition to taking the time to check the code into version control in the first place, let alone do a review, usually starts with &#8216;well if there&#8217;s an emergency&#8230;&#8217; Define what an emergency is to your team, otherwise they get the Princess Bride quote, or the &#8216;lack of planning on your part doesn&#8217;t constitute an emergency on mine&#8217; <a href="https://everythingsysadmin.com/books.html">tough love</a>. Getting everyone to stop being independent operators is not easy, so management needs to be on board and lower the (usually false) sense of urgency. We&#8217;ll talk about asynchronous tips below, but anecdotally, the way a distributed team stretches turnaround expectations actually helps with this. (It also helps when folks <strong>aren&#8217;t</strong> chained to Slack or don&#8217;t have Zoom sessions open all day, because often code design benefits from your brain operating in <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow">system two</a> and the stimulation of a busy chat with instant gratification tends to prevent you from being idle at regular intervals.)</p>
<h2>It&#8217;s dangerous in the code base, take this</h2>
<p>I&#8217;ve already droned on waving my arms for a bit, but needless to say Google shipped their <a href="https://google.github.io/eng-practices/review/">code review</a> and <a href="http://google.github.io/styleguide/">style</a> documentation years back as well-edited references. An important point they make about reviews is to call out the good things you see as a reviewer, since a review can otherwise read as pure critique, which the humans submitting the code can feel bruised by, especially in public/on the official record. Fewer people are worried about breaking the computers (eventually we all have to <a href="https://en.wikipedia.org/wiki/Escape_from_L.A.">Escape from LA</a>). They get understandably annoyed if you break the malleable humans.</p>
<p>I also benefited from this <a href="https://chelseatroy.com/2019/12/13/async-collaboration-1-submitting-pull-requests/">series</a> about being on a distributed, asynchronous/non-time-zone-overlapping team landing code together. It does a great job illustrating that this is all about context transfer/communication: taking action while thinking about the other side of the coin and our shared goals. AFP548 lets me publish text in English on the internet, but only you as the reader can judge if I made my point/got across what I set out to say. (Communication is judged on the receiving end, <a href="https://www.youtube.com/watch?v=QhXJe8ANws8">exhibit A</a>.) Positive code review experiences are the result of good back-and-forth communication.</p>
<p>Out in the real world for us sysadmins where reviews may be less codified, if you&#8217;re lucky enough that the project you&#8217;re sending code to already has contributing guidelines publicly stated (e.g. in a CONTRIBUTING.md), then that&#8217;s your defined jumping-off point. We often can&#8217;t choose who reviews our code nor express tone clearly in text from either side of the interaction, so each has to provide context and be as welcoming as possible. As the author, be equal parts ego-less and greedy &#8211; showing you really care that others help you is a great look. If a review process is still in flux, hopefully you find these concepts helpful to mull over, and we can all have fun painting bike sheds later when we get into code style details.</p>
<p>Up next, it&#8217;s unavoidable to talk about the version control tooling itself. TTFN!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2020/04/13/code-wranglin-part-one/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Name Goes Here, code bass=&#8221;clash&#8221;</title>
		<link>https://www.afp548.com/2020/03/02/name-goes-here/</link>
					<comments>https://www.afp548.com/2020/03/02/name-goes-here/#comments</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Mon, 02 Mar 2020 13:53:00 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<category><![CDATA[computername]]></category>
		<category><![CDATA[frogor]]></category>
		<category><![CDATA[hostname]]></category>
		<category><![CDATA[localhostname]]></category>
		<category><![CDATA[macOS]]></category>
		<category><![CDATA[scutil]]></category>
		<category><![CDATA[silly mgmt tools]]></category>
		<category><![CDATA[silly security agents]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387574</guid>

					<description><![CDATA[On the Twitters not too long ago the question was posed: what topic could you give a 20 minute talk about without any prep? macOS Computer Naming is a thing I can&#8217;t find a great canonical resource for on the internets, so here&#8217;s an attempt to fill that gap, since [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>On the <a href="https://twitter.com/JenMsft/status/1231085133807022081">Twitters</a> not too long ago the question was posed: what topic could you give a 20 minute talk about without any prep? macOS Computer Naming is a thing I can&#8217;t find a great canonical resource for on the internets, so here&#8217;s an attempt to fill that gap, since the topic would potentially be shorter than 20 minutes, but as it&#8217;s said: they go low, we go deep.</p>
<h1>What&#8217;s in a name?</h1>
<p>Names are just labels. Your inventory and/or management system could contain an identifier like the serial number, hardware UUID, MAC address, and/or device certificate to more accurately track a computer (and help with linking it to a user), so IMO it&#8217;s not necessary to stop people from renaming their computer in System Preferences in general. We all know that by default a Mac names itself after its model, and a home computer will update that name with whichever user was created in Setup Assistant (e.g. Testy McTesterston&#8217;s Compy386).</p>
<p>It <em>could</em> be that your management tools are&#8230; silly, or security agents only pick up that easily-changed name, and therefore you need to extract whatever other unique identifier may be stored locally on the client and ship it to the aforementioned mythical inventory tool for correlation. What those vendors of silly tools might need a primer on is the three possible labels we could be using for &#8216;name&#8217; information on a Mac, all queryable by scutil, a mnemonic for system configuration utility. Let&#8217;s do it!</p>
<ol>
<li>ComputerName &#8211; that&#8217;s the label we&#8217;ve been referring to above, accessible from the System Preferences → Sharing preference pane. Some docs have historically referred to it as the &#8216;user-friendly&#8217; name, and one claim to fame is that Apple Remote Desktop leverages it, e.g. in school lab environments.</li>
<li>LocalHostName &#8211; this designates the computer on a local broadcast domain or subnet/vlan. This is the name visible to peers through services like AirDrop and File Sharing, and it is critical to Zeroconf/Bonjour (also formerly Apple-branded as Rendezvous) addressing, so services can benefit from name resolution even when the device only has a link-local address or otherwise doesn&#8217;t have DHCP.</li>
<blockquote><p>  Fun Fact!<br />
Ever see &#8216;Computer Name (2)&#8217; or similar numeric increments in parentheses? That could be because a resolution lookup returned the local name as already taken, and Apple&#8217;s framework told the computer to keep incrementing until it found an unused increment. In rockier releases of macOS, it was thought to occur even when both WiFi and Ethernet were simply connected to the same network&#8230;</p></blockquote>
<p>When setting &#8216;ComputerName&#8217; in System Preferences → Sharing, the change is inherited by this value, but you&#8217;ll see it append &#8216;.local&#8217; in the text below that field.</p>
<li>HostName &#8211; by default, this is not set on out-of-the-box computers, and in testing we see the <code>hostname</code> command inherits the LocalHostName as a substitute. That default &#8216;unset&#8217; state would result in the bash or zsh prompt matching the LocalHostName; if HostName <em>is</em> set, both the prompt and the <code>hostname</code> command would use that instead. This is also why opening a terminal/command prompt may show a random name, pulled via some quirk of name resolution on that particular network.</li>
</ol>
<pre><code class="bash">~ allister$ scutil --get HostName
HostName: not set
air:~ allister$ scutil --get ComputerName
air
air:~ allister$ hostname
air.local
air:~ allister$ sudo scutil --set LocalHostName michaeljordon
air:~ allister$ hostname
michaeljordon.local

Last login: Tue Nov  5 15:16:11 on ttys000
michaeljordon:~ allister$
</code></pre>
<p>In many business settings, all three names are set to the same value at deploy time, and management tools sometimes enforce this. Some prefer to base device certificate names off the UUID or serial or a combination of derived values when applicable.</p>
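<p>If you do standardize names at deploy time, a minimal sketch of the idea might look like this. The <code>sanitize_name</code> helper and the dry-run echo of the scutil calls are illustrative assumptions, not any vendor&#8217;s tool; LocalHostName in particular only tolerates letters, digits, and hyphens.</p>

```shell
#!/bin/sh
# Derive one label and show the scutil calls that would apply it to all
# three names. Echoed rather than executed, since the real thing needs root.
sanitize_name() {
  # LocalHostName only allows A-Z, a-z, 0-9, and hyphens; DNS labels cap at 63 chars.
  printf '%s' "$1" | tr -cd 'A-Za-z0-9-' | cut -c1-63
}

BASE_NAME="$(sanitize_name "${1:-placeholder-name}")"

for KEY in ComputerName LocalHostName HostName; do
  echo "sudo scutil --set $KEY $BASE_NAME"
done
```

<p>Feed it whatever your inventory system says the machine should be called, and pipe the output to <code>sh</code> once you trust it.</p>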
<p>Thank you for coming to my Ted Talk, and belated welcome to 2020!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2020/03/02/name-goes-here/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Stuck in the past? Your time may get warped, too</title>
		<link>https://www.afp548.com/2018/03/05/stuck-in-the-past-your-time-may-get-warped-too/</link>
					<comments>https://www.afp548.com/2018/03/05/stuck-in-the-past-your-time-may-get-warped-too/#comments</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Mon, 05 Mar 2018 12:26:33 +0000</pubDate>
				<category><![CDATA[Odds and Ends]]></category>
		<category><![CDATA[Puppet]]></category>
		<category><![CDATA[10.12]]></category>
		<category><![CDATA[10.13]]></category>
		<category><![CDATA[command line]]></category>
		<category><![CDATA[eclecticlight]]></category>
		<category><![CDATA[High Sierra]]></category>
		<category><![CDATA[NTP]]></category>
		<category><![CDATA[Sierra]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387558</guid>

					<description><![CDATA[If the recent kerfuffle among autopkg folks (over GitHub shutting off TLS less-than 1.2) is any indication, a lot of people are still on 10.12.6. If I may editorialize for a second, this is probably contributed to by Apple&#8217;s less-than-ideal planning, execution, and communication around the High Sierra release. What [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>If the recent kerfuffle among autopkg folks (over GitHub <a href="https://github.com/autopkg/autopkg/issues/408">shutting off TLS less-than 1.2</a>) is any indication, a lot of people are still on 10.12.6. If I may editorialize for a second, this is probably contributed to by Apple&#8217;s less-than-ideal planning, execution, and communication around the High Sierra release.</p>
<p>What makes me come out of an unintentional blogging sabbatical to talk about 10.12? Well, a pretty widespread issue we saw recently: time went out of sync all of a sudden, across a bunch of networks and even with a known-good image. NTP as a protocol is designed to tolerate loss and be cheap to query, so it should be resilient to failure, to the point that you may not notice it skewing for quite some time, and querying it even shows a &#8216;resolution&#8217; or tolerance that the various command line tools report to 5+ decimal places. Some implementations can&#8217;t even correct themselves if you&#8217;re off by more than 15 minutes or so (1000 seconds, to be exact). Apple has had its ups and downs, from the only other time a silent patch was pushed (previous to 10.13.2&#8217;s passwordless-root bug) to treating it like DNS resolution and rewriting its mechanisms every couple of releases. (Remember lookupd? AFP548 alumni have blogged about DNS confusion for <a href="https://www.afp548.com/2005/07/17/using-scutil-to-set-dns-server/">quite some time</a>.) I was particularly flummoxed when this issue cropped up again because it had previously occurred for all of the signage iPads in one of our offices. You could directly reproduce the issue by trying <code>ntpq -p time.euro.apple.com</code> and getting <code>time-ios.g.aaplimg.com: timed out, nothing received</code>. Nothing was being explicitly blocked or&#8230; delayed at the firewall, but in that instance Apple&#8217;s NTP servers were not able to send a response back to the device due to an issue with how the wireless LAN controller was configured.</p>
<p><img decoding="async" src="https://www.afp548.com/wp-content/uploads/2018/03/Screenshot-2018-03-04-16.02.59.png" alt="Time.aint" /></p>
<p>In this case, however, we were seeing <code>ntpq: read: Connection refused</code>, as if the process wasn&#8217;t running. But a trip to launchctl as sudo would disagree, telling you that the job <em>was</em> loaded/running, so&#8230; what gives? There are a few <a href="https://apple.stackexchange.com/questions/117864/how-can-i-tell-if-my-mac-is-keeping-the-clock-updated-properly">stackexchange threads</a> that will tell you about the mind-blowingly deep new-math Apple&#8217;s been using to just win so hard, dunking on us with the swiftness by never meeting a good-enough that it couldn&#8217;t rewrite, but none of it seemed applicable anymore &#8211; ain&#8217;t no pacemaker in /usr/libexec on my systems, and knowing that timed is a completely new animal as of 10.13 anyway meant I wasn&#8217;t too hot on the idea of learning older stuff now.</p>
<p>Luckily Mr. Oakley wrote <a href="https://eclecticlight.co/2015/03/08/time-gentlemen-please-ntp/">this post</a>, which pointed me to the /usr/libexec/ntpd-wrapper&#8230; shell script (:jackie:) which Sierra still relies on. Inspect the loaded launchd job at /System/Library/LaunchDaemons/org.ntp.ntpd.plist and you&#8217;ll notice it has a KeepAlive/PathState key, meaning the job only runs as long as /private/etc/ntp.conf exists. The contents would be something like <code>server time.euro.apple.com</code>, but for whatever reason over the past few weeks that file went missing entirely across hundreds of computers under our care. Putting it back in place immediately synced the clock. To the fixing machine! We use a lot of ugly Ruby in an elegant way via facter to do site detection, and even already had a &#8216;location&#8217; fact for use with Simian that told us the overall region (although Apple only shows 3 NTP server choices, and we consider India separate). Just detect if the file is missing, and if so, write the appropriate server into that path, and Bob&#8217;s Your Uncle, Muriel&#8217;s Your Aunt. But then we hit another stumbling block &#8211; SIP!</p>
<p>You can&#8217;t echo text to /private/etc. You <em>can</em> mv files to /private/etc. Because, y&#8217;know, reasons. Hope this helps whoever out there is still clinging to 10.12 become less crazy. (But really, we just have an appliance-type use case that we haven&#8217;t migrated to 10.13 yet, everyone accepts the fact we&#8217;re on borrowed time and should upgrade ASAP. Cheers!)</p>
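<p>For the record, the shape of our fix boils down to something like the sketch below. The function names are mine, not facter&#8217;s, and the region-to-server mapping just mirrors the three choices Apple surfaces plus a fallback:</p>

```shell
#!/bin/sh
# Recreate a missing ntp.conf. Writing the file elsewhere and mv-ing it into
# /private/etc works where direct redirection gets blocked, hence mktemp + mv.
server_for_region() {
  case "$1" in
    euro) echo "time.euro.apple.com" ;;
    asia) echo "time.asia.apple.com" ;;
    *)    echo "time.apple.com" ;;
  esac
}

restore_ntp_conf() {  # usage (as root): restore_ntp_conf /private/etc/ntp.conf euro
  dest="$1"; region="$2"
  [ -f "$dest" ] && return 0   # file survived, nothing to fix
  tmp="$(mktemp)"
  echo "server $(server_for_region "$region")" > "$tmp"
  mv "$tmp" "$dest"
}
```

<p>Wire the region argument up to whatever site-detection fact you already have, and run it from your management tool as root.</p>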
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2018/03/05/stuck-in-the-past-your-time-may-get-warped-too/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>UEFI, 10.13/APFS, and You(r Imaging)</title>
		<link>https://www.afp548.com/2017/08/31/uefi-10-13apfs-and-your-imaging/</link>
					<comments>https://www.afp548.com/2017/08/31/uefi-10-13apfs-and-your-imaging/#comments</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Thu, 31 Aug 2017 19:53:46 +0000</pubDate>
				<category><![CDATA[Articles]]></category>
		<category><![CDATA[OS X]]></category>
		<category><![CDATA[APFS]]></category>
		<category><![CDATA[AutoDMG]]></category>
		<category><![CDATA[EFI]]></category>
		<category><![CDATA[eOS]]></category>
		<category><![CDATA[firmware]]></category>
		<category><![CDATA[High Sierra]]></category>
		<category><![CDATA[Pepijn Bruienne]]></category>
		<category><![CDATA[short shelf life]]></category>
		<category><![CDATA[Siracusa]]></category>
		<category><![CDATA[TouchBar]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387547</guid>

					<description><![CDATA[Let&#8217;s discuss the basic input/output system for IBM PC compatible computing devices, aka BIOS. Wait, that&#8217;s not a good start, P.eople C.an&#8217;t reM.ember C.omputer and I.nternet A.cronyms. Ok, EFI &#8211; that&#8217;s a thing that&#8217;s like BIOS, right? You can lock it, it makes sure all your most vital hardware components [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Let&#8217;s discuss the basic input/output system for IBM PC compatible computing devices, aka BIOS. Wait, that&#8217;s not a good start, <a href="https://en.wikipedia.org/wiki/PC_Card">P.eople C.an&#8217;t reM.ember C.omputer and I.nternet A.cronyms</a>. Ok, EFI &#8211; that&#8217;s a thing that&#8217;s like BIOS, right? You can lock it, it makes sure all your most vital hardware components are attached, and on the Mac it even throws up a faux loginwindow if FileVault 2 is enabled.</p>
<p>(Paragraph-long parenthetical: We are going to stop calling it FV2 at some point and just let it own the &#8216;FileVault&#8217; name, since we all know that&#8217;s what we&#8217;re talking about, right? FV1 <em>mattered</em>, but it sure didn&#8217;t catch on like its successor. FV1 was a neat parlor trick, like Time Machine doing hard links to :all_the_things:. Anyway.)</p>
<p>Although Apple always likes to implement things that would only make sense to its design/engineering/marketing departments, they still need to comply with the (U)EFI spec by having a partition available just for updating boot ROM, aka firmware. I&#8217;m a simple lad, so here&#8217;s the long &#8216;disambiguation&#8217; section: EFI as applied to the boot ROM chip has nothing to do with firmware on the many other components jangling around in your Mac, like the TouchBar (running embeddedOS) or the wifi controller, and removing the partition was pretty harmless pre-TouchBar because it had only been a staging area for applying an update.</p>
<p><img decoding="async" src="https://www.afp548.com/wp-content/uploads/2017/08/640px-Final_Trophee_Monal_2012_n08-1.jpg" alt="https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/Single/2012-05-21#/media/File:Final_Trophee_Monal_2012_n08.jpg" /></p>
<p>TouchBar Macs came along and now <a href="http://blog.eriknicolasgomez.com/2016/11/27/the-untouchables-apples-new-os-activation-for-touch-bar-macbook-pros/">we had A Problem<img src="https://s.w.org/images/core/emoji/15.0.3/72x72/2122.png" alt="™" class="wp-smiley" style="height: 1em; max-height: 1em;" /></a>: part of the activation process for the TouchBar (which is to say, an iOS device very similar to the Apple Watch) was now embedded inside that &#8216;invisible&#8217; EFI partition on the &#8216;host&#8217; Mac. OS updates also updated the eOS image living in that same EFI partition, which needs to stay in sync: if you leave the EFI partition alone but upgrade the version of the Mac OS via imaging, you haven&#8217;t delivered the new eOS image to the EFI partition, and you&#8217;ll see a nice black screen with an Apple logo while it reaches across the internet to grab it. Which you had better hope is available, with no proxies in the way and an active internet connection. (eOS <em>activation</em> servers may have been up the other day, but we couldn&#8217;t get that image downloaded, even after multiple attempts at internet recovery, because whatever server hosts eOS <em>versions</em> was down. All Friday. Thanks Obama!)</p>
<p>Now APFS is bearing down on us, looking pretty ominous on the horizon. I&#8217;ll love the day-to-day speed boost for common operations, and the promise of Time Machine (and <a href="https://arstechnica.com/gadgets/2011/07/mac-os-x-10-7/12/">Siracusa&#8217;s dreams</a>) being fulfilled, but otherwise it&#8217;s a very &#8216;hold on to your butts&#8217; moment. The tricky part of where EFI overlaps with a filesystem is the APFS driver &#8211; as announced at the WWDC session, it&#8217;s a filesystem-based driver so that they can modify the format in the future without hard-coding it into the boot ROM directly. This means your firmware needs to be able to 1. recognize the APFS container format and 2. look for the driver <em>inline</em> (I guess is one way to put it) with that APFS container/volume(s).</p>
<p>So how do we get the firmware updated so that a 10.13 APFS image can restore successfully? Well, <a href="http://maclabs.jazzace.ca/2017/08/12/firmware-follow-up.html">a technique referenced</a> previously by others including <a href="https://twitter.com/AnthonyReimer">@jazzace</a> and adapted by <a href="https://twitter.com/bruienne">@bruienne</a> and <a href="https://twitter.com/arjenvan">@bochoven</a> is to grab the FirmwareUpdate.pkg out of the full OS installer app bundle, then make sure the tools and scripts are in place so that it is universal across all supported hardware. Here&#8217;s what you&#8217;d run at the command line, assuming you have the High Sierra Beta app in its default location and <a href="https://github.com/munki/munki-pkg">munkipkg</a> installed and in your path:</p>
<p><code>/usr/bin/hdiutil mount /Applications/Install\ macOS\ High\ Sierra\ Beta.app/Contents/SharedSupport/InstallESD.dmg<br />
/usr/sbin/pkgutil --expand "/Volumes/InstallESD/Packages/FirmwareUpdate.pkg" /tmp/FirmwareUpdate<br />
munkipkg --create /tmp/FirmwareUpdateStandalone<br />
/bin/cp /tmp/FirmwareUpdate/Scripts/postinstall_actions/update /tmp/FirmwareUpdateStandalone/scripts/postinstall<br />
/bin/cp -R /tmp/FirmwareUpdate/Scripts/Tools /tmp/FirmwareUpdateStandalone/scripts/<br />
munkipkg /tmp/FirmwareUpdateStandalone</code></p>
<p>And then it&#8217;s up to you to put it in your deployment mechanism of choice. Cheers!</p>
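<p>If you run this often, the same sequence can be wrapped with a basic guard so a mistyped installer path fails fast instead of half-building a package; the <code>build_firmware_pkg</code> wrapper name is my own illustration:</p>

```shell
#!/bin/sh
# Same commands as above, behind a sanity check on the installer path.
build_firmware_pkg() {
  app="$1"
  if [ ! -e "$app/Contents/SharedSupport/InstallESD.dmg" ]; then
    echo "no InstallESD.dmg under $app" >&2
    return 1
  fi
  /usr/bin/hdiutil mount "$app/Contents/SharedSupport/InstallESD.dmg" &&
    /usr/sbin/pkgutil --expand "/Volumes/InstallESD/Packages/FirmwareUpdate.pkg" /tmp/FirmwareUpdate &&
    munkipkg --create /tmp/FirmwareUpdateStandalone &&
    /bin/cp /tmp/FirmwareUpdate/Scripts/postinstall_actions/update /tmp/FirmwareUpdateStandalone/scripts/postinstall &&
    /bin/cp -R /tmp/FirmwareUpdate/Scripts/Tools /tmp/FirmwareUpdateStandalone/scripts/ &&
    munkipkg /tmp/FirmwareUpdateStandalone
}
```

<p>Call it as <code>build_firmware_pkg "/Applications/Install macOS High Sierra Beta.app"</code> and each step only runs if the previous one succeeded.</p>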
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2017/08/31/uefi-10-13apfs-and-your-imaging/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Hook The Lintings</title>
		<link>https://www.afp548.com/2017/07/27/hook-the-lintings/</link>
					<comments>https://www.afp548.com/2017/07/27/hook-the-lintings/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Thu, 27 Jul 2017 15:08:33 +0000</pubDate>
				<category><![CDATA[Deployment]]></category>
		<category><![CDATA[Management]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387542</guid>

					<description><![CDATA[Friends don&#8217;t let friends commit puppet code with obvious errors. Especially when you&#8217;re working with a team, having a consistent style enforced by something like puppet-lint means less messy diffs as you send changes to each other to review. And if you&#8217;re leveraging stuff like r10k, you definitely don&#8217;t want [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Friends don&#8217;t let friends commit puppet code with obvious errors. Especially when you&#8217;re working with a team, having a consistent style enforced by something like puppet-lint means less messy diffs as you send changes to each other to review. And if you&#8217;re leveraging stuff like r10k, you definitely don&#8217;t want that screwed up, where the rubber meets the road.</p>
<p>Here&#8217;s a way to use one of the more popular collections of puppet-specific hooks to add checks before you commit locally, across the modules you maintain and deploy.</p>
<p>First, for consistency&#8217;s sake, you probably want a predictable directory in your home folder to point at. If you don&#8217;t have one already, you can create a binary directory to store stuff like csshX, munkipkg, and other fun items that you want in your path.</p>
<p><code>mkdir bin<br />
chmod 700 bin<br />
chmod +a "group:everyone deny delete" bin</code></p>
<p>(If you want to add it to your path temporarily, you&#8217;d run <code>PATH=$PATH:~/bin</code>; to add it on each login, create or append to <code>~/.bash_profile</code> with the following: <code>export PATH=${PATH}:~/bin</code>)</p>
<p>Download the current master version of the code from <a href="https://github.com/drwahl/puppet-git-hooks/archive/master.zip">https://github.com/drwahl/puppet-git-hooks/archive/master.zip</a> and unpack it into that bin directory.</p>
<p>Now, in case you haven&#8217;t had the pure unadulterated joy, it&#8217;s time to have fun with ruby via bundler, and install all of the things you&#8217;re actually going to perform the checks with:</p>
<p><code>sudo gem install -n /usr/local/bin bundler r10k rspec puppet-lint</code></p>
<p>(The <code>-n /usr/local/bin</code> flag avoids the default gem install behavior of trying to write to SIP-protected directories.)</p>
<p><code>cd ~/bin/puppet-git-hooks-master/<br />
bundle install --path vendor/bundle</code></p>
<p>Now you just go to the git repo directory you&#8217;re hoping to lint or perform pre-commit checks on and run the following: <code>ln -s ~/bin/puppet-git-hooks-master/pre-commit .git/hooks/pre-commit</code></p>
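<p>If you maintain a pile of modules, a tiny helper saves repeating the symlink dance per repo. <code>install_hook</code> is an illustrative name of my own; note the hook path should be absolute so the link doesn&#8217;t dangle:</p>

```shell
#!/bin/sh
# Link a shared pre-commit hook into a repo's .git/hooks directory.
HOOK="${HOOK:-$HOME/bin/puppet-git-hooks-master/pre-commit}"

install_hook() {  # usage: install_hook /path/to/repo
  repo="$1"
  if [ ! -d "$repo/.git/hooks" ]; then
    echo "$repo does not look like a git repo" >&2
    return 1
  fi
  ln -sf "$HOOK" "$repo/.git/hooks/pre-commit"
}
```

<p>Then it&#8217;s just <code>for d in ~/src/puppet-*; do install_hook "$d"; done</code> or similar for every module you touch.</p>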
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2017/07/27/hook-the-lintings/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hipster Software Management</title>
		<link>https://www.afp548.com/2017/05/08/hipster-software-management/</link>
					<comments>https://www.afp548.com/2017/05/08/hipster-software-management/#respond</comments>
		
		<dc:creator><![CDATA[Allister Banks]]></dc:creator>
		<pubDate>Mon, 08 May 2017 22:21:02 +0000</pubDate>
				<category><![CDATA[AFP548 Site News]]></category>
		<category><![CDATA[autopkg]]></category>
		<category><![CDATA[Michael Lynn]]></category>
		<category><![CDATA[Munki]]></category>
		<category><![CDATA[osquery]]></category>
		<category><![CDATA[santa]]></category>
		<category><![CDATA[security]]></category>
		<guid isPermaLink="false">https://www.afp548.com/?p=387527</guid>

					<description><![CDATA[Socially, Slack and Twitter are the two poles I gravitate between: Slack for when I&#8217;m hoping to be a burden on or distracted by our always-up-to-something community, and Twitter when I&#8217;m more in the mood to consume the echo chamber than reverberate sound out in to it. And then there&#8217;s [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Socially, Slack and Twitter are the two poles I gravitate between: Slack for when I&#8217;m hoping to be a burden on or distracted by our always-up-to-something community, and Twitter when I&#8217;m more in the mood to consume the echo chamber than reverberate sound out in to it. And then there&#8217;s the worst type of echo chamber, <s>YouTube Comments</s> <s>Reddit</s> Hacker News. It&#8217;s a mixture of potent elements like money, ego, and tech tinkering that has captured some small amount of mindshare. <a href="https://news.ycombinator.com/item?id=14282120">Here&#8217;s</a> an example of thread hijacking, in response to a post about Handbrake&#8217;s release hosting having been compromised to serve a version altered with malware.</p>
<p>Which brought to mind a vaguely-related tweet by a sometimes-visitor to the <a href="https://macadmins.slack.com/messages/C083RF51D">#security</a> channel in MacAdmins Slack</p>
<p>https://twitter.com/osxreverser/status/860910010276216832</p>
<p>If I may pervert its purpose a bit and editorialize to do a send-up of that HN post&#8230;</p>
<p>I&#8217;m going to take this opportunity to plug my favourite open source project &#8211; the AutoPkg framework that gets stuff done with software, safely.</p>
<p>It can work as a homebrew replacement (and is custom-built to work on the Mac), comes with a humble collection of recipes that the community has expanded exponentially, and while the code is spare and written in python, it uses <a href="https://github.com/autopkg/autopkg/wiki/AutoPkg-and-recipe-parent-trust-info">&#8216;trust info&#8217;</a> to fingerprint every moving part. Better than homebrew, many recipes check the code signatures of signed downloaded artifacts, and it doesn&#8217;t require manual interaction to verify what the sha256 on an unsigned binary is. Unlike something like homebrew-cask, it doesn&#8217;t have homebrew in its name.</p>
<p>It can also work as a great way of bootstrapping an admin machine or just patching it out-of-band while testing a new release, because it has install functionality! All the advantages of a package manager, without actually using *nix. Due to its functional nature, it comes with a wealth of advantages over homebrew and other hipster package managers. Once you get past the relatively trivial learning curve (eased by its huge adoption among Mac Admins), creating your own recipes or modifying existing ones is a breeze. It can create metadata artifacts that allow you to automatically ingest the software into whatever management system you work with, and one of the extensions even adds <a href="https://github.com/hjuutilainen/autopkg-virustotalanalyzer">VirusTotal</a> integration! Check out the AutoPkg wiki for more information.</p>
<p>It&#8217;s so flexible that people have built support for really out-there workflows: fetching <a href="https://github.com/autopkg/hansen-m-recipes/tree/master/SharedProcessors">Windows software</a>, patching Macs with <a href="https://github.com/autopkg/cgerke-recipes/blob/master/SharedProcessors/CmmacCreator.py">SCCM</a>, uploading artifacts to random destinations via rsync or scp&#8230; and then another couple of doozies that help prove the extensibility of the framework, which we haven&#8217;t discussed here before.</p>
<p>Really, we&#8217;re going off the rails &#8211; you can put up a wall at the kernel level and delegate security to a product like <a href="https://objective-see.com/products/blockblock.html">BlockBlock</a>, or take a hands-off, watch-and-know approach by just alerting wherever a new launchd job or executable shows up via <a href="https://osquery.io/">osquery</a>. But we&#8217;re already well down the autopkg path, so&#8230;</p>
<p>Since <a href="https://github.com/autopkg/autopkg/releases/tag/v0.4.1">late 2014</a>, AutoPkg has had a feature called CodeSignatureVerification that will look at a signed pkg or app and check it against a &#8216;known-good&#8217; value. The certificate chain that ships with macOS means your computer can trust that an artifact was signed by someone with access to the developer&#8217;s public/private key pair. <a href="https://github.com/google/santa">Santa</a> from the MacOps team at Google can monitor or lock down what apps or binaries can run, based on a certificate. But say you don&#8217;t have their fancy internal voting webapp with which to log and crowd-source the approval of unsigned/new binaries for <em>your</em> organization. Or you want to be able to tell the moment the cert on an app you provide via your org&#8217;s software mgmt system differs from the one you expect. Santa logs almost all script or binary executions, and you should <em>really</em> be aggregating those logs to build your whitelist, and <em>really</em> ship those whitelists down from something like <a href="https://github.com/zentralopensource/zentral">Zentral</a> or <a href="https://github.com/groob/moroz">Moroz</a> in the absence of that server Google has yet to release. But if you haven&#8217;t yet, and use Munki, you can integrate <a href="https://github.com/autopkg/arubdesu-recipes/blob/master/SharedProcessors/SantaUnsignedSha.py">this</a> processor in your recipes to whitelist the new binary being installed, during the preflight, on a system that has Santa. This comes in handy when <a href="https://github.com/autopkg/arubdesu-recipes/tree/master/santaRecipes">certain vendors</a> can&#8217;t quite figure out how to track down all their moving parts and sign them. And if you&#8217;ve got a vendor with a <a href="https://github.com/autopkg/recipes/commit/38df2919730038571421adffae8d3e23ae49d24f">less than helpful</a> build process changing the cert in use, or there IS an actual hijack of the vendor&#8217;s release site where a new certificate is in use, you can get a head start by seeing the mismatch in your AutoPkg results report with <a href="https://github.com/autopkg/arubdesu-recipes/blob/master/SharedProcessors/SantaCertSha.py">this processor</a>.</p>
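<p>The pin-and-compare idea behind those processors is simple enough to sketch outside AutoPkg entirely; everything below (helper names, message strings) is my own illustration, not AutoPkg&#8217;s or Santa&#8217;s actual API:</p>

```shell
#!/bin/sh
# Keep a known-good SHA-256 for a signing certificate (or any artifact),
# hash what actually arrived, and flag a mismatch for the results report.
sha256_of() {
  # shasum on macOS, sha256sum on most everything else
  (shasum -a 256 "$1" 2>/dev/null || sha256sum "$1") | awk '{print $1}'
}

check_pinned_sha() {  # usage: check_pinned_sha <file> <expected-sha256>
  actual="$(sha256_of "$1")"
  if [ "$actual" = "$2" ]; then
    echo "matches pinned SHA"
  else
    echo "MISMATCH: expected $2, got $actual" >&2
    return 1
  fi
}
```

<p>A non-zero exit (and the MISMATCH line) is your cue to stop the pipeline and go look at what the vendor shipped.</p>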
<p>So to finish (wait, AFP548 still publishes posts?), you can either listen to reason</p>
<blockquote class="twitter-tweet" data-width="550" data-dnt="true">
<p lang="en" dir="ltr">If Handbrake is one of your <a href="https://twitter.com/hashtag/autopkg?src=hash&amp;ref_src=twsrc%5Etfw">#autopkg</a> recipes, make sure you check for the bad SHAs; Santa or <a href="https://twitter.com/osquery?ref_src=twsrc%5Etfw">@osquery</a> can also help detect it. <a href="https://t.co/t8WegzIAbq">https://t.co/t8WegzIAbq</a></p>
<p>&mdash; Victor (groob) (@wikiwalk) <a href="https://twitter.com/wikiwalk/status/860937363538812928?ref_src=twsrc%5Etfw">May 6, 2017</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>or rants like mine. <img fetchpriority="high" decoding="async" src="https://www.afp548.com/wp-content/uploads/2017/05/FullSizeRender-2.jpg" alt="newPuppetLogo" width="658" height="331" class="aligncenter size-large wp-image-387524" /> <a href="https://macadmins.slack.com/archives/C083RF51D/p1489425796384818"><em>/me resumes sitting on hands</em></a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.afp548.com/2017/05/08/hipster-software-management/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
