<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Enterprise storage Archives - Stephen Foskett, Pack Rat</title>
	<atom:link href="https://blog.fosketts.net/category/everything/EnterpriseStorage/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.fosketts.net/category/everything/enterprisestorage/</link>
	<description>Understanding the accumulation of data</description>
	<lastBuildDate>Mon, 09 Dec 2019 18:47:32 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>ZFS Is the Best Filesystem (For Now&#8230;)</title>
		<link>https://blog.fosketts.net/2017/07/10/zfs-best-filesystem-now/</link>
					<comments>https://blog.fosketts.net/2017/07/10/zfs-best-filesystem-now/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Mon, 10 Jul 2017 21:20:42 +0000</pubDate>
				<category><![CDATA[Computer History]]></category>
		<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Everything]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[Terabyte home]]></category>
		<category><![CDATA[APFS]]></category>
		<category><![CDATA[Btrfs]]></category>
		<category><![CDATA[copy-on-write]]></category>
		<category><![CDATA[ECC]]></category>
		<category><![CDATA[EXT3]]></category>
		<category><![CDATA[FreeBSD]]></category>
		<category><![CDATA[FreeNAS]]></category>
		<category><![CDATA[GPL]]></category>
		<category><![CDATA[greenBytes]]></category>
		<category><![CDATA[iXsystems]]></category>
		<category><![CDATA[NetApp]]></category>
		<category><![CDATA[Nexenta]]></category>
		<category><![CDATA[OpenSolaris]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[RAID-Z]]></category>
		<category><![CDATA[ReFS]]></category>
		<category><![CDATA[Solaris]]></category>
		<category><![CDATA[Storage Spaces]]></category>
		<category><![CDATA[Sun]]></category>
		<category><![CDATA[WAFL]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9412</guid>

					<description><![CDATA[<p>ZFS should have been great, but I kind of hate it: ZFS seems to be trapped in the past, before it was sidelined as the cool storage project of choice; it's inflexible; it lacks modern flash integration; and it's not directly supported by most operating systems. But I put all my valuable data on ZFS because it simply offers the best level of data protection in a small office/home office (SOHO) environment. Here's why.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/07/10/zfs-best-filesystem-now/">ZFS Is the Best Filesystem (For Now&#8230;)</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>ZFS should have been great, but I kind of hate it: ZFS seems to be trapped in the past, before it was sidelined as the cool storage project of choice; it&#8217;s inflexible; it lacks modern flash integration; and it&#8217;s not directly supported by most operating systems. But I put all my valuable data on ZFS because it simply offers the best level of data protection in a small office/home office (SOHO) environment. Here&#8217;s why.</p>
<figure id="attachment_8863" aria-describedby="caption-attachment-8863" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" class="size-large wp-image-8863" src="http://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1-500x294.png" alt="" width="500" height="294" srcset="https://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1-500x294.png 500w, https://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1-150x88.png 150w, https://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1-300x176.png 300w, https://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1-100x58.png 100w, https://blog.fosketts.net/wp-content/uploads/2014/12/One-Does-Not-Simple-Return-0-Instead-of-1.png 568w" sizes="(max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-8863" class="wp-caption-text">The Prime Directive of storage: Do not return the wrong data!</figcaption></figure>
<h3>The ZFS Revolution, Circa 2006</h3>
<p>In <a href="http://blog.fosketts.net/2016/08/25/freenas-first-impressions/">my posts on FreeNAS</a>, I emphatically state that &#8220;ZFS is the best filesystem&#8221;, but if you follow me on social media, it&#8217;s clear that I don&#8217;t really love it. I figured this needs some explanation and context, so at the risk of agitating the ZFS fanatics, let&#8217;s do it.</p>
<p>When ZFS first appeared in 2005, it was absolutely with the times, but it&#8217;s remained stuck there ever since. The ZFS engineers did a lot right when they combined the best features of a volume manager with a &#8220;zettabyte-scale&#8221; filesystem in Solaris 10:</p>
<ul>
<li>ZFS achieves the kind of scalability every modern filesystem should have, with few limits in terms of data or metadata count and volume or file size.</li>
<li>ZFS includes checksumming of all data and metadata to detect corruption, an absolutely essential feature for long-term large-scale storage.</li>
<li>When ZFS detects an error, it can automatically reconstruct data from mirrors, parity, or alternate locations.</li>
<li>Mirroring and multiple-parity &#8220;RAID Z&#8221; are built in, combining multiple physical media devices seamlessly into a logical volume.</li>
<li>ZFS includes robust snapshot and mirror capabilities, including the ability to update the data on other volumes incrementally.</li>
<li>Data can be compressed on the fly and deduplication is supported as well.</li>
</ul>
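<p>The checksumming idea is simple enough to sketch. What follows is a toy illustration of mine, not ZFS code (real ZFS stores each block&#8217;s checksum in the parent block pointer, forming a Merkle tree): every write records a checksum, and every read verifies it before any data is returned.</p>

```python
import hashlib

def write_block(store, addr, data):
    # Record the block's checksum alongside the data at write time.
    store[addr] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, addr):
    # Verify on every read; return None if silent corruption is detected,
    # at which point a real system would repair from a mirror or parity.
    data, checksum = store[addr]
    if hashlib.sha256(data).hexdigest() != checksum:
        return None
    return data

store = {}
write_block(store, 0, b"precious family photos")
assert read_block(store, 0) == b"precious family photos"

# Simulate bit rot on the media: the stored checksum catches the flip.
store[0] = (b"precious family photoz", store[0][1])
assert read_block(store, 0) is None
```

<p>A filesystem without this check would happily hand back the flipped bytes, which is exactly the failure mode the Prime Directive of storage forbids.</p>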
<p>When ZFS appeared, <a href="http://blog.fosketts.net/2008/02/27/zfs-super-file-system/">it was a revolution</a> compared to older volume managers and filesystems. And Sun open-sourced most of ZFS, allowing it to be ported to other operating systems. The darling of the industry, ZFS quickly appeared on Linux and FreeBSD, and Apple even began work to incorporate it as the next-generation filesystem for Mac OS X! The future seemed bright indeed!</p>
<blockquote><p>Checksums for user data are essential or you <em>will</em> lose data: <a href="http://blog.fosketts.net/2014/12/19/big-disk-drives-require-data-integrity-checking/">Why Big Disk Drives Require Data Integrity Checking</a> and <a href="http://blog.fosketts.net/2014/12/12/prime-directive-storage-lose-data/">The Prime Directive of Storage: Do Not Lose Data</a></p></blockquote>
<h3>2007 to 2010: ZFS is Derailed</h3>
<p>But something terrible happened to ZFS on the way to its coronation: Lawsuits, licensing issues, and FUD.</p>
<p>The skies first darkened in 2007, as NetApp sued Sun, claiming that their WAFL patents were infringed by ZFS. Sun counter-sued later that year, and the legal issues dragged on. Although ZFS definitely did not copy code from NetApp, the copy-on-write approach to snapshots was similar to WAFL, and those of us in the industry grew concerned that the NetApp suit could impact the future availability of open-source ZFS. And this appears to have been concerning enough to Apple that <a href="http://dtrace.org/blogs/ahl/2016/06/15/apple_and_zfs/">they dropped ZFS support</a> from Mac OS X 10.6 &#8220;Snow Leopard&#8221; just before it was released.</p>
<blockquote><p>Here&#8217;s a great blog about ZFS and Apple from Adam Leventhal, who worked on it: <a href="http://dtrace.org/blogs/ahl/2016/06/15/apple_and_zfs/">ZFS: Apple’s New Filesystem That Wasn’t</a></p></blockquote>
<p>By then, Sun was hitting hard times and Oracle swooped in to purchase the company. This sowed further doubt about the future of ZFS, since Oracle did not enjoy wide support from open source advocates. And the CDDL license Sun applied to the ZFS code was <a href="https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/">judged incompatible</a> with the GPLv2 that covers Linux, making it a non-starter for inclusion in the world&#8217;s server operating system.</p>
<p>Although OpenSolaris continued after the Oracle acquisition, and FreeBSD embraced ZFS, this was pretty much the extent of its impact outside the enterprise. Sure, NexentaStor and <a href="http://blog.fosketts.net/2008/09/15/greenbytes-embraces-extends-zfs/">GreenBytes</a> helped push ZFS forward in the enterprise, but Oracle&#8217;s lackluster commitment to Sun in the datacenter started having an impact.</p>
<h3>What&#8217;s Wrong With ZFS Today</h3>
<p>OpenZFS remains little-changed from what we had a decade ago.</p>
<p>Many remain skeptical of deduplication, which hogs expensive RAM in the best-case scenario. And I do mean expensive: Pretty much every ZFS FAQ flatly declares that ECC RAM is a must-have and 8 GB is the bare minimum. In my own experience with FreeNAS, 32 GB is a nice amount for an active small ZFS server, and this costs $200-$300 even at today&#8217;s prices.</p>
<p>And ZFS never really adapted to today&#8217;s world of widely-available flash storage: Although flash can be used to support the ZIL and L2ARC caches, these are of dubious value in a system with sufficient RAM, and ZFS has no true hybrid storage capability. It&#8217;s laughable that the ZFS documentation obsesses over a few GB of SLC flash when multi-TB 3D NAND drives are on the market. And no one is talking about NVMe even though it&#8217;s everywhere in performance PC&#8217;s.</p>
<p>Then there&#8217;s the question of flexibility, or lack thereof. Once you build a ZFS volume, it&#8217;s pretty much fixed for life. There are only three ways to expand a storage pool:</p>
<ol>
<li>Replace each and every drive in the pool with a larger one (which is great but limiting and expensive)</li>
<li>Add a stripe on another set of drives (which can lead to imbalanced performance and redundancy and a whole world of potential stupid stuff)</li>
<li>Build a new pool and &#8220;zfs send&#8221; your datasets to it (which is what I do, even though it&#8217;s kind of tricky)</li>
</ol>
<p>Apart from option 3 above, you can&#8217;t shrink a ZFS pool. Worse, you can&#8217;t change the data protection type without rebuilding the pool, and this includes adding a second or third parity drive. The FreeNAS faithful spend an inordinate amount of time trying to talk new users out of using RAID-Z1 <sup class='footnote'><a href='#fn-9412-1' id='fnref-9412-1' onclick='return fdfootnote_show(9412)'>1</a></sup> and moaning when they choose to use it anyway.</p>
<p>These may sound like little, niggling concerns but they combine to make ZFS feel like something from the dark ages after using Drobo, Synology, or today&#8217;s cloud storage systems. With ZFS, it&#8217;s &#8220;buy some disks and a lot of RAM, build a RAID set, and never touch it again&#8221;, which is not exactly in line with how storage is used these days.<sup class='footnote'><a href='#fn-9412-2' id='fnref-9412-2' onclick='return fdfootnote_show(9412)'>2</a></sup></p>
<h3>Where Are the Options?</h3>
<p>I&#8217;ve probably made ZFS sound pretty unappealing right about now. It was revolutionary but now it&#8217;s startlingly limiting and out of touch with the present solid-state-dominated storage world. So what are your other choices?</p>
<p>Linux has a few decent volume managers and filesystems, and most folks use a combination of LVM or MD and ext4. Btrfs really got storage nerds excited, appearing to be a ZFS-like combination of volume manager and filesystem with added flexibility, picking up where ReiserFS flopped. And Btrfs might just become &#8220;the ZFS of Linux&#8221; but development has faltered lately, with a scary data loss bug derailing RAID 5 and 6 last year and not much heard since. Still, I suspect that I&#8217;ll be recommending Btrfs for Linux users five years from now, especially with strong potential in containerized systems.<sup class='footnote'><a href='#fn-9412-3' id='fnref-9412-3' onclick='return fdfootnote_show(9412)'>3</a></sup></p>
<p>On the Windows side, Microsoft is busy rolling out their own next-generation filesystem. ReFS uses B+ trees (similar to Btrfs), scales like crazy, and has built-in resilience and data protection features<sup class='footnote'><a href='#fn-9412-4' id='fnref-9412-4' onclick='return fdfootnote_show(9412)'>4</a></sup>. When combined with Storage Spaces, Microsoft has a viable next-generation storage layer for Windows Server that can even use SSD and 3D-XPoint as a tier or cache.</p>
<p>Then there&#8217;s Apple, which reportedly rebooted their next-generation storage layer a few times before <a href="http://blog.fosketts.net/2016/06/13/macos-sierra-includes-new-apple-file-system-apfs/">coming up with APFS</a>, launched this year in macOS High Sierra. APFS looks a lot like Btrfs and ReFS, though implemented completely differently with more of a client focus. Although lacking in a few areas (user data is not checksummed and compression is not supported), APFS is the filesystem iOS and macOS need. And APFS is the final nail in the coffin for the &#8220;ZFS on Mac OS X&#8221; crowd.</p>
<p>Each major operating system now has a next-generation filesystem (and volume manager): Linux has Btrfs, Windows has ReFS and Storage Spaces, and macOS has APFS. FreeBSD seems content with ZFS, but that&#8217;s a small corner of the datacenter. And every enterprise system has already moved way past what ZFS can do, including enterprise-class offerings based on ZFS from Sun, Nexenta, and iXsystems.</p>
<p>Still, ZFS is way better than legacy SOHO storage filesystems. The lack of integrity checking, redundancy, and error recovery makes NTFS (Windows), HFS+ (macOS), and ext3/4 (Linux) wholly inappropriate for use as a long-term storage platform. And even ReFS and APFS, lacking data integrity checking, <a href="http://blog.fosketts.net/2014/12/19/big-disk-drives-require-data-integrity-checking/">aren&#8217;t appropriate</a> where data loss cannot be tolerated.</p>
<h3>Stephen&#8217;s Stance: Use ZFS (For Now)</h3>
<p>Sad as it makes me, as of 2017, ZFS is the best filesystem for long-term, large-scale data storage. Although it can be a pain to use (except in FreeBSD, Solaris, and purpose-built appliances), the robust and proven ZFS filesystem is the only trustworthy place for data outside enterprise storage systems. After all, <a href="http://blog.fosketts.net/2014/12/12/prime-directive-storage-lose-data/">reliably storing data is the only thing a storage system really has to do</a>. All my important data goes on ZFS, from photos to music and movies to office files. It&#8217;s going to be a long time before I trust anything other than ZFS!</p>
<div class='footnotes' id='footnotes-9412'>
<div class='footnotedivider'></div>
<ol>
<li id='fn-9412-1'> RAID-Z2 and RAID-Z3, with more redundancy, are preferred for today&#8217;s large disks to avoid data loss during rebuild <span class='footnotereverse'><a href='#fnref-9412-1'>&#8617;</a></span></li>
<li id='fn-9412-2'> Strangely, although multiple pools and removable drives work perfectly well with ZFS, almost no one talks about using it that way. It&#8217;s always a single pool named &#8220;tank&#8221; that includes every drive in the system. <span class='footnotereverse'><a href='#fnref-9412-2'>&#8617;</a></span></li>
<li id='fn-9412-3'> One thing really lacking in Btrfs is support for flash, and especially hybrid storage. But I&#8217;d rather that they got RAID-6 right first. <span class='footnotereverse'><a href='#fnref-9412-3'>&#8617;</a></span></li>
<li id='fn-9412-4'> Though data checksums are still turned off by default in ReFS <span class='footnotereverse'><a href='#fnref-9412-4'>&#8617;</a></span></li>
</ol>
</div>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/07/10/zfs-best-filesystem-now/">ZFS Is the Best Filesystem (For Now&#8230;)</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2017/07/10/zfs-best-filesystem-now/feed/</wfw:commentRss>
			<slash:comments>75</slash:comments>
		
		
			</item>
		<item>
		<title>What is OCuLink?</title>
		<link>https://blog.fosketts.net/2017/06/22/what-is-oculink/</link>
					<comments>https://blog.fosketts.net/2017/06/22/what-is-oculink/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Thu, 22 Jun 2017 18:34:17 +0000</pubDate>
				<category><![CDATA[Computer History]]></category>
		<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[AMD]]></category>
		<category><![CDATA[Epyc]]></category>
		<category><![CDATA[external PCIe]]></category>
		<category><![CDATA[HDminiSAS]]></category>
		<category><![CDATA[Light Peak]]></category>
		<category><![CDATA[M.2]]></category>
		<category><![CDATA[NGFF]]></category>
		<category><![CDATA[NVMe]]></category>
		<category><![CDATA[OCuLink]]></category>
		<category><![CDATA[OCuLink-2]]></category>
		<category><![CDATA[PCI-SIG]]></category>
		<category><![CDATA[PCIe]]></category>
		<category><![CDATA[PCIe 3.0]]></category>
		<category><![CDATA[PCIe 4.0]]></category>
		<category><![CDATA[SFF-8087]]></category>
		<category><![CDATA[SFF-8639]]></category>
		<category><![CDATA[Threadripper]]></category>
		<category><![CDATA[Thunderbolt]]></category>
		<category><![CDATA[Thunderbolt 3]]></category>
		<category><![CDATA[Tyan]]></category>
		<category><![CDATA[U.2]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9623</guid>

					<description><![CDATA[<p>With the advent of AMD Threadripper and Epyc, we are about to see an explosion of PCIe lanes in the pro-sumer and datacenter market. Although many of those lanes will be taken up by conventional PCIe cards, some will be used for SSD's (M.2 and U.2) or for external connectivity. This is where OCuLink might finally take off: As an AMD alternative to Thunderbolt for external PCIe peripheral connectivity.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/06/22/what-is-oculink/">What is OCuLink?</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a storage connectivity nerd, but I often learn new things I missed. Reading about the <a href="http://www.tyan.com/Motherboards_S8026_S8026GM2NRE">new Tyan servers</a> with AMD&#8217;s Epyc server processor, I was surprised by an off-hand mention of &#8220;OCuLink&#8221; as a storage expansion port. Naturally, I did some digging and discovered what it is: OCuLink is a competitor for Thunderbolt for cabled PCI Express connectivity, offering similar performance to Thunderbolt 3 in a different form factor.</p>
<figure id="attachment_9626" aria-describedby="caption-attachment-9626" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img decoding="async" class="size-large wp-image-9626" src="http://blog.fosketts.net/wp-content/uploads/2017/06/S8026-b-500x546.jpg" alt="" width="500" height="546" srcset="https://blog.fosketts.net/wp-content/uploads/2017/06/S8026-b-500x546.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2017/06/S8026-b-137x150.jpg 137w, https://blog.fosketts.net/wp-content/uploads/2017/06/S8026-b-275x300.jpg 275w, https://blog.fosketts.net/wp-content/uploads/2017/06/S8026-b.jpg 600w" sizes="(max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-9626" class="wp-caption-text">Six OCuLink-2 ports on Tyan&#8217;s new S8026 AMD Epyc server motherboard</figcaption></figure>
<h3>Zero to Hero: Thunderbolt</h3>
<p>I was an early enthusiast for Thunderbolt (<a href="http://blog.fosketts.net/2011/02/24/thunderbolt-light-peak-pci-express/">originally called Light Peak</a>), since it opened up a whole new world of I/O performance. And, being a storage nerd, I was particularly keen on the combination of fast PCIe connectivity and flash storage peripherals. Although the original Thunderbolt fell short of expectations (being mainly Mac and copper, not a universal PCIe-over-fiber dream), later revisions have matured nicely.</p>
<p>Today&#8217;s Thunderbolt 3 boasts 40 Gbps of PCIe 3.0 bandwidth along with up to 100 Watts of power delivery and support for USB, DisplayPort, and HDMI protocols besides. Although interoperability is <a href="http://blog.fosketts.net/2016/10/29/total-nightmare-usb-c-thunderbolt-3/">more of a nightmare</a> than a dream, Thunderbolt 3 works and is supported on many platforms and peripherals. With Intel and Apple pushing solidly to promote the interface, and leveraging popularity of USB-C, Thunderbolt 3 looks to become a real multi-platform standard.</p>
<p>We are seeing generic motherboards shipping with Thunderbolt 3 onboard, not to mention high-end PC&#8217;s and just about every Apple Macintosh computer. It wouldn&#8217;t be surprising to see Thunderbolt 3 show up in server hardware as an interconnect for external PCIe chassis full of NVMe storage or co-processors for machine learning. And Intel is talking about moving Thunderbolt support inside the CPU for the next generation.</p>
<h3>So What&#8217;s OCuLink?</h3>
<p>But Thunderbolt&#8217;s rise wasn&#8217;t always a given, and apparently the PCI-SIG (the standards body for the PCI interface) thought so too. So in 2012, word started spreading that PCI-SIG was developing a standard cabled protocol for PCIe devices off the motherboard. And this standard would be free and unencumbered by corporate overlords Apple and Intel.</p>
<figure id="attachment_9624" aria-describedby="caption-attachment-9624" style="width: 400px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img decoding="async" class="size-full wp-image-9624" src="http://blog.fosketts.net/wp-content/uploads/2017/06/008.jpg" alt="" width="400" height="300" srcset="https://blog.fosketts.net/wp-content/uploads/2017/06/008.jpg 400w, https://blog.fosketts.net/wp-content/uploads/2017/06/008-150x113.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2017/06/008-300x225.jpg 300w" sizes="(max-width: 400px) 100vw, 400px" /><figcaption id="caption-attachment-9624" class="wp-caption-text">PCI-SIG developed OCuLink to enable out-of-box PCIe connectivity (just like Thunderbolt)</figcaption></figure>
<p>Thunderbolt had already reached the market by then, but it used Mini DisplayPort connectors and only supported PCIe 2.0, for aggregate throughput of 20 Gbps. In contrast, OCuLink would start with four lanes of PCIe 3.0, good for 32 Gbps of throughput.</p>
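<p>The arithmetic behind those headline numbers is straightforward: lanes times per-lane signaling rate, less a little encoding overhead (PCIe 3.0 uses 128b/130b encoding, so the usable figure sits just under the raw one). A quick back-of-the-envelope check:</p>

```python
def pcie_gbps(lanes, gt_per_s, efficiency=1.0):
    # Usable bandwidth: lanes x raw signaling rate x encoding efficiency.
    return lanes * gt_per_s * efficiency

# OCuLink's launch spec: four lanes of PCIe 3.0 at 8 GT/s per lane.
raw = pcie_gbps(4, 8)                # the headline 32 Gbps figure
usable = pcie_gbps(4, 8, 128 / 130)  # ~31.5 Gbps after 128b/130b encoding

print(raw, round(usable, 1))
```

<p>(By the same accounting, OCuLink-2&#8217;s later move to PCIe 4.0 at 16 GT/s per lane doubles these figures.)</p>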
<p>Like Thunderbolt, the initial plan was for both optical and copper cables to be used. But both standards hit roadblocks as optical cables failed to materialize in volume. Although there are optical Thunderbolt cables, it&#8217;s a far cry from the silicon photonics wonderland promised by Light Peak. And although the &#8220;O&#8221; in OCuLink stands for &#8220;optical&#8221;, every implementation I could find uses the &#8220;Cu&#8221; for copper!</p>
<p>Early reports gushed about OCuLink for laptops, allowing external GPU&#8217;s to be connected to thin-and-light PC&#8217;s. But this has been a rare application even in the Thunderbolt space (Apple just introduced official support in 2017), and I couldn&#8217;t find any OCuLink laptops or GPU enclosures on the market.</p>
<p>Instead, OCuLink was picked up by server developers looking for an in-box PCIe interconnect for storage or I/O virtualization connectivity. And the standard 48-pin connector and cable was sometimes re-purposed to carry multiple 12 Gbps SAS 3.0 channels as an alternative to HDminiSAS, replacing the old 4-channel SFF-8087 mini-SAS connectors we in storage have learned to love.</p>
<h3>SFF-8639, SATA Express, and U.2</h3>
<p>OCuLink has a kinship to another PCIe interface that might just be more popular. At the same time that the OCuLink cable was introduced, the PCI-SIG also pushed for a drive-attachment interface with PCIe using a connector called SFF-8639. This was initially used for SATA Express drives, a 2-lane PCIe storage interface.</p>
<p>However, in 2015 SFF-8639 was officially renamed &#8220;U.2&#8221; for four-lane PCIe storage applications. This has become somewhat more popular. So, in a way, U.2 is a cousin of OCuLink and some devices might even use the OCuLink protocol over the U.2 connector!</p>
<p>U.2 drives look like conventional 2.5&#8221; SSD&#8217;s. So they might take off in servers and datacenter applications. But lately, most PCIe storage implementations are leaning towards the compact M.2 interface instead. And on the pro-sumer side, it&#8217;s almost as difficult to find a U.2 motherboard or drive as it was to find SATA Express!</p>
<figure id="attachment_9625" aria-describedby="caption-attachment-9625" style="width: 400px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-full wp-image-9625" src="http://blog.fosketts.net/wp-content/uploads/2017/06/004.jpg" alt="" width="400" height="300" srcset="https://blog.fosketts.net/wp-content/uploads/2017/06/004.jpg 400w, https://blog.fosketts.net/wp-content/uploads/2017/06/004-150x113.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2017/06/004-300x225.jpg 300w" sizes="auto, (max-width: 400px) 100vw, 400px" /><figcaption id="caption-attachment-9625" class="wp-caption-text">OCuLink was developed alongside &#8220;NGFF&#8221; (which became the popular M.2 standard) and SFF-8639 (now known as U.2)</figcaption></figure>
<h3>OCuLink-2</h3>
<p>In 2016, PCI-SIG announced OCuLink-2, bringing PCIe 4.0 bandwidth and a new connector. Ironically, the OCuLink-2 external connector and cable bears a strong resemblance to full-size DisplayPort, bringing us back to the days of Thunderbolt&#8217;s Mini-DisplayPort cable.</p>
<p>So far, there has been little discussion of OCuLink-2, but perhaps that&#8217;s changing. The aforementioned Tyan server chassis actually appears to use OCuLink-2, supporting either PCIe 3.0 x8 or sixteen 6 Gbps SAS/SATA drives.</p>
<h3>Stephen&#8217;s Stance</h3>
<p>With the advent of <a href="http://gestaltit.com/exclusive/stephen/high-end-desktop-heats-intel-i9-amd-threadripper/">AMD Threadripper and Epyc</a>, we are about to see an explosion of PCIe lanes in the pro-sumer and datacenter market. Although many of those lanes will be taken up by conventional PCIe cards, some will be used for SSD&#8217;s (M.2 and U.2) or for external connectivity. This is where OCuLink might finally take off: As an AMD alternative to Thunderbolt for external PCIe peripheral connectivity.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/06/22/what-is-oculink/">What is OCuLink?</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2017/06/22/what-is-oculink/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
		<item>
		<title>Storage is Getting Cloudier!</title>
		<link>https://blog.fosketts.net/2017/06/21/storage-getting-cloudier/</link>
					<comments>https://blog.fosketts.net/2017/06/21/storage-getting-cloudier/#respond</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Wed, 21 Jun 2017 20:40:24 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Boulder]]></category>
		<category><![CDATA[cloud extension]]></category>
		<category><![CDATA[cloud storage]]></category>
		<category><![CDATA[Data Fabric]]></category>
		<category><![CDATA[Gestalt IT]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[NetApp]]></category>
		<category><![CDATA[Primary Data]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9620</guid>

					<description><![CDATA[<p>From iCloud Photos to Google Drive to NetApp and Primary Data, we're putting storage wherever it needs to be. And this is a major shift for computing, from the iPhone to the datacenter. Watch this space!</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/06/21/storage-getting-cloudier/">Storage is Getting Cloudier!</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>Cloud storage is nothing new, but we&#8217;re now starting to see real integration between on-premises enterprise arrays and as-a-service cloud storage solutions. I&#8217;ve recently written two pieces on this subject for <a href="http://GestaltIT.com">Gestalt IT</a> and thought I would direct you, my beloved readers, over there to have a look!</p>
<figure id="attachment_9621" aria-describedby="caption-attachment-9621" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-9621" src="http://blog.fosketts.net/wp-content/uploads/2017/06/fullsizeoutput_42d7-500x333.jpeg" alt="" width="500" height="333" srcset="https://blog.fosketts.net/wp-content/uploads/2017/06/fullsizeoutput_42d7-500x333.jpeg 500w, https://blog.fosketts.net/wp-content/uploads/2017/06/fullsizeoutput_42d7-150x100.jpeg 150w, https://blog.fosketts.net/wp-content/uploads/2017/06/fullsizeoutput_42d7-300x200.jpeg 300w, https://blog.fosketts.net/wp-content/uploads/2017/06/fullsizeoutput_42d7-768x512.jpeg 768w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-9621" class="wp-caption-text">This is the view from the NetApp Boulder office over the St. Julien Hotel, where the analyst day was held, to the gorgeous &#8220;flatiron&#8221; mountains on the Front Range in Colorado. So in effect, this is a photo of storage extending into the clouds. Use your imagination.</figcaption></figure>
<h3>Storage in the Cloud</h3>
<p>First up is <a href="http://gestaltit.com/exclusive/stephen/netapp-heads-microsofts-azure-cloud/">NetApp Heads Into Microsoft’s Azure Cloud</a>, wherein I discuss my impressions of an overlooked announcement at the recent NetApp Analysts Day in Boulder, CO. In short, NetApp is putting their enterprise storage expertise behind a new NFS storage offering inside Microsoft Azure.</p>
<p>This is major news since it extends NetApp&#8217;s Data Fabric into the cloud <em>for real</em>, completely integrated with Azure. It&#8217;s a great fit with Microsoft&#8217;s world of enterprise cloud solutions and could really help pull old NetApp out of the datacenter.</p>
<p>Next, I posited that 2017 is shaping up as <a href="http://gestaltit.com/exclusive/stephen/year-cloud-extension/">The Year of Cloud Extension</a>, as more and more storage systems truly connect to the cloud. See a trend here? It&#8217;s not just a gateway to the cloud, it&#8217;s real integration from datacenter to cloud.</p>
<blockquote><p>If you&#8217;re not reading <a href="http://GestaltIT.com">Gestalt IT</a> regularly, you&#8217;re really missing out. Along with content from me, you&#8217;ll find some from <a href="http://twitter.com/NetworkingNerd">Tom Hollingsworth</a> and lots more by <a href="http://twitter.com/MrAnthropology">Rich Stroffolino</a> just about every day! <a href="http://gestaltit.com/category/gestalt-news/">Subscribe to the newsletter</a> and check out the new <a href="http://gestaltit.com/category/podcast/">On-Premise IT podcast</a> too!</p></blockquote>
<h3>Stephen&#8217;s Stance</h3>
<p>The Big Story here isn&#8217;t that some storage is connected to the cloud. It&#8217;s that we&#8217;re fundamentally changing how we look at storage locality. We&#8217;ve always thought of storage as being &#8220;here&#8221;, whether it&#8217;s in the datacenter or on our own devices. But we&#8217;re starting to see storage as being fundamentally &#8220;there&#8221;, in the cloud, with a local representation wherever it&#8217;s needed.</p>
<p>From iCloud Photos to Google Drive to <a href="http://NetApp.com">NetApp</a> and <a href="http://primarydata.com">Primary Data</a>, we&#8217;re putting storage wherever it needs to be. And this is a major shift for computing, from the iPhone to the datacenter. Watch this space! And read about it on Gestalt IT.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/06/21/storage-getting-cloudier/">Storage is Getting Cloudier!</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2017/06/21/storage-getting-cloudier/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Turn Off Error Recovery in RAID Drives: TLER, ERC, and CCTL</title>
		<link>https://blog.fosketts.net/2017/05/30/turn-off-error-recovery-raid-drives-tler-erc-cctl/</link>
					<comments>https://blog.fosketts.net/2017/05/30/turn-off-error-recovery-raid-drives-tler-erc-cctl/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Tue, 30 May 2017 19:19:26 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Terabyte home]]></category>
		<category><![CDATA[CCTL]]></category>
		<category><![CDATA[CRC]]></category>
		<category><![CDATA[CRC32C]]></category>
		<category><![CDATA[enterprise disk]]></category>
		<category><![CDATA[ERC]]></category>
		<category><![CDATA[FreeNAS]]></category>
		<category><![CDATA[hard disk]]></category>
		<category><![CDATA[hard disk drive]]></category>
		<category><![CDATA[NAS]]></category>
		<category><![CDATA[NAS disk]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[SMART]]></category>
		<category><![CDATA[TLER]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9608</guid>

					<description><![CDATA[<p>Hard disk drives encounter errors from time to time, so it's a good thing that most have the ability to recover data anyway. But RAID systems usually have their own error recovery capabilities and can be thrown off when a hard disk pauses I/O. So it's a good idea to use hard disk drives that allow you to disable or limit error recovery in RAID systems.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/05/30/turn-off-error-recovery-raid-drives-tler-erc-cctl/">Turn Off Error Recovery in RAID Drives: TLER, ERC, and CCTL</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>Hard disk drives encounter errors from time to time, so it&#8217;s a good thing that most have the ability to recover data anyway. But RAID systems usually have their own error recovery capabilities and can be thrown off when a hard disk pauses I/O. So it&#8217;s a good idea to use hard disk drives that allow you to disable or limit error recovery in RAID systems.</p>
<figure id="attachment_9609" aria-describedby="caption-attachment-9609" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-9609" src="http://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-500x500.jpeg" alt="" width="500" height="500" srcset="https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-500x500.jpeg 500w, https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-150x150.jpeg 150w, https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-300x300.jpeg 300w, https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-768x768.jpeg 768w, https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d-100x100.jpeg 100w, https://blog.fosketts.net/wp-content/uploads/2017/05/fullsizeoutput_408d.jpeg 1746w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-9609" class="wp-caption-text">It&#8217;s a good idea to limit error recovery for hard disk drives used in RAID systems</figcaption></figure>
<h3>Error Recovery Basics</h3>
<p>Hard disk drives have more points of failure than most other modern computer components: They are physical devices that rely on magnetism and mechanical precision, not just solid state electronics. And ever-increasing drive density magnifies the challenge of always returning valid data. In fact, magnetic disk media is surprisingly unreliable, with hard drives often relying on error recovery technologies to cover for read and write errors.</p>
<p>The first line of defense on hard disk drives is an error-detecting code such as <a href="https://en.wikipedia.org/wiki/Cyclic_redundancy_check">CRC32C</a>, which reliably detects read and write errors but cannot correct them on its own. In most cases, disk drives can re-try a read, adjusting the heads slightly, until the correct value is recovered. Once an error is detected and the correct data recovered, the disk drive will either re-write the data in place or mark that spot as bad and re-map it to another physical location.</p>
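<p>To illustrate the detection step, here is a minimal CRC-32C (Castagnoli polynomial) sketch in Python. This is an illustration only, not any drive&#8217;s actual firmware; real drives pair detection like this with re-tries and ECC for recovery:</p>

```python
# Minimal CRC-32C (Castagnoli) sketch: illustrates error *detection* only.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected Castagnoli polynomial
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

sector = b"some sector payload"
stored = crc32c(sector)

corrupted = bytes([sector[0] ^ 0x01]) + sector[1:]  # flip a single bit
assert crc32c(corrupted) != stored  # the mismatch flags a read error
```

A checksum mismatch tells the drive only that the data is wrong, which is exactly why the re-try and re-map machinery described above exists.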
<p>All this should happen very quickly, but the application must wait for it to complete. Under light load, this process is barely noticeable. But systems with heavy I/O can escalate this wait time to unacceptable levels. In busy systems, an error recovery can take many seconds or even minutes to complete.</p>
<h3>RAID and Error Recovery</h3>
<p>Multi-drive systems, including RAID and similar solutions, can&#8217;t tolerate long waits for error recovery. Most RAID controllers assume that a drive that hasn&#8217;t completed an I/O request within a few seconds has failed. The controller will then mark the entire disk drive as &#8220;offline&#8221; and attempt to rebuild using an available spare disk, or simply take the entire RAID set offline to avoid data loss. This can prove problematic, since a RAID rebuild can take hours or days to complete!</p>
<p>It&#8217;s not the fault of the RAID system, either. There has to be some threshold where a disk is declared to have failed. It wouldn&#8217;t be practical (or even desirable) to escalate the I/O wait &#8220;up the stack&#8221; and pause all operations until a disk recovers (if ever). So most RAID solutions or controllers set a threshold of a few seconds.</p>
<p>The rule of thumb for RAID controllers is 8 seconds, though this can vary. Some controllers wait for 10, 20, or 30 seconds, for example, and this can be configured on many. <a href="https://forums.freenas.org/index.php?threads/checking-for-tler-erc-etc-support-on-a-drive.27126/">ZFS will generally wait as long as needed</a> for error recovery, and this can dramatically impact performance.</p>
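<p>The controller-side logic amounts to a timeout check around each I/O. Here is a sketch in Python (the function names, the 8-second default, and the simulated read delay are illustrative, not any vendor&#8217;s firmware):</p>

```python
import concurrent.futures
import time

TIMEOUT_S = 8.0  # common rule-of-thumb threshold; real controllers vary

def read_sector(delay_s):
    """Stand-in for a drive read; delay_s simulates error-recovery time."""
    time.sleep(delay_s)
    return b"data"

def controller_read(delay_s, timeout_s=TIMEOUT_S):
    """Declare the drive failed if the read exceeds the controller's timeout."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(read_sector, delay_s)
        try:
            return ("ok", future.result(timeout=timeout_s))
        except concurrent.futures.TimeoutError:
            return ("drive marked offline", None)

print(controller_read(0.01))                # fast read completes normally
print(controller_read(0.3, timeout_s=0.05)) # slow "recovery" trips the timeout
```

Once the timeout trips, a real controller stops waiting and falls back on parity or a spare, which is why a drive mid-recovery looks identical to a dead drive.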
<h3>Time-Limited Error Recovery</h3>
<p>Disk drives intended for RAID use typically implement some form of time limiting for error recovery. Western Digital calls this <a href="http://products.wdc.com/library/other/2579-001098.pdf">Time Limited Error Recovery (TLER)</a>, while Seagate calls it Error Recovery Control (ERC) and Samsung and Hitachi call it Command Completion Time Limit (CCTL).</p>
<p>Regardless of what it&#8217;s called, the drive will limit the wait time on any error recovery command to a settable value, typically 7 seconds by default. The drive will usually report a failed I/O up the stack and attempt to re-try the error recovery at a later time. Meanwhile, the RAID controller will likely recover the data from parity or erasure code and continue operation.</p>
<p>ZFS and other software RAID systems typically react the same way when TLER is enabled, recovering the data from redundancy and remapping the affected block.</p>
<p>Note that most desktop hard disk drives do not have this capability. Error recovery is always enabled and will take as long as necessary. This is one reason that conventional desktop disk drives are not appropriate for use in RAID solutions.</p>
<h3>Checking and Setting TLER</h3>
<p>If a hard drive is to be used in a RAID or similar setup, it is desirable to have TLER or ERC enabled and set to a value under 8 seconds.</p>
<p>Most UNIX-like systems offer the &#8220;smartmontools&#8221; package, which includes the smartctl command. This can be used to query TLER and similar settings. For example, here is the result of that command in FreeNAS (FreeBSD) for a Western Digital Red NAS drive:</p>
<pre># smartctl -l scterc /dev/da2

smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.0-STABLE amd64] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
 Read: 70 (7.0 seconds)
 Write: 70 (7.0 seconds)</pre>
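<p>Since the limits are reported in tenths of a second, a small parser can convert them. This is a sketch that assumes the exact output layout shown above:</p>

```python
import re

def parse_scterc(smartctl_output: str) -> dict:
    """Extract SCT ERC read/write limits (reported in deciseconds) as seconds."""
    limits = {}
    for line in smartctl_output.splitlines():
        m = re.match(r"\s*(Read|Write):\s+(\d+)", line)
        if m:
            limits[m.group(1).lower()] = int(m.group(2)) / 10.0
    return limits

sample = """SCT Error Recovery Control:
 Read: 70 (7.0 seconds)
 Write: 70 (7.0 seconds)"""
print(parse_scterc(sample))  # {'read': 7.0, 'write': 7.0}
```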
<p>This tool can also set TLER on a drive. The values are given in tenths of a second, so the following sets both the read and write limits to 7 seconds:</p>
<pre>smartctl -l scterc,70,70 /dev/da2</pre>
<p>Western Digital provides a DOS utility, WDTLER.EXE, with similar functionality.</p>
<h3>Stephen&#8217;s Stance</h3>
<p>One reason to use enterprise or NAS hard disk drives is the capability to limit error recovery for smoother performance. I strongly recommend only using such drives with RAID systems, especially ZFS (as in FreeNAS)!</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/05/30/turn-off-error-recovery-raid-drives-tler-erc-cctl/">Turn Off Error Recovery in RAID Drives: TLER, ERC, and CCTL</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2017/05/30/turn-off-error-recovery-raid-drives-tler-erc-cctl/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Download My Free E-Book, &#8220;Essential Enterprise Storage Concepts&#8221;!</title>
		<link>https://blog.fosketts.net/2017/04/04/download-free-e-book-essential-enterprise-storage-concepts/</link>
					<comments>https://blog.fosketts.net/2017/04/04/download-free-e-book-essential-enterprise-storage-concepts/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Tue, 04 Apr 2017 14:47:42 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Everything]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[Personal]]></category>
		<category><![CDATA[e-book]]></category>
		<category><![CDATA[ebook]]></category>
		<category><![CDATA[enterprise storage]]></category>
		<category><![CDATA[PDF]]></category>
		<category><![CDATA[SolarWinds]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9582</guid>

					<description><![CDATA[<p>I've got a lot to say about storage, as you might have noticed from reading my blog. So I finally sat down and wrote a book on enterprise storage. Now you can download the e-book for free, thanks to support from my friends at SolarWinds!</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/04/04/download-free-e-book-essential-enterprise-storage-concepts/">Download My Free E-Book, &#8220;Essential Enterprise Storage Concepts&#8221;!</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve got a lot to say about storage, as you might have noticed from reading my blog. So I finally sat down and wrote a book on enterprise storage. Now you can <a href="http://launch.solarwinds.com/rs/564-VFR-008/images/1602_SRM_Enterprise-Storage-eBook.pdf">download the e-book for free</a>, thanks to support from my friends at <a href="http://www.solarwinds.com">SolarWinds</a>!</p>
<p><a href="http://launch.solarwinds.com/rs/564-VFR-008/images/1602_SRM_Enterprise-Storage-eBook.pdf"><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9581" src="http://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-500x647.png" alt="" width="500" height="647" srcset="https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-500x647.png 500w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-116x150.png 116w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-232x300.png 232w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2.png 510w" sizes="auto, (max-width: 500px) 100vw, 500px" /></a></p>
<p>The book, &#8220;<a href="http://launch.solarwinds.com/rs/564-VFR-008/images/1602_SRM_Enterprise-Storage-eBook.pdf">Essential Enterprise Storage Concepts</a>&#8220;, is intended as an introduction to the field of enterprise storage for technical audiences. It&#8217;s not some grand discourse on how storage ought to be done. Rather, it&#8217;s an overview of <strong>what you need to know about enterprise storage</strong>.</p>
<p>I start with some <strong>storage basics</strong>: The difference between memory and storage, a discussion of storage media, and the perennial topic of block vs. file storage. Then I talk about scaling storage arrays and the capabilities that enterprise arrays bring to the table. In a dozen pages, you&#8217;ll know enough to have meaningful conversations with storage vendors.</p>
<p>Then I shift gears, talking about <strong>performance and capacity</strong>. I begin with throughput vs. latency and the implications of parallel and serial storage access. Then I go into queueing, caching, and streaming and the difference between asynchronous and synchronous access. This will help you to understand how storage works in production.</p>
<p>The next section focuses on <strong>understanding storage usage</strong>. Storage administrators spend much of their time measuring and analyzing capacity, and this question of utilization is essential. We then talk about optimization of capacity and the trade-off between capacity and performance. If you&#8217;re looking at storage management software, this section is for you.</p>
<p>Finally, I wrap up with a discussion of <strong>storage best practices</strong>: Managing and monitoring storage, storage system sizing and planning, storage design considerations, and data protection fundamentals. This is a very-brief summary of the rules of thumb I&#8217;ve come to rely on after decades in enterprise storage.</p>
<p><a href="http://launch.solarwinds.com/rs/564-VFR-008/images/1602_SRM_Enterprise-Storage-eBook.pdf"><img style=' float: right; padding: 4px; margin: 0 0 2px 7px;'  loading="lazy" decoding="async" class="size-medium wp-image-9581 alignright" src="http://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-232x300.png" alt="" width="232" height="300" srcset="https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-232x300.png 232w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-116x150.png 116w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2-500x647.png 500w, https://blog.fosketts.net/wp-content/uploads/2017/04/1602_SRM_Enterprise-Storage-eBook-2.png 510w" sizes="auto, (max-width: 232px) 100vw, 232px" /></a></p>
<p>The whole book is under 30 pages long, and I am very proud to have produced this concise treatment of enterprise storage. I urge you to <a href="http://launch.solarwinds.com/rs/564-VFR-008/images/1602_SRM_Enterprise-Storage-eBook.pdf">download the book</a> and give it a read! And I welcome your feedback!</p>
<blockquote><p>Note: Somehow, the sidebar entitled &#8220;Flash Storage&#8221; has the wrong text. Here&#8217;s the correct sidebar text: &#8220;We’ll be talking a lot about flash memory later. Although this is dynamic, solid-state, and read/write, it has special characteristics that make it ill-suited as primary system memory. Plus, it’s non-volatile. Therefore, even though it’s technically memory, we typically use it as storage!&#8221;</p></blockquote>
<p>This e-book is sponsored by <a href="http://www.solarwinds.com">SolarWinds</a>, who also handled the formatting, illustrations, and design. I encourage you to learn more about SolarWinds products and check out their <a href="https://thwack.solarwinds.com/welcome">thwack</a> technical community, and especially the <a href="https://thwack.solarwinds.com/community/solarwinds-community/geek-speak_tht">Geek Speak</a> blog!</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2017/04/04/download-free-e-book-essential-enterprise-storage-concepts/">Download My Free E-Book, &#8220;Essential Enterprise Storage Concepts&#8221;!</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2017/04/04/download-free-e-book-essential-enterprise-storage-concepts/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Dell, Wall Street, Magic Beans, and the End of EMC</title>
		<link>https://blog.fosketts.net/2016/09/07/dell-wall-street-magic-beans-end-emc/</link>
					<comments>https://blog.fosketts.net/2016/09/07/dell-wall-street-magic-beans-end-emc/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Wed, 07 Sep 2016 17:45:24 +0000</pubDate>
				<category><![CDATA[Computer History]]></category>
		<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Cisco]]></category>
		<category><![CDATA[Dell]]></category>
		<category><![CDATA[EMC]]></category>
		<category><![CDATA[HPE]]></category>
		<category><![CDATA[investing]]></category>
		<category><![CDATA[Nutanix]]></category>
		<category><![CDATA[SimpliVity]]></category>
		<category><![CDATA[stock market]]></category>
		<category><![CDATA[VMware]]></category>
		<category><![CDATA[Wall Street]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9421</guid>

					<description><![CDATA[<p>As of today, EMC Corporation is no longer an independent company. Who thought we would see this day? From now on, EMC is simply a brand for parts of Dell's Infrastructure Solutions and Services businesses. This marks a major shift in the enterprise storage world, for IT, and perhaps for American business in general.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/09/07/dell-wall-street-magic-beans-end-emc/">Dell, Wall Street, Magic Beans, and the End of EMC</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>As of today, EMC Corporation is no longer an independent company. Who thought we would see this day? From now on, EMC is simply a brand for parts of Dell&#8217;s Infrastructure Solutions and Services businesses. This marks a major shift in the enterprise storage world, for IT, and perhaps for American business in general.</p>
<p><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9422" src="http://blog.fosketts.net/wp-content/uploads/2016/09/DellEMC_Logo_Prm_Blue_Gry_rgb-500x89.jpg" alt="DellEMC_Logo_Prm_Blue_Gry_rgb" width="500" height="89" srcset="https://blog.fosketts.net/wp-content/uploads/2016/09/DellEMC_Logo_Prm_Blue_Gry_rgb-500x89.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/09/DellEMC_Logo_Prm_Blue_Gry_rgb-150x27.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/09/DellEMC_Logo_Prm_Blue_Gry_rgb-300x53.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/09/DellEMC_Logo_Prm_Blue_Gry_rgb-768x137.jpg 768w" sizes="auto, (max-width: 500px) 100vw, 500px" /></p>
<h3>Investors Are Happy with Magic Beans?</h3>
<p>I have been <a href="http://blog.fosketts.net/2015/10/12/doodling-on-the-value-of-emc-vmware-and-dells-offer/">skeptical about the Dell and EMC merger</a> since the announcement. The price seemed too low, the included &#8220;tracking stock&#8221; too flimsy, and the questions about VMware, RSA, and other Dell and EMC businesses too great. The purported value of the deal ($33.15 per share) was based on a cash price of $24.05 per share of EMC plus 1/9 of a share in a tracking stock to value the company&#8217;s ownership of VMware. As discussed at the time, this offer dramatically under-values not just EMC itself but also its ownership stake in VMware, Pivotal, and the new SecureWorks company.</p>
<p>Essentially, Dell bought all the $28 EMC shares for $24 plus some magic beans.</p>
<p><iframe loading="lazy" title="Into the Woods Official Clip &quot;Magic Beans&quot; (2014) - Meryl Streep, Emily Blunt HD" width="500" height="281" src="https://www.youtube.com/embed/2wzPytN5oB0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>But it seems that Wall Street was willing to go along with the deal to rid itself of EMC. There has long been a cloud hanging over EMC&#8217;s enterprise value due to a perception that the traditional enterprise storage world is on the verge of massive disruption. Analysts and investors apparently do not believe that EMC can adapt to the new world of IT and maintain earnings and profitability. Investors appear to be willing to accept this compromise, letting EMC go to Dell to get it off their hands.</p>
<p>I&#8217;m betting the VMware tracking stock ($DVMT) will look pretty good for a while, rising to between $45 and $60 per share before collapsing for good. Why $45? That&#8217;s the difference between EMC&#8217;s share price the day before the merger ($29.05) and Dell&#8217;s cash ($24.05) times 9 (remember that each shareholder gets 1/9 of a share of $DVMT). Owners of $EMC shares will sell out somewhere in there and that will be that.</p>
<p>In the meantime, I also expect lots of &#8220;pump the price&#8221; stories to be published, suggesting that $DVMT is a good way to get shares of VMware on the cheap. But tracking shares have <a href="http://www.sec.gov/answers/track.htm">no intrinsic value and no voting power</a>. Unless Dell adds a dividend to $DVMT, it&#8217;s likely to follow <a href="https://en.wikipedia.org/wiki/Tracking_stock">the trajectory of similar tracking stock offerings</a> (and magic beans): Right down the drain.</p>
<h3>The Future of Dell EMC is Brighter</h3>
<p>But the Dell/EMC deal is fundamentally all about brushing aside Wall Street&#8217;s demands and focusing on corporate success. And that aspect of the deal is a whole lot more positive. Dell has handled the integration as well as could be hoped, and it is likely that the newly-formed Dell Infrastructure Solutions business (selling as Dell EMC and managed out of Hopkinton) has a strong few years ahead of it.</p>
<p>Although Dell has traditionally been a weak spender when it comes to R&amp;D, EMC is awash in solid technologies. Just about every product line is competitive in its market segment and some (DSSD, ScaleIO) are trend-setters. Dell EMC ought to be able to make hay with these products, selling to traditional EMC buyers as well as Dell&#8217;s massive customer base. And now they have the Dell Enterprise Solutions products to push as well.</p>
<p>Certainly Dell EMC has too many product lines now, but the correction won&#8217;t hurt customers. One can only hope that management will have the wisdom to cut products based on long-term viability rather than short-term sales. EMC has been content to allow competitive product lines to continue until a clear winner emerges. Perhaps Dell will continue this practice, especially now that Wall Street isn&#8217;t watching as closely.</p>
<p>Personally, I remain concerned for my many friends inside this new company. Dell would be wise to be graceful and kind to employees, allowing them to move to other areas instead of letting them go. But many have already given notice, casting a cloud on the future of the company. Dell needs to retain these talented people.</p>
<p>Dell also needs to explain more clearly what happens with the other parts of the combined business. Security has a home, but what about Dell&#8217;s networking assets? I&#8217;m not really worried about VMware under this new management regime, but what was the deal with that SecureWorks transaction? There are so many questions and still not many answers.</p>
<h3>Impact Beyond Dell</h3>
<p>The biggest question is the impact of this combination on the rest of the industry. HP clearly doesn&#8217;t agree with the Dell &#8220;embiggening&#8221; strategy: They split into HP <del>Ink</del>Inc and HPE and seem to be doing well so far. But Cisco has yet to react.</p>
<p>Then there&#8217;s the changed acquisition landscape for all the startups in the industry. Dell and EMC aren&#8217;t going to be bidding on startups any time soon, and HPE and Oracle seem fairly sated right now. This leaves Cisco as the only hungry company at the buffet. I expect some feasting to be done, building a new full-line competitor for Dell and HPE!</p>
<p>We must also consider the new world of IT, with a resurgent Microsoft joining Amazon, Google, and the cloud providers in attacking the IT industry. This is the reason investors were willing to take Dell&#8217;s magic beans in the first place, and it&#8217;s the real deal. Many are saying that sales of enterprise IT products have peaked, leaving companies like Dell to fight it out in a narrowing ring.</p>
<p>But this transaction is also a vote of &#8220;no confidence&#8221; in Wall Street. Dell went private and has, by all accounts, done very well since. Many other companies are following suit. Clearly, corporate management is fed up with the quarterly mindset so prevalent among investors and analysts. Will we see fewer IPOs, as management realizes that being a public company isn&#8217;t a positive for a growing company? Watch for Nutanix and SimpliVity IPOs as bellwethers.</p>
<h3>Stephen&#8217;s Stance</h3>
<p>EMC&#8217;s management clearly knew that it was time to make a move, and were willing to take a little less for the company than folks like me thought. Dell management clearly thought they could leverage EMC to solidify their position at the top of the IT market, and I bet they&#8217;ll do a good job with it at least for a while. Investors clearly are happy to take what they got (some cash and some magic beans), and who am I to question their logic? As for me, I&#8217;m just sad to see one more pillar of the enterprise storage industry wiped away.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/09/07/dell-wall-street-magic-beans-end-emc/">Dell, Wall Street, Magic Beans, and the End of EMC</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2016/09/07/dell-wall-street-magic-beans-end-emc/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Migrating Data With ZFS Send and Receive</title>
		<link>https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/</link>
					<comments>https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Thu, 18 Aug 2016 21:15:24 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Terabyte home]]></category>
		<category><![CDATA[data movement]]></category>
		<category><![CDATA[data replication]]></category>
		<category><![CDATA[FreeNAS]]></category>
		<category><![CDATA[replication]]></category>
		<category><![CDATA[rsync]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9387</guid>

					<description><![CDATA[<p>I like ZFS Send and Receive, but I'm not totally sold on it. I've used rsync for decades, so I'm not giving it up anytime soon. Even so, I can see the value of ZFS Send and Receive for local migration and data management tasks as well as the backup and replication tasks that are typically talked about.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/">Migrating Data With ZFS Send and Receive</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a huge fan of rsync as a migration tool, but FreeNAS is ZFS-centric so I decided to take a shot at using some of the native tools to move data. I&#8217;m not sold on it for daily use, but ZFS Send and Receive is awfully useful for &#8220;internal&#8221; maintenance tasks like moving datasets and rebuilding pools. Since this kind of migration isn&#8217;t well-documented online, I figured I would make my notes public here.</p>
<p><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9388" src="http://blog.fosketts.net/wp-content/uploads/2016/08/6254961323_a156af5f81_z-500x368.jpg" alt="Shipping by Constance Abram" width="500" height="368" srcset="https://blog.fosketts.net/wp-content/uploads/2016/08/6254961323_a156af5f81_z-500x368.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/08/6254961323_a156af5f81_z-150x110.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/08/6254961323_a156af5f81_z-300x221.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/08/6254961323_a156af5f81_z.jpg 640w" sizes="auto, (max-width: 500px) 100vw, 500px" /></p>
<h3>ZFS Send and Receive</h3>
<p>ZFS is a volume manager, a file system, and a set of data management tools all bundled together.<sup class='footnote'><a href='#fn-9387-1' id='fnref-9387-1' onclick='return fdfootnote_show(9387)'>1</a></sup> One of the interesting features of ZFS is the ability to backup and restore data natively using the &#8220;send&#8221; and &#8220;receive&#8221; (or &#8220;recv&#8221;) commands.</p>
<p>Like rsync, tar, and other UNIX backup utilities, ZFS Send and Receive is a two-part process:</p>
<ol>
<li>&#8220;zfs send&#8221; reads snapshot data and outputs a datastream to stdout</li>
<li>&#8220;zfs recv&#8221; reads a stream and restores data to a snapshot</li>
</ol>
<p>You&#8217;ll note that I said &#8220;snapshot&#8221; there: In the interest of consistency, ZFS Send and Receive mainly operates on ZFS snapshots. This is a huge advantage over other utilities since it gives a point-in-time copy rather than reading some data at one time and other data at a different time.</p>
<p>It&#8217;s also important to see that ZFS Send and Receive respectively output or input a stream of data. You can redirect this to a file or send it over an ssh connection, but you can also use it locally with a simple pipe. This means that ZFS Send and Receive can be used to move data internally to a system as well as for backup or remote replication.</p>
<h3>Copying a ZFS Dataset</h3>
<p>Let&#8217;s say you have a dataset called gang/scooby and want to create a perfect copy, gang/scrappy. ZFS Send and Receive can do this quickly and easily, guaranteeing a perfect and consistent copy! Here&#8217;s how:</p>
<ol>
<li>Try to quiet gang/scooby (turn off Samba, stop applications, etc.)</li>
<li>Make a snapshot: zfs snap gang/scooby@migrate</li>
<li>Send that snapshot to overwrite gang/scrappy: zfs send -R gang/scooby@migrate | zfs recv -F gang/scrappy<sup class='footnote'><a href='#fn-9387-2' id='fnref-9387-2' onclick='return fdfootnote_show(9387)'>2</a></sup></li>
<li>Check out the results: zfs list -t snapshot -r gang</li>
<li>Promote gang/scrappy&#8217;s new snapshot to become the dataset&#8217;s data: zfs rollback gang/scrappy@migrate</li>
</ol>
<p>Note that this will totally blow away any existing data in the target dataset. If this is not what you want to do, you have been warned! But no one likes Scrappy Doo anyway&#8230;</p>
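The five steps above can be collected into one small script. This is a dry-run sketch rather than battle-tested tooling: each command is echoed so nothing destructive happens; clear DRYRUN to run it against a real pool.

```shell
#!/bin/sh
# Dry-run sketch of the dataset-copy procedure above. Each command is
# echoed, not executed; clear DRYRUN (DRYRUN=) to run on a real pool.
set -eu
SRC=gang/scooby
DST=gang/scrappy
SNAP=migrate
DRYRUN=echo

# Step 2: take a consistent, point-in-time snapshot of the source
$DRYRUN zfs snap "$SRC@$SNAP"

# Step 3: replicate it (-R includes descendant datasets, snapshots, and
# properties) over the target, overwriting whatever is there (-F)
$DRYRUN sh -c "zfs send -R $SRC@$SNAP | zfs recv -F $DST"

# Step 4: confirm both datasets now carry the @migrate snapshot
$DRYRUN zfs list -t snapshot -r "${SRC%%/*}"

# Step 5: roll the target back so its live data matches the snapshot
$DRYRUN zfs rollback "$DST@$SNAP"
```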
<h3>Wait For It&#8230;</h3>
<p>You can also redirect ZFS Send to a file and tell ZFS Receive to read from a file. This is handy when you need to rebuild a pool as well as for backup and replication.</p>
<p>In this example, we will send gang/scooby to a file and then restore that file later.</p>
<ol>
<li>Try to quiet gang/scooby</li>
<li>Make a snapshot: zfs snap gang/scooby@ghost</li>
<li>Send that snapshot to a compressed file: zfs send -R gang/scooby@ghost | gzip &gt; /tmp/ghost.gz</li>
<li>Do what you need to gang/scooby</li>
<li>Restore the data to gang/scooby: gzcat /tmp/ghost.gz | zfs recv -F gang/scooby</li>
<li>Promote gang/scooby&#8217;s new snapshot to become the dataset&#8217;s data: zfs rollback gang/scooby@ghost</li>
</ol>
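Here is the same round trip as a dry-run script, in the spirit of the steps above. Commands are echoed rather than executed; clear DRYRUN on a real system.

```shell
#!/bin/sh
# Dry-run sketch of sending a dataset to a compressed file and
# restoring it later. Clear DRYRUN (DRYRUN=) to run on a real pool.
set -eu
DS=gang/scooby
SNAP=ghost
FILE=/tmp/ghost.gz
DRYRUN=echo

# Step 2: snapshot the (hopefully quiesced) dataset
$DRYRUN zfs snap "$DS@$SNAP"

# Step 3: serialize the replication stream and compress it into a file
$DRYRUN sh -c "zfs send -R $DS@$SNAP | gzip > $FILE"

# Step 4: rebuild the pool, shuffle disks, or whatever brought you here

# Step 5: decompress the saved stream and replay it over the dataset
$DRYRUN sh -c "gzcat $FILE | zfs recv -F $DS"

# Step 6: roll back so the live data matches the restored snapshot
$DRYRUN zfs rollback "$DS@$SNAP"
```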
<h3>Incremental Improvements</h3>
<p>You can also send incremental updates of ZFS datasets. This is handy for more-active data that can&#8217;t be offline for the hours it might take to move data from place to place.</p>
<p>In this example, we&#8217;ll send an initial snapshot using ZFS Send and Receive, followed by an incremental snapshot of the data that changed.</p>
<ol>
<li>Make a snapshot: zfs snap gang/scooby@initial</li>
<li>Send that snapshot to overwrite gang/scrappy: zfs send -R gang/scooby@initial | zfs recv -F gang/scrappy</li>
<li>Check out the results: zfs list -t snapshot -r gang</li>
<li>Try to quiet gang/scooby</li>
<li>Make another snapshot: zfs snap gang/scooby@incremental</li>
<li>Send the incremental snapshot to gang/scrappy: zfs send -R -i initial gang/scooby@incremental | zfs recv gang/scrappy</li>
<li>Note that both gang/scooby and gang/scrappy now have both @initial and @incremental snapshots: zfs list -t snapshot -r gang</li>
<li>Promote the latest gang/scrappy snapshot to become the dataset&#8217;s data: zfs rollback gang/scrappy@incremental</li>
</ol>
<p>You could script this to make sure it happens as quickly as possible. Of course it&#8217;s best not to try to do something like this on an active dataset, but it could still be useful on an active server: You only need to truly quiet the source application before the incremental snapshot is taken, and this will hopefully be much quicker than the initial data transfer.</p>
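A minimal version of such a script might look like this, in the same dry-run style (commands echoed; clear DRYRUN to run it against a real pool):

```shell
#!/bin/sh
# Dry-run sketch of scripted initial-plus-incremental replication.
# Commands are echoed, not executed; clear DRYRUN for a real pool.
set -eu
SRC=gang/scooby
DST=gang/scrappy
DRYRUN=echo

# Full send of the initial snapshot; the source can stay busy
$DRYRUN zfs snap "$SRC@initial"
$DRYRUN sh -c "zfs send -R $SRC@initial | zfs recv -F $DST"

# Quiesce the application briefly, then snapshot again and send only
# the blocks changed since @initial (-i names the base snapshot)
$DRYRUN zfs snap "$SRC@incremental"
$DRYRUN sh -c "zfs send -R -i initial $SRC@incremental | zfs recv $DST"

# Make the latest snapshot the live contents of the copy
$DRYRUN zfs rollback "$DST@incremental"
```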
<h3>Stephen&#8217;s Stance</h3>
<p>I like ZFS Send and Receive, but I&#8217;m not totally sold on it. I&#8217;ve used rsync for decades, so I&#8217;m not giving it up anytime soon. Even so, I can see the value of ZFS Send and Receive for local migration and data management tasks as well as the backup and replication tasks that are typically talked about.</p>
<p><em>Update: Fixed a few typos. You need to add -R to zfs send! Thanks!</em></p>
<blockquote><p>Here&#8217;s my entire FreeNAS series so far:</p>
<ol>
<li><a href="http://blog.fosketts.net/2016/08/03/hello-freenas-goodbye-drobo-iomega/">Hello FreeNAS! Goodbye Drobo and Iomega…</a></li>
<li><a href="http://blog.fosketts.net/2016/08/04/freenas-build-supermicro-x10sl7-intel-haswell-xeon-ecc-ram/">My FreeNAS Build: Supermicro X10SL7, Intel Haswell Xeon, ECC RAM</a></li>
<li><a href="http://blog.fosketts.net/2016/08/10/14-drives-14-ports-case-freenas/">14 Drives For 14 Ports: A Case For FreeNAS</a></li>
<li><a href="http://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/">It’s Fine To Mount Hard Drives On Their Side Or Even Upside-Down</a></li>
<li><a href="http://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/">Migrating Data With ZFS Send and Receive</a></li>
<li><a href="http://blog.fosketts.net/2016/08/25/freenas-first-impressions/">FreeNAS First Impressions</a></li>
</ol>
</blockquote>
<p><em>Image credit: &#8220;<a href="https://www.flickr.com/photos/constance182/6254961323/in/photolist-awJj5K-ekV7j4-rjfTHm-mDxWPF-7xYLpb-5p3Dbm-rbzKpj-5wFUaB-hYiMdJ-p5vDoH-54qMU7-pSpK1o-M79hz-hN5Jpn-8vEXd9-qEScwW-8vBWaH-a4S5yU-dE8uA5-8vEXtU-CFqVR-8vEXmG-CFqXA-oSjJ1h-9NFVNX-8K5xBM-34Ydj3-9NH9LR-8vtJRX-bYLbvy-9chSab-9NEwpU-e22Pii-de124v-5oT6ha-f4G7Qh-zLuaA-88jr2T-8vEXbY-8vEXpw-2rrP1L-5zYqyR-9NJAiA-fUkmTs-8vBVYn-cwtvgj-9yuEYN-bjm6in-iCKMic-8vsETH">Shipping</a>&#8221; by <a href="https://www.flickr.com/photos/constance182/">Constance Abram</a>, CC-by-NC-ND</em></p>
<div class='footnotes' id='footnotes-9387'>
<div class='footnotedivider'></div>
<ol>
<li id='fn-9387-1'> Ok, it&#8217;s not <em>technically</em> a volume manager, but neither is it <i>technically</i> a file system! <span class='footnotereverse'><a href='#fnref-9387-1'>&#8617;</a></span></li>
<li id='fn-9387-2'> If you need to have root privileges, use: sudo zfs send -R gang/scooby@migrate | ( sudo zfs recv -F gang/scrappy ) <span class='footnotereverse'><a href='#fnref-9387-2'>&#8617;</a></span></li>
</ol>
</div>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/">Migrating Data With ZFS Send and Receive</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/feed/</wfw:commentRss>
			<slash:comments>10</slash:comments>
		
		
			</item>
		<item>
		<title>It&#8217;s Fine To Mount Hard Drives On Their Side Or Even Upside-Down</title>
		<link>https://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/</link>
					<comments>https://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Sat, 13 Aug 2016 18:06:01 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[Terabyte home]]></category>
		<category><![CDATA[disk drive]]></category>
		<category><![CDATA[hard disk]]></category>
		<category><![CDATA[hard disk drive]]></category>
		<category><![CDATA[HGST]]></category>
		<category><![CDATA[mount]]></category>
		<category><![CDATA[Panzura]]></category>
		<category><![CDATA[Seagate]]></category>
		<category><![CDATA[Toshiba]]></category>
		<category><![CDATA[Western Digital]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9367</guid>

					<description><![CDATA[<p>Have you ever wondered if mounting a hard disk drive on its side or even upside-down affects its lifespan or reliability? According to every drive manufacturer, it's perfectly acceptable to mount a hard disk drive in any orientation as long as it's not tilted and has sufficient cooling.</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/">It&#8217;s Fine To Mount Hard Drives On Their Side Or Even Upside-Down</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>Have you ever wondered if mounting a hard disk drive on its side or even upside-down affects its lifespan or reliability? According to every drive manufacturer, it&#8217;s perfectly acceptable to mount a hard disk drive in any orientation as long as it&#8217;s not tilted and has sufficient cooling. File this under &#8220;now you know&#8221;&#8230;</p>
<figure id="attachment_9355" aria-describedby="caption-attachment-9355" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-large wp-image-9355" src="http://blog.fosketts.net/wp-content/uploads/2016/08/image-1-500x375.jpeg" alt="This &quot;double-decker&quot; drive sled from my new NZXT H440 case has one drive on top and another on the bottom, upside-down. And that's perfectly fine." width="500" height="375" srcset="https://blog.fosketts.net/wp-content/uploads/2016/08/image-1-500x375.jpeg 500w, https://blog.fosketts.net/wp-content/uploads/2016/08/image-1-150x113.jpeg 150w, https://blog.fosketts.net/wp-content/uploads/2016/08/image-1-300x225.jpeg 300w, https://blog.fosketts.net/wp-content/uploads/2016/08/image-1-768x576.jpeg 768w" sizes="auto, (max-width: 500px) 100vw, 500px" /><figcaption id="caption-attachment-9355" class="wp-caption-text">This &#8220;double-decker&#8221; drive sled from my new NZXT H440 case has one drive on top and another on the bottom, upside-down. And that&#8217;s perfectly fine.</figcaption></figure>
<p>My new server case, the NZXT H440, has five drive sleds with mounting holes for one drive on top and another on the bottom. Although this makes power and data cable routing a little more interesting, it&#8217;s perfectly fine to mount drives like this. Especially with the H440, since there are three very large fans blowing air across them at all times!</p>
<p>Just to make sure, here&#8217;s what each drive manufacturer says on the subject of drive mounting:<sup class='footnote'><a href='#fn-9367-1' id='fnref-9367-1' onclick='return fdfootnote_show(9367)'>1</a></sup></p>
<ul>
<li><a href="http://knowledge.seagate.com/articles/en_US/FAQ/195931en">Seagate</a>: &#8220;All Seagate and Maxtor-brand hard drives can be fitted sideways or upside down. As long as they are not moved during use and get enough cooling, it is irrelevant in which direction they are mounted.&#8221;</li>
<li><a href="http://support.wdc.com/KnowledgeBase/answer.aspx?ID=981#mount">Western Digital</a>: &#8220;The drive can be mounted sideways, on end, or even upside down as long as the mounting screws are used properly.&#8221;</li>
<li><a href="https://www.hgst.com/sites/default/files/resources/SATAInstallposter.pdf">HGST</a>: &#8220;Hitachi Deskstar drive can be mounted with any side or end vertical or horizontal. Do not mount the drive in a tilted position.&#8221;</li>
</ul>
<p>So it&#8217;s perfectly fine to mount a hard disk drive upside down, vertical, or on either end as long as it&#8217;s secure from vibration and shock, not tilted at an angle, and gets enough cooling. This is no surprise since every vendor sells external USB drives with vertical or end-on mounting.<sup class='footnote'><a href='#fn-9367-2' id='fnref-9367-2' onclick='return fdfootnote_show(9367)'>2</a></sup></p>
<blockquote><p>Here&#8217;s my entire FreeNAS series so far:</p>
<ol>
<li><a href="http://blog.fosketts.net/2016/08/03/hello-freenas-goodbye-drobo-iomega/">Hello FreeNAS! Goodbye Drobo and Iomega…</a></li>
<li><a href="http://blog.fosketts.net/2016/08/04/freenas-build-supermicro-x10sl7-intel-haswell-xeon-ecc-ram/">My FreeNAS Build: Supermicro X10SL7, Intel Haswell Xeon, ECC RAM</a></li>
<li><a href="http://blog.fosketts.net/2016/08/10/14-drives-14-ports-case-freenas/">14 Drives For 14 Ports: A Case For FreeNAS</a></li>
<li><a href="http://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/">It’s Fine To Mount Hard Drives On Their Side Or Even Upside-Down</a></li>
<li><a href="http://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/">Migrating Data With ZFS Send and Receive</a></li>
<li><a href="http://blog.fosketts.net/2016/08/25/freenas-first-impressions/">FreeNAS First Impressions</a></li>
</ol>
</blockquote>
<div class='footnotes' id='footnotes-9367'>
<div class='footnotedivider'></div>
<ol>
<li id='fn-9367-1'> Sorry, I couldn&#8217;t find Toshiba&#8217;s stance on their web site, but this is pretty convincing anyway. <span class='footnotereverse'><a href='#fnref-9367-1'>&#8617;</a></span></li>
<li id='fn-9367-2'> Amusingly, most external USB drives from the &#8220;big names&#8221; definitely do not have sufficient airflow for adequate cooling! <span class='footnotereverse'><a href='#fnref-9367-2'>&#8617;</a></span></li>
</ol>
</div>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/">It&#8217;s Fine To Mount Hard Drives On Their Side Or Even Upside-Down</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2016/08/13/fine-mound-hard-drives-side-even-upside/feed/</wfw:commentRss>
			<slash:comments>10</slash:comments>
		
		
			</item>
		<item>
		<title>Free as in Coffee &#8211; Thoughts on the State of OpenStack</title>
		<link>https://blog.fosketts.net/2016/05/02/free-coffee-thoughts-state-openstack/</link>
					<comments>https://blog.fosketts.net/2016/05/02/free-coffee-thoughts-state-openstack/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Mon, 02 May 2016 22:01:51 +0000</pubDate>
				<category><![CDATA[Computer History]]></category>
		<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[Cinder]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[Mirantis]]></category>
		<category><![CDATA[NetApp]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenStack]]></category>
		<category><![CDATA[OpenStack Summit]]></category>
		<category><![CDATA[Rackspace]]></category>
		<category><![CDATA[Tech Field Day]]></category>
		<category><![CDATA[VMware]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9298</guid>

					<description><![CDATA[<p>Last week I headed to Austin, Texas to attend the semi-annual OpenStack Summit there. Along with the usual socializing, I was looking to understand the current state of the technology: What does OpenStack really mean these days, and where is it going? Let&#8217;s start with &#8220;free&#8221;. As &#8220;the Internet&#8221; is quick to point out, this critical word has multiple [&#8230;]</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/05/02/free-coffee-thoughts-state-openstack/">Free as in Coffee &#8211; Thoughts on the State of OpenStack</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>Last week I headed to Austin, Texas to attend the semi-annual OpenStack Summit there. Along with the usual socializing, I was looking to understand the current state of the technology: What does OpenStack really mean these days, and where is it going?</p>
<p><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9302" src="http://blog.fosketts.net/wp-content/uploads/2016/05/Free-Coffee-500x334.jpg" alt="Free Coffee!" width="500" height="334" srcset="https://blog.fosketts.net/wp-content/uploads/2016/05/Free-Coffee-500x334.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/05/Free-Coffee-150x100.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/05/Free-Coffee-300x200.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/05/Free-Coffee.jpg 640w" sizes="auto, (max-width: 500px) 100vw, 500px" /></p>
<p>Let&#8217;s start with &#8220;free&#8221;. As &#8220;the Internet&#8221; is quick to point out, this critical word has multiple meanings, and corporate interests have a tendency to divert everything toward profit. OpenStack is a free and open-source software platform for cloud computing, but it&#8217;s also a community of innovators ranging from enthusiasts to lone hackers to the world&#8217;s largest corporations.</p>
<p>OpenStack is free, but like all open source projects, someone has to pick up the tab. Who&#8217;s paying for development and supporting users? Certainly many individual contributors are involved, but the vast majority of code commits and new project components are coming from vendors trying to leverage OpenStack to sell their products and services.</p>
<p>OpenStack may be &#8220;free as in freedom&#8221; and &#8220;free as in coffee&#8221;, but it sure as heck costs a lot to deploy and productize. This is why so many companies came to Austin to be part of the Summit. Some, like Mirantis, see a tremendous opportunity to profit from growing OpenStack deployments. Others want to plug their existing products into these massive installs. Yet more seem to be hedging their bets or even just happy to be part of &#8220;the new thing&#8221;.</p>
<p>One popular refrain in the vendor booths at the Summit was that &#8220;no one but Mirantis is making any money here at this point&#8221;. This was refuted by a few companies who have managed to land big service provider customers for hardware and software, but the overall sense is that there&#8217;s a lot of free coffee being given out without a lot of sales coming in return. &#8220;Elephant hunting&#8221; was a familiar phrase.</p>
<p><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9299" src="http://blog.fosketts.net/wp-content/uploads/2016/05/NetApp-SolidFire-Jenga-500x333.jpg" alt="NetApp/SolidFire Jenga" width="500" height="333" srcset="https://blog.fosketts.net/wp-content/uploads/2016/05/NetApp-SolidFire-Jenga-500x333.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/05/NetApp-SolidFire-Jenga-150x100.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/05/NetApp-SolidFire-Jenga-300x200.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/05/NetApp-SolidFire-Jenga.jpg 640w" sizes="auto, (max-width: 500px) 100vw, 500px" /></p>
<p>As with any large open source project, OpenStack is developed mainly by large corporations. Companies like IBM, HP, Intel, and NetApp see it as an entrance to the new cloud/service provider market. Red Hat, Mirantis, and Canonical want to sell services and support. Rackspace needs a functioning cloud. All of these companies are pulling the project in different directions, and all are doing the right thing as far as they are concerned.</p>
<p>The risk, however, is that each of these agendas might disrupt or derail the whole project. Ultimately, customers will decide which to adopt, but this isn&#8217;t as democratic as it seems. Their decisions are affected by development resources and the availability of quality integration components more than the inherent technical appropriateness of a protocol or product. This is why NetApp is smart to be pouring resources into storage integration projects like Cinder: They enable storage integration generally while reassuring customers that NetApp products will work really well!</p>
<p>The happy outcome isn&#8217;t necessarily a cynical product push. Rather, by contributing to the code base, all these companies improve the entire product. But what about the companies that aren&#8217;t contributing much base code but are instead focused only on integrating their own products? Although not helping build OpenStack, these companies still contribute to the feeling of longevity for the project as a whole.</p>
<p>What will OpenStack be outside the enterprise? Although most assume it&#8217;s just for building &#8220;your own Amazon&#8221;, customers have been clamoring for an overarching data center management framework. OpenStack might just become that, too. This is certainly how Platform9 sees OpenStack: They can leverage the common OpenStack management framework across heterogeneous enterprise data centers, not just for the cloud. VMware too is quick to point out that OpenStack isn&#8217;t tied to a single hypervisor and works just as well with ESXi inside.</p>
<p><img style=' display: block; margin-right: auto; margin-left: auto;'  loading="lazy" decoding="async" class="aligncenter size-large wp-image-9301" src="http://blog.fosketts.net/wp-content/uploads/2016/05/We-Are-OpenStack-500x334.jpg" alt="We Are OpenStack" width="500" height="334" srcset="https://blog.fosketts.net/wp-content/uploads/2016/05/We-Are-OpenStack-500x334.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/05/We-Are-OpenStack-150x100.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/05/We-Are-OpenStack-300x200.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/05/We-Are-OpenStack.jpg 640w" sizes="auto, (max-width: 500px) 100vw, 500px" /></p>
<p>OpenStack might prove to be the ultimate unifier not just of cloud but of conventional IT systems as well. One could envision a future where OpenStack is both the unified management layer for the entire data center but also the framework on which next-generation applications rest. That&#8217;s the optimistic projection.</p>
<p>The pessimist might spot more clouds than blue skies. OpenStack could become just what it is today: The cloud platform for service providers not named Amazon. This is an acceptable outcome, but not exactly a transformative one for the enterprise data center.</p>
<p><em>Disclaimer: Tech Field Day came to OpenStack Summit, and <a href="http://techfieldday.com/appearance/netapp-presents-at-tech-field-day-extra-at-openstack-summit-austin-2016/">NetApp was a sponsor</a>. We also interviewed a dozen non-sponsors as part of the Tech Field Day Forum. See our <a href="http://techfieldday.com/appearance/tech-field-day-forum-openstack-summit-austin-2016/">Day 1</a> and <a href="http://techfieldday.com/appearance/tech-field-day-forum-at-openstack-summit-austin-2016/">Day 2</a> interviews. About half of the other companies mentioned here have sponsored Tech Field Day events in the past.</em></p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/05/02/free-coffee-thoughts-state-openstack/">Free as in Coffee &#8211; Thoughts on the State of OpenStack</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2016/05/02/free-coffee-thoughts-state-openstack/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Nimble Storage Rolls Out an All-Flash Array</title>
		<link>https://blog.fosketts.net/2016/02/24/nimble-storage-rolls-out-an-all-flash-array/</link>
					<comments>https://blog.fosketts.net/2016/02/24/nimble-storage-rolls-out-an-all-flash-array/#comments</comments>
		
		<dc:creator><![CDATA[Stephen]]></dc:creator>
		<pubDate>Wed, 24 Feb 2016 21:21:02 +0000</pubDate>
				<category><![CDATA[Enterprise storage]]></category>
		<category><![CDATA[Everything]]></category>
		<category><![CDATA[Features]]></category>
		<category><![CDATA[3D XPoint]]></category>
		<category><![CDATA[3PAR]]></category>
		<category><![CDATA[all-flash array]]></category>
		<category><![CDATA[Compellent]]></category>
		<category><![CDATA[data deduplication]]></category>
		<category><![CDATA[de-duplication]]></category>
		<category><![CDATA[deduplication]]></category>
		<category><![CDATA[Dell]]></category>
		<category><![CDATA[HP]]></category>
		<category><![CDATA[NAND]]></category>
		<category><![CDATA[NetApp]]></category>
		<category><![CDATA[NetApp AFF]]></category>
		<category><![CDATA[Nimble]]></category>
		<category><![CDATA[Nimble Storage]]></category>
		<category><![CDATA[Pure Storage]]></category>
		<category><![CDATA[XtremIO]]></category>
		<guid isPermaLink="false">http://blog.fosketts.net/?p=9277</guid>

					<description><![CDATA[<p>It took longer than I expected for Nimble Storage to introduce an all-flash array, but their AF7000 looks to be a very credible offering. They're targeting XtremIO and Pure with their marketing, but I expect HP, Dell, and especially NetApp to be cross-shopped more frequently. In that fight, I expect the Nimble AF7000 to be very attractive indeed!</p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/02/24/nimble-storage-rolls-out-an-all-flash-array/">Nimble Storage Rolls Out an All-Flash Array</a></small></p>
]]></description>
										<content:encoded><![CDATA[<p>As of today, there&#8217;s one more all-flash storage array on the market: Nimble Storage convened a big San Francisco shindig to roll out their own &#8220;AF&#8221; array, complete with lots of XtremIO and Pure Storage comparisons. Finally, Nimble no longer has to suffer the competitive digs and nasty rumors about this missing product, but there&#8217;s a lot more to this launch than an array sans disk!</p>
<figure id="attachment_9278" aria-describedby="caption-attachment-9278" style="width: 500px;  border: 1px solid #dddddd; background-color: #f3f3f3; padding: 4px; margin: 10px; text-align:center; display: block; margin-right: auto; margin-left: auto;" class="wp-caption aligncenter"><a href="http://blog.fosketts.net/wp-content/uploads/2016/02/L1000195.jpg" rel="attachment wp-att-9278"><img loading="lazy" decoding="async" class="size-large wp-image-9278" src="http://blog.fosketts.net/wp-content/uploads/2016/02/L1000195-500x333.jpg" alt="Suresh Vasudevan and Varun Mehta unveil Nimble's all-flash AF7000 storage array" width="500" height="333" srcset="https://blog.fosketts.net/wp-content/uploads/2016/02/L1000195-500x333.jpg 500w, https://blog.fosketts.net/wp-content/uploads/2016/02/L1000195-150x100.jpg 150w, https://blog.fosketts.net/wp-content/uploads/2016/02/L1000195-300x200.jpg 300w, https://blog.fosketts.net/wp-content/uploads/2016/02/L1000195-768x512.jpg 768w, https://blog.fosketts.net/wp-content/uploads/2016/02/L1000195.jpg 1280w" sizes="auto, (max-width: 500px) 100vw, 500px" /></a><figcaption id="caption-attachment-9278" class="wp-caption-text">Suresh Vasudevan and Varun Mehta unveil Nimble&#8217;s all-flash AF series array</figcaption></figure>
<h3>Not Just Another All-Flash Array</h3>
<p>Many of us in the storage industry wondered if the day would ever come that Nimble would introduce an all-flash array. What took them so long? Was there some legal impediment to them introducing an all-flash storage array? Was it some kind of internal resistance from this hybrid array standard-bearer? Or was there a technical issue?</p>
<p>Even after spending hours with the executives and founders of Nimble, I can&#8217;t answer those questions. But I can say that none of this matters anymore. Nimble finally has an all-flash array to sell.</p>
<p>Every time a company comes out with a new all-flash storage array, I fear I&#8217;ll have to point out that it&#8217;s just a regular disk array with SSDs swapped in. This criticism is inevitable when an all-flash array shares the same codebase as a disk-based storage array. Just ask NetApp (AFF), HP (3PAR), and Dell (Compellent) how often they hear this kind of jab. Then ask them to explain why it&#8217;s not fair in their case.</p>
<p>I must point out that Nimble&#8217;s AF series retains much of the same code from their existing hybrid arrays, so we all know what EMC (XtremIO) and Pure Storage sales reps are going to be saying. But Nimble will certainly counter that this is a proven platform with mature enterprise features and that their all-flash solution integrates seamlessly with existing disk-based arrays. Unlike the feature-light all-flash arrays from some competitors.</p>
<p>If they&#8217;re smart, Nimble will also point out a few novel elements of the AF series:</p>
<ul>
<li>The Nimble InfoSight analytics platform allowed the company to design their all-flash array based on real-world customer usage patterns</li>
<li>Nimble added in-line data deduplication to the array but implemented it in a clever way that doesn&#8217;t use as much memory as competing products (more on this below)</li>
<li>The familiar Nimble write sequentialization technique has been tuned for SSD, allowing them to use bigger/cheaper SSD&#8217;s without losing performance or longevity</li>
<li>The Nimble approach to clustering arrays (which I&#8217;d rather call &#8220;pooling&#8221;) means you can seamlessly migrate data between all-flash and hybrid arrays</li>
</ul>
<p>Is this just another &#8220;rip out the disks&#8221; array? I don&#8217;t think so. Nimble deserves credit for some serious engineering.</p>
<h3>Some Cool Details About the AF Series</h3>
<p>I spent some time with Nimble founder and VP of Engineering, Varun Mehta, in an attempt to figure out what&#8217;s cool about this array. Our wide-ranging and honest discussion brought up quite a few interesting details, which I&#8217;ll try to summarize here.</p>
<p>It is clear that Nimble was intent on locating the areas of technological inefficiency in storage array design. Varun pointed out that the amount of RAM available on a controller tends to limit overall array scalability, especially when data deduplication is used, and that the cost of these controllers has remained high even as the cost of NAND flash capacity has dropped dramatically. He also noted that most of the price drop for NAND is in low-end MLC, not the enterprise flash some arrays use. As Nimble set about developing an all-flash offering, they focused on optimizing the system around these limits.</p>
<p>Let&#8217;s start with the controllers. When designing an all-flash array, there is a temptation to cut out the SSD and integrate flash chips directly. After all, flash is not disk and ought to work better when treated on its own terms. But every flash implementation needs specialized code and processors to function, not to mention physical carriers and connectors. Why not leverage the vast economies of scale by using SSD&#8217;s rather than chips directly? This is what most mainstream flash-based storage arrays do, and it&#8217;s what Nimble chose to do too.</p>
<p>Since flash remains fairly expensive relative to disk capacity, Nimble decided to use cheaper 3D NAND in their array. But cheaper NAND has less longevity and different performance characteristics. In an attempt to &#8220;protect&#8221; the chips in these SSD&#8217;s from excessive wear, Nimble applied a write cache and sequentialization strategy similar to the CASL concept from their hybrid arrays. Incoming writes are committed to NVRAM and then written sequentially to the SSD&#8217;s in whole blocks. Nimble believes that this will allow their 3D NAND drives to last many years longer than their competition.</p>
<p>One challenge for implementing data deduplication is the amount of memory it requires. The &#8220;map&#8221; of block &#8220;fingerprints&#8221; takes up quite a bit of space on every controller in the cluster. This is the main reason many all-flash arrays with inline deduplication can only support tens of terabytes of SSD. Nimble&#8217;s solution to this problem is to &#8220;page out&#8221; parts of the deduplication map to SSD and aggressively pre-fetch what is likely to be needed based on proprietary algorithms.</p>
<p>This is a novel approach, but it remains to be seen if it will impact performance. I can already hear the XtremIO competitive analysis folks thinking up ways to explain deterministic latency! But this trick allows Nimble to support five times the amount of storage capacity per array compared to Pure or XtremIO. For the sake of Nimble&#8217;s AF customers, I hope it works!</p>
<p>Nimble&#8217;s clustering approach really works to their advantage in environments with both all-flash and hybrid arrays. Unlike most symmetrical clusters, Nimble&#8217;s act as a virtualized pool. A LUN can be moved transparently between the arrays participating in the cluster, regardless of whether they are all-flash or hybrid. Clones and snapshots can also be stored on a hybrid array, where capacity costs 1/3 as much. This is a tremendous differentiator for customers, and I expect to hear a lot about it from Nimble in the coming year.</p>
<blockquote><p>See also:</p>
<ul>
<li><a href="http://www.penguinpunk.net/blog/nimble-storage-announces-predictive-flash-platform/">Nimble Storage Announces Predictive Flash Platform</a> by Dan Frith</li>
<li><a href="http://juku.it/en/nimble-storage-all-flash-late-but-right/">Nimble Storage All-Flash, late but right</a> by Enrico Signoretti</li>
<li><a href="http://vipinvk.in/2016/02/nimble-storage-announced-their-predictive-all-flash-arrays/">Nimble Storage announced their Predictive All Flash Arrays</a> by Vipin V.K</li>
</ul>
</blockquote>
<h3>Stephen&#8217;s Stance</h3>
<p>It took longer than I expected for Nimble Storage to introduce an all-flash array, but their AF series looks to be a very credible offering. They&#8217;re targeting XtremIO and Pure with their marketing, but I expect HP, Dell, and especially NetApp to be cross-shopped more frequently. In that fight, I expect the Nimble AF series to be very attractive indeed!</p>
<p><em>Disclaimer: Nimble Storage flew me to San Francisco for this product launch and they&#8217;re a frequent Tech Field Day presenter. In fact, <a href="http://techfieldday.com/appearance/nimble-storage-presents-at-tech-field-day-3/">they launched the company at Tech Field Day 3</a> back in 2010! But no one can force me to write about them on my blog.</em></p>
<hr />
<p><small>© Stephen Foskett for <a href="https://blog.fosketts.net">Stephen Foskett, Pack Rat</a>: <a href="https://blog.fosketts.net/2016/02/24/nimble-storage-rolls-out-an-all-flash-array/">Nimble Storage Rolls Out an All-Flash Array</a></small></p>
]]></content:encoded>
					
					<wfw:commentRss>https://blog.fosketts.net/2016/02/24/nimble-storage-rolls-out-an-all-flash-array/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
