<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Data Center Guru Blog</title>
	<atom:link href="http://datacentergurublog.rosendin.com/feed/" rel="self" type="application/rss+xml" />
	<link>http://datacentergurublog.rosendin.com</link>
	<description>Data Center Guru Blog</description>
	<lastBuildDate>Fri, 26 Jun 2015 15:47:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.4</generator>
	<item>
		<title>Stay Working &amp; Stay Humble</title>
		<link>http://datacentergurublog.rosendin.com/2015/06/stay-working-stay-humble/</link>
		<comments>http://datacentergurublog.rosendin.com/2015/06/stay-working-stay-humble/#comments</comments>
		<pubDate>Fri, 26 Jun 2015 15:47:48 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[california academy of science]]></category>
		<category><![CDATA[renewable energy]]></category>
		<category><![CDATA[rosendin]]></category>
		<category><![CDATA[solar]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=371</guid>
		<description><![CDATA[Every day, I get to do special and interesting things with great people. http://americanbuildersquarterly.com/2015/rosendin-electric/]]></description>
				<content:encoded><![CDATA[<p>Every day, I get to do special and interesting things with great people.</p>
<p>http://americanbuildersquarterly.com/2015/rosendin-electric/</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2015/06/stay-working-stay-humble/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>What Are the Key Elements to Data Center Development?</title>
		<link>http://datacentergurublog.rosendin.com/2014/09/what-are-the-key-elements-to-data-center-development/</link>
		<comments>http://datacentergurublog.rosendin.com/2014/09/what-are-the-key-elements-to-data-center-development/#comments</comments>
		<pubDate>Wed, 10 Sep 2014 06:00:55 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=351</guid>
		<description><![CDATA[I’ve been offline for a while, working on a large project for the firm.  The great news is that it’s closed, and we’re fortunate to have a new client for both Rosendin and BladeRoom.  It will make our Twitter and social media outlets soon enough, so now I can return to a bit more normal <a href="http://datacentergurublog.rosendin.com/2014/09/what-are-the-key-elements-to-data-center-development/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>I’ve been offline for a while, working on a large project for the firm.  The great news is that it’s closed, and we’re fortunate to have a new client for both Rosendin and BladeRoom.  It will make our Twitter and social media outlets soon enough, so now I can return to a bit more normal pace of play.  A couple of months ago, Diane Alber called me up and offered to make me look 15 years younger.  How can anyone turn down an offer like that?  I couldn’t, and I will let you judge the quality of her work.</p>
<p>In exchange, she suggested a few topics as a guest blogger.  This is going up on her site first, with my post trailing by about a week.  Blogs typically discuss the nuances of what UPS system to buy or what is a better widget to employ in your data center, and I will confess that the Data Center Guru blog is no different.  So, I wanted to step back and speak on what really makes a data center successful.</p>
<p><strong>The two key elements to data center development are:  availability &amp; meeting and beating target benchmarks that are dead-matched to your business and goals.</strong></p>
<p>Heresy!   No doctoral thesis on game theory and the mathematics of reliability!  No discourse on differing battery technologies!  No boring diatribe on something technical!  What is the world coming to?  Sheer madness.</p>
<p>The fact is a data center is nothing more than a factory that makes information.  Virtualization?  Nothing more than industrial optimization, albeit one you can’t touch.  PUE?  Merely using less energy, the way skylights did in the old days.  Compaction?  Process reengineering.</p>
<p><strong>And the purpose of the information factory is to make information continuously at the lowest cost per transaction or activity, just like any factory-made product.</strong></p>
<p>When we embarked on the ANSI/BICSI 002 Data Center Standard in the early 2000s, one of the glaring needs that had to be met was a performance-based set of metrics for the data center.  This would be for all four aspects of the data center – the facility, network, platform and application.</p>
<p>In all cases, we should cease to care about the constituent pieces of the ecosystem and start worrying about the end-to-end ecosystem.  What we all worry about for data is continuity – the continuous flow of data or the protection of stored data.  While most facility solutions focus on the particular engineer’s or owner’s design, the key thing to remember is that we have redundancy built into all the major layers (facility, network, application and platform).  Redundancy is there for the sole reason that things break, fail, don’t load right, somebody digs up your fiber with a backhoe, etc.  In short, redundancy covers the times when things must be maintained or do fail.</p>
<p>Business-wise, executive leadership speaks to the output.  Technical-wise, stakeholders speak to the components.  There is a vast difference.  The truth is that what really matters, and what most of you are compensated on, is continuous operation, not the level of system availability in the facility, network, platform or application.  Redundancy is simply the manifestation of the failure analysis and business recovery model you are choosing to operate.</p>
<p>Back in the early 2000s, I was commissioned to review the data processing facilities for a major high tech company.  Like many firms at the time, they were data center space-rich but infrastructure-poor.  Upon review, they had a series of solid Tier II facilities and one Tier III facility.  The goal was to study the facility piece of the enterprise, to determine the work required to bring six live data centers to a Tier III stance.  Yep, all six, retrofitted live, to the tune of something in excess of $110M.</p>
<p>So, we asked the basic question, “How much space do you need?”  The reply, “For this program, not as much as we have.”  Ok, now we’re getting somewhere.  We didn’t ask about availability, yet.  Our observation was that this client had good application and platform diversity, older but well-maintained MEP systems and plants and a serviceable network.  We simply asked whether they could achieve the same result by hardening and enhancing the network, which had the added benefit of turbocharging their disaster recovery program.  In short, the failover would be more DR-ish on the network than forcing each site to stand alone.  After a three-month period to mull it over, the answer was yes.  That day, we coined the term Enterprise Reliability (ask Jeff Davis).  While we went home without a contract for the design, we earned a life-long client.  And we saved them $95M in the process.</p>
<p>The contrasting allegory today is that the large-scale social media, entertainment, portal, e-commerce and search firms (did I miss anyone?) approach things differently from many, as they operate processing, storage and networks of great scope.  When an IT program grows to a specific scale, the failure analysis allows for portions of data centers, even entire sites or regions, to fail without compromising SLAs to their customers.  What’s occurred is that all systems are now considered collaborative and integrated toward the end result of application availability.</p>
<p>Once you plow through what your internal and external SLAs need to be, the benchmarking, sell and price targeting are then brought to bear.  In the industrial case, you first have to decide on the product, then how you are going to fulfill it.  Here’s the rub.  Unlike 15 years ago, when the only options were build-to-suit/operate-your-own and a limited amount of managed services, today there are a host of options.  Heck, 15 years ago, SaaS was mistaken for a Swedish car or a British Tier 1 commando unit.  What may occur in the benchmarking phase is that the facility, software, network or platform solution takes a sharp turn from the initial assumptions and makes you reconsider how you are going to achieve your solution.  What should not change is the cost and sell baselines or the SLA, only how you got there.</p>
<p>You might run down to see my friends Chris Crosby or Jim Smith for a facility solution.  You might call any of the cloud service providers for an IaaS solution on the virtual side of the business.  Or VMware might hop in and tune up your platforms, saving power, space and hardware.  You get the picture.  There are many tools, and the trick is which one you pick up to solve your problem to your specs.</p>
<p>The benchmarks are the levels at which your business expects to execute, profit and pay for services.  You start with the business model you’ve undertaken, then you work through the templates on benchmarks to narrow the cost/unit, cost/transaction and the operating carry.  The trick is don’t get too enamored with your first approach, as it’s likely to change.  Remember Sal Tessio’s line at the conclusion of The Godfather, “It was never personal, it was just business.”  But also remember that when he said it, he was being driven off to be executed because he did not stay loyal to the family business.  That is the result of misalignment between goals, requirements and expectations.</p>
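<p>Before we do, here is the shape of that benchmark template in a few lines of Python.  It is a minimal sketch of the cost/transaction arithmetic only; every figure below is an invented placeholder, not a number from any real program.</p>
<pre><code># A toy benchmark template in the spirit above: start from the business
# volumes, then back into a cost/transaction target.  All figures are
# invented placeholders for illustration.

annual_facility_cost = 12000000.0   # capital carry plus opex (assumed)
annual_it_cost = 18000000.0         # platforms, network, software (assumed)
annual_transactions = 2500000000    # assumed business volume

cost_per_transaction = (annual_facility_cost + annual_it_cost) / annual_transactions
print(f"cost per transaction: ${cost_per_transaction:.4f}")
</code></pre>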
<p>We’ll get into the benchmark builds in the next blog.</p>
<p>Salute’ a tutti!</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2014/09/what-are-the-key-elements-to-data-center-development/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why Has It Been So Hard to Solve the 787 Battery System Issues?</title>
		<link>http://datacentergurublog.rosendin.com/2013/06/why-has-it-been-so-hard-to-solve-the-787-battery-system-issues/</link>
		<comments>http://datacentergurublog.rosendin.com/2013/06/why-has-it-been-so-hard-to-solve-the-787-battery-system-issues/#comments</comments>
		<pubDate>Sun, 16 Jun 2013 02:37:03 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[787 Dreamliner]]></category>
		<category><![CDATA[batteries; battery monitoring]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=340</guid>
		<description><![CDATA[It’s been over four months since the 787 Dreamliner was grounded due to flaws in its DC power system.  As many of you have read, the battery system in the plane was subject to thermal runaway and fire.  Not to use the old line, ”This isn’t rocket science”, but in this case, it should not <a href="http://datacentergurublog.rosendin.com/2013/06/why-has-it-been-so-hard-to-solve-the-787-battery-system-issues/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>It’s been over four months since the 787 Dreamliner was grounded due to flaws in its DC power system.  As many of you have read, the battery system in the plane was subject to thermal runaway and fire.  Not to use the old line, ”This isn’t rocket science”, but in this case, it should not be. </p>
<p>We have seen a monumental leap forward in battery technology in the past five years.  Some of this is a result of military technology being passed down to the industrial and commercial sectors, and some of this is related to the stored energy needs for hybrid cars and wind and solar distributed generation power systems.  We now enjoy the luxury of batteries that can tolerate higher and lower temperatures than the traditional 77F, offer varying discharge and charging profiles, and offer a wide mix of weight-to-kW ratios.  Super!</p>
<p>At the end of the day, we’re still dealing with a chemically-based stored energy system.  And the practices that we have used for years in data centers to prevent the exact types of failures that occurred on the 787 look to have been missed or inadequately provisioned in the aircraft.  This blog is not an indictment of the 787 design team.  I’m certainly not skilled enough to develop a design that can fly 10,000 miles at 590 MPH under the unwavering scrutiny of the FAA.  But some of the old battery rules need to be followed for the same reasons they always have – you don’t want a battery fire.</p>
<p>There are two basic approaches to battery management – voltage and impedance.  One of the great adages in our industry is that we design for redundancy when failure is not acceptable, and the systems that are subject to failure or maintenance are offered with predictive maintenance or failure precursors that allow an operator to intervene at an appropriate time.  One of the great benefits of batteries is that they tend not to fail <span style="text-decoration: underline;">quickly and catastrophically</span>, if they are properly monitored.</p>
<p>Of paramount importance is the monitoring and controlling of the battery charger, as well as the health of the individual cells, both for impedance and voltage.  While the failure of many battery types tends to be open-circuit, resulting in the loss of the DC bus, the failure should never be so severe that it results in a life-safety hazard or a hazard to craft or building.</p>
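<p>To make the old battery rules concrete, here is a minimal monitoring sketch in Python.  The cell names, baselines, float window and the +25% impedance-rise limit are all assumptions for the sketch, not vendor limits; the point is simply that trending impedance and voltage per cell gives the operator a failure precursor long before thermal runaway.</p>
<pre><code># A minimal sketch of battery precursor monitoring: trend each cell's
# internal impedance and float voltage, and flag drift for operator
# intervention well before a failure.  Baselines, the float window and
# the +25% rise limit below are illustrative assumptions.

BASELINE_OHMS = {"cell-01": 0.0050, "cell-02": 0.0051, "cell-03": 0.0049}
FLOAT_V_MIN, FLOAT_V_MAX = 2.20, 2.30   # assumed per-cell float window
IMPEDANCE_RISE_LIMIT = 1.25             # flag at +25% over baseline

def check_cell(cell_id, ohms_now, volts_now):
    """Return precursor warnings for one cell reading."""
    warnings = []
    if ohms_now / BASELINE_OHMS[cell_id] > IMPEDANCE_RISE_LIMIT:
        warnings.append(f"{cell_id}: impedance {ohms_now:.4f} ohm is over "
                        f"{IMPEDANCE_RISE_LIMIT:.0%} of baseline")
    if volts_now > FLOAT_V_MAX or FLOAT_V_MIN > volts_now:
        warnings.append(f"{cell_id}: float voltage {volts_now:.2f} V is "
                        "outside the float window")
    return warnings

# Example: a cell drifting up in impedance but still healthy on voltage.
for warning in check_cell("cell-02", ohms_now=0.0066, volts_now=2.24):
    print(warning)
</code></pre>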
<p>Sometimes, the old school is the best school.</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2013/06/why-has-it-been-so-hard-to-solve-the-787-battery-system-issues/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>My Topology Is Better Than Your Topology!</title>
		<link>http://datacentergurublog.rosendin.com/2012/10/my-topology-is-better-than-your-topology/</link>
		<comments>http://datacentergurublog.rosendin.com/2012/10/my-topology-is-better-than-your-topology/#comments</comments>
		<pubDate>Mon, 29 Oct 2012 20:54:34 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Maintenance]]></category>
		<category><![CDATA[Reliability]]></category>
		<category><![CDATA[Topology]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=332</guid>
		<description><![CDATA[Nothing is as blind or discriminates like personal opinion.  When you assemble a group of engineers or owners, each will speak passionately about how their system is superior to the other, either in design or nuanced detail.  And that passion is typically fueled by their history, habits and worries.  What most don’t recognize is that <a href="http://datacentergurublog.rosendin.com/2012/10/my-topology-is-better-than-your-topology/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>Nothing is as blind or discriminates like personal opinion.  When you assemble a group of engineers or owners, each will speak passionately about how their system is superior to the other, either in design or nuanced detail.  And that passion is typically fueled by their history, habits and worries.  What most don’t recognize is that the lens each protagonist views their system through is shaped by their experiences, the expressed desire not to repeat past mistakes or unanticipated failures, and the urge to rationalize and value engineer their way out of a bad budget or business case.</p>
<p>What happens is topology and cost creep, where nuance, detail and pathway are piled atop each other in search of the system that is resistant to every conceivable circumstance.  This is admirable in intent, but typically a massive failure in cost and complexity.  What really matters is the system availability over time.  As we discussed with maintenance in the previous blog, simpler tends to be better.  The rub here is that ultimate system size has a large impact on the topology that you select.</p>
<p>Before we get to system architecture, let’s talk about the parts that we typically use, namely switchgear, generators and transformers.  When you look at the industry over the past ten years, a few habits begin to emerge:</p>
<ul>
<li> Main switchboards are evolving to match substation transformer and generator sizing, in the 2,000 kW to 3,150 kW range.</li>
<li>Arc flash hazard mitigation is forcing some changes to systems via the reduction of energy present in any given breaker cubicle.</li>
<li>Data centers are getting physically smaller on a per-unit load basis when viewed by kW/compute cycle and kW/TB (and a great friend of mine reminded me that it’s going to be the  kW/PB or petabyte as the new normal where storage is king).</li>
<li>Servers are sexy, but storage pays the bills.  You have to put all that data somewhere.</li>
<li>Data center IT environments are usually isolated by load group and consist of several rooms or single user facilities of up to 10,000 SF raised floor each, with a few notable exceptions in the large-scale colocation, social media, content and search communities.  Smaller rooms allow for compartmentalization of systems, and this lends itself to the smaller “footprint” noted above.</li>
<li>The number of large, one-of-a-kind facilities is plummeting.  Organizations tend to find what works and then stick with it.</li>
</ul>
<p>Let’s be honest.  In the past ten years, the wholesale and retail data center and the search/media/social networking folks have pretty much built the lion’s share of the facilities on the planet.  This has driven a couple of fascinating phenomena.    First is the de facto industry standardization on the 2N topology in a wholesale facility, and the N+1 standard in most larger data centers, save for transaction processing.  Those who disagree might have forgotten that the first prescribed Tier IV system by The Uptime Institute was system-plus-system, 2N or 2N+1.  Second, with the amount of systems and data center space built to this wholesale 2N standard, there are now millions of hours in operating time on this 2N architecture over tens of millions of square feet of space.  And the performance data, excluding human factors, appears to favor wholesale’s business model.</p>
<p>As a rule of thumb for the larger critical power loads (those exceeding 5 MW, the old school break between low voltage and medium voltage systems), a design might utilize multiple and paralleled systems, looped distribution and utility systems that appear more “grid like” in topology.  In this case, those several utility, generator and UPS services collaborate to supply a given load and may offer multiple connections to loads throughout the facility.  At that scale, this “grid” or “looped” topology is likely to be far more cost effective and operationally efficient than the more “siloed” 2 MW segregated rooms and load blocks.  The key is that <strong><em>both</em></strong> of those choices are mapped to a business’ need that exceeds your typical 1,100 kW-1,500 kW / 110 – 150 W/SF requirement in that 10,000 SF space.  The key to larger builds is that there is a <span style="text-decoration: underline;">known and disciplined</span> connection of this type of system architecture with the business it supports.</p>
<p>Moving back to the wholesale data center model’s “2 MW” system, these builds are nearly exclusively 2N or N+1.  One interesting point is how we define 2N.  For these smaller systems, isn’t it simply 2N = N+1, or 1+N?  Sometimes it is, often it isn’t.  To slice through the semantics and to borrow from ANSI/BICSI 002 (a short sketch follows the list):</p>
<ul>
<li> 2N is a fully redundant architecture consisting of two equally sized systems, each capable of carrying the total system load.</li>
<li>N+1 implies an N system capable of carrying the load, with a single additional system able to replace one of the systems comprising the “N” capacity in case of maintenance or failure.  When “N” = “1” and there are two systems, you then have 2N.</li>
<li>1+N implies either 2N or N+1, depending on system size.  For this reason, this term is not commonly used in the industry standards, TUI Tier Rating and ANSI/BICSI 002 Class Standards.</li>
</ul>
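<p>If it helps to see those definitions in code, here is a toy classifier.  It is my own sketch of the semantics above, not anything lifted from the standard.</p>
<pre><code># A toy classifier for the redundancy definitions above.  "n" is the number
# of equally sized units needed to carry the full load; "installed" is the
# unit count actually built.  Purely illustrative -- not from ANSI/BICSI 002.

def classify(n, installed):
    if installed >= 2 * n:
        return "2N (two full systems; fully redundant)"
    if installed == n + 1:
        return "N+1 (one spare unit over required capacity)"
    if installed > n:
        return f"N+{installed - n}"
    return "N or below (no redundancy)"

print(classify(n=1, installed=2))   # 2N -- and also N+1, per the note above
print(classify(n=3, installed=4))   # N+1
print(classify(n=2, installed=4))   # 2N
</code></pre>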
<p> The ability of a system to respond to failure or maintenance modes of operation is what determines its Tier or Class rating.  Recalling ANSI/BICSI 002, where performance-based criteria were first introduced for electrical and mechanical topologies, Class III allows for a single failure or maintenance event to render the system to an “N” state.  The wholesale data center industry has evolved to this Tier or Class III model, where the critical power architecture is 2N, allowing for complete redundancy from the UPS input switchboards to the PDU output, while using N+1 in the mechanical and supporting systems.  This did not happen by accident. </p>
<p> When examining the wholesale data center business model, what was introduced to the critical infrastructure building business was a direct connection between the capital cost to build and the business.  Wholesale data center, being essentially a high-tech real estate business, holds the sensibilities of <span style="text-decoration: underline;">cost discipline and functionality</span>.</p>
<p> When examining how to achieve cost effectiveness, one typically chooses the cheapest part, with the caveat in the data center business that the part also needs to be reliable.  That’s why we see 2,000 or 2,500 kW generators, 3,000A or 4,000A switchboards, 750 kVA UPS modules and Trane Intellipaks everywhere. </p>
<p>While solid reliability at the component level is mandatory, it’s how those parts are procured, arranged and connected that results in system availability at the lowest cost.</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2012/10/my-topology-is-better-than-your-topology/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>LED Lighting Hits Home &#8211; At Least My In Law&#8217;s Home</title>
		<link>http://datacentergurublog.rosendin.com/2012/09/led-lighting-hits-home-at-least-my-in-laws-home/</link>
		<comments>http://datacentergurublog.rosendin.com/2012/09/led-lighting-hits-home-at-least-my-in-laws-home/#comments</comments>
		<pubDate>Tue, 04 Sep 2012 21:41:56 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[LED Lighting]]></category>
		<category><![CDATA[LED lighting; LED payback]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=325</guid>
		<description><![CDATA[Rosendin has been a fan of LED lighting for some time now.  For most of our work, our lighting systems are functional – your basic area lighting package.  What has been a very tough sell for LED-based systems is the initial cost of the fixtures and the replacement beds for the lamps themselves.  In fact, <a href="http://datacentergurublog.rosendin.com/2012/09/led-lighting-hits-home-at-least-my-in-laws-home/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>Rosendin has been a fan of LED lighting for some time now.  For most of our work, our lighting systems are functional – your basic area lighting package.  What has been a very tough sell for LED-based systems is the initial cost of the fixtures and the replacement beds for the lamps themselves.  In fact, one local city refused to use LEDs on a large parking structure, primarily due to the initial cost of the fixtures.  The fixtures, at proposal time, were about a 30% premium to your standard high pressure sodium fixture package. </p>
<p>Three years later, the city is replacing all of its street lights with LED systems, primarily due to avoided lamp replacement cost and the massive energy savings they would realize.  Life cycle costs clearly won out here.  We also see some resistance in the built environment versus the more utilitarian applications such as street lighting or exterior lighting.</p>
<p>We find it both rewarding and amusing that the city got religion on this issue.  The case in point is that LED offers a significant energy savings over fluorescent or HID lamp systems for <em><span style="text-decoration: underline;">continuously-lit systems</span></em>.  LED is challenged in areas where the lighting is activated by occupancy or where it’s not in continuous service.  Our experience has shown that you may realize up to a 40% reduction in input watts for an LED system over your standard T8 fixture, at the identical lumen output.  For incandescent fixtures, this wattage reduction could be as large as 90%.</p>
<p>So what about price?  We attribute the current price point for LED OEM systems to a cost-recovery exercise by the fixture manufacturers.  LEDs are widely used in a host of industries, where the LEDs themselves are a commodity item, even for the high-end diodes.  We have also seen LED lighting systems gamed by distributors, manufacturing reps and the manufacturers themselves in quotes and fixture orders, with the purpose of packaging to a certain manufacturer.  The key here is to get the line item costs for your fixtures. </p>
<p>What are the drawbacks for LEDs?  First, standard fluorescent and incandescent dimmers don’t work well, or at all, with LEDs.  Second, the color rendition is a bit on the cooler side, shifted into the blue spectrum from pure white light.  That being said, LED color indexing is being rendered closer to daylight and now offers a few more color options, approaching the offerings we see for fluorescent lamping in today’s market.</p>
<p>What’s the proof in the pudding?  About six months ago, my brother-in-law and sister-in-law gave me a call.  My sister-in-law operates a thriving college counseling practice out of their home in Silicon Valley.  As a result, the public area lighting in their house is on about 12-14 hours per day.  The lamping package was exclusively PAR- and MR- type lamps, 42 in total.  They have also experienced significant lamp mortality, especially on the MRs, due to both overall hours on the lamp as well as the per-day run time, which well exceeds 10 hours per day.</p>
<p>After a $1,050 lamp retrofit and a single new LED-appropriate dimmer, the utility bill was cut by one-third, or about $300/month, resulting in roughly a three-and-a-half-month payback on the initial $1,050 lamp investment.  The CRI on the MR and PAR retrofit LED lamps is identical, so now we have identical color throughout the spaces.  The only rub was that single dimmer, which had to be replaced in the dining room.</p>
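<p>For anyone who wants to check the arithmetic, the payback math is short enough to sketch.  The figures are the rough numbers from the retrofit above, as approximate as they are in the post.</p>
<pre><code># Simple payback math using the rough numbers from the retrofit above.
# Figures are approximate, as in the post; energy rates are not broken out.

retrofit_cost = 1050.00       # 42 LED lamps plus one LED-rated dimmer
monthly_savings = 300.00      # roughly 1/3 off the utility bill

print(f"Simple payback: {retrofit_cost / monthly_savings:.1f} months")

# The lamp-life side of the life cycle case:
led_rated_hours = 25000
mr_observed_hours = 4000      # best observed run time on the old MR lamps
print(f"Lamp-life improvement: {led_rated_hours / mr_observed_hours:.1f}x")
</code></pre>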
<p>Five months later, my brother-in-law has not replaced a single MR-type LED lamp yet.  We did splurge and choose the 25,000 hour life lamps, a four-fold improvement over the published lifespan of the MR-16 systems they replaced.  When looking at the MR lamp mortality, we never realized a greater than 4,000 hour run time on those lamps, so the lifespan increase further reinforces the life cycle payback.</p>
<p>And my brother-in-law is thrilled to NOT be up on a ladder twice a month to change lamps.</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2012/09/led-lighting-hits-home-at-least-my-in-laws-home/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Piercing the Reliability Myth – The Math Matters</title>
		<link>http://datacentergurublog.rosendin.com/2012/08/piercing-the-reliability-myth-the-math-matters/</link>
		<comments>http://datacentergurublog.rosendin.com/2012/08/piercing-the-reliability-myth-the-math-matters/#comments</comments>
		<pubDate>Fri, 31 Aug 2012 21:22:31 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Maintenance]]></category>
		<category><![CDATA[Reliability]]></category>
		<category><![CDATA[ANSI/BICSI Data Center Best Practices Manual]]></category>
		<category><![CDATA[Bill Mazzetti]]></category>
		<category><![CDATA[Black Swan Event]]></category>
		<category><![CDATA[Chaos Monkey]]></category>
		<category><![CDATA[Class rating]]></category>
		<category><![CDATA[Compass Data Centers]]></category>
		<category><![CDATA[complex state machine]]></category>
		<category><![CDATA[cross tie]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[David Shirmacher]]></category>
		<category><![CDATA[Digital Realty Trust]]></category>
		<category><![CDATA[facebook]]></category>
		<category><![CDATA[generator]]></category>
		<category><![CDATA[Law of Large Numbers]]></category>
		<category><![CDATA[maintenance]]></category>
		<category><![CDATA[maintenance activity; IR scanning]]></category>
		<category><![CDATA[maintenance cycle]]></category>
		<category><![CDATA[maintenance frequency]]></category>
		<category><![CDATA[mean time between failure]]></category>
		<category><![CDATA[mean time to repair]]></category>
		<category><![CDATA[MILSPEC 472]]></category>
		<category><![CDATA[MOP]]></category>
		<category><![CDATA[MTBF]]></category>
		<category><![CDATA[MTTR]]></category>
		<category><![CDATA[over-maintenance]]></category>
		<category><![CDATA[PLC]]></category>
		<category><![CDATA[predictive maintenance]]></category>
		<category><![CDATA[switching operation]]></category>
		<category><![CDATA[The Gambler's Fallacy]]></category>
		<category><![CDATA[Tier rating]]></category>
		<category><![CDATA[UPS]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=316</guid>
		<description><![CDATA[I&#8217;m going to apologize to my readers for this one &#8211; it&#8217;s long and mathematics-based.  But it does assert that there&#8217;s math to refute or support certain design or operational habits.  Here&#8217;s what happens when you spend a lot of hours on a plane and you have smart and challenging colleagues to kick some ideas around <a href="http://datacentergurublog.rosendin.com/2012/08/piercing-the-reliability-myth-the-math-matters/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m going to apologize to my readers for this one &#8211; it&#8217;s long and mathematics-based.  But it does assert that there&#8217;s math to refute or support certain design or operational habits.  Here&#8217;s what happens when you spend a lot of hours on a plane and you have smart and challenging colleagues to kick some ideas around with. </p>
<p><span style="font-size: small;"><span style="font-family: Calibri;">This all started about three months ago with </span></span><span style="font-size: small;"><span style="font-family: Calibri;">David Shirmacher, the SVP of Technical Operations at Digital Realty Trust over a casual lunch (as if Dave and I have ever done anything casually).  Dave is one of my favorite guys in the business.  We&#8217;ve worked together for a long time, and he&#8217;s got one of the best minds in the field.  The subject of reliability and maintenance frequency came up during lunch at the Salt House in San Francisco.  The debate focused on both maintenance activity frequency and the complexity of that specific, multi-step activity.    That got me going on where did this start and what is the quant to back up the industry&#8217;s well-ingrained behaviors.</span></span></p>
<p>While pundits sit and preach about system reliability, they fail to recognize one salient point:  <strong><em>It’s not about reliability, it’s about availability.</em></strong></p>
<p>No one cares what happens behind the curtain in Oz if your data keeps right on flowing.</p>
<p>An old saying in the data center business is that you don’t design systems for how they work – you design them for how they fail and how they are to be maintained.  While individual system focus has driven the reliability of components to very high Sigma levels, it’s how applications, platforms, systems and facilities are brought together, interact and are changed that drives your application availability.  So, what are the arguments and where did this all begin?</p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-family: Calibri; font-size: small;">Reliability calculations were born in 1966 during the Space Race in the form of MILSPEC Handbook 472, at the behest of the aerospace programs of military.  On page 2, 472 defines the tenets we use today – Mean Time Between Failures (MTBF) and the Mean Time to Repair (MTTR).  Go ahead and Google “MILSPEC 472”.  The handbook is available as a free PDF download on several websites.  Another excellent resource for the most modern version of 472 is the BICSI International Supplemental Information 001, <em>Methodology for Selecting Data Center Design Class Utilizing Performance Criteria</em>.  It’s available on the BICSI website at </span><a href="https://www.bicsi.org/dcsup/Default.aspx"><span style="font-family: Calibri; color: #0000ff; font-size: small;">https://www.bicsi.org/dcsup/Default.aspx</span></a><span style="font-size: small;"><span style="font-family: Calibri;">  (it’s free, but you have to fill out a request form).  Here, we’ve moved to performance-based metrics backed by the mathematics in this article.</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">MILSPEC 472 was the first effort to develop the basics of system and component uptime.  It remains a  seminal work.  While the scorecards, matrixes and the network analysis methods are still quite relevant today, 472 does have its drawbacks.  It focuses on the examination of one system, not a series of physically separated but logically- and operationally-joined systems.  Nor does 472 consider concurrent maintenance and operations risk, where in the data center world, routine human intervention is both required and offers a varying risk proposition.</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">Our availability assumptions have been reinforced by the historical performance of a variety of system architectures and features.  This manifests itself in the anecdotal Sigma levels you see published for a variety of electrical and mechanical system architectures.   However, these Sigma levels only consider the MTBF and MTTR for the system, <span style="text-decoration: underline;">with no outside influences</span>, such as manually-initiated maintenance operations.  While 472 was developed for electronics, aircraft and spacecraft, a majority of system failures for those assemblies, short of a catastrophic failure from a series of events or force majeure (like a missile to the fuselage), simply means the system is offline, to be replaced by a squadron spare.  In the data center business, that squadron spare is that instantly-available redundant system that stands ready to assume a processing, storage, network, power or cooling load should a platform, component or connection fail.</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">When you actually look at the probability’s mathematics on Page 1-16, it’s a compounding multiplier of:</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">P(A1) x P(A2) x P(A3)…</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> W</span><span style="font-size: small;"><span style="font-family: Calibri;">hile this drives the steady-state to high very Sigma levels and a near zero theoretical outage value, it says nothing about manually-induced incidents, resulting either from maintenance or modifications, or in the case of software, new code loads.  And it’s in that frequency of change, coupled with the competency of the effort that has the most profound effect on your system availability.  </span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">Most facility guys happily follow the manufacturers’ edicts for maintenance frequency.  Most facilities will run their generators monthly.  But if you experience a real power outage, does that count for the gen test?  If you test weekly, would it better to test monthly or visa versa?  If you test monthly, would it better to test quarterly, assuming your batteries, fuel and other support systems appear to be in excellent health?  The fact is that every time you have to maintain a piece of equipment, you change the facility’s steady state operation.  </span></span></p>
<p><span style="font-size: small;"><span style="font-family: Calibri;">Let’s swing into the math.  You may possess exceptional topographic redundancy in your system and find comfort in the Law of Large Numbers (LLN) when looking to minimize incidents in your facility.  LLN does tangentially recognize the fact is that one activity in any large number pool will simply go haywire, the dreaded Black Swan event.  Or as I call it, Bad S*** Happens to Everyone at Least Once.</span></span></p>
<p><span style="font-size: small;"><span style="font-family: Calibri;">LLN clearly speaks to normalized returns over a large data sample pool and a long period of time.  While the Black Swan event is not likely to occur, best practices speak to changing our behavior and design choices to reduce your exposure to the “Swan” as much as practical.  Don’t get too comfortable.  Let’s now throw in the Gambler’s Fallacy.  </span></span><span style="font-size: small;"><span style="font-family: Calibri;">The Gambler’s Fallacy, or the fallacy of the maturity of chances, is rooted in the false belief that deviations from an expected behavior or outcome observed in a repeated independent trials or events (of some random process like a dice roll), future deviations in the opposite direction are more likely.  </span></span></p>
<p> <span style="font-size: small;"><span style="font-family: Calibri;">Take a moderately large set of repeating events, like a 16-sided dice rolled 16 times for a single win or the 16 quarterly maintenance cycles on your 4 UPS modules in a year.  One would expect that if you had not had a failure during that set number of transactions, the probability of the failure would decrease as you proceed through the finite count.  That’s not true.  Each event is unique, and the outcome of the succeeding event is not known until it’s over.  </span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> That 16-sided die is the requisite, but separate maintenance or switching activities you undertake in your data center every year.  </span><span style="font-size: small;"><span style="font-family: Calibri;">The chance of success in this series would be: </span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><em><span style="font-size: small;"><span style="font-family: Calibri;">1 – {number of successful events/number of total events} <sup>exp of the event in the sequence</sup></span></span></em></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">The math of the dice versus the UPS modules are very different for one reason – the number of transactions within the activity.  The chance of a successful event on the first attempt in a 16-series dice roll would be <em>1 – {15/16}<sup>16 </sup>= 64.39%</em>.  On the succeeding event, the chance of a successful event is: </span></span></p>
<p><span style="font-size: small;"><span style="font-family: Calibri;"><em>1 – {15/16}<sup>15 </sup>= 62.02%</em>, as so forth.  But when you look at a single UPS module undergoing quarterly maintenance, you have to consider the <span style="text-decoration: underline;">individual steps</span> (transactions) within the activity as noted in the MOP.  It’s not a single event.  Assuming 10 major switching or maintenance steps per module per activity, or 40 activities per year, this would result in: <em>1 – {39/40}<sup>40</sup> = 36.31%</em> for the first attempt, and <em>1 – {39/40}<sup>39</sup> = 37.31%</em> for the second activity.  When you reach the last activity, the percentage of success has dropped to essentially zero.  Herein lays the math verifying that less maintenance is better.</span></span></p>
<p><span style="font-size: small;"><span style="font-family: Calibri;">We all acknowledge that zero maintenance is not acceptable, but neither is over-maintenance.  The balance must be struck between component count, maintenance frequency and system complexity to yield the lowest risk to uptime.  Predictive maintenance certainly addresses activity frequency, where services is applied only when the systems requires it.  Less frequent but necessary maintenance or having less components to maintain might actually be your best approach.  Regardless of when the maintenance activity is undertaken, each activity poses a risk.  And the way the mathematics work, the total count of these maintenance activities has the most significant impact to system availability.  The fact is past success or failure does not promise future success or failure, just that each is possible in independent events.  That one event might be the one that drops only the “+1” of the service, or it may be the one that drops the load or facility.  The Black Swan has come to roost.  </span></span></p>
<p> <span style="font-size: small;"><span style="font-family: Calibri;">Some low-risk activities, like checking oil on the gens or non-invasive IR scanning, pose little risk to the concurrent maintenance and operation of the system.  When you look at taking a UPS module offline, it takes dozens of steps in an MOP to successfully complete the activity.  Each of those tasks in a given activity carries different weight on the uptime calculation.  Performing a close-transition key interlock circuit breaker transfer poses a far greater risk than operating a “UPS Module to Static Bypass” button (as long as you choose the correct module).   The key point here is to reduce both the total number of activities or changes of state on any given system and the more risky activities if you wish to radically reduce your uptime risk.</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">Let’s look at a Tier/Class III 2N UPS data center with a single generator and no swing generator.  Assume:</span></span></p>
<ul>
<li>One dedicated generator plus one redundant (N+1) generator.</li>
<li>Two UPS modules per “A” and “B” side, for a total of four modules, distributed parallel.</li>
<li>Quarterly UPS module invasive maintenance.</li>
<li>Annual invasive UPS module maintenance.</li>
<li>One UPS module automatically going to static bypass annually.</li>
<li>Monthly non-invasive generator maintenance.</li>
<li>Quarterly non-invasive generator maintenance.</li>
<li>Annual invasive generator maintenance.</li>
<li>One utility outage per two years.</li>
<li>Two circuit breaker switching operations per year.</li>
<li>Annual IR scan on the main boards, 48 breakers total.</li>
</ul>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">This offers a total event profile would consist of 66 minor maintenance activities and 197 major maintenance or system state changes of state per year.  Over a ten year period, that equals over 2,600 activities or system changes.  If you reduce the system content to one module per side (from 4 to 2 on the  total module count) and change the UPS modules and generators to one major maintenance cycle per year, you drop the counts to 35 minor activities, 6 major activities for a total of 41 activities per year.</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">If you simply make a minor topology change and modify your generator maintenance habits, you can reduce your exposure to human-induced incidents by 64.6%.  By having quantity two-less, UPS modules in a center would decrease critical load switching operations by <strong><em><span style="text-decoration: underline;">ONE-HUNDRED AND TWELVE (112)</span></em></strong>  activities per year, or 1,120  switching operations in a 10 year period.  This is seriously good stuff!</span></span></p>
<p><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">So, here’s what we see:</span></span></p>
<ol>
<li><span style="font-family: Calibri; font-size: small;"> </span><span style="font-size: small;"><span style="font-family: Calibri;">Fewer components equal less maintenance activities – thereby reducing risk of downtime from human error and intervention.  Non-invasive scanning methods, like ExerTherm, will allow you to monitor your system without opening cabinets or exposing your staff to arc flash hazards.</span></span></li>
<li><span style="font-size: small;"><span style="font-family: Calibri;">To avoid killing a “side” when maintaining the UPS bypass, utilize cross tie between the systems at the UPS input or output in a 2N system.</span></span></li>
<li><span style="font-size: small;"><span style="font-family: Calibri;">There’s a reason that most of the more sophisticated data center wholesale operators use Trane Intellipaks.  While each company puts their own “pimp” on the units, they are simple, robust and, as we say about a 1970s GM car, anyone can fix them.</span></span></li>
<li><span style="font-size: small;"><span style="font-family: Calibri;">The more dynamic the system is, the more likely it is to drop a load.  Follow the suggestion of the ANSI/BISCI Data Center Best Practices Manual 002 – use only one step transfers or switching operations to arrive at the next steady state in any maintenance or failure mode of operation.  Do not have the automatic system response depend on something else happening.  In other words, avoid complexity.   For data center reliability, the KISS principle certainly applies to data center design.</span></span></li>
<li><span style="font-size: small;"><span style="font-family: Calibri;">The likelihood of new code failing is directly proportional to the degree of burn in and test/debug it underwent.  State machines are very important.  Unfortunately, unless coders are classically trained in how to develop a complete state machine, “putting things back to normal state” in any given failure scenario is not a common practice.  Most software developers focus only on the most likely outcomes.  The successful ones try to create the unexpected (see Netflix’s Chaos Monkey).  Don’t forget that any of those PLCs in your switchgear or paralleling gear require software development as well.  Buyer beware if design, test and commissioning  do not follow the best practices for software development.  Outages will occur.</span></span></li>
<li><span style="font-size: small;"><span style="font-family: Calibri;">Most of the truly epic data center outages have not been due to power or cooling.  It’s been the network or software.  Knight Trading, the Security Pacific ATM outage, the Bank America Securities outage and the “fail-to-fail over” at the central offices during 9/11 were all root caused to the network architecture or software systems.</span></span></li>
</ol>
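<p>To illustrate the state machine point from item 5, here is a toy transfer-sequence state machine in Python.  The states and events are invented for the sketch – the point is simply that every failure branch gets an explicit, defined way back toward normal, and anything undefined is flagged rather than guessed at.</p>
<pre><code># A toy state machine for a power transfer sequence.  States and events are
# invented for the sketch, not from any standard or PLC vendor.  Note that
# the failure branches (sync_fail, gen_fail) have explicit next states.

TRANSITIONS = {
    ("NORMAL", "utility_fail"):       "ON_GENERATOR",
    ("ON_GENERATOR", "utility_ok"):   "RETRANSFER",
    ("RETRANSFER", "sync_ok"):        "NORMAL",
    # The step most coders forget: define the way home from every branch.
    ("RETRANSFER", "sync_fail"):      "ON_GENERATOR",
    ("ON_GENERATOR", "gen_fail"):     "ON_UPS_BATTERY",
    ("ON_UPS_BATTERY", "utility_ok"): "NORMAL",
}

def step(state, event):
    """Advance one event; undefined pairs are flagged, not guessed at."""
    return TRANSITIONS.get((state, event), f"UNDEFINED({state}, {event})")

state = "NORMAL"
for event in ("utility_fail", "utility_ok", "sync_fail", "utility_ok"):
    state = step(state, event)
    print(event, "->", state)
</code></pre>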
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2012/08/piercing-the-reliability-myth-the-math-matters/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Blog Format and The Big Upcoming Topic &#8211; The King Is Dead, Long Live the King</title>
		<link>http://datacentergurublog.rosendin.com/2012/05/new-blog-format-and-the-big-upcoming-topic-the-king-is-dead-long-live-the-king/</link>
		<comments>http://datacentergurublog.rosendin.com/2012/05/new-blog-format-and-the-big-upcoming-topic-the-king-is-dead-long-live-the-king/#comments</comments>
		<pubDate>Sun, 06 May 2012 07:27:49 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[Digital Realty Trust]]></category>
		<category><![CDATA[Google Blogspot]]></category>
		<category><![CDATA[Loudmouth Group]]></category>
		<category><![CDATA[modular power]]></category>
		<category><![CDATA[Open Compute]]></category>
		<category><![CDATA[power distribution]]></category>
		<category><![CDATA[PUE]]></category>
		<category><![CDATA[Rosendin]]></category>
		<category><![CDATA[UPS power]]></category>

		<guid isPermaLink="false">http://datacentergurublog.rosendin.com/?p=179</guid>
		<description><![CDATA[Well, it seems that I have finally come in from the cold.  I&#8217;m off the Google Blogspot site and am now avowed and set up on the Rosendin server.  The blog has been quiet while we have been working with Rosendin Corporate IT and The Loudmouth Group on the reconstruction and rebranding.  Thank you to <a href="http://datacentergurublog.rosendin.com/2012/05/new-blog-format-and-the-big-upcoming-topic-the-king-is-dead-long-live-the-king/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>Well, it seems that I have finally come in from the cold.  I&#8217;m off the Google Blogspot site and am now avowed and set up on the Rosendin server.  The blog has been quiet while we have been working with Rosendin Corporate IT and The Loudmouth Group on the reconstruction and rebranding.  Thank you to Sam, Terry and the entire Rosendin IT team for moving me over, and a big shout out to Carter, Deb and the crew at Loudmouth for getting me all cleaned up!</p>
<p>Coming up in the next couple of weeks &#8211; The Death of the Snowflake.  How Digital Realty Trust and the wholesale colo providers, coupled with the Open Compute Initiative, are about to remake the modern data center in a way you could never have expected.</p>
<p>Stay tuned and thanks for following me.</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2012/05/new-blog-format-and-the-big-upcoming-topic-the-king-is-dead-long-live-the-king/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>2011 Data Center Trend Reviews</title>
		<link>http://datacentergurublog.rosendin.com/2012/02/2011-data-center-trend-reviews/</link>
		<comments>http://datacentergurublog.rosendin.com/2012/02/2011-data-center-trend-reviews/#comments</comments>
		<pubDate>Thu, 23 Feb 2012 19:28:00 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[modular power]]></category>
		<category><![CDATA[power distribution]]></category>
		<category><![CDATA[PUE]]></category>
		<category><![CDATA[UPS power]]></category>

		<guid isPermaLink="false">http://datacenterguru.vrazer.com/?p=6</guid>
		<description><![CDATA[Sorry for not getting back to you all with the review of the 2011 predictions.  All in all, I think we hit it on the head on two items and missed on the third. Cloud Computing The greatest increase in data center space and utilization was clearly in the cloud space last year.  When the <a href="http://datacentergurublog.rosendin.com/2012/02/2011-data-center-trend-reviews/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>Sorry for not getting back to you all with the review of the 2011 predictions.  All in all, I think we hit it on the head on two items and missed on the third.</p>
<h1>Cloud Computing</h1>
<p>The greatest increase in data center space and utilization was clearly in the cloud space last year.  When the 2011 prediction was made, the inference was that cloud would grow, whether as a service or as a cornerstone of the technology plan for a business.  In reviewing where the money flowed last year, the kings of the hill were Amazon, Facebook, Google, Apple, Zynga, LinkedIn and other service systems.  In 2011, Siri, Apple Cloud, Facebook user growth and the Zynga and LinkedIn expansions simply offer irrefutable evidence that the big growth is in geographically independent, enterprise-level application delivery &#8211; the ALL/ANYWHERE/ANYTIME paradigm.  We also take this as a major and continuing shift to SaaS, IaaS and PaaS, depending on your needs.</p>
<h1>Modular Data Centers</h1>
<p>Don&#8217;t get me wrong, our modular business was very robust this last 12 months.  We have viewed the modularity of the data center as speaking to the larger issue of outsourcing real estate solutions to the wholesale colocation providers.  This is related to scale, cost/SF or cost/kW and the general efficiencies you see when you scale a business to, say, 18M square feet of data center space.</p>
<h1>Creativity from Manufacturers</h1>
<p>We dead missed this one.  We expected breakthroughs in design or manufactured systems this year.  What we observed was the general failure of new designs when faced with a highly-efficient, capacitive IT load.  While we won&#8217;t count the high efficiency data center dead from an electrical standpoint, it certainly is handicapped by having to serve a heterogeneous IT equipment system.  Where Amazon, Facebook, Apple or Google (herein and now referred to in this blog as the CLOUD GIANTS) can vertically integrate their application, platform and facility solutions to come up with some very efficient models, not everyone can enjoy those savings unless they are willing to make a profound change in their hardware systems.  Everyone&#8217;s looking at this, but it may take a decade for our traditional enterprise-level users in the banking and manufacturing businesses to be able to churn older hardware for the newer systems currently seen in the CLOUD GIANTS.</p>
<p>We also observed limited advancement in critical power and cooling systems outside of Facebook Open Compute.  While we applaud the use of new long-life battery systems, the power systems we have seen are merely derivative in their advancement.</p>
<p>And packaging into a modular building does NOT count!  That&#8217;s just a modification of a construction technique.</p>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2012/02/2011-data-center-trend-reviews/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Server Power Supplies Take an Ugly Turn</title>
		<link>http://datacentergurublog.rosendin.com/2011/12/server-power-supplies-take-an-ugly-turn/</link>
		<comments>http://datacentergurublog.rosendin.com/2011/12/server-power-supplies-take-an-ugly-turn/#comments</comments>
		<pubDate>Thu, 08 Dec 2011 15:32:00 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[modular power]]></category>
		<category><![CDATA[power distribution]]></category>
		<category><![CDATA[PUE]]></category>
		<category><![CDATA[UPS power]]></category>

		<guid isPermaLink="false">http://datacenterguru.vrazer.com/?p=7</guid>
		<description><![CDATA[Well, it&#8217;s official.  I have lived long enough to see history repeat itself. So this is what they mean when people mention the benefit from the wisdom gained from age.  It&#8217;s more like the movie &#8220;Groundhog Day&#8221;, where you are stuck in a tragic comedy. On several occasions in the past six months, our team has <a href="http://datacentergurublog.rosendin.com/2011/12/server-power-supplies-take-an-ugly-turn/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>Well, it&#8217;s official.  <strong><em>I have lived long enough to see history repeat itself.</em></strong></p>
<p>So this is what people mean when they mention the wisdom gained with age.  It&#8217;s more like the movie &#8220;Groundhog Day&#8221;, where you are stuck in a tragic comedy.</p>
<p>On several occasions in the past six months, our team has run into significant power quality issues with 3rd-order harmonics in a few of our data centers, in both the wholesale colo and enterprise markets.  Mentioning third-order harmonics in a data center is like saying it&#8217;s going to be hot in the desert &#8211; you know it&#8217;s true, but you hope it&#8217;s not.</p>
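<p>For readers who don&#8217;t live with this every day, here is a minimal sketch (Python, with assumed per-unit magnitudes) of why triplen harmonics earn that reputation in a wye system: the fundamental currents of a balanced three-phase load cancel in the neutral, but 3rd-order components are zero-sequence and stack arithmetically:</p>
<pre><code># Sketch: balanced 3-phase load with an assumed 30% 3rd harmonic per phase.
# Fundamentals cancel in the neutral; the 3rd harmonics add in phase.
import numpy as np

f = 60.0                                  # fundamental frequency, Hz
t = np.linspace(0, 1 / f, 1000, endpoint=False)
I1, I3 = 1.0, 0.3                         # assumed per-unit magnitudes

phases = []
for k in range(3):                        # phases A, B, C at 120-degree shifts
    shift = -2 * np.pi * k / 3
    i_fund = I1 * np.sin(2 * np.pi * f * t + shift)
    i_h3 = I3 * np.sin(3 * (2 * np.pi * f * t + shift))   # 3rd harmonic
    phases.append(i_fund + i_h3)

neutral = phases[0] + phases[1] + phases[2]
print(f"peak phase current:   {np.max(np.abs(phases[0])):.2f} pu")
print(f"peak neutral current: {np.max(np.abs(neutral)):.2f} pu")  # about 3 x I3
</code></pre>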
<p>We have come to discover that commodity servers from Dell have been the culprit behind some pretty ugly load-generated, third-order harmonics.</p>
<p>Power factor correction has been the boogeyman for servers and data centers for the past three years.  It stems from the need to run engine-generators at or below unity power factor &#8211; that is, at a lagging power factor.  Power factor improvement was a byproduct of efficiency improvements demanded by the marketplace, and it is accomplished with capacitors.  However, the amount of capacitance being injected into critical power systems is starting to produce some darned strange behavior.  In solving one problem, the server manufacturers, as well as the OEM power supply vendors, are creating others.  It&#8217;s like the Will Smith movie, &#8220;I Am Legend&#8221;, where the cure for cancer ends up turning everyone into flesh-eating zombies.  Great.</p>
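<p>For reference, here is a sketch of the textbook single-phase capacitor sizing behind that correction &#8211; the load values below are assumptions for illustration, not field data:</p>
<pre><code># Sketch: Qc = P * (tan(phi1) - tan(phi2)); C = Qc / (2 * pi * f * V^2)
import math

P = 500e3          # real power of the load, W (assumed)
pf_before = 0.85   # lagging power factor before correction (assumed)
pf_after = 0.99    # target power factor (assumed)
V = 480.0          # supply voltage, V (assumed)
f = 60.0           # system frequency, Hz

phi1 = math.acos(pf_before)
phi2 = math.acos(pf_after)
Qc = P * (math.tan(phi1) - math.tan(phi2))   # reactive power to inject, VAR
C = Qc / (2 * math.pi * f * V ** 2)          # equivalent capacitance, F

print(f"capacitive VARs required: {Qc / 1e3:.0f} kVAR")
print(f"equivalent capacitance:   {C * 1e6:.0f} uF")
</code></pre>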
<p>Here&#8217;s what we are seeing.</p>
<ul>
<li>Rectifier walk-in setting up harmonic feedback on a 10-second interval in transformer-less UPS modules.  And this is happening on 100% transistorized IGBT modules, not old 6-pulse units.</li>
<li>Harmonics at large multiples of the fundamental, just like the 1990&#8217;s and early Dot Com years.  In fact, we dug 15-year-old waveforms out of the files, and the new power supplies have a waveform exactly like the crap that used to be supplied back then.</li>
<li>Power factor correction in the server power supply is now &#8220;switchable&#8221; by the customer.  If the power factor correction and power supply were engineered correctly (as so aptly pointed out by one of our Engineering Directors, Clint Summers), why is it switchable in a passive critical power electrical system?</li>
</ul>
<p>When you peel back all the hysterics, the bottom line is that the THD, and the amplification of the harmonic disturbance into large multiples of the fundamental frequency, are well outside industry-stated standards.</p>
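<p>The arithmetic behind that claim is simple enough to check yourself: THD is the root-sum-square of the harmonic magnitudes divided by the fundamental.  A minimal sketch in Python, with an assumed per-unit spectrum dominated by a 3rd harmonic like the survey attached below &#8211; the 5% comparison point is the commonly cited IEEE 519 voltage guideline, so confirm the edition and the current-distortion limits that apply to your own system:</p>
<pre><code># Sketch: THD = sqrt(sum of squared harmonic magnitudes) / fundamental.
import math

def thd(fundamental, harmonics):
    """harmonics: magnitudes of the 2nd, 3rd, ... components (same units)."""
    return math.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Assumed per-unit spectrum, for illustration only.
spectrum = {1: 1.00, 3: 0.30, 5: 0.08, 7: 0.05}
value = thd(spectrum[1], [m for h, m in spectrum.items() if h > 1])
print(f"THD = {value:.1%}")   # about 31% here, far outside any 5% guideline
</code></pre>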
<p>Most data center electrical infrastructure systems can deal with relatively strong harmonics.  We have seen some issues with zig-zag neutral reference operations, and we have seen this pop up in PDU operations as well.  Our position is that the Z-Z would not have an issue if the power supplies worked within industry tolerances in the first place.  It&#8217;s like arresting a homeowner for murder when the mob dumped a dead body on the guy&#8217;s front lawn.  You are blaming the wrong party.</p>
<p>Best of all, your power professionals have sufficient power quality metering to completely diagnose these issues.  So don&#8217;t let that server sales guy pull an ol&#233; on you.  The matador goes home and the bull ends up as steak dinners for everyone!</p>
<p>Make sure that you demand the power spectrum data for the server power supply, both in waveform format and as a summary of frequency values, similar to what we&#8217;re showing below.</p>
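<p>And if the vendor only hands you a captured waveform, you can recover the per-harmonic summary yourself.  Here is a sketch using numpy&#8217;s FFT, with a synthetic capture standing in for real meter data:</p>
<pre><code># Sketch: recover harmonic magnitudes from a sampled waveform with an FFT.
import numpy as np

fs = 7680.0                               # sample rate, Hz (128 samples per 60 Hz cycle)
t = np.arange(0, 0.1, 1 / fs)             # six cycles of capture
wave = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)  # assumed data

spectrum = np.fft.rfft(wave) / (len(wave) / 2)   # scale bins to peak amplitude
freqs = np.fft.rfftfreq(len(wave), 1 / fs)

for h in (1, 3, 5, 7):                    # report magnitude at each harmonic bin
    idx = np.argmin(np.abs(freqs - 60 * h))
    print(f"H{h} ({60 * h:>3.0f} Hz): {np.abs(spectrum[idx]):.3f} pu")
</code></pre>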
<p>And for your reading enjoyment, please see the redacted power survey from the site.  I thought I was back in 1999 when I read it.  Note the PU multiples!  Holy crap!</p>
<p>Sorry for the inverse order of the pages.  Enjoy!</p>
<p>And Merry Christmas everyone!</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_102.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_1011.jpg" alt="" width="490" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_112.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_1111.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_092.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0911.jpg" alt="" width="490" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_082.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0811.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_072.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0711.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_062.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0611.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_012.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0111.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_022.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0211.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0321.jpg"><img src="http://4.bp.blogspot.com/-heHlOnfrWHA/TuDicH7vmHI/AAAAAAAAAbs/FOxDWcXDTAs/s640/V2+Measurements+-+Redacted_Page_03.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_033.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0311.jpg" alt="" width="494" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_042.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0411.jpg" alt="" width="490" height="640" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_052.jpg"><img src="http://192.237.205.97/wp-content/uploads/2012/02/V2+Measurements+-+Redacted_Page_0511.jpg" alt="" width="492" height="640" border="0" /></a></div>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2011/12/server-power-supplies-take-an-ugly-turn/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Pros and Cons of Modular Data Centers</title>
		<link>http://datacentergurublog.rosendin.com/2011/10/the-pros-and-cons-of-modular-data-centers/</link>
		<comments>http://datacentergurublog.rosendin.com/2011/10/the-pros-and-cons-of-modular-data-centers/#comments</comments>
		<pubDate>Thu, 27 Oct 2011 18:57:00 +0000</pubDate>
		<dc:creator><![CDATA[Bill]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[data center]]></category>
		<category><![CDATA[data centers]]></category>
		<category><![CDATA[modular power]]></category>
		<category><![CDATA[power distribution]]></category>
		<category><![CDATA[PUE]]></category>
		<category><![CDATA[UPS power]]></category>

		<guid isPermaLink="false">http://datacenterguru.vrazer.com/?p=8</guid>
		<description><![CDATA[I knew I forgot something.  Back in June, Rosendin delivered a presentation at the 7&#215;24 Exchange National meeting in Orlando, Florida.  The purpose of the presentation was to speak to modular and containerized data center solutions, as compared to traditionally built mission-critical environments.  Exclusive to this presentation are the cost, schedule and form factors <a href="http://datacentergurublog.rosendin.com/2011/10/the-pros-and-cons-of-modular-data-centers/">[Read more]</a> ]]></description>
				<content:encoded><![CDATA[<p>I knew I forgot something.  Back in June, Rosendin delivered a presentation at the 7&#215;24 Exchange National meeting in Orlando, Florida.  The purpose of the presentation was to speak to modular and containerized data center solutions, as compared to traditionally built mission-critical environments.  Exclusive to this presentation are the cost, schedule and form factors for each, something that had not been revealed anywhere in industry writings before this time.</p>
<p>Enjoy.</p>
<div class="separator" style="clear: both; text-align: center;"></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-R-dYe0MyXN8/Tqmn3LwUGtI/AAAAAAAAAWc/2OKpywYHsMM/s1600/Slide1.JPG"><img src="http://3.bp.blogspot.com/-R-dYe0MyXN8/Tqmn3LwUGtI/AAAAAAAAAWc/2OKpywYHsMM/s320/Slide1.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://1.bp.blogspot.com/-gxq0DCXn2Lk/Tqmn4x6Sl6I/AAAAAAAAAWk/G15lYEIzdIQ/s1600/Slide2.JPG"><img src="http://1.bp.blogspot.com/-gxq0DCXn2Lk/Tqmn4x6Sl6I/AAAAAAAAAWk/G15lYEIzdIQ/s320/Slide2.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-8tkvVXttQdM/Tqmn6cjhQ7I/AAAAAAAAAWs/2n9BOq0Bonw/s1600/Slide3.JPG"><img src="http://3.bp.blogspot.com/-8tkvVXttQdM/Tqmn6cjhQ7I/AAAAAAAAAWs/2n9BOq0Bonw/s320/Slide3.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-MVHOq3wslt4/Tqmn7jBvb2I/AAAAAAAAAW0/JYuo-6UyMdo/s1600/Slide4.JPG"><img src="http://4.bp.blogspot.com/-MVHOq3wslt4/Tqmn7jBvb2I/AAAAAAAAAW0/JYuo-6UyMdo/s320/Slide4.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-BPPX-Htq42o/Tqmn9BCReCI/AAAAAAAAAW8/z4ezn3PX0OE/s1600/Slide5.JPG"><img src="http://3.bp.blogspot.com/-BPPX-Htq42o/Tqmn9BCReCI/AAAAAAAAAW8/z4ezn3PX0OE/s320/Slide5.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-vXQLHiia8SQ/Tqmn-3K-KRI/AAAAAAAAAXE/E_SU8almpk0/s1600/Slide6.JPG"><img src="http://3.bp.blogspot.com/-vXQLHiia8SQ/Tqmn-3K-KRI/AAAAAAAAAXE/E_SU8almpk0/s320/Slide6.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-QjhVb7Q97OM/TqmoBTX3nnI/AAAAAAAAAXM/Ikre84Ac_SQ/s1600/Slide7.JPG"><img src="http://2.bp.blogspot.com/-QjhVb7Q97OM/TqmoBTX3nnI/AAAAAAAAAXM/Ikre84Ac_SQ/s320/Slide7.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://1.bp.blogspot.com/-3Lao5_YabQY/TqmoEZ7XuvI/AAAAAAAAAXU/ugVgm8j2LeU/s1600/Slide8.JPG"><img src="http://1.bp.blogspot.com/-3Lao5_YabQY/TqmoEZ7XuvI/AAAAAAAAAXU/ugVgm8j2LeU/s320/Slide8.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-8ucNa7wCgj0/TqmoIZre1NI/AAAAAAAAAXc/Af-ayI5fNmw/s1600/Slide9.JPG"><img src="http://3.bp.blogspot.com/-8ucNa7wCgj0/TqmoIZre1NI/AAAAAAAAAXc/Af-ayI5fNmw/s320/Slide9.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-t71dOwWfQIU/TqmoOvOtLmI/AAAAAAAAAXk/mg52aMuQyWk/s1600/Slide10.JPG"><img src="http://2.bp.blogspot.com/-t71dOwWfQIU/TqmoOvOtLmI/AAAAAAAAAXk/mg52aMuQyWk/s320/Slide10.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-Mqx5O3QZiKo/TqmoQeTE5ZI/AAAAAAAAAXs/FRGu0OpSPe4/s1600/Slide11.JPG"><img src="http://4.bp.blogspot.com/-Mqx5O3QZiKo/TqmoQeTE5ZI/AAAAAAAAAXs/FRGu0OpSPe4/s320/Slide11.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://1.bp.blogspot.com/-YGwvK7Mxjdo/TqmoTWw8qwI/AAAAAAAAAX0/xucw4F5r6-c/s1600/Slide12.JPG"><img src="http://1.bp.blogspot.com/-YGwvK7Mxjdo/TqmoTWw8qwI/AAAAAAAAAX0/xucw4F5r6-c/s320/Slide12.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-XPigJc9B-Jw/TqmoVbZrbmI/AAAAAAAAAX8/lmdN5D5m0o0/s1600/Slide13.JPG"><img src="http://2.bp.blogspot.com/-XPigJc9B-Jw/TqmoVbZrbmI/AAAAAAAAAX8/lmdN5D5m0o0/s320/Slide13.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-Ol5GWqKF5jM/TqmoX_sCFYI/AAAAAAAAAYE/61fcIyrgBNA/s1600/Slide14.JPG"><img src="http://2.bp.blogspot.com/-Ol5GWqKF5jM/TqmoX_sCFYI/AAAAAAAAAYE/61fcIyrgBNA/s320/Slide14.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-_04aftOtkwY/TqmobxwMo4I/AAAAAAAAAYM/Bbf2Jy-loFU/s1600/Slide15.JPG"><img src="http://2.bp.blogspot.com/-_04aftOtkwY/TqmobxwMo4I/AAAAAAAAAYM/Bbf2Jy-loFU/s320/Slide15.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-7pC1oH7Kn_A/Tqmofan-O3I/AAAAAAAAAYU/KWum9fX-H-U/s1600/Slide16.JPG"><img src="http://4.bp.blogspot.com/-7pC1oH7Kn_A/Tqmofan-O3I/AAAAAAAAAYU/KWum9fX-H-U/s320/Slide16.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-yKbY2jcCyN4/TqmojvEy23I/AAAAAAAAAYc/yz4xvnlwuQo/s1600/Slide17.JPG"><img src="http://2.bp.blogspot.com/-yKbY2jcCyN4/TqmojvEy23I/AAAAAAAAAYc/yz4xvnlwuQo/s320/Slide17.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-aUwcjaWT6g0/Tqmom1VpKUI/AAAAAAAAAYk/p2uLIinyCgA/s1600/Slide18.JPG"><img src="http://3.bp.blogspot.com/-aUwcjaWT6g0/Tqmom1VpKUI/AAAAAAAAAYk/p2uLIinyCgA/s320/Slide18.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-WjtW5isHTtA/TqmoqKx1P6I/AAAAAAAAAYs/uMCF9w39sUg/s1600/Slide19.JPG"><img src="http://2.bp.blogspot.com/-WjtW5isHTtA/TqmoqKx1P6I/AAAAAAAAAYs/uMCF9w39sUg/s320/Slide19.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-YwYPTleN9ds/TqmotkUnCFI/AAAAAAAAAY0/z2tB1kCEr6E/s1600/Slide20.JPG"><img src="http://4.bp.blogspot.com/-YwYPTleN9ds/TqmotkUnCFI/AAAAAAAAAY0/z2tB1kCEr6E/s320/Slide20.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-TXr3ZC0zyJw/Tqmo0HCwxuI/AAAAAAAAAY8/UIpp7I5BILA/s1600/Slide21.JPG"><img src="http://4.bp.blogspot.com/-TXr3ZC0zyJw/Tqmo0HCwxuI/AAAAAAAAAY8/UIpp7I5BILA/s320/Slide21.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://2.bp.blogspot.com/-Cr6-QDCBl_c/Tqmo4Yha2NI/AAAAAAAAAZE/2fAKAU-b4S8/s1600/Slide22.JPG"><img src="http://2.bp.blogspot.com/-Cr6-QDCBl_c/Tqmo4Yha2NI/AAAAAAAAAZE/2fAKAU-b4S8/s320/Slide22.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-VY59xx0M51c/Tqmo8JMAfOI/AAAAAAAAAZM/ZnCm1T9QaXI/s1600/Slide23.JPG"><img src="http://3.bp.blogspot.com/-VY59xx0M51c/Tqmo8JMAfOI/AAAAAAAAAZM/ZnCm1T9QaXI/s320/Slide23.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://3.bp.blogspot.com/-AzS2Odiq9hk/TqmnkVBFkLI/AAAAAAAAAWU/VWVPXUFkqDo/s1600/Slide24.JPG"><img src="http://3.bp.blogspot.com/-AzS2Odiq9hk/TqmnkVBFkLI/AAAAAAAAAWU/VWVPXUFkqDo/s320/Slide24.JPG" alt="" width="320" height="240" border="0" /></a></div>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="http://1.bp.blogspot.com/-I1YdcEQwEls/TqmncqPz8dI/AAAAAAAAAWM/uI1Pb0ue8uw/s1600/Slide25.JPG"><img src="http://1.bp.blogspot.com/-I1YdcEQwEls/TqmncqPz8dI/AAAAAAAAAWM/uI1Pb0ue8uw/s320/Slide25.JPG" alt="" width="320" height="240" border="0" /></a></div>
]]></content:encoded>
			<wfw:commentRss>http://datacentergurublog.rosendin.com/2011/10/the-pros-and-cons-of-modular-data-centers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
