<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-720961704184714048</id><updated>2019-12-18T21:08:38.148-08:00</updated><category term="storage"/><category term="virtualization"/><category term="&quot;EMC World 2008&quot;"/><category term="EMC"/><category term="Green"/><category term="VCE EMC Cisco VMware coalition"/><category term="VMWare"/><category term="VTL"/><category term="block"/><category term="block storage virtualization sea-change vendors"/><category term="storage disk drive future"/><title type='text'>Joerg&#39;s IT Blog</title><subtitle type='html'>Welcome!&#xa;&#xa;This is where I post some of my personal thoughts on data center trends, storage, computers, and whatever else happens to be on my mind. 
I have no shortage of opinions, so hang on, &#39;cause it might be a bumpy ride!</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default?start-index=26&amp;max-results=25&amp;redirect=false'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><generator version='7.00' uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>40</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-3899180842865541422</id><published>2016-08-25T17:32:00.000-07:00</published><updated>2016-08-25T17:41:18.112-07:00</updated><title type='text'>Are we living in a cloud computing bubble?</title><content type='html'>Talking with our customers, I keep hearing the same thing: &quot;IT management has decided on a Cloud First strategy&quot;.&lt;br /&gt;&lt;br /&gt;Are we working in a cloud computing bubble? That is, have business leaders (and perhaps some IT leaders) been lulled into assuming that moving or deploying to cloud is as easy as 1-2-3, without any heavy lifting required? 
So, have at it, spend the money, get cloud for cloud’s sake?&lt;br /&gt;&lt;br /&gt;Listening to the cloud vendor messages, one can be forgiven for thinking that cloud deployments are a snap, and will quickly put a business on the path to digital nirvana. However, the history of enterprise software tells a different story. There have been many cases in recent decades of companies slapping expensive technology solutions on top of calcified processes and even more calcified business models, and expecting overnight success — but getting none.&lt;br /&gt;&lt;br /&gt;Technology is essential to keep up, and the faster an organization can move to digital, the better. But like the tires on a race car, technology makes the ride smoother; it is not the reason for success. A scan of the Forbes Global 2000 List of the World’s Largest Public Companies between 2006 and 2016 shows other forces at work that determine business success. The top companies in 2006 were Citigroup, GE, Bank of America, AIG, and HSBC — four US-based, the fifth in the UK. The top companies this year are ICBC, China Construction Bank, Agricultural Bank of China, Berkshire Hathaway, and JPMorgan Chase — the top three based in China, the next two in the United States. The point here is that no amount of advanced technology would have necessarily helped the 2006 leaders hold their leads, as they were knocked off their perches by players that emerged from other parts of the global economy with different models and markets.&lt;br /&gt;&lt;br /&gt;Successful organizations understand that underlying corporate culture and business models, led by forward-thinking managers or leaders who nurture a spirit of innovation among all levels of employees, are what matter in the long run. 
They also understand that technology supports this in a big way, but cannot compensate for any deficiencies.&lt;br /&gt;&lt;br /&gt;This is a point to keep in mind when considering the fact that enterprises may be investing hundreds of thousands or millions of dollars, euros, pounds or rupees in cloud computing solutions every year, yet are still uncertain about how and where this technology is best applied. IDC recently released estimates that up to $96.5 billion will be spent on cloud services worldwide this year alone, a number that will reach $195 billion by 2020. Gartner adds a few billion to the equation, suggesting that the worldwide public cloud services market is forecast to reach $204 billion this year.&lt;br /&gt;&lt;br /&gt;Is this money being well spent? A recent survey of more than 400 enterprise IT executives, conducted by Wakefield Research and sponsored by LogicWorks, confirms general uncertainty among IT and business leaders about how best to leverage the cloud to drive growth and efficiency across their organizations, as well as the need for more thoughtful planning for both cloud migration and ongoing cloud maintenance. Eight in 10 executives believe that their company’s leadership team underestimates the time and cost required to maintain resources in the cloud.&lt;br /&gt;&lt;br /&gt;There are issues with staffing for the new cloud and digital realities as well. The need for technically proficient people does not go away when things are shifted to the cloud. Even as the demand for enterprise-level, cloud-based services expands, the LogicWorks survey finds 43 percent believe their organization’s IT workforce is not completely prepared to address the challenges of managing their cloud resources over the next five years. 
It is a problem compounded by high demand for, and a relatively low supply of, workers skilled in cloud, security, DevOps engineering and other IT disciplines.&lt;br /&gt;&lt;br /&gt;Organizational preparation needs to be the most essential piece of cloud and digital engagements, and according to the LogicWorks survey, not enough is being done. In a compelling read at TechTarget, Mark Tonsetic, IT practice leader at CEB, outlines four ways to tell the “cloud story,” to ensure that the money and time spent on technology is met by appropriate transformation of the business:&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Cloud and digital transformation are one and the same.&lt;/b&gt; Cloud may offer compelling cost savings, but this is only one small piece of its value proposition, Tonsetic says. Information technology should be seen for what it is becoming: “an ingredient in corporate growth.” He urges cloud proponents to “stress speed and flexibility gains that can enable enterprise digital strategy,” and elevate this discussion to the board and investor level. Enterprise digitization and growth is today’s corporate holy grail.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Cloud and digital elevate the roles of IT leaders.&lt;/b&gt; IT leaders need to shift their focus from governance and policy to advice and education on the technology options available to their businesses. IT leaders need to serve as partners to their business users. As Tonsetic puts it, “The cloud story that IT leaders need to tell their business partners should be about how IT will build new partnerships with the business to explore and exploit cloud opportunities. 
Relay the case for cloud computing in the language of new business opportunities, new models for business engagement and collaboration, and new opportunities for technology careers.”&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Cloud and digital transform IT departments into competitive service brokers.&lt;/b&gt; Corporate IT no longer needs to operate as the owners and operators of email, CRM or even ERP systems, Tonsetic relates. These departments are now in a different kind of business — consulting and working with corporate leaders to define and execute transformational digital strategies.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Cloud and digital mean new types of career opportunities.&lt;/b&gt; As mentioned in the LogicWorks survey, more than four in 10 executives say they don’t have the available skills to move forward with cloud. At the same time, there are highly capable IT staff members still involved in legacy or on-premises systems development, maintenance or operations. This pool of talent shouldn’t go to waste. 
“Forward-thinking IT organizations are careful to send teams the message that cloud and other technology developments present an opportunity for experimentation, innovation, and growth across IT — a refreshing change of pace for teams that have labored mostly to the tune of efficiency,” Tonsetic relates.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/3899180842865541422/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=3899180842865541422' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3899180842865541422'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3899180842865541422'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2016/08/talking-with-our-customers-i-keep.html' title='Are we living in a cloud computing bubble?'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-6045647765886822795</id><published>2016-08-09T17:51:00.000-07:00</published><updated>2016-08-09T17:51:45.491-07:00</updated><title type='text'>Modernizing backups, or as I like to call it, data protection</title><content type='html'>&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;Nearly every enterprise IT organization out there is experiencing a significant increase in customer expectations around application performance and business 
continuity. Response times once measured in seconds are now measured in milliseconds, and downtime measured in hours or days is now expected to be minutes, if that.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;To keep pace with all of that change in expectations, many organizations are implementing application modernization programs that allow them to take advantage of new technologies such as cloud-enabled applications. These changes are also driving changes in the infrastructure that those new/modern applications run on. Modern network technologies, IaaS, even more virtualization, and containers are all being driven deep within the modern datacenter by these needs.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;What hasn&#39;t been keeping up, however, are backups, or as I prefer to call it, data protection. The old full and incremental backups to some medium such as tape, or even disk, just aren&#39;t getting it done anymore. The entire concept of backups/data protection is being replaced with Business Continuance. The focus has shifted from a siloed attempt to protect the business&#39;s data, to ensuring that the business can continue to run.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;With organizations considering availability, it’s no longer about simply needing to restore a single file or folder. It’s about complex processes like recovering a multi-tiered application that spans multiple servers and bringing each server and the data it requires into a consistent state with the others. 
It’s far more involved than the restore jobs of yesteryear.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;In some ways, basic backup no longer has a place in the modern data center. As new technologies have come into use—the foremost being virtualization—you can now easily move workloads from one location to another. You can even perform maintenance during the day. The options around availability are much greater and more flexible than what backups alone provide.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;Even so, the idea of a backup—that is, having copies of your data—is still viable. Now data centers have moved to advanced concepts like replicating data at the disk level or entire virtual machines, whether from one store to another or even from one site to another. This provides both increased protection and faster recoverability.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;Organizations today aren’t just looking at availability on a per-application or per-server basis. The goal is to make everything available in the event of an outage. So it’s not “we have our order processing back online, but e-mail is still down.” Now it’s essential to have the entire business back up and running, not just a few services.&lt;/span&gt;&lt;br /&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;        &lt;br /&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;Should you have an availability event, can you benefit from backup alone? 
Backups certainly still have a place. For example, if you’re replicating changes to a VM and the source VM is somehow corrupted, that corruption will simply get replicated. So having a backup of the critical data on that server can play a role in ensuring recoverability. However, backup as the only method is no longer an option for businesses focused on being available.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;As newer technologies have emerged, the interval between backups has also shortened. In previous years, your backup window simply couldn’t be anything less than nightly. These days, backups occur&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;much more frequently—even during production hours. And with technology like instant VM recovery in place, the concept of restoring a backup job is somewhat obsolete.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;If your organization is like most, you’ve already begun or have made the investment in a modern data center. Despite your desire to simplify what you manage, it’s a complex mix of virtual machines, servers, storage and networking. 
Even though you’ve made that investment to maintain operations, traditional backups alone just won’t scale to meet the availability needs of the organization in such an advanced data center environment.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;        &lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;p1&quot;&gt;&lt;span style=&quot;font-family: Arial, Helvetica, sans-serif;&quot;&gt;Your organization must have standards for what is and is not acceptable downtime. Comparing the business’s required levels of availability against what’s currently attainable can help you create a service baseline from which to work towards availability. Begin with the business requirements around application and environment availability, instead of what your backups can do today. This will help IT look for ways to cost-effectively take advantage of current technologies or invest in new ones to make meeting availability requirements a reality.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/6045647765886822795/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=6045647765886822795' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6045647765886822795'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6045647765886822795'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2016/08/modernizing-backups-or-as-i-like-to.html' title='Modernizing backups, or as I like to call it, data protection'/><author><name>Joerg 
Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-855967179204657561</id><published>2015-12-15T11:40:00.000-08:00</published><updated>2015-12-15T11:40:55.046-08:00</updated><title type='text'>Being a good listener is one of the keys to success in any business</title><content type='html'>I&#39;m in the sales business, and in my business learning to listen well is a major key to success. Actually hearing what people are saying, and not just waiting for your turn to speak, is critical if you want to learn what it is your customer is looking for in a solution. Knowing what your customer needs allows you to provide a solution that the customer will embrace, and increases your chances of a sale. &lt;br /&gt;&lt;br /&gt;Unfortunately, all too often I&#39;ve been in meetings where there is at least one person who is more interested in formulating what they are going to say next than they are in hearing what the person who&#39;s currently speaking has to say. In my business (IT) I find this to be endemic. I suspect that has a lot to do with the passion that many IT professionals bring to their jobs. That passion is a good thing, but it can also be a two-edged sword. Not hearing and absorbing what the current speaker has to say means that they often come across as &quot;preachy&quot; or &quot;professorial&quot; and miss important points of view and information.&lt;br /&gt;&lt;br /&gt;So, I would encourage everyone to become a better listener. But being a good listener does not come easily for some of us. It takes time, practice and dedication. 
What comes to your mind when you think about listening to a friend or co-worker? Do you find yourself thinking about what you want to say in response to what they have said, or are you fully engaged with what they are talking about? When it comes to connecting with others, it’s all about consciously listening to them and the information that they are sharing with you.&lt;br /&gt;&lt;br /&gt;1. Eye contact&lt;br /&gt;When it comes to being a good listener, it’s important for you to have eye contact with the other person. It shows that you are paying attention and engaged with the conversation. When you don’t have eye contact with the other person, it shows that you don’t care and are not interested in what they have to say. Practice having eye contact with the next person you have a conversation with.&lt;br /&gt;&lt;br /&gt;2. Find the “Why” and “What”&lt;br /&gt;For you to be a good listener, you need to find out the “Why” and “What.” Why are they talking to you and what is the message they are trying to share with you? Being a good listener takes practice and when you are able to practice finding out the “Why” and “What” of the other person, you will be much more engaged in the conversation.&lt;br /&gt;&lt;br /&gt;3. Focus on the other person&lt;br /&gt;It’s easy for us to think about what we want to say after the other person has stopped talking. This will not make you a better listener. If you are constantly thinking about your response, you will always miss out on carefully listening to the other person. Focus on what they have to say. Find out the “Why” and “What” and maintain eye contact. Once the other person stops talking, then think about your response. But while you are listening, you must be consciously listening. All too often, when we listen to people, we are rehearsing what we want to say rather than simply taking in their message.&lt;br /&gt;&lt;br /&gt;4. 
Limit distractions&lt;br /&gt;We live in a society that is filled with so many distractions. We are constantly listening to so much noise that it’s a challenge to truly listen to another person. In order for you to be a good listener, you need to limit distractions during your conversation, whether it be the television, telephones or interruptions. It takes a mental decision to limit distractions when you are listening to someone else. How can you possibly be a good listener if the television is blasting or your phone keeps ringing? It would be nearly impossible to be a good listener with these distractions. Limit as many interruptions as you can when you are listening to someone else. This not only shows them that you care, but also that you are practicing good social skills.&lt;br /&gt;&lt;br /&gt;5. Engage&lt;br /&gt;Engage yourself in the conversation. Being engaged is showing your attention towards the other person. Let the other person know that they have your attention and focus. When you are not engaged in the conversation, the other person will notice and will most likely not want to talk to you again. Show the other person that you care about them and are interested in what they have to say. One way you can show this is by responding with a short comment, such as “Yes” or “I understand.” This expresses to the other person that you are truly listening. 
Make sure that you allow the other person to primarily do the talking while you are still engaged.&lt;br /&gt;&lt;br /&gt;I believe that if you become a better listener, you&#39;ll be more successful both in business and in your personal relationships.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/855967179204657561/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=855967179204657561' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/855967179204657561'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/855967179204657561'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2015/12/being-good-listener-is-one-of-keys-to.html' title='Being a good listener is one of the keys to success in any business'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-3589777877527747109</id><published>2015-12-15T10:52:00.000-08:00</published><updated>2015-12-15T10:52:03.967-08:00</updated><title type='text'></title><content type='html'>Folks,&lt;br /&gt;&lt;br /&gt;Please note that I&#39;ve changed the name of my blog. The name change reflects the change that I, the company I work for, and the industry are making. Converged/hyperconverged infrastructure, the cloud, OpenStack, DevOps, etc. 
are all changing the IT landscape, and if you aren&#39;t changing with it, then you&#39;re going to get left behind.&lt;br /&gt;&lt;br /&gt;So, I&#39;m changing the name and focus of this blog since, hey, none of us wants to be called a dinosaur, do we?&lt;br /&gt;&lt;br /&gt;--Joerg</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/3589777877527747109/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=3589777877527747109' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3589777877527747109'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3589777877527747109'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2015/12/folks-please-note-that-ive-changed-name.html' title=''/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-1340709041843013070</id><published>2015-05-12T17:25:00.000-07:00</published><updated>2015-05-12T17:25:11.680-07:00</updated><title type='text'>Container Wars!</title><content type='html'>The container wars have started!&lt;br /&gt;&lt;br /&gt;Containers have a huge amount of hype and momentum, and there are many spoils for whoever becomes dominant in the container ecosystem. 
The two major startups innovating in this space–CoreOS and Docker–have declared war on each other as part of the fight for that dominance.&lt;br /&gt;&lt;br /&gt;The Current Landscape&lt;br /&gt;&lt;br /&gt;Recently, CoreOS announced Tectonic. Tectonic is a full solution for running containers, including CoreOS as the host OS, Docker or rkt as the container format, and Kubernetes for managing the cluster. It also uses a number of other CoreOS tools, such as etcd and fleet.&lt;br /&gt;&lt;br /&gt;Despite Docker being an option on Tectonic, CoreOS’s messaging is not focused on Docker, and neither the Tectonic site nor the announcement included a single mention of Docker. It’s clear that CoreOS is moving in a different direction. CoreOS’s CEO Alex Polvi says that “Docker Platform will be a choice for companies that want vSphere for containers”, but that Rocket is the choice for “enterprises that already have an existing environment and want to add containers to it”. The latter is a far larger prize.&lt;br /&gt;&lt;br /&gt;Companies will choose Docker Platform as an alternative to things like Cloud Foundry. Companies like Cloud Foundry will use things like Rocket to build Cloud Foundry.&lt;br /&gt;&lt;br /&gt;Docker, meanwhile, doesn’t mention CoreOS or Kubernetes anywhere in their docs, and on their site only mentions them in passing. Docker CEO Solomon Hykes reacted fairly negatively to the announcement of rkt back in December. He has also started to use the phrase “docker-native” to differentiate tools that Docker Inc. 
builds from other tools in the ecosystem, indicating other tools are second class.&lt;br /&gt;&lt;br /&gt;Right now, both companies provide different pieces with their respective stacks and platforms.&lt;br /&gt;&lt;br /&gt;To run containers successfully on bare metal server infrastructure, you need:&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;A Linux host OS (Windows support for containers is coming with the next release of Windows).&lt;/li&gt;&lt;li&gt;The container runtime system to start, stop, and monitor the containers running on a host.&lt;/li&gt;&lt;li&gt;Some sort of orchestration to manage all those containers.&lt;/li&gt;&lt;/ol&gt;Tectonic provides all of these, with CoreOS, Docker or rkt, and Kubernetes. However, it appears to omit a pre-defined way to build a container image, the equivalent of the Dockerfile. This is by design, and there are many ways to build images (Chef, Puppet, Bash, etc.) that can be leveraged.&lt;br /&gt;&lt;br /&gt;On the Docker side, things are less clear. Docker isn’t opinionated on the host OS, but also doesn’t provide much help there. Docker Machine abstracts over it for some infrastructure services, where you use whatever host OS exists already, but doesn’t provide much help when you want to run the whole thing. Docker Swarm and Docker Machine provide some parts of orchestration. There is also Docker Compose, which Docker has been recommending as part of this puzzle while simultaneously saying it’s not intended for production use. Of course they have the Dockerfile to build containers, though some indicate that this is immature for large teams.&lt;br /&gt;&lt;br /&gt;The impression you get from Docker is that they want to own the entire stack. If you visit the Docker site, you could be forgiven for thinking that the only tools you use with Docker are Docker Inc’s tooling. 
However, Docker doesn’t really have good solutions for the host OS and orchestration components at present.&lt;br /&gt;&lt;br /&gt;Similarly, Docker is pushing its own tools instead of alternatives, even when those tools aren’t really valid alternatives. Docker Compose, for example, is being pushed as an orchestration framework even though this feature is still on the roadmap.&lt;br /&gt;&lt;br /&gt;The container landscape is fairly new, but Docker has a pretty clear lead in terms of mindshare. Both companies are trying to control the conversation: Docker talking about Docker-native and generally focusing its marketing around the term Docker, while others in the space – CoreOS and Google for example – are focusing the conversation on “containers” rather than “Docker”.&lt;br /&gt;&lt;br /&gt;This is made a little bit difficult by the head start that Docker has – they basically created the excitement around containers, and most people in the ecosystem talk about Docker rather than the container space. Docker has also done an incredibly good job of making Docker easy to use and to try out.&lt;br /&gt;&lt;br /&gt;By contrast, CoreOS and Kubernetes are not tools for beginners. You don’t really need them until you get code in production and suffer from the problems they solve, while Docker is something you can play around with locally. Docker’s ease of use, everything from the command line to the well-thought-out docs to boot2docker, is also well ahead of rkt and CoreOS’s offering, which are much harder to get started with.&lt;br /&gt;&lt;br /&gt;How does this play out?&lt;br /&gt;&lt;br /&gt;If you’re a consumer in this space, looking to deploy production containers soon, this isn’t a particularly helpful war. The ecosystem is very young, people shipping containers in production are few and far between, and a little bit of maturing of the components would have been useful before the war emerged. 
We are going to end up with a multitude of different ways to do things, and it’s clear we’re far from having one true way.&lt;br /&gt;&lt;br /&gt;From a business perspective, it’s difficult for any of the players to back away from their directions. Docker is certainly focusing on building the Docker ecosystem, to the exclusion of everyone else. Unfortunately, they don’t have all the pieces yet.&lt;br /&gt;&lt;br /&gt;Other companies who want to play in the ecosystem are unlikely to be pleased by Docker’s positioning. CoreOS certainly isn’t alone in its desire for a more open ecosystem.&lt;br /&gt;&lt;br /&gt;Ironically, Docker came about due to a closed ecosystem with a single dominant player. Heroku dominates the PaaS ecosystem to the extent that there really isn’t a PaaS ecosystem, just Heroku. Dotcloud failed to make inroads, and so opened its platform up to disrupt Heroku’s position and move things in a different direction such that Heroku’s dominance didn’t matter. In Docker, they certainly appear to have succeeded with that. Now that Docker is the dominant player in the disruptive ecosystem, CoreOS and other players will want to unseat them and fight on a level playing field before things settle too much.&lt;br /&gt;&lt;br /&gt;The risk for Docker is that, on this trajectory, losing the war means losing everything. If nobody else can play in this space, all of the companies that are left outside will build their own ecosystem that leaves Docker on the outside. Given that Docker lacks considerable parts of the ecosystem (mature orchestration being an obvious one), their attempts at owning the ecosystem are unlikely to succeed in the near term.&lt;br /&gt;&lt;br /&gt;Meanwhile, CoreOS will need to replicate the approachability of the Docker toolset to compete effectively, and will need to do so before Docker solves the missing parts of its puzzle.&lt;br /&gt;&lt;br /&gt;All of the other companies are sitting neutral right now.
Google, MS, VMware are all avoiding committing one way or the other. Their motivations are typically clear, and it doesn’t benefit any of them to pick one or the other. The exception here is that the open ACI standard is likely to interest VMware at least, but I wouldn’t be surprised to see Google doing something in this space, too.&lt;br /&gt;&lt;br /&gt;There is massive risk for all of the players in the ecosystem, depending on how this plays out. Existing players like Amazon, Google and Microsoft are providing differentiated services and tools around containers. The risk of not doing so, of owning no piece of the puzzle, is being sidelined and eventual commoditization. The one API that abstracts over the other tools is the one which wins.&lt;br /&gt;&lt;br /&gt;Long story short – this is the start of a war that will probably be quite bloody, and that none of us is going to enjoy.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/1340709041843013070/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=1340709041843013070' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1340709041843013070'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1340709041843013070'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2015/05/container-wars.html' title='Container Wars!'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-5596049089333579040</id><published>2015-04-27T17:20:00.005-07:00</published><updated>2015-04-27T17:20:52.795-07:00</updated><title type='text'>The Data Center of the Future, what does it look like?</title><content type='html'>Folks,&lt;br /&gt;&lt;br /&gt;I&#39;ve been spending a lot of time talking with customers about storage, flash, HDDs, hyper-converged, cloud, etc. lately. &amp;nbsp;What&#39;s become clear to me recently, yes, I&#39;m a little slow, is that all of these technology changes are driving us toward sea changes in the enterprise data center. In this blog posting, I want to talk a little about how things are changing with regard to storage. &amp;nbsp;I&#39;m going to talk a bit about flash vs. HDD technology and where I see each of them going in the next few years, and I&#39;ll finish up with a discussion of how that will affect the enterprise data center going forward, as well as the data center infrastructure industry in general.&lt;br /&gt;&lt;br /&gt;I believe that the competition between flash and hard disk-based storage systems will continue to drive developments in both. Flash has the upper hand in performance and benefits from Moore&#39;s Law improvements in cost per bit, but has increasing limitations in lifecycle and reliability. Finding well-engineered solutions to these issues will define its progress. Hard disk storage, on the other hand, has cost and capacity on its side. Maintaining those advantages is the primary driver in its roadmap, but I see limits to where that will take them.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Hard Disk Drives (HDDs)&lt;/b&gt;&lt;br /&gt;So, let&#39;s start with a discussion of HDDs.
&amp;nbsp;Hard disk developments continue to wring out a mixture of increased capacity and either stable or increased performance at lower cost. For example, Seagate introduced a 6TB disk in early 2014 which finessed existing techniques, but subsequently announced an 8TB disk at the end of 2014 based on Shingled Magnetic Recording (SMR). This works by allowing tracks on the disk to overlap each other, eliminating the fallow area previously used to separate them. The greater density this allows is offset by the need to rewrite multiple tracks at once. This slows down some write operations, but for a 25 percent increase in capacity -- and with little need for expensive revamps in manufacturing techniques.&lt;br /&gt;&lt;br /&gt;If SMR is commercially successful, then it will speed the adoption of another technique, Two-Dimensional Magnetic Recording (TDMR) signal processing. This becomes necessary when tracks are so thin and/or close together that the read head picks up noise and signals from adjacent tracks when trying to retrieve the wanted data. A number of techniques can solve this, including multiple heads that read portions of multiple tracks simultaneously to let the drive mathematically subtract inter-track interference signals.&lt;br /&gt;&lt;br /&gt;A third major improvement in hard disk density is Heat-Assisted Magnetic Recording (HAMR). This uses drives with lasers mounted on their heads, heating up the track just before the data is recorded. This produces smaller, better-defined magnetized areas with less mutual interference. Seagate had promised HAMR drives this year, but now says that 2017 is more likely.&lt;br /&gt;&lt;br /&gt;Meanwhile, Hitachi has improved capacity in its top-end drives by filling them with helium. The gas has a much lower density than air, reducing drag and turbulence, so platters can be packed closer together.
This allows for greater density at the drive level.&lt;br /&gt;&lt;br /&gt;All these techniques are being adopted as previous innovations -- perpendicular rather than longitudinal recording, for example, where bits are stacked up like biscuits in a packet instead of on a plate -- are running out of steam. By combining all of the above ideas, the hard disk industry expects to be able to deliver around three or four years of continuous capacity growth while maintaining its price differential with flash. However, it should be noted that all of the innovation in HDDs is around capacity. I believe that HDDs will continue to dominate the large-capacity, archive type of workloads for the next 2 or 3 years. After that ... well, read the next section on flash.&lt;br /&gt;&lt;br /&gt;Some argue that the cloud will be taking over this space. However, even if this is true, cloud providers will continue to need very cheap, high-capacity HDDs until flash is able to take over this high-capacity space as well on a $/GB basis.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Flash&lt;/b&gt;&lt;br /&gt;Flash memory is changing rapidly, with many innovations moving from small-scale deployment into the mainstream. Companies such as Intel and Samsung are predicting major advances in 3D NAND, where the basic one-transistor-per-cell architecture of flash memory is stacked into three-dimensional arrays within a chip.&lt;br /&gt;&lt;br /&gt;Intel, in conjunction with its partner Micron, is predicting 48GB per die this year by combining 32-deep 3D NAND with multi-level cells (MLC) that double the storage per transistor. The company says this will create 1TB SSDs that will fit in mobile form factors and be much more competitive with consumer hard disk drives -- still around five times cheaper at that size -- and 10TB enterprise-class SSDs, by 2018.
Moore&#39;s Law will continue to drive down the cost per TB of flash at the same time as these capacity increases occur, thus making flash a viable replacement for high-capacity HDDs in the next 3 to 5 years. Note that this assumes that SSDs will leverage technology such as deduplication in order to help reduce the footprint of data and drive down cost.&lt;br /&gt;&lt;br /&gt;The following is a chart from a Wikibon article on the future of flash:&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://wikibon.org/w/images/b/be/ProjectionCapacityDiskNANDManagementSummary.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;http://wikibon.org/w/images/b/be/ProjectionCapacityDiskNANDManagementSummary.png&quot; height=&quot;433&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;As you can see from the graph above, by 2017 the 4-year cost per TB of flash will be well below that of HDDs, and this trend will continue until 2020, when the 4-year cost per TB of flash hits $9 per TB vs. $74 per TB for HDDs. You can read the entire article&amp;nbsp;&lt;a href=&quot;http://wikibon.org/wiki/v/Evolution_of_All-Flash_Array_Architectures#Conclusions_.26_Recommendations&quot;&gt;here.&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Conclusions&lt;/b&gt;&lt;br /&gt;So, what does all this mean? &amp;nbsp;Among other things, it means that you can expect a shift to what the Wikibon article calls the &quot;Electronic Data Center&quot;. &amp;nbsp;The Electronic Data Center is simply a data center where the mechanical disk drive has been replaced by something like flash, thus eliminating the last of the mechanical devices (they assume tape and tape robots are already gone in their scenario).
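The crossover arithmetic implied by the Wikibon projection can be sketched in a few lines of Python. The 2020 endpoints ($9 per TB for flash, $74 per TB for HDDs, on a 4-year cost basis) come from the chart; the 2014 starting costs and the assumption of a constant annual rate of decline are my own illustrative guesses, not figures from the article.

```python
# Back-of-the-envelope sketch of the flash vs. HDD cost-per-TB crossover.
# The 2020 endpoints come from the Wikibon projection cited in the post;
# the 2014 starting points and the constant-decline assumption are
# illustrative only.

def annual_decline(start, end, years):
    """Constant yearly factor that takes cost from start to end over years."""
    return (end / start) ** (1.0 / years)

flash_2014, hdd_2014 = 1200.0, 160.0   # assumed 2014 4-year cost per TB (USD)
flash_2020, hdd_2020 = 9.0, 74.0       # Wikibon 2020 projections (USD per TB)

flash_rate = annual_decline(flash_2014, flash_2020, 6)
hdd_rate = annual_decline(hdd_2014, hdd_2020, 6)

crossover = None
for year in range(2014, 2021):
    flash = flash_2014 * flash_rate ** (year - 2014)
    hdd = hdd_2014 * hdd_rate ** (year - 2014)
    if crossover is None and hdd > flash:
        crossover = year   # first year flash is the cheaper medium
    print(year, round(flash, 2), round(hdd, 2))

print("flash becomes cheaper per TB around:", crossover)
```

Under these assumed starting points the curves cross around 2017, which lines up with the article's claim that by 2017 the 4-year cost per TB of flash will be well below that of HDDs.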
&amp;nbsp;This will reduce the electricity and cooling needs, as well as the size/footprint, of the data center of the future.&lt;br /&gt;&lt;br /&gt;Let&#39;s assume for a moment that Wikibon is correct. &amp;nbsp;What does this mean for the data center infrastructure industry?&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Companies that build traditional storage arrays will need to shift their technology to &quot;all flash&quot;, and they need to do it quickly. &amp;nbsp;You can see this already happening at companies such as EMC with the acquisition of XtremIO in order to obtain all-flash technology. &amp;nbsp;Companies like NetApp, on the other hand, are developing their all-flash solutions in house. In both cases, however, all-flash solutions are facing internal battles against engineering organizations that are vested in the status quo. &amp;nbsp;That means they could be slow to market with potentially inferior products. However, their sheer size in the market may protect them from complete failure.&lt;/li&gt;&lt;li&gt;What about the raft of new startups producing all-flash arrays? &amp;nbsp;Might the above provide an opening for one or more of those startups to &quot;go big&quot; in the market? &amp;nbsp;What about the rest? My take on this is that indeed, one or more might have the opportunity to &quot;go big&quot; due to the gap that might be created by the &quot;big boys&quot; moving too slowly or trying to shoehorn old existing technologies into the data center. Most of them, however, will either die off or be acquired by a larger competitor.&lt;/li&gt;&lt;/ol&gt;However, I think that there is an even larger risk to the &quot;storage only&quot; companies, both new and old.
I believe that a couple of other market forces will put significant pressure on these &quot;storage only&quot; companies, including the new all-flash startups.&lt;br /&gt;&lt;br /&gt;Specifically, the trends toward cloud computing, hyper-converged infrastructure, and the increasing emphasis on automation being driven by other IT trends such as DevOps will make standalone storage arrays less and less desirable to IT organizations. &amp;nbsp;This will force those companies to move beyond their roots into, for example, hyper-converged infrastructure, where they currently have little or no engineering expertise or management experience. &lt;br /&gt;&lt;br /&gt;The companies who are able to embrace these kinds of moves will likely have a bright future in the data center of the future. &amp;nbsp;However, issues around &quot;not invented here&quot; and a lack of engineering talent in the new areas of technology are going to make it a challenge for those very large storage companies going forward. Again, how they address these issues is going to be a determining factor in their future success.&lt;br /&gt;&lt;br /&gt;To wrap it up, I firmly believe that not everything is &quot;moving to the public cloud&quot; in the enterprise space. What I do believe is that:&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Some workloads currently running in the enterprise data center will move to the public cloud, and be managed by IT.&lt;/li&gt;&lt;li&gt;Some workloads will remain in &quot;private&quot; clouds owned and operated by IT. However, those private clouds must offer internal customers all of the same ease of use that the public cloud offers.
Most likely, they will leverage web-scale architectures (hyper-converged) in order to make management and management automation easier.&lt;/li&gt;&lt;li&gt;Hybrid cloud management software will be used to allow both management and automation to span the enterprise&#39;s private cloud and its public cloud(s).&lt;/li&gt;&lt;li&gt;DevOps and similar initiatives will drive significant automation into the hybrid clouds I describe above, as well as significant change to IT organizations.&lt;/li&gt;&lt;li&gt;These changes will all be highly disruptive, and those IT organizations that embrace change will have an easier time over the next few years than those that don&#39;t. Very large IT organizations will have the hardest time making the changes. Yes, it is hard to turn the aircraft carrier. However, internal customers are demanding it of IT, and will go outside the IT organization to get what they want/need if necessary.&lt;/li&gt;&lt;/ol&gt;In the end, the Data Center of the Future will look very different than the current enterprise data center. It will be a hybrid cloud that spans on-premises and public clouds. It will be an all-electronic data center that uses significantly less footprint and electricity than current data centers.
And finally, this infrastructure will leverage significant automation and be managed by an IT organization that looks very different than the current IT organization.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/5596049089333579040/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=5596049089333579040' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5596049089333579040'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5596049089333579040'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2015/04/the-data-center-of-future-what-does-it.html' title='The Data Center of the Future, what does it look like?'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-2907008808795362832</id><published>2015-04-22T15:23:00.001-07:00</published><updated>2015-04-22T15:23:43.175-07:00</updated><title type='text'>Structured or Unstructured PaaS??</title><content type='html'>Words, labels, tags, etc. in our industry mean something – at least for a while – and then marketing organizations tend to get involved and use words and labels and tags to best align to their specific agenda. 
For example, things like “webscale” or “cloud native apps” were strongly associated with the largest web companies (Google, Amazon, Twitter, Facebook, etc.). But over time, those terms got usurped by other groups in an effort to link their technologies to hot trends in the broader markets.&lt;br /&gt;&lt;br /&gt;Another one that seems to be shifting is PaaS, or Platform as a Service. It’s sort of a funny acronym to say out loud, and people are starting to wonder about its future. But we’re not an industry that likes to stand still, so let’s move things around a little bit. Maybe PaaS is the wrong term, and it really should be “Platform”, since everything in IT should eventually be consumed as a service. I&#39;m already hearing about XaaS (X as a Service), which pretty much means anything as a service, or perhaps everything as a service.&lt;br /&gt;&lt;br /&gt;But not everyone believes that a Platform (or PaaS) should be an entirely structured model. There is lots of VC money being pumped into less structured models for delivering a platform, such as Mesosphere, CoreOS, Docker, Hashicorp, Kismatic, Cloud66, Apache Brooklyn (project) and Engine Yard acquiring OpDemand.&lt;br /&gt;&lt;br /&gt;I’m not sure if “Structured PaaS” and “Unstructured PaaS” are really the right terms to use for this divergence of thinking about how to deliver a Platform, but they work for me. The Unstructured approach seems to appeal more to DIY-focused start-ups, while Structured PaaS (e.g. Cloud Foundry, OpenShift) seems to appeal more to Enterprise markets that expect a lot more “structure” in terms of built-in governance, monitoring/logging, and infrastructure services (e.g. load-balancing, higher availability, etc.).
The unstructured approach can be built in a variety of configurations, aka “batteries included but removable”, whereas the structured model will incorporate more out-of-the-box elements in a more closely configured model.&lt;br /&gt;&lt;br /&gt;Given the inherent application portability that comes with either a container-centric or a PaaS-centric model, both of these are areas that IT professionals and developers should be taking a close look at, especially if they believe in a Hybrid Cloud model – whether that’s Private/Public or Public/Public. It’s also an area that will drive quite a bit of change around the associated operational tools, which are beginning to overlap with the native platform tools for deployment or config management (e.g. CF BOSH or Dockerfiles or Vagrant).&lt;br /&gt;&lt;br /&gt;It’s difficult to tell at this point which approach is likely to gain greater market share. The traditional money would tend to follow the more structured approach, which aligns to Enterprise buying centers. But the unstructured IaaS approach has given AWS a significant market-share lead among developers. Will that unstructured history be any indication of the Platform market? Or will too many of those companies struggle to find viable financial models after taking all that VC capital and eventually just become a feature within a broader structured platform?
&amp;nbsp;I want to hear what you think, all respectful comments are welcome.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/2907008808795362832/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=2907008808795362832' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/2907008808795362832'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/2907008808795362832'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2015/04/structured-or-unstructured-paas.html' title='Structured or Unstructured PaaS??'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-4410453293497985079</id><published>2014-12-26T11:03:00.000-08:00</published><updated>2014-12-26T11:03:01.797-08:00</updated><title type='text'>Some thoughts on Converged and Hyperconverged Infrastructure</title><content type='html'>First we had converged infrastructure and then hyperconverged infrastructure. If the trend continues, next up will be the ultimate hyperconverged infrastructure. As these systems continue to evolve and become more advanced, they are gaining in popularity. Very few people doubt that products from Nutanix or the newly released VMware EVO:RAIL will be huge hits. 
The real question is whether a hyperconverged infrastructure is right for your data center.&lt;br /&gt;&lt;br /&gt;Before converged infrastructure, the IT world had limited choices when deploying x86-based architecture. Rack/tower units and blades were the only choices available. A common feature among these approaches is that they often used external storage to accommodate virtualization -- a feature that was also a downside. Blades had the additional advantage of shared networking, power, and cooling infrastructure. Of course, the downside was that you had a higher density of servers in a single enclosure, which could increase your risk in the event of a failure. Rack servers had the advantage of separating the failure points, at the additional cost of rack space and power, cooling, and networking infrastructure. The middle ground between the two extremes did not exist until the introduction of converged and hyperconverged infrastructure. &lt;br /&gt;&lt;br /&gt;Hyperconverged infrastructure in particular looks to take the best points of blades and rackmount servers and combine them into a better approach. This is a major step forward from most converged infrastructure solutions, which marry traditional blade servers, external storage, and networking into a single consumable &quot;block&quot;.&lt;br /&gt;&lt;br /&gt;Compute: One of the drawbacks to blades was the high density of blades in the chassis. A single chassis failure could affect many hosts, bringing down hundreds or even thousands of virtual machines. A hyperconverged infrastructure often resembles four blade-style servers in a 2U form factor. This reduction of the possible outage footprint can be more appealing to the customer looking for higher availability. You are still gaining the benefit of data center consolidation while preserving a level of outage protection.&lt;br /&gt;&lt;br /&gt;Networking: Connecting everything together presents a challenge in almost all environments.
A rack server&#39;s ports are normally allocated for production traffic and storage. These ports come with a per-port cost, along with management overhead. In blade environments, virtual switching is often required, which adds an additional pair of switches to your environment but removes the trouble of having to cable the blades to them. A converged infrastructure does not include virtual switching and requires connections from each node to the existing switching infrastructure. This approach resembles the rack environment, just without the storage connections.&lt;br /&gt;&lt;br /&gt;Storage: Traditional servers use internal server storage or larger external storage frames. The external storage frame enables the shared storage concept, and virtualization was quick to take advantage of the benefits shared storage enabled, including features like vMotion, load balancing, and high availability. Hyperconverged infrastructure turns this on its head by using localized storage that is shared across the four hosts within the single frame, or even further, across the entire cluster. This gives the advantage of shared storage without the need for costly storage frames or the dedicated fiber infrastructure that often accompanies them. The local disks can be enterprise-class spinning or solid-state drives, giving the hyperconverged infrastructure tremendous IOPS potential. As more converged infrastructure vendors couple their hardware with software-defined storage capabilities, the traditional storage frame designs are showing their age.&lt;br /&gt;&lt;br /&gt;Compute, networking, storage: When you compare the benefits of the hyperconverged infrastructure over the traditional infrastructure, it&#39;s hard not to see all of the positives (and very few negatives). Hyperconverged infrastructure has found that perfect midpoint between large blade enclosures and the single-server approach.
With all of the benefits, where is the downside to going with the hyperconverged approach?&lt;br /&gt;&lt;br /&gt;Design: When you look at your requirements and want to come to a decision about which hardware platform (hyperconverged or not) to go with, a key factor should come to mind: Do you plan to deploy and grow in one- or two-server increments, or are you more likely to deploy in two- to four-node increments?&lt;br /&gt;&lt;br /&gt;Nodes in a converged infrastructure are prepackaged, and knowing how you purchase is a big factor in knowing which is right for you. Converged infrastructure, by its nature, is not designed to support a single compute deployment where you would add additional compute nodes to an existing enclosure, similar to a blade enclosure. The enclosure exists as a prepackaged unit that works with all of the compute nodes in the cluster.&lt;br /&gt;&lt;br /&gt;As businesses trend toward the prepackaged approach, they also need to consider how they will approach working with a converged infrastructure.&lt;br /&gt;&lt;br /&gt;Downside: With all the positives of a converged infrastructure, there are some downsides. The prepackaged nature of converged infrastructure means a higher up-front capital investment. Simple math would suggest it costs four times the price of a single server. However, this is a flawed assumption, because a converged infrastructure also replaces some storage and networking needs. Traditional external storage frames with fiber cabling and switches are expensive.&lt;br /&gt;&lt;br /&gt;The second downside can also be leveraged as an upside, since today&#39;s IT departments are often silos of professionals responsible for defined infrastructure roles with little cross-training or shared responsibility. While virtualization has started to break down these IT staff silos, it is still a work in progress and moving very slowly for many organizations. Infrastructure ownership and the division of groups have deep roots in IT.
Combining these silos is not simply about reducing the hardware pieces; it can also affect staffing levels. The integration of virtualization has caused some jobs to be eliminated and others expanded, but IT continues to survive and evolve, and the same will occur with a hyperconverged infrastructure. Sometimes the introduction of hyperconverged infrastructure can be the catalyst needed to break down these silos, since it often comes with management software that handles servers, storage, and some networking tasks from the same GUI.&lt;br /&gt;&lt;br /&gt;Hyperconverged infrastructure isn&#39;t for everyone right now, with its prepackaged requirements and price. However, its ease of use and integration of storage and networking means it&#39;s only going to grow. The breakdown of separate IT silos is not a stopping point, as it is something that the business world cannot ignore even if traditional IT would like to. Just like virtualization, it is not a question of whether to use hyperconverged infrastructure.
The real question is whether you choose to adopt it soon or find yourself in a constantly shrinking pool that is having trouble keeping up.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/4410453293497985079/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=4410453293497985079' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4410453293497985079'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4410453293497985079'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/12/some-thoughts-on-converged-and.html' title='Some thoughts on Converged and Hyperconverged Infrastructure'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-4753860619931686381</id><published>2014-12-20T10:57:00.000-08:00</published><updated>2014-12-20T10:57:03.836-08:00</updated><title type='text'>I think that the definition of company culture is wrong</title><content type='html'>&lt;br /&gt;Although cool, your company’s free organic food and Uber allowance are not “culture.” The fact that everyone, including the CEO, comes to work clad in jeans and a hoodie is not “culture” either.&lt;br /&gt;&lt;br /&gt;When we talk about culture, too often we talk about the wildly luxurious perks, or we celebrate employees’ traits or behaviors that have little to do with their 
actual performance at work, as if those were the reasons why the tech sector has been so successful.&lt;br /&gt;&lt;br /&gt;Something like, “Company X is cool because they give everyone a free puppy and they’re known to be rockstar engineers and they all kite surf,” is really only describing a company at a very surface level, if at all.&lt;br /&gt;&lt;br /&gt;Unfortunately, celebrating, or at minimum acknowledging, that definition of culture is now the cost of doing business, especially when you’re hiring talented engineers, fresh out of school. I can imagine that when you’re looking for your first job, a doggy-day-care subsidy might seem more like a concrete, positive reason to work at a given company, versus transparency or continuous improvement.&lt;br /&gt;&lt;br /&gt;Company culture should be about the business&lt;br /&gt;&lt;br /&gt;Here’s how I would define company culture:&lt;br /&gt;&lt;br /&gt;&lt;b&gt;&lt;i&gt;It’s the set of values, traits, and systems that are deliberate, obvious and inherent in a company, existing to make the company successful.&lt;/i&gt;&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;When you define (and celebrate!) company culture this way, it becomes obvious why culture should be important to an organization, why culture eats perks for breakfast. Culture is the way the work gets done, the decisions get made, the people get hired. If you’re not focused on culture as a holistic mechanism to build the company’s success, then you’re coming at it from a potentially bifurcated point of view.&lt;br /&gt;&lt;br /&gt;Call it culture, beliefs, values, organizing principles, whatever. It can be real and right now, but also aspirational. But no matter what, it has to come back to driving company success.&lt;br /&gt;&lt;br /&gt;Let’s take a value like distributed decision making: trusting and empowering the experts (not necessarily the execs) in a company to make decisions in their domain. 
A company might value distributed decision making because it increases the speed of business; employees can move faster and do more if they don’t have to wait on executive approval, and if the decisions are made by the people with the most information.&lt;br /&gt;&lt;br /&gt;Sounds great, right? Is that kind of empowerment part of your company’s culture? Here’s how you can tell:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;When you hire someone, do you ask questions to see if they have decision making skills, if they’re able to make decisions without looking upwards?&lt;/li&gt;&lt;li&gt;Do you get rewarded through formal rewards programs for having made good decisions?&lt;/li&gt;&lt;li&gt;Are negotiation and evaluation skills critical to your career development?&lt;/li&gt;&lt;li&gt;When you finish big projects, do you do a post-mortem that includes assessing whether or not you made the right decisions?&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Basically, your company’s culture should be baked into the org structure, the people systems (hiring, firing, performance management), and especially into your business plan.&lt;br /&gt;&lt;br /&gt;Let’s get to work on the real definition of company culture&lt;br /&gt;&lt;br /&gt;And I know, this process of inculcating a true definition of culture sounds like a lot of work. No organization is perfect at living up to its values. But rather than spend hours and hours wordsmithing the perfect vision for your company’s culture, the real work is finding and bridging the gap between where your culture is today and where you want it to be.
Aligning the company culture with the success of your business takes it from a nice-to-have to an imperative - a way to achieve success, not just market it.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/4753860619931686381/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=4753860619931686381' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4753860619931686381'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4753860619931686381'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/12/i-think-that-definition-of-company.html' title='I think that the definition of company culture is wrong'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-3763455211638293262</id><published>2014-05-17T09:59:00.003-07:00</published><updated>2014-05-17T09:59:46.568-07:00</updated><title type='text'>The Task of Democratizing Big Data </title><content type='html'>Companies that fail to take advantage of the opportunities presented by &quot;big data&quot; management and analytics technologies can expect to fall behind the competition and possibly go out of business altogether.&lt;br /&gt;&lt;br /&gt;The world is just getting started with big data technologies like Hadoop and MapReduce, and several obstacles – such as a dearth of skills and old-fashioned 
thinking about data -- continue to stand in the way of their adoption.&lt;br /&gt;&lt;br /&gt;But, companies that embrace the concept now are the ones who will lead the way in the not-too-distant future when entry barriers are not so high. Companies that exploit big data will gain the ability to make more informed decisions about the future and will ultimately bring in more money than those that do not.&lt;br /&gt;&lt;br /&gt;The phrase &quot;big data&quot; is most often used to refer to the massive amounts of both structured and unstructured information being generated by machines, social media sites and mobile devices today. The phrase is also used to refer to the storage, management and analytical technologies used to draw valuable business insights from such information. Some of the more well-known big data management technologies include the Apache Hadoop Distributed File System, MapReduce, Hive, Pig and Mahout.&lt;br /&gt;&lt;br /&gt;There is certainly no shortage of hype around big data management technologies, but actual adoption levels remain low for two main reasons. First, Hadoop and other big data technologies are extremely difficult to use and the right skill sets are in short supply. Today, organizations often hire PhDs to handle the analytics side of the big data equation, and those well-educated individuals demand high salaries.&lt;br /&gt;&lt;br /&gt;The skills used to manage, deploy and monitor Hadoop are not necessarily the same skills that an Oracle DBA might have.
For instance, if you want to be a data scientist on the analytics side, you need to know how to write MapReduce jobs, which is not the same as writing SQL queries by any means.&lt;br /&gt;&lt;br /&gt;The second major obstacle standing in the way of increased adoption centers on the notion that most companies currently lack the mindset required to get the most out of big data.&lt;br /&gt;&lt;br /&gt;Most large companies today are accustomed to gaining business insights through a combination of data warehousing and business intelligence (BI) reporting technologies. But, the BI/data warehousing model is about using data to examine the past, whereas big data technologies are about using data to predict the future. Taking advantage of big data requires a shift, a very basic shift in some organizations, to actually trusting data and actually going where the data leads you. Big data is about looking forward, making predictions and taking action.&lt;br /&gt;&lt;br /&gt;As with all emerging technologies, big data management and analytics will eventually become more accessible to the masses -- or democratized -- over time. But some important things need to happen first.&lt;br /&gt;&lt;br /&gt;For starters, new tools and technologies will be needed to reduce the complexity associated with working with big data technologies. Several companies -- like Talend, Hortonworks and Cloudera -- are working to reduce big data difficulties right now. But, more innovation is needed to make it easier for users to deploy, administer and secure Hadoop clusters and create integrations between processes and data sources.&lt;br /&gt;&lt;br /&gt;Right now you need some pretty sophisticated skills around MapReduce and other languages, or SAS and others to be a top line data scientist.
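To make that contrast concrete, here is a toy sketch of the MapReduce programming model in plain Python. This is an illustration of the map/shuffle/reduce idea only, not Hadoop code; the input records and function names are made up:

```python
from collections import defaultdict

# Toy illustration of the MapReduce model: counting word frequencies.
# In SQL this is a one-line GROUP BY; in MapReduce the programmer
# writes explicit map and reduce functions instead.

def map_phase(records):
    # Emit a (key, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group all values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the emitted counts for each key.
    return {key: sum(values) for key, values in grouped.items()}

records = ["big data big insights", "big plans"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts["big"])  # 3
```

The same result in SQL would be a single SELECT word, COUNT(*) ... GROUP BY word, which is exactly the gap in skill sets described above.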
We need tools that can abstract away some of that expertise so that you don&#39;t need to have a PhD to really explore big data.&lt;br /&gt;&lt;br /&gt;The task of democratizing big data will also require a great deal of user training and education on topics like big data infrastructure, deploying and managing Hadoop, integration and scheduling MapReduce jobs. We really need to tackle the problem from both ends. One is to make the tools and technologies easier to use. But we also have to invest in training and education resources to help DBAs and business analysts up their game and operate in the big data world.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/3763455211638293262/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=3763455211638293262' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3763455211638293262'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3763455211638293262'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/05/the-task-of-democratizing-big-data.html' title='The Task of Democratizing Big Data '/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-5923124809530365201</id><published>2014-05-12T09:21:00.000-07:00</published><updated>2014-05-12T09:21:49.554-07:00</updated><title type='text'>Modernizing Your 
Backups</title><content type='html'>This week, I&#39;d like to spend a little time talking about backup modernization, or as I prefer to call it, data protection modernization. The process we use for traditional backups hasn&#39;t really changed much in 20 or 30 years. &amp;nbsp;We do a full backup once per week, and take some kind of incremental backup of our data every day in between. These backups are always copied to some other storage mechanism like tape or, now, disk, and a retention is attached to the backups that defines how long we need to keep that backup. &amp;nbsp;Those retentions are important, since they define things like how much dedicated backup disk we need, or how many tapes we need to have on hand, etc. &amp;nbsp;They also play an important role later on when/if we decide to change the way we do backups.&lt;br /&gt;&lt;br /&gt;But first, let&#39;s talk about the fact that traditional backup processes are really beginning to become more and more problematic. &amp;nbsp;Why? &amp;nbsp;There are actually a number of reasons. First, and perhaps the most obvious, is that data sets are becoming larger and larger every day. &amp;nbsp;This means that either the backups are taking longer and longer to complete, or more and more backup infrastructure needs to be put in place. &amp;nbsp;Dedicated 10GbE connections, backup to disk, and more and faster tape drives all need to be put into place to keep up. Yet it&#39;s a losing battle. The data sets just keep getting bigger. For example, having a NAS array today that holds a petabyte of data isn&#39;t terribly unusual, as it would have been not all that long ago. &amp;nbsp;These bigger and bigger data sets are now beginning to outstrip the ability of the storage system to send data to the backup system in a timely manner. Things like NDMP are just not able to keep up with these very large data sets.
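To see how retentions drive infrastructure sizing, here is a rough back-of-the-envelope sketch. The numbers (a weekly full, six daily incrementals, a 2% daily change rate) are illustrative assumptions, not figures from any real environment:

```python
# Back-of-the-envelope sketch of why retentions drive backup capacity.
# All inputs are illustrative assumptions, not data from a real site.

def backup_capacity_tb(full_tb, daily_change_rate, retention_weeks):
    # One full per week plus six daily incrementals, all kept for the
    # whole retention period.
    weekly_full = full_tb
    weekly_incrementals = 6 * full_tb * daily_change_rate
    return retention_weeks * (weekly_full + weekly_incrementals)

# A 100 TB data set, 2% daily change, 4-week retention:
print(round(backup_capacity_tb(100, 0.02, 4), 1))  # 448.0
```

Even with a short four-week retention, the backup target has to hold several times the primary data set, which is exactly why those retention decisions matter.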
So data set size is certainly one of the more pressing reasons that people are beginning to look into modernizing their backups.&lt;br /&gt;&lt;br /&gt;Another reason that people are beginning to look at modernizing their backup processes is that backup windows are getting smaller and smaller, and in some cases, closing completely. Back in the day, we had all night to run backups. Yes, of course we had to dodge in-between the batch jobs, but that was easy enough to do when you had 12 or more hours to work with. &amp;nbsp;Those days are pretty much over. Today you are lucky to get any time at all to back up the data, and as I said above, in some cases, you really don&#39;t have a window at all.&lt;br /&gt;&lt;br /&gt;Finally, Recovery Time Objectives (RTOs) are getting shorter and shorter and the Recovery Point Objectives (RPOs) are getting smaller and smaller. &amp;nbsp;What this means for the backup administrator is that they must take more backups, and must be able to restore from those backups more quickly.&lt;br /&gt;&lt;br /&gt;So, what to do? &amp;nbsp;The first step that many of my customers have taken is to start to include snapshots as part of the backup process. &amp;nbsp;This addresses the issue of RTOs and RPOs, since you can take those snapshots quickly, and you can recover from them quickly. &amp;nbsp;You can also take multiple snapshots per day, so you have a much more fine-grained ability to recover that data to a particular point in time. However, most people continue to do their regular backups as well, based on the premise that snapshots aren&#39;t backups, since they don&#39;t make a full copy of the data to another storage medium. However, for some customers it&#39;s beginning to become so problematic to do those traditional fulls and incrementals that they are revisiting this position.
Specifically, if they were to have a problem with their storage array, such that they lost data and couldn&#39;t recover from a snapshot, isn&#39;t that the definition of a disaster in the data center? &amp;nbsp;If you accept that premise, then you can start to consider a combination of snapshots and, say, data replication for disaster recovery, as a viable, complete backup solution, and drop traditional backups entirely.&lt;br /&gt;&lt;br /&gt;A move to nothing but snapshots and replication as your data protection mechanism solves a number of issues. &amp;nbsp;It addresses the ever-growing backup infrastructure, for example, by leveraging space you already have on your storage array, and a DR plan (replication) you may very well already have in place. Admittedly, for some longer retentions it might mean you need a bit more disk space in your array, but because of the nature of snapshots it&#39;s probably the same or less space than you would need for disk-based backups. If you are already backing up to an external backup-to-disk array like a Data Domain, you can repurpose your DD budget and add the space you need to your storage array to hold all of the snapshots you need/want.&lt;br /&gt;&lt;br /&gt;Another method now beginning to become popular as a way to modernize your backups is to leverage change block tracking. &amp;nbsp;This is a mechanism in which the backup application, the storage array, or the hypervisor keeps track of the specific blocks that have changed, and the backup application only &quot;backs up&quot; these changed blocks. &amp;nbsp;This can reduce the amount of backup traffic from the storage array to the backup infrastructure significantly, thus addressing the issue of the ever-growing backup data sets.
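The change block tracking idea can be sketched in a few lines. This is a simplified illustration of the mechanism only, not any vendor's actual implementation; the class and method names are made up:

```python
# Minimal sketch of change block tracking (CBT): only blocks written
# since the last backup get copied. Illustrative only, not any
# vendor's real implementation.

class ChangeTrackedVolume:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00"] * num_blocks
        self.dirty = set()          # block numbers written since last backup

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)    # the tracker records every write

    def incremental_backup(self):
        # Copy only the changed blocks, then reset the dirty map.
        changed = {n: self.blocks[n] for n in sorted(self.dirty)}
        self.dirty.clear()
        return changed

vol = ChangeTrackedVolume(1000)
vol.write(7, b"aa")
vol.write(42, b"bb")
backup = vol.incremental_backup()
print(len(backup))   # 2 blocks sent, instead of scanning all 1000
```

The key point is that the backup traffic is proportional to the change rate, not to the size of the data set.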
&amp;nbsp;If you couple this with CDP (Continuous Data Protection) or near-CDP functionality, it will also address the RPO issues, and since recovery from this kind of backup often means sending less data back to the storage array/application it can also address the RTO issues.&lt;br /&gt;&lt;br /&gt;However, since you are probably already doing some kind of backup, most likely a traditional backup, the question becomes, how do I get from my current traditional backups to one of these more modern backup techniques? &amp;nbsp;While on the surface it may seem simple enough, there are a number of issues to consider. First, you need to consider your existing backups. Those backups have a retention, and so you need to keep your existing backup software/mechanism in place, at least until the retentions on those existing backups have expired. One question that often crops up in this regard is what if I have backups with very long retentions, like 7 years? &amp;nbsp;Does this mean I need to keep my existing backup mechanism in place for 7 years? Well, that&#39;s certainly one way to handle the problem. &amp;nbsp;One way to mitigate the issue a little, if you can, is to P2V (physical-to-virtual) your existing backup servers once you&#39;ve switched all your backups to the new method. &amp;nbsp;You can then shut down those VMs, and only spin them up if you need to get back at that old data for some reason. &amp;nbsp;Another way to address the issue is to recognize that backups with long retentions are often not backups at all, they are actually archives, and they probably shouldn&#39;t have been backups in the first place. This is the perfect opportunity to start a dialog with your customers about the difference between backup and archive, and getting an archive mechanism in place to handle that data. The difference between archive and backup is a topic near and dear to my heart, but it&#39;s also beyond the scope of this posting.
Just keep this in mind when you go to do your backup modernization planning.&lt;br /&gt;&lt;br /&gt;The other issue that you should consider when planning to modernize your backups is management. &amp;nbsp;Much of the utility of today&#39;s backup software such as CommVault, NetBackup, and TSM is around managing the backups. &amp;nbsp;Scheduling them, monitoring that they complete successfully, and reporting on them, both from an administrative perspective, but also up the management tree and to your customers, so that everyone is assured that their data is protected. &amp;nbsp;Many people think that moving to a new, more modern backup process means getting rid of these tried and true software programs. However, there may be an advantage to keeping them in place. &amp;nbsp;For example, that reporting mechanism that is so important to your business then also stays in place. &amp;nbsp;Considering that many snapshots, for example, are managed by software provided by the array manufacturer, and often only manage the snapshots on one array at a time, you could end up in a situation where your backups are modernized, but your backup management has taken a step back in time. This is also true if you bring on several different techniques to back up your data. &amp;nbsp;For example, I know of customers who use snapshots and replication for their databases, and then use something like Veeam to back up their virtual infrastructure. &amp;nbsp;This has the potential to create an even bigger management/administrative/reporting headache.&lt;br /&gt;&lt;br /&gt;So, if you can leverage your current backup software to manage your snapshots, and/or perform CDP-like functions via change block tracking, then I believe that you&#39;ve hit on the best of both worlds. &amp;nbsp;The good news is that most of the backup software vendors have recognized this, and are moving aggressively to add these kinds of features into their products.
Admittedly, some are further ahead in some areas than others, but it&#39;s not like you have to change overnight, so implementing the features as they appear in your backup software isn&#39;t necessarily a bad thing.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/5923124809530365201/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=5923124809530365201' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5923124809530365201'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5923124809530365201'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/05/modernizing-your-backups.html' title='Modernizing Your Backups'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-4569726700375116026</id><published>2014-05-03T09:16:00.000-07:00</published><updated>2014-05-03T09:18:15.266-07:00</updated><title type='text'>It takes courage to say &quot;yes&quot;</title><content type='html'>Today I want to talk about something a little different. &amp;nbsp;While my posts on here &amp;nbsp;have, in the past, all been technical, some of us are also in leadership roles. 
&amp;nbsp;So, I think that occasionally I might share some of my near 30 years of experience in that regard as well.&lt;br /&gt;&lt;br /&gt;What I want to talk about in this post is that, from a leadership point of view, it really does take courage to say &quot;yes&quot;, especially to a new idea. &amp;nbsp;“Definitely not,” is quicker, simpler, and easier than saying, “Tell me more.” But, a quick “no” devalues and deflates teammates.&lt;br /&gt;&lt;br /&gt;Some of the reasons that leaders are constantly saying &quot;no&quot; include:&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;They think that it makes them look weak when they say &quot;yes&quot; too often.&lt;/li&gt;&lt;li&gt;They prefer the &quot;safety&quot; of the status quo. &amp;nbsp;This is another way of saying they are afraid of change, or at least it makes them uncomfortable.&lt;/li&gt;&lt;li&gt;They&amp;nbsp;haven’t clearly articulated mission and vision. Off-the-wall suggestions indicate the people in the ranks don’t see the big picture.&lt;/li&gt;&lt;/ol&gt;There are some dangers to offhanded yeses, however. &amp;nbsp;Offhanded yeses can dilute your resources, divide energy, and distract focus. &amp;nbsp;So, what do good leaders do? &amp;nbsp;They explore &quot;yes&quot;. I know that takes time, but I believe that the time spent is a good investment.&lt;br /&gt;&lt;br /&gt;Here are 8 questions to yes:&lt;br /&gt;&lt;ol&gt;&lt;li&gt;What are you trying to accomplish?&lt;/li&gt;&lt;li&gt;How does this align with mission or vision?&lt;/li&gt;&lt;li&gt;Who does this idea impact? How?&lt;/li&gt;&lt;li&gt;How will this impact what we are currently doing?&lt;/li&gt;&lt;li&gt;What resources are required to pull this off?&lt;/li&gt;&lt;li&gt;How does this move us toward simplicity and clarity?
But, remember new ideas often feel complex at first.&lt;/li&gt;&lt;li&gt;Is a test-run appropriate?&lt;/li&gt;&lt;li&gt;How will we determine success or failure?&lt;/li&gt;&lt;/ol&gt;Leaders who say yes end up doing what others want and that’s a good thing. &amp;nbsp;Remember too that courageous leaders are willing to risk being wrong sometimes in order to be right most of the time. They know that decisions move the organization forward. They know that a lack of a decision is in fact a decision; it’s a decision to do nothing and that’s a decision that is almost always wrong and at times catastrophic.&lt;br /&gt;&lt;br /&gt;So, are you a leader that says &quot;yes&quot;?&lt;br /&gt;&lt;br /&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/4569726700375116026/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=4569726700375116026' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4569726700375116026'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4569726700375116026'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/05/it-takes-courage-to-say-yes.html' title='It takes courage to say &quot;yes&quot;'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-4023431890831331030</id><published>2014-04-23T16:42:00.000-07:00</published><updated>2014-04-23T16:42:47.870-07:00</updated><title type='text'>Openstack Icehouse release, a first look</title><content type='html'>On April 17, the OpenStack Foundation announced the availability of the ninth release of OpenStack, codenamed Icehouse. The release boasts 350 new features, 2,902 bug fixes and contributions from over 1200 contributors.&lt;br /&gt;&lt;br /&gt;Icehouse focuses on maturity and stability as can be seen by its attention to continuous integration (CI) systems, which featured the testing of 53 third party hardware and software systems on OpenStack Icehouse.&lt;br /&gt;&lt;br /&gt;The hallmark of the Icehouse release consists of its support for rolling upgrades in OpenStack Compute Nova. With Icehouse&#39;s support for rolling upgrades, VMs no longer need to be shut down in order to install upgrades. Icehouse &quot;enables deployers to upgrade controller infrastructure first, and subsequently upgrade individual compute nodes without requiring downtime of the entire cloud to complete.&quot; As a result, upgrades can be completed with decreased system downtime, thereby rendering OpenStack significantly more appealing to enterprise customers. &amp;nbsp;There are also some added functions for KVM, Hyper-V, &amp;nbsp;VMware, and XenServer which are too numerous to go into here. See the Openstack Icehouse release notes for more details.&lt;br /&gt;&lt;br /&gt;Icehouse also features a &quot;discoverability&quot; enhancement to OpenStack Swift that allows admins to obtain data about which features are supported in a specific cluster by means of an API call. Swift now also supports system-level metadata on accounts and containers. 
System metadata provides a means to store internal custom metadata with associated Swift resources in a safe and secure fashion without actually having to plumb custom metadata through the core swift servers. The new gatekeeper middleware prevents this system metadata from leaking into the request or being set by a client.&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;On the networking front, OpenStack now contains new drivers and support for the IBM SDN-VE, Nuage, OneConvergence and OpenDaylight software defined networking protocols. &amp;nbsp;It also supports &amp;nbsp;new load balancing as a service drivers from Embane, NetScaler, and Radware as well as a new VPN driver that supports Cisco CSR.&lt;br /&gt;&lt;br /&gt;Meanwhile, OpenStack Keystone identity management allows users to leverage federated authentication for &quot;multiple identity providers&quot; such that customers can now use the same authentication credentials for public and private OpenStack clouds.&amp;nbsp;The assignments backend (the source of authorization data) has now been completely separated from the identity backend (the source of authentication data). This means that you can now back your deployment&#39;s identity data to LDAP, and your authorization data to SQL, for example.&lt;br /&gt;&lt;br /&gt;The Openstack Dashboard (Horizon) has support for managing a number of new features. 
&lt;br /&gt;&lt;br /&gt;Horizon Nova support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Live Migration Support&lt;/li&gt;&lt;li&gt;HyperV console support&lt;/li&gt;&lt;li&gt;Disk config option support&lt;/li&gt;&lt;li&gt;Improved support for managing host aggregates and availability zones.&lt;/li&gt;&lt;li&gt;Support for easily setting flavor extra specs&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Horizon Cinder support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Role based access support for Cinder views&lt;/li&gt;&lt;li&gt;v2 API support&lt;/li&gt;&lt;li&gt;Extend volume support&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Horizon Neutron support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Router Rules Support -- displays router rules on routers when returned by neutron&lt;/li&gt;&lt;/ul&gt;Horizon Swift support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Support for creating public containers and providing links to those containers&lt;/li&gt;&lt;li&gt;Support for explicit creation of pseudo directories&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Horizon Heat support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Ability to update an existing stack&lt;/li&gt;&lt;li&gt;Template validation&lt;/li&gt;&lt;li&gt;Support for adding environment files&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;Horizon Ceilometer support now includes:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Administrators can now view daily usage reports per project across services.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;In total, Icehouse constitutes an impressive release that focuses on improving existing functionality as opposed to deploying a slew of Beta-level functionalities.
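As a small illustration of the Swift discoverability feature mentioned earlier: a client asks the cluster what it supports with a single API call (a GET against the proxy's /info endpoint). The sketch below just parses a hand-made sample response; the capability values shown are illustrative, not captured from a live cluster:

```python
import json

# Sketch of consuming Swift's capability-discovery endpoint. A real
# client would GET the /info URL on the proxy server; the JSON below
# is a hand-made illustrative response, not from a live cluster.

sample_response = json.dumps({
    "swift": {"version": "1.13.0", "max_object_name_length": 1024},
    "slo": {"max_manifest_segments": 1000},
    "tempurl": {"methods": ["GET", "HEAD", "PUT"]},
})

def supported_capabilities(info_json):
    # Each top-level key names a capability the cluster advertises.
    info = json.loads(info_json)
    return sorted(info.keys())

print(supported_capabilities(sample_response))
# ['slo', 'swift', 'tempurl']
```

This is what lets tooling adapt to a specific cluster instead of hard-coding which middleware is deployed.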
OpenStack&#39;s press release claims &quot;the voice of the user&quot; is reflected in Icehouse but the real defining feature of this release is a tighter integration of OpenStack&#39;s computing, storage, networking, identity and orchestration functionality.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/4023431890831331030/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=4023431890831331030' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4023431890831331030'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4023431890831331030'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/04/on-april-17-openstack-foundation.html' title='Openstack Icehouse release, a first look'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-1609853390398772829</id><published>2014-04-12T09:11:00.000-07:00</published><updated>2014-04-12T09:11:06.175-07:00</updated><title type='text'>Simplivity vs. 
Nutanix</title><content type='html'>&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;At a high level both of these products provide the same service(s) for the user. &amp;nbsp;Certainly the two “leap-frog” each other in terms of features, but at this point in time, they are very close. Both of them are “Hyper-converged VMware appliances”, though Nutanix is able to support other hypervisors such as Hyper-V and KVM as well as VMware. &amp;nbsp;Simplivity will allow large customers to utilize their own hardware, however, the customer must buy the Simplivity software as well as the&amp;nbsp;OmniCube Accelerator Card for each server since the card is what does all of the writes in the Simplivity architecture.&amp;nbsp;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;From an architectural perspective both systems provide a “hyper-converged” solution made up of X86 servers with internal storage which are networked/clustered together. You grow the overall system by simply adding additional nodes to the cluster. 
&amp;nbsp;As of this writing, Nutanix offers a wider range of node options, giving the user more flexibility in how the cluster is grown. &amp;nbsp;Both systems provide for multiple tiers of storage, including SSDs and HDDs, and will automatically move hot data between tiers. &amp;nbsp; It should be noted that Nutanix offers an interesting feature that Simplivity does not. &amp;nbsp;Nutanix has the concept of “data locality”. &amp;nbsp;With data locality, when you vMotion a VM to a different node in the cluster, Nutanix will move the datastore(s) for that VM to the same node (assuming there is space). &amp;nbsp;This movement is done in the background over time so as not to impact performance.&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;As of the latest versions, both systems provide deduplication of data natively built into the system. There is some discussion about which method of deduplication is “better”; however, in the end I believe that both will provide the user with good deduplication results. 
Both systems also provide compression of data at the lower tiers.&amp;nbsp;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;Again, in regards to backups, replication, DR, etc., both systems provide very similar features. Both systems allow for replication of deduplicated/compressed data, thus providing “WAN optimization”; both systems provide for snapshots; and both systems replicate data within the cluster for data durability. Simplivity is able to provide one feature that Nutanix is not currently able to support, and that is replication to the “cloud”. 
&amp;nbsp;Specifically, Simplivity provides its software as a VM image running in AWS, which can be federated to an OmniCube running in the user’s data center.&amp;nbsp;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;In regards to management, both systems provide a GUI management environment that allows the user to manage the entire footprint from a single pane of glass. &amp;nbsp;Again, how this is implemented is somewhat different. Nutanix provides a fairly traditional management GUI based on HTML5 that can be used to manage the Nutanix system. Simplivity takes a different approach, utilizing a vCenter plug-in to manage the Simplivity OmniCube. 
&amp;nbsp;This ties Simplivity to VMware, and will make it more difficult to support other hypervisors.&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;span style=&quot;color: black; font-family: Calibri; font-size: x-small;&quot;&gt;&lt;span style=&quot;font-family: Calibri, sans-serif; font-size: 10.5pt;&quot;&gt;In conclusion, I believe that the two products would provide effectively the same capabilities for most customers, with the single exception of the AWS support that Simplivity provides. This support would allow customers to create a hybrid cloud infrastructure that spans the customer’s private cloud and the AWS public cloud.&amp;nbsp;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;font-family: Calibri;&quot;&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: &#39;Times New Roman&#39;, serif; font-size: 12pt; margin: 0in 0in 0.0001pt;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/1609853390398772829/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=1609853390398772829' title='7 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1609853390398772829'/><link rel='self' type='application/atom+xml' 
href='http://www.blogger.com/feeds/720961704184714048/posts/default/1609853390398772829'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/04/simplivity-vs-nutanix.html' title='Simplivity vs. Nutanix'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>7</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-1253360095954975520</id><published>2014-04-02T13:37:00.003-07:00</published><updated>2014-04-02T13:37:59.014-07:00</updated><title type='text'>Is 2014 the Year of Object Based Storage?</title><content type='html'>&lt;div class=&quot;MsoNormal&quot;&gt;Object based storage has actually been around for a long time.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Some implementations started to appear as early as 1996, and there have been different vendors offering the technology ever since.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;However, it has never experienced the “explosion” in usage that some were predicting it would.&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;At least until now.&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;IDC said the OBS market is still in its infancy, but it offers a promising future for organizations trying to balance scale, complexity, and costs. 
The leaders include Quantum, Amplidata, Cleversafe, Data Direct Networks, EMC, and Scality, with other notables such as Caringo, Cloudian, Hitachi Data Systems, NetApp, Basho, Huawei, NEC, and Tarmin.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Last year, OBS solutions were expected to account for nearly 37% of file-and-OBS (FOBS) market revenues, with the overall FOBS market projected to be worth $23 billion and to reach $38 billion in 2017, according to IDC. At a compound annual growth rate (CAGR) of 24.5% from 2012 to 2017, scale-out FOBS – delivered either as software, virtual storage appliances, hardware appliances, or self-built for delivering cloud-based offerings – is taking advantage of the evolution of storage toward being software-based.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;IDC predicts that scale-up solutions, including unitary file servers and scale-up appliances and gateways, will fall on hard times throughout the forecast period, experiencing sluggish growth through 2016 before beginning to decline in 2017.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;IDC said emerging OBS technologies include Compustorage (hyperconverged), the Seagate Open Storage platform, and Intel’s efforts with OpenStack. The revenue of all OBS vendors combined is relatively small right now (but expected to grow rapidly), with a total addressable market (TAM) expected to be in the billions. &lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp;&lt;/span&gt;Ashish Nadkarni, Research Director, Storage Systems, IDC, noted: 
“Vendors like EMC and NetApp have not ignored this market – if anything they have laid the groundwork for it.”&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;One of the challenges that IT continues to confront is the growth of unstructured data.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This growth creates challenges around data protection, as well as for users when they go to find their data.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Object based storage addresses both of these issues. Use of technologies like erasure codes allows OBS to store data in a way that is both highly durable and geographically distributed.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This eliminates the need to create multiple full copies of the data in multiple locations, as you would have to do with traditional NAS arrays. So, rather than having to deploy storage systems that comprise 300% of your actual data size, you can utilize as little as 50%. &lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;In addition, because many object storage systems are software solutions that can be run on nodes using low-cost server hardware and high-capacity disk drives, they can cost significantly less than proprietary NAS systems. 
Throw in better data protection, features that enhance search performance, and efficient data tiering, and it’s easy to see why OBS is catching on.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt; &lt;!--[if gte mso 
10]&gt;&lt;style&gt; /* Style Definitions */ table.MsoNormalTable  {mso-style-name:&quot;Table Normal&quot;;  mso-tstyle-rowband-size:0;  mso-tstyle-colband-size:0;  mso-style-noshow:yes;  mso-style-priority:99;  mso-style-parent:&quot;&quot;;  mso-padding-alt:0in 5.4pt 0in 5.4pt;  mso-para-margin:0in;  mso-para-margin-bottom:.0001pt;  mso-pagination:widow-orphan;  font-size:12.0pt;  font-family:Cambria;  mso-ascii-font-family:Cambria;  mso-ascii-theme-font:minor-latin;  mso-hansi-font-family:Cambria;  mso-hansi-theme-font:minor-latin;} &lt;/style&gt;&lt;![endif]--&gt;   &lt;!--StartFragment--&gt;                               &lt;!--EndFragment--&gt;&lt;br /&gt;&lt;div class=&quot;MsoNormal&quot;&gt;So, what’s the downside?&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;There are a couple.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;First, it’s performance.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;OBS typically cannot match the performance of traditional NAS arrays. With object retrieval latency in the 30-50ms range, applications that require high performance are going to have a problem with OBS.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This is one of the reasons that AWS recommends that you put data on Elastic Block Storage if you need good performance, as opposed to using S3.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;The other challenge is that applications today are often not written to access data on OBS. 
Therefore, changes to applications must be made, or the OBS storage must be accessed through a NAS gateway. Introducing a NAS gateway, however, eliminates the flat namespace, as well as the ability to attach meaningful metadata to your files/objects. This reduces the utility of OBS significantly. However, the use of NAS gateways as an interim solution may simply be a necessity if OBS is to take over the NAS space.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/1253360095954975520/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=1253360095954975520' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1253360095954975520'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1253360095954975520'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/04/is-2014-year-of-object-based-storage.html' title='Is 2014 the Year of Object Based Storage?'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-6087609535004615216</id><published>2014-03-08T11:21:00.000-08:00</published><updated>2014-03-08T11:21:40.321-08:00</updated><title type='text'>Backing Up Openstack</title><content type='html'>&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Today, I want to talk a little about backups. Specifically, how to back up your OpenStack environment. Not only how to back up the contents of your OpenStack environment, but how to back up OpenStack itself.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;The thing to keep in mind here is that OpenStack is based around a modular architecture in which a number of different components can be combined to offer cloud services on standardized hardware. These modules are freely available under the Apache license.&lt;/div&gt;&lt;h3 style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 1.3em; line-height: 1; margin: 0px 0px 5px;&quot;&gt;Backing up OpenStack&lt;/h3&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Backup solutions are typically developed with either operating systems or applications in mind. OpenStack is neither. OpenStack is merely a collection of components that can be combined to provide various types of cloud services. 
As such, OpenStack administrators must consider what needs to be backed up and how to perform the backup.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;OpenStack backups should focus on backing up configuration files and databases for OpenStack itself. The configuration files can be backed up at the file level since OpenStack is just software running on a Linux machine.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;The /etc/nova and /var/lib/nova folders should be backed up on both the cloud controller and the compute nodes. However, you must exclude the /var/lib/nova/instances folder on any compute nodes. This folder contains live KVM instances. Restoring a backup that was made of a live KVM instance will typically result in an unbootable image.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;One of the most important folders to include in your backup is /etc/swift/. This folder contains the ring files, ring builder files, and Swift configuration files. If the contents of this folder are lost, the cluster data will become inaccessible. 
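The file-level backups described so far (the nova folders on each node, minus the live instances, plus /etc/swift/) can be sketched as a short script. This is a minimal illustration and not an official OpenStack procedure; the /tmp/demo tree below is an invented stand-in for a node's filesystem so the sketch is self-contained:

```shell
# Minimal sketch: archive OpenStack config folders at the file level,
# excluding live KVM instance images (which rarely restore bootable).
# The /tmp/demo tree simulates a node's layout purely for illustration.
set -e
mkdir -p /tmp/demo/etc/nova /tmp/demo/etc/swift /tmp/demo/var/lib/nova/instances
echo '[DEFAULT]'     > /tmp/demo/etc/nova/nova.conf
echo 'ring-data'     > /tmp/demo/etc/swift/object.ring.gz
echo 'live-instance' > /tmp/demo/var/lib/nova/instances/disk

BACKUP=/tmp/openstack-config-backup.tar.gz
# --exclude matches the member name, so the instances folder
# and everything under it is skipped.
tar --exclude='var/lib/nova/instances' \
    -czf "$BACKUP" -C /tmp/demo etc/nova etc/swift var/lib/nova

tar -tzf "$BACKUP"
```

On a real node you would point tar at the live paths instead (for example `-C / etc/nova etc/swift var/lib/nova`) and ship the archive off-host.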
As such, it is a good idea to copy the contents of this folder to each storage node so that multiple backups exist within your storage cluster.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Some other folders that contain configuration data and should be included in your backups:&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;/etc/keystone&lt;br /&gt;/var/log/keystone&lt;br /&gt;/etc/cinder&lt;br /&gt;/var/log/cinder&lt;br /&gt;/etc/glance&lt;br /&gt;/var/log/glance&lt;br /&gt;/var/lib/glance&lt;br /&gt;/var/lib/glance/images&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;In addition to the folders listed above, there are also several databases that need to be backed up. Typically the databases will reside on the cloud controller, which doubles as a MySQL server. This server hosts databases related to the Keystone, Cinder, Nova, and Glance components of OpenStack.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;You can back up these databases by using the mysqldump command. The command requires you to specify the names of the databases that you want to back up as well as an output file. 
For example, if you wanted to back up the keystone database to a file named KeystoneBackup, you could do so with the following command:&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;# mysqldump --opt keystone &amp;gt; KeystoneBackup.sql&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;As a shortcut, you can substitute the --all-databases parameter in place of the database name. For instance, if you wanted to back up all of the databases to a file named MyCloud, you could use the following command:&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;# mysqldump --opt --all-databases &amp;gt; MyCloud.sql&lt;/div&gt;&lt;h3 style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 1.3em; line-height: 1; margin: 0px 0px 5px;&quot;&gt;What&#39;s missing?&lt;/h3&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Backing up configuration files and databases will allow you to protect your OpenStack configuration, but there are some things that are not protected by this type of backup. This method does not protect individual objects within object storage. Similarly, block storage data is also left unprotected. 
According to the OpenStack documentation, these types of data are left for users to back up on their own.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;You can use any compatible backup application. The OpenStack documentation basically says that it is up to the users to back up data residing on the virtual machines that they create. As such, the backup application would have to be compatible with the virtual machines. It should be noted at this point that users of the public cloud have the same problem. Public cloud providers also leave it up to the user to back up their virtual machines and the data that those virtual machines use.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Of course, this raises the question of how you can better protect an OpenStack cloud. One thing to keep in mind is that like any cloud environment, OpenStack makes use of server virtualization. In fact, OpenStack is designed to work with a number of different hypervisors. You can see the full hypervisor support matrix at &lt;a href=&quot;https://wiki.openstack.org/wiki/HypervisorSupportMatrix&quot; style=&quot;color: #663366; outline: 0px; text-decoration: none;&quot;&gt;https://wiki.openstack.org/wiki/HypervisorSupportMatrix&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;One way that you can better protect your OpenStack environment is to adopt a backup application that is specifically designed for the hypervisor that you are using. 
You will still need to protect the OpenStack configuration files and databases, but you can use the backup software to protect the individual virtual machines and their contents.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Another thing that you can do is to adopt a backup application that is OpenStack aware. However, this is more easily said than done. As previously mentioned, OpenStack is a collection of modular components that can be used to construct a private cloud. As such, none of the major backup products come preconfigured to back up OpenStack clouds.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Backup vendor Druva recently made headlines when they announced that their inSync software now supports OpenStack-based scale-out storage. The software is designed to access OpenStack storage using the Swift OpenStack storage access protocol. It will also have the ability to back up file and object storage, as well as mobile endpoints (laptops, smart phones, etc.).&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Similarly, Zmanda supports the OpenStack framework with its Amanda enterprise backup software. The software is designed to create backups from the remote server layer.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;Both Druva and Zmanda back up specific OpenStack resources, as opposed to the entire OpenStack infrastructure. 
It should be possible to also use traditional backup apps like NetBackup for Linux to back up the required components. However, NetBackup is not OpenStack aware. It would, therefore, be the backup admin&#39;s responsibility to manually configure a backup job that includes all of the required config data and databases.&lt;/div&gt;&lt;div style=&quot;background-color: white; font-family: Calibri, Helvetica, Arial, sans-serif; font-size: 16px; line-height: 19.200000762939453px; margin-bottom: 20px;&quot;&gt;The key to adequately protecting your OpenStack environment is to determine what it is that needs to be protected and then build a backup solution to meet those needs. While there are commercial products that can back up certain OpenStack resources, those products may not offer the level of protection that you require. You may have to combine commercial backup products with script-based backup techniques.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/6087609535004615216/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=6087609535004615216' title='7 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6087609535004615216'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6087609535004615216'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/03/backing-up-openstack.html' title='Backing Up Openstack'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>7</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-5942115983864442234</id><published>2014-02-09T12:24:00.000-08:00</published><updated>2014-02-09T12:24:05.402-08:00</updated><title type='text'>PaaS Outlook for 2014</title><content type='html'>&lt;div class=&quot;MsoNormal&quot;&gt;Yup, it&#39;s that time of year again when everyone makes their 2014 predictions. I guess I&#39;m no exception...&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;In this blog posting I’d like to spend a little time talking about Cloud, and specifically, about PaaS.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;But first, a little background material.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;First, let&#39;s define the different “kinds” of Clouds there are:&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;Public Cloud&lt;/b&gt; – Gartner defines&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;public cloud &lt;/span&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;as a style of computing where scalable and elastic IT-enabled capabilities are provided as a service to external customers using Internet technologies—i.e., public cloud computing uses cloud computing technologies to support customers that are external to the provider’s organization.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span 
style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;P&lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;rivate Cloud&lt;/b&gt; – Webopedia defines &lt;/span&gt;&lt;i style=&quot;text-indent: -0.25in;&quot;&gt;Private cloud&lt;/i&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&amp;nbsp;as the phrase used to describe a&amp;nbsp;cloud-computing platform that is implemented within the corporate&amp;nbsp;firewall, under the control of the IT department.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;H&lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;ybrid Cloud&lt;/b&gt; – SearchCloudComputing.com defines a hybrid cloud as a cloud-computing environment in which an organization provides and manages some resources in-house and has others provided externally.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoListParagraphCxSpFirst&quot; style=&quot;mso-list: l0 level1 lfo1; text-indent: -.25in;&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoListParagraphCxSpMiddle&quot; style=&quot;mso-list: l0 level1 lfo1; text-indent: -.25in;&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoListParagraphCxSpLast&quot; style=&quot;mso-list: l0 level1 lfo1; text-indent: -.25in;&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;There are some others, but they are all basically variations of the above.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Once you have a Cloud solution, the question is, what kind of Cloud is it? 
Here are the definitions that NIST provides.&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;span style=&quot;font-family: &#39;Times New Roman&#39;; font-size: 7pt;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;span style=&quot;font-family: &#39;Times New Roman&#39;; font-size: 7pt;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;Infrastructure as a Service (IaaS) &lt;/b&gt;- The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls).&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;Platform as a Service (PaaS)&lt;/b&gt; - The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. 
The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span style=&quot;text-indent: -0.25in;&quot;&gt;&lt;b&gt;Software as a Service (SaaS) &lt;/b&gt;- The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoListParagraphCxSpLast&quot; style=&quot;mso-list: l1 level1 lfo2; text-indent: -.25in;&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;I recently did some research on PaaS, but I found the research tough to do because it’s almost impossible to put all of the PaaS players in the same bucket, and common patterns are hard to find.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Unlike the IaaS players that provide IT resources as a service, PaaS providers are really solution development platforms.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Therefore, they 
are built around the types of problems they solve, not some industry-accepted approach.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;At the heart of the problem is the fact that PaaS is today’s most ill-defined area of cloud computing.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;The approaches, features, and definitions vary widely, with many PaaS providers offering a specific focus.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This may include support for specific programming languages, such as Salesforce.com’s Heroku with its support for Ruby, Node.js, Python, and Java, or perhaps tight integration with major databases, such as Oracle’s Cloud Platform.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Or, perhaps it’s the delivery model, with private PaaS offerings from ActiveState, AppFog, or Apprenda, for those of you who can’t yet trust the public PaaS offerings from Google or AWS. Then there is an entirely new set of PaaS providers such as ElasticBox that bring a completely different approach to the problem.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Overall, it’s largely a function of the providers, all of which are trying to be relevant in this emerging marketplace.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;PaaS is the last frontier of cloud computing, and thus the least defined.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;So, it’s still possible for vendors to manipulate the market by positioning their products to better define what PaaS is and its value, or, more likely, their campaigns will just confuse people. 
&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;In 2013, the PaaS market took on some new dimensions.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Private PaaS players saw strong growth as some enterprises looked to keep applications and data in-house.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Also, there is greater support for the emerging use of DevOps, better database integration, and better support for emerging multi-cloud deployments.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This builds upon, not replaces, the traditional uses of PaaS to automate application development, testing, and deployment processes.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Moreover, the PaaS market saw increased meshing with the IaaS space in 2013.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This includes strong showings from AWS Elastic Beanstalk and other IaaS-focused players.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;We also saw the arrival of some new PaaS players, including Oracle, and we got a clearer picture of how Salesforce.com’s and Pivotal’s PaaS offerings will likely exist in the emerging market.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Given all of these developments, there is a need to reevaluate the PaaS market and the PaaS players, in terms of how PaaS truly fits within an enterprise application development strategy.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Questions are emerging such as:&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;When will PaaS work for enterprise IT?&lt;span 
style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;When will PaaS not work for enterprise IT?&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp;&amp;nbsp; &lt;/span&gt;What is the changing value of PaaS technology, now, and into 2014?&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Some confusion and complexity have emerged in the attempts to answer these questions.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This has led to some pushback when it comes to PaaS within enterprise IT.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Many consider PaaS too complicated and too limiting for most development efforts, and for most developers.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;For instance, most PaaS offerings place the developer into a sandbox, with only the features and functions that the PaaS provider furnishes to build and deploy applications.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;While this makes development an easy and controlled process, many developers need to gain access to the resources and tools required to support specific features, such as remote and native APIs, as well as middleware and database services.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;While the PaaS providers consider this abstraction from the underlying “metal” a path to productivity, many developers don’t agree.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;PaaS does provide the ability to automate much of the development and deployment activities, as well as provide the developers with the ability to offer self- and auto-provisioning capabilities.&lt;span style=&quot;mso-spacerun: 
yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This means that application developers can focus on the applications, and not have to deal with the purchase of hardware, software, and development tools to support increasing demands on the applications or the need to scale.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;Moreover, PaaS supports new and more innovative approaches to delivery, including DevOps and the move to “continuous delivery.”&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;Approaches such as continuous integration, automated testing, and continuous deployment allow software to be developed to a high standard and easily packaged and deployed.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;This results in the ability to rapidly, reliably, and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;If there is a core pattern that is a part of most PaaS, it’s that&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;it’s solution-oriented.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;PaaS providers are focused on being the factory for cloud applications, and they understand there are many paths to get to that goal.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;As such, the offerings are very different from provider to provider, and thus the market is fragmented, complex, and confusing to those in enterprise IT.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;!--[if gte mso 9]&gt;&lt;xml&gt; &lt;o:DocumentProperties&gt;  &lt;o:Revision&gt;0&lt;/o:Revision&gt;  &lt;o:TotalTime&gt;0&lt;/o:TotalTime&gt;  
&lt;o:Pages&gt;1&lt;/o:Pages&gt;  &lt;o:Words&gt;1164&lt;/o:Words&gt;  &lt;o:Characters&gt;6640&lt;/o:Characters&gt;  &lt;o:Company&gt;EVT Corp.&lt;/o:Company&gt;  &lt;o:Lines&gt;55&lt;/o:Lines&gt;  &lt;o:Paragraphs&gt;15&lt;/o:Paragraphs&gt;  &lt;o:CharactersWithSpaces&gt;7789&lt;/o:CharactersWithSpaces&gt;  &lt;o:Version&gt;14.0&lt;/o:Version&gt; &lt;/o:DocumentProperties&gt; &lt;o:OfficeDocumentSettings&gt;  &lt;o:AllowPNG/&gt; &lt;/o:OfficeDocumentSettings&gt;&lt;/xml&gt;&lt;![endif]--&gt; &lt;!--[if gte mso 9]&gt;&lt;xml&gt; &lt;w:WordDocument&gt;  &lt;w:View&gt;Normal&lt;/w:View&gt;  &lt;w:Zoom&gt;0&lt;/w:Zoom&gt;  &lt;w:TrackMoves/&gt;  &lt;w:TrackFormatting/&gt;  &lt;w:PunctuationKerning/&gt;  &lt;w:ValidateAgainstSchemas/&gt;  &lt;w:SaveIfXMLInvalid&gt;false&lt;/w:SaveIfXMLInvalid&gt;  &lt;w:IgnoreMixedContent&gt;false&lt;/w:IgnoreMixedContent&gt;  &lt;w:AlwaysShowPlaceholderText&gt;false&lt;/w:AlwaysShowPlaceholderText&gt;  &lt;w:DoNotPromoteQF/&gt;  &lt;w:LidThemeOther&gt;EN-US&lt;/w:LidThemeOther&gt;  &lt;w:LidThemeAsian&gt;JA&lt;/w:LidThemeAsian&gt;  &lt;w:LidThemeComplexScript&gt;X-NONE&lt;/w:LidThemeComplexScript&gt;  &lt;w:Compatibility&gt;   &lt;w:BreakWrappedTables/&gt;   &lt;w:SnapToGridInCell/&gt;   &lt;w:WrapTextWithPunct/&gt;   &lt;w:UseAsianBreakRules/&gt;   &lt;w:DontGrowAutofit/&gt;   &lt;w:SplitPgBreakAndParaMark/&gt;   &lt;w:EnableOpenTypeKerning/&gt;   &lt;w:DontFlipMirrorIndents/&gt;   &lt;w:OverrideTableStyleHps/&gt;   &lt;w:UseFELayout/&gt;  &lt;/w:Compatibility&gt;  &lt;m:mathPr&gt;   &lt;m:mathFont m:val=&quot;Cambria Math&quot;/&gt;   &lt;m:brkBin m:val=&quot;before&quot;/&gt;   &lt;m:brkBinSub m:val=&quot;&amp;#45;-&quot;/&gt;   &lt;m:smallFrac m:val=&quot;off&quot;/&gt;   &lt;m:dispDef/&gt;   &lt;m:lMargin m:val=&quot;0&quot;/&gt;   &lt;m:rMargin m:val=&quot;0&quot;/&gt;   &lt;m:defJc m:val=&quot;centerGroup&quot;/&gt;   &lt;m:wrapIndent m:val=&quot;1440&quot;/&gt;   &lt;m:intLim m:val=&quot;subSup&quot;/&gt;   &lt;m:naryLim 
m:val=&quot;undOvr&quot;/&gt;  &lt;/m:mathPr&gt;&lt;/w:WordDocument&gt;&lt;/xml&gt;&lt;![endif]--&gt;&lt;!--[if gte mso 9]&gt;&lt;xml&gt; &lt;w:LatentStyles DefLockedState=&quot;false&quot; DefUnhideWhenUsed=&quot;true&quot;   DefSemiHidden=&quot;true&quot; DefQFormat=&quot;false&quot; DefPriority=&quot;99&quot;   LatentStyleCount=&quot;276&quot;&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;0&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Normal&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;heading 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 7&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 8&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;9&quot; QFormat=&quot;true&quot; Name=&quot;heading 9&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; 
Name=&quot;toc 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 7&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 8&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; Name=&quot;toc 9&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;35&quot; QFormat=&quot;true&quot; Name=&quot;caption&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;10&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Title&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;1&quot; Name=&quot;Default Paragraph Font&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;11&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Subtitle&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;22&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Strong&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;20&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Emphasis&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;59&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Table Grid&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; UnhideWhenUsed=&quot;false&quot; Name=&quot;Placeholder Text&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;1&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;No Spacing&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List&quot;/&gt;  &lt;w:LsdException 
Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; UnhideWhenUsed=&quot;false&quot; Name=&quot;Revision&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;34&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;List Paragraph&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;29&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Quote&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;30&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Intense Quote&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 1&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 2&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 3&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    
UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 4&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; 
Name=&quot;Medium Shading 1 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 5&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;60&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Shading 
Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;61&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light List Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;62&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Light Grid Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;63&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 1 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;64&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Shading 2 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;65&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 1 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;66&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium List 2 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;67&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 1 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;68&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 2 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;69&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Medium Grid 3 Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;70&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Dark List Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;71&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Shading Accent 6&quot;/&gt;  
&lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;72&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful List Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;73&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; Name=&quot;Colorful Grid Accent 6&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;19&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Subtle Emphasis&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;21&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Intense Emphasis&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;31&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Subtle Reference&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;32&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Intense Reference&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;33&quot; SemiHidden=&quot;false&quot;    UnhideWhenUsed=&quot;false&quot; QFormat=&quot;true&quot; Name=&quot;Book Title&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;37&quot; Name=&quot;Bibliography&quot;/&gt;  &lt;w:LsdException Locked=&quot;false&quot; Priority=&quot;39&quot; QFormat=&quot;true&quot; Name=&quot;TOC Heading&quot;/&gt; &lt;/w:LatentStyles&gt;&lt;/xml&gt;&lt;![endif]--&gt; &lt;!--[if gte mso 10]&gt;&lt;style&gt; /* Style Definitions */ table.MsoNormalTable  {mso-style-name:&quot;Table Normal&quot;;  mso-tstyle-rowband-size:0;  mso-tstyle-colband-size:0;  mso-style-noshow:yes;  mso-style-priority:99;  mso-style-parent:&quot;&quot;;  mso-padding-alt:0in 5.4pt 0in 5.4pt;  mso-para-margin:0in;  mso-para-margin-bottom:.0001pt;  
mso-pagination:widow-orphan;  font-size:12.0pt;  font-family:Cambria;  mso-ascii-font-family:Cambria;  mso-ascii-theme-font:minor-latin;  mso-hansi-font-family:Cambria;  mso-hansi-theme-font:minor-latin;} &lt;/style&gt;&lt;![endif]--&gt;   &lt;!--StartFragment--&gt;                                                                                     &lt;!--EndFragment--&gt;&lt;br /&gt;&lt;div class=&quot;MsoNormal&quot;&gt;I suspect this situation won’t improve much as we enter 2014.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;However, PaaS continues to be a consideration for those moving to the cloud.&lt;span style=&quot;mso-spacerun: yes;&quot;&gt;&amp;nbsp; &lt;/span&gt;How and if it’s leveraged will be defined by the particular enterprise.&lt;o:p&gt;&lt;/o:p&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/5942115983864442234/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=5942115983864442234' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5942115983864442234'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/5942115983864442234'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2014/02/paas-outlook-for-2014.html' title='PaaS Outlook for 2014'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-6331079686724729373</id><published>2013-11-25T14:09:00.000-08:00</published><updated>2013-11-25T14:09:37.601-08:00</updated><title type='text'>Forecast, Cloudy with a lot of public and a touch of private...</title><content type='html'>If you believe what the pundits tell you, then private cloud is all the rage for enterprise customers. &amp;nbsp;Certainly, if you look at what we&#39;ve been doing here at EVT, there seems to be some evidence to suggest that&#39;s actually true. &amp;nbsp;Our enterprise customers all seem to be either interested in, currently deploying, or already running some kind of private cloud.&lt;br /&gt;&lt;br /&gt;Forrester Research says that 31% of enterprise customers already have a private cloud in place and 17% plan to build one over the next year. &amp;nbsp;However, when you dig down a little further, what you&#39;ll find is that only 13% actually have something that fits the &quot;true&quot; definition of private cloud. &amp;nbsp;Most have some kind of virtualization in place, with some added software to help manage that virtual infrastructure. &amp;nbsp;But, more often than not, those so-called private clouds are missing some key elements of a &quot;true&quot; private cloud.&lt;br /&gt;&lt;br /&gt;Part of the problem could be that IT has a very loosey-goosey definition of what private cloud really is - and therefore of what it brings to the table for IT and for IT&#39;s customers. 
&amp;nbsp;The National Institute of Standards and Technology (NIST) says that for Infrastructure as a Service (IaaS) to be considered a cloud, it must have five essential characteristics:&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;On-demand self-service&lt;/li&gt;&lt;li&gt;Broad network access&lt;/li&gt;&lt;li&gt;Resource pooling&lt;/li&gt;&lt;li&gt;Rapid elasticity&lt;/li&gt;&lt;li&gt;Measured service&lt;/li&gt;&lt;/ol&gt;IT&#39;s definition of a cloud is often very different, and can vary from &quot;we have a data center&quot; to &quot;we look just like Amazon Web Services&quot;. &amp;nbsp;But without the five essential characteristics above, IT will not be able to achieve the goals that drive most organizations to &quot;The Cloud&quot; in the first place. &amp;nbsp;The scalability, elasticity, and cost savings that the public cloud promises to the business customers of IT are the real goals that IT should be looking to match with a private cloud.&lt;br /&gt;&lt;br /&gt;So why is the public cloud growing so rapidly? &amp;nbsp;AWS was a $2 billion business last year, and it is predicted to double that this year. &amp;nbsp;Yet, as you can see above, private cloud seems to be struggling to gain traction in the data center, especially when you consider the number of data centers that have a private cloud in name only (PCINO). &amp;nbsp;I suspect that there are a number of reasons.&lt;br /&gt;&lt;br /&gt;First, moving to a true private cloud is a very difficult cultural and organizational hurdle for most IT departments. &amp;nbsp;It really means a shake-up of IT at the most fundamental level, from the top all the way to the bottom. &amp;nbsp;It means that IT will truly have to morph into the service organization it has long been trying to become, a goal many departments have yet to reach. That&#39;s the cultural change. They also need to change from an organizational perspective. 
They need to move away from vertically siloed departments within IT such as server, storage, and network to horizontally organized departments; this is key to IT achieving the results they desire and to being able to compete successfully with public cloud providers.&lt;br /&gt;&lt;br /&gt;It should be noted here that IT often attempts to &quot;cheat&quot; the organizational change by &quot;matrixing&quot; people from existing IT departments into new &quot;cloud&quot; organizations. This often leads to failure since those &quot;matrixed&quot; people often bring with them old ideas about how things should be run as well as old processes and procedures. &lt;br /&gt;&lt;br /&gt;This change also must go beyond just IT. &amp;nbsp;For example, the purchasing department must understand the new model for purchasing converged infrastructure. In particular, they can&#39;t be allowed to &quot;break up&quot; the converged infrastructure and purchase the individual components through old, existing vendor relationships. &amp;nbsp;This continues on into IT as well; converged infrastructure means just that, converged. This often means that equipment that was traditionally purchased directly from the manufacturer may now be part of the converged infrastructure stack and thus will be purchased as part of that solution. These old relationships with vendors and manufacturers often get in the way of achieving &quot;true&quot; cloud.&lt;br /&gt;&lt;br /&gt;So, IT&#39;s inability to make the cultural and organizational changes to successfully compete with the public cloud is one reason I believe that private cloud adoption is where it is today. &amp;nbsp;A related reason is that in some cases IT recognizes the issues, and actually starts to utilize the public cloud to deliver services to their end users. &amp;nbsp;This is often an attempt to reel back into the fold &quot;shadow IT&quot; that has already deployed solutions in the public cloud. 
This is often followed quickly by a discussion on IT&#39;s part of hybrid cloud. In many cases that&#39;s because IT feels it just can&#39;t compete against the public cloud for all applications, and thus comes to the reluctant conclusion that rather than lose the entire pie, it&#39;s willing to give some part of the pie to the public cloud and build a private cloud for the rest. There&#39;s also an unspoken idea on the part of IT that once they get their hybrid cloud up and running, they will eventually prove to the business that they are better than public cloud, and thus a majority of the applications will move into that private cloud over time, leaving only a small handful of applications in the public cloud.&lt;br /&gt;&lt;br /&gt;In the end, I think that unless IT can address the barriers to private cloud discussed above, their dream of making the public cloud a temporary home is actually just a pipe dream. But in either case, IT&#39;s future is one in which they are a service provider and service director that helps the business find the best, most cost effective home for their applications.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/6331079686724729373/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=6331079686724729373' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6331079686724729373'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/6331079686724729373'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2013/11/forecast-cloudy-with-lot-of-public-and.html' title='Forecast, Cloudy with a lot of public and a touch of private...'/><author><name>Joerg 
Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-8666257379186828309</id><published>2013-09-08T19:15:00.000-07:00</published><updated>2013-09-09T13:58:07.163-07:00</updated><title type='text'>Is OpenStack ready for prime time yet?</title><content type='html'>For those who&#39;ve been reading this blog for a while, or who know me, you know that while I&#39;ve been in the data center business for a long time, lately I&#39;ve been focused on storage and backup. However, over the last couple of years I&#39;ve been watching the infrastructure business change. &amp;nbsp;What I find interesting is that what&#39;s old is new again!&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;When I first started out in &quot;Open Systems&quot;, network, server, and storage were all managed as a single entity. So, here we are again. A &quot;pod&quot; or stack is just network, server, and storage all managed together, as a single entity. &amp;nbsp;The new wrinkle here is that we also size them as a single entity, which provides a number of advantages. But that&#39;s for another blog. As a matter of fact, I plan to write a couple of blogs on IaaS/PaaS/SaaS, how to move successfully to &quot;the cloud&quot;, and data protection in a cloud environment.&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;In this blog, I want to talk about one of the &quot;stacks&quot; called &quot;OpenStack&quot;. The first question I get asked when I begin to talk about OpenStack is, what&#39;s the difference between a &quot;stack&quot; and a &quot;pod&quot;? &amp;nbsp;Why is it called OpenStack and not OpenPod? 
The confusion is quite understandable, since the amount of hype and marketecture around everything having to do with &quot;the cloud&quot;, including this topic, is enormous. &amp;nbsp;As a matter of fact, it&#39;s so bad that some of the terms are, in my opinion, starting to become meaningless. &amp;nbsp;So I like to start out any discussion of these topics with a couple of definitions so that the audience and I are on the same page. According to Wikipedia:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;b&gt;OpenStack&lt;/b&gt; is a cloud computing project to provide an infrastructure as a service (IaaS). It is free open source software released under the terms of the Apache License. The project is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community.&lt;/i&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;This calls for a definition of IaaS (Infrastructure as a Service), again from Wikipedia:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as VMware, Hyper-V, Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support-system can support large numbers of virtual machines and the ability to scale services up and down according to customers&#39; varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles. IaaS-cloud providers supply these resources on-demand from their large pools installed in data centers. 
For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).&amp;nbsp;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;Note that IaaS can also be&amp;nbsp;implemented&amp;nbsp;in a private cloud (in your data center), or in both the public and a private cloud, called a&amp;nbsp;Hybrid&amp;nbsp;Cloud. &amp;nbsp;This ability to utilize the resources of both a private cloud and a public cloud is becoming more and more interesting to large enterprises. &amp;nbsp;Again, more on this in a later blog where I will talk about the economics of &quot;cloud&quot;.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;OK, so enough of laying the groundwork. &amp;nbsp;Let&#39;s talk about OpenStack, and see if we can answer the basic question: is it ready for &quot;prime time&quot;? &amp;nbsp;Can I use it in the enterprise to implement my private cloud IaaS infrastructure? The answer is, maybe. Let&#39;s talk about it a bit.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;First, the interest in OpenStack is clearly growing, and growing quickly. You can see this by looking at the attendance of the OpenStack Summit, which started out life with a $15,000 budget and 75 people who were basically coerced to go. The most recent OpenStack Summit had a $2 million budget and over 3,000 attendees. So, clearly, interest is up, but nowhere near the kind of interest that VMware has managed to get. The most recent VMworld had over 23,000 attendees. &amp;nbsp;So, no doubt, lots of interest. But what&#39;s driving the interest? Obviously, cost is a big consideration. Since OpenStack is open source, the cost of implementing it is significantly lower than for any of the commercial software out there. &amp;nbsp;But are there hidden costs that make it not as good a &quot;buy&quot; as one might think at first blush? 
&amp;nbsp;The short answer to that is &quot;yes&quot;, just like it is with any open source software. Things like support costs as well as the cost of finding/training staff, etc. all add to the TCO of any open source solution, including that of OpenStack.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;But let&#39;s talk about OpenStack itself a bit. &amp;nbsp;One of the things that I think was holding back OpenStack was the difficulty of deploying the solution. &amp;nbsp;However, this is rapidly being addressed by software such as Canonical&#39;s Juju. There are also a number of companies that provide OpenStack-based solutions, such as Piston OpenStack. &amp;nbsp;Piston provides a turn-key OpenStack solution that includes:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://2.bp.blogspot.com/-g1wD-DKdPgU/Ui0rFEDMLdI/AAAAAAAAACw/7IpSNUNEBPc/s1600/Piston.tiff&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;252&quot; src=&quot;http://2.bp.blogspot.com/-g1wD-DKdPgU/Ui0rFEDMLdI/AAAAAAAAACw/7IpSNUNEBPc/s320/Piston.tiff&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The other way we can tell if anything is ready for prime time is to look at existing adoption of the technology. A year or two ago, there were almost no enterprise implementations of OpenStack outside of some service providers such as Rackspace, as well as NASA. 
&amp;nbsp;This has changed; companies such as Bloomberg, Comcast, and Best Buy have all implemented OpenStack.&amp;nbsp;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;At the most recent OpenStack Summit, Bloomberg CTO Pravir Chandra, one of several company executives who detailed their real-world experience with the platform at the summit, said his team set a high bar for OpenStack. Bloomberg’s goals included capabilities such as high availability, no cascading failures, and smooth scale down and scale up. As described in GigaOM:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&quot;They were able to get there by deploying OpenStack along with considerable custom work of their own, both above and below that layer. They ended up setting up the high-availability databases and figuring out how to aggregate logs from the hypervisor level.&quot;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/div&gt;&lt;div&gt;&lt;div&gt;A story about Best Buy in ITWorld describes Bestbuy.com as &quot;the poster child for organizations that can benefit from the cloud.&quot; The online retailer built an internal cloud on OpenStack that the company says speeds up the ecommerce site, allows faster development cycles, and scales.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;For example, at the beginning of the Christmas shopping season last year, Bestbuy.com saw a spike of eight times its normal traffic, Joel Crabb, chief architect, told ITWorld. &quot;If that doesn’t scream out for elastic scaling, I don’t know what does.&quot;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;OpenStack also dramatically cut costs for Best Buy, company executives told summit attendees. Director of eBusiness Architecture Steve Eastham said past releases of the website cost about $20,000 to provision a single managed VM. 
With OpenStack, he said, the company is spending around $91,000 per rack.&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;div&gt;So I think it’s still an open question how OpenStack will ultimately stack up against Amazon Web Services in the public cloud infrastructure sector and VMware in the (mostly) private cloud market, where legacy applications are in play. But OpenStack evangelists like Rackspace CTO John Engates are gearing up to bring their solutions to enterprise customers. In an interview, he told Ryan Cox:&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;The enterprise community is thirsty for the cloud and that ball will soon drop. The opportunity to innovate in open source with OpenStack is one that the legacy solutions in enterprise will soon be eaten. Mobile devices, Big Data, your and my Internet of Things … access to all of these through infrastructure that can scale quickly at low cost is a common theme we’re hearing at the OpenStack Summit 2013.&lt;/i&gt;&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/div&gt;&lt;/div&gt;&lt;div&gt;So, back to our original question: is OpenStack &quot;ready for prime time&quot;? &amp;nbsp;I think that the answer is, maybe. If you&#39;re looking to build a private cloud infrastructure, I think it&#39;s a ready option. If you&#39;re looking for a hybrid solution, it&#39;s a bit less clear, but it&#39;s certainly possible.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Let me know what you think in the comments. 
I&#39;m particularly curious if you&#39;re involved in an OpenStack deployment.&amp;nbsp;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/8666257379186828309/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=8666257379186828309' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/8666257379186828309'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/8666257379186828309'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2013/09/is-openstack-ready-for-prime-time-yet.html' title='Is OpenStack ready for prime time yet?'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://2.bp.blogspot.com/-g1wD-DKdPgU/Ui0rFEDMLdI/AAAAAAAAACw/7IpSNUNEBPc/s72-c/Piston.tiff" height="72" width="72"/><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-3732741883836255373</id><published>2013-06-03T10:44:00.000-07:00</published><updated>2013-06-03T10:44:30.805-07:00</updated><title type='text'>Upgrading Your Storage Microcode</title><content type='html'>Folks,&lt;br /&gt;&lt;br /&gt;I was just reading a posting by Chris Evans on this topic at&amp;nbsp;http://architecting.it/2013/06/03/managing-microcode-upgrades/ and he makes a lot of great points. 
&amp;nbsp;I agree with everything that Chris posted, only I would go even further and say, based on my experience, that having a regular process for upgrading your storage microcode is critical to managing any storage environment.&lt;br /&gt;&lt;br /&gt;There seem to be three competing&amp;nbsp;philosophies&amp;nbsp;that cause problems on this topic &quot;in the wild&quot;:&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&quot;If it Ain&#39;t Broke, Don&#39;t Fix It!&quot; - This is the idea that you should only patch or upgrade your storage infrastructure if you run into a problem. &amp;nbsp;I run into this approach more often than you would think, and&amp;nbsp;invariably&amp;nbsp;what this means is that you will run into every problem that exists in the microcode and have to deal with it on an &quot;emergency&quot; basis. It also means that you will often go for long periods of time without patching or updating, and then when you hit a problem, you have a huge jump, which almost always means that you also have a lot of servers that need HBA firmware and/or driver updates. &amp;nbsp;This usually ends up being a HUGE and painful project that, in some people&#39;s minds, simply confirms&amp;nbsp;why&amp;nbsp;they are avoiding doing the storage microcode upgrades in the first place. &amp;nbsp;What they don&#39;t realize is that the main reason it&#39;s so&amp;nbsp;painful&amp;nbsp;is that they are so far behind. If they actually kept up, then the pain would be less and spread over time.&lt;/li&gt;&lt;li&gt;&quot;Pick a standard, and keep it as long as possible&quot; - This approach is one I see fairly often as well. Here the storage team picks a &quot;standard&quot;&amp;nbsp;version&amp;nbsp;of the OS, and sticks to it, only patching it when there is a problem, or until they are forced to change because new hardware doesn&#39;t support that version of the OS any longer. 
Then they adopt the new version of the OS as their standard, and bring everything up to that level. It&#39;s actually similar to #1, and&amp;nbsp;suffers&amp;nbsp;from the same sorts of issues.&lt;/li&gt;&lt;li&gt;&quot;Apply every patch and/or upgrade the vendor releases as soon as it becomes GA&quot; - I see this much less frequently, mainly because people are afraid, often rightfully so, that patching/upgrading&amp;nbsp;this&amp;nbsp;frequently&amp;nbsp;will cause more problems than it solves.&lt;/li&gt;&lt;/ol&gt;&lt;br /&gt;&lt;br /&gt;The process that Chris outlines in his blog post is, in my opinion, the right way to go. Apply your patches either quarterly, or twice per year, in predefined upgrade windows. &amp;nbsp;This doesn&#39;t mean that you can&#39;t apply patches to resolve specific issues as they arise. &lt;br /&gt;&lt;br /&gt;But I would go a bit further in my definition of the process. &amp;nbsp;Specifically, I would have a process that works something like this:&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;ol&gt;&lt;li&gt;Between upgrades (i.e. during the quarter or 6 month period between upgrades) I would pull down every patch and upgrade that the storage manufacturer releases into the storage team lab, and apply it to a lab box. &amp;nbsp;I would then apply a set of&amp;nbsp;regression&amp;nbsp;tests to validate that the patch/upgrade worked in my environment, with my servers, HBA&#39;s, etc.&lt;/li&gt;&lt;li&gt;About a week prior to my upgrade window I would pull together an &quot;upgrade&quot; package where I decide what patches/upgrades, etc. I was going to apply to the storage, as well as any that were required for the HBA&#39;s, host OS&#39;s, etc. &amp;nbsp;Note it&#39;s critical that the host HBA&#39;s be upgraded to the latest version of&amp;nbsp;their&amp;nbsp;drivers, etc. that are supported by the patches/upgrades that you are going to roll out, to avoid issues. 
Upgrades to the servers are often avoided even more than the storage OS upgrades, since they are usually the source of outages (reboot required) and due to the fact that it&#39;s not the storage team doing those upgrades in many cases.&lt;/li&gt;&lt;li&gt;I would actually have two windows, one for arrays that support dev/test, and one for arrays that support production, if it&#39;s possible. I would then roll out the patches to dev/test, and let them bake there for a week or two, and then roll them out to production. This isn&#39;t 100% necessary, especially if you&#39;ve done good testing in your lab, but it would be nice.&lt;/li&gt;&lt;li&gt;Go to step #1 and start the process all over again.&lt;/li&gt;&lt;/ol&gt;When I&#39;ve described this process to people I often get push back like &quot;hey, that means that we will constantly be either testing or performing upgrades&quot;! &amp;nbsp;This is especially the case if you decide to go on the quarterly schedule. My response is &quot;yup, because that&#39;s part of what a storage team does, and why you have a storage team&quot;. 
Frankly, the team&#39;s time is better spent on this than on, say, doing a lot of LUN allocations, which you can automate, and even delegate, once it&#39;s automated.&lt;br /&gt;&lt;br /&gt;The bottom line is, it&#39;s a &quot;pay me now, or pay me later&quot; situation, and I would rather do as much of my patching/upgrading as possible in a proactive manner than in a reactive manner where it&#39;s a big emergency and a big project with a lot of downtime all at once.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/3732741883836255373/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=3732741883836255373' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3732741883836255373'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/3732741883836255373'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2013/06/upgrading-your-storage-microcode.html' title='Upgrading Your Storage Microcode'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-4491235303394043924</id><published>2012-09-23T15:12:00.002-07:00</published><updated>2012-09-23T15:12:35.036-07:00</updated><title type='text'>Keeping the Lights on Syndrome</title><content type='html'>Does your IT organization suffer from &quot;Keeping the Lights on Syndrome&quot;? 
For those of you who are asking what the heck &quot;Keeping the Lights on Syndrome&quot; is, here&#39;s a quick definition of the problem. &quot;Keeping the Lights on Syndrome&quot; is a situation that more and more IT organizations are finding themselves in, where they are spending 70% of their IT budget on &quot;keeping the lights on&quot; and only 30% on innovating with the business and modernizing their technology.&lt;br /&gt;&lt;br /&gt;So, what&#39;s the right number? Should it be 60% - 40%? 50% - 50%?&amp;nbsp; Well, for a lot of years the number has been closer to 50% - 50%, and that&#39;s probably a good number to strive toward for most IT organizations. The next question I&#39;m often asked is how can I address this problem?&amp;nbsp; What can I do to get my Infrastructure and Operations costs down? I&#39;m already virtualizing my server infrastructure, and I&#39;m looking at more virtualization including virtualizing my storage and my networks, so what more can I do to get my I&amp;amp;O expenses down even further?&lt;br /&gt;&lt;br /&gt;The answer is that virtualization has been a great help in keeping the number down to just 70-30. Without virtualization a lot of organizations might be staring at 80-20, or even 90-10.&amp;nbsp; OK, so what&#39;s the next step, you ask? Please don&#39;t say &quot;cloud&quot;, I&#39;ve heard that enough already in the last year! As a matter of fact, every manufacturer of infrastructure, and infrastructure software, has been telling me that all I have to do is buy their solution and I have a &quot;cloud solution&quot; in place.&lt;br /&gt;&lt;br /&gt;I tend to agree, the term &quot;cloud&quot; is overused. So, let&#39;s not use it here; let&#39;s instead look at some practical things that the I&amp;amp;O organization can do to address the &quot;Keeping the Lights on Syndrome&quot;. 
Longer term, yes, something like IT as a Service, whether it&#39;s implemented using a private (internal) cloud, a public cloud, or a combination of the two called a hybrid cloud, doesn&#39;t matter. But that&#39;s a longer term solution. So what can the I&amp;amp;O organization do in the shorter term to address the problem, and maybe lay the groundwork for the longer term cloud solution as well? &lt;br /&gt;&lt;br /&gt;What can I&amp;amp;O do beyond completing the current drive toward virtualizing almost the entire infrastructure? They can start &quot;commoditizing&quot; their infrastructure. What is &quot;commodity&quot; infrastructure? It&#39;s the idea that you buy your infrastructure including network, server, and storage as a single unit.&amp;nbsp; Some people call this converged infrastructure, but whatever you call it, the idea is to buy your infrastructure as a single SKU which defines a single unit of capacity for your infrastructure.&lt;br /&gt;&lt;br /&gt;How does this help with the &quot;Keeping the Lights on Syndrome&quot;? It removes a major cost from your I&amp;amp;O organization: the cost of developing the &quot;right&quot; solution for each and every application, and then building a custom infrastructure to support that &quot;right&quot; solution. Instead you buy your infrastructure capacity in &quot;chunks&quot;, and then carve those &quot;chunks&quot; into standard sized pieces.&amp;nbsp; That doesn&#39;t mean that those standard pieces must all be identical, but rather that there should be a limited number of standard sizes. For example, Small, Medium, Large, and X-Large.&lt;br /&gt;&lt;br /&gt;How does this help your I&amp;amp;O organization address the &quot;Keeping the Lights on Syndrome&quot;? It does so by making your purchasing more efficient. 
It also reduces the amount of engineering you have to do by eliminating most of the custom engineering and custom building that is still happening in the I&amp;amp;O organization in spite of the fact that you have virtualized much of your infrastructure.&lt;br /&gt;&lt;br /&gt;So, how would this work, you ask?&amp;nbsp; My application teams need to have their requirements met! My response is that it would work just like buying a car. If you have a family of, say, 5 people, and you like to go on family driving trips, do you go to the Ford dealership and tell them what you want in a car, and then have them build you a custom car that exactly meets your needs? No, of course not. You go to the dealership and choose among the several different offerings that they have, and then buy the one that is closest to meeting your needs. Sure, you can &quot;customize&quot; that car by picking the color, the size of the engine, maybe some custom wheels, etc. But all of this is based on a limited set of standard platforms that Ford builds; it&#39;s not custom from the ground up.&lt;br /&gt;&lt;br /&gt;Right now, I would argue that, even with virtualization, most I&amp;amp;O organizations are still building custom cars from the ground up. What I&#39;m suggesting is that instead the I&amp;amp;O organization should be buying a &quot;standard&quot; platform, provide some standard sized &quot;environments&quot; that the application teams can pick from, and then only &quot;customize&quot; the application environments based on the standard platforms/environments. So, let&#39;s say, for example, that you have a converged infrastructure where a single unit of converged infrastructure can handle any combination of 2,000 small VM&#39;s, 1,000 medium VM&#39;s, 500 large VM&#39;s, and 250 X-large VM&#39;s.&amp;nbsp; When the application teams need new VM&#39;s they simply request one of the standard sizes. No custom engineering required. 
If, however, none of the standard sizes fits the needs of the application teams, then a custom engineered VM is built for them. The key here is to keep the number of custom VM&#39;s down to a minimum. Mostly this can be done through the charge back process by making any custom VM cost significantly more than an X-large VM.&lt;br /&gt;&lt;br /&gt;What does this buy the I&amp;amp;O organization? First, it addresses the &quot;Keeping the Lights on Syndrome&quot; by reducing the cost of deploying new infrastructure.&amp;nbsp; It also makes the I&amp;amp;O organization more agile, since it saves all of the time that is needed to engineer custom solutions.&amp;nbsp; Finally, this approach also lays the groundwork for automating the deployment of the infrastructure, otherwise known as Infrastructure as a Service, and IaaS is one of the first layers on the way to ITaaS and &quot;cloud&quot;.&lt;br /&gt;&lt;br /&gt;So, what are the barriers to implementing a converged infrastructure solution for your I&amp;amp;O organization? There are a number of them, actually, and they are all organizational in nature. 
First, you need to pick a partner that can provide you with the right converged infrastructure for your I&amp;amp;O organization.&amp;nbsp; This can be an issue because typically your purchasing organization already has agreements in place with your storage, server, and network vendors, so if you want to continue to use that technology you are going to have to get your purchasing people on board, and they are going to have to talk with your storage, server, and network suppliers about working with a partner that can pull together all three and provide them as a single SKU.&amp;nbsp; Once you have that worked out, you need to get buy-in from your engineering and architecture organizations.&amp;nbsp; The architecture organization is going to be threatened by this move to converged infrastructure, since they will perceive it as a move to reduce their control over the infrastructure in the organization. However, the engineering organization is the one that will likely feel the most threatened by the move to converged infrastructure. They will very likely view it as a direct attack against them and will throw up every argument for why &quot;this won&#39;t work&quot; that you can imagine. Finally, your storage, server, and network administration organizations will need to be revamped. Unless you reorganize to support/manage your converged infrastructure with a single organization rather than 3 separate teams, much of the advantage of pulling storage/server/network together physically can be lost.&lt;br /&gt;&lt;br /&gt;Finally, let me say that several of our customers are at various stages of implementing converged infrastructure, IaaS, and cloud infrastructure, and how successful these initiatives are is directly related to the organization&#39;s ability to change and embrace the new technology. It is also directly related to the partner&#39;s ability to deliver on the organization&#39;s needs. 
Without a good partner who understands the needs and goals of the organization, these initiatives are doomed to failure.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/4491235303394043924/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=4491235303394043924' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4491235303394043924'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/4491235303394043924'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2012/09/keeping-lights-on-syndrome.html' title='Keeping the Lights on Syndrome'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-2393598850000268305</id><published>2012-08-17T13:51:00.003-07:00</published><updated>2012-08-17T13:51:49.063-07:00</updated><title type='text'>ILM/HSM part 2, Return of ILM/HSM</title><content type='html'>Folks, sorry it&#39;s been so long since my last posting! Time flies when you&#39;re having fun, and I&#39;ve been having a lot of fun over the last year. What have I been doing, you ask? Well, a lot. Among other things, I&#39;ve been trying to help our customers sort through a changing storage environment, and I&#39;ve learned a few things in the process.  What&#39;s all this change I&#39;m referring to?  
Well, among other things, Flash/SSD has really started to take off, and that has a lot of implications for the storage team. So I have spent a lot of time helping our customers sort through the different options, and discovered some things in the process that I would like to share with you.  But first, a quick review of what&#39;s up with storage and Flash/SSD. As I indicated above, Flash/SSD is really beginning to make inroads into the data center. Flash/SSD comes in basically three different flavors.  First, Flash/SSDs can be used in something that looks like a traditional storage array. There are a couple of different variations of this type of storage array. Some use SSD drives in place of traditional disk drives, and some use Flash memory directly. Typically, the arrays that use SSDs provide many of the same features as other traditional storage arrays, such as snapshots, replication, etc. Arrays based on Flash memory, on the other hand, typically provide better performance than arrays that use SSD drives, mainly because they avoid all of the overhead involved with the SCSI protocol. However, these arrays also often don&#39;t have all of the features we need in the data center, such as snaps and replication. In both cases, from a storage management perspective you would manage it much like any other storage array in your data center.  Second, there are the traditional storage arrays with Flash/SSD added to them. Again, these arrays come in basically two flavors. In both cases, however, an effort is made to only utilize the Flash/SSD for data which is currently &quot;in use&quot; or &quot;hot&quot; in an effort to keep the costs down.  With the first flavor, SSD drives are used to hold &quot;hot&quot; blocks of data, with &quot;cool&quot; blocks of data being stored on traditional disk drives. This requires sophisticated software that monitors how &quot;hot&quot; the data is and moves it appropriately. 
With the second type of array, Flash is added to the controller and used to extend the cache. This has the advantage that the software is a simple extension of the existing controller software, and as I mentioned above, the overhead of the SCSI protocol is avoided. The downside is that this only provides a performance boost for the read half of the equation.   Finally, there is the ability to add Flash memory to the servers that run your applications. Once again, there are two flavors here.  The first, and simplest, flavor is to utilize the Flash memory as an extended disk cache.  The advantage to this is that it accelerates I/O to/from any disk arrays you may already own. The downside is that it is often limited in which operating systems it works with. The second flavor makes the Flash memory appear to the OS on the server as a disk drive.  This has the advantage of very high performance, but is limited in size. It is also limited in that you can&#39;t use features like server clustering, since this data can&#39;t be shared among a group of servers.  So what are the lessons learned from all of the above?  I think that there are a couple. One is that if we are going to utilize some or all of this technology in the data center, we are really looking at bringing back the old ILM/HSM days. For the &quot;Flash/SSD&quot;-only arrays, because of their cost, most data centers aren&#39;t going to bring them in to replace all of their traditional storage array capacity. So some way to move data from the expensive storage to the less expensive storage needs to be found if costs are going to be kept under control. With the second type of array, software to move the data is supplied, but there are questions about how effective this software is, particularly in keeping up with data whose &quot;temperature&quot; changes quickly. 
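The hot/cool block movement described above can be sketched in a few lines. This is a toy model, not any vendor's implementation: it simply counts accesses per block over a window and places the hottest N blocks on the fast tier, whereas real arrays use far more sophisticated decay and aging heuristics:

```python
from collections import Counter

class TieringEngine:
    """Toy sub-LUN tiering: hottest blocks go to SSD, the rest to HDD."""

    def __init__(self, ssd_capacity_blocks: int):
        self.ssd_capacity = ssd_capacity_blocks
        self.access_counts = Counter()

    def record_io(self, block_id: int) -> None:
        # Every read/write bumps the block's "temperature".
        self.access_counts[block_id] += 1

    def placement(self) -> dict:
        """Map each seen block to a tier based on access temperature."""
        hottest = {b for b, _ in self.access_counts.most_common(self.ssd_capacity)}
        return {b: ("ssd" if b in hottest else "hdd") for b in self.access_counts}

engine = TieringEngine(ssd_capacity_blocks=2)
for block in [7, 7, 7, 3, 3, 9]:
    engine.record_io(block)
print(engine.placement())  # {7: 'ssd', 3: 'ssd', 9: 'hdd'}
```

The "questions about how effective this software is" boil down to exactly this loop: how often placement() is recomputed, and how quickly the counters forget old history.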
The third type of Flash/SSD certainly improves performance, but increases the &quot;storage islands&quot; in your data center unless some kind of ILM/HSM software can be applied.  Where this leaves us is with many of the same issues that, ultimately, derailed ILM the last time around. The main issue at the time was the classification of the data. Getting the business to classify their data was very difficult, and in the end, we often threw up our hands and just moved data based on &quot;last access date&quot;. While this works for file-based data, it doesn&#39;t work at all for database data, for example.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/2393598850000268305/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=2393598850000268305' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/2393598850000268305'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/2393598850000268305'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2012/08/ilmhsm-part-2-return-of-ilmhsm.html' title='ILM/HSM part 2, Return of ILM/HSM'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-701752325379708314</id><published>2011-06-12T12:51:00.000-07:00</published><updated>2011-06-12T12:51:23.423-07:00</updated><title type='text'>NetApp Deduplication An In-depth 
Look</title><content type='html'>There has been a lot of discussion lately about the NetApp deduplication technology, especially on twitter.&amp;nbsp; We had a lot of misinformation and FUD flying around, so I thought that a blog entry that takes a close look at the technology was in order.&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;But first, a bit of disclosure:&amp;nbsp; I currently work for a storage reseller that sells NetApp as well as other storage. The information in this blog posting is derived from NetApp documents, as well as my own personal experience with the technology at our customer sites.&amp;nbsp; This posting is not intended to promote the technology as much as it is to explain it. The intent here is to provide information from an independent perspective. Those reading this blog post are, of course, free to interpret it the way they choose.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;How NetApp writes data to disk.&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;First, let&#39;s talk about how the technology works.&amp;nbsp; For those who aren&#39;t familiar with how a NetApp array stores data on disk, here&#39;s the key to understanding how NetApp approaches writes.&amp;nbsp; NetApp stores data on disk using a simple file system called WAFL (Write Anywhere File Layout).&amp;nbsp; The file system stores metadata describing the data blocks: inodes point to indirect blocks, and indirect blocks point to the data blocks. One other thing that should be noted about the way that NetApp writes data is that the controller will coalesce writes into full stripes whenever possible. Furthermore, the concept of updating a block in place is unknown in the NetApp world. 
Block updates are simply handled as new writes, and the pointers to the updated blocks are moved to point to the new &quot;updated&quot; block.&amp;nbsp; &lt;br /&gt;&lt;br /&gt;&lt;b&gt;How deduplication works.&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;First, it should be noted that NetApp deduplication operates on a volume level.&amp;nbsp; In other words, all of the data within a single NetApp volume is a candidate for deduplication. This includes both file data, and block (LUN) data that is stored within that NetApp volume.&amp;nbsp; NetApp deduplication is a post-process that occurs based on either a watermark for the volume, or on a schedule.&amp;nbsp; For example, if the volume exceeds 80% of its capacity, a deduplication run can be started automatically. Or, a deduplication run can be started at a particular time of day, usually at a time when the user thinks the array will be less utilized. &lt;br /&gt;&lt;br /&gt;The maximum sharing for a block is 255. This means that if there are 500 duplicate blocks, there will be 2 blocks actually stored, with 1/2 of the pointers pointing to the first block and 1/2 of the pointers pointing to the second block. Note that this 255 maximum is separate from the 255 maximum for snapshots.&lt;br /&gt;&lt;br /&gt;When deduplication runs for the first time on a NetApp volume with existing data, it scans the blocks in the volume and creates a fingerprint database, which contains a sorted list of all fingerprints for used blocks in the volume.&amp;nbsp; After the fingerprint file is created, fingerprints are checked for duplicates, and, when a match is found, first a byte-by-byte comparison of the blocks is done to make sure that the blocks are indeed identical. If they are found to be identical, the block&#39;s pointer is updated to the already existing data block, and the new (duplicate) data block is released. 
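The fingerprint-then-verify flow described above can be sketched in miniature. This is a simplified illustration, not ONTAP's actual code: real WAFL fingerprints are cheaper checksums kept in a sorted file, and only fingerprint collisions trigger the byte-by-byte verify. Here SHA-256 stands in for the fingerprint, and the 255 sharing limit from the text is enforced directly:

```python
import hashlib

MAX_SHARING = 255  # per the text: at most 255 pointers may share one stored block

class DedupVolume:
    """Toy dedup: fingerprint each block, verify byte-by-byte, then share."""

    def __init__(self):
        self.blocks = {}     # block_id -> raw bytes actually stored
        self.refcounts = {}  # block_id -> number of pointers sharing it
        self.next_id = 0

    def _store(self, data: bytes) -> int:
        bid, self.next_id = self.next_id, self.next_id + 1
        self.blocks[bid] = data
        self.refcounts[bid] = 1
        return bid

    def write(self, data: bytes) -> int:
        fp = hashlib.sha256(data).digest()  # fingerprint stand-in
        for bid, existing in self.blocks.items():
            # A fingerprint match alone is not trusted: verify the bytes.
            if hashlib.sha256(existing).digest() == fp and existing == data:
                if self.refcounts[bid] < MAX_SHARING:
                    self.refcounts[bid] += 1  # share: duplicate block is released
                    return bid
        # Unique data (or the sharing limit was hit): store a new block.
        return self._store(data)

vol = DedupVolume()
a = vol.write(b"A" * 4096)
b = vol.write(b"A" * 4096)   # duplicate -> shared, nothing new stored
c = vol.write(b"B" * 4096)   # unique -> new block
print(len(vol.blocks), vol.refcounts[a])  # 2 2
```

Note how the 500-duplicates example from the text falls out of this model: after 255 writes of the same block the refcount cap forces a second physical copy, which then absorbs the remaining pointers.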
Releasing a duplicate data block entails updating the indirect inode pointing to it, incrementing the block reference count for the already existing data block, and freeing the duplicate data block. &lt;br /&gt;&lt;br /&gt;As new data is written to the deduplicated volume, a fingerprint is created for each new block and written to a change log file. When deduplication is run subsequently, the change log is sorted, its sorted fingerprints are merged with those in the fingerprint file, and then the deduplication processing occurs as described above.&amp;nbsp; There are two change log files, so that as deduplication is running and merging the new blocks from one change log file into the fingerprint file, new data that is being written to the flexible volume is causing fingerprints for these new blocks to be written to the second change log file. The roles of the two files are then reversed the next time that deduplication is run. (For those familiar with Data ONTAP usage of NVRAM, this is analogous to when it switches from one half to the other to create a consistency point.)&amp;nbsp; Note that when deduplication is run on an empty volume, the fingerprint file is still created from the log file.&lt;br /&gt;&lt;b&gt;&lt;br /&gt;Performance of NetApp deduplication&lt;/b&gt;.&lt;br /&gt;&lt;br /&gt;There has been a lot of discussion about the performance of NetApp deduplication. In general, deduplication will use CPU and memory in the controller. How much CPU will be utilized is very hard to determine ahead of time; however, in general you can expect to use from 0% to 15% of the CPU in most cases, but as much as 50% has been observed in some cases. 
The impact of deduplication on a host or application can vary significantly and depends on a number of different factors including:&lt;br /&gt;&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The application and the type of dataset being used &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The data access pattern (for example, sequential versus random access, the size and pattern of the I/O) &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The amount of duplicate data, the compressibility of the data, the amount of total data, and the average file size &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The nature of the data layout in the volume &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The amount of changed data between deduplication runs &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The number of concurrent deduplication processes and compression scanners running &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The number of volumes that have compression/deduplication enabled on the system &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The hardware platform—the amount of CPU/memory in the system &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The amount of load on the system &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Disk types (ATA/FC), and the RPM of the disks &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; The number of disk spindles in the aggregate&amp;nbsp; &lt;br /&gt;&lt;br /&gt;Deduplication is a low-priority process, so host I/O will take precedence over deduplication. 
However, all of the items above will affect the performance of the deduplication process itself.&amp;nbsp; In general you can expect to get somewhere between 100MB/sec and 200MB/sec of deduplication throughput from a NetApp controller. &lt;br /&gt;&lt;br /&gt;The effect of deduplication on the write performance of a system is very dependent on the model of controller and the amount of load that is being put on the system. For deduplicated volumes, if the load on a system is low—that is, for systems where the CPU utilization is around 50% or lower—there is a negligible difference in performance when writing data to a deduplicated volume, and there is no noticeable impact on other applications running on the system. On heavily used systems, however, where the system is nearly saturated, the impact on write performance can be expected to be around 15% for most models of controllers.&lt;br /&gt;&lt;br /&gt;Read performance of a deduplicated volume depends on the type of reads being performed. The impact on random reads is negligible. In early versions of ONTAP the impact of deduplication was noticeable with heavy sequential read applications. However, with version 7.3.1 and above, NetApp added something they called &quot;intelligent cache&quot; to ONTAP specifically to help with the performance of sequential reads on deduplicated volumes and was able to mitigate the performance impact of sequential reads nearly completely. Finally, with the addition of FlashCache cards to a controller, performance of deduplicated volumes can actually be better than non-deduplicated volumes. &lt;br /&gt;&lt;br /&gt;&lt;b&gt;Deduplication Interoperability with Snapshots&lt;/b&gt;.&lt;br /&gt;&lt;br /&gt;Snapshots and their interoperability with deduplication have been a hotly debated topic on the internet lately. Snapshot copies lock blocks on disk that cannot be freed until the Snapshot copy expires or is deleted. 
On any volume, once a Snapshot copy of data is made, any subsequent changes to that data temporarily require additional disk space, until the snapshot is deleted or expires. This is true for deduplicated volumes as well as non-deduplicated volumes. Thus, the space savings from deduplication for any data held by a snapshot taken prior to a deduplication run will not be realized until after that snapshot expires or is deleted. &lt;br /&gt;&lt;br /&gt;Some best practices to achieve the best space savings from deduplication-enabled volumes that contain Snapshot copies include: &lt;br /&gt;&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Run deduplication before creating new Snapshot copies. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Limit the number of Snapshot copies you maintain. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; If possible, reduce the retention duration of Snapshot copies. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Schedule deduplication only after significant new data has been written to the volume. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Configure appropriate reserve space for the Snapshot copies. &lt;br /&gt;&lt;br /&gt;&lt;b&gt;Some Application Best Practices&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;VMWare&lt;br /&gt;&lt;br /&gt;In general VMware deduplicates well, especially if a few best practices in laying out the VMDK files are considered. 
The following best practices should be considered for VMware implementations:&lt;br /&gt;&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Operating system data deduplicates very well; therefore, you should stack as many OSs onto the same volume as possible.&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Keep VM swap files, pagefiles, and user and system temp directories on separate VMDK files.&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Utilize FlashCache wherever possible to cache frequently accessed blocks (like those from the OS).&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Always perform proper alignment of your VMs on the NetApp 4K boundaries. &lt;br /&gt;&lt;br /&gt;Microsoft Exchange&lt;br /&gt;&lt;br /&gt;In general deduplication provides little benefit for versions of Microsoft Exchange prior to Exchange 2010. Starting with Exchange 2010, Microsoft has eliminated single instance storage, and deduplication can reclaim much of the additional space created by this change. &lt;br /&gt;&lt;br /&gt;Backups (NDMP, SnapMirror and SnapVault)&lt;br /&gt;&lt;br /&gt;The following are some best practices to consider for backups of deduplicated volumes:&lt;br /&gt;&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Ensure deduplication operations initiate only after your backup completes. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Ensure deduplication operations on the destination volume complete prior to initiating the next backup. 
&lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; If backing up data from multiple volumes to a single volume, you may achieve significant space savings from deduplication beyond that of the deduplication savings from the source volumes.&amp;nbsp; This is because you are able to run deduplication on the destination volume, which could contain duplicate data from multiple source volumes. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; If you are backing up data from your backup disk to tape, consider using SMTape to preserve the deduplication/compression savings.&amp;nbsp; Utilizing NDMP to tape will not preserve the deduplication savings on tape. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; Data compression can affect the throughput of your backups.&amp;nbsp; The amount of impact is dependent upon the type of data, its compressibility, the storage system type, and the available resources on the destination storage system.&amp;nbsp; It is important to test the effect on your environment before implementing into production. &lt;br /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; •&amp;nbsp;&amp;nbsp;&amp;nbsp; If the application that you are using to perform backups already does compression, NetApp data compression will not add significant additional savings. &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;Conclusions&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;In general, NetApp deduplication can help drive down the TCO of your storage systems significantly, especially when combined with FlashCache in a VMware or Virtual Desktop environment. If best practices are followed carefully, the performance impact of deduplication is negligible, and the space savings for some applications can be considerable. 
Some careful planning and testing in the customer&#39;s environment are necessary to ensure that maximum advantage is taken of deduplication; however, the ability to schedule when the operations take place, combined with the ability to turn deduplication on and off, provides significant flexibility to tune the environment for a customer&#39;s particular application profile.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/701752325379708314/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=701752325379708314' title='7 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/701752325379708314'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/701752325379708314'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2011/06/netapp-deduplication-in-depth-look.html' title='NetApp Deduplication An In-depth Look'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><thr:total>7</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-1869903923844143684</id><published>2011-05-30T22:51:00.000-07:00</published><updated>2011-05-30T22:55:27.873-07:00</updated><title type='text'>EMC FAST and NetApp FlashCache a Comparison</title><content type='html'>&lt;b&gt;Introduction&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;This article is intended to provide the reader with an introduction to two technologies,&amp;nbsp; EMC FAST and NetApp FlashCache. 
Both of these technologies are intended to improve the performance of storage arrays, while also helping to bend the cost curve of storage downward. With the amount of data that needs to be stored increasing on a daily basis, anything that addresses the cost of storage is a welcome addition to the data center portfolio.&lt;br /&gt;&lt;br /&gt;&lt;b&gt;EMC FAST&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;EMC FAST (Fully Automated Storage Tiering) is actually a suite made up of two different products. The first, called FAST Cache, operates by keeping a copy of &quot;hot&quot; blocks of data on SSD drives. In effect it acts as a very fast disk cache for data that is currently being accessed, while the data itself is stored on either 15K SAS or 7200 RPM NL-SAS (SATA) drives. &lt;br /&gt;&lt;br /&gt;FAST Cache provides the ability to improve the performance of SATA drives, as well as to turbocharge the performance of Fibre Channel and SAS drives. In general, this kind of technology helps to decouple performance from spindle count, which helps drive down the number of drives required for many workloads, thus driving down the cost of storage, and the overall TCO of storage.&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://1.bp.blogspot.com/-NN97t3uRmeQ/TeSCXfIbujI/AAAAAAAAACQ/Js0Qv0IbfVo/s1600/i1.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;210&quot; src=&quot;http://1.bp.blogspot.com/-NN97t3uRmeQ/TeSCXfIbujI/AAAAAAAAACQ/Js0Qv0IbfVo/s320/i1.jpg&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;The other product in the FAST suite is FAST Virtual Pool.&amp;nbsp; This is the product that most people associate with FAST since it is the one that leverages three different disk technologies: SSD, high speed drives such as 15K RPM SAS, and slower, high capacity 
drives such as 7200 RPM NL-SAS. By placing only data that requires high speed access on the SSD drives, data that receives a moderate amount of access on the 15K SAS drives, and putting the rest on the slower, high capacity disks, EMC FAST is able to drive the TCO of storage downward.&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://3.bp.blogspot.com/-5SaNnt1oElY/TeSCX5LL24I/AAAAAAAAACU/irOerV3zy8g/s1600/i2.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;320&quot; src=&quot;http://3.bp.blogspot.com/-5SaNnt1oElY/TeSCX5LL24I/AAAAAAAAACU/irOerV3zy8g/s320/i2.jpg&quot; width=&quot;184&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;&lt;b&gt;NetApp FlashCache&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;NetApp approaches the overall issue of improving performance while simultaneously driving down the TCO of storage in a different way. NetApp believes that using fewer disks to store the same amount of data is the best way to drive down TCO. Therefore NetApp has spent a significant amount of time developing storage efficiency tools to help their customers store more data in less space.&amp;nbsp; For example, they developed a variant of RAID-6 called RAID-DP which provides the protection and performance of RAID-10, while utilizing significantly less space. NetApp has also developed block level de-duplication which can be utilized with primary production data. &lt;br /&gt;&lt;br /&gt;However, as with many technologies of this type, there can be a performance penalty paid for its utilization. Therefore, NetApp needed to develop a way to improve the performance of its arrays while also supporting its storage efficiency technology. With the advent of Flash memory, NetApp found a way to do this without any need for significant changes in the architecture of its arrays. 
Thus was born FlashCache. &lt;br /&gt;&lt;br /&gt;FlashCache provides a secondary read cache for hot blocks of data. This provides a way to separate performance from spindle count,&amp;nbsp; and thus not only allows workloads intended for Fibre Channel or SAS drives to potentially run on SATA drives, but it also addresses some of the performance issues with the storage efficiency technologies that NetApp developed. For example, with FlashCache utilized in a virtual desktop environment, NetApp de-duplication allows many individual Windows images to be represented in a very small footprint on disk. However, a problem arises when a large number of desktops all try to access their Windows image at once. With the addition of FlashCache, most, if not all, of the Windows image ends up being stored in Flash memory, thus avoiding the performance issue of a boot storm, virus checking storm, etc.&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://3.bp.blogspot.com/-II-6a8XSNgY/TeSBSR1YNQI/AAAAAAAAACM/VcXTDQrzzag/s1600/I3.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;239&quot; src=&quot;http://3.bp.blogspot.com/-II-6a8XSNgY/TeSBSR1YNQI/AAAAAAAAACM/VcXTDQrzzag/s320/I3.jpg&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;b&gt;Conclusion&lt;/b&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Both EMC and NetApp have developed ways to improve performance and drive the TCO of storage downward. The two vendors approached the problem in somewhat different ways, but in the end they have both solved it in unique and effective ways.&amp;nbsp; &lt;br /&gt;&lt;br /&gt;The NetApp technology requires that the user buy in completely to the NetApp vision of storage efficiency. 
If the user ignores the advantages of de-duplication in particular, or has data or workloads that simply don&#39;t allow for the application of the NetApp storage efficiency technology, then the TCO savings that NetApp promises will not be achieved. Utilizing FlashCache to separate performance from spindle count is also critical in maintaining the performance of the array. This separation of performance from spindle count also, in and of itself, drives down the number of drives needed to support a workload, and thus also drives down the TCO.&lt;br /&gt;&lt;br /&gt;The EMC technology requires a very good understanding of your application workloads, and careful planning and sizing of the different tiers of storage. EMC could do more to make the two sub-products work together so that a single solution could provide both the TCO and the performance improvements at the same time. However, EMC FAST is a product that provides the TCO improvement promised, and does it with a clean and elegant solution.&lt;br /&gt;&lt;br /&gt;Finally, a little on the future. With the cost of Flash memory coming down 50% year over year, it will soon reach the price point that we currently see for 15K HDDs. Once that happens, one has to wonder what role 15K HDDs will fill. If 15K HDDs are, indeed, squeezed out of existence by this reduction in the price of Flash memory, what purpose will 3-tiered automated storage tiering serve? Or, will the future simply be 2 tiers of storage, one that provides bulk capacity, and one that accelerates the performance of this bulk capacity? 
If that prediction is correct, then FAST VP will have a limited life, and FAST Cache and FlashCache will be the longer-surviving technologies.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/1869903923844143684/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=1869903923844143684' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1869903923844143684'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/1869903923844143684'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2011/05/emc-fast-and-netapp-flashcache.html' title='EMC FAST and NetApp FlashCache a Comparison'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://1.bp.blogspot.com/-NN97t3uRmeQ/TeSCXfIbujI/AAAAAAAAACQ/Js0Qv0IbfVo/s72-c/i1.jpg" height="72" width="72"/><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-720961704184714048.post-8166580940513276073</id><published>2011-05-20T11:12:00.000-07:00</published><updated>2011-05-20T11:12:43.284-07:00</updated><title type='text'>Flash Storage and Automated Storage Tiering</title><content type='html'>In recent years, a move toward automated storage tiering has begun in the data center. 
This move has been inspired by the desire to continue to drive down the cost of storage, as well as the introduction of faster, but more expensive, storage in the form of Flash memory in the storage array marketplace. Flash memory is significantly faster than spinning disks, and thus its ability to provide very high performance storage has been of interest. However, its cost is considerable, and therefore a way to utilize it and still bend the cost curve downward was needed. Note that Flash memory has been implemented in different ways. It can be obtained as a card for the storage array controller, as SSD disk drives, and even as cache on regular spinning disks. However it is implemented, its speed and expense remain the same.&lt;br /&gt;&lt;br /&gt;Enter the concept of tiered storage again. The idea was to place only that data which absolutely required the very high performance of Flash on Flash, and to leave the remaining data on spinning disk. The challenge with tiered storage as it has been defined in the past was that too much data would be placed on very expensive Flash, since traditionally an entire application would have all its data placed on a single tier. Even if only specific parts of the data at the file or LUN level were placed on Flash, the quantity needed would still be very high, thus driving the cost for a particular application up. It was quickly recognized that the only way to make Flash cost effective would be to place only the blocks which are “hot” for an application in Flash storage, thereby minimizing the footprint of Flash storage.&lt;br /&gt;&lt;br /&gt;The issue addressed by automated storage tiering is that you no longer need to know ahead of time what the proper tier of storage for a particular application’s data needs to be. 
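&lt;br /&gt;&lt;br /&gt;The idea of keeping only the “hot” blocks on Flash can be sketched roughly as follows. This is a toy illustration in Python, not any vendor’s actual heat-tracking algorithm, and the trace format is an assumption:

```python
from collections import Counter

def hottest_blocks(access_log, flash_capacity_blocks):
    """Pick the block IDs that earn a spot on the Flash tier.

    access_log is an iterable of block IDs, one entry per I/O
    (a hypothetical trace format for illustration).
    """
    heat = Counter(access_log)    # access count per block
    # Hottest blocks first; only the top slice fits on Flash.
    return set(blk for blk, _ in heat.most_common(flash_capacity_blocks))

# A skewed toy trace: block 7 is hot, the others are touched once each.
trace = [7, 7, 7, 7, 1, 2, 3, 7, 4, 7]
print(hottest_blocks(trace, 2))
```

The point of working at block granularity is visible even in this toy: only the genuinely hot blocks consume Flash capacity, while the rest of the application’s data stays on cheap spinning disk.&lt;br /&gt;&lt;br /&gt;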
Furthermore, the classification of the data can occur at a much more fine-grained block level, rather than at the file or LUN level as with some earlier automated storage tiering implementations.&lt;br /&gt;&lt;br /&gt;Flash has changed the landscape of storage for the enterprise. Currently, Flash/SSD storage can cost 16-20X what Fiber Channel, SAS, or SATA storage costs. The dollars per GB model ends up looking something like the following:&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://1.bp.blogspot.com/-epdVz-bwsi8/TdatYM_WjTI/AAAAAAAAAB8/RNBssUAd1QA/s1600/pic1.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;http://1.bp.blogspot.com/-epdVz-bwsi8/TdatYM_WjTI/AAAAAAAAAB8/RNBssUAd1QA/s400/pic1.png&quot; width=&quot;392&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;However, the IOPS per $ model looks more like this:&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;http://4.bp.blogspot.com/-LnMVDmm1_wg/TdatindCiUI/AAAAAAAAACA/7OIqz0hRFvk/s1600/pic2.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;http://4.bp.blogspot.com/-LnMVDmm1_wg/TdatindCiUI/AAAAAAAAACA/7OIqz0hRFvk/s400/pic2.png&quot; width=&quot;363&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;br /&gt;The impact of Flash storage on the tiered storage architectural model has been, in effect, to add a tier-0 level of storage where application 
data is placed that requires extremely fast random I/O performance. Typical examples of such data are database index tables, key lookup tables, etc. Placing this kind of data, which may only be part of an application’s data, on Flash storage can often have a dramatically positive effect on the performance of an application. However, due to the cost of Flash storage, the question is often raised: how can data centers ensure that only data requiring this level of performance resides on SSD or Flash storage, so that they can continue to contain costs? Furthermore, is there a way to put only the “hot” parts of the data in the very expensive tier-0 capacity, and leave less-hot and cold data in slower, less expensive capacity? Block-based automated storage tiering is the answer to these questions.&lt;br /&gt;&lt;br /&gt;Different storage array vendors have approached this problem in different ways. However, in all cases, the objective is to place data, at a block level, on tier-0 or Flash storage only while that data is actually being accessed, and to store the rest of the data on lower-tiered storage while the data is at rest. Note that this movement must be done at the block level in order to avoid performance issues, and to truly minimize the capacity of the tier-0 storage. &lt;br /&gt;&lt;br /&gt;One approach used by several storage vendors is to move blocks of data between multiple tiers of storage via a policy. For example, the policy might dictate that writes always occur to tier-0, and then, if that data is not read immediately, it is moved to tier-1. Then, if the data isn’t read for 3 months, it is moved to tier-2. The policy might also dictate that if the data is then read from the tier-2 disk, it is placed back on tier-0 in case additional reads are required, and the entire process starts all over again. 
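&lt;br /&gt;&lt;br /&gt;That policy-driven movement amounts to a small state machine per block. A minimal sketch in Python, using the example policy just described; the one-day threshold is an assumed stand-in for “not read immediately”, and neither value is a vendor default:

```python
import time

# Illustrative thresholds from the example policy in the text;
# actual values would be site policy, not a vendor default.
DEMOTE_TO_T1_SECS = 24 * 3600         # not re-read within a day
DEMOTE_TO_T2_SECS = 90 * 24 * 3600    # untouched for roughly 3 months

class Block:
    def __init__(self):
        self.tier = 0                 # writes always land on tier-0
        self.last_read = time.time()

    def on_read(self, now):
        # A read from tier-2 promotes the block back to tier-0,
        # in case additional reads follow.
        if self.tier == 2:
            self.tier = 0
        self.last_read = now

    def age_out(self, now):
        # Called periodically; demotes idle blocks one tier at a time.
        idle = now - self.last_read
        if self.tier == 0 and idle >= DEMOTE_TO_T1_SECS:
            self.tier = 1
        elif self.tier == 1 and idle >= DEMOTE_TO_T2_SECS:
            self.tier = 2
```

Even this toy makes the operational cost visible: every promotion and demotion is a data movement the array must perform, which is why the thresholds have to match the application’s I/O profile.&lt;br /&gt;&lt;br /&gt;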
Logically, this mechanism provides what enterprises are looking for: minimizing tier-0 storage and placing blocks of data on the lowest-cost storage possible. The challenge with this approach is that the I/O profile of the application needs to be well understood when the policies are developed, in order to avoid accessing data from tier-2 storage too frequently and, more generally, moving data up and down the stack too often, since this movement is not “free” from a performance perspective. Additionally, EVT has found that for most customers, data rarely needs to spend time on tier-1 (FC or SAS) storage; most of the data ends up spending most of its life on SATA storage.&lt;br /&gt;&lt;br /&gt;Therefore, as the cost of Flash storage continues to come down, the need for SAS or Fiber Channel storage will continue to decline, and eventually disappear, leaving just Flash and SATA storage in most arrays.&lt;br /&gt;&lt;br /&gt;Another approach, used by at least one storage vendor, is to avoid all the policy-based movement and to treat the Flash storage as a large read cache. This places the blocks that are most used on tier-0, and leaves the rest on spinning disk. When the fact that the sequential write performance of Flash, SAS/FC, and SATA is similar is taken into consideration, along with a controller that orders its random writes, this approach can provide a much more robust way to implement Flash storage. In some cases, it allows an application that would normally require SAS or Fiber Channel storage to utilize SATA disks instead. 
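&lt;br /&gt;&lt;br /&gt;The read-cache approach can be sketched as a simple LRU cache sitting in front of the spinning-disk tier. This is a toy model only; real array controllers also handle writes, persistence, and far smarter admission policies:

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy LRU read cache standing in for the Flash tier (a sketch,
    not any vendor's implementation)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()    # block_id to data, coldest first

    def read(self, block_id, backing_store):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # cache hit: re-mark as hot
            return self.blocks[block_id]
        data = backing_store[block_id]          # miss: fetch from spinning disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the coldest block
        return data
```

Reads of resident blocks are served at Flash speed while cold blocks fall out on their own, which is why no policy authoring or background block movement is needed with this approach.&lt;br /&gt;&lt;br /&gt;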
In general, this technique de-couples spindle count from performance, thus providing more subtle advantages as well. For example, applications which have traditionally required many very small disk drives so that the spindle count would be high (many, many 146GB FC drives, for example) can now be run on much higher capacity 600GB SAS drives and still get the same, or better, performance. &lt;br /&gt;&lt;br /&gt;Overall, automated storage tiering is becoming a de-facto standard in the storage industry. Different storage array vendors have taken very different approaches to its implementation, but in the end the result is uniformly the same: the ability of the enterprise to purchase Flash storage to help improve the performance of their applications while, at the same time, continuing to bend the cost curve of storage downward.</content><link rel='replies' type='application/atom+xml' href='http://joergsstorageblog.blogspot.com/feeds/8166580940513276073/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment.g?blogID=720961704184714048&amp;postID=8166580940513276073' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/8166580940513276073'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/720961704184714048/posts/default/8166580940513276073'/><link rel='alternate' type='text/html' href='http://joergsstorageblog.blogspot.com/2011/05/flash-storage-and-automated-storage.html' title='Flash Storage and Automated Storage Tiering'/><author><name>Joerg Hallbauer</name><uri>http://www.blogger.com/profile/13798325042032407345</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='32' height='32' 
src='//4.bp.blogspot.com/-8eM_7tnJcQM/XPrzoUAqtII/AAAAAAAAAIQ/9IKhqmPZIJM_lo08_2e_ThWf5BoCog-ogCK4BGAYYCw/s220/IMG_1837%2Bcopy.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://1.bp.blogspot.com/-epdVz-bwsi8/TdatYM_WjTI/AAAAAAAAAB8/RNBssUAd1QA/s72-c/pic1.png" height="72" width="72"/><thr:total>1</thr:total></entry></feed>