<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" media="screen" href="/~d/styles/atom10full.xsl"?><?xml-stylesheet type="text/css" media="screen" href="http://feeds.feedburner.com/~d/styles/itemcontent.css"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" xmlns:feedburner="http://rssnamespace.org/feedburner/ext/1.0"><id>tag:blogger.com,1999:blog-5589634522109419319</id><updated>2017-07-13T09:44:18.167-07:00</updated><category term="Compute" /><category term="Storage &amp; Databases" /><category term="Open Source" /><category term="Big Data &amp; Machine Learning" /><category term="Developer Tools &amp; Insights" /><category term="Customers" /><category term="Partners" /><category term="Announcements" /><category term="Containers &amp; Kubernetes" /><category term="Management Tools" /><category term="Events" /><category term="Security &amp; Identity" /><category term="Infrastructure" /><category term="Pricing" /><category term="Networking" /><category term="Stackdriver" /><category term="Solutions" /><category term="Weekly Roundups" /><category term="CRE" /><title type="text">Google Cloud Platform Blog</title><subtitle type="html">Product updates, customer stories, and tips and tricks on Google Cloud Platform</subtitle><link rel="alternate" type="text/html" href="http://cloudplatform.googleblog.com/" /><link rel="next" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default?start-index=26&amp;max-results=25&amp;redirect=false" /><author><name>Google Blogs</name><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="32" height="32" src="//lh3.googleusercontent.com/-SMZmHeOVbFs/AAAAAAAAAAI/AAAAAAAAR7c/esUAZEvmr9M/s512-c/photo.jpg" /></author><generator 
version="7.00" uri="http://www.blogger.com">Blogger</generator><openSearch:totalResults>1074</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><atom10:link xmlns:atom10="http://www.w3.org/2005/Atom" rel="self" type="application/atom+xml" href="http://feeds.feedburner.com/ClPlBl" /><feedburner:info uri="clplbl" /><atom10:link xmlns:atom10="http://www.w3.org/2005/Atom" rel="hub" href="http://pubsubhubbub.appspot.com/" /><feedburner:emailServiceId>ClPlBl</feedburner:emailServiceId><feedburner:feedburnerHostname>https://feedburner.google.com</feedburner:feedburnerHostname><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-4114491257625116809</id><published>2017-07-13T04:00:00.001-07:00</published><updated>2017-07-13T04:04:24.292-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Infrastructure" /><title type="text">Google Cloud Platform now open in London</title><content type="html">&lt;span class="byline-author"&gt;By Dave Stiver, Product Manager, Google Cloud Platform&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Starting today, &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) customers can use the new London region (&lt;a href="https://cloud.google.com/london" target="_blank"&gt;europe-west2&lt;/a&gt;) to run applications and store data in the UK. London is our tenth region, joining our existing European region in Belgium. Future European regions include Frankfurt, the Netherlands and Finland.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-CmXngBPG6D0/WWZpRN17YvI/AAAAAAAAEGs/WFdYXGaSeY4fvirFwKli5aZorh1E9AEfACLcBGAs/s1600/london-region-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1012" data-original-width="1326" height="488" src="https://2.bp.blogspot.com/-CmXngBPG6D0/WWZpRN17YvI/AAAAAAAAEGs/WFdYXGaSeY4fvirFwKli5aZorh1E9AEfACLcBGAs/s640/london-region-1.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
Incredible user experiences hinge on performant infrastructure. GCP customers throughout the British Isles and Western Europe will see significant reductions in latency when they run their workloads in the London region. In cities like London, Dublin, Edinburgh and Amsterdam, our performance testing shows 40% to 82% reductions in round-trip latency when serving customers from London compared with the Belgium region.&lt;br /&gt;
&lt;br /&gt;
We’ve launched London with three zones and the following services:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-bD2CHGeZF-M/WWZpabkzBMI/AAAAAAAAEGw/SdKJxuGWERENafkzfwJB5-syePj2kKiAACLcBGAs/s1600/london-region-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="342" data-original-width="684" height="320" src="https://2.bp.blogspot.com/-bD2CHGeZF-M/WWZpabkzBMI/AAAAAAAAEGw/SdKJxuGWERENafkzfwJB5-syePj2kKiAACLcBGAs/s640/london-region-2.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
The London region puts control over where to deploy resources directly in the hands of GCP customers&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;giving them a choice, for some GCP services, of where to run their applications and store their data. When customers sign up for GCP services, they have three options, depending on the service:&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;Regional: Run applications and store data in a specific region, e.g., London, Tokyo, Iowa, etc.&lt;/li&gt;
&lt;li&gt;Multi-regional: Distribute applications and storage across two or more cloud regions on a given continent, e.g., Americas, Asia or Europe.&lt;/li&gt;
&lt;li&gt;Global: Distribute applications and store data globally across our entire global network for optimal performance and redundancy.&lt;/li&gt;
&lt;/ol&gt;
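To make the regional option concrete, here’s a quick sketch using the gcloud and gsutil command-line tools; the instance and bucket names are placeholders, and each service’s availability in London is subject to the list above:&lt;br /&gt;

```shell
# Create a Compute Engine VM in a specific London zone (regional placement)
gcloud compute instances create example-instance --zone=europe-west2-a

# Create a Cloud Storage bucket whose data stays in the London region
gsutil mb -l europe-west2 gs://example-bucket-name/
```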
In addition, we've worked diligently over the last decade to help customers directly address EU data protection requirements. Most recently, Google &lt;a href="https://www.blog.google/topics/google-cloud/google-cloud-our-commitment-general-data-protection-regulation-gdpr/" target="_blank"&gt;announced a commitment to GDPR compliance across GCP&lt;/a&gt;. The General Data Protection Regulation (GDPR), which takes effect on May 25, 2018, is the most significant piece of European privacy legislation in the last 20 years.&lt;br /&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;Google’s decision to choose London for its latest Google Cloud Region is another vote of confidence in our world-leading digital economy and proof Britain is open for business. It's great, but not surprising, to hear they've picked the UK because of the huge demand for this type of service from the nation's firms. Earlier this week the Digital Evolution Index named us among the most innovative digital countries in the world and there has been a record £5.6bn investment in tech in London in the past six months.&lt;/i&gt;"&amp;nbsp;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 14.6667px; white-space: pre-wrap;"&gt;— &lt;/span&gt; Karen Bradley, Secretary of State for Digital, Culture, Media and Sport &lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-G1wvrhj_bww/WWbvISk8ACI/AAAAAAAAEHM/RBNocwt5r3sHB4PE-FEesI7oUY7MEukQACLcBGAs/s1600/london-region-7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="928" data-original-width="1280" height="232" src="https://4.bp.blogspot.com/-G1wvrhj_bww/WWbvISk8ACI/AAAAAAAAEHM/RBNocwt5r3sHB4PE-FEesI7oUY7MEukQACLcBGAs/s320/london-region-7.png" width="320" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;At WP Engine, we look forward to extending our digital experience platform to an even broader set of our 10,000 European customers who want to be hosted on Google Cloud Platform based in the London region. We are excited about bringing reduced latency benefits from the ability to store and process data in London to our UK customers."&lt;/i&gt;&amp;nbsp;&amp;nbsp;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 14.6667px; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;Jason Cohen, Founder and CTO&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-lwvHRg0bR3I/WWbwOljQ4hI/AAAAAAAAEHQ/fuNIWHMYpzoY2ga86DPu_IKBVOICXEiOwCLcBGAs/s1600/london-region-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="322" data-original-width="1103" height="58" src="https://4.bp.blogspot.com/-lwvHRg0bR3I/WWbwOljQ4hI/AAAAAAAAEHQ/fuNIWHMYpzoY2ga86DPu_IKBVOICXEiOwCLcBGAs/s200/london-region-4.png" width="200" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;The Telegraph benefits greatly from Google Cloud’s global scale and is pleased to see continued investment from Google Cloud in the UK. We look forward to working with them closely as they expand their business in the UK and Europe."&amp;nbsp;&lt;/i&gt;&amp;nbsp;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 14.6667px; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;Toby Wright, CTO, The Telegraph&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-07SmHdFoTCw/WWZqSfj3LoI/AAAAAAAAEG4/NVBf2E3cMEsKYEsD0ePNLzIcaDQNG3oNwCLcBGAs/s1600/london-region-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="55" data-original-width="320" height="34" src="https://3.bp.blogspot.com/-07SmHdFoTCw/WWZqSfj3LoI/AAAAAAAAEG4/NVBf2E3cMEsKYEsD0ePNLzIcaDQNG3oNwCLcBGAs/s200/london-region-3.png" width="200" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;Google Cloud enables Revolut to try new ideas and stay agile while providing secure, reliable services for our customers at scale."&lt;/i&gt;&amp;nbsp;&amp;nbsp;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 14.6667px; white-space: pre-wrap;"&gt;— &lt;/span&gt;Vladyslav Yatsenko, Co-founder &amp;amp; CTO, Revolut&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-dIkx2QYjTHU/WWZqh0261FI/AAAAAAAAEG8/GxXI0MEQH2E-E8oqzwwrhznKHyMg-tZWQCLcBGAs/s1600/london-region-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="431" data-original-width="1600" height="53" src="https://2.bp.blogspot.com/-dIkx2QYjTHU/WWZqh0261FI/AAAAAAAAEG8/GxXI0MEQH2E-E8oqzwwrhznKHyMg-tZWQCLcBGAs/s200/london-region-5.png" width="200" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
For the latest on the terms of availability for services from this new region as well as additional regions and services, visit our&amp;nbsp;&lt;a href="https://cloud.google.com/about/locations/london" target="_blank"&gt;London region page&lt;/a&gt; or &lt;a href="https://cloud.google.com/about/locations/" target="_blank"&gt;locations page&lt;/a&gt;. For guidance on how to build and create highly available applications, take a look at our &lt;a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank"&gt;zones and regions page&lt;/a&gt;. Give us a &lt;a href="https://goo.gl/forms/U5qAZB1tGR9NUlgB2" target="_blank"&gt;shout&lt;/a&gt; to request early access to new regions and help us prioritize what we build next.&lt;br /&gt;
&lt;br /&gt;
We’re excited to see what you’ll build on top of the new London region!&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/Yam9X6YQ-Fo" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/4114491257625116809" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/4114491257625116809" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/Yam9X6YQ-Fo/Google-Cloud-Platform-now-open-in-London.html" title="Google Cloud Platform now open in London" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://2.bp.blogspot.com/-CmXngBPG6D0/WWZpRN17YvI/AAAAAAAAEGs/WFdYXGaSeY4fvirFwKli5aZorh1E9AEfACLcBGAs/s72-c/london-region-1.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/Google-Cloud-Platform-now-open-in-London.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-7142496099670783747</id><published>2017-07-12T08:00:00.000-07:00</published><updated>2017-07-13T04:08:44.946-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><category scheme="http://www.blogger.com/atom/ns#" term="Open Source" /><title type="text">Container Engine now runs Kubernetes 1.7 to drive enterprise-ready secure hybrid 
workloads</title><content type="html">&lt;span class="byline-author"&gt;By Aparna Sinha, Group Product Manager, Container Engine&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Just over a week ago, Google led the most recent open source release of Kubernetes 1.7, and today that version is available on &lt;a href="https://cloud.google.com/container-engine/" target="_blank"&gt;Container Engine&lt;/a&gt;, &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt;’s (GCP) managed container service. Container Engine is one of the first commercial &lt;a href="https://kubernetes.io/" target="_blank"&gt;Kubernetes&lt;/a&gt; offerings running the latest 1.7 release, and includes differentiated features for enterprise security, extensibility, hybrid networking and developer efficiency. Let’s take a look at what’s new in Container Engine. &lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Enterprise security&lt;/h3&gt;
&lt;br /&gt;
Container Engine is designed with enterprise security in mind. By default, Container Engine clusters run a minimal, Google curated &lt;a href="https://cloud.google.com/container-optimized-os/" target="_blank"&gt;Container-Optimized OS&lt;/a&gt; (COS) to help minimize OS vulnerabilities. On top of that, a team of Google &lt;a href="https://landing.google.com/sre/" target="_blank"&gt;Site Reliability Engineers&lt;/a&gt; continuously monitor and manage the Container Engine clusters, so you don’t have to. Now, Container Engine adds several new security enhancements:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Starting with this release, each kubelet only has access to the objects it needs. The &lt;a href="https://kubernetes.io/docs/admin/authorization/node/" target="_blank"&gt;Node authorizer&lt;/a&gt; (beta) restricts each kubelet’s API access to resources (such as secrets) belonging to its scheduled pods, which helps protect a cluster from a compromised or untrusted node.&lt;/li&gt;
&lt;li&gt;Network isolation can be an important extra boundary for sensitive workloads. The Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" target="_blank"&gt;NetworkPolicy API&lt;/a&gt; allows users to control which pods can communicate with each other, providing defense-in-depth and improving secure multi-tenancy. Policy enforcement can now be enabled in &lt;a href="https://cloud.google.com/container-engine/docs/alpha-clusters" target="_blank"&gt;alpha clusters&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/ingress/tree/master/controllers/gce#backend-https" target="_blank"&gt;HTTP re-encryption&lt;/a&gt; through &lt;a href="https://cloud.google.com/load-balancing/" target="_blank"&gt;Google Cloud Load Balancing&lt;/a&gt; (GCLB) allows customers to use HTTPS from the GCLB to their service backends. This is an often requested feature that gives customers the peace of mind knowing that their data is fully encrypted in-transit even after it enters Google’s global network.&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
Together, these features improve workload isolation within a cluster, a frequently requested security capability in Kubernetes. The Node authorizer and NetworkPolicy can be combined with the existing RBAC controls in Container Engine to improve the foundations of multi-tenancy:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Network isolation between Pods (network policy)&lt;/li&gt;
&lt;li&gt;Resource isolation between Nodes (node authorizer)&lt;/li&gt;
&lt;li&gt;Centralized control over cluster resources (RBAC)&lt;/li&gt;
&lt;/ul&gt;
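As a minimal sketch of the NetworkPolicy API described above (the pod labels here are illustrative, and policy enforcement must be enabled on the cluster), this policy admits traffic to "db" pods only from "api" pods in the same namespace:&lt;br /&gt;

```yaml
# Deny all other ingress to pods labeled app=db; allow only pods labeled app=api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
```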
&lt;h3&gt;
Enterprise and hybrid networks&lt;/h3&gt;
&lt;br /&gt;
Perhaps the features most eagerly awaited by our enterprise users are hybrid cloud networking and VPN support in Container Engine. New in this release:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/container-engine/docs/ip-masquerade-agent" target="_blank"&gt;GA Support for all private IP (RFC-1918) addresses&lt;/a&gt;, allowing users to create clusters and access resources in all private IP ranges and extending the ability to use Container Engine clusters with existing networks.&lt;/li&gt;
&lt;li&gt;Exposing services by &lt;a href="https://cloud.google.com/container-engine/docs/internal-load-balancing" target="_blank"&gt;internal load balancing&lt;/a&gt; is beta, allowing Kubernetes and non-Kubernetes services to access one another on a private network&lt;sup&gt;1&lt;/sup&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" target="_blank"&gt;Source IP preservation&lt;/a&gt; is now generally available and allows applications to be fully aware of client IP addresses for services exposed through Kubernetes&lt;/li&gt;
&lt;/ul&gt;
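As a hedged sketch of the internal load balancing beta (the service name and ports are placeholders; the annotation shown is the one documented for the beta feature):&lt;br /&gt;

```yaml
# LoadBalancer Service provisioned with a private (RFC 1918) IP instead of a public one
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 8080
```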
&lt;br /&gt;
&lt;h3&gt;
Enterprise extensibility&lt;/h3&gt;
As more enterprises use Container Engine, we're making a major investment to improve extensibility. We heard feedback that customers want to offer custom Kubernetes-style APIs in their clusters.&lt;br /&gt;
&lt;br /&gt;
&lt;a href="https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/" target="_blank"&gt;API Aggregation&lt;/a&gt;, launching today in beta on Container Engine, enables you to extend the Kubernetes API with custom APIs. For example, you can now add existing API solutions such as &lt;a href="https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md" target="_blank"&gt;service catalog&lt;/a&gt;, or build your own in the future. &lt;br /&gt;
&lt;br /&gt;
Users also want to incorporate custom business logic and third-party solutions into their Container Engine clusters. So we’re introducing &lt;a href="https://kubernetes.io/docs/admin/extensible-admission-controllers/" target="_blank"&gt;Dynamic Admission Control&lt;/a&gt; in alpha clusters, providing two ways to add business logic to your cluster:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Initializers can modify Kubernetes objects as they are created. For example, you can use an initializer to add &lt;a href="https://istio.io/" target="_blank"&gt;Istio&lt;/a&gt; capability to a Container Engine alpha cluster, by injecting an Istio sidecar container in every Pod deployed.&lt;/li&gt;
&lt;li&gt;Webhooks enable you to validate enterprise policy. For example, you can verify that containers being deployed pass your enterprise security audits.&lt;/li&gt;
&lt;/ul&gt;
As part of our plans to improve extensibility for enterprises, we're replacing the Third Party Resource (TPR) API with the improved Custom Resource Definition (CRD) API. CRDs are a lightweight way to store structured metadata in Kubernetes, making it easy to interact with custom controllers via kubectl. If you use the TPR beta feature, please plan to &lt;a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/migrate-third-party-resource/" target="_blank"&gt;migrate&lt;/a&gt; to CRD before upgrading to the 1.8 release.&lt;br /&gt;
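For illustration, a CRD is declared like any other Kubernetes object (the group and resource names below are invented); once applied, kubectl can manage the new resource type directly:&lt;br /&gt;

```yaml
# Register a custom "CronTab" resource; afterwards, `kubectl get crontabs` just works
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```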
&lt;br /&gt;
&lt;h3&gt;
Workload diversity&lt;/h3&gt;
&lt;br /&gt;
Container Engine now enhances your ability to run stateful workloads like databases and key-value stores, such as ZooKeeper, with a new automated application update capability. You can:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Select from a range of &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies" target="_blank"&gt;StatefulSet update strategies&lt;/a&gt; (beta), including rolling updates&lt;/li&gt;
&lt;li&gt;Optimize roll-out speed with parallel or ordered pod provisioning, particularly useful for applications such as Kafka.&lt;/li&gt;
&lt;/ul&gt;
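In a StatefulSet manifest, those two knobs look roughly like this (a fragment only; the surrounding spec is omitted):&lt;br /&gt;

```yaml
# StatefulSet spec fragment (field names per the Kubernetes 1.7 docs linked above)
spec:
  podManagementPolicy: Parallel   # provision pods in parallel rather than one at a time
  updateStrategy:
    type: RollingUpdate           # roll pods on spec changes instead of requiring manual deletion
```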
A popular workload on Google Cloud and Container Engine is training machine learning models for better predictive analytics. Many of you have requested GPUs to speed up training time, so we’ve updated Container Engine to support NVIDIA K80 GPUs in &lt;a href="https://cloud.google.com/container-engine/docs/alpha-clusters" target="_blank"&gt;alpha clusters&lt;/a&gt; for experimentation with this exciting feature. We’ll support additional GPUs in the future.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Developer efficiency&lt;/h3&gt;
&lt;br /&gt;
When developers don’t have to worry about infrastructure, they can spend more time building applications. Kubernetes provides building blocks to de-couple infrastructure and application management, and Container Engine builds on that foundation with best-in-class automation features.&lt;br /&gt;
&lt;br /&gt;
We’ve automated large parts of maintaining the health of the cluster, with auto-repair and auto-upgrade of nodes.&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/container-engine/docs/node-auto-repair" target="_blank"&gt;Auto-repair&lt;/a&gt; beta keeps your cluster healthy by proactively monitoring for unhealthy nodes and repairs them automatically without developer involvement.&lt;/li&gt;
&lt;li&gt;In this release, Container Engine’s &lt;a href="https://cloud.google.com/container-engine/docs/node-auto-upgrade" target="_blank"&gt;auto-upgrade&lt;/a&gt; beta capability incorporates &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" target="_blank"&gt;Pod Disruption Budgets&lt;/a&gt; at the node layer, making upgrades to infrastructure and application controllers predictable and safer.&lt;/li&gt;
&lt;/ul&gt;
Container Engine also offers cluster- and pod-level auto-scaling so applications can respond to user demand without manual intervention. This release introduces several GCP-optimized enhancements to cluster autoscaling:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Support for scaling node pools to 0 or 1, for when you don’t need capacity&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders" target="_blank"&gt;Price-based expander&lt;/a&gt; for auto-scaling in the most cost-effective way&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler" target="_blank"&gt;Balanced scale-out&lt;/a&gt; of similar node groups, useful for clusters that span multiple zones&lt;/li&gt;
&lt;/ul&gt;
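As one illustrative way to use the zero-node floor, autoscaling bounds can be set when a node pool is created; the cluster and pool names here are placeholders, and flags may vary by gcloud release:&lt;br /&gt;

```shell
# Node pool that scales between 0 and 5 nodes based on pending pods
gcloud container node-pools create burst-pool \
    --cluster=example-cluster \
    --enable-autoscaling --min-nodes=0 --max-nodes=5
```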
&lt;br /&gt;
The combination of auto-repair, auto-upgrades and cluster autoscaling in Container Engine enables application developers to deploy and scale their apps without being cluster admins.&lt;br /&gt;
&lt;br /&gt;
We’ve also updated the &lt;a href="http://console.cloud.google.com/kubernetes" target="_blank"&gt;Container Engine UI&lt;/a&gt; to assist in debugging and troubleshooting by including detailed workload-related views. For each workload, we show the type (DaemonSet, Deployment, StatefulSet, etc.), running status, namespace and cluster. You can also debug each pod and view annotations, labels, the number of replicas, status and more. All views are cross-cluster, so if you're using multiple clusters, these views let you focus on your workloads no matter where they run. We also include load balancing and configuration views with deep links to GCP networking, storage and compute. This new UI will be rolling out in the coming week.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-KKyKEnWa9Go/WWVJboVL-FI/AAAAAAAAEGI/Y6I7waifyIAiFxObYw17LKJQVTEy82FGwCLcBGAs/s1600/kubernetes-1.7-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="427" data-original-width="933" height="292" src="https://4.bp.blogspot.com/-KKyKEnWa9Go/WWVJboVL-FI/AAAAAAAAEGI/Y6I7waifyIAiFxObYw17LKJQVTEy82FGwCLcBGAs/s640/kubernetes-1.7-1.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;h3&gt;
Container Engine everywhere&lt;/h3&gt;
&lt;br /&gt;
Google Cloud is enabling a shift in enterprise computing: from local to global, from days to seconds, and from proprietary to open. The benefits of this model are becoming clear, as exemplified by Container Engine, which grew more than 10x last year.&lt;br /&gt;
&lt;br /&gt;
To keep up with demand, we're expanding our global capacity with new Container Engine clusters in our latest &lt;a href="https://cloud.google.com/about/locations/" target="_blank"&gt;GCP regions&lt;/a&gt;:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Sydney (australia-southeast1)&lt;/li&gt;
&lt;li&gt;Singapore (asia-southeast1)&lt;/li&gt;
&lt;li&gt;Oregon (us-west1)&lt;/li&gt;
&lt;li&gt;London (europe-west2)&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
These new &lt;a href="https://cloud.google.com/about/locations/" target="_blank"&gt;regions&lt;/a&gt; join the half dozen others from Iowa to Belgium to Taiwan where Container Engine clusters are already up and running.&lt;br /&gt;
&lt;br /&gt;
This blog post highlighted some of the new features available in Container Engine. You can find the complete list of new features in the Container Engine &lt;a href="https://cloud.google.com/container-engine/release-notes" target="_blank"&gt;release notes&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
The rapid adoption of Container Engine and its technology is translating into real customer impact. Here are a few recent stories that highlight the benefits companies are seeing:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.bq.com/en/" target="_blank"&gt;&lt;b&gt;BQ&lt;/b&gt;&lt;/a&gt;, one of the leading technology companies in Europe that designs and develops consumer electronics, was able to scale quickly from 15 to 350 services while reducing its cloud hosting costs by approximately 60% through better utilization and use of &lt;a href="https://cloud.google.com/preemptible-vms/" target="_blank"&gt;Preemptible VMs&lt;/a&gt; on Container Engine. Read the full story &lt;a href="https://cloud.google.com/customers/bq/" target="_blank"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.meetup.com/" target="_blank"&gt;&lt;b&gt;Meetup&lt;/b&gt;&lt;/a&gt;, the social media networking platform, switched from a monolithic application in on-premises data centers to an agile microservices architecture in a multi-cloud environment with the help of Container Engine. This gave its engineering teams autonomy to work on features and develop roadmaps that are independent from other teams, translating into faster release schedules, greater creativity and new functionality. Read the case study &lt;a href="https://cloud.google.com/customers/meetup/" target="_blank"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.lootcrate.com/" target="_blank"&gt;&lt;b&gt;Loot Crate&lt;/b&gt;&lt;/a&gt;, a leader in fan subscription boxes, launched a new offering on Container Engine to quickly get their Rails app production ready and able to scale with demand and zero downtime deployments. Read how it built its continuous deployment pipeline with Jenkins &lt;a href="https://cloudplatform.googleblog.com/2017/07/guest-post-Loot-Crate-unboxes-Google-Container-Engine-for-new-Sports-Crate-venture.html" target="_blank"&gt;in this post&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
At Google Cloud we’re proud of our compute infrastructure, but what really makes it valuable is the services that run on top of it. Google creates game-changing services on top of world-class infrastructure and tooling. With Kubernetes and Container Engine, Google Cloud makes these innovations available to developers everywhere. &lt;br /&gt;
&lt;br /&gt;
GCP is the first cloud to offer a fully managed way to try the newest Kubernetes release, and with our generous 12-month &lt;a href="https://cloud.google.com/free/" target="_blank"&gt;free trial&lt;/a&gt; of $300 in credits, there’s no excuse not to &lt;a href="https://cloud.google.com/free/" target="_blank"&gt;try it today&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
Thanks for your feedback and support. Keep the conversation going and connect with us on the &lt;a href="https://googlecloud-community.slack.com/?redir=%2Fmessages%2FC0B9GKTKJ" target="_blank"&gt;Container Engine Slack channel&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;hr width="80%" /&gt;
&lt;span class="Apple-style-span" style="font-size: x-small;"&gt;&lt;br /&gt;
&lt;b&gt;1 &lt;/b&gt;Support for accessing Internal Load Balancers over Cloud VPN is currently in alpha; customers can apply for access &lt;a href="https://docs.google.com/forms/d/1eKiZ-PxzsBNhdgWryd6UXF0M-3kNU2RXGPliI2MxFj4/viewform?edit_requested=true"&gt;here&lt;/a&gt;.&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/zFn1Ewdc4xY" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/7142496099670783747" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/7142496099670783747" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/zFn1Ewdc4xY/Container-Engine-now-runs-Kubernetes-1-7-to-drive-enterprise-ready-secure-hybrid-workloads.html" title="Container Engine now runs Kubernetes 1.7 to drive enterprise-ready secure hybrid workloads" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://4.bp.blogspot.com/-KKyKEnWa9Go/WWVJboVL-FI/AAAAAAAAEGI/Y6I7waifyIAiFxObYw17LKJQVTEy82FGwCLcBGAs/s72-c/kubernetes-1.7-1.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/Container-Engine-now-runs-Kubernetes-1-7-to-drive-enterprise-ready-secure-hybrid-workloads.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2448576320219754206</id><published>2017-07-12T07:59:00.000-07:00</published><updated>2017-07-12T07:59:14.283-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><category 
scheme="http://www.blogger.com/atom/ns#" term="Customers" /><title type="text">Guest post: Loot Crate unboxes Google Container Engine for new Sports Crate venture</title><content type="html">&lt;span class="byline-author"&gt;By Greg Brown, Director, DevOps, Loot Crate&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
[&lt;i&gt;Editor’s note: Gamers and superfans know &lt;a href="https://www.lootcrate.com/" target="_blank"&gt;Loot Crate&lt;/a&gt;, which delivers boxes of themed swag to 650,000 subscribers every month. Loot Crate built its back-end on Heroku, but for its next venture&amp;nbsp;&lt;/i&gt;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&lt;i&gt;&amp;nbsp;Sports Crate&amp;nbsp;&lt;/i&gt;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&lt;i&gt;&amp;nbsp;the company decided to containerize its Rails app with Google Container Engine, and added continuous deployment with Jenkins. Read on to learn how they did it.]&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Founded in 2012, Loot Crate is the worldwide leader in fan subscription boxes, partnering with entertainment, gaming and pop culture creators to deliver monthly themed crates, produce interactive experiences and digital content, and film original video productions. In our first five years, we’ve delivered over 14 million crates to fans in 35 territories across the globe.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://1.bp.blogspot.com/-GyhIhUtP_UM/WWVZt7Ciz8I/AAAAAAAAEGY/jOjXakpG98Y3999rHTcxYwe9iGdbU51jACLcBGAs/s1600/loot-crate-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1180" data-original-width="1600" height="472" src="https://1.bp.blogspot.com/-GyhIhUtP_UM/WWVZt7Ciz8I/AAAAAAAAEGY/jOjXakpG98Y3999rHTcxYwe9iGdbU51jACLcBGAs/s640/loot-crate-1.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;In early 2017 we were tasked with launching an offering to Major League Baseball fans called Sports Crate. There were only a couple of months until the 2017 MLB season started on April 2nd, so we needed the site to be up and capturing emails from interested parties as fast as possible. Other items on our wish list included the ability to scale the site as traffic increased, automated zero-downtime deployments, effective secret management and to reap the benefits of Docker images. Our other Loot Crate properties are built on Heroku, but for Sports Crate, we decided to try &lt;a href="https://cloud.google.com/container-engine/" target="_blank"&gt;Container Engine&lt;/a&gt;, which we suspected would allow our app to scale better during peak traffic, manage our resources using a single Google login and better manage our costs. &lt;br /&gt;
&lt;h3&gt;
Continuous deployment with Jenkins&lt;/h3&gt;Our goal was to be able to successfully deploy an application to Container Engine with a simple git push command. We created an auto-scaling, dual-zone Kubernetes cluster on Container Engine, and tackled how to do automated deployments to the cluster. After a lot of research and a conversation with Google Cloud Solutions Architect &lt;a href="https://twitter.com/vicnastea" target="_blank"&gt;Vic Iglesias&lt;/a&gt;, we decided to go with &lt;a href="https://jenkins.io/doc/book/pipeline/multibranch/" target="_blank"&gt;Jenkins Multibranch Pipelines&lt;/a&gt;. We followed this guide on &lt;a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes" target="_blank"&gt;continuous deployment on Kubernetes&lt;/a&gt; and soon had a working Jenkins deployment running in our cluster ready to handle deploys.&lt;br /&gt;
&lt;br /&gt;
Our next task was to create a Dockerfile of our Rails app to deploy to Container Engine. To speed up build time, we created our own base image with Ruby and our gems already installed, as well as a &lt;a href="https://martinfowler.com/articles/rake.html" target="_blank"&gt;rake task&lt;/a&gt; to precompile assets and upload them to &lt;a href="https://cloud.google.com/storage/" target="_blank"&gt;Google Cloud Storage&lt;/a&gt; when Jenkins builds the Docker image.&lt;br /&gt;
&lt;br /&gt;
Dockerfile in hand, we set up the Jenkins Pipeline to build the Docker image, push it to &lt;a href="https://cloud.google.com/container-registry/" target="_blank"&gt;Google Container Registry&lt;/a&gt;&amp;nbsp;and deploy the Kubernetes deployments and services to our environment. We put a Jenkinsfile in our GitHub repo that uses a switch statement based on the GitHub branch name to choose which Kubernetes namespace to deploy to. (We have three QA environments, a staging environment and a production environment.)&lt;br /&gt;
&lt;br /&gt;
The Jenkinsfile checks out our code from GitHub, builds the Docker image, pushes the image to Container Registry, runs a Kubernetes job that performs any database migrations (checking for success or failure) and runs tests. It then deploys the updated Docker image to Container Engine and reports the status of the deploy to Slack. The entire process takes under 3 minutes.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;Improving secret management in the local development environment&lt;/h3&gt;Next, we focused on making local development easier and more secure. We do our development locally, and with our Heroku-based applications, we deploy using environment variables that we add in the Heroku config or in the UI. That means that anyone with the Heroku login and permission can see them. For Sports Crate, we wanted to make the environment variables more secure; we put them in a Kubernetes secret that the applications can easily consume, which also keeps the secrets out of the codebase and off developer laptops. &lt;br /&gt;
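&lt;br /&gt;
A secret of this kind can be declared with a short manifest (a minimal sketch; the names and values below are illustrative placeholders, not our actual configuration):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: app-env        # illustrative name for the app's environment variables
type: Opaque
stringData:            # plain-text values; Kubernetes stores them base64-encoded
  DATABASE_URL: postgres://example
  SECRET_KEY_BASE: example&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
The deployment’s pod spec can then pull every key in as an environment variable with an envFrom/secretRef entry, so the values never live in the codebase.&lt;br /&gt;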
&lt;br /&gt;
The local development environment consumes those environmental variables using a &lt;a href="http://api.rubyonrails.org/classes/Rails/Railtie.html" target="_blank"&gt;railtie&lt;/a&gt; that goes out to Kubernetes, retrieves the secrets for the development environment, parses them and puts them into the Rails environment. This allows our developers to "cd" into a repo and run "rails server" or "rails console" with the Kubernetes secrets pulled down before the app starts.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;TLS termination and load balancing&lt;/h3&gt;Another requirement was to set up effective TLS termination and load balancing. We used a Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" target="_blank"&gt;Ingress resource&lt;/a&gt; with an &lt;a href="https://github.com/kubernetes/ingress" target="_blank"&gt;Nginx Ingress Controller&lt;/a&gt;, whose automatic HTTP-to-HTTPS redirect functionality isn’t available from &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt;'s (GCP) Ingress controller. Once we had the Ingress resource configured with our certificate and our Nginx Ingress controller running behind a service with a static IP, we were able to get to our application from the outside world. Things were starting to come together!&lt;br /&gt;
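&lt;br /&gt;
In rough sketch, an Ingress resource of this shape looks like the following (the hostname, service and secret names are illustrative; the TLS certificate and key live in a Kubernetes secret):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # use the Nginx controller, not GCP's
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: web-tls                  # secret holding tls.crt and tls.key
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: rails-app         # illustrative service name
          servicePort: 80&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
With TLS configured, the Nginx controller redirects plain HTTP requests to HTTPS by default.&lt;br /&gt;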
&lt;br /&gt;
&lt;h3&gt;Auto-scaling and monitoring&lt;/h3&gt;With all of the basic pieces of our infrastructure on GCP in place, we looked towards auto-scaling, monitoring and educating our QA team on deployment practices and logging. For pod auto-scaling, we implemented a &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" target="_blank"&gt;Kubernetes Horizontal Pod Autoscaler&lt;/a&gt; on our deployment. This checks CPU utilization and scales the pods up if we start getting a lot of traffic to our app. For monitoring, we implemented &lt;a href="http://docs.datadoghq.com/integrations/kubernetes/" target="_blank"&gt;Datadog’s Kubernetes Agent&lt;/a&gt; and set up metrics to check for any critical issues and send alerts to &lt;a href="https://www.pagerduty.com/" target="_blank"&gt;PagerDuty&lt;/a&gt;. We use &lt;a href="https://cloud.google.com/stackdriver/" target="_blank"&gt;Stackdriver&lt;/a&gt; for logging and educated our team on how to use the Stackdriver Logging console to properly drill down to the app, namespace and pod for which they wanted information.&lt;br /&gt;
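&lt;br /&gt;
The autoscaler itself is a short manifest (a sketch only; the target deployment name and thresholds are illustrative, not our production settings):&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: rails-app
spec:
  scaleTargetRef:                  # the deployment to scale
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: rails-app
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70   # add pods when average CPU passes 70%&lt;/code&gt;&lt;/pre&gt;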
&lt;br /&gt;
&lt;h3&gt;Net-net&lt;/h3&gt;With launch day around the corner, we ran load tests on our new app and were amazed at how well it handled large amounts of traffic. The pods auto-scaled exactly as we needed them to and our QA team fell in love with continuous deployment with Jenkins Multibranch Pipelines. All told, Container Engine met all of our requirements, and we were up and running within a month. &lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://1.bp.blogspot.com/-bRT_xVpNMdQ/WWVZ6YdSRQI/AAAAAAAAEGc/QoVE3VRXsm0oES0OKL3wczIuj6FZua7DgCLcBGAs/s1600/loot-crate-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="840" data-original-width="1260" height="426" src="https://1.bp.blogspot.com/-bRT_xVpNMdQ/WWVZ6YdSRQI/AAAAAAAAEGc/QoVE3VRXsm0oES0OKL3wczIuj6FZua7DgCLcBGAs/s640/loot-crate-2.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;Our next project is to move our other monolithic Rails apps off of Heroku and onto Container Engine as decoupled microservices that can take advantage of the newest Kubernetes features. We look forward to improving on what has already been an extremely powerful tool.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/sUzxWYG8vbQ" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2448576320219754206" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2448576320219754206" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/sUzxWYG8vbQ/guest-post-Loot-Crate-unboxes-Google-Container-Engine-for-new-Sports-Crate-venture.html" title="Guest post: Loot Crate unboxes Google Container Engine for new Sports Crate venture" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" 
url="https://1.bp.blogspot.com/-GyhIhUtP_UM/WWVZt7Ciz8I/AAAAAAAAEGY/jOjXakpG98Y3999rHTcxYwe9iGdbU51jACLcBGAs/s72-c/loot-crate-1.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/guest-post-Loot-Crate-unboxes-Google-Container-Engine-for-new-Sports-Crate-venture.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-8530751589793566654</id><published>2017-07-10T08:59:00.001-07:00</published><updated>2017-07-10T08:59:48.203-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Containers &amp; Kubernetes" /><category scheme="http://www.blogger.com/atom/ns#" term="Open Source" /><category scheme="http://www.blogger.com/atom/ns#" term="Partners" /><title type="text">Going Hybrid with Kubernetes on Google Cloud Platform and Nutanix</title><content type="html">&lt;span class="byline-author"&gt;By Allan Naim, Product GTM Lead, Kubernetes and Container Engine&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Recently, we announced &lt;a href="https://www.blog.google/topics/google-cloud/nutanix-and-google-cloud-team-simplify-hybrid-cloud/" target="_blank"&gt;a strategic partnership&lt;/a&gt; with &lt;a href="https://www.nutanix.com/" target="_blank"&gt;Nutanix&lt;/a&gt; to help remove friction from hybrid cloud deployments for enterprises. You can find the announcement blog post &lt;a href="https://www.blog.google/topics/google-cloud/nutanix-and-google-cloud-team-simplify-hybrid-cloud/" target="_blank"&gt;here&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Hybrid cloud allows organizations to run a variety of applications either on-premise or in the public cloud. With this approach, enterprises can:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;Increase the speed&lt;/b&gt;&amp;nbsp;at which they're releasing products and features&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Scale&lt;/b&gt; applications to meet customer demand&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Move applications&lt;/b&gt; to the public cloud at their own pace&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Reduce time spent on infrastructure&lt;/b&gt; and increase time spent on writing code&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Reduce cost&lt;/b&gt; by improving resource utilization and compute efficiency&lt;/li&gt;
&lt;/ul&gt;
The vast majority of organizations have a portfolio of applications with varying needs. In some cases, data sovereignty and compliance requirements force a jurisdictional deployment model where an application and its data must reside in an on-premises environment or within a country’s boundaries. By contrast, mobile and IoT applications are characterized by unpredictable consumption models that make the on-demand, pay-as-you-go cloud model their best deployment target.&lt;br /&gt;
&lt;br /&gt;
Hybrid cloud deployments can help deliver the security, compliance and compute power you require with the agility, flexibility and scale you need. Our hybrid cloud example will encompass three key components:&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;On-premise: &lt;b&gt;Nutanix infrastructure&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Public cloud: &lt;b&gt;&lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP)&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Open source: &lt;b&gt;Kubernetes and Containers&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
Containers provide an immutable and highly portable infrastructure that enables developers to predictably deploy apps across any environment where the container runtime engine can run. This makes it possible to run the same containerized application on bare metal, private cloud or public cloud. However, as developers move towards microservice architectures, they must solve a new set of challenges such as scaling, rolling updates, discovery, logging, monitoring and network connectivity.&lt;br /&gt;
&lt;br /&gt;
Google’s experience running our own container-based internal systems inspired us to create &lt;a href="https://kubernetes.io/" target="_blank"&gt;Kubernetes&lt;/a&gt;, an open source container orchestrator, and Google Container Engine, a Google Cloud managed platform for running containerized applications across a pool of compute resources. Kubernetes abstracts away the underlying infrastructure and provides a consistent experience for running containerized applications. It introduces a declarative deployment model: an ops person supplies a template that describes how the application should run, and Kubernetes ensures the application’s actual state always matches the desired state. Kubernetes also manages container scheduling, scaling, health, lifecycle, load balancing, data persistence, logging and monitoring. &lt;br /&gt;
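&lt;br /&gt;
As a concrete illustration of the declarative model, the following minimal Deployment (a sketch using Google’s public sample image) tells Kubernetes to keep three replicas running; if a pod dies, the scheduler replaces it to restore the desired state:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3                # desired state: three pods at all times
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080&lt;/code&gt;&lt;/pre&gt;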
&lt;br /&gt;
In a first phase, the Google Cloud-Nutanix partnership focuses on easing hybrid operations, using Nutanix Calm as a single control plane for workload management across both on-premises Nutanix and GCP environments, with Kubernetes as the container management layer across the two. Nutanix Calm was recently &lt;a href="https://www.nutanix.com/2017/06/28/app-centric-infrastructure-cloud/" target="_blank"&gt;announced at the Nutanix .NEXT conference&lt;/a&gt; and, once publicly available, will be used to automate provisioning and lifecycle operations across hybrid cloud deployments. Nutanix Enterprise Cloud OS supports a hybrid Kubernetes environment running on Google Compute Engine in the cloud and a Kubernetes cluster on Nutanix on-premises. Through this, customers can deploy portable application blueprints that run both in an on-premises Nutanix environment and in GCP.&lt;br /&gt;
&lt;br /&gt;
Let’s walk through the steps involved in setting up a hybrid environment using Nutanix and GCP. &lt;br /&gt;
&lt;br /&gt;
The steps are as follows:&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;Provision an on-premises four-node Kubernetes cluster using a Nutanix Calm blueprint&lt;/li&gt;
&lt;li&gt;Provision a four-node Kubernetes cluster on Google Compute Engine using the same Nutanix Calm Kubernetes blueprint, configured for Google Cloud&lt;/li&gt;
&lt;li&gt;Use kubectl to manage both the on-premises and Google Cloud Kubernetes clusters&lt;/li&gt;
&lt;li&gt;Use Helm to deploy the same WordPress chart on both the on-premises and Google Cloud Kubernetes clusters&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
Provisioning an on-premise Kubernetes cluster using a Nutanix Calm blueprint&lt;/h3&gt;
You can use Nutanix Calm to provision a Kubernetes cluster on-premises, and Nutanix Prism, an infrastructure management solution for virtualized data centers, to bootstrap a cluster of virtualized compute and storage. This results in a Nutanix-managed pool of compute and storage that's ready to be orchestrated by Nutanix Calm for one-click deployment of popular commercial and open source packages. &lt;br /&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://3.bp.blogspot.com/-Tye6KegjtcQ/WWOdmCvkFYI/AAAAAAAAEFM/q9yHglL98tMurjXaVJ1myS2qyRDNodEEQCLcBGAs/s1600/nutanix-kubernetes-7.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="240" data-original-width="653" height="234" src="https://3.bp.blogspot.com/-Tye6KegjtcQ/WWOdmCvkFYI/AAAAAAAAEFM/q9yHglL98tMurjXaVJ1myS2qyRDNodEEQCLcBGAs/s640/nutanix-kubernetes-7.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;The tools used to deploy the Nutanix and Google hybrid cloud stacks.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
You can then select the Kubernetes blueprint to target the Nutanix on-premises environment.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-RirM6HgqB7I/WWOdvLqp71I/AAAAAAAAEFQ/CWxFpLEvR80QkcWjxs8IBWpUvok-3Li5QCLcBGAs/s1600/nutanix-kubernetes-8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="678" data-original-width="1600" height="270" src="https://2.bp.blogspot.com/-RirM6HgqB7I/WWOdvLqp71I/AAAAAAAAEFQ/CWxFpLEvR80QkcWjxs8IBWpUvok-3Li5QCLcBGAs/s640/nutanix-kubernetes-8.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
The Calm Kubernetes blueprint pictured below configures a four-node Kubernetes cluster that includes all the base software on all the nodes and the master. We’ve also customized our Kubernetes blueprint to configure Helm Tiller on the cluster, so you can use Helm to deploy a WordPress chart. Calm blueprints also allow you to create workflows so that configuration tasks take place in a specified order, as shown below with the “create” action. &lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-gHfoTcHR2Sw/WWOd5WRj8mI/AAAAAAAAEFU/HhVvPsINSYYBgICfeL2N2_3IFKxpffPbgCLcBGAs/s1600/nutanix-kubernetes-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1003" data-original-width="1600" height="400" src="https://2.bp.blogspot.com/-gHfoTcHR2Sw/WWOd5WRj8mI/AAAAAAAAEFU/HhVvPsINSYYBgICfeL2N2_3IFKxpffPbgCLcBGAs/s640/nutanix-kubernetes-3.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
Now, launch the Kubernetes Blueprint:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-ZtmFOLGe4gg/WWOeAwUFD0I/AAAAAAAAEFY/VBqmV88Ghac14JX8-dn_KL1Nb-VHpLI2QCLcBGAs/s1600/nutanix-kubernetes-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="722" data-original-width="1600" height="288" src="https://2.bp.blogspot.com/-ZtmFOLGe4gg/WWOeAwUFD0I/AAAAAAAAEFY/VBqmV88Ghac14JX8-dn_KL1Nb-VHpLI2QCLcBGAs/s640/nutanix-kubernetes-4.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
After a couple of minutes, the Kubernetes cluster is up and running with five VMs (one master node and four worker nodes): &lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-iDXATaCt43k/WWOecjyi4-I/AAAAAAAAEFk/qPVJP6mS-Bg8XTDzEBSE5C_-RUc8gLIQQCLcBGAs/s1600/nutanix-kubernetes-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="670" data-original-width="1600" height="266" src="https://3.bp.blogspot.com/-iDXATaCt43k/WWOecjyi4-I/AAAAAAAAEFk/qPVJP6mS-Bg8XTDzEBSE5C_-RUc8gLIQQCLcBGAs/s640/nutanix-kubernetes-5.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;h3&gt;
Provisioning a Kubernetes cluster on Google Compute Engine with the same Nutanix Calm Kubernetes blueprint&lt;/h3&gt;
Using Nutanix Calm, you can now deploy the Kubernetes blueprint onto GCP. The Kubernetes cluster is up and running on Compute Engine within a couple of minutes, again with five VMs (one master node and four worker nodes):&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-iQi0ZhFmyTU/WWOf8_796cI/AAAAAAAAEFo/wNYNO5peUMA_FR1SjZP4mynLoaWQDSJ2gCLcBGAs/s1600/nutanix-kubernetes-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="602" data-original-width="1600" height="240" src="https://2.bp.blogspot.com/-iQi0ZhFmyTU/WWOf8_796cI/AAAAAAAAEFo/wNYNO5peUMA_FR1SjZP4mynLoaWQDSJ2gCLcBGAs/s640/nutanix-kubernetes-2.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-aSGdUcpqZ9s/WWOgAy1DRUI/AAAAAAAAEFs/AheOyhE19MYmWRqkifY20qVoE8amNWFcwCLcBGAs/s1600/nutanix-kubernetes-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="649" data-original-width="1600" height="258" src="https://2.bp.blogspot.com/-aSGdUcpqZ9s/WWOgAy1DRUI/AAAAAAAAEFs/AheOyhE19MYmWRqkifY20qVoE8amNWFcwCLcBGAs/s640/nutanix-kubernetes-1.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
You’re now ready to deploy workloads across the hybrid environment. In this example, you'll deploy a containerized WordPress stack. &lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Using kubectl to manage both on-premises and Google Cloud Kubernetes clusters&lt;/h3&gt;
kubectl is a command-line tool that ships with Kubernetes for running commands against Kubernetes clusters. &lt;br /&gt;
&lt;br /&gt;
You can now target each Kubernetes cluster across the hybrid environment and use kubectl to run basic commands. First, ssh into your on-premises environment.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;# List out the nodes in the cluster

$ kubectl get nodes

NAME          STATUS    AGE
10.21.80.54   Ready     16m
10.21.80.59   Ready     16m
10.21.80.65   Ready     16m
10.21.80.67   Ready     16m

# View the cluster config

$ kubectl config view

apiVersion: v1
clusters:
- cluster:
    server: http://10.21.80.66:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users: []

# List and describe the configured storage class; the provisioner is the Nutanix storage volume plugin for Kubernetes

$ kubectl get storageclass

NAME      KIND
silver    StorageClass.v1.storage.k8s.io

$ kubectl describe storageclass silver

Name:  silver
IsDefaultClass: No
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/nutanix-volume&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;h3&gt;
Using Helm to deploy the same WordPress chart on both on-premises and Google Cloud Kubernetes clusters&lt;/h3&gt;
This example uses Helm, a package manager for installing and managing Kubernetes applications; the Calm Kubernetes blueprint includes Helm as part of the cluster setup. The on-premises Kubernetes cluster is configured with Nutanix Acropolis, a storage provisioning system that automatically creates Kubernetes persistent volumes for the WordPress pods. &lt;br /&gt;
&lt;br /&gt;
Let’s deploy WordPress on-premises and on Google Cloud:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;# Deploy WordPress

$ helm install wordpress-0.6.4.tgz

NAME:   quaffing-crab
LAST DEPLOYED: Sun Jul  2 03:32:21 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==&amp;gt; v1/Secret
NAME                     TYPE    DATA  AGE
quaffing-crab-mariadb    Opaque  2     1s
quaffing-crab-wordpress  Opaque  3     1s

==&amp;gt; v1/ConfigMap
NAME                   DATA  AGE
quaffing-crab-mariadb  1     1s

==&amp;gt; v1/PersistentVolumeClaim
NAME                     STATUS   VOLUME  CAPACITY  ACCESSMODES  STORAGECLASS  AGE
quaffing-crab-wordpress  Pending  silver  1s
quaffing-crab-mariadb    Pending  silver  1s

==&amp;gt; v1/Service
NAME                     CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
quaffing-crab-mariadb    10.21.150.254  &amp;lt;none&amp;gt;       3306/TCP                    1s
quaffing-crab-wordpress  10.21.150.73   &amp;lt;pending&amp;gt;    80:32376/TCP,443:30998/TCP  1s

==&amp;gt; v1beta1/Deployment
NAME                     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
quaffing-crab-wordpress  1        1        1           0          1s
quaffing-crab-mariadb&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;br /&gt;
Then, you can run a few kubectl commands to browse the on-premise deployment.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;# Take a look at the persistent volume claims 

$ kubectl get pvc

NAME                      STATUS    VOLUME                                                                               CAPACITY   ACCESSMODES   AGE
quaffing-crab-mariadb     Bound     94d90daca29eaafa7439b33cc26187536e2fcdfc20d78deddda6606db506a646-nutanix-k8-volume   8Gi        RWO           1m
quaffing-crab-wordpress   Bound     764e5462d809a82165863af8423a3e0a52b546dd97211dfdec5e24b1e448b63c-nutanix-k8-volume   10Gi       RWO           1m

# Take a look at the running pods

$ kubectl get po

NAME                                      READY     STATUS    RESTARTS   AGE
quaffing-crab-mariadb-3339155510-428wb    1/1       Running   0          3m
quaffing-crab-wordpress-713434103-5j613   1/1       Running   0          3m

# Take a look at the services exposed

$ kubectl get svc

NAME                      CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                10.254.0.1      &amp;lt;none&amp;gt;        443/TCP                      16d
quaffing-crab-mariadb     10.21.150.254   &amp;lt;none&amp;gt;        3306/TCP                     4m
quaffing-crab-wordpress   10.21.150.73    #.#.#.#     80:32376/TCP,443:30998/TCP   4m&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;
&lt;br /&gt;
This on-premises environment did not have a load balancer provisioned, so we used the cluster IP to browse the WordPress site. On Google Cloud, the deployment automatically assigned a load balancer and an external IP address to the WordPress service.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-IllVhztukiw/WWOkHRm7eWI/AAAAAAAAEF4/b1BGoqePYnkVQx71hjSpQoCm68QZ4d6qgCLcBGAs/s1600/nutanix-kubernetes-6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1252" data-original-width="1502" height="332" src="https://2.bp.blogspot.com/-IllVhztukiw/WWOkHRm7eWI/AAAAAAAAEF4/b1BGoqePYnkVQx71hjSpQoCm68QZ4d6qgCLcBGAs/s400/nutanix-kubernetes-6.png" width="400" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;h3&gt;
Summary&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Nutanix Calm provided a one-click consistent deployment model to provision a Kubernetes cluster on both Nutanix Enterprise Cloud and Google Cloud.&lt;/li&gt;
&lt;li&gt;Once the Kubernetes cluster is running in a hybrid environment, you can use the same tools (Helm, kubectl) to deploy containerized applications targeting the respective environment. This represents a “write once deploy anywhere” model.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;Kubernetes abstracts away the underlying infrastructure constructs, making it possible to consistently deploy and run containerized applications across heterogeneous cloud environments.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
Next steps&lt;/h3&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Get started on &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt;&amp;nbsp;(GCP)&lt;/li&gt;
&lt;li&gt;Visit the Kubernetes getting-started &lt;a href="https://kubernetes.io/" target="_blank"&gt;site&lt;/a&gt; and &lt;a href="https://github.com/kubernetes/kubernetes" target="_blank"&gt;source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Join the Kubernetes &lt;a href="https://kubernetes.io/community/" target="_blank"&gt;community&lt;/a&gt; and Slack &lt;a href="http://slack.k8s.io/" target="_blank"&gt;chat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/kubernetesio?lang=en" target="_blank"&gt;Follow&lt;/a&gt; Kubernetes on Twitter&lt;/li&gt;
&lt;li&gt;Learn about Nutanix &lt;a href="https://www.nutanix.com/calmio/" target="_blank"&gt;Calm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;If you have feedback and/or questions, reach out to us &lt;a href="https://cloud.google.com/contact/" target="_blank"&gt;here&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/d4Dn_Pqzb-8" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/8530751589793566654" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/8530751589793566654" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/d4Dn_Pqzb-8/going-Hybrid-with-Kubernetes-on-Google-Cloud-Platform-and-Nutanix.html" title="Going Hybrid with Kubernetes on Google Cloud Platform and Nutanix" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://3.bp.blogspot.com/-Tye6KegjtcQ/WWOdmCvkFYI/AAAAAAAAEFM/q9yHglL98tMurjXaVJ1myS2qyRDNodEEQCLcBGAs/s72-c/nutanix-kubernetes-7.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/going-Hybrid-with-Kubernetes-on-Google-Cloud-Platform-and-Nutanix.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-7847677692392482449</id><published>2017-07-07T09:01:00.000-07:00</published><updated>2017-07-07T09:01:09.246-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CRE" /><title type="text">Making the most of an SRE service takeover - CRE life lessons</title><content type="html">&lt;span 
class="byline-author"&gt;By Adrian Hilton, Customer Reliability Engineer&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
In &lt;a href="https://cloudplatform.googleblog.com/2017/06/how-SREs-find-the-landmines-in-a-service-CRE-life-lessons.html" target="_blank"&gt;Part 2&lt;/a&gt;&amp;nbsp;of this blog post we explained what an SRE team would want to learn about a service angling for SRE support, and what kind of improvements they want to see in the service before considering it for take-over. And in&amp;nbsp;&lt;a href="https://cloudplatform.googleblog.com/2017/06/why-should-your-app-get-SRE-support-CRE-life-lessons.html" target="_blank"&gt;Part 1&lt;/a&gt;, we looked at why an SRE team would or wouldn’t choose to onboard a new application.  Now, let’s look at what happens once the SREs agree to take on the pager.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Onboarding preparation&lt;/h3&gt;
If a service entrance review determines that the service is suitable for SRE support, developers and the SRE team move into the “onboarding” phase, where they prepare for SREs to support the service.&lt;br /&gt;
&lt;br /&gt;
While developers address the action items, the SRE team starts to familiarize itself with the service, building up service knowledge and familiarity with the existing monitoring tools, alerts and crisis procedures. This can be accomplished through several methods:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;Education&lt;/b&gt;: present the new service to the rest of the team through tech talks, discussion sessions and "wheel of misfortune" scenarios.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;“Take the pager for a spin”&lt;/b&gt;: share pager alerts with the developers for a week, and assess each page on the axes of criticality (does this indicate a user-impacting problem with the service?) and actionability (is there a clear path for the on-call to resolve the underlying issue?). This gives the SRE team a quantitative measure of how much operational load the service is likely to impose.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;On-call shadow&lt;/b&gt;: page the primary on-call developer and SRE at the same time. At this stage, responsibility for dealing with emergencies rests on the developer, but the developer and the SRE collaborate on debugging and resolving production issues together.&lt;/li&gt;
&lt;/ul&gt;
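To make the “take the pager for a spin” step concrete, here’s a minimal sketch of tallying a shared week of pages along the two axes described above. The alert names and page data are invented for illustration.

```python
from collections import Counter

# Hypothetical week of shared pages: (alert_name, critical, actionable).
week = [
    ("ErrorRateHigh", True, True),
    ("DiskAlmostFull", False, True),
    ("HeartbeatFlap", False, False),
    ("ErrorRateHigh", True, True),
]

def assess_pager_load(pages):
    """Bucket pages by (criticality, actionability) to quantify operational load."""
    buckets = Counter((critical, actionable) for _, critical, actionable in pages)
    return {
        "total": len(pages),
        "critical_and_actionable": buckets[(True, True)],
        "critical_not_actionable": buckets[(True, False)],          # needs playbooks
        "noise": buckets[(False, True)] + buckets[(False, False)],  # de-tuning candidates
    }

print(assess_pager_load(week))
```

A high “noise” count here is exactly the kind of quantitative evidence an SRE team wants before agreeing to take the pager.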
&lt;h3&gt;
Measuring success&lt;/h3&gt;
&lt;br /&gt;
&lt;i&gt;Q: I’ve gone through a lot of effort to make my service ready to hand over to SRE. How can I tell whether it was a good expenditure of scarce engineering time?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
If the developer and SRE teams have agreed to hand over a system, they should also agree on criteria (including a timeframe) to measure whether the handover was successful. Such criteria may include (with appropriate numbers):&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;An absolute decrease in the paging/outage count&lt;/li&gt;
&lt;li&gt;Decreasing paging/outages as a proportion of (increasing) service scale and complexity.&lt;/li&gt;
&lt;li&gt;Reduced time/toil from the point of new code passing tests to being deployed globally, and a flat (or decreasing) rollback rate.&lt;/li&gt;
&lt;li&gt;Increased utilization of reserved resources (CPU, memory, disk etc.)&lt;/li&gt;
&lt;/ul&gt;
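As a rough illustration of how such criteria might be evaluated at the end of the agreed timeframe, here’s a sketch comparing before/after metrics. The metric names, numbers and thresholds are assumptions, not a standard.

```python
# Hypothetical metrics snapshots taken before the handover and after the
# agreed measurement window.
before = {"pages_per_week": 30, "peak_qps": 1000, "commit_to_deploy_days": 5,
          "rollback_rate": 0.10, "cpu_utilization": 0.40}
after = {"pages_per_week": 20, "peak_qps": 2000, "commit_to_deploy_days": 2,
         "rollback_rate": 0.08, "cpu_utilization": 0.55}

def handover_succeeded(before, after):
    checks = {
        # Absolute decrease in paging
        "pages_down": after["pages_per_week"] <= before["pages_per_week"],
        # Paging decreasing as a proportion of (increasing) service scale
        "pages_per_qps_down": (after["pages_per_week"] / after["peak_qps"])
                              < (before["pages_per_week"] / before["peak_qps"]),
        # Less time/toil from passing tests to global deployment
        "release_toil_down": after["commit_to_deploy_days"] < before["commit_to_deploy_days"],
        # Flat or decreasing rollback rate
        "rollbacks_flat": after["rollback_rate"] <= before["rollback_rate"],
        # Increased utilization of reserved resources
        "utilization_up": after["cpu_utilization"] > before["cpu_utilization"],
    }
    return all(checks.values()), checks

ok, detail = handover_succeeded(before, after)
```

The per-check breakdown matters as much as the overall verdict: a failed check tells the two teams exactly which part of the handover plan to revisit.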
Setting these criteria can then prepare the ground for future handover proposals; if the success criteria for a previous handover were not met, the teams should carefully reconsider how this will change the handover plans for a new service.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Taking over the pager&lt;/h3&gt;
&lt;br /&gt;
Once all the blocking action items have been resolved, it’s time for SREs to take over the service pager. This should be a "no drama" event, with few, well-documented service alerts that can be easily resolved by following procedures in the service playbook.&lt;br /&gt;
&lt;br /&gt;
In theory, the SRE team will have identified most of these issues in the entrance review phase, but realistically there are many issues that only become apparent with sustained exposure to a service.&lt;br /&gt;
&lt;br /&gt;
In the medium term (one to two months), SREs should build a list of deficiencies or areas for optimization in the system with regard to monitoring, resource consumption etc. This hitlist should primarily aim to reduce SRE “toil” (manual, repetitive, tactical work that has no enduring value), and secondarily fix aspects of the system, e.g., resource consumption or cruft accumulation, which can impact system performance. Tertiary changes may include things like updating the documentation to facilitate onboarding new SREs for system support. &lt;br /&gt;
&lt;br /&gt;
In the long term (three to six months), SREs should expect to meet most or all of the pre-established measurements for takeover success as described above.&lt;br /&gt;
&lt;i&gt;&lt;br /&gt;
&lt;/i&gt; &lt;i&gt;Q: That’s great, so now my developers can turn off their pager?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Not so fast, my friend. Although the SRE team has learned a lot about the service in the preceding months, they're still not experts; there will inevitably be failure modes involving arcane service behavior where the SRE on-call will not know what has broken, or how to fix it. There's no substitute for having a developer available, and we normally require developers to keep their on-call rotation so that the SRE on-call can page them if needed. We expect this to be a low rate of pages.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
The nuclear option — handing back the pager&lt;/h3&gt;
&lt;br /&gt;
Not all SRE takeovers go smoothly, and even if the SREs have taken over the pager for a service, it’s possible for reliability to regress or operational load to increase. This might be for good reasons such as a “success disaster”&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;a sustained and unexpected spike in usage&amp;nbsp;&lt;span style="font-family: arial; font-size: 14.6667px; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;or for bad reasons such as poor QA testing.&lt;br /&gt;
&lt;br /&gt;
An SRE team can only handle so many services, and if one service starts to consume a disproportionate amount of SRE time, it's at risk of crowding out other services. In this case, the SRE team should proactively tell the developer team that they have a problem, and should do so in a neutral way that’s data-heavy:&lt;br /&gt;
&lt;br /&gt;
In the past month we’ve seen 100 pages/week for service S1, compared to a steady rate of 20-30 pages/week over the preceding months. Even though S1 is within SLO, the pages are dominating our operational work and crowding out service improvement work. You need to do one of the following:&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;bring S1’s paging rate down to the original rate by reducing S1’s rate of change&lt;/li&gt;
&lt;li&gt;de-tune S1’s alerts so that most of them no longer page&lt;/li&gt;
&lt;li&gt;tell us to drop SRE support for services S2, S3 so our overall paging rate remains steady&lt;/li&gt;
&lt;li&gt;tell us to drop SRE support for S1&lt;/li&gt;
&lt;/ol&gt;
This lets the developer team decide what’s most important to them, rather than the SRE team imposing a solution. &lt;br /&gt;
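The data-heavy framing in that message can be sketched as a simple baseline comparison: flag a service whose recent paging rate is far above its historical norm. The numbers and the 2x threshold here are invented for illustration.

```python
def paging_regression(weekly_history, recent_week, factor=2.0):
    """Return (alarm, baseline): alarm fires if last week's pages exceed
    factor times the historical weekly average."""
    baseline = sum(weekly_history) / len(weekly_history)
    return recent_week > factor * baseline, baseline

# The S1 example above: a steady 20-30 pages/week jumping to 100 trips the check.
alarm, baseline = paging_regression([20, 25, 30, 25], 100)
print(alarm, baseline)
```

Presenting the baseline alongside the alarm keeps the conversation neutral: the developer team sees the same numbers the SRE team does.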
&lt;br /&gt;
There are also times when developers and SREs agree that handing back the pager to developers is the right thing to do, even if the operational load is normal. For example, imagine SREs are supporting a service, and developers come up with a new, shiny, higher-performing version. Developers support the new version initially, while working out its kinks, and migrate more and more users to it. Eventually the new version is the most heavily used&amp;nbsp;&lt;span style="font-family: arial; font-size: 14.6667px; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;this is when SREs should take on the pager for the new service and hand the old service’s pager back to developers. Developers can then finish user migrations and turn down the old service at their convenience.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Converging your SRE and dev teams&lt;/h3&gt;
&lt;br /&gt;
Onboarding a service is about more than transferring responsibility from developers to SREs&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;it also improves mutual understanding between the two teams. The dev team gets to know what the SRE team does, and why, who the individual SREs are, and perhaps how they got that way. Similarly the SRE team gains a better understanding of the development team’s work and concerns. This increase in empathy is a Good Thing in itself, but is also an opportunity to improve future applications.&lt;br /&gt;
&lt;br /&gt;
Now, when a developer team designs a new application or service, they should take the opportunity to invite the SRE team to the discussion. SRE teams can easily spot reliability issues in the design, and advise developers on ways to make the service easier to operate, set up good monitoring and configure sensible rollout policies from the start. &lt;br /&gt;
&lt;br /&gt;
Similarly, when the SREs do future planning or design new tooling, they should include developers in the discussions; developers can advise them on future launches and projects, and give feedback on making the tools easier to operate or a better fit for developers’ needs.&lt;br /&gt;
&lt;br /&gt;
Imagine that there was a brick wall between the SRE and developer teams; our original plan for service takeover was to throw the service over the wall and hope. Over the course of these blog posts, we’ve shown you how to make a hole in the wall so there can be two-way communication as the service is passed through, then expand it into a doorway so that SREs can come into the developers’ backyard and vice versa. Eventually, developers and SREs should tear down the wall entirely, and replace it with a low hedge and ornamental garden arch. SREs and developers should be able to see what’s going on in each others’ yard, and wander over to the other side as needed.&lt;br /&gt;
&lt;h3&gt;
Summary&lt;/h3&gt;
&lt;br /&gt;
&lt;br /&gt;
When an SRE takes on pager responsibility for a developer-supported service, don’t just throw it over the fence into their yard. Work with the SRE team to help them understand how the service works and how it breaks, and to find ways to make it more resilient and easier to support. Make sure that supporting your service is a good use of the SRE team’s time, making use of their particular skills. With a carefully-planned handover process, you can both be confident that the queries will flow and your pagers will be (mostly) silent.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/FCtaUgO5Uhg" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/7847677692392482449" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/7847677692392482449" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/FCtaUgO5Uhg/making-the-most-of-an-SRE-service-takeover-CRE-life-lessons.html" title="Making the most of an SRE service takeover - CRE life lessons" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/making-the-most-of-an-SRE-service-takeover-CRE-life-lessons.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-5654166887871239505</id><published>2017-07-06T09:00:00.000-07:00</published><updated>2017-07-06T09:02:56.154-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Networking" /><title type="text">Reimagining virtual private clouds</title><content type="html">&lt;span class="byline-author"&gt;By Zach Pohlman, Cloud Solutions Architect&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
At Cloud Next '17, we announced our reimagining of &lt;a href="https://cloud.google.com/vpc/" target="_blank"&gt;Virtual Private Cloud (VPC)&lt;/a&gt;, a product that used to be known as GCP Virtual Networks. Today, we thought we’d share a little more insight into what’s different about VPC and what it can do.&lt;br /&gt;
&lt;br /&gt;
Virtual Private Cloud offers you a privately administered space within &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP), providing the flexibility to scale and control how workloads connect regionally and globally. This means global connectivity across locations and regions, and the elimination of silos across projects and teams. When you connect your on-premise or remote resources to GCP, you’ll have global access to your VPCs without needing to replicate connectivity or administrative policies per region. &lt;br /&gt;
&lt;br /&gt;
Here’s a little more on what that means.&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;VPC is global&lt;/b&gt;. Unlike traditional VPCs that communicate across the public internet, requiring redundant, complex VPNs and interconnections to maintain security, a single Google Cloud VPC can span multiple regions. Single connection points to on-premise resources via VPN or &lt;a href="https://cloud.google.com/interconnect/" target="_blank"&gt;Cloud Interconnect&lt;/a&gt; provide private access, reducing costs and configuration complexity.&lt;/li&gt;
&lt;/ul&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://4.bp.blogspot.com/-ZrUuYvyKx2A/WV3DAz0u5WI/AAAAAAAAEEg/xarp2qAjMfEcVAQ3llSxCLdq7qXIzJr0ACLcBGAs/s1600/VPC-2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="532" data-original-width="1484" src="https://4.bp.blogspot.com/-ZrUuYvyKx2A/WV3DAz0u5WI/AAAAAAAAEEg/xarp2qAjMfEcVAQ3llSxCLdq7qXIzJr0ACLcBGAs/s1600/VPC-2.png" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;VMs in VPC do not need VPNs to communicate between regions. Inter-region traffic is both encrypted and kept on Google's private network.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;VPC is sharable&lt;/b&gt;. With a single VPC for an entire organization, you can build multi-tenant architectures and share single private network connectivity between teams and projects with a centralized security model. Your teams can use the network as plug-and-play, instead of stitching connectivity with VPNs. Shared VPC also allows teams to be isolated within projects, with separate billing and quotas, yet still maintain a shared IP space and access to commonly used services such as Interconnect or BigQuery.&lt;/li&gt;
&lt;/ul&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://2.bp.blogspot.com/-6xFzCfLqZRk/WV3DT3pESsI/AAAAAAAAEEk/TGpN3ZElVPIlLC-e5L8vhEMoOQZTgl2UwCLcBGAs/s1600/VPC-3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="520" data-original-width="1438" src="https://2.bp.blogspot.com/-6xFzCfLqZRk/WV3DT3pESsI/AAAAAAAAEEk/TGpN3ZElVPIlLC-e5L8vhEMoOQZTgl2UwCLcBGAs/s1600/VPC-3.png" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;A single network can be shared across teams and regions, all within the same administrative domain, preventing duplicate work.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;VPC is expandable&lt;/b&gt;. Google Cloud VPCs let you increase the &lt;a href="https://en.wikipedia.org/wiki/IP_address#IPv4_addresses" target="_blank"&gt;IP space&lt;/a&gt; of any subnets without any workload shutdown or downtime. This gives you flexibility and growth options to meet your needs. If you initially build on an IP space of /24s, for example, but need to grow this in one or multiple regions, you can do so quickly and easily without impacting your users.&lt;/li&gt;
&lt;/ul&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://2.bp.blogspot.com/-Vin3mjt_D-o/WV3D12Xe_3I/AAAAAAAAEEo/LyHGkrmWFUkkkWVo5JqLSJKf_MjAUyzdACLcBGAs/s1600/image3.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="581" data-original-width="1500" src="https://2.bp.blogspot.com/-Vin3mjt_D-o/WV3D12Xe_3I/AAAAAAAAEEo/LyHGkrmWFUkkkWVo5JqLSJKf_MjAUyzdACLcBGAs/s1600/image3.gif" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;In Google VPC, the expanded IP range is available in the new zone without rebooting the running VMs. In other VPCs this incurs downtime.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
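The subnet-expansion arithmetic behind the diagram above can be illustrated with Python’s standard ipaddress module: growing a /24 subnet to a /20 keeps every existing address valid, which is why running VMs don’t need a reboot. The ranges here are examples, not defaults of any GCP network.

```python
import ipaddress

# An existing subnet and its expanded version covering the same base address.
old = ipaddress.ip_network("10.128.0.0/24")
new = old.supernet(new_prefix=20)

print(new)                                     # 10.128.0.0/20
print(old.subnet_of(new))                      # True: existing VM addresses still fit
print(new.num_addresses // old.num_addresses)  # 16 (16x more room)
```

Because the old range is a strict subset of the new one, no address already assigned to a workload ever changes.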
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;VPC is private&lt;/b&gt;. With Google VPC you get private access to Google services, such as storage, big data, analytics or machine learning, without having to give your service a public IP address. Configure your application’s front-end to receive internet requests and shield your back-end services from public endpoints, all while being able to access Google Cloud services.&lt;/li&gt;
&lt;/ul&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://2.bp.blogspot.com/-jwOosAKdZVY/WV3ERwS1hII/AAAAAAAAEEs/nlTnqRk36lMUF-auoSAsRjavnwAfe47GQCLcBGAs/s1600/VPC-1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="546" data-original-width="1432" src="https://2.bp.blogspot.com/-jwOosAKdZVY/WV3ERwS1hII/AAAAAAAAEEs/nlTnqRk36lMUF-auoSAsRjavnwAfe47GQCLcBGAs/s1600/VPC-1.png" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;Within Google Cloud, services are directly addressable across regions using private networks and IP addresses without crossing the best-effort public internet.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
Global VPCs are divided into regional subnets that use Google’s private backbone to communicate as needed. This allows you to easily distribute different parts of your application across multiple regions to enhance uptime, reduce end-user latency or address data sovereignty needs.&lt;br /&gt;
&lt;br /&gt;
With these enhancements, GCP is delivering alternatives for increasingly complex networks and workloads, and making it easier for organizations to create and manage spaces in the cloud that map closely to business requirements. You can learn more about Google Virtual Private Clouds at &lt;a href="https://cloud.google.com/vpc/"&gt;https://cloud.google.com/vpc/&lt;/a&gt;.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/eJdynnHmLx0" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5654166887871239505" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5654166887871239505" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/eJdynnHmLx0/reimagining-virtual-private-clouds.html" title="Reimagining virtual private clouds" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://4.bp.blogspot.com/-ZrUuYvyKx2A/WV3DAz0u5WI/AAAAAAAAEEg/xarp2qAjMfEcVAQ3llSxCLdq7qXIzJr0ACLcBGAs/s72-c/VPC-2.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/reimagining-virtual-private-clouds.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-8858439936084728866</id><published>2017-07-05T06:02:00.000-07:00</published><updated>2017-07-07T19:32:16.917-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><title type="text">Choosing the right compute option in GCP: a decision tree</title><content type="html">&lt;span class="byline-author"&gt;By Terrence Ryan, Developer Advocate and Adam Glick, Product Marketing Manager&lt;br /&gt;
&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
When you start a new project on &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP), one of the earliest decisions you make is which computing service to use: Google Compute Engine, Google Container Engine, App Engine or even Google Cloud Functions and Firebase.&lt;br /&gt;
&lt;br /&gt;
GCP offers a range of compute services, from those that give users full control (i.e., Compute Engine) to the highly abstracted (i.e., Firebase and Cloud Functions), with Google taking care of more and more of the management and operations along the way.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/-ydX6eSNBEHU/WWBEIq4e9qI/AAAAAAAAEE8/RZUBP6boui8FF5ZdCUAeZIKrDP3mfQbXgCLcBGAs/s1600/Screen%2BShot%2B2017-07-07%2Bat%2B7.31.19%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="407" data-original-width="1600" height="162" src="https://1.bp.blogspot.com/-ydX6eSNBEHU/WWBEIq4e9qI/AAAAAAAAEE8/RZUBP6boui8FF5ZdCUAeZIKrDP3mfQbXgCLcBGAs/s640/Screen%2BShot%2B2017-07-07%2Bat%2B7.31.19%2BPM.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
Here’s how many long-time readers of our blog think about GCP compute options. If you're used to managing VMs and want a similar experience in the cloud, pick Compute Engine. If you use containers and need to coordinate more than one container in your solution, you can abstract away some of the necessary management overhead by using Container Engine. If you want to focus on your code and avoid the infrastructure pieces entirely, use App Engine. Finally, if you want to focus purely on code and build microservices that expose API endpoints for your applications, use Firebase and Cloud Functions.&lt;br /&gt;
&lt;br /&gt;
Over the years, you've told us that this model works great if you have no constraints, but can be challenging if you do. We’ve heard your feedback, and propose another way to choose your compute options: a constraint-based set of questions. (Of course, these questions capture only a small slice of any real project’s requirements.) &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;1. Are you building a mobile or HTML application that does its heavy lifting, processing-wise, on the client?&lt;/b&gt; If you're building a thick client that only relies on a backend for synchronization and/or storage, Firebase is a great option. Firebase allows you to store complex NoSQL documents (or objects if that’s how you think of them) and files using a very easy-to-use API and client available for iOS, Android and Javascript. There’s also a REST API for access from other platforms. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;2. Are you building a system based more on events than user interaction? In other words, are you building an app that responds to uploaded files, or maybe logins to other applications?&lt;/b&gt; Are you already looking at “serverless” or “Functions as a Service” solutions? Look no further than Cloud Functions. Cloud Functions allows you to write Javascript functions that run on Node.js and that can call any one of our APIs including Cloud Vision, Translate, Cloud Storage or over 100 others. With Cloud Functions, you can build complex individual functions that get exposed as microservices to take advantage of all our services without having to maintain systems and glue them all together. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;3. Does your solution already exist somewhere else? Does it include licensed software? Does it require anything other than HTTP/S? &lt;/b&gt;If you answered “no,” App Engine is worth a look. App Engine is a serverless solution that runs your code on our infrastructure and charges you only for what you use. We scale it up or down for you depending on demand. In addition, App Engine has access to all the Google SDKs available so you can take advantage of the full Google Cloud ecosystem. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;4. Are you looking to build a container-based system? Do you require orchestration?&lt;/b&gt;  If you're building a multi-container solution, orchestration becomes a consideration. Container orchestration is a service that handles deployment, redundancy and load distribution of your containers. One of the most mature, and popular, orchestrators is Kubernetes. If you're considering using Kubernetes on GCP, you should just use Container Engine. (In fact, it's worth considering wherever you plan to run Kubernetes.) Container Engine reduces building a Kubernetes solution to a single click. Additionally, it auto-scales Kubernetes cluster members, allowing you to build Kubernetes solutions that grow and shrink based on demand. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;5. Are you building a stateful system? Are you looking to use GPUs in your solution? Are you building a non-Kubernetes container-based solution? Are you migrating an existing on-prem solution to the cloud? Are you using licensed software? Do you need a custom kernel or arbitrary OS? Have you not found another solution to meet your needs?&lt;/b&gt; If you answered “yes” to any of these questions, you’re probably going to need to run your solution on virtual machines on Compute Engine. Compute Engine is our most flexible computing product, and allows you the most freedom to configure and manage your VMs however you like. &lt;br /&gt;
&lt;br /&gt;
Put all of these questions together and you get the following flowchart:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-tRZWsjc86NU/WVx27L3w26I/AAAAAAAAEEQ/ktXObqRCaS4O0pZdCsdPW1D71zPUaDDEgCLcBGAs/s1600/compute-options-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1600" data-original-width="629" src="https://3.bp.blogspot.com/-tRZWsjc86NU/WVx27L3w26I/AAAAAAAAEEQ/ktXObqRCaS4O0pZdCsdPW1D71zPUaDDEgCLcBGAs/s1600/compute-options-3.png" /&gt;&lt;/a&gt;&lt;/div&gt;
This is by no means a comprehensive decision tree, and each one of our products supports a wider range of use cases than is presented here. But this should be a good guide to get you started. &lt;br /&gt;
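For readers who think in code, the flowchart can be sketched as a plain function. The predicate names below paraphrase the five questions; this is an illustrative sketch, not an official tool, and real projects will have constraints it doesn't capture.

```python
def choose_compute_option(q):
    """q: dict of yes/no answers to the five questions; missing keys mean 'no'."""
    if q.get("client_heavy_app"):                 # Q1: thick mobile/HTML client
        return "Firebase"
    if q.get("event_driven"):                     # Q2: events over user interaction
        return "Cloud Functions"
    if not (q.get("existing_solution") or q.get("licensed_software")
            or q.get("non_http_protocols")):      # Q3: all answered "no"
        return "App Engine"
    if q.get("containers") and q.get("needs_orchestration"):  # Q4
        return "Container Engine"
    return "Compute Engine"                       # Q5: everything else

print(choose_compute_option({"event_driven": True}))      # Cloud Functions
print(choose_compute_option({"licensed_software": True})) # Compute Engine
```

As in the flowchart, the questions are asked in order, and the first constraint that matches decides the product.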
&lt;br /&gt;
To find out more about our computing solutions, please check out &lt;a href="https://cloud.google.com/products/compute/" target="_blank"&gt;Computing on Google Cloud Platform&lt;/a&gt; and then try it out for yourself today with $300 in free credits when you &lt;a href="https://console.cloud.google.com/freetrial?_ga=1.42138917.1875422736.1494577439" target="_blank"&gt;sign up&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Happy building!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/EEsGpAd70RI" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/8858439936084728866" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/8858439936084728866" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/EEsGpAd70RI/choosing-the-right-compute-option-in-GCP-a-decision-tree.html" title="Choosing the right compute option in GCP: a decision tree" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/-ydX6eSNBEHU/WWBEIq4e9qI/AAAAAAAAEE8/RZUBP6boui8FF5ZdCUAeZIKrDP3mfQbXgCLcBGAs/s72-c/Screen%2BShot%2B2017-07-07%2Bat%2B7.31.19%2BPM.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/07/choosing-the-right-compute-option-in-GCP-a-decision-tree.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-3452253305382433067</id><published>2017-06-30T06:00:00.001-07:00</published><updated>2017-06-30T07:47:51.690-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><title type="text">Solution guide: Building connected vehicle apps with Cloud IoT Core</title><content 
type="html">&lt;span class="byline-author"&gt;By Charles Baer, Solutions Architect&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
With the Internet of Things (IoT), vehicles are evolving from self-contained commodities focused on transportation to sophisticated, Internet-connected endpoints often capable of two-way communication. The new data streams generated by modern connected vehicles drive innovative business models such as usage-based insurance, enable new in-vehicle experiences and build the foundation for advances such as autonomous driving and vehicle-to-vehicle (V2V) communication. &lt;br /&gt;
Through all this, we here at Google Cloud are excited to help make this world a reality. We recently published a &lt;a href="https://cloud.google.com/solutions/designing-connected-vehicle-platform" target="_blank"&gt;solution guide&lt;/a&gt; that describes how various &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) services fit into the picture.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;A data deluge&lt;/h3&gt;A modern vehicle can produce upwards of 560 GB of data per day. This deluge of data represents both incredible opportunities and daunting challenges for the platforms that connect and manage vehicle data, including:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;&lt;b&gt;Device management&lt;/b&gt;. Connecting devices to any platform requires authentication, authorization, the ability to push software updates, configuration and monitoring. These services must scale to millions of devices and offer constant availability.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Data ingestion&lt;/b&gt;. Messages must be reliably received, processed and stored.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Data analytics&lt;/b&gt;. Complex analysis of the time-series data generated by devices must be used to gain insights into events, tolerances, trends and possible failures.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Applications&lt;/b&gt;. Business-level application logic must be developed and integrated with existing data sources that may come from a third party or exist in on-premises data centers.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Predictive models&lt;/b&gt;. In order to predict business-level outcomes, predictive models based on current and historical data must be developed.&lt;/li&gt;
&lt;/ul&gt;&lt;br /&gt;
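To put the scale challenge above in concrete terms, here is a back-of-the-envelope sketch in Python. The 560 GB/day figure comes from the text above; the fleet size is a made-up assumption for illustration.

```python
# Back-of-the-envelope scale check; fleet_size is an invented assumption.
per_vehicle_gb_per_day = 560
fleet_size = 100_000

daily_fleet_tb = per_vehicle_gb_per_day * fleet_size / 1000
per_vehicle_mbps = per_vehicle_gb_per_day * 8 * 1000 / 86400  # average Mbit/s

print(daily_fleet_tb)              # 56000.0 TB ingested per day, fleet-wide
print(round(per_vehicle_mbps, 1))  # about 51.9 Mbit/s sustained per vehicle
```

Even a modest fleet at this data rate quickly reaches petabyte scale, which is why managed ingestion and storage services matter here.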
GCP services, including the recently launched &lt;a href="https://cloud.google.com/iot-core/" target="_blank"&gt;Cloud IoT Core&lt;/a&gt;, provide a robust computing platform that takes advantage of Google’s &lt;a href="https://cloud.google.com/security/" target="_blank"&gt;end-to-end security model&lt;/a&gt;. Let’s take a look at how we can implement a connected vehicle platform using Google Cloud services.&lt;br /&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://2.bp.blogspot.com/-OQT8MqP9yXQ/WVWiUvRU69I/AAAAAAAAED8/wwWVD_P2wbE30d-dJl9KriWcFuTbFniWgCLcBGAs/s1600/connected-vehicle.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="1600" data-original-width="924" src="https://2.bp.blogspot.com/-OQT8MqP9yXQ/WVWiUvRU69I/AAAAAAAAED8/wwWVD_P2wbE30d-dJl9KriWcFuTbFniWgCLcBGAs/s1600/connected-vehicle.png" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;(click to enlarge)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;&lt;b&gt;&lt;br /&gt;
Device Management&lt;/b&gt;: To handle secure device management and communications, Cloud IoT Core makes it easy for you to securely connect your globally distributed devices to GCP and centrally manage them. IoT Core Device Manager provides authentication and authorization, while IoT Core Protocol Bridge enables the messaging between the vehicles and the platform.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;Data Ingestion&lt;/b&gt;: Cloud Pub/Sub provides a scalable data ingestion point that can handle the large data volumes generated by vehicles sending GPS location, engine RPM or images. Cloud Bigtable’s scalable storage services are well-suited for time-series data storage and analytics.&lt;br /&gt;
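As a concrete illustration of this ingestion path, the sketch below builds a hypothetical telemetry payload (all field names and values are invented) and a Bigtable-style row key using the common time-series pattern of a per-vehicle prefix plus a reversed timestamp, so the newest readings sort first.

```python
import json

# Hypothetical telemetry message a vehicle might publish to a Pub/Sub topic;
# field names and values are invented for illustration.
reading = {
    "vehicle_id": "VIN1234567890",
    "timestamp": 1499472000,              # Unix seconds
    "gps": {"lat": 37.42, "lon": -122.08},
    "engine_rpm": 2450,
}
payload = json.dumps(reading).encode("utf-8")  # Pub/Sub message bodies are bytes

# A common Bigtable row-key pattern for time series: per-vehicle prefix plus
# a reversed timestamp, so a vehicle's newest readings sort first in a scan.
MAX_TS = 10**10
row_key = "{}#{}".format(reading["vehicle_id"], MAX_TS - reading["timestamp"])
print(row_key)  # VIN1234567890#8500528000
```

The row-key design matters because Bigtable scans are lexicographic: a well-chosen prefix keeps one vehicle's history contiguous while spreading writes across the fleet.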
&lt;br /&gt;
&lt;b&gt;Data Analytics&lt;/b&gt;: Cloud Dataflow can process data pipelines that combine the vehicle device data with corporate vehicle and customer data, then store the combined data in BigQuery. BigQuery provides a powerful analytics engine as-a-service and integrates with common visualization tools such as Tableau, Looker and Qlik.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;Applications&lt;/b&gt;: Compute Engine, Container Engine and App Engine all provide computing components for a connected vehicle platform. Compute Engine offers a range of different machine types that make it an ideal service for any third-party integration components. Container Engine runs and manages containers, which provide a high degree of flexibility and scalability thanks to their microservices architecture. Finally, App Engine is a scalable serverless platform ideal for consumer mobile and web application frontend services. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;Predictive Models&lt;/b&gt;: &lt;a href="https://www.tensorflow.org/" target="_blank"&gt;TensorFlow&lt;/a&gt; and &lt;a href="https://cloud.google.com/ml-engine/" target="_blank"&gt;Cloud Machine Learning Engine&lt;/a&gt; provide a sophisticated modeling framework and scalable execution environment. TensorFlow provides the framework to develop custom deep neural network models and is optimized for performance, flexibility and scale&amp;nbsp;&lt;span style="font-family: Arial; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;all of which are critical when leveraging IoT-generated data. Machine Learning Engine provides a scalable environment to train TensorFlow models using specialized Google hardware, including GPUs and TPUs.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;Summary&lt;/h3&gt;Vehicles are becoming sophisticated IoT devices with built-in mobile technology platforms to which third parties can connect and offer advanced services. GCP provides a secure, robust and scalable platform to connect IoT devices ranging from sophisticated head units to simple, low-powered sensors. You can learn more about the next generation of connected vehicles with GCP by reading the solution paper: &lt;a href="https://cloud.google.com/solutions/designing-connected-vehicle-platform" target="_blank"&gt;Designing a Connected Vehicle Platform on Cloud IoT Core&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/jQ8eGub5LtE" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3452253305382433067" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3452253305382433067" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/jQ8eGub5LtE/solution-guide-building-connected-vehicle-apps-with-Cloud-IoT-Core.html" title="Solution guide: Building connected vehicle apps with Cloud IoT Core" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://2.bp.blogspot.com/-OQT8MqP9yXQ/WVWiUvRU69I/AAAAAAAAED8/wwWVD_P2wbE30d-dJl9KriWcFuTbFniWgCLcBGAs/s72-c/connected-vehicle.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/solution-guide-building-connected-vehicle-apps-with-Cloud-IoT-Core.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-1569426703269605160</id><published>2017-06-29T09:01:00.002-07:00</published><updated>2017-07-13T09:44:18.180-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="CRE" /><title type="text">How SREs find the landmines in a service - CRE life lessons</title><content type="html">&lt;span 
class="byline-author"&gt;By Adrian Hilton, Customer Reliability Engineer&lt;br /&gt;
&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
In &lt;a href="https://cloudplatform.googleblog.com/2017/06/why-should-your-app-get-SRE-support-CRE-life-lessons.html" target="_blank"&gt;Part 1&lt;/a&gt; of this blog post we looked at why an SRE team would or wouldn’t choose to onboard a new application. In this installment, we assume that the service would benefit substantially from SRE support, and look at what needs to be done for SREs to onboard it with confidence.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Onboarding review&lt;/h3&gt;
&lt;br /&gt;
&lt;i&gt;Q: We have a new application that would make sense for SRE to support. Do I just throw it over the wall and tell the SRE team “Here you are; you’re on call for this now, best of luck”?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
That’s a great approach&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;if your goal is failure. At first, your developer team’s assessment of the application’s importance for their support&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;and whether it’s in a supportable state&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;is likely to be rather different from your SRE team’s assessment, and arbitrarily imposing support for a service onto an SRE team is unlikely to work. Think about it&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;you haven’t convinced them that the service is a good use of their time yet, and human nature is that people don’t enthusiastically embrace doing something that they don’t really believe in, so they're unlikely to be active participants in making the service materially more reliable.&lt;br /&gt;
&lt;br /&gt;
At Google, we’ve found that to successfully onboard a service into SRE, the service owner and SRE team must agree to a process for the SRE team to understand and assess the service, and identify critical issues to be resolved upfront (Incidentally, we follow a similar process when deciding whether or not to onboard a Google Cloud customer’s application into our &lt;a href="https://cloudplatform.googleblog.com/2016/10/introducing-a-new-era-of-customer-support-Google-Customer-Reliability-Engineering.html" target="_blank"&gt;Customer Reliability Engineering&lt;/a&gt; program). We typically split this into two phases:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;SRE entrance review&lt;/b&gt;: where an SRE team assesses whether a developer-supported service should be onboarded by SRE, and what the onboarding preconditions should be.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;SRE onboarding/takeover&lt;/b&gt;: where a dev and SRE team agree in principle that the SRE team should take on primary operational responsibility for a service, and start negotiating the exact conditions for takeover (how and when the SREs will onboard the service).&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
It’s important to remember the motivations of the various parties in this process:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;Developers want someone else to pick up support for the service, and make it run as well as possible. They want users to feel that the service is working properly, otherwise they'll move to a service run by someone else.&lt;/li&gt;
&lt;li&gt;The SRE team wants to be sure that they're not being “sold a pup” with a hard-to-support service, and have a vision for making the production service lower in toil and more robust.&lt;/li&gt;
&lt;li&gt;Meanwhile, company management wants to reduce the number of embarrassing service outages, as long as it doesn’t cost them too much in engineer time.&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;h3&gt;
The SRE entrance review&lt;/h3&gt;
During an SRE entrance review (SER), also referred to as a Production Readiness Review (PRR), the SRE team takes the measure of a service currently running in production. The purpose of an SER is to:&lt;br /&gt;
&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;Assess how the service would benefit from SRE ownership&lt;/li&gt;
&lt;li&gt;Identify service design, implementation and operational deficiencies that could be a barrier to SRE takeover&lt;/li&gt;
&lt;li&gt;And, if SRE ownership is determined to be beneficial, identify the bug fixes, process changes and changes in service behavior needed before onboarding the service&lt;/li&gt;
&lt;/ol&gt;
&lt;br /&gt;
An SRE team typically designates a single person or a small subset of the team to familiarize themselves with the service and evaluate its fitness for takeover. &lt;br /&gt;
&lt;br /&gt;
The SRE looks at the service as-is: its performance, monitoring, associated operational processes and recent outage history, and asks themselves: “If I were on-call for this service right now, what are the problems I’d want to fix?” They might be visible problems, such as too many pages happening per day, or potential problems such as a dependency on a single machine that will inevitably fail some day. &lt;br /&gt;
&lt;br /&gt;
A critical part of any SRE analysis is the service’s &lt;a href="https://cloudplatform.googleblog.com/2017/01/availability-part-deux--CRE-life-lessons.html" target="_blank"&gt;Service Level Objectives&lt;/a&gt; (SLOs), and associated Service Level Indicators (SLIs). SREs assume that if a service is meeting its SLOs then paging alerts should be rare or non-existent; conversely, if the service is in danger of falling out of SLO then paging alerts are loud and actionable. If these expectations don’t match reality, the SRE team will focus on changing either the SLO definitions or the SLO measurements.&lt;br /&gt;
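The SLO arithmetic implied above can be made concrete with invented numbers: this sketch checks a measured SLI against an availability SLO and computes how much of the monthly error budget remains.

```python
# Illustrative numbers only: compare a measured SLI against an
# availability SLO and compute the remaining monthly error budget.
slo = 0.999                     # target: 99.9% of requests succeed
total_requests = 10_000_000     # requests served this month
failed_requests = 7_200

sli = 1 - failed_requests / total_requests    # measured availability
error_budget = total_requests * (1 - slo)     # failures the SLO allows
budget_left = (error_budget - failed_requests) / error_budget

print(round(sli, 5))          # 0.99928, still above the 0.999 SLO
print(round(budget_left, 2))  # 0.28, i.e. 28% of the error budget remains
```

In a setup like this, paging alerts would fire on the rate of budget burn rather than on individual failures, which is what keeps pages rare while the service is within SLO.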
&lt;br /&gt;
In the review phase, SREs aim to understand:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;what the service does&lt;/li&gt;
&lt;li&gt;day-to-day service operation (traffic variation, releases, experiment management, config pushes)&lt;/li&gt;
&lt;li&gt;how the service tends to break and how this manifests in alerts&lt;/li&gt;
&lt;li&gt;rough edges in monitoring and alerting&lt;/li&gt;
&lt;li&gt;where the service configuration diverges from the SRE team’s practices&lt;/li&gt;
&lt;li&gt;major operational risks for the service&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;br /&gt;
The SRE team also considers:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;whether the service follows SRE team best practices, and if not, how to retrofit it&lt;/li&gt;
&lt;li&gt;how to integrate the service with the SRE team’s existing tools and processes&lt;/li&gt;
&lt;li&gt;the desired engagement model and separation of responsibilities between the SRE team and the SWE team. When debugging a critical production problem, at what point should the SRE on-call page the developer on-call?&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;h3&gt;
The SRE takeover&lt;/h3&gt;
&lt;br /&gt;
The SRE entrance review typically produces a prioritized list of issues with the service that need to be fixed. Most will be assigned to the development team, but the SRE team may be better suited for others. In addition, not all issues are blockers to SRE takeover (there might be design or architectural changes that SREs recommend for service robustness that could take many months to implement).&lt;br /&gt;
&lt;br /&gt;
There are four main axes of improvement for a service in an onboarding process: extant bugs, reliability, automation and monitoring/alerting. On each axis there will be issues which will have to be solved before takeover (“blockers”), and others which would be beneficial to solve but not critical.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;&lt;i&gt;Extant bugs&lt;/i&gt;&lt;/b&gt;&lt;br /&gt;
The primary source of issues blocking SRE takeover tends to be action items from the service’s previous postmortems. The SRE team expects to read recent postmortems and verify that a) the proposed actions to resolve the outage root causes are what they’d expect and b) those actions are actually complete. Further, the absence of recent postmortems is a red flag for many SRE teams.&lt;br /&gt;
&lt;b&gt;&lt;i&gt;Reliability&lt;/i&gt;&lt;/b&gt;&lt;br /&gt;
Some reliability-related change requests might not directly block SRE takeover: many reliability improvements involve design work, significant code changes, a change in back-end integrations or migration off a deprecated infrastructure component, and target the longer-term evolution of the system toward a desired reliability increase.&lt;br /&gt;
&lt;br /&gt;
The reliability-related changes that do block takeover are those that mitigate or remove issues known to cause significant downtime, or that mitigate risks expected to cause an outage in the future. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;&lt;i&gt;Automation&lt;/i&gt;&lt;/b&gt;&lt;br /&gt;
This is a key concern for SREs considering takeover of a service: how much manual work needs to be done to “operate” the service on a week-to-week basis, including configuration pushes, binary releases and similar time-sinks. &lt;br /&gt;
&lt;br /&gt;
The best way to find out what would be most useful to automate is for the SRE to get practical experience of the developer’s world. This means that the SREs should shadow the developer team’s typical week and get a feel for what routine manual work their on-call actually involves.&lt;br /&gt;
&lt;br /&gt;
If there’s excessive manual work involved in supporting a service, automation usually solves the problem. &lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;&lt;i&gt;Monitoring/alerting&lt;/i&gt;&lt;/b&gt;&lt;br /&gt;
The dominant concern with most services undergoing SRE takeover is the paging rate&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;how many times the service wakes up the on-call staff. At Google, we adhere to the “&lt;a href="https://www.wired.com/2016/04/google-ensures-services-almost-never-go/" target="_blank"&gt;Treynor&lt;/a&gt; Maximum” of an average of two incidents per 12-hour shift (for an on-call team as a whole). Thus, an SRE team looks at the average incident load of a new service over the past month or so to see how it fits with their current incident load.&lt;br /&gt;
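The incident-load check described above amounts to simple arithmetic; the numbers here are invented for illustration.

```python
# Rough check against the "Treynor Maximum": an average of at most two
# incidents per 12-hour on-call shift. All numbers are made up.
incidents_last_30_days = 95
shifts = 30 * 24 / 12            # two 12-hour shifts per day over a month

incidents_per_shift = incidents_last_30_days / shifts
within_budget = incidents_per_shift <= 2.0

print(round(incidents_per_shift, 2))  # 1.58
print(within_budget)                  # True
```

A service that fails this check is usually asked to burn down its paging rate (see the three causes below) before takeover is reconsidered.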
&lt;br /&gt;
Generally, excessive paging rates are the result of one of three things:&lt;br /&gt;
&lt;br /&gt;
&lt;ol&gt;
&lt;li&gt;Paging on something that’s not intrinsically important, e.g., a task restart or a disk hitting 80% capacity. Instead, downgrade the page to a bug (if it’s not urgent) or eliminate it entirely. Moving to symptom-based monitoring (“users are actually seeing problems”) can help improve this situation.&lt;/li&gt;
&lt;li&gt;Page storms where one small incident/outage generates many pages. Try to group related pages for an incident into a single outage, to get a clearer picture of the system’s outage metrics.&lt;/li&gt;
&lt;li&gt;A system that’s having too many genuine problems. In this case SRE takeover in the near future is unlikely, but SREs may be able to help diagnose and resolve the root causes of the problems.&lt;/li&gt;
&lt;/ol&gt;
SREs generally want to see several weeks of low paging levels before agreeing to take over a service.&lt;br /&gt;
&lt;br /&gt;
More general ways to improve the service might include:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;integrating the service with standard SRE tools and practices e.g., &lt;a href="https://cloudplatform.googleblog.com/2016/12/using-load-shedding-to-survive-a-success-disaster-CRE-life-lessons.html" target="_blank"&gt;load shedding&lt;/a&gt;, release processes and configuration pushes&lt;/li&gt;
&lt;li&gt;extending and improving playbook entries to rely less on the developer team’s tribal knowledge&lt;/li&gt;
&lt;li&gt;aligning the service’s configurations with the SRE team’s common languages and infrastructure&lt;/li&gt;
&lt;/ul&gt;
Ultimately, an SRE entrance review should produce guidance that's useful to developers even if the SRE team declines to onboard the service; in that event, the review should still help developers make their service easier to operate and more reliable.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Smoothing the path&lt;/h3&gt;
&lt;br /&gt;
SREs need to understand the developers’ service, but SREs and developers also need to understand each other. If the developer team has not worked with SREs before, it can be useful for SREs to give “lightning” talks to the developers on SRE topics such as monitoring, canarying, rollouts and data integrity. This gives the developers a better idea of why the SREs are asking particular questions and pushing particular concerns.&lt;br /&gt;
&lt;br /&gt;
One of Google’s SREs found that it was useful to “pretend that I am a dev team novice, and have the developer take me through the codebase, explain the history, show me where the main() function is, and so on.”&lt;br /&gt;
&lt;br /&gt;
Similarly, SREs should understand the developers’ point of view and experience. During the SER, at least one SRE should sit with the developers, attend their weekly meetings and stand-ups, informally shadow their on-call and help out with day-to-day work to get a “big picture” view of the service and how it runs. It also helps remove distance between the two teams. Our experience has been that this is so positive in improving the developer-SRE relationship that the practice tends to continue even after the SER has finished.&lt;br /&gt;
&lt;br /&gt;
Last but not least, the SRE entrance review document should state clearly whether the service merits SRE takeover, and why (or why not).&lt;br /&gt;
&lt;br /&gt;
At this point, the developer team and SRE team both understand what needs to be done to make a service suitable for SRE takeover, if it is indeed feasible at all. In Part 3 of this blog post, we’ll look at how to proceed with a service takeover so that both teams benefit from the process.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;See part 3 of this series, "&lt;a href="https://cloudplatform.googleblog.com/2017/07/making-the-most-of-an-SRE-service-takeover-CRE-life-lessons.html" target="_blank"&gt;Making the most of an SRE service takeover - CRE life lessons&lt;/a&gt;&lt;span id="goog_683055871"&gt;&lt;/span&gt;&lt;a href="https://www.blogger.com/"&gt;&lt;/a&gt;&lt;span id="goog_683055872"&gt;&lt;/span&gt;."&lt;/i&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/Yl_Q_0R-KMY" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/1569426703269605160" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/1569426703269605160" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/Yl_Q_0R-KMY/how-SREs-find-the-landmines-in-a-service-CRE-life-lessons.html" title="How SREs find the landmines in a service - CRE life lessons" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/how-SREs-find-the-landmines-in-a-service-CRE-life-lessons.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2431105323277138043</id><published>2017-06-28T09:05:00.000-07:00</published><updated>2017-06-28T09:05:04.811-07:00</updated><category 
scheme="http://www.blogger.com/atom/ns#" term="Developer Tools &amp; Insights" /><title type="text">Google App Engine standard now supports Java 8</title><content type="html">&lt;span class="byline-author"&gt;By Amir Rouzrokh, Product Manager &lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/-fMzsI_Vpkx0/WVL0Kt8evjI/AAAAAAAAEDs/d5XIniNsL60xtMyB5MrHsnhufhdi5knMACLcBGAs/s1600/java-8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="256" data-original-width="256" height="200" src="https://1.bp.blogspot.com/-fMzsI_Vpkx0/WVL0Kt8evjI/AAAAAAAAEDs/d5XIniNsL60xtMyB5MrHsnhufhdi5knMACLcBGAs/s200/java-8.png" width="200" /&gt;&lt;/a&gt;&lt;/div&gt;
Java 8 support has been one of the &lt;a href="https://issuetracker.google.com/savedsearches/559750" target="_blank"&gt;top requests&lt;/a&gt; from the App Engine developer community. Today, we're excited to announce the beta availability of &lt;a href="https://cloud.google.com/appengine/docs/java/" target="_blank"&gt;Java 8 on App Engine standard environment&lt;/a&gt;. Supporting Java 8 on App Engine standard environment is a significant milestone. In addition to support for an updated JDK and &lt;a href="http://www.eclipse.org/jetty/" target="_blank"&gt;Jetty 9&lt;/a&gt; with Servlet 3.1 specs, this launch enables enhanced application performance. Further, this release improves the developer experience with full &lt;a href="http://www.grpc.io/" target="_blank"&gt;gRPC&lt;/a&gt; and &lt;a href="http://googlecloudplatform.github.io/google-cloud-java/0.19.0/index.html" target="_blank"&gt;Google Cloud Java Library support&lt;/a&gt;, and we have finally removed the &lt;a href="https://cloud.google.com/appengine/docs/standard/java/runtime-java8#enhancements_and_upgrades_in_the_java_8_runtime" target="_blank"&gt;class whitelist&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
App Engine standard now &lt;b&gt;fully supports&lt;/b&gt; off-the-shelf frameworks such as&lt;a href="https://projects.spring.io/spring-boot/" target="_blank"&gt; Spring Boot&lt;/a&gt; and alternative languages like &lt;a href="https://kotlinlang.org/" target="_blank"&gt;Kotlin&lt;/a&gt; or &lt;a href="http://groovy-lang.org/" target="_blank"&gt;Apache Groovy&lt;/a&gt;. At the same time, the new runtime environment still provides all the great benefits developers have come to depend on and love about App Engine standard, including rapid deployments in seconds, near instantaneous scale up and scale down (including to zero instances when no traffic is detected), native microservices and versioning support, traffic splitting between any two languages (including Java 7 and Java 8), local development tooling and &lt;a href="https://cloud.google.com/appengine/docs/standard/java/apis" target="_blank"&gt;App Engine APIs&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
Developer tooling is critical to the Java community. The new runtime supports &lt;a href="https://cloud.google.com/stackdriver/" target="_blank"&gt;Stackdriver&lt;/a&gt;, &lt;a href="https://cloud.google.com/sdk/" target="_blank"&gt;Cloud SDK&lt;/a&gt;, &lt;a href="https://cloud.google.com/appengine/docs/standard/java/tools/using-maven" target="_blank"&gt;Maven&lt;/a&gt;, &lt;a href="https://cloud.google.com/appengine/docs/standard/java/tools/gradle" target="_blank"&gt;Gradle&lt;/a&gt;, &lt;a href="https://cloud.google.com/tools/intellij/docs/" target="_blank"&gt;IntelliJ&lt;/a&gt;&amp;nbsp;and &lt;a href="https://cloud.google.com/eclipse/docs/" target="_blank"&gt;Eclipse&lt;/a&gt; plugins. In particular, the IntelliJ and Eclipse plugins provide a modern experience optimized for developer flow. Watch the &lt;a href="https://cloudnext.withgoogle.com/" target="_blank"&gt;Google Cloud Next 2017&lt;/a&gt; session “&lt;a href="https://www.youtube.com/watch?v=P9TMB_3-WxA" target="_blank"&gt;Power your Java Workloads on Google Cloud Platform&lt;/a&gt;” to learn more about the new IntelliJ plugin, Stackdriver Debugger, traffic splitting, auto scaling and other App Engine features.&lt;br /&gt;
&lt;br /&gt;
As always, developers can choose between App Engine standard and flexible environments&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;deploy your application to one environment now, and another environment later. Or deploy to both simultaneously, mixing and matching environments as well as languages. (Here’s a guide on &lt;a href="https://cloud.google.com/appengine/docs/the-appengine-environments" target="_blank"&gt;how to choose between App Engine flexible and standard environments&lt;/a&gt;.)&lt;br /&gt;
&lt;br /&gt;
Below is a one-minute video that demonstrates how easy it is to &lt;a href="https://www.youtube.com/watch?v=RaYeOlT1jHQ&amp;amp;feature=youtu.be" target="_blank"&gt;deploy your first application to App Engine&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;iframe allowfullscreen="" frameborder="0" height="360" src="https://www.youtube.com/embed/RaYeOlT1jHQ" width="640"&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;br /&gt;
To get started with Java 8 for App Engine standard, follow this&amp;nbsp;&lt;a href="https://cloud.google.com/appengine/docs/standard/java/quickstart-java8" target="_blank"&gt;quickstart&lt;/a&gt;. Or if you’re a current App Engine standard Java 7 user, upgrade to the new runtime by adding &lt;runtime&gt;java8&lt;/runtime&gt; to your appengine-web.xml file, as described in the video above. Be sure to deploy the change as a new version of your service, direct a small portion of your traffic to it and monitor for errors.&lt;br /&gt;
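For reference, a minimal sketch of the appengine-web.xml change mentioned above; a real deployment descriptor will contain other elements, and only the runtime element is the addition required for the upgrade:

```xml
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <runtime>java8</runtime>
</appengine-web-app>
```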
&lt;br /&gt;
You can find samples of all the code in the documentation &lt;a href="https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/appengine-java8" target="_blank"&gt;here&lt;/a&gt;. For sample applications running Kotlin, Spring-Boot and SparkJava, check out &lt;a href="https://github.com/GoogleCloudPlatform/getting-started-java/tree/master/appengine-standard-java8" target="_blank"&gt;this repository&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
We've been investing heavily in language and infrastructure updates for both App Engine environments (we recently announced the general availability of &lt;a href="https://cloudplatform.googleblog.com/2017/03/your-favorite-languages-now-on-Google-App-Engine.html" target="_blank"&gt;Java 8 on App Engine flexible&lt;/a&gt; and &lt;a href="https://cloudplatform.googleblog.com/2017/06/enhancing-the-Python-experience-on-App-Engine.html" target="_blank"&gt;Python upgrades&lt;/a&gt;), with many more to come. We’d love to hear from you during the Java 8 beta period and beyond. Submit your feedback on the &lt;a href="https://github.com/GoogleCloudPlatform/app-maven-plugin" target="_blank"&gt;Maven&lt;/a&gt;, &lt;a href="https://github.com/GoogleCloudPlatform/app-gradle-plugin" target="_blank"&gt;Gradle&lt;/a&gt;, &lt;a href="https://github.com/GoogleCloudPlatform/google-cloud-intellij" target="_blank"&gt;IntelliJ&lt;/a&gt;&amp;nbsp;and &lt;a href="https://github.com/GoogleCloudPlatform/google-cloud-eclipse" target="_blank"&gt;Eclipse&lt;/a&gt; plugins, as well as the &lt;a href="https://github.com/GoogleCloudPlatform/google-cloud-java" target="_blank"&gt;Google Cloud Java Libraries&lt;/a&gt; on their respective GitHub repositories. &lt;br /&gt;
&lt;br /&gt;
Happy Coding!&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/TQvXBQAUI0A" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2431105323277138043" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2431105323277138043" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/TQvXBQAUI0A/Google-App-Engine-standard-now-supports-Java-8.html" title="Google App Engine standard now supports Java 8" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/-fMzsI_Vpkx0/WVL0Kt8evjI/AAAAAAAAEDs/d5XIniNsL60xtMyB5MrHsnhufhdi5knMACLcBGAs/s72-c/java-8.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/Google-App-Engine-standard-now-supports-Java-8.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-3917203126879226998</id><published>2017-06-27T09:00:00.000-07:00</published><updated>2017-06-27T09:00:00.449-07:00</updated><title type="text">Enterprise identity made easy in Google Cloud Platform with Cloud Identity</title><content type="html">&lt;span class="byline-author"&gt;By Zack Ontiveros, Product Manager, Google Cloud Identity&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
As an organization, you want to be able to control how your users access Google’s products and other services online. Millions of &lt;a href="https://gsuite.google.com/" target="_blank"&gt;G Suite&lt;/a&gt; customers already rely on &lt;a href="https://static.googleusercontent.com/media/get.google.com/en//cloudidentity/whitepaper.pdf" target="_blank"&gt;Google Cloud’s identity services&lt;/a&gt; to secure their online identities, perform single sign-on and enforce multi-factor authentication. We're excited to announce that the same identity management features used for years in G Suite will be made available for free to &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) customers to manage their developers’ identities online with Cloud Identity. &lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Introducing Cloud Identity support in GCP&lt;/h3&gt;
Starting today, we’re rolling out native support for Cloud Identity right into GCP. Cloud Identity makes it easy to provision and manage users and groups directly from the Google Admin Console. Once you sign up for Cloud Identity, you'll also get access to the Cloud Resource Manager to administer your new GCP organization. &lt;a href="https://cloud.google.com/resource-manager/" target="_blank"&gt;Cloud Resource Manager&lt;/a&gt; allows you to centrally manage all of your organization's GCP projects and IAM roles. With Cloud Identity and Cloud Resource Manager, you now have full control over how your organization uses Google Cloud.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-gmNiLysVBY4/WUqfIPCaIJI/AAAAAAAAEDM/dhdN3ZprYqc2DTA8fWNZrbxzMNKrkwdqQCLcBGAs/s1600/cloud-identity.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="847" data-original-width="1600" height="338" src="https://4.bp.blogspot.com/-gmNiLysVBY4/WUqfIPCaIJI/AAAAAAAAEDM/dhdN3ZprYqc2DTA8fWNZrbxzMNKrkwdqQCLcBGAs/s640/cloud-identity.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;h3&gt;
Try it today &lt;/h3&gt;
&lt;br /&gt;
To start using Cloud Identity, head to the Cloud Console to find the new “Identity” section under Cloud IAM. Here you'll be able to find the Cloud Identity sign up flow, where you'll create your new Cloud Identity admin account and Cloud Identity organization. For more information, check out our &lt;a href="https://support.google.com/a/answer/7319251" target="_blank"&gt;Getting Started Guide&lt;/a&gt;.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/cxmfM-X9jWY" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3917203126879226998" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3917203126879226998" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/cxmfM-X9jWY/enterprise-identity-made-easy-in-GCP-with-Cloud-Identity.html" title="Enterprise identity made easy in Google Cloud Platform with Cloud Identity" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://4.bp.blogspot.com/-gmNiLysVBY4/WUqfIPCaIJI/AAAAAAAAEDM/dhdN3ZprYqc2DTA8fWNZrbxzMNKrkwdqQCLcBGAs/s72-c/cloud-identity.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/enterprise-identity-made-easy-in-GCP-with-Cloud-Identity.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-5871037774829623333</id><published>2017-06-26T12:04:00.001-07:00</published><updated>2017-06-26T12:04:30.449-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Developer Tools &amp; Insights" /><title type="text"> Versioning APIs at Google</title><content type="html">&lt;span class="byline-author"&gt;By Dan Ciruli, Product Manager&lt;br /&gt;
&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Versioning APIs is difficult, and everyone in the API space has opinions about how to do it properly. It’s also almost impossible to avoid. As teams build new software, occasionally they need to get rid of a feature (or provide that feature in a different way). Versioning gives your API users a reliable way to understand semantic changes in the API. While some companies will go to great lengths to never change a version, we don’t have that luxury: with the number of APIs we operate, the number of teams developing them here and the number of developers relying on them, we version APIs so developers know what to expect from them.&lt;br /&gt;
&lt;br /&gt;
Versioning APIs should be done according to a consistent and comprehensive policy. At Google, we follow the general principles of &lt;a href="http://semver.org/" target="_blank"&gt;semantic versioning&lt;/a&gt; for our APIs. The principles behind semantic versioning are simple: each release gets a version number of the form X.Y, where X indicates the major version and Y the minor version. A new major version indicates a backward-incompatible change, while a new minor version indicates a backward-compatible change. &lt;br /&gt;
&lt;br /&gt;
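The X.Y rule above can be expressed as a small compatibility check. This is our own illustrative sketch, not Google tooling: a served version can satisfy a request when the major versions match exactly and the served minor version is no older than the one the client was built against.

```python
def is_compatible(served, requested):
    """Semantic-versioning rule: majors must match exactly (a new major is
    backward-incompatible); a newer served minor is fine, because minor
    releases only add fields, never remove or rename them."""
    s_major, s_minor = (int(p) for p in served.lstrip("v").split("."))
    r_major, r_minor = (int(p) for p in requested.lstrip("v").split("."))
    return s_major == r_major and s_minor >= r_minor

# A server at 1.4 can satisfy a client built against 1.2, but not 2.0:
print(is_compatible("1.4", "1.2"))  # True
print(is_compatible("1.4", "2.0"))  # False
```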
Our major versions are reflected in the path of our APIs, immediately following the domain. Why? Well, it means that the response from any API URL you call will never rename or drop the fields you rely on. If you're doing a GET on &lt;code&gt;coolcloudapi.googleapis.com/v1/coolthings/12301221312132&lt;/code&gt;, you can rely on the fact that the JSON returned will never have fields renamed or removed.&lt;br /&gt;
&lt;br /&gt;
There are pros and cons to this approach, of course, and many smart people have heated debates over the “right” way to version. Some people prefer encoding a version request in a header, others “keep track” of the version that any individual API consumer is used to getting. We’ve seen and heard them all, and collectively we’ve decided that, for our broad purposes, encoding the major version in the URL makes the most sense most of the time.&lt;br /&gt;
&lt;br /&gt;
Note that the minor version is not encoded in the URL. That means that if we enhance the Cool Cloud API by adding a new field, you may one day be surprised when a call to &lt;code&gt;coolcloudapi.googleapis.com/v1/coolthings/12301221312132&lt;/code&gt; starts returning some additional data. But we’ll never "break" your app by removing fields.&lt;br /&gt;
&lt;br /&gt;
When we release a new major version, we generally write a single backend that can handle both versions. All requests (regardless of version) are sent to the backend, and it uses the version in the path to decide which surface to return.&lt;br /&gt;
&lt;br /&gt;
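The single-backend approach can be sketched as follows. The record, the field names and the v2 rename are invented for illustration and not taken from any real Google API:

```python
# Hypothetical stored record; field names are invented for illustration.
RECORD = {"id": "12301221312132", "display_name": "widget", "color": "blue"}

# Each major version fixes a "surface": which internal fields are exposed,
# and under which wire names. Here a hypothetical v2 renames display_name.
SURFACES = {
    "v1": {"id": "id", "display_name": "display_name", "color": "color"},
    "v2": {"id": "id", "display_name": "name", "color": "color"},
}

def handle(path):
    """One backend serves every major version: the leading path segment
    (e.g. 'v1') selects which surface to project the record onto."""
    version = path.strip("/").split("/")[0]
    surface = SURFACES[version]
    return {wire: RECORD[internal] for internal, wire in surface.items()}

print(handle("/v1/coolthings/12301221312132"))
print(handle("/v2/coolthings/12301221312132"))
```

Because v1 responses are always projected through the frozen v1 surface, new internal fields never leak into old responses as removals or renames, only as additions.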
For customers using &lt;a href="https://cloud.google.com/endpoints/" target="_blank"&gt;Cloud Endpoints&lt;/a&gt;, our API gateway, we’re starting to release the features that will enable you to follow these same versioning practices.&lt;br /&gt;
&lt;br /&gt;
First, our proxy can now serve multiple versions of your API and report the version used by each request. This lets you see how much traffic different versions of your API receive; in practice, that means you can tell how much of your traffic has migrated to a new version.&lt;br /&gt;
&lt;br /&gt;
Second, it can give you a strategy for deprecating and turning down an API&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;by finding out who's still using the old version. But that’s a topic for another blog post for another day.&lt;br /&gt;
&lt;br /&gt;
Versioning is the thorn on the rose of making better APIs. We believe in the approach we’ve adopted internally, and are happy to share the best practices we’ve developed with the community. To get started with Cloud Endpoints, check out our &lt;a href="https://cloud.google.com/endpoints/docs/quickstart-endpoints" target="_blank"&gt;10-minute-quickstart&lt;/a&gt; or &lt;a href="https://cloud.google.com/endpoints/docs/tutorials" target="_blank"&gt;in-depth tutorials&lt;/a&gt;, or reach out to us on our Google Group at google-cloud-endpoints@googlegroups.com&amp;nbsp;&lt;span style="font-family: arial; font-size: 14.6667px; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;we’d love to hear from you!&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/e7_f_B_a3EY" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5871037774829623333" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5871037774829623333" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/e7_f_B_a3EY/versioning-APIs-at-Google.html" title=" Versioning APIs at Google" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/versioning-APIs-at-Google.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-3093583133408467852</id><published>2017-06-23T08:57:00.001-07:00</published><updated>2017-07-10T15:03:32.510-07:00</updated><title type="text">Why should your app get SRE support? - CRE life lessons</title><content type="html">&lt;span class="byline-author"&gt;By Adrian Hilton, Customer Reliability Engineer&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;&lt;b&gt;Editor’s note&lt;/b&gt;: When you start to run many applications or services in your company, you'll start to bump up against the limit of what your primary SRE (or Ops) team can support. In this installment of &lt;a href="https://cloudplatform.googleblog.com/search/label/CRE" target="_blank"&gt;CRE Life Lessons&lt;/a&gt;, we're going to look at how you can make good, principled and defensible decisions about which of your company’s applications and services you should give to your SREs to support, and how to decide when that subset needs to change.&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
At Google, we're fortunate to have Site Reliability Engineering (SRE) teams supporting both our horizontal infrastructure such as storage, networking and load balancing, and our major applications such as Search, Maps and Photos. Nevertheless, the combination of software engineering and system engineering skills required of the role makes it hard to find and recruit SREs, and demand for them steadily outstrips supply.&lt;br /&gt;
&lt;br /&gt;
Over time we’ve found some practical limits to the number of applications that an SRE team can support, and learned the characteristics of applications that are more trouble to support than others. If your company runs many production applications, your SRE team is unlikely to be able to support them all. &lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Q: How will I know when my company’s SRE team is at its limit? How do I choose the best subset of applications to support? When should the SRE team drop support for an application?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Good questions all; let’s explore them in more detail.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Practical limitations on SRE support&lt;/h3&gt;
&lt;br /&gt;
At Google, the rule of thumb for the minimum SRE team needed to staff a pager rotation without burn-out is six engineers; for a 24/7 pager rotation with a target response time under 30 minutes, we don’t want any engineer to be on-call for more than 12 continuous hours because we don’t want paging alerts interrupting their sleep. This implies two groups of six engineers each, with a wide geographic spread so that each team can handle pages &lt;i&gt;mostly&lt;/i&gt; in their daytime. &lt;br /&gt;
&lt;br /&gt;
At any one time, there's usually a designated primary who responds to pages, and a secondary who catches fall-through pages, e.g., if the primary is temporarily out of contact, or is in the middle of managing an incident. The primary and secondary handle normal ops work, freeing the rest of the team for project work such as improving reliability, building better monitoring or increasing automation of ops tasks. Therefore every engineer has two weeks out of six focused on operational work -- one as primary, one as secondary.&lt;br /&gt;
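The staffing arithmetic above can be made explicit. This is a toy calculation of our own, not a Google tool: with one primary and one secondary slot rotating weekly through a six-person team, each engineer spends two weeks out of every six on operational work.

```python
def ops_weeks_fraction(team_size, oncall_roles=2):
    """Fraction of time each engineer spends on operational work when
    `oncall_roles` (primary + secondary) rotate weekly through the team."""
    return oncall_roles / team_size

# Six engineers, primary + secondary each week: 2 weeks out of every 6.
print(round(ops_weeks_fraction(6), 2))  # 0.33
```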
&lt;i&gt;&lt;br /&gt;
&lt;/i&gt; &lt;i&gt;Q: Surely 12 to 16 engineers can handle support for all the applications your development team can feasibly write? &lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Actually, no. Our experience is that there is a definite cognitive limit to how many &lt;i&gt;different&lt;/i&gt; applications or services an SRE team can effectively manage; any single engineer needs to be sufficiently familiar with each app to troubleshoot, diagnose and resolve most of its production problems. If you want to make it easy to support many apps at once, you’ll want to make them as similar as possible: design them to use common patterns and back-end services, standardize on common tools for operational tasks like rollout, monitoring and alerting, and deploy them on similar schedules. This reduces the per-app cognitive load, but doesn’t eliminate it. &lt;br /&gt;
&lt;br /&gt;
If you do have enough SREs then you might consider making two teams (again, subject to the 2 x 6 minimum staffing limit) and give them separate responsibilities. At Google, it’s not unusual for a single SRE team to split into front-end and back-end shards, each taking responsibility for supporting only that half of the system, as it grows in size. (We call this &lt;i&gt;team mitosis&lt;/i&gt;.)&lt;br /&gt;
&lt;br /&gt;
Your SRE team’s maximum number of supported services will be strongly influenced by factors such as:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;the regular operational tasks needed to keep the services running well, for example releases, bug fixes, non-urgent alerts/bugs. These can be reduced (but not eliminated) by automation;&lt;/li&gt;
&lt;li&gt;“interrupts” -- unscheduled non-critical human requests. We’ve found these awkwardly resistant to efforts to reduce them; the most effective strategy has been self-service tools that address the 50-70% of queries that are repeats;&lt;/li&gt;
&lt;li&gt;emergency alert response, incident management and follow-up. The best way to spend less time on these is to make the service more reliable, and to have better-tuned alerts (i.e., that are actionable and which, if they fire, strongly indicate real problems with the service).&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Q: What about the four weeks out of six during which an SRE isn’t doing operational work&amp;nbsp;&lt;/i&gt;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&lt;i&gt;&amp;nbsp;could we use that time to increase our SRE team’s supported service capacity?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
You could do this, but at Google we view it as “eating your seed corn.” The goal is to have the machines do all the things that are possible for machines to do, and for that to happen you need to leave breathing room for your SREs to do project work such as producing new automation for your service. In our experience, once a team crosses the 50% ops work threshold, it quickly descends a slippery slope to 100% ops. In that condition, you’re losing the engineering effort that will give you medium-to-long term operational benefits such as reducing the frequency, duration and impact of future incidents. When you move your SRE team into nearly full-time ops work, you lose the benefit of its engineering design and development skills.&lt;br /&gt;
&lt;br /&gt;
Note in particular that SRE engineering project work can reduce operational load by addressing many of the factors we described above, which were limiting how many services an SRE team could support. &lt;br /&gt;
&lt;br /&gt;
Given the above, you may well find yourself in a position where you want your SRE team to onboard a new service but in practice they are not able to support it on a sustainable basis.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
You’re out of SRE support capacity - now what?&lt;/h3&gt;
At Google, our working principle is that any service that’s not explicitly supported by SRE must be supported by its developer team; if you have enough developers to write a new application then you probably have enough developers to support it. Our developers tend to use the same monitoring, rollout and incident management tools as the SREs they work with, so the operational support workload is similar. In any case, we like the developers who wrote an application to support it directly for a little while, so they can get a good feel for how customers are experiencing it. The things they learn doing so help SREs to onboard the service later.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Q: What about the &lt;b&gt;next&lt;/b&gt; application we want the developers to write? Won’t they be too busy supporting the current application?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
This may be true&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;the current application may be generating a high operational workload, due to excessive alerts, or a lack of automation. However, this gives the developer team a practical incentive to spend time making the application easier to support — tuning alerts, spending developer time on automation, and reducing the velocity of functional changes.&lt;br /&gt;
&lt;br /&gt;
When developers are overloaded with operational work, SREs might be able to lend operational expertise and development effort to reduce the developers’ workloads to a manageable level. However, SREs &lt;b&gt;still&lt;/b&gt; shouldn’t take on operational responsibility for the service, as this won’t solve the fundamental problem.&lt;br /&gt;
&lt;br /&gt;
When one team develops an application and another team bears the brunt of the operational work for it, moral hazard thrives. Developers want high development velocity; it’s not in their interest to spend days running down and eliminating every odd bug that occasionally causes their server to run out of memory and need to be restarted. Meanwhile, the operational team is getting paged to do those restarts several times per day&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;— &lt;/span&gt;it’s very much in their interest to get that bug fixed since it’s their sleep that is being interrupted. Not surprisingly, when developers bear the operational load for their own system, they too are incented to spend time making it easier to support. This also turns out to be important for persuading an SRE team to support their application, as we shall see later.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Choosing which applications to support&lt;/h3&gt;
&lt;br /&gt;
The easiest way to prioritize the applications for SRE to support is by revenue or other business criticality, i.e., how important it will be if the service goes down. After all, having an SRE team supporting your service should improve its reliability and availability. &lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Q: Sounds good to me; surely prioritizing by business impact is always the right choice?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Not always. There are services which actually don’t need much support work; a good example is a simple infrastructure service (say, a distributed key-value store) that has reached maturity and is updated only infrequently. Since nothing is really changing in the service, it’s unlikely to break spontaneously. Even if it’s a critical dependency of several user-facing applications, it might not make sense to dedicate SRE support; rather, let its developers hold the pager and handle the low volume of operational work. &lt;br /&gt;
&lt;br /&gt;
In Google we consider that SRE teams have seven areas of focus that developers typically don’t:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;Monitoring and metrics&lt;/b&gt;. For example, detecting response latency, error or unanswered query rate, and peak utilization of resources&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Emergency response&lt;/b&gt;. Running on-call rotations, traffic-dip detection, primary/secondary/escalation, writing playbooks, running &lt;a href="https://landing.google.com/sre/interview/ben-treynor.html" target="_blank"&gt;Wheels of Misfortune&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Capacity planning&lt;/b&gt;.  Doing quarterly projections, handling a sudden sustained load spike, running utilization-improvement projects&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Service turn-up and turn-down&lt;/b&gt;. For services which run in many locations (e.g., to reduce end-user latency), planning location turn-up/down schedules and automating the process to reduce risks and operational load&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Change management&lt;/b&gt;. Canarying, 1% experiments, rolling upgrades, quick-fail rollbacks, and measuring error budgets&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Performance&lt;/b&gt;. Stress and load testing, resource-usage efficiency monitoring and optimization.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Data Integrity&lt;/b&gt;. Ensuring that non-reconstructible data is stored resiliently and highly available for reads, including the ability to rapidly restore it from backups&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;br /&gt;
With the possible exception of “emergency response” and “data integrity,” our key-value store wouldn’t benefit substantially from any of these areas of expertise, and the marginal benefit of having SREs rather than developers support it is low. On the other hand, the opportunity cost of spending SRE support capacity on it is high; there are likely to be other applications which could benefit from more of SREs’ expertise.&lt;br /&gt;
&lt;br /&gt;
One other reason that SREs might take on responsibility for an infrastructure service that doesn’t need SRE expertise is that it is a crucial dependency of services they already run. In that case, there could be a significant benefit to them of having visibility into, and control of, changes to that service.&lt;br /&gt;
&lt;br /&gt;
In part 2 of this blog post, we’ll take a look at how our SRE team could determine how&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;and indeed, whether&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;to onboard a business-critical service once it has been identified as able to benefit from SRE support.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;See parts 2, "&lt;a href="https://cloudplatform.googleblog.com/2017/06/how-SREs-find-the-landmines-in-a-service-CRE-life-lessons.html" target="_blank"&gt;How SREs find the landmines in a service - CRE life lessons&lt;/a&gt;," and 3, "&lt;a href="https://cloudplatform.googleblog.com/2017/07/making-the-most-of-an-SRE-service-takeover-CRE-life-lessons.html" target="_blank"&gt;Making the most of an SRE service takeover - CRE life lessons&lt;/a&gt;," of this series.&lt;/i&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/JbQR8qnBhEc" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3093583133408467852" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3093583133408467852" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/JbQR8qnBhEc/why-should-your-app-get-SRE-support-CRE-life-lessons.html" title="Why should your app get SRE support? 
- CRE life lessons" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/why-should-your-app-get-SRE-support-CRE-life-lessons.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-5660508568747265823</id><published>2017-06-22T09:00:00.001-07:00</published><updated>2017-06-22T09:00:55.337-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><title type="text">Google Compute Engine ranked #1 in price-performance by Cloud Spectator</title><content type="html">&lt;span class="byline-author"&gt;By Paul Nash, Group Product Manager, Compute Engine&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Cloud Spectator, an independent benchmarking and consulting agency, has released a new &lt;a href="https://cloudplatformonline.com/g-suite-cloud-spectator.html?utm_source=blogs&amp;amp;utm_medium=gcpblog&amp;amp;utm_campaign=2017-17q3-gc-cc-connected-googlelcoud-referralgoogle-google-leadgeneration-blog&amp;amp;utm_term=blogpost&amp;amp;utm_content=cloudspectator" target="_blank"&gt;comparative benchmarking study&lt;/a&gt; that ranks Google Cloud #1 for price-performance and block storage performance against AWS, Microsoft Azure and IBM SoftLayer.&lt;br /&gt;
&lt;br /&gt;
In January 2017, Cloud Spectator tested the overall price-performance, VM performance and block storage performance of four major cloud service providers: Google Compute Engine, Amazon Web Services, Microsoft Azure, and IBM SoftLayer. The result is a rare apples-to-apples comparison among major Cloud Service Providers (CSPs), whose distinct pricing models can make them difficult to compare.&lt;br /&gt;
&lt;br /&gt;
According to Cloud Spectator, “A lack of transparency in the public cloud IaaS marketplace for performance often leads to misinformation or false assumptions.” Indeed, RightScale &lt;a href="http://assets.rightscale.com/uploads/pdfs/RightScale-2017-State-of-the-Cloud-Report.pdf" target="_blank"&gt;estimates&lt;/a&gt; that up to 45% of cloud spending is wasted on resources that never end up being used — a serious hit to any company’s IT budget. &lt;br /&gt;
&lt;br /&gt;
The report can be distilled into three key insights, which upend common misconceptions about cloud pricing and performance:&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;b&gt;Insight #1: VM performance varies across cloud providers&lt;/b&gt;. In testing, Cloud Spectator observed differences of up to 1.4X in VM performance and 6.1X in block storage performance.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Insight #2: You don’t always get what you pay for&lt;/b&gt;. Cloud Spectator’s study found no correlation between price and performance.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Insight #3: Resource contention (the “Noisy Neighbor Effect”) can affect performance&lt;/b&gt; — but CSPs can limit those effects. Cloud Spectator points out that noisy neighbors are a real problem with some cloud vendors. To handle the problem, some vendors throttle down their customers’ access to resources (like disks) in an attempt to compensate for other VMs (the so-called noisy neighbors) on the same host machine.&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
You can download the full report &lt;a href="https://cloudplatformonline.com/g-suite-cloud-spectator.html?utm_source=blogs&amp;amp;utm_medium=gcpblog&amp;amp;utm_campaign=2017-17q3-gc-cc-connected-googlelcoud-referralgoogle-google-leadgeneration-blog&amp;amp;utm_term=blogpost&amp;amp;utm_content=cloudspectator" target="_blank"&gt;here&lt;/a&gt;, or keep reading for key findings.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Key finding: Google leads for overall price-performance&lt;/h3&gt;
Value, defined as the ratio of performance to price, varies by 2.4x across the compared IaaS providers, with Google achieving the highest CloudSpecs Score (see Methodology, below) among the four cloud IaaS providers. This is due to strong disk performance and the lowest packaged pricing found in the study.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/-s6ttY-O-Ws0/WUvo-l3wuwI/AAAAAAAAEDc/F0xX5xacPXMbqaEBkF-0XHBjiLGEsn1GQCLcBGAs/s1600/image1.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="800" data-original-width="1600" src="https://1.bp.blogspot.com/-s6ttY-O-Ws0/WUvo-l3wuwI/AAAAAAAAEDc/F0xX5xacPXMbqaEBkF-0XHBjiLGEsn1GQCLcBGAs/s1600/image1.gif" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
To learn more, download “&lt;a href="https://cloudplatformonline.com/g-suite-cloud-spectator.html?utm_source=blogs&amp;amp;utm_medium=gcpblog&amp;amp;utm_campaign=2017-17q3-gc-cc-connected-googlelcoud-referralgoogle-google-leadgeneration-blog&amp;amp;utm_term=blogpost&amp;amp;utm_content=cloudspectator" target="_blank"&gt;2017 Best Hyperscale Cloud Providers: AWS vs. Azure vs. Google vs. SoftLayer&lt;/a&gt;,” a report by Cloud Spectator.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Methodology&lt;/h3&gt;
Cloud Spectator’s price-performance calculation, the CloudSpecs Score™, provides information on how much performance the user receives for each unit of cost. The CloudSpecs Score™ is an indexed, comparable score ranging from 0-100, indicative of value based on a combination of cost and performance. The CloudSpecs Score™ is calculated as:&lt;br /&gt;
&lt;br /&gt;
price-performance_value = [VM performance score] / [VM cost]&lt;br /&gt;
best_VM_value = max{price-performance_values}&lt;br /&gt;
CloudSpecs Score™ = 100 * price-performance_value / best_VM_value&lt;br /&gt;
The overall CloudSpecs Score™ was calculated by averaging the block storage and vCPU-memory price-performance scores together, with equal weight, for each VM size. Then, all resulting VM-size scores were averaged together. &lt;br /&gt;
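The quoted formula can be applied directly. The provider names, performance scores and hourly costs below are made-up inputs for illustration only, not figures from the Cloud Spectator report:

```python
def cloudspecs_scores(perf, cost):
    """Index each provider's price-performance so the best value scores 100,
    following the CloudSpecs formula quoted above."""
    value = {p: perf[p] / cost[p] for p in perf}     # performance per unit cost
    best = max(value.values())                       # best_VM_value
    return {p: round(100 * v / best, 1) for p, v in value.items()}

# Hypothetical inputs (performance score, cost per hour) for illustration:
perf = {"ProviderA": 90.0, "ProviderB": 80.0, "ProviderC": 75.0}
cost = {"ProviderA": 0.045, "ProviderB": 0.050, "ProviderC": 0.060}
print(cloudspecs_scores(perf, cost))  # ProviderA indexes to 100.0
```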
&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/WvZlyCkF6Ck" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5660508568747265823" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/5660508568747265823" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/WvZlyCkF6Ck/Google-Compute-Engine-ranked-1-in-price-performance-by-Cloud-Spectator.html" title="Google Compute Engine ranked #1 in price-performance by Cloud Spectator" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/-s6ttY-O-Ws0/WUvo-l3wuwI/AAAAAAAAEDc/F0xX5xacPXMbqaEBkF-0XHBjiLGEsn1GQCLcBGAs/s72-c/image1.gif" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/Google-Compute-Engine-ranked-1-in-price-performance-by-Cloud-Spectator.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2453209641319103376</id><published>2017-06-20T16:00:00.000-07:00</published><updated>2017-06-21T11:47:07.520-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Infrastructure" /><title type="text">Google Cloud Platform expands to Australia with new Sydney region - open 
now</title><content type="html">&lt;span class="byline-author"&gt;By Dave Stiver, Product Manager, Google Cloud Platform&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
Starting today, developers can choose to run applications and store data in Australia using the new &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) region in Sydney. This is our first GCP region in Australia and the fourth in Asia Pacific, joining Taiwan, Tokyo and the &lt;a href="https://cloudplatform.googleblog.com/2017/06/Google-Cloud-Platform-comes-to-Singapore.html" target="_blank"&gt;recently launched Singapore&lt;/a&gt;.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-khWu_87bATU/WUmpkX1pgJI/AAAAAAAAECs/G7fDg1nnsGA0ZSUJ60YYLZPnV_w_FOiAACLcBGAs/s1600/sydney-6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="779" data-original-width="650" src="https://2.bp.blogspot.com/-khWu_87bATU/WUmpkX1pgJI/AAAAAAAAECs/G7fDg1nnsGA0ZSUJ60YYLZPnV_w_FOiAACLcBGAs/s1600/sydney-6.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
GCP customers down under will see significant reductions in latency when they run their applications in Sydney. Our performance testing shows 80% to 95% reductions in round-trip time (RTT) latency when serving customers from New Zealand and Australian cities such as Sydney, Auckland, Wellington, Melbourne, Brisbane, Perth and Adelaide, compared to using regions in Singapore or Taiwan.&lt;br /&gt;
&lt;iframe allowfullscreen="" frameborder="0" height="360" src="https://www.youtube.com/embed/a6fLCl_z2_o" width="640"&gt;&lt;/iframe&gt;&lt;br /&gt;
The &lt;a href="https://cloud.google.com/about/locations/sydney/" target="_blank"&gt;Sydney GCP region&lt;/a&gt; is launching with three zones and several GCP services, and App Engine and Datastore will be available shortly:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/-BDJvi1kpdWU/WUmpvLFS6oI/AAAAAAAAECw/ZqBcBWoqrvw3_hkmog0-8oa7_GtY2qHGACLcBGAs/s1600/sydney-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="325" data-original-width="650" src="https://1.bp.blogspot.com/-BDJvi1kpdWU/WUmpvLFS6oI/AAAAAAAAECw/ZqBcBWoqrvw3_hkmog0-8oa7_GtY2qHGACLcBGAs/s1600/sydney-4.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-mN8tlW4S3dU/WUmp20SZTNI/AAAAAAAAEC0/UvTkmbQChDYvz-01Ar0xp2QLRfg5cFfQACLcBGAs/s1600/sydney-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="688" data-original-width="650" src="https://4.bp.blogspot.com/-mN8tlW4S3dU/WUmp20SZTNI/AAAAAAAAEC0/UvTkmbQChDYvz-01Ar0xp2QLRfg5cFfQACLcBGAs/s1600/sydney-2.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
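If you already use the gcloud CLI, targeting the new region is a one-liner. A minimal sketch (the instance name is hypothetical; Sydney's region identifier is australia-southeast1, with zones following the usual -a/-b/-c naming):

```shell
# Create a VM in one of the new Sydney zones.
gcloud compute instances create demo-instance \
    --zone=australia-southeast1-a
```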
Google Cloud customers benefit from our commitment to large-scale infrastructure investments. With the addition of each new region, developers have more choice in running applications close to their customers. Google’s networking backbone, meanwhile, transforms compute and storage infrastructure into a global-scale computer, giving developers around the world access to the same cloud infrastructure that Google engineers use every day. &lt;br /&gt;
&lt;br /&gt;
In Asia-Pacific, we’re already building another region &lt;a href="https://cloud.google.com/about/locations/" target="_blank"&gt;in Mumbai&lt;/a&gt;, as well as new network infrastructure to tie them all together, including the &lt;a href="https://en.wikipedia.org/wiki/SJC_(cable_system)" target="_blank"&gt;SJC cable&lt;/a&gt; and &lt;a href="https://blog.google/topics/google-cloud/google-invests-indigo-undersea-cable-improve-cloud-infrastructure-southeast-asia/" target="_blank"&gt;Indigo cable&lt;/a&gt; fiber optic systems.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
What customers are saying&lt;/h3&gt;
Here’s what the new region means to a few of our customers and partners.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-r40-AySCCmk/WUmp9sHkRMI/AAAAAAAAEC4/0RdO_nZR04kePEJVM50utitiWjVbRVktQCLcBGAs/s1600/sydney-5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="113" data-original-width="147" src="https://3.bp.blogspot.com/-r40-AySCCmk/WUmp9sHkRMI/AAAAAAAAEC4/0RdO_nZR04kePEJVM50utitiWjVbRVktQCLcBGAs/s1600/sydney-5.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;i&gt;"The regional expansion of Google Cloud Platform to Australia will help enable PwC's rapidly growing need to experiment and innovate and will further extend our work with Google Cloud. &lt;br /&gt;
&lt;br /&gt;
It not only provides a reliable and resilient platform that can support our firm's core technology needs, it also makes available to us, GCP's market leading technologies and capabilities to support the unprecedented demand of our diverse and evolving business."&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
—Hilda Clune, Chief Information Officer, PwC Australia&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-Mvnz_r-ymCo/WUmjVsiLS8I/AAAAAAAAECY/agjXI9Rq0qgwVtNWKbrK1b2d_wrlTdDtQCLcBGAs/s1600/sydney-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="43" data-original-width="147" src="https://2.bp.blogspot.com/-Mvnz_r-ymCo/WUmjVsiLS8I/AAAAAAAAECY/agjXI9Rq0qgwVtNWKbrK1b2d_wrlTdDtQCLcBGAs/s1600/sydney-1.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;i&gt;"Monash University has one of the most ambitious digital transformation agendas in tertiary education. We're executing our strategy at pace and needed a platform which would give us the scale, flexibility and functionality to respond rapidly to our development and processing needs. Google Cloud Platform (GCP) and in particular App Engine have been a great combination for us, and we're very excited at the results we're getting. Having Google Cloud Platform hosted now in Australia is a big bonus."&lt;/i&gt;&amp;nbsp;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
—Trevor Woods, Chief Information Officer, Monash University&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-lljv8hd9_hY/WUmqDuDiwtI/AAAAAAAAEC8/revdpuvPeB4E2sil-5_lmdTJ77eMc1_MQCLcBGAs/s1600/sydney-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="91" data-original-width="147" src="https://4.bp.blogspot.com/-lljv8hd9_hY/WUmqDuDiwtI/AAAAAAAAEC8/revdpuvPeB4E2sil-5_lmdTJ77eMc1_MQCLcBGAs/s1600/sydney-3.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
&lt;i&gt;"Modern geophysical technologies place a huge demand on supercomputing resources. Woodside utilises Google Cloud as an on-demand solution for our large computing requirements. This has allowed us to push technological boundaries and dramatically reduce turnaround time."&lt;/i&gt;&lt;/blockquote&gt;
&lt;blockquote class="tr_bq"&gt;
— Sean Salter, VP Technology, Woodside Energy Ltd.&lt;/blockquote&gt;
&lt;br /&gt;
&lt;h3&gt;
Next steps&lt;/h3&gt;
We want to help you build what’s next for you. If you’re looking for help to understand how to deploy GCP, please contact local partners: &lt;a href="https://shinesolutions.com/" target="_blank"&gt;Shine Solutions&lt;/a&gt;, &lt;a href="http://www.servian.com/" target="_blank"&gt;Servian&lt;/a&gt;, &lt;a href="https://3wks.com.au/" target="_blank"&gt;3WKS&lt;/a&gt;, &lt;a href="http://axalon.io/" target="_blank"&gt;Axalon&lt;/a&gt;, &lt;a href="http://onigroup.com.au/" target="_blank"&gt;Onigroup&lt;/a&gt;, &lt;a href="http://www.pwc.com.au/" target="_blank"&gt;PwC&lt;/a&gt;,&amp;nbsp;&lt;a href="https://www2.deloitte.com/au/en.html" target="_blank"&gt;Deloitte&lt;/a&gt;, &lt;a href="http://www.glintech.com/" target="_blank"&gt;Glintech&lt;/a&gt;, &lt;a href="http://www.fronde.com/what-we-do/google-workforce/" target="_blank"&gt;Fronde&lt;/a&gt;, &lt;a href="https://google.dialog.com.au/" target="_blank"&gt;Dialog&lt;/a&gt; or &lt;a href="https://www.megaport.com/services/google-cloud-platform-google-apps-business/" target="_blank"&gt;Megaport&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
For more details on Australia’s first region, please visit our &lt;a href="https://cloud.google.com/about/locations/sydney" target="_blank"&gt;Sydney region page&lt;/a&gt; where you’ll get access to free resources, whitepapers, an on-demand training video series called "Cloud On-Air" and more. These will help you get started on GCP. Give us a &lt;a href="https://goo.gl/forms/U5qAZB1tGR9NUlgB2" target="_blank"&gt;shout&lt;/a&gt; to request early access to new regions and help us prioritize what we build next.&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/NU9c1EB7vUA" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2453209641319103376" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2453209641319103376" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/NU9c1EB7vUA/Google-Cloud-Region-in-Sydney.html" title="Google Cloud Platform expands to Australia with new Sydney region - open now" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://2.bp.blogspot.com/-khWu_87bATU/WUmpkX1pgJI/AAAAAAAAECs/G7fDg1nnsGA0ZSUJ60YYLZPnV_w_FOiAACLcBGAs/s72-c/sydney-6.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/Google-Cloud-Region-in-Sydney.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-6262970204431112734</id><published>2017-06-14T19:00:00.000-07:00</published><updated>2017-06-14T19:00:17.718-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Announcements" /><category scheme="http://www.blogger.com/atom/ns#" term="Infrastructure" /><title type="text">New Singapore GCP region – open now</title><content type="html">&lt;span 
class="byline-author"&gt;By Dave Stiver, Product Manager, Google Cloud Platform&lt;br /&gt;
&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
The Singapore region is now open as asia-southeast1. This is our first &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) region in Southeast Asia (and our third region in Asia), and it promises to significantly improve latency for GCP customers and end users in the area. &lt;br /&gt;
&lt;br /&gt;
Customers are loving GCP in Southeast Asia; the total number of paid GCP customers in Singapore has increased by 100% over the last 12 months.&lt;br /&gt;
&lt;br /&gt;
And the experience for GCP customers in Southeast Asia is better than ever too; performance testing shows 51% to 98% reductions in round-trip time (RTT) latency when serving customers in Singapore, Jakarta, Kuala Lumpur and Bangkok compared to using other GCP regions in Taiwan or Tokyo. &lt;br /&gt;
&lt;a href="http://3.bp.blogspot.com/-5gCxmUSyGPY/WUHW7O4Dy6I/AAAAAAAAEAk/PhnLMt9ONj8CPL8BYm0nXqXC_G7H3dlPACK4BGAYYCw/s1600/image5.gif" imageanchor="1"&gt;&lt;img border="0" src="https://3.bp.blogspot.com/-5gCxmUSyGPY/WUHW7O4Dy6I/AAAAAAAAEAk/PhnLMt9ONj8CPL8BYm0nXqXC_G7H3dlPACK4BGAYYCw/s1600/image5.gif" /&gt;&lt;/a&gt;&lt;br /&gt;
Customers with a global footprint like BBM Messenger, Carousell and Go-Jek have been looking forward to the launch of the Singapore region.&lt;br /&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;We are excited to be able to deploy into the GCP Singapore region, as it will allow us to offer our services closer to BBM Messenger key markets. Coupled with Google's global load balancers and extensive global network, we expect to be able to provide a low latency, high-speed experience for our users globally. During our POCs, we found that GCP outperformed most vendors on key metrics such as disk I/O and network performance on like-for-like benchmarks. With sustained usage discounts and continuous support from Google's PSO and account team, we are excited to make GCP the foundation for the next generation of BBM consumer services.&lt;/i&gt;"&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;Matthew Talbot, CEO of Creative Media Works, the company that runs BBM Messenger Consumer globally.&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-fdqpuvYTooo/WUHXJ6hBQlI/AAAAAAAAEAo/VU_j7eO1cFMvpqpy5emy0xqQmoFkEouOQCLcBGAs/s1600/singapore-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="219" data-original-width="230" src="https://2.bp.blogspot.com/-fdqpuvYTooo/WUHXJ6hBQlI/AAAAAAAAEAo/VU_j7eO1cFMvpqpy5emy0xqQmoFkEouOQCLcBGAs/s1600/singapore-4.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;As one of the largest and fastest growing mobile classifieds marketplaces in the world, Carousell needed a platform that was agile enough for a startup, but could scale quickly as we expand. We found all these qualities in the Google Cloud Platform (GCP), which gives us a level of control over our systems and environment that we didn't find elsewhere, along with access to cutting edge technologies. We're thrilled that GCP is launching in Singapore, and look forward to being inspired by the way Google does things at scale.&lt;/i&gt;"&amp;nbsp;&amp;nbsp;—&amp;nbsp;Jordan Dea-Mattson, Vice President Engineering, Carousell&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-p0dP4qbsaoE/WUHXQvJNw4I/AAAAAAAAEAs/kSnGIeXOjtAi9EyZPL0S3NKU9XMe6QAOQCLcBGAs/s1600/singapore-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="234" data-original-width="216" src="https://3.bp.blogspot.com/-p0dP4qbsaoE/WUHXQvJNw4I/AAAAAAAAEAs/kSnGIeXOjtAi9EyZPL0S3NKU9XMe6QAOQCLcBGAs/s1600/singapore-3.png" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;blockquote class="tr_bq"&gt;
"&lt;i&gt;We are extremely pleased with the performance of GCP, and we are excited about the opportunities opening in Indonesia and other markets, and making use of the Singapore Cloud Region. The outcomes we’ve achieved in scaling, stability and other areas have proven how fantastic it is to have Google and GCP among our key service partners.&lt;/i&gt;" —&amp;nbsp;Ajey Gore, CTO, Go-Jek&lt;/blockquote&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-E8_teyLeZw4/WUHXYypkaMI/AAAAAAAAEAw/9fCBpSZyvrE2ZVcLxDR0zU12cU3WsnR3gCLcBGAs/s1600/singapore-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="210" data-original-width="340" height="197" src="https://4.bp.blogspot.com/-E8_teyLeZw4/WUHXYypkaMI/AAAAAAAAEAw/9fCBpSZyvrE2ZVcLxDR0zU12cU3WsnR3gCLcBGAs/s320/singapore-1.png" width="320" /&gt;&lt;/a&gt;&lt;/div&gt;
We’ve launched Singapore with two zones and the following services:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-_9Vx1rJZWcA/WUHXf-sohPI/AAAAAAAAEA0/Evxxe-u3CaYVrsgeoBBS8r1vfkkeG4AmACLcBGAs/s1600/singapore-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="500" data-original-width="1000" height="320" src="https://4.bp.blogspot.com/-_9Vx1rJZWcA/WUHXf-sohPI/AAAAAAAAEA0/Evxxe-u3CaYVrsgeoBBS8r1vfkkeG4AmACLcBGAs/s640/singapore-2.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
In addition, you can combine any of the services you deploy in Singapore with other GCP services around the world such as &lt;a href="http://cloud.google.com/dlp" target="_blank"&gt;DLP&lt;/a&gt;, &lt;a href="https://cloud.google.com/spanner/" target="_blank"&gt;Spanner&lt;/a&gt; and &lt;a href="https://cloud.google.com/bigquery/" target="_blank"&gt;BigQuery&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Singapore Multi-Tier Cloud Security certification&lt;/h3&gt;
Google Cloud is pleased to announce that, having completed the required assessment, it has been recommended by an approved certification body for Level 3 certification under Singapore's Multi-Tier Cloud Security (MTCS) standard (SS 584:2015+C1:2016). Customers can expect formal approval of Google Cloud's certification in the coming months. With this certification, organizations that require compliance with the strictest levels of the MTCS standard can confidently adopt Google Cloud services and host their data on Google Cloud's infrastructure.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Next steps&lt;/h3&gt;
If you’re looking for help to understand how to deploy GCP, please contact local partners &lt;a href="https://www.sakurasky.com/" target="_blank"&gt;Sakura Sky&lt;/a&gt;, &lt;a href="http://cldcvr.com/clouds/google-cloud-platform/" target="_blank"&gt;CloudCover&lt;/a&gt;, &lt;a href="https://www.cloudcomrade.com/" target="_blank"&gt;Cloud Comrade&lt;/a&gt; and &lt;a href="https://powerupcloud.com/us/" target="_blank"&gt;Powerupcloud&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
For more details on the Singapore region, please visit our &lt;a href="https://cloud.google.com/about/locations/singapore/" target="_blank"&gt;Singapore region portal&lt;/a&gt;, where you’ll get access to free resources, whitepapers, an on-demand video series called "Cloud On-Air" and more. These will help you get started on GCP. Our &lt;a href="https://cloud.google.com/about/locations/" target="_blank"&gt;locations page&lt;/a&gt; provides updates on other regions coming online soon. Give us a &lt;a href="https://goo.gl/forms/U5qAZB1tGR9NUlgB2" target="_blank"&gt;shout&lt;/a&gt; to request early access to new regions and help us prioritize what we build next.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/15FgXSZDMjQ" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6262970204431112734" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6262970204431112734" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/15FgXSZDMjQ/Google-Cloud-Platform-comes-to-Singapore.html" title="New Singapore GCP region – open now" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://3.bp.blogspot.com/-5gCxmUSyGPY/WUHW7O4Dy6I/AAAAAAAAEAk/PhnLMt9ONj8CPL8BYm0nXqXC_G7H3dlPACK4BGAYYCw/s72-c/image5.gif" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty 
name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/Google-Cloud-Platform-comes-to-Singapore.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-4334266747979281592</id><published>2017-06-13T09:24:00.001-07:00</published><updated>2017-06-13T09:24:31.259-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><title type="text">Best practices for App Engine startup time: Google Cloud Performance Atlas</title><content type="html">&lt;span class="byline-author"&gt;By Colt McAnlis, Developer Advocate&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;[Editor’s note: In the past couple of months, &lt;a href="https://medium.com/@duhroach" target="_blank"&gt;Colt McAnlis&lt;/a&gt; of Android Developers fame joined the Google Cloud developer advocate team. He jumped right in and started blogging — and vlogging&amp;nbsp;&lt;/i&gt;&lt;i&gt;—&lt;/i&gt;&lt;i&gt;&amp;nbsp;for the new &lt;a href="https://www.youtube.com/playlist?list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X" target="_blank"&gt;Google Cloud Performance Atlas&lt;/a&gt; series, focused on extracting the best performance from your GCP assets. Check out this synopsis of his first video, where he tackles the problem of cold boot performance in App Engine standard environment. Vroom vroom!]&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
One of the fantastic features of &lt;a href="https://cloud.google.com/appengine/docs/the-appengine-environments" target="_blank"&gt;App Engine standard environment&lt;/a&gt; is that it has load balancing built into it, and can spin up or spin down instances based upon traffic demands. This is great in situations where your content goes viral, or for daily ebb-and-flows of traffic, since you don’t have to spend time thinking about provisioning whatsoever.&lt;br /&gt;
&lt;br /&gt;
As a baseline, it’s easy to establish that &lt;a href="https://medium.com/google-cloud/understanding-and-profiling-app-engine-cold-boot-time-908431aa971d" target="_blank"&gt;App Engine startup time is really fast&lt;/a&gt;. The following graph charts instance type vs. startup time for a basic Hello World application:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/-CxvdoBR-FNk/WUAO8aN0oAI/AAAAAAAAD_0/HkqA85F2mTQDcRue7RswT76ObvkwFwIXQCLcB/s1600/GAEstartup-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="358" data-original-width="585" height="390" src="https://1.bp.blogspot.com/-CxvdoBR-FNk/WUAO8aN0oAI/AAAAAAAAD_0/HkqA85F2mTQDcRue7RswT76ObvkwFwIXQCLcB/s640/GAEstartup-1.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
250ms is a fast boot time for an App Engine F2 instance class. That’s faster than fetching a &lt;a href="http://mobile.httparchive.org/interesting.php" target="_blank"&gt;JavaScript file from most CDNs&lt;/a&gt; on a 4G connection, and it shows that App Engine responds quickly to requests to create new instances. &lt;br /&gt;
&lt;br /&gt;
There are great resources that detail how &lt;a href="https://cloud.google.com/appengine/docs/standard/go/how-instances-are-managed" target="_blank"&gt;App Engine manages instances&lt;/a&gt;, but for our purposes, there’s one main concept we’re concerned with: &lt;b&gt;loading requests&lt;/b&gt;.&lt;br /&gt;
&lt;br /&gt;
A loading request is one that triggers App Engine’s load balancer to spin up a new instance. This is important to note because the response time for a loading request is significantly higher than average: the request must wait for the instance to boot up before it's serviced.  &lt;br /&gt;
&lt;br /&gt;
As such, the key to responding to rapid load balancing while keeping the user experience high is to optimize the cold-boot performance of your App Engine application. Below, we’ve gathered a few suggestions for addressing the most common cold-boot performance problems.&lt;br /&gt;
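One way to take initialization off the user's critical path is App Engine's warm-up request: with warmup enabled in app.yaml, App Engine standard sends a GET to /_ah/warmup before routing user traffic to a new instance. A minimal WSGI sketch (handler names and app structure are illustrative, not from the original post):

```python
# Minimal WSGI sketch of a warm-up handler. App Engine standard sends
# GET /_ah/warmup to a freshly started instance (when warmup is enabled),
# so expensive one-time setup can run before users ever hit the instance.
def app(environ, start_response):
    if environ.get("PATH_INFO") == "/_ah/warmup":
        warm_caches()  # one-time setup: caches, connections, config loads
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"warmed"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

def warm_caches():
    # Stand-in for real initialization work.
    pass
```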
&lt;br /&gt;
&lt;h3&gt;
Leverage resident instances&lt;/h3&gt;
&lt;a href="https://cloud.google.com/appengine/docs/standard/python/config/appref#min_idle_instances" target="_blank"&gt;&lt;b&gt;Resident instances&lt;/b&gt;&lt;/a&gt; are instances that stick around regardless of the type of load your app is handling; &lt;b&gt;even when you’ve scaled to zero, these instances will still be alive&lt;/b&gt;.&lt;br /&gt;
&lt;br /&gt;
When spikes do occur, resident instances service requests that cannot be serviced in the time it would take to spin up a new instance; requests are routed to them while a new instance spins up. Once the new instance is up, traffic is routed to it and the resident instance goes back to being idle.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-qVrk5L43MKY/WUAPFvhL-XI/AAAAAAAAD_4/xfWbC8x6Nrohbc0P-JANRRREoLh5qbr8QCLcB/s1600/GAEStartup-4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="309" data-original-width="624" height="316" src="https://2.bp.blogspot.com/-qVrk5L43MKY/WUAPFvhL-XI/AAAAAAAAD_4/xfWbC8x6Nrohbc0P-JANRRREoLh5qbr8QCLcB/s640/GAEStartup-4.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
The point here is that resident instances are the key to rapid scale and not shooting users’ perception of latency through the roof. In effect, resident instances hide instance startup time from the user, which is a good thing!&lt;br /&gt;
&lt;br /&gt;
For more information, check out our Cloud Performance Atlas article on how &lt;a href="https://medium.com/google-cloud/app-engine-resident-instances-and-the-startup-time-problem-8c6587040a80" target="_blank"&gt;Resident instances helped a developer reduce their startup time&lt;/a&gt;.&lt;br /&gt;
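In App Engine standard, resident instances are configured via min_idle_instances in app.yaml. A minimal sketch (tune the counts for your own traffic patterns):

```yaml
# app.yaml (excerpt): keep at least one idle "resident" instance warm
# so spikes are served while new dynamic instances spin up.
automatic_scaling:
  min_idle_instances: 1
  max_idle_instances: 2
```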
&lt;br /&gt;
&lt;h3&gt;
Be careful with initializing global variables during parallel requests &lt;/h3&gt;
While using global variables is a common programming practice, they can create a performance pitfall in certain cold-boot scenarios. If your global variable is initialized during the loading request AND you’ve got parallel requests enabled, your application can fall into a trap where multiple parallel requests end up blocked, waiting for the first loading request to finish initializing your global variable. You can see this effect in the logging snapshot below:&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://3.bp.blogspot.com/-CpYA5egdib4/WUAPSFHERyI/AAAAAAAAD_8/nTWc0we3n803ZOSxEaLH2838tGw4K_viACLcB/s1600/GAEStartup-3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="256" data-original-width="624" height="262" src="https://3.bp.blogspot.com/-CpYA5egdib4/WUAPSFHERyI/AAAAAAAAD_8/nTWc0we3n803ZOSxEaLH2838tGw4K_viACLcB/s640/GAEStartup-3.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
The very first request is our loading request, and the next batch is a set of blocked parallel requests, waiting for a global variable to initialize. You can see that these blocked requests can easily end up with 2x higher response latency, which is less than ideal.&lt;br /&gt;
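One common mitigation is to initialize the global lazily, on first use, rather than at import time. A minimal Python sketch (expensive_setup is a hypothetical stand-in for your real initialization):

```python
import threading

_heavy = None
_lock = threading.Lock()

def expensive_setup():
    # Stand-in for slow work: loading a model, service discovery, etc.
    return {"ready": True}

def get_heavy():
    """Lazily initialize the shared resource exactly once.

    Parallel requests no longer block behind module import during the
    loading request; only callers that actually need the value wait,
    and only until the first initialization finishes.
    """
    global _heavy
    if _heavy is None:
        with _lock:
            if _heavy is None:  # double-checked locking
                _heavy = expensive_setup()
    return _heavy
```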
&lt;br /&gt;
For more info, check out our Cloud Performance Atlas article on how &lt;a href="https://medium.com/google-cloud/app-engine-startup-time-and-the-global-variable-problem-7ab10de1f349" target="_blank"&gt;Global variables caused one developer a lot of headaches&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Be careful with dependencies&lt;/h3&gt;
During cold-boot time, your application code is busy scanning and importing dependencies. The longer this takes, the longer it takes for your first line of code to execute. Some languages optimize this process to be exceptionally fast; others are slower but provide more flexibility. &lt;br /&gt;
&lt;br /&gt;
And to be fair, most of the time, a standard application importing a few modules should have a negligible impact on performance. However, when third-party libraries get big enough, we start to see them do weird things with import semantics, which can mess up your boot time significantly.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://2.bp.blogspot.com/-UElTCwTGZSQ/WUAPYpPPSAI/AAAAAAAAEAA/EEdI1tyuwm4CfTYbo07YNodY_zvW988vQCLcB/s1600/GAEStartup-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="286" data-original-width="572" height="320" src="https://2.bp.blogspot.com/-UElTCwTGZSQ/WUAPYpPPSAI/AAAAAAAAEAA/EEdI1tyuwm4CfTYbo07YNodY_zvW988vQCLcB/s640/GAEStartup-2.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
Addressing dependency issues is no small feat. You might have to use warm-up requests, lazy-load your imports, or in the most extreme case, prune your dependency tree.&lt;br /&gt;
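The lazy-load option can be sketched in a few lines: defer the import to the request path that actually needs it, keeping it off the cold-boot critical path. (Here json stands in for a large third-party module; the handler shape is hypothetical.)

```python
def handler(request_path):
    # Import the heavy dependency only on the code path that uses it,
    # so instances that never serve this route never pay the import cost.
    if request_path == "/report":
        import json  # stand-in for a large third-party module
        return json.dumps({"report": True})
    return "ok"
```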
&lt;br /&gt;
For more info, check out our Cloud Performance Atlas article on how the developer of &lt;a href="https://medium.com/google-cloud/gae-startup-time-and-the-dependency-problem-b866d92c1e6f" target="_blank"&gt;a platypus-based calculator tracked down a dependency problem&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Every millisecond counts&lt;/h3&gt;
In the end, optimizing cold-boot performance for App Engine instances is critical for scaling quickly and keeping user perception of latency in a good place. If you’d like to know more about ways to optimize your Google Cloud applications, check out the rest of the &lt;a href="https://medium.com/@duhroach/" target="_blank"&gt;Google Cloud Performance Atlas blog posts&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?list=PLIivdWyY5sqK5zce0-fd1Vam7oPY-s_8X&amp;amp;v=vuVpxOIA8Wc" target="_blank"&gt;videos&lt;/a&gt;. Because when it comes to performance, every millisecond counts.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/ZZBPe4ik5hY" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/4334266747979281592" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/4334266747979281592" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/ZZBPe4ik5hY/best-practices-for-App-Engine-startup-time-Google-Cloud-Performance-Atlas.html" title="Best practices for App Engine startup time: Google Cloud Performance Atlas" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/-CxvdoBR-FNk/WUAO8aN0oAI/AAAAAAAAD_0/HkqA85F2mTQDcRue7RswT76ObvkwFwIXQCLcB/s72-c/GAEstartup-1.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty 
name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/best-practices-for-App-Engine-startup-time-Google-Cloud-Performance-Atlas.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2019696158742189910</id><published>2017-06-12T09:13:00.000-07:00</published><updated>2017-06-12T10:56:48.251-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Management Tools" /><title type="text">Add log statements to your application on the fly with Stackdriver Debugger Logpoints</title><content type="html">&lt;span class="byline-author"&gt;By Morgan McLean, Product Manager, Stackdriver Trace and Debugger&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
In 2014 we launched Snapshots for &lt;a href="https://cloud.google.com/debugger/" target="_blank"&gt;Stackdriver Debugger&lt;/a&gt;, which gave developers the ability to examine their application’s call stack and variables in production with no impact on users. In the past year, developers have taken more than three hundred thousand production snapshots across their services running on Google App Engine and on VMs and containers hosted anywhere.&lt;br /&gt;
&lt;br /&gt;
Today we’re showing off &lt;a href="https://cloud.google.com/debugger/docs/logpoints" target="_blank"&gt;Stackdriver Debugger Logpoints&lt;/a&gt;. With Logpoints, you can instantly add log statements to your production application without rebuilding or redeploying it. Like Snapshots, this is immensely useful when diagnosing tricky production issues that lack an obvious root cause. Even better, Logpoints fits into existing logs-based workflows.&lt;br /&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://1.bp.blogspot.com/-c0C34Ac_OQ8/WT63uJWo__I/AAAAAAAAD_I/pISvau9d-qMJMNEDaVS-nP5OFwB9WxRHQCLcB/s1600/image1.gif" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="270" data-original-width="480" height="360" src="https://1.bp.blogspot.com/-c0C34Ac_OQ8/WT63uJWo__I/AAAAAAAAD_I/pISvau9d-qMJMNEDaVS-nP5OFwB9WxRHQCLcB/s640/image1.gif" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;(click to enlarge)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
Adding a logpoint is as simple as clicking a line in the Debugger source viewer and typing in your new log message (just make sure that you open the Logpoints tab in the right-hand pane first). If you haven’t synced your source code, you can add Logpoints by specifying the target file and line number in the right-hand pane or via the gcloud command-line tool. Variables can be referenced by &lt;i&gt;{variableName}&lt;/i&gt;. You can review the full documentation &lt;a href="https://cloud.google.com/debugger/docs/logpoints" target="_blank"&gt;here&lt;/a&gt;.&lt;br /&gt;
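For reference, the CLI flow looks roughly like the sketch below. The file name, line number, and variable name are illustrative placeholders, not values from this post:

```shell
# Add a logpoint at an illustrative source location; the message may
# reference in-scope variables with {variableName} syntax.
gcloud debug logpoints create main.py:175 \
    "Processing request for user {userId}" \
    --log-level=info

# List the active logpoints to confirm the new one was created.
gcloud debug logpoints list
```

The logpoint's output then flows through the application's normal logging mechanism, so it shows up wherever your logs already go.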
&lt;br /&gt;
Because Logpoints writes its output through your app’s existing logging mechanism, it's compatible with any logging aggregation and analysis system, including Splunk or Kibana, or you can read its output from locally stored logs. However, Stackdriver Logging customers benefit from being able to read their log output from within the Stackdriver Debugger UI.&lt;br /&gt;
&lt;iframe allowfullscreen="" frameborder="0" height="360" src="https://www.youtube.com/embed/q9_kMqWasWk" width="640"&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;br /&gt;
Logpoints is already available for applications written in &lt;a href="https://cloud.google.com/debugger/docs/setup/compute-engine-java" target="_blank"&gt;Java&lt;/a&gt;, &lt;a href="https://cloud.google.com/debugger/docs/setup/compute-engine-go" target="_blank"&gt;Go&lt;/a&gt;, &lt;a href="https://github.com/GoogleCloudPlatform/cloud-debug-nodejs" target="_blank"&gt;Node.js&lt;/a&gt;, &lt;a href="https://cloud.google.com/debugger/docs/setup/compute-engine-python" target="_blank"&gt;Python&lt;/a&gt;&amp;nbsp;and &lt;a href="https://github.com/GoogleCloudPlatform/google-cloud-ruby/tree/master/google-cloud-debugger" target="_blank"&gt;Ruby&lt;/a&gt; via the Stackdriver Debugger agents. As with Snapshots, this same set of languages is supported across VMs (including Google Compute Engine), containers (including Google Container Engine), and Google App Engine. Logpoints has been accessible through the gcloud command line interface &lt;a href="https://cloudplatform.googleblog.com/2016/06/Stackdriver-Debugger-add-application-logs-on-the-fly-with-no-restarts.html" target="_blank"&gt;for some time&lt;/a&gt;, and the process for using Logpoints in the CLI hasn’t changed.&lt;br /&gt;
&lt;br /&gt;
Each logpoint lasts up to twenty-four hours, or until it's deleted or the application is redeployed. Adding a logpoint incurs a performance cost on par with adding an additional log statement to your code directly. However, the Stackdriver Debugger agents automatically throttle any logpoints that negatively impact your application’s performance, as well as any logpoints or snapshots with conditions that take too long to evaluate.&lt;br /&gt;
&lt;br /&gt;
At Google, we use technology like Snapshots and Logpoints to solve production problems every day to make our services more performant and reliable. We’ve heard from our customers how snapshots are the bread and butter of their problem-solving processes, and we’re excited to see how you use Logpoints to make your cloud applications better.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/7sAAuPbw604" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2019696158742189910" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2019696158742189910" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/7sAAuPbw604/add-log-statements-to-your-application-on-the-fly-with-Stackdriver-Debugger-Logpoints.html" title="Add log statements to your application on the fly with Stackdriver Debugger Logpoints" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/-c0C34Ac_OQ8/WT63uJWo__I/AAAAAAAAD_I/pISvau9d-qMJMNEDaVS-nP5OFwB9WxRHQCLcB/s72-c/image1.gif" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/add-log-statements-to-your-application-on-the-fly-with-Stackdriver-Debugger-Logpoints.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2763887569041672271</id><published>2017-06-09T09:30:00.000-07:00</published><updated>2017-06-09T09:43:11.904-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Partners" /><title type="text">Partnering on open source: Google and Ansible engineers on managing GCP infrastructure</title><content type="html">&lt;span class="byline-author"&gt;By Tom Melendez, Senior Software Engineer, Google&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
It's time for the third chapter in the &lt;a href="https://www.youtube.com/playlist?list=PLIivdWyY5sqKINn12CqO97nc_9INR8sAv" target="_blank"&gt;Partnering on open source series&lt;/a&gt;. This time around, we cover some of the work we’ve done with &lt;a href="https://www.ansible.com/" target="_blank"&gt;Ansible&lt;/a&gt;, a popular open source IT automation engine, and how to use it to provision, manage and orchestrate &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) resources.&lt;br /&gt;
&lt;br /&gt;
Ansible, by Red Hat, is a simple automation language that can perfectly describe an IT application infrastructure on GCP, including virtual machines, disks, network load-balancers, firewall rules and more. In this series, I'll walk you through my former life as a DevOps engineer at a satellite space imaging company. You'll get a glimpse into how I used Ansible to update satellites in orbit, along with other critical infrastructure that serves imagery to interested viewers around the globe. &lt;br /&gt;
&lt;br /&gt;
In this first video, we set the stage and talk about Ansible in general, before diving into hands-on walkthroughs in subsequent episodes.&lt;br /&gt;
&lt;br /&gt;
&lt;iframe width="640" height="360" src="https://www.youtube.com/embed/ugGZIR3rul4" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;&lt;br /&gt;
&lt;br /&gt;
Upcoming videos demonstrate how to use Ansible and GCP to:&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Apply a camera-settings hotfix to a satellite orbiting Earth by spinning up a &lt;a href="https://cloud.google.com/compute/" target="_blank"&gt;Google Compute Engine&lt;/a&gt; instance, testing the latest satellite image build and pushing the settings to the satellite.&lt;/li&gt;
&lt;li&gt;Provision and manage GCP's advanced networking features like globally available load-balancers with L7 routing to serve satellite ground images on a public website.&lt;/li&gt;
&lt;li&gt;Create a set of networks, routes and firewall rules with security rules to help isolate and protect the various systems involved in the imagery processing pipeline. The raw images may contain sensitive data that must be appropriately screened and scrubbed before being added to the public image repository and network security is critical.&lt;/li&gt;
&lt;/ul&gt;&lt;br /&gt;
The series wraps up with a demonstration of how to extend Ansible's capabilities by writing custom modules. The videos in this series make use of custom and publicly available modules for GCP. &lt;br /&gt;
&lt;br /&gt;
Join us on YouTube to watch the upcoming videos or go back and watch the other videos on the series. You can also follow &lt;a href="https://www.youtube.com/user/googlecloudplatform" target="_blank"&gt;Google Cloud on YouTube&lt;/a&gt;, or &lt;a href="https://twitter.com/googlecloud" target="_blank"&gt;@GoogleCloud on Twitter&lt;/a&gt; to find out when new videos are published. And stay tuned for more blog posts and videos about work we’re doing with open-source providers like Puppet, Chef, Cloud Foundry, Red Hat, SaltStack and others.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/oxOhpPwV5iE" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2763887569041672271" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2763887569041672271" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/oxOhpPwV5iE/partnering-on-open-source-Google-and-Ansible-engineers-on-managing-GCP-infrastructure.html" title="Partnering on open source: Google and Ansible engineers on managing GCP infrastructure" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://img.youtube.com/vi/ugGZIR3rul4/default.jpg" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/partnering-on-open-source-Google-and-Ansible-engineers-on-managing-GCP-infrastructure.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-6845628951268820910</id><published>2017-06-08T09:00:00.000-07:00</published><updated>2017-06-08T09:00:13.565-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" /><title type="text">App Engine users, now you can configure custom domains from the API or CLI</title><content type="html">&lt;span class="byline-author"&gt;By Lorne Kligerman, Product Manager&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
As a developer, your job is to provide a professional branded experience for your users. If you’re developing web apps, that means you’ll need to host your application on its own custom domain accessed securely over HTTPS with an SSL certificate. &lt;br /&gt;
&lt;br /&gt;
With App Engine, it’s always been easy to access applications from their own hostname, e.g., &amp;lt;YOUR_PROJECT_ID&amp;gt;.appspot.com, but custom domains and SSL certificates could only be configured through the App Engine component of the Cloud Platform Console. &lt;br /&gt;
&lt;br /&gt;
Today, we’re happy to announce that you can now manage both your custom domains and SSL certificates using the new beta features of the Admin API and gcloud command-line tool. These new beta features provide improved management, including the ability to automate mapping domains and uploading SSL certificates.&lt;br /&gt;
&lt;br /&gt;
We hope these new API and CLI commands will simplify managing App Engine applications, help your business scale, and ultimately, allow you to spend more time writing code.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://2.bp.blogspot.com/-RCg8bkHV0B4/WTlwpjakD5I/AAAAAAAAD-k/BPSSOk3Lux0qZk-UQbxjcRNXhXwZy9jzgCLcB/s1600/app-engine.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="256" data-original-width="256" src="https://2.bp.blogspot.com/-RCg8bkHV0B4/WTlwpjakD5I/AAAAAAAAD-k/BPSSOk3Lux0qZk-UQbxjcRNXhXwZy9jzgCLcB/s1600/app-engine.png" /&gt;&lt;/a&gt;&lt;/div&gt;&lt;h3&gt;Managing App Engine custom domains from the CLI &lt;/h3&gt;&lt;br /&gt;
To get started with the CLI, first install the&amp;nbsp;&lt;a href="https://cloud.google.com/sdk/docs/" target="_blank"&gt;Google Cloud SDK&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
To use the new beta commands, make sure you’ve installed the beta component:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud components install beta&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
And if you’ve already installed that component, make sure that it's up to date:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud components update&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
Now that you’ve installed the beta component, verify your domain to register ownership:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud beta domains verify &amp;lt;DOMAIN&amp;gt;
gcloud beta domains list-verified&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
After you've verified ownership, map that domain to your App Engine application:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud beta app domain-mappings create &amp;lt;DOMAIN&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
You can also map your subdomains this way. Note that as of today, only the verified owner can create mappings to a domain. &lt;br /&gt;
&lt;br /&gt;
With the response from the last command, complete the mapping to your application by updating the DNS records of your domain.&lt;br /&gt;
&lt;br /&gt;
To create an HTTPS connection, upload your SSL certificate: &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud beta app ssl-certificates create --display-name 
&amp;lt;CERT_DISPLAY_NAME&amp;gt; --certificate &amp;lt;CERT_DIRECTORY_PATH&amp;gt; 
--private-key &amp;lt;KEY_DIRECTORY_PATH&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
Then update your domain mapping to include the certificate that you just uploaded:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud beta app domain-mappings update &amp;lt;DOMAIN&amp;gt; --certificate-id 
&amp;lt;CERT_ID&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
We're also excited to provide a single command that you can use to renew your certificate before it expires:&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;code&gt;gcloud beta app ssl-certificates update &amp;lt;CERT_ID&amp;gt; --certificate 
&amp;lt;CERT_DIRECTORY_PATH&amp;gt; --private-key &amp;lt;KEY_DIRECTORY_PATH&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;br /&gt;
As with all beta releases, these commands should not yet be used in production environments. For complete details, please check out the &lt;a href="https://cloud.google.com/appengine/docs/standard/python/using-custom-domains-and-ssl" target="_blank"&gt;full set of instructions&lt;/a&gt;, along with the &lt;a href="https://cloud.google.com/appengine/docs/admin-api/reference/rest/" target="_blank"&gt;API reference&lt;/a&gt;. If you have any questions or feedback, we’ll be watching the &lt;a href="https://groups.google.com/forum/#!forum/google-appengine" target="_blank"&gt;Google App Engine forum&lt;/a&gt;, you can log a &lt;a href="https://issuetracker.google.com/issues?q=componentid:187191%2B" target="_blank"&gt;public issue&lt;/a&gt;, or get in touch on the &lt;a href="https://googlecloud-community.slack.com/" target="_blank"&gt;App Engine slack channel (#app-engine)&lt;/a&gt;.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/6RzoJffsOU4" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6845628951268820910" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6845628951268820910" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/6RzoJffsOU4/App-Engine-users-now-you-can-configure-custom-domains-from-the-API-or-CLI.html" title="App Engine users, now you can configure custom domains from the API or CLI" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" 
/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://2.bp.blogspot.com/-RCg8bkHV0B4/WTlwpjakD5I/AAAAAAAAD-k/BPSSOk3Lux0qZk-UQbxjcRNXhXwZy9jzgCLcB/s72-c/app-engine.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/App-Engine-users-now-you-can-configure-custom-domains-from-the-API-or-CLI.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-6453808379015756834</id><published>2017-06-08T06:04:00.000-07:00</published><updated>2017-06-08T06:04:28.715-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Open Source" /><category scheme="http://www.blogger.com/atom/ns#" term="Solutions" /><title type="text">Solutions guide: Preparing Container Engine environments for production</title><content type="html">&lt;span class="byline-author"&gt;By Vic Iglesias, Cloud Solutions Architect&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Many &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt;&amp;nbsp;(GCP) users are now migrating production workloads to &lt;a href="https://cloud.google.com/container-engine/" target="_blank"&gt;Container Engine&lt;/a&gt;, our managed Kubernetes environment. You can spin up a Container Engine cluster for development, then quickly start porting your applications. First and foremost, a production application must be resilient and fault tolerant and deployed using Kubernetes best practices. You also need to prepare the Kubernetes environment for production by hardening it. As part of the migration to production, you may need to lock down who or what has access to your clusters and applications, both from an administrative as well as network perspective.&lt;br /&gt;
&lt;br /&gt;
We recently &lt;a href="https://cloud.google.com/solutions/prep-container-engine-for-prod" target="_blank"&gt;created a guide&lt;/a&gt; that will help you with the push towards production on Container Engine. The guide walks through various patterns and features that allow you to lock down your Container Engine workloads. The first half focuses on how to control administrative access to the cluster using IAM and Kubernetes RBAC. The second half dives into network access patterns, teaching you to properly configure your environment and Kubernetes services. With the IAM and networking models locked down appropriately, you can rest assured that you're ready to start directing your users to your new applications.&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://1.bp.blogspot.com/--5Sv1o6x454/WTlLFbWraII/AAAAAAAAD-U/hqcd-arAqYYj6HEmIcZ_tf_KXcs4DLwyQCLcB/s1600/Screen%2BShot%2B2017-06-08%2Bat%2B6.03.02%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="985" data-original-width="1600" height="392" src="https://1.bp.blogspot.com/--5Sv1o6x454/WTlLFbWraII/AAAAAAAAD-U/hqcd-arAqYYj6HEmIcZ_tf_KXcs4DLwyQCLcB/s640/Screen%2BShot%2B2017-06-08%2Bat%2B6.03.02%2BAM.png" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
Read the &lt;a href="https://cloud.google.com/solutions/prep-container-engine-for-prod" target="_blank"&gt;full solution guide&lt;/a&gt; for using Container Engine for production workloads, or learn more about Container Engine from the &lt;a href="https://cloud.google.com/container-engine/docs/" target="_blank"&gt;documentation&lt;/a&gt;.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/wp4se0oi3V8" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6453808379015756834" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/6453808379015756834" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/wp4se0oi3V8/Solutions-guide-preparing-Container-Engine-environments-for-production.html" title="Solutions guide: Preparing Container Engine environments for production" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://1.bp.blogspot.com/--5Sv1o6x454/WTlLFbWraII/AAAAAAAAD-U/hqcd-arAqYYj6HEmIcZ_tf_KXcs4DLwyQCLcB/s72-c/Screen%2BShot%2B2017-06-08%2Bat%2B6.03.02%2BAM.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" 
/><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/Solutions-guide-preparing-Container-Engine-environments-for-production.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-795927481858970263</id><published>2017-06-07T09:00:00.000-07:00</published><updated>2017-06-07T09:07:14.704-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Announcements" /><category scheme="http://www.blogger.com/atom/ns#" term="Networking" /><title type="text">Getting started with Shared VPC </title><content type="html">&lt;span class="byline-author"&gt;By Neha Pattan, Staff Software Engineer&lt;br /&gt;
&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Large organizations with multiple cloud projects value the ability to share physical resources, while maintaining logical separation between groups or departments. At Google Cloud Next '17, we announced &lt;a href="https://cloud.google.com/compute/docs/shared-vpc/" target="_blank"&gt;Shared VPC&lt;/a&gt;, which allows you to configure and centrally manage one or more virtual networks across multiple projects in your &lt;a href="https://cloud.google.com/resource-manager/docs/quickstart-organizations" target="_blank"&gt;Organization&lt;/a&gt;, the top level Cloud Identity Access Management (Cloud IAM) resource in the &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP) cloud resource hierarchy.&lt;br /&gt;
&lt;br /&gt;
With Shared VPC, you can centrally manage the creation of routes, firewalls, subnet IP ranges, VPN connections, etc. for the entire organization, and at the same time allow developers to own billing, quotas, IAM permissions and autonomously operate their development projects. Shared VPC is now generally available, so let’s look at how it works and how best to configure it.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
How does Shared VPC work?&lt;/h3&gt;
We implemented Shared VPC entirely in the management control plane, transparent to the data plane of the virtual network. In the control plane, the centrally managed project is enabled as a &lt;a href="https://cloud.google.com/compute/docs/shared-vpc/#concepts_and_terminology" target="_blank"&gt;host project&lt;/a&gt;, allowing it to contain one or more shared virtual networks. After configuring the necessary Cloud IAM permissions, you can then create virtual machines in shared virtual networks, by linking one or more &lt;a href="https://cloud.google.com/compute/docs/shared-vpc/#concepts_and_terminology" target="_blank"&gt;service projects&lt;/a&gt; to the host project. The advantage of sharing virtual networks in this way is being able to control access to critical network resources such as firewalls and centrally manage them with less overhead.&lt;br /&gt;
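As a hedged sketch, the host-project setup and service-project linking described above map to gcloud commands along these lines. The project IDs are placeholders, and at the time of writing these commands shipped under a beta surface (gcloud beta compute xpn):

```shell
# Enable a project as a Shared VPC host project.
gcloud compute shared-vpc enable host-project-id

# Link a service project so it can create VMs in the shared network.
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project host-project-id
```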
&lt;br /&gt;
Further, with shared virtual networks, virtual machines benefit from the &lt;a href="https://cloud.google.com/docs/compare/data-centers/networking#performance" target="_blank"&gt;same network throughput caps&lt;/a&gt; and VM-to-VM latency as when they're not on shared networks. This is also the case for VM-to-VPN and load balancer-to-VM communication.&lt;br /&gt;
&lt;br /&gt;
To illustrate, consider a single externally facing web application server that uses services such as personalization, recommendation and analytics, all internally available, but built by different development teams. &lt;br /&gt;
&lt;br /&gt;
&lt;table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://3.bp.blogspot.com/-QjMXB62TZOE/WTghBuV5kJI/AAAAAAAAD-E/Im2lrbZxQBw2jrVwyCI9MJKh9aZdlnhGACLcB/s1600/shared-vpc.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="420" data-original-width="642" height="418" src="https://3.bp.blogspot.com/-QjMXB62TZOE/WTghBuV5kJI/AAAAAAAAD-E/Im2lrbZxQBw2jrVwyCI9MJKh9aZdlnhGACLcB/s640/shared-vpc.png" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: center;"&gt;Example topology of a Shared VPC setup.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;br /&gt;
Let’s look at the recommended patterns when designing such a virtual network in your organization.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Shared VPC administrator role&lt;/h3&gt;
The network administrator of the shared host project should also have the &lt;a href="https://cloud.google.com/compute/docs/access/iam#iam_roles" target="_blank"&gt;XPN administrator role&lt;/a&gt; in the organization. This allows a single central group to configure new service projects that attach to the shared VPC host project, while also allowing them to set up individual subnetworks in the shared network and configure IP ranges, for use by administrators of specific service projects. Typically, these administrators would have the &lt;a href="https://cloud.google.com/compute/docs/access/iam#iam_roles" target="_blank"&gt;InstanceAdmin role&lt;/a&gt; on the service project.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Subnetworks USE permission&lt;/h3&gt;
When connecting a service project to the shared network, we recommend you grant the service project administrators the compute.subnetworks.use permission (through the &lt;a href="https://cloud.google.com/compute/docs/access/iam#iam_roles" target="_blank"&gt;NetworkUser role&lt;/a&gt;) on one or more subnetworks per region, such that each subnetwork is used by a single service project. &lt;br /&gt;
&lt;br /&gt;
This will help ensure cleaner separation of usage of subnetworks by different teams in your organization. In the future, you may choose to associate specific network policies for each subnetwork based on which service project is using it.&lt;br /&gt;
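A minimal sketch of granting that permission, assuming placeholder subnet, region, and user names (this was a beta gcloud command at the time of writing):

```shell
# Grant the NetworkUser role on a single subnetwork to one
# service-project administrator.
gcloud beta compute networks subnets add-iam-policy-binding team-a-subnet \
    --region us-central1 \
    --member "user:dev-lead@example.com" \
    --role "roles/compute.networkUser"
```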
&lt;br /&gt;
&lt;h3&gt;
Subnetwork IP ranges&lt;/h3&gt;
When configuring subnetwork IP ranges in the same or different regions, allow sufficient IP space between subnetworks for future growth. GCP allows you to &lt;a href="https://cloud.google.com/compute/docs/vpc/using-vpc#expand-subnet" target="_blank"&gt;expand an existing subnetwork&lt;/a&gt; without affecting IP addresses owned by existing VMs in the virtual network and with zero downtime. &lt;br /&gt;
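For example, a subnetwork's primary range can be widened in place; the subnet name, region, and prefix length below are illustrative:

```shell
# Expand an existing subnetwork's primary IP range, e.g. to a /20.
# Expansion only grows the range; existing VM addresses are unaffected.
gcloud compute networks subnets expand-ip-range team-a-subnet \
    --region us-central1 \
    --prefix-length 20
```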
&lt;br /&gt;
&lt;h3&gt;
Shared VPC and folders&lt;/h3&gt;
When using &lt;a href="https://cloud.google.com/resource-manager/docs/creating-managing-folders" target="_blank"&gt;folders&lt;/a&gt; to manage projects created in your organization, place all host and service projects for a given shared VPC setup within the same folder: the host project's parent folder should sit in the parent hierarchy of every service project, so that it contains all the projects in the setup. When associating service projects with a host project, ensure that these projects will not move to other folders in the future while still being linked to the host project.&lt;br /&gt;
&lt;h3&gt;
Control external access&lt;/h3&gt;
In order to control and restrict which VMs can have public IPs and thus access to the internet, you can now &lt;a href="https://cloud.google.com/compute/docs/configure-ip-addresses#disableexternalip" target="_blank"&gt;set up an organization policy that disables external IP access&lt;/a&gt; for VMs. Do this only for projects that should have only internal access, e.g. the personalization, recommendation and analytics services in the example above.&lt;br /&gt;
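A hedged sketch of applying such a policy, using the compute.vmExternalIpAccess list constraint (the project ID is a placeholder):

```shell
# Write an org policy file that denies external IPs for all VMs.
printf '%s\n' \
    'constraint: constraints/compute.vmExternalIpAccess' \
    'listPolicy:' \
    '  allValues: DENY' > policy.yaml

# Apply it to an internal-only service project.
gcloud resource-manager org-policies set-policy policy.yaml \
    --project service-project-id
```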
&lt;br /&gt;
As you can see, Shared VPC is a powerful tool that can make GCP more flexible and manageable for your organization. To learn more about Shared VPC, check out the &lt;a href="https://cloud.google.com/compute/docs/shared-vpc/" target="_blank"&gt;documentation&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/w0h86RIaBWc" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/795927481858970263" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/795927481858970263" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/w0h86RIaBWc/getting-started-with-shared-VPC.html" title="Getting started with Shared VPC " /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://3.bp.blogspot.com/-QjMXB62TZOE/WTghBuV5kJI/AAAAAAAAD-E/Im2lrbZxQBw2jrVwyCI9MJKh9aZdlnhGACLcB/s72-c/shared-vpc.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/getting-started-with-shared-VPC.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-2827023676984154780</id><published>2017-06-06T07:45:00.000-07:00</published><updated>2017-06-06T15:57:02.543-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Announcements" /><category scheme="http://www.blogger.com/atom/ns#" term="Developer Tools &amp; Insights" /><category scheme="http://www.blogger.com/atom/ns#" term="Open Source" /><title type="text">Spinnaker 1.0: a continuous delivery 
platform for cloud</title><content type="html">&lt;span class="byline-author"&gt;By Christopher Sanson, Product Manager&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
At Google we deploy a lot of code: tens of thousands of deployments a day, to thousands of services, seven of which have more than a billion users each around the globe. Along the way we’ve learned some best practices about how to deploy software at velocity&amp;nbsp;&lt;span style="font-family: &amp;quot;arial&amp;quot;; font-size: 11pt; white-space: pre-wrap;"&gt;—&lt;/span&gt;&amp;nbsp;things like automated releases, immutable infrastructure, gradual rollouts and fast rollbacks.&lt;br /&gt;
&lt;br /&gt;
Back in 2014, we started working with the Netflix team that created Spinnaker, and saw in it a release management platform that embodied many of our first principles for safe, frequent and reliable releases. Excited by its potential, we collaborated with Netflix to bring Spinnaker to the public, and they open-sourced it in November 2015. Since then, the Spinnaker community has grown to include dozens of organizations including Microsoft, Oracle, Target, Veritas, Schibsted, Armory and Kenzan, to name a few.&lt;br /&gt;
&lt;br /&gt;
Today we’re happy to announce the release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform used in production at companies like Netflix, Waze, Target, and Cloudera, plus a new open-source command line interface (CLI) tool called halyard that makes it easy to deploy Spinnaker itself. Read on to learn what Spinnaker can do for your own software development processes.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
Why Spinnaker?&lt;/h3&gt;
Let’s look at a few of the features and new updates that can make Spinnaker a great release management solution for enterprises:&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt; Open-source, multi-cloud deployments&lt;/b&gt;&lt;br /&gt;
Here at Google Cloud Platform (GCP), we believe in an open cloud. Spinnaker, including its rich UI dashboard, is 100% open-source. You can install it locally, on-prem, or in the cloud, running either on a virtual machine (VM) or Kubernetes.&lt;br /&gt;
&lt;br /&gt;
Spinnaker streamlines the deployment process by decoupling your release pipeline from your target cloud provider, which can reduce the complexity of moving from one platform to another or deploying the same application to multiple clouds.&lt;br /&gt;
&lt;br /&gt;
It has built-in support for Google Compute Engine, Google Container Engine, Google App Engine, AWS EC2, Microsoft Azure, Kubernetes and OpenStack, with more added by the community every year; support for Oracle Bare Metal and DC/OS is coming soon.&lt;br /&gt;
&lt;br /&gt;
Whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt; Automated releases&lt;/b&gt;&lt;br /&gt;
In Spinnaker, deployments are orchestrated using custom release pipelines, the stages of which can consist of almost anything you want: integration or system tests, spinning a server group up or down, manual approvals, waiting a period of time, or running a custom script or Jenkins job.&lt;br /&gt;
&lt;br /&gt;
Spinnaker integrates seamlessly with your existing continuous integration (CI) workflows. You can trigger pipelines from git, Jenkins, Travis CI, Docker registries, on a cron-like schedule, or even other pipelines.&lt;br /&gt;
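To make the stage-and-trigger model concrete, here is a heavily trimmed sketch of what a pipeline definition looks like as JSON; real Spinnaker pipelines carry much more per-stage configuration, and the master, job and stage names here are hypothetical:

```json
{
  "name": "Deploy to staging",
  "triggers": [
    { "type": "jenkins", "master": "my-jenkins", "job": "my-app-build", "enabled": true }
  ],
  "stages": [
    { "type": "bake", "name": "Bake image" },
    { "type": "deploy", "name": "Deploy to staging" },
    { "type": "manualJudgment", "name": "Approve for production" }
  ]
}
```

In practice you rarely hand-write this JSON; the Spinnaker UI builds it for you, but the underlying representation is what makes pipelines easy to version and share.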
&lt;br /&gt;
&lt;b&gt; Best-practice deployment strategies&lt;/b&gt;&lt;br /&gt;
Out-of-the-box, Spinnaker supports sophisticated deployment strategies like release canaries, multiple staging environments, red/black (a.k.a. blue/green) deployments, traffic splitting and easy rollbacks. &lt;br /&gt;
&lt;br /&gt;
This is enabled in part by Spinnaker’s use of immutable infrastructure in the cloud, where changes to your application trigger a redeployment of your entire server fleet. Compare this to the traditional approach of configuring updates to running machines, which results in slower, riskier rollouts and hard-to-debug configuration-drift issues.&lt;br /&gt;
&lt;br /&gt;
With Spinnaker, you simply choose the deployment strategy you want to use for each environment, e.g. red/black for staging and rolling red/black for production, and it orchestrates the dozens of steps necessary under the hood. You don’t have to write your own deployment tool or maintain a complex web of Jenkins scripts to have enterprise-grade rollouts.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt; Role-based authorizations and permissions&lt;/b&gt;&lt;br /&gt;
Large companies often adopt Spinnaker across multiple product areas managed by a central DevOps team. For admins that need role-based access control for a project or account, Spinnaker supports multiple authentication and authorization options, including OAuth, SAML, LDAP, X.509 certs, GitHub teams, Azure groups or Google Groups.&lt;br /&gt;
&lt;br /&gt;
You can also apply permissions to manual judgments, a Spinnaker stage that requires a person’s approval before proceeding with the pipeline, ensuring that a release can’t happen without the right people signing off.&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt; Simplified installation and management with halyard&lt;/b&gt;&lt;br /&gt;
With the release of Spinnaker 1.0, we’re also announcing the launch of a new CLI tool, halyard, that helps admins more easily install, configure and upgrade a production-ready instance of Spinnaker. &lt;br /&gt;
&lt;br /&gt;
Prior to halyard and Spinnaker 1.0, admins had to manage each of the microservices that make up Spinnaker individually. Starting with 1.0, Spinnaker releases are versioned as a whole and follow semantic versioning. With halyard, upgrading to the latest Spinnaker release is as simple as running a CLI command.&lt;br /&gt;
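An upgrade with halyard might look something like the following (the version number shown is illustrative; `hal version list` reports what is actually available):

```shell
# List the released Spinnaker versions halyard knows about
hal version list

# Pin the deployment to a specific release (example version)
hal config version edit --version 1.0.0

# Roll the running Spinnaker installation forward to that version
hal deploy apply
```

Because halyard tracks the whole release rather than individual microservices, a single `hal deploy apply` brings every component to a mutually compatible version.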
&lt;br /&gt;
&lt;h3&gt;
Getting started&lt;/h3&gt;
Try out Spinnaker and make your deployments faster, safer, and, dare we say, boring.&lt;br /&gt;
&lt;br /&gt;
For more info on Spinnaker, &lt;a href="https://www.spinnaker.io/" target="_blank"&gt;visit the new spinnaker.io website&lt;/a&gt; and learn how to get started.&lt;br /&gt;
&lt;br /&gt;
Or if you’re ready to try Spinnaker right now, &lt;a href="https://cloud.google.com/launcher/solution/click-to-deploy-images/spinnaker" target="_blank"&gt;click here to install and run Spinnaker&lt;/a&gt; with Google’s click-to-deploy option in the Cloud Launcher Marketplace.&lt;br /&gt;
&lt;br /&gt;
For questions, feedback, or to engage more with the Spinnaker community, you can find us on the &lt;a href="http://join.spinnaker.io/" target="_blank"&gt;Spinnaker Slack channel&lt;/a&gt;, submit issues to the &lt;a href="https://github.com/spinnaker/spinnaker" target="_blank"&gt;Spinnaker GitHub repository&lt;/a&gt;, or ask questions on &lt;a href="https://stackoverflow.com/questions/tagged/spinnaker" target="_blank"&gt;Stack Overflow&lt;/a&gt; using the “spinnaker” tag.&lt;br /&gt;
&lt;br /&gt;
&lt;h3&gt;
More on Spinnaker&lt;/h3&gt;
&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/netflix-techblog/global-continuous-delivery-with-spinnaker-2a6896c23ba7" target="_blank"&gt;Global Continuous Delivery With Spinnaker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudplatform.googleblog.com/2015/11/Netflixs-Spinnaker-available-now-on-Google-Cloud-Platform.html" target="_blank"&gt;Netflix’s Spinnaker available now on Google Cloud Platform&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloudplatform.googleblog.com/2017/02/guest-post-multi-cloud-continuous-delivery-using-Spinnaker-at-Waze.html" target="_blank"&gt;Guest post: Multi-cloud continuous delivery using Spinnaker at Waze&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=05EZx3MBHSY" target="_blank"&gt;Spinnaker: continuous delivery from first principles to production (Google Cloud Next '17)&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;br /&gt;
&lt;br /&gt;&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/vN_rwAE-sXA" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2827023676984154780" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/2827023676984154780" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/vN_rwAE-sXA/spinnaker-10-continuous-delivery.html" title="Spinnaker 1.0: a continuous delivery platform for cloud" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/spinnaker-10-continuous-delivery.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-3726091641757741931</id><published>2017-06-05T09:01:00.002-07:00</published><updated>2017-06-05T11:01:51.825-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Big Data &amp; Machine Learning" /><category scheme="http://www.blogger.com/atom/ns#" term="Partners" /><title type="text">Join the Intelligent App Challenge brought to you by SAP and Google Cloud</title><content type="html">&lt;span class="byline-author"&gt;By &lt;a href="https://twitter.com/aiazkazi" target="_blank"&gt;Aiaz Kazi&lt;/a&gt;, Head of Platform Ecosystem, Google Cloud&lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Does your organization use SAP? At SAP SAPPHIRE last month, Nan Boden, Google Cloud head of Global Technology Partners, announced the &lt;a href="https://ideas.sap.com/googlecloud" target="_blank"&gt;Intelligent App Challenge&lt;/a&gt;, designed to encourage innovative integrations between the &lt;a href="https://www.sap.com/about.html" target="_blank"&gt;SAP&lt;/a&gt; and Google Cloud ecosystems, and we’re accepting submissions through August 1, 2017. Winning entries could receive up to US $20,000 in GCP credits, tickets to SAP TechEd '17 and SAP Sapphire '18, and on-stage presence at SAP TechEd '17. &lt;br /&gt;
&lt;br /&gt;
Earlier this year, we announced a &lt;a href="https://blog.google/topics/google-cloud/google-cloud-and-sap-forge-partnership-develop-enterprise-solutions/" target="_blank"&gt;strategic partnership with SAP&lt;/a&gt; at Google Cloud Next '17 with a focus on developing and integrating Google’s best cloud and machine learning solutions with SAP enterprise applications. The partnership includes certification of the in-memory database SAP HANA on &lt;a href="http://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP), new G Suite integrations, Google’s machine learning capabilities and data governance collaboration. It also offers Google Cloud and SAP customers more scope, scalability and opportunities to create new products, and has already resulted in the certification of &lt;a href="http://cloud.google.com/sap" target="_blank"&gt;several SAP products on GCP&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
The SAP + GCP collaboration allows developers to take advantage of SAP’s in-memory database running on GCP to store and index large amounts of transactional (OLTP) and analytical (OLAP) data in HANA, and combine it with GCP to use it in new ways. For example, you could build sophisticated and large-scale machine learning (ML) models without needing to transport or transform large subsets of data, or build out the ML infrastructure required to consume and analyze this information. Use Google Cloud Machine Learning tools and APIs along with SAP HANA, express edition to design intelligent business applications such as fraud detection, recommendation engines, talent engagement, intelligent campaign management, conversational interfaces, etc. &lt;br /&gt;
&lt;br /&gt;
We're excited to see how the ecosystem of SAP and Google partners takes our platform and uses it to solve pressing business challenges. It’s our platform and your imagination, combined to build solutions that solve customer problems in new and unique ways.&lt;br /&gt;
&lt;br /&gt;
Entries to the Intelligent App Challenge must be built on GCP with SAP HANA, express edition. Extra consideration will be given to entries that use Machine Learning tools and capabilities. &lt;br /&gt;
&lt;br /&gt;
Registered applicants for the Intelligent App Challenge will also have access to a number of resources and tutorials. Judges will include industry experts, developers, mentors and industry analysts.&lt;br /&gt;
&lt;br /&gt;
Please visit the &lt;a href="https://ideas.sap.com/googlecloud" target="_blank"&gt;Intelligent App Challenge page&lt;/a&gt; to learn more, or &lt;a href="https://docs.google.com/a/google.com/forms/d/e/1FAIpQLSfCcAVTrHFOm8incfRUrP9WnWcR_KA4r3KSXyB-RgnDPb7jaA/viewform" target="_blank"&gt;register&lt;/a&gt; your company today.&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/_wOA55Sodfs" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3726091641757741931" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/3726091641757741931" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/_wOA55Sodfs/join-the-Intelligent-App-Challenge-brought-to-you-by-SAP-and-Google-Cloud.html" title="Join the Intelligent App Challenge brought to you by SAP and Google Cloud" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/join-the-Intelligent-App-Challenge-brought-to-you-by-SAP-and-Google-Cloud.html</feedburner:origLink></entry><entry><id>tag:blogger.com,1999:blog-5589634522109419319.post-1929790191063790102</id><published>2017-06-02T08:59:00.001-07:00</published><updated>2017-06-02T08:59:56.388-07:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="Compute" 
/><title type="text">Enhancing the Python experience on App Engine</title><content type="html">&lt;span class="byline-author"&gt;By Amir Rouzrokh, Product Manager &lt;/span&gt; &lt;br /&gt;
&lt;br /&gt;
Developers have always been at the heart of &lt;a href="https://cloud.google.com/" target="_blank"&gt;Google Cloud Platform&lt;/a&gt; (GCP). And with &lt;a href="https://cloud.google.com/appengine/" target="_blank"&gt;App Engine&lt;/a&gt;, developers can focus on writing code that powers their business and leave the infrastructure hassle to Google, freeing themselves from tasks such as server management and capacity planning. Earlier this year, we announced the &lt;a href="https://cloudplatform.googleblog.com/2017/03/your-favorite-languages-now-on-Google-App-Engine.html" target="_blank"&gt;general availability of App Engine flexible environment&lt;/a&gt;, and later announced the &lt;a href="https://cloudplatform.googleblog.com/2017/03/Google-App-Engine-flexible-environment-now-available-from-Europe-west-regionnt.html" target="_blank"&gt;expansion of App Engine to the europe-west region&lt;/a&gt;. Today we're happy to announce additional upgrades for Python users for both App Engine &lt;a href="https://cloud.google.com/appengine/docs/flexible/python/" target="_blank"&gt;flexible&lt;/a&gt; and &lt;a href="https://cloud.google.com/appengine/docs/standard/python/" target="_blank"&gt;standard&lt;/a&gt; environments. &lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://4.bp.blogspot.com/-6D6by-QA-Bs/WTDes5NCSkI/AAAAAAAAD90/pNThrJaQAv8Ceg8xwbytAIF3SaizdcZHgCLcB/s1600/python.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="256" data-original-width="256" src="https://4.bp.blogspot.com/-6D6by-QA-Bs/WTDes5NCSkI/AAAAAAAAD90/pNThrJaQAv8Ceg8xwbytAIF3SaizdcZHgCLcB/s1600/python.png" /&gt;&lt;/a&gt;&lt;/div&gt;
Starting today, App Engine flexible environment users can deploy to the latest version of Python, 3.6. We first supported Python 3 for App Engine flexible environment back in 2016, and have continued to update the runtime as the community releases new versions of Python 3 and Python 2. We'll continue to update the runtimes to the latest versions as soon as they become available. To see a demo on how to deploy a simple “Hello World” &lt;a href="http://flask.pocoo.org/" target="_blank"&gt;Flask&lt;/a&gt; web application that can deploy to Python 3 in under ten minutes, see our &lt;a href="https://www.youtube.com/watch?v=T_4cGEtHqUs" target="_blank"&gt;video&lt;/a&gt; in an earlier &lt;a href="https://cloudplatform.googleblog.com/2016/08/python-3-on-Google-App-Engine-flexible-environment-now-in-beta.html" target="_blank"&gt;blogpost&lt;/a&gt;.&lt;br /&gt;
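For reference, a flexible-environment Python deployment is driven by an app.yaml along these lines; this is a minimal sketch assuming a Flask application object named `app` in a file called main.py, and `python_version: 3` selects the latest supported Python 3.x interpreter (3.6 as of this post):

```yaml
# app.yaml for the App Engine flexible environment (Python runtime)
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3
```

With this file alongside main.py and a requirements.txt, `gcloud app deploy` builds and releases the application.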
&lt;br /&gt;
On App Engine standard environment, we’ve updated more than 2 million apps running Python 2.7.5 to Python 2.7.12 without any input needed from our users, and as of today, all new deployments will run in this new runtime. To see how to deploy a simple “Hello World” Flask web application to the Python 2 runtime in under a minute, one that deploys in seconds and scales to millions of requests per second, see our &lt;a href="https://cloud.google.com/appengine/docs/standard/python/getting-started/python-standard-env" target="_blank"&gt;Getting Started guide&lt;/a&gt;. We're committed to updating Python to the latest versions of Python 2 as they become available, and bringing the latest versions of Python 3 to the App Engine standard environment is on our roadmap. Stay tuned!&lt;br /&gt;
&lt;br /&gt;
On the libraries side, App Engine flexible environment users can continue to pull in any library the application requires by simply &lt;a href="https://cloud.google.com/appengine/docs/flexible/python/using-python-libraries" target="_blank"&gt;providing a requirements.txt file during deployment&lt;/a&gt;. App Engine standard environment users also now have updated runtime-provided libraries. Refer to the App Engine standard documentation for the full list of &lt;a href="https://cloud.google.com/appengine/docs/standard/python/tools/built-in-libraries-27" target="_blank"&gt;built-in&lt;/a&gt; and &lt;a href="https://cloud.google.com/appengine/docs/standard/python/tools/using-libraries-python-27" target="_blank"&gt;third-party&lt;/a&gt; libraries. We'll continue updating these libraries as new versions become available.  &lt;br /&gt;
&lt;br /&gt;
As of today, Python developers have deployed more than 6,000,000 applications to App Engine, and &lt;a href="https://cloud.google.com/customers/" target="_blank"&gt;companies large and small continue&lt;/a&gt; to innovate without having to worry about infrastructure. App Engine has built-in support for microservices, autoscaling, load balancing, traffic splitting and much more. And with a commitment to open source and open cloud, App Engine continues to welcome contributions from the developer community on both &lt;a href="https://github.com/GoogleCloudPlatform/python-runtime" target="_blank"&gt;the runtimes&lt;/a&gt; and &lt;a href="https://github.com/GoogleCloudPlatform/google-cloud-python" target="_blank"&gt;the libraries&lt;/a&gt;. If you wish to keep up to date with the latest runtime releases, please bookmark the release notes pages for Python on App Engine &lt;a href="https://cloud.google.com/appengine/docs/standard/python/release-notes#may_15_2017" target="_blank"&gt;standard&lt;/a&gt; and &lt;a href="https://cloud.google.com/sdk/docs/release-notes#google_app_engine" target="_blank"&gt;flexible&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Feel free to reach out to us on Twitter using the handle &lt;a href="https://twitter.com/googlecloud" target="_blank"&gt;@googlecloud&lt;/a&gt;. We're also on the Google Cloud Slack community. To get in touch, &lt;a href="https://gcp-slack.appspot.com/" target="_blank"&gt;request an invite&lt;/a&gt; to join the &lt;a href="https://googlecloud-community.slack.com/?redir=%2Farchives%2Fpython" target="_blank"&gt;Slack Python channel&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Happy coding!&lt;div class="blogger-post-footer"&gt;&lt;img src="https://www.google-analytics.com/collect?v=1&amp;tid=UA-34322147-16&amp;t=pageview&amp;cs=googlecloudplatform.blogspot.com&amp;cm=feed"/&gt;&lt;/div&gt;&lt;img src="http://feeds.feedburner.com/~r/ClPlBl/~4/nMYaMu4vKOE" height="1" width="1" alt=""/&gt;</content><link rel="edit" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/1929790191063790102" /><link rel="self" type="application/atom+xml" href="http://www.blogger.com/feeds/5589634522109419319/posts/default/1929790191063790102" /><link rel="alternate" type="text/html" href="http://feedproxy.google.com/~r/ClPlBl/~3/nMYaMu4vKOE/enhancing-the-Python-experience-on-App-Engine.html" title="Enhancing the Python experience on App Engine" /><author><name>GCP Team</name><uri>https://plus.google.com/110350131288337198042</uri><email>noreply@blogger.com</email><gd:image rel="http://schemas.google.com/g/2005#thumbnail" width="16" height="16" src="http://img1.blogblog.com/img/b16-rounded.gif" /></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://4.bp.blogspot.com/-6D6by-QA-Bs/WTDes5NCSkI/AAAAAAAAD90/pNThrJaQAv8Ceg8xwbytAIF3SaizdcZHgCLcB/s72-c/python.png" height="72" width="72" /><gd:extendedProperty name="commentSource" value="1" /><gd:extendedProperty name="commentModerationMode" value="FILTERED_POSTMOD" /><feedburner:origLink>http://cloudplatform.googleblog.com/2017/06/enhancing-the-Python-experience-on-App-Engine.html</feedburner:origLink></entry></feed>
