CloudStatus

Google Cloud Status Alerts

Google Cloud Platform
UPDATE: Incident 17009 - The issue with the Google App Engine Admin API has been resolved for all affected users as of Thursday, 2017-12-14 12:15 US/Pacific.

The issue with the Google App Engine Admin API has been resolved for all affected users as of Thursday, 2017-12-14 12:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:00 US/Pacific.

The issue with elevated Cloud Storage error rates has been resolved for all affected projects as of Thursday, 2017-11-30 16:10 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 16:30 US/Pacific.

The Cloud Storage service is experiencing an error rate of less than 10%. We will provide another status update by 2017-11-30 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 15:00 US/Pacific.

The Cloud Storage service is experiencing an error rate of less than 10%. We will provide another status update by 2017-11-30 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 15:00 US/Pacific.

The Cloud Storage service is experiencing an error rate of less than 10%. We will provide another status update by YYYY-mm-dd HH:MM US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - From 10:58 to 11:57 US/Pacific, GCE VM instances experienced packet loss to the Internet. The issue has been mitigated for all affected projects.

From 10:58 to 11:57 US/Pacific, GCE VM instances experienced packet loss to the Internet. The issue has been mitigated for all affected projects.

Google Cloud Platform
UPDATE: Incident 17004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 07:00 US/Pacific.

The issue with Google Compute Engine VM instances losing connectivity has been resolved for all affected users as of Friday, 2017-11-17 07:17 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17008 - App Engine increasingly showing 5xx

The issue with App Engine has been resolved for all affected projects as of 4:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17008 - App Engine increasingly showing 5xx

The issue also affected projects in other regions but should be resolved for the majority of projects. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17007 - The Memcache service has recovered from a disruption between 12:30 US/Pacific and 15:30 US/Pacific.

ISSUE SUMMARY
On Monday 6 November 2017, the App Engine Memcache service experienced unavailability for applications in all regions for 1 hour and 50 minutes. We sincerely apologize for the impact of this incident on your application or service. We recognize the severity of this incident and will be undertaking a detailed review to fully understand the ways in which we must change our systems to prevent a recurrence.
DETAILED DESCRIPTION OF IMPACT
On Monday 6 November 2017 from 12:33 to 14:23 PST, the App Engine Memcache service experienced unavailability for applications in all regions. Some customers experienced elevated Datastore latency and errors while Memcache was unavailable. At this time, we believe that all the Datastore issues were caused by surges of Datastore activity due to Memcache being unavailable. When Memcache failed, if an application sent a surge of Datastore operations to specific entities or key ranges, then Datastore may have experienced contention or hotspotting, as described in https://cloud.google.com/datastore/docs/best-practices#designing_for_scale. Datastore experienced elevated load on its servers when the outage ended due to a surge in traffic. Some applications in the US experienced elevated latency on gets between 14:23 and 14:31, and elevated latency on puts between 14:23 and 15:04. Customers running Managed VMs experienced failures of all HTTP requests and App Engine API calls during this incident. Customers using App Engine Flexible Environment, which is the successor to Managed VMs, were not impacted.
ROOT CAUSE
The App Engine Memcache service requires a globally consistent view of the current serving datacenter for each application in order to guarantee strong consistency when traffic fails over to alternate datacenters. The configuration which maps applications to datacenters is stored in a global database. The incident occurred when the specific database entity that holds the configuration became unavailable for both reads and writes following a configuration update. App Engine Memcache is designed in such a way that the configuration is considered invalid if it cannot be refreshed within 20 seconds. When the configuration could not be fetched by clients, Memcache became unavailable.
REMEDIATION AND PREVENTION
Google received an automated alert at 12:34. Following normal practices, our engineers immediately looked for recent changes that may have triggered the incident. At 12:59, we attempted to revert the latest change to the configuration file. This configuration rollback required an update to the configuration in the global database, which also failed. At 14:21, engineers were able to update the configuration by sending an update request with a sufficiently long deadline. This caused all replicas of the database to synchronize and allowed clients to read the mapping configuration. As a temporary mitigation, we have reduced the number of readers of the global configuration, which avoids the contention during writes that led to the unavailability during the incident. Engineering projects are already under way to regionalize this configuration and thereby limit the blast radius of similar failure patterns in the future.

Google Cloud Platform
RESOLVED: Incident 17007 - The Memcache service has recovered from a disruption between 12:30 US/Pacific and 15:30 US/Pacific.

The issue with Memcache availability has been resolved for all affected projects as of 15:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation. This is the final update for this incident.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is still recovering. We will provide more information by 16:00 US/Pacific.

The Memcache service is still recovering from the outage. The rate of errors continues to decrease and we expect a full resolution of this incident in the near future. We will provide an update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is recovering now. We will provide more information by 15:30 US/Pacific.

The issue with Memcache and MVM availability should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide an update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is being normalized now. We will provide more information by 15:15 US/Pacific.

We are experiencing an issue with Memcache availability beginning at 12:30 US/Pacific on November 6, 2017. At this time we are gradually ramping up traffic to Memcache and we see that the rate of errors is decreasing. Other services affected by the outage, such as MVM instances, should be normalizing in the near future. We will provide an update by 15:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service is currently experiencing a disruption. We will provide more information by 14:30 US/Pacific.

We are experiencing an issue with Memcache availability beginning at 12:30 US/Pacific on November 6, 2017. Our Engineering Team believes they have identified the root cause of the errors and is working to mitigate. We will provide an update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service is currently experiencing a disruption. We will provide more information by 14:30 US/Pacific.

We are experiencing an issue with Memcache availability beginning at 12:30 US/Pacific on November 6, 2017. Current data indicates that all projects using Memcache are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 14:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Investigating incident with AppEngine and Memcache.

We are experiencing an issue with Memcache availability beginning at 12:30 US/Pacific on November 6, 2017. Current data indicates that all projects using Memcache are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Investigating incident with AppEngine and Memcache.

We are investigating an issue with Google App Engine and Memcache. We will provide more information by 13:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17018 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy, as well as the issue with Cloud SQL admin activities, has been resolved for all affected users as of 20:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17019 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy, as well as the issue with Cloud SQL admin activities, has been resolved for all affected users as of 2017-10-30 20:45 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17018 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

We are continuing to experience an issue with Cloud SQL connectivity, affecting only connections from App Engine and connections using the Cloud SQL Proxy, beginning at 2017-10-30 17:00 US/Pacific. We are also observing elevated failure rates for Cloud SQL admin activities (using the Cloud SQL portion of the Cloud Console UI, using gcloud beta sql, directly using the Admin API, etc.). Our Engineering Team believes they have identified the root cause and mitigation effort is currently underway. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another update by 2017-10-30 21:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - We are investigating an issue with GKE. We will provide more information by 16:00 US/Pacific.

We are investigating an issue involving the inability of pods to be rescheduled on Google Container Engine (GKE) nodes after Docker reboots or crashes. This affects GKE versions 1.6.11, 1.7.7, 1.7.8 and 1.8.1. Our engineering team will roll out a fix next week; no further updates will be provided here. If experienced, the issue can be mitigated by manually restarting the affected nodes.
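For illustration only (this is not part of the original notice; the node name and zone are placeholders), an affected node can be restarted by resetting its underlying Compute Engine instance with the gcloud tool. Note that a reset reboots the VM and briefly interrupts any workloads still running on that node:
# List instances to identify the affected GKE node, then reset (reboot) it
gcloud compute instances list
gcloud compute instances reset <gke-node-name> --zone=<zone>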

Google Cloud Platform
RESOLVED: Incident 17005 - Elevated GCS Errors from Canada

ISSUE SUMMARY
Starting Thursday 12 October 2017, Google Cloud Storage clients located in the Northeast of North America experienced up to a 10% error rate for a duration of 21 hours and 35 minutes when fetching objects stored in multi-regional buckets in the US. We apologize for the impact of this incident on your application or service. The reliability of our service is a top priority and we understand that we need to do better to ensure that incidents of this type do not recur.
DETAILED DESCRIPTION OF IMPACT
Between Thursday 12 October 2017 12:47 PDT and Friday 13 October 2017 10:12 PDT, Google Cloud Storage clients located in the Northeast of North America experienced up to a 10% rate of 503 errors and elevated latency. Some users experienced higher error rates for brief periods. This incident only impacted requests to fetch objects stored in multi-regional buckets in the US; clients were able to mitigate impact by retrying. The percentage of total global requests to Cloud Storage that experienced errors was 0.03%.
ROOT CAUSE
Google ensures balanced use of its internal networks by throttling outbound traffic at the source host in the event of congestion. This incident was caused by a bug in an earlier version of the job that reads Cloud Storage objects from disk and streams data to clients. Under high traffic conditions, the bug caused these jobs to incorrectly throttle outbound network traffic even though the network was not congested. Google had previously identified this bug and was in the process of rolling out a fix to all Google datacenters. At the time of the incident, Cloud Storage jobs in a datacenter in Northeast North America that serves requests to some Canadian and US clients had not yet received the fix. This datacenter is not a location for customer buckets (https://cloud.google.com/storage/docs/bucket-locations), but objects in multi-regional buckets can be served from instances running in this datacenter in order to optimize latency for clients.
REMEDIATION AND PREVENTION
The incident was first reported by a customer to Google on Thursday 12 October 14:59 PDT. Google engineers determined root cause on Friday 13 October 09:47 PDT. We redirected Cloud Storage traffic away from the impacted region at 10:08 and the incident was resolved at 10:12. We have now rolled out the bug fix to all regions. We will also add external monitoring probes for all regional points of presence so that we can more quickly detect issues of this type.
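The retry mitigation mentioned above can be illustrated with a short bash sketch (this is not from the incident report; the bucket and object names are placeholders, and gsutil itself already retries many transient 5xx errors):
# Fetch an object, retrying up to 5 times with exponential backoff (2s, 4s, 8s, ...)
for attempt in 1 2 3 4 5; do
  gsutil cp gs://<bucket>/<object> /tmp/<object> && break
  sleep $((2 ** attempt))
done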

Google Cloud Platform
UPDATE: Incident 17002 - Jobs not terminating

The issue with Cloud Dataflow in which batch jobs are stuck and cannot be terminated has been resolved for all affected projects as of Wednesday, 2017-10-18 02:58 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Jobs not terminating

A fix for the issue with Cloud Dataflow in which batch jobs are stuck and cannot be terminated is currently getting rolled out. We expect a full resolution in the near future. We will provide another status update by Wednesday, 2017-10-18 03:45 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17007 - Stackdriver Uptime Check Alerts Not Firing

The issue with Stackdriver Uptime Check Alerts not firing has been resolved for all affected projects as of Monday, 2017-10-16 13:08 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17007 - Stackdriver Uptime Check Alerts Not Firing

We are investigating an issue with Stackdriver Uptime Check Alerts. We will provide more information by 13:15 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17005 - Elevated GCS Errors from Canada

The issue with Google Cloud Storage request failures for users in Canada and Northeast North America has been resolved for all affected users as of Friday, 2017-10-13 10:08 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - Elevated GCS Errors from Canada

We are investigating an issue with Google Cloud Storage users in Canada and Northeast North America experiencing HTTP 503 failures. We will provide more information by 10:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17004 - Elevated GCS errors in us-east1

The issue with the GCS service has been resolved for all affected users as of 14:31 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17004 - Elevated GCS errors in us-east1

The issue with the GCS service should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17004 - Elevated GCS errors in us-east1

We are investigating an issue that occurred with GCS starting at 13:19 PDT. We will provide more information by 14:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

The issue with Project Creation failing with "Unknown error" has been resolved for all affected users as of Tuesday, 2017-10-03 22:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

The issue with Google Stackdriver has been resolved for all affected users as of 2017-10-03 16:28 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are continuing to investigate the Google Stackdriver issue. Graphs are fully restored, but alerting policies and uptime checks are still degraded. We will provide another update at 17:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are continuing to investigate the Google Stackdriver issue. In addition to graph and alerting policy unavailability, uptime checks are not completing successfully. We believe we have isolated the root cause and are working on a resolution, and we will provide another update at 16:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are investigating an issue with Google Stackdriver that is causing charts and alerting policies to be unavailable. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

Project creation is experiencing a 100% error rate on requests. We will provide another status update by Tuesday, 2017-10-03 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

We are investigating an issue with Project creation. We will provide more information by 12:40 PM US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - Errors creating new Stackdriver accounts and adding new projects to existing Stackdriver accounts.

The issue with Google Stackdriver has been resolved for all affected projects as of Friday, 2017-09-29 15:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Activity Stream not showing new Activity Logs

We are currently investigating an issue with the Cloud Console's Activity Stream not showing new Activity Logs.

Google Cloud Platform
RESOLVED: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

The issue with Pub/Sub subscription creation has been resolved for all affected projects as of 08:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

We are experiencing an issue with Pub/Sub subscription creation beginning at 2017-09-13 06:30 US/Pacific. Current data indicates that approximately 12% of requests are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

We are investigating an issue with Google Pub/Sub. We will provide more information by 07:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

ISSUE SUMMARY
On Tuesday 29 August and Wednesday 30 August 2017, Google Cloud Network Load Balancing and Internal Load Balancing could not forward packets to backend instances if they were live-migrated. This incident initially affected instances in all regions for 30 hours and 22 minutes. We apologize for the impact this had on your services. We are particularly cognizant of the failure of a system used to increase reliability and the long duration of the incident. We have completed an extensive postmortem to learn from the issue and improve Google Cloud Platform.
DETAILED DESCRIPTION OF IMPACT
From 13:56 PDT on Tuesday 29 August 2017 to 20:18 on Wednesday 30 August 2017, Cloud Network Load Balancer and Internal Load Balancer in all regions were unable to reach any instance that live-migrated during that period. Instances which did not experience live-migration during this period were not affected. Our internal investigation shows that approximately 2% of instances using Network Load Balancing or Internal Load Balancing were affected by the issue.
ROOT CAUSE
Live-migration transfers a running VM from one host machine to another host machine within the same zone. All VM properties and attributes remain unchanged, including internal and external IP addresses, instance metadata, block storage data and volumes, OS and application state, network settings, network connections, and so on. In this case, a change in the internal representation of networking information in VM instances caused inconsistency between two values, both of which were supposed to hold the external and internal virtual IP addresses of load balancers. When an affected instance was live-migrated, the instance was deprogrammed from the load balancer because of the inconsistency. This made it impossible for load balancers that used the instance as a backend to look up the destination IP address of the instance following its migration, which in turn caused all packets destined to that instance to be dropped at the load balancer level.
REMEDIATION AND PREVENTION
The issue was first detected through reports of missing backend connectivity made to the GCP support team at 23:30 on Tuesday. At 00:28 on Wednesday two Cloud Network engineering teams were paged to investigate the issue. Detailed investigations continued until 08:07, when the configuration change that caused the issue was confirmed as such. The rollback of the new configuration was completed by 08:32, at which point no new live-migration would cause the issue. Google engineers then started to run a program to fix all mismatched network information at 08:56, and all affected instances were restored to a healthy status by 20:18. To prevent a recurrence, Google engineers are working to enhance automated canary testing that simulates live-migration events, improve detection of load balancer packet loss, and enforce stricter restrictions on deploying configuration changes that alter internal representations.

Google Cloud Platform
RESOLVED: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

The issue with Network Load Balancers has been resolved for all affected projects as of 20:18 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

The issue with BigQuery queries failing has been resolved for all affected users as of 12:05pm US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

The BigQuery service is experiencing a 16% error rate on queries. We will provide another status update by 12:00pm US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

We are investigating an issue with BigQuery queries failing starting at 10:15am PT

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We wanted to send another update with better formatting. We will provide another update on resolving affected instances by 12:00 PDT. Affected customers can also mitigate their affected instances with the following procedure (which causes the Network Load Balancer to be reprogrammed) using the gcloud tool or via the Compute Engine API. NB: No modification to the existing load balancer configurations is necessary, but a temporary TargetPool needs to be created.
1. Create a new TargetPool.
2. Add the affected VMs in a region to the new TargetPool.
3. Wait for the VMs to start working in their existing load balancer configuration.
4. Delete the new TargetPool.
DO NOT delete the existing load balancer config, including the old target pool. It is not necessary to create a new ForwardingRule.
Example:
1) gcloud compute target-pools create dummy-pool --project=<your_project> --region=<region>
2) gcloud compute target-pools add-instances dummy-pool --instances=<instance1,instance2,...> --project=<your_project> --region=<region> --instances-zone=<zone>
3) (Wait)
4) gcloud compute target-pools delete dummy-pool --project=<your_project> --region=<region>

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

Our first mitigation has completed at this point and no new instances should be affected. We are slowly going through and fixing affected customers. Affected customers can also mitigate their affected instances with the following procedure (which causes the Network Load Balancer to be reprogrammed) using the gcloud tool or via the Compute Engine API. NB: No modification to the existing load balancer configurations is necessary, but a temporary TargetPool needs to be created.
1. Create a new TargetPool.
2. Add the affected VMs in a region to the new TargetPool.
3. Wait for the VMs to start working in their existing load balancer configuration.
4. Delete the new TargetPool.
DO NOT delete the existing load balancer config, including the old target pool. It is not necessary to create a new ForwardingRule.
Example:
gcloud compute target-pools create dummy-pool --project=<your_project> --region=<region>
gcloud compute target-pools add-instances dummy-pool --instances=<instance1,instance2,...> --project=<your_project> --region=<region> --instances-zone=<zone>
(Wait)
gcloud compute target-pools delete dummy-pool --project=<your_project> --region=<region>

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers. The configuration change to mitigate this issue has been rolled out and we are working on further measures to completely resolve the issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 10:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. The configuration change to mitigate this issue has been rolled out and we are working on further measures to completely resolve the issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 09:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. We have identified the event that triggers this issue and are rolling back a configuration change to mitigate this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. Mitigation work is still in progress. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. Our previous actions did not resolve the issue. We are pursuing alternative solutions. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 07:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-west1 and asia-east1 not being able to connect to backends. Our Engineering Team has reduced the scope of possible root causes and is still investigating. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 06:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. The investigation is still ongoing. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. We have ruled out several possible failure scenarios. The investigation is still ongoing. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 04:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 04:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network load balancer connectivity. We will provide more information by 02:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network connectivity. We will provide more information by 01:50 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network connectivity. We will provide more information by 01:20 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

ISSUE SUMMARY
On Tuesday 15 August 2017, Google Cloud SQL experienced issues in the europe-west1 zones for a duration of 3 hours and 35 minutes. During this time, new connections from Google App Engine (GAE) or Cloud SQL Proxy would time out and return an error. In addition, Cloud SQL connections with ephemeral certs that had been open for more than one hour timed out and returned an error. We apologize to our customers whose projects were affected – we are taking immediate action to improve the platform’s performance and availability.
DETAILED DESCRIPTION OF IMPACT
On Tuesday 15 August 2017 from 17:20 to 20:55 PDT, 43.1% of Cloud SQL instances located in europe-west1 could not be managed with the Google Cloud SQL Admin API to create or make changes. Customers who connected from GAE or used the Cloud SQL Proxy (which includes most connections from Google Container Engine) were denied new connections to their database.
ROOT CAUSE
The issue surfaced through a combination of a spike in error rates internal to the Cloud SQL service and a lack of available resources in the Cloud SQL control plane for europe-west1. By way of background, the Cloud SQL system uses a database to store metadata for customer instances. This metadata is used for validating new connections. Validation will fail if the load on the database is heavy. In this case, Cloud SQL’s automatic retry logic overloaded the control plane and consumed all the available Cloud SQL control plane processing in europe-west1. This in turn caused the Cloud SQL Proxy and the front-end client-server pairing to reject connections when ACLs and certificate information stored in the Cloud SQL control plane could not be accessed.
REMEDIATION AND PREVENTION
Google engineers were paged at 17:20 when automated monitoring detected an increase in control plane errors. Initial troubleshooting steps did not sufficiently isolate the issue and reduce the database load. Engineers then disabled non-critical control plane services for Cloud SQL to shed load and allow the service to catch up. They then began a rollback to the previous configuration to bring the system back to a healthy state. This incident has surfaced technical issues which hinder our intended level of service and reliability for the Cloud SQL service. We have begun a thorough investigation of similar potential failure patterns in order to avoid this type of service disruption in the future. We are adding additional monitoring to quickly detect the metadata database timeouts which caused the control plane outage. We are also working to make the Cloud SQL control plane services more resilient to metadata database latency by making the service not directly call the database for connection validation. We realize this event may have impacted your organization and we apologize for this disruption. Thank you again for your business with Google Cloud SQL.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy in europe-west1 has been resolved for all affected projects as of 20:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are continuing to experience an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. Engineering is working on mitigating the situation. We will provide an update by 21:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are continuing to experience an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. Engineering is currently working on mitigating the situation. We will provide an update by 20:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are experiencing an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage object overwrites. For buckets with Google Cloud Functions or Object Change Notification enabled, notifications were not being triggered when a new object overwrote an existing object. Other object operations are not affected. Buckets with Google Cloud Pub/Sub configured are also not affected. The root cause has been found and confirmed by partial rollback. Full rollback is expected to be completed within an hour. Between now and full rollback, affected buckets are expected to begin triggering on updates; triggering may be intermittent initially and is expected to stabilize when the rollback is complete. We will provide another update by 14:00 with any new details. ETA for resolution: 14:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage function triggering on object update. Apiary notifications on object updates were also not sent during this issue. Other object operations are not reporting problems. The root cause has been found and confirmed by partial rollback. Full rollback is expected to be completed within an hour. Between now and full rollback, affected functions are expected to begin triggering on updates; triggering may be intermittent initially and is expected to stabilize when the rollback is complete. We will provide another update by 14:00 with any new details. ETA for resolution: 13:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage function triggering on object update. Other object operations are not reporting problems. We will provide more information by 12:00 US/Pacific

Google Cloud Platform
RESOLVED: Incident 18032 - BigQuery Disabled for Projects

ISSUE SUMMARY
On 2017-07-26, BigQuery delivered error messages for 7% of queries and 15% of exports for a duration of two hours and one minute. It also experienced elevated failures for streaming inserts for one hour and 40 minutes. If your service or application was affected, we apologize – this is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve BigQuery’s performance and availability.
DETAILED DESCRIPTION OF IMPACT
On 2017-07-26 from 13:45 to 15:45 US/Pacific, BigQuery jobs experienced elevated failures at a rate of 7% to 15%, depending on the operation attempted. Overall, 7% of queries, 15% of exports, and 9% of streaming inserts failed during this event. These failures occurred in 12% of customer projects. The error rates for affected projects varied from 2% to 69% for exports, over 50% for queries, and up to 28.5% for streaming inserts. Affected customers saw an error message stating that their project had “not enabled BigQuery”.
ROOT CAUSE
Prior to executing a BigQuery job, Google’s Service Manager validates that the project requesting the job has BigQuery enabled for the project. The Service Manager consists of several components, including a redundant data store for project configurations, and a permissions module which inspects configurations. The project configuration data is being migrated to a new format and new version of the data store, and as part of that migration, the permissions module is being updated to use the new format. As is normal production best practice, this migration is being performed in stages separated by time. The root cause of this event was that, during one stage of the rollout, configuration data for two GCP datacenters was migrated before the corresponding permissions module for BigQuery was updated. As a result, the permissions module in those datacenters began erroneously reporting that projects running there no longer had BigQuery enabled. Thus, while both BigQuery and the underlying data stores were unchanged, requests to BigQuery from affected projects received an error message indicating that they had not enabled BigQuery.
REMEDIATION AND PREVENTION
Google’s BigQuery on-call engineering team was alerted by automated monitoring within 15 minutes of the beginning of the event, at 13:59. Subsequent investigation determined at 14:17 that multiple projects were experiencing BigQuery validation failures, and the cause of the errors was identified at 14:46 as being changed permissions. Once the root cause of the errors was understood, Google engineers focused on mitigating the user impact by configuring BigQuery in affected locations to skip the erroneous permissions check. This change was first tested in a portion of the affected projects beginning at 15:04, and confirmed to be effective at 15:29. That mitigation was then rolled out to all affected projects, and was complete by 15:44. Finally, with mitigations in place, the Google engineering team worked to safely roll back the data migration; this work completed at 23:33 and the permissions check mitigation was removed, closing the incident. Google engineering has created 26 high priority action items to prevent a recurrence of this condition and to better detect and more quickly mitigate similar classes of issues in the future. These action items include increasing the auditing of BigQuery’s use of Google’s Service Manager, improving the detection and alerting of the conditions that caused this event, and improving the response of Google engineers to similar events. In addition, the core issue that affected the BigQuery backend has already been fixed. Google is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

The issue with BigQuery access errors has been resolved for all affected projects as of 16:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

The issue with BigQuery errors should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

BigQuery engineers have identified a possible workaround to the issue affecting the platform and are deploying it now. Next update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

At this time BigQuery is experiencing a partial outage, with requests reporting that the service is not available for the project. Engineers are currently investigating the issue.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

We are investigating an issue with BigQuery. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud VPN in us-west1

The issue with connectivity to Cloud VPN and External IPs in us-west1 has been resolved for all affected projects as of 14:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

The issue with listing projects in the Google Cloud Console has been resolved as of 2017-07-21 07:11 PDT.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

The issue with Google Cloud Console errors should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

We are experiencing an issue with Google Cloud Console returning errors beginning at Fri, 2017-07-21 02:50 US/Pacific. Early investigation indicates that users may see errors when listing projects in Google Cloud Console and via the API. Some other pages in Google Cloud Console may also display an error; refreshing the pages may help. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

We are investigating an issue with Google Cloud Console. We will provide more information by 03:45 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18031 - BigQuery API server returning errors

The issue with BigQuery API returning errors has been resolved for all affected users as of 04:10 US/Pacific. We apologize for the impact that this incident had on your application.

Google Cloud Platform
UPDATE: Incident 18031 - BigQuery API server returning errors

The issue with Google BigQuery should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18031 - BigQuery API server returning errors

We are investigating an issue with Google BigQuery. We will provide more information by 04:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18030 - Streaming API errors

ISSUE SUMMARY
On Wednesday 28 June 2017, streaming data into Google BigQuery experienced elevated error rates for a period of 57 minutes. We apologize to all users whose data ingestion pipelines were affected by this issue. We understand the importance of reliability for a process as crucial as data ingestion and are taking committed actions to prevent a similar recurrence in the future.
DETAILED DESCRIPTION OF IMPACT
On Wednesday 28 June 2017 from 18:00 to 18:20 and from 18:40 to 19:17 US/Pacific time, BigQuery's streaming insert service returned an increased error rate to clients for all projects. The proportion varied from time to time, but failures peaked at 43% of streaming requests returning HTTP response code 500 or 503. Data streamed by clients that encountered these errors and had no retry logic was not saved into target tables during this period.
ROOT CAUSE
Streaming requests are routed to different datacenters for processing based on the table ID of the destination table. A sudden increase in traffic to the BigQuery streaming service combined with diminished capacity in a datacenter resulted in that datacenter returning a significant amount of errors for tables whose IDs landed in that datacenter. Other datacenters processing streaming data into BigQuery were unaffected.
REMEDIATION AND PREVENTION
Google engineers were notified of the event at 18:20 and immediately started to investigate the issue. The first set of errors had subsided, but starting at 18:40 error rates increased again. At 19:17 Google engineers redirected traffic away from the affected datacenter. The table IDs in the affected datacenter were redistributed to the remaining, healthy streaming servers and error rates began to subside. To prevent the issue from recurring, Google engineers are improving the load balancing configuration, so that spikes in streaming traffic can be more equitably distributed amongst the available streaming servers. Additionally, engineers are adding further monitoring as well as tuning existing monitoring to decrease the time it takes to alert engineers of issues with the streaming service. Finally, Google engineers are evaluating rate-limiting strategies for the backends to prevent them from becoming overloaded.
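As a hedged sketch of the client-side retry logic referenced above (not part of the incident report; the dataset, table, and file names are placeholders), a streaming insert of newline-delimited JSON rows via the bq command-line tool could be retried with exponential backoff:
# Retry transient streaming-insert failures, giving up after 5 attempts
attempt=1
until bq insert <dataset>.<table> rows.json; do
  [ "$attempt" -ge 5 ] && { echo "giving up after $attempt attempts" >&2; break; }
  sleep $((2 ** attempt))
  attempt=$((attempt + 1))
done
In application code the same pattern applies: retry 500/503 responses with exponential backoff rather than dropping the rows.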

Google Cloud Platform
UPDATE: Incident 17004 - Stackdriver Uptime Monitoring - alerting policies with uptime check health conditions will not fire or resolve

We are experiencing an issue with Stackdriver Uptime Monitoring: alerting policies with uptime check health conditions will not fire or resolve, and latency charts on the uptime dashboard will be missing, beginning at approximately Thursday, 2017-07-06 17:00 US/Pacific. Current data indicates that all projects are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

We are experiencing an intermittent issue with Google Cloud Storage - JSON API requests are failing with 5XX errors (XML API is unaffected) beginning at Thursday, 2017-07-06 16:50:40 US/Pacific. Current data indicates that approximately 70% of requests globally are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17001 - Google Cloud Storage elevated error rates

The issue with degraded availability for some Google Cloud Storage objects has been resolved for all affected projects. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with Cloud SQL V2 instances incorrectly reporting an 'Unable to Failover' state has been resolved for all affected instances as of 12:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with Cloud SQL V2 instances incorrectly reporting an 'Unable to Failover' state should be resolved for some instances and we expect a full resolution in the near future. We will provide another status update by 12:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

Our Engineering Team believes they have identified the root cause of the incorrect reports of 'Unable to Failover' state and is working to mitigate. We will provide another status update by 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with some Cloud SQL V2 instances failing to create should be resolved for some projects and we expect a full resolution in the near future. At this time we do not have additional information on HA instances reporting an incorrect 'Unable to Failover' state. We will provide another status update by 11:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Pub/Sub admin operations failing

The issue with Cloud Pub/Sub admin operations failing has been resolved for all affected users as of 10:10 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

We are experiencing an issue with Cloud SQL V2 beginning at Thursday, 2017-06-29 08:45 US/Pacific: some instances may fail to create, and HA instances may report an incorrect 'Unable to Failover' state. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 11:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Cloud Pub/Sub admin operations failing

We are investigating an issue where Cloud Pub/Sub admin operations are failing. We will provide more information by 10:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Google Cloud Storage elevated error rates

Google engineers are continuing to restore the service. Error rates are continuing to decrease. We will provide another status update by 15:00 June 29 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing has been resolved for all affected projects on Tuesday, 2017-06-13 10:12 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18030 - Streaming API errors

The issue with BigQuery Streaming insert has been resolved for all affected users as of 19:17 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18030 - Streaming API errors

Our Engineering Team believes it has identified the root cause of the errors and has mitigated the issue as of 19:17 US/Pacific. We will provide another status update by 20:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18030 - Streaming API errors

We are investigating an issue with BigQuery Streaming insert. We will provide more information by 19:35 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Google Cloud Storage elevated error rates

Starting Tuesday 27 June 2017 at 07:30 US/Pacific, Google Cloud Storage began experiencing degraded availability for some objects in us-central1 buckets (Regional, Nearline, Coldline, Durable Reduced Availability) and US multi-region buckets. Between 08:00 and 18:00 US/Pacific the error rate was approximately 3.5%; error rates have since decreased to 0.5%. Errors are expected to be consistent for a given object. Customers do not need to make any changes at this time. Google engineers have identified the root cause and are working to restore the service. If your service or application is affected, we apologize. We will provide another status update by 05:00 June 29 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18029 - BigQuery Increased Error Rate

ISSUE SUMMARY For 10 minutes on Wednesday 14 June 2017, Google BigQuery experienced increased error rates for both streaming inserts and most API methods due to their dependency on metadata read operations. To our BigQuery customers whose businesses were impacted by this event, we sincerely apologize. We are taking immediate steps to improve BigQuery’s performance and availability. DETAILED DESCRIPTION OF IMPACT Starting at 10:43am US/Pacific, global error rates for BigQuery streaming inserts and API calls dependent upon metadata began to rapidly increase. The error rate for streaming inserts peaked at 100% by 10:49am. Within that same window, the error rate for metadata operations increased to a height of 80%. By 10:54am the error rates for both streaming inserts and metadata operations returned to normal operating levels. During the incident, affected BigQuery customers would have experienced a noticeable elevation in latency on all operations, as well as increased “Service Unavailable” and “Timeout” API call failures. While BigQuery streaming inserts and metadata operations were the most severely impacted, other APIs also exhibited elevated latencies and error rates, though to a much lesser degree. For API calls that returned status code 2xx, the operation completed with successful data ingestion and integrity. ROOT CAUSE On Wednesday 14 June 2017, BigQuery engineers completed the migration of BigQuery's metadata storage to an improved backend infrastructure. This effort was the culmination of work to incrementally migrate BigQuery read traffic over the course of two weeks. As the new backend infrastructure came online, there was one particular type of read traffic that hadn’t yet migrated to the new metadata storage. This caused a sudden spike of that read traffic to the new backend. The spike came when the new storage backend had to process a large volume of incoming requests as well as allocate resources to handle the increased load. Initially the backend was able to process requests with elevated latency, but all available resources were eventually exhausted, which led to API failures. Once the backend was able to complete the load redistribution, it began to free up resources to process existing requests and work through its backlog. BigQuery operations continued to experience elevated latency and errors for another five minutes as the large backlog of requests from the first five minutes of the incident was processed. REMEDIATION AND PREVENTION Our monitoring systems worked as expected and alerted us to the outage within 6 minutes of the error spike. By this time, the underlying root cause had already passed. Google engineers have created nine high-priority action items and three lower-priority action items as a result of this event to better prevent, detect, and mitigate a recurrence of a similar event. The most significant of these priorities is to modify the BigQuery service to successfully handle a similar root cause event. This will include adjusting capacity parameters to better handle backend failures and improving caching and retry logic. Each of the 12 action items created from this event has already been assigned to an engineer and is underway.

Google Cloud Platform
RESOLVED: Incident 17008 - Network issue in asia-northeast1

ISSUE SUMMARY On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, datacenters in the asia-northeast1 region experienced a loss of network connectivity for a total of 62 minutes. We apologize for the impact this issue had on our customers, and especially to those customers with deployments across multiple zones in the asia-northeast1 region. We recognize we failed to deliver the regional reliability that multiple zones are meant to achieve. We recognize the severity of this incident and have completed an extensive internal postmortem. We thoroughly understand the root causes and no datacenters are at risk of recurrence. We are working to add mechanisms to prevent and mitigate this class of problem in the future. We have prioritized this work, and in the coming weeks our engineering team will complete the action items we have generated from the postmortem. DETAILED DESCRIPTION OF IMPACT On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, network connectivity to and from Google Cloud services running in the asia-northeast1 region was unavailable for 62 minutes. This issue affected all Google Cloud Platform services in that region, including Compute Engine, App Engine, Cloud SQL, Cloud Datastore, and Cloud Storage. All external connectivity to the region was affected during this time frame, while internal connectivity within the region was not affected. In addition, inbound requests from external customers originating near Google’s Tokyo point of presence intended for Compute or Container Engine HTTP Load Balancing were lost for the initial 12 minutes of the outage. Separately, Internal Load Balancing within asia-northeast1 remained degraded until 10:23. ROOT CAUSE At the time of the incident, Google engineers were upgrading the network topology and capacity of the region; a configuration error caused the existing links to be decommissioned before the replacement links could provide connectivity, resulting in a loss of connectivity for the asia-northeast1 region. Although the replacement links were already commissioned and appeared to be ready to serve, a network-routing protocol misconfiguration meant that the routes through those links were not able to carry traffic. As Google's global network grows continuously, we make upgrades and updates reliably by using automation for each step and, where possible, applying changes to only one zone at any time. asia-northeast1 was the last region whose topology was unsupported by automation; manual work was required to align its topology with the rest of our regional deployments (which would, in turn, allow automation to function properly in the future). This manual change mistakenly did not follow the same per-zone restrictions as required by standard policy or automation, which meant the entire region was affected simultaneously. In addition, some customers with deployments across multiple regions that included asia-northeast1 experienced problems with HTTP Load Balancing due to a failure to detect that the backends were unhealthy. When a network partition occurs, HTTP Load Balancing normally detects this automatically within a few seconds and routes traffic to backends in other regions. In this instance, due to a performance feature being tested in this region at the time, the mechanism that usually detects network partitions did not trigger, and the load balancer continued to attempt to assign traffic until our on-call engineers responded.
Lastly, the Internal Load Balancing outage was exacerbated due to a software defined networking component which was stuck in a state where it was not able to provide network resolution for instances in the load balancing group. REMEDIATION AND PREVENTION Google engineers were paged by automated monitoring within one minute of the start of the outage, at 08:24 PDT. They began troubleshooting and declared an emergency incident 8 minutes later at 08:32. The issue was resolved when engineers reconnected the network path and reverted the configuration back to the last known working state at 09:22. Our monitoring systems worked as expected and alerted us to the outage promptly. The time-to-resolution for this incident was extended by the time taken to perform the rollback of the network change, as the rollback had to be performed manually. We are implementing a policy change that any manual work on live networks be constrained to a single zone. This policy will be enforced automatically by our change management software when changes are planned and scheduled. In addition, we are building automation to make these types of changes in future, and to ensure the system can be safely rolled back to a previous known-good configuration at any time during the procedure. The fix for the HTTP Load Balancing performance feature that caused it to incorrectly believe zones within asia-northeast1 were healthy will be rolled out shortly. SUPPORT COMMUNICATIONS During the incident, customers who had originally contacted Google Cloud Support in Japanese did not receive periodic updates from Google as the event unfolded. This was due to a software defect in the support tooling — unrelated to the incident described earlier. We have already fixed the software defect, so all customers who contact support will receive incident updates. We apologize for the communications gap to our Japanese-language customers. RELIABILITY SUMMARY One of our biggest pushes in GCP reliability at Google is a focus on careful isolation of zones from each other. As we encourage users to build reliable services using multiple zones, we also treat zones separately in our production practices, and we enforce this isolation with software and policy. Since we missed this mark—and affecting all zones in a region is an especially serious outage—we apologize. We intend for this incident report to accurately summarize the detailed internal post-mortem that includes final assessment of impact, root cause, and steps we are taking to prevent an outage of this form occurring again. We hope that this incident report demonstrates the work we do to learn from our mistakes to deliver on this commitment. We will do better. Sincerely, Benjamin Lutch | VP Site Reliability Engineering | Google

Google Cloud Platform
RESOLVED: Incident 17006 - Network issue in asia-northeast1

ISSUE SUMMARY On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, datacenters in the asia-northeast1 region experienced a loss of network connectivity for a total of 62 minutes. We apologize for the impact this issue had on our customers, and especially to those customers with deployments across multiple zones in the asia-northeast1 region. We recognize we failed to deliver the regional reliability that multiple zones are meant to achieve. We recognize the severity of this incident and have completed an extensive internal postmortem. We thoroughly understand the root causes and no datacenters are at risk of recurrence. We are working to add mechanisms to prevent and mitigate this class of problem in the future. We have prioritized this work, and in the coming weeks our engineering team will complete the action items we have generated from the postmortem. DETAILED DESCRIPTION OF IMPACT On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, network connectivity to and from Google Cloud services running in the asia-northeast1 region was unavailable for 62 minutes. This issue affected all Google Cloud Platform services in that region, including Compute Engine, App Engine, Cloud SQL, Cloud Datastore, and Cloud Storage. All external connectivity to the region was affected during this time frame, while internal connectivity within the region was not affected. In addition, inbound requests from external customers originating near Google’s Tokyo point of presence intended for Compute or Container Engine HTTP Load Balancing were lost for the initial 12 minutes of the outage. Separately, Internal Load Balancing within asia-northeast1 remained degraded until 10:23. ROOT CAUSE At the time of the incident, Google engineers were upgrading the network topology and capacity of the region; a configuration error caused the existing links to be decommissioned before the replacement links could provide connectivity, resulting in a loss of connectivity for the asia-northeast1 region. Although the replacement links were already commissioned and appeared to be ready to serve, a network-routing protocol misconfiguration meant that the routes through those links were not able to carry traffic. As Google's global network grows continuously, we make upgrades and updates reliably by using automation for each step and, where possible, applying changes to only one zone at any time. asia-northeast1 was the last region whose topology was unsupported by automation; manual work was required to align its topology with the rest of our regional deployments (which would, in turn, allow automation to function properly in the future). This manual change mistakenly did not follow the same per-zone restrictions as required by standard policy or automation, which meant the entire region was affected simultaneously. In addition, some customers with deployments across multiple regions that included asia-northeast1 experienced problems with HTTP Load Balancing due to a failure to detect that the backends were unhealthy. When a network partition occurs, HTTP Load Balancing normally detects this automatically within a few seconds and routes traffic to backends in other regions. In this instance, due to a performance feature being tested in this region at the time, the mechanism that usually detects network partitions did not trigger, and the load balancer continued to attempt to assign traffic until our on-call engineers responded.
Lastly, the Internal Load Balancing outage was exacerbated due to a software defined networking component which was stuck in a state where it was not able to provide network resolution for instances in the load balancing group. REMEDIATION AND PREVENTION Google engineers were paged by automated monitoring within one minute of the start of the outage, at 08:24 PDT. They began troubleshooting and declared an emergency incident 8 minutes later at 08:32. The issue was resolved when engineers reconnected the network path and reverted the configuration back to the last known working state at 09:22. Our monitoring systems worked as expected and alerted us to the outage promptly. The time-to-resolution for this incident was extended by the time taken to perform the rollback of the network change, as the rollback had to be performed manually. We are implementing a policy change that any manual work on live networks be constrained to a single zone. This policy will be enforced automatically by our change management software when changes are planned and scheduled. In addition, we are building automation to make these types of changes in future, and to ensure the system can be safely rolled back to a previous known-good configuration at any time during the procedure. The fix for the HTTP Load Balancing performance feature that caused it to incorrectly believe zones within asia-northeast1 were healthy will be rolled out shortly. SUPPORT COMMUNICATIONS During the incident, customers who had originally contacted Google Cloud Support in Japanese did not receive periodic updates from Google as the event unfolded. This was due to a software defect in the support tooling — unrelated to the incident described earlier. We have already fixed the software defect, so all customers who contact support will receive incident updates. We apologize for the communications gap to our Japanese-language customers. RELIABILITY SUMMARY One of our biggest pushes in GCP reliability at Google is a focus on careful isolation of zones from each other. As we encourage users to build reliable services using multiple zones, we also treat zones separately in our production practices, and we enforce this isolation with software and policy. Since we missed this mark—and affecting all zones in a region is an especially serious outage—we apologize. We intend for this incident report to accurately summarize the detailed internal post-mortem that includes final assessment of impact, root cause, and steps we are taking to prevent an outage of this form occurring again. We hope that this incident report demonstrates the work we do to learn from our mistakes to deliver on this commitment. We will do better. Sincerely, Benjamin Lutch | VP Site Reliability Engineering | Google

Google Cloud Platform
UPDATE: Incident 18029 - BigQuery Increased Error Rate

The BigQuery service was experiencing a 78% error rate on streaming operations and up to 27% error rates on other operations from 10:43 to 11:03 US/Pacific time. This issue has been resolved for all affected projects as of 10:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

The Google Cloud Console issue that was preventing users from changing their language preferences has been resolved as of 06:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

The Google Cloud Console issue that is preventing users from changing their language preferences is ongoing. Our Engineering Team is working on it. We will provide another status update by 06:00 US/Pacific with current details. A known workaround is to change the browser language.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

We are investigating an issue with the Cloud Console. Users are unable to change their language preferences. We will provide more information by 04:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - High Latency in App Engine

ISSUE SUMMARY On Wednesday 7 June 2017, Google App Engine experienced highly elevated serving latency and timeouts for a duration of 138 minutes. If your service or application was affected by the increase in latency, we sincerely apologize – this is not the level of reliability and performance we expect of our platform, and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT On Wednesday 7 June 2017, from 13:34 PDT to 15:52 PDT, 7.7% of active applications on the Google App Engine service experienced severely elevated latency; requests that typically take under 500ms to serve were taking many minutes. This elevated latency would have resulted either in users seeing additional latency when waiting for responses from the affected applications, or in 500 errors if the application handlers timed out. The individual application logs would have shown this increased latency or increases in “Request was aborted after waiting too long to attempt to service your request” error messages. ROOT CAUSE The incident was triggered by an increase in memory usage across all App Engine appservers in a datacenter in us-central. An App Engine appserver is responsible for creating instances to service requests for App Engine applications. When its memory usage increases to unsustainable levels, it will stop some of its current instances, so that they can be rescheduled on other appservers in order to balance out the memory requirements across the datacenter. This transfer of an App Engine instance between appservers consumes CPU resources, a signal used by the master scheduler of the datacenter to detect when it must further rebalance traffic across more appservers (such as when traffic to the datacenter increases and more App Engine instances are required). Normally, these memory management techniques are transparent to customers, but in isolated cases they can be exacerbated by large amounts of additional traffic being routed to the datacenter, which requires more instances to service user requests. The increased load and memory requirement from scheduling new instances, combined with rescheduling instances from appservers with high memory usage, resulted in most appservers being considered “busy” by the master scheduler. User requests needed to wait for an available instance to either be transferred or created before they were able to be serviced, which resulted in the increased latency seen at the application level. REMEDIATION AND PREVENTION Latencies began to increase at 13:34 PDT. Google engineers were alerted at 13:45 PDT and identified a subset of traffic that was causing the increase in memory usage. At 14:08, they were able to limit this subset of traffic to an isolated partition of the datacenter to ease the memory pressure on the remaining appservers. Latency for new requests started to improve as soon as this traffic was isolated; however, tail latency was still elevated due to the large backlog of requests that had accumulated since the incident started. This backlog was eventually cleared by 15:52 PDT. To prevent further recurrence, traffic to the affected datacenter was rebalanced with another datacenter. To prevent future recurrence of this issue, Google engineers will be re-evaluating the resource distribution in the us-central datacenters where App Engine instances are hosted. 
Additionally, engineers will be developing stronger alerting thresholds based on memory pressure signals so that traffic can be redirected before latency increases. And finally, engineers will be evaluating changes to the scheduling strategy used by the master scheduler responsible for scheduling appserver work to prevent this situation in the future.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for the majority of projects and we expect a full resolution in the next 12 hours. We will provide another status update by 14:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 2am PST with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for some projects and we expect a full resolution in the near future. We will provide another status update by 11pm PST with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are still investigating the issue with Cloud Logging exports to BigQuery failing. We will provide more information by 9pm PST. Currently, we are also working on restoring Cloud Logging exports to BigQuery.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are still investigating the issue with Cloud Logging exports to BigQuery failing. We will provide more information by 7pm PST. Currently, we are also working on restoring Cloud Logging exports to BigQuery.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are working on restoring Cloud Logging exports to BigQuery. We will provide further updates at 6pm PT.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

Cloud Logging exports to BigQuery failed from 13:19 to approximately 14:30 with loss of logging data. We have stopped the exports while we work on fixing the issue, so BigQuery will not reflect the latest logs. This incident only affects robot accounts using HTTP requests. We are working hard to restore service, and we will provide another update in one hour (by 5pm PT).

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are investigating an issue with Cloud Logging exports to BigQuery failing. We will provide more information by 5pm PT

Google Cloud Platform
RESOLVED: Incident 17008 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 has been restored for all affected users as of 10:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 17006 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 has been restored for all affected users as of 10:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 should be restored for all affected users and we expect a full resolution in the near future. We will provide another status update by 09:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 should be restored for all affected users and we expect a full resolution in the near future. We will provide another status update by 09:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

Google Cloud Platform services in region asia-northeast1 are experiencing connectivity issues. We will provide another status update by 9:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

Google Cloud Platform services in region asia-northeast1 are experiencing connectivity issues. We will provide another status update by 9:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

We are investigating an issue with network connectivity in asia-northeast1. We will provide more information by 09:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

We are investigating an issue with network connectivity in asia-northeast1. We will provide more information by 09:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - High Latency in App Engine

The issue with Google App Engine elevated error rates has been resolved for all affected projects as of 15:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - High Latency in App Engine

The issue with Google App Engine elevated error rates should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - High Latency in App Engine

We have identified an issue with App Engine that is causing increased latency to a portion of applications in the US Central region. Mitigation is under way. We will provide more information about the issue by 15:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix has been fully deployed. We confirmed that the issue has been fixed.

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix is still being deployed. We will provide another status update by 2017-05-26 19:00 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix is currently being deployed. We will provide another status update by 2017-05-26 16:00 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The root cause has been identified. The fix is currently being deployed. We will provide another status update by 2017-05-26 14:30 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

We are currently investigating a problem that is causing App Engine apps to experience an infinite redirect loop when users log in. We will provide another status update by 2017-05-26 13:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

The issue with Cloud SQL First Generation automated backups should be resolved for all affected instances as of 12:52 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

We are still actively working on this issue. We are aiming to make the final fix available in production by the end of the day today.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

Daily backups continue to be taken and we expect the final fix to be available tomorrow. We will provide another update on Wednesday, May 24 at 10:00 US/Pacific, as originally planned.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

We are still actively working on this issue. We are aiming to make the fix available in production by the end of the day today. We will provide another update by the end of the day if anything changes.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

Daily backups are being taken for all Cloud SQL First Generation instances. For some instances, backups are being taken outside of defined backup windows. A fix is being tested and will be applied to First Generation instances pending positive test results. We will provide the next update on Wednesday, May 24 at 10:00 US/Pacific, or sooner if anything changes.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

The issue with automatic backups for Cloud SQL First Generation is mitigated by forcing the backups. We will continue with this mitigation until Monday US/Pacific, when a permanent fix will be rolled out. We will provide the next update on Monday US/Pacific, or sooner if anything changes.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

The Cloud SQL service is experiencing errors on automatic backups for 7% of Cloud SQL first generation instances. We’re forcing the backup for affected instances as short-term mitigation. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

We are investigating an issue with Cloud SQL v1 automated backups. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Translate API elevated latency / errors

The issue with Translation Service and other API availability has been resolved for all affected users as of 19:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. This will be the final update on this incident.

Google Cloud Platform
UPDATE: Incident 17001 - Translate API elevated latency / errors

Engineering is continuing to investigate the API service issues impacting Translation Service API availability, looking into potential causes and mitigation strategies. Certain other APIs, such as Speech and Prediction, may also be affected. Next update by 20:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - GKE IP rotation procedure does not include FW rule change

Our Engineering Team has identified a fix for the issue with the GKE IP-rotation feature. We expect the rollout of the fix to begin next Tuesday, 2017-05-16 US/Pacific, completing on Friday, 2017-05-19.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

The issue with Google App Engine deployments and Memcache availability should have been resolved for all affected projects as of 18:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

The issue with Google App Engine deployments and Memcache availability is mitigated. We will provide an update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are continuing to investigate the issue with an underlying component that affects Google App Engine deployments and Memcache availability. The engineering team has tried several unsuccessful remediations and is continuing to investigate potential root causes and fixes. We will provide another update at 17:30 PDT.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are still investigating the issue with an underlying component that affects both Google App Engine deployments and Memcache availability. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are currently investigating an issue with an underlying component that affects both Google App Engine deployments and Memcache availability. Deployments will fail intermittently and memcache is returning intermittent errors for a small number of applications. For affected deployments, please try re-deploying while we continue to investigate this issue. For affected memcache users, retries in your application code should allow you to access your memcache intermittently while the underlying issue is being addressed. The underlying component's engineering team is working to address the issue. We will provide our next update at 15:30 PDT.
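
As a rough illustration of the retry guidance above, the sketch below wraps App Engine memcache calls in a small exponential-backoff loop. It assumes the App Engine Python runtime's google.appengine.api.memcache module; the loader fallback is a hypothetical stand-in for whatever authoritative read (for example, a Datastore fetch) the application already performs on a cache miss.

```python
# Hypothetical sketch only -- not an official workaround. Assumes the App Engine
# Python runtime's memcache API; adjust attempts and delays to suit your application.
import time
from google.appengine.api import memcache

def set_with_retry(key, value, attempts=3, delay=0.1):
    """memcache.set() returns False on failure, so retry a few times with backoff."""
    for i in range(attempts):
        if memcache.set(key, value):
            return True
        time.sleep(delay * (2 ** i))  # simple exponential backoff
    return False

def get_with_fallback(key, loader, attempts=3, delay=0.1):
    """Try memcache a few times; if it stays unavailable (or the key is missing),
    fall back to the authoritative data source so requests keep being served."""
    for i in range(attempts):
        value = memcache.get(key)
        if value is not None:
            return value
        time.sleep(delay * (2 ** i))
    return loader()  # e.g. a Datastore read that recomputes the cached value
```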

Google Cloud Platform
UPDATE: Incident 17002 - GKE IP rotation procedure does not include FW rule change

We are experiencing an issue with the GKE IP rotation feature. Kubernetes features that rely on the proxy (including the kubectl exec and logs commands, as well as exporting cluster metrics into Stackdriver) are broken by the IP rotation feature. This only affects users who have disabled default ssh access on their nodes. There is a manual fix described [here](https://cloud.google.com/container-engine/docs/ip-rotation#known_issues).

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The issue with BigQuery jobs remaining in a pending state for too long has been resolved for all affected projects as of 03:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The issue with BigQuery jobs remaining in a pending state for too long should be resolved for all new jobs. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers are still working on a fix for jobs remaining in a pending state for too long. We will provide another update by 03:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers have identified the root cause of jobs remaining in a pending state for too long and are still working on a fix. We will provide another update by 02:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers have identified the root cause of jobs remaining in a pending state for too long and are applying a fix. We will provide another update by 01:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers continue to investigate jobs remaining in a pending state for too long. We will provide another update by 00:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The BigQuery service is experiencing jobs staying in a pending state for longer than usual, and our engineering team is working on it. We will provide another status update by 23:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

We are investigating an issue with BigQuery import jobs pending. We will provide more information by 23:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Node pool creation failing in multiple zones

We are investigating an issue with Google Container Engine (GKE) that is affecting node-pool creations in the following zones: us-east1-d, asia-northeast1-c, europe-west1-c, us-central1-b, us-west1-a, asia-east1-a, asia-northeast1-a, asia-southeast1-a, us-east4-b, us-central1-f, europe-west1-b, asia-east1-c, us-east1-c, us-west1-b, asia-northeast1-b, asia-southeast1-b, and us-east4-c. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17007 - 502 errors for HTTP(S) Load Balancers

ISSUE SUMMARY On Friday 5 April 2017, requests to the Google Cloud HTTP(S) Load Balancer experienced a 25% error rate for a duration of 22 minutes. We apologize for this incident. We understand that the Load Balancer needs to be very reliable for you to offer a high quality service to your customers. We have taken and will be taking various measures to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT On Friday 5 April 2017 from 01:13 to 01:35 PDT, requests to the Google Cloud HTTP(S) Load Balancer experienced a 25% error rate for a duration of 22 minutes. Clients received 502 errors for failed requests. Some HTTP(S) Load Balancers that were recently modified experienced error rates of 100%. Google paused all configuration changes to the HTTP(S) Load Balancer for three hours and 41 minutes after the incident, until our engineers had understood the root cause. This caused deployments of App Engine Flexible apps to fail during that period. ROOT CAUSE A bug in the HTTP(S) Load Balancer configuration update process caused it to revert to a configuration that was substantially out of date. The configuration update process is controlled by a master server. In this case, one of the replicas of the master servers lost access to Google's distributed file system and was unable to read recent configuration files. Mastership then passed to the server that could not access Google's distributed file system. When mastership changes, the new master begins the next configuration push as normal by testing it on a subset of HTTP(S) Load Balancers. If this test succeeds, the configuration is pushed globally to all HTTP(S) Load Balancers. If the test fails (as it did in this case), the new master will revert all HTTP(S) Load Balancers to the last "known good" configuration. The combination of a mastership change, lack of access to more recent updates, and the initial test failure for the latest config caused the HTTP(S) Load Balancers to revert to the latest configuration that the master could read, which was substantially out-of-date. In addition, the update with the out-of-date configuration triggered a garbage collection process on the Google Frontend servers to free up memory used by the deleted configurations. The high number of deleted configurations caused the Google Frontend servers to spend a large proportion of CPU cycles on garbage collection, which led to failed health checks and the eventual restart of the affected Google Frontend servers. Any client requests served by a restarting server received 502 errors. REMEDIATION AND PREVENTION Google engineers were paged at 01:22 PDT. They switched the configuration update process to use a different master server at 01:34, which mitigated the issue for most services within one minute. Our engineers then paused the configuration updates to the HTTP(S) Load Balancer until 05:16 while the root cause was confirmed. To prevent incidents of this type in the future, we are taking the following actions: * Master servers will be configured to never push HTTP(S) Load Balancer configurations that are more than a few hours old. * Google Frontend servers will reject loading a configuration file that is more than a few hours old. * Improve testing for new HTTP(S) Load Balancer configurations so that out-of-date configurations are flagged before being pushed to production. * Fix the issue that caused the master server to fail when reading files from Google's distributed file system. 
* Fix the issue that caused health check failures on Google Frontends during heavy garbage collection. Once again, we apologize for the impact that this incident had on your service.
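
The first two prevention items describe a freshness check on configurations. Purely as an illustration of that idea (and not Google's actual implementation), a guard of this kind might look like the following sketch, where the four-hour threshold is an arbitrary placeholder.

```python
# Illustrative sketch of a configuration staleness guard; the threshold and the
# notion of a config timestamp are assumptions, not details from the incident report.
import time

MAX_CONFIG_AGE_SECONDS = 4 * 60 * 60  # "more than a few hours old" -> reject

def should_load_config(config_timestamp, now=None):
    """Return True only if the configuration is recent enough to be safe to push."""
    now = time.time() if now is None else now
    return (now - config_timestamp) <= MAX_CONFIG_AGE_SECONDS
```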

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue has been resolved for all affected users as of 00:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in us-east1/asia-northeast1 regions has been resolved. The issue with deployments in us-east1 is mitigated. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will continue to monitor and will provide another status update by 2017-04-07 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in us-east1/asia-northeast1 regions has been partially resolved. The issue with deployments in us-east1 is partially mitigated. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 2017-04-07 00:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in US-east1/Asia-northeast1 regions has been partially resolved. We are investigating related issues impacting deployments in US-east1. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

We are still investigating reports of an issue with App Engine Taskqueue in US-east1/Asia-northeast1 regions. We will provide another status update by 2017-04-06 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

We are investigating an issue impacting Google App Engine Task Queue in US-east1/Asia-northeast1. We will provide more information by 09:00pm US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18027 - BigQuery Streaming API returned a 500 response from 00:04 to 00:38 US/Pacific.

The issue with BigQuery Streaming API returning 500 response code has been resolved for all affected users as of 00:38 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17001 - Issues with Cloud Pub/Sub

ISSUE SUMMARY On Tuesday 21 March 2017, new connections to Cloud Pub/Sub experienced high latency leading to timeouts and elevated error rates for a duration of 95 minutes. Connections established before the start of this issue were not affected. If your service or application was affected, we apologize – this is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT On Tuesday 21 March 2017 from 21:08 to 22:43 US/Pacific, Cloud Pub/Sub publish, pull and ack methods experienced elevated latency, leading to timeouts. The average error rate for the issue duration was 0.66%. The highest error rate occurred at 21:43, when the Pub/Sub publish error rate peaked at 4.1%, the ack error rate reached 5.7% and the pull error rate was 0.02%. ROOT CAUSE The issue was caused by the rollout of a storage system used by the Pub/Sub service. As part of this rollout, some servers were taken out of service, and as planned, their load was redirected to remaining servers. However, an unexpected imbalance in key distribution led some of the remaining servers to become overloaded. The Pub/Sub service was then unable to retrieve the status required to route new connections for the affected methods. Additionally, some Pub/Sub servers didn’t recover promptly after the storage system had been stabilized and required individual restarts to fully recover. REMEDIATION AND PREVENTION Google engineers were alerted by automated monitoring seven minutes after initial impact. At 21:24, they had correlated the issue with the storage system rollout and stopped it from proceeding further. At 21:41, engineers restarted some of the storage servers, which improved systemic availability. Observed latencies for Pub/Sub were still elevated, so at 21:54, engineers commenced restarting other Pub/Sub servers, restoring service to 90% of users. At 22:29 a final batch was restarted, restoring the Pub/Sub service to all. To prevent a recurrence of the issue, Google engineers are creating safeguards to limit the number of keys managed by each server. They are also improving the availability of Pub/Sub servers to respond to requests even when in an unhealthy state. Finally they are deploying enhancements to the Pub/Sub service to operate when the storage system is unavailable.

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud Pub/Sub

The issue with Pub/Sub high latency has been resolved for all affected projects as of 22:02 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud Pub/Sub

We are investigating an issue with Pub/Sub. We will provide more information by 22:40 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18026 - BigQuery streaming inserts issue

ISSUE SUMMARY On Monday 13 March 2017, the BigQuery streaming API experienced a 91% error rate in the US and a 63% error rate in the EU for a duration of 30 minutes. We apologize for the impact of this issue on our customers, and the widespread nature of the issue in particular. We have completed a postmortem of the incident and are making changes to mitigate and prevent recurrences. DETAILED DESCRIPTION OF IMPACT On Monday 13 March 2017 from 10:22 to 10:52 PDT, 91% of streaming API requests to US BigQuery datasets and 63% of streaming API requests to EU BigQuery datasets failed with error code 503 and an HTML message indicating "We're sorry... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now." All non-streaming API requests, including DDL requests and query, load, extract, and copy jobs, were unaffected. ROOT CAUSE The trigger for this incident was a sudden increase in log entries being streamed from Stackdriver Logging to BigQuery by logs export. The denial of service (DoS) protection used by BigQuery responded to this by rejecting excess streaming API traffic. However, the configuration of the DoS protection did not adequately segregate traffic streams, resulting in normal sources of BigQuery streaming API requests being rejected. REMEDIATION AND PREVENTION Google engineers initially mitigated the issue by blocking the source of unexpected load. This prevented the overload and allowed all other traffic to resume normally. Engineers fully resolved the issue by identifying and reverting the change that triggered the increase in log entries and clearing the backlog of log entries that had grown. To prevent future occurrences, BigQuery engineers are updating the configuration to increase isolation between different traffic sources. Tests are also being added to verify behavior under several new load scenarios.
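
Applications that stream into BigQuery typically guard against transient windows of 503s like this with client-side retries and exponential backoff. The sketch below is a hypothetical illustration using the google-cloud-bigquery Python client and is not drawn from the incident report; the table ID and row payload are placeholders.

```python
# Hypothetical sketch: retrying BigQuery streaming inserts with exponential backoff.
# Assumes the google-cloud-bigquery client library; table ID and rows are placeholders.
import time
from google.cloud import bigquery
from google.api_core import exceptions

def stream_rows_with_backoff(client, table_id, rows, attempts=5, base_delay=1.0):
    for i in range(attempts):
        try:
            errors = client.insert_rows_json(table_id, rows)
            if not errors:
                return True  # all rows accepted
        except (exceptions.ServiceUnavailable, exceptions.InternalServerError):
            pass  # transient server-side error; back off and retry
        time.sleep(base_delay * (2 ** i))
    return False

client = bigquery.Client()
stream_rows_with_backoff(
    client,
    "my-project.my_dataset.my_table",                       # placeholder table ID
    [{"event": "example", "ts": "2017-03-13T10:30:00Z"}],   # placeholder row
)
```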

Google Cloud Platform
UPDATE: Incident 17006 - GCE networking in us-central1 zones is experiencing disruption

GCP Services' internet connectivity has been restored as of 2:12 pm Pacific Time. We apologize for the impact that this issue had on your application. We are still investigating the root cause of the issue, and will take necessary actions to prevent a recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - GCE networking in us-central1 zones is experiencing disruption

We are experiencing a networking issue with Google Compute Engine instances in us-central1 zones beginning at Wednesday, 2017-03-15 01:00 PM US/Pacific.

Google Cloud Platform
UPDATE: Incident 18026 - BigQuery streaming inserts issue

The issue with BigQuery streaming inserts has been resolved for all affected projects as of 11:06 AM US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18026 - BigQuery streaming inserts issue

We are investigating an issue with BigQuery streaming inserts. We will provide more information by 11:45 AM US/Pacific.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

During the period 12:05 - 13:57 PDT, GCS requests originating in Europe experienced a 17% error rate. GCS requests in other regions were unaffected.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

The issue with elevated latency and error rates for GCS in Europe should be resolved as of 13:56 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

We are continuing to investigate elevated errors and latency for GCS in Europe. The reliability team has performed several mitigation steps and error rates and latency are returning to normal levels. We will continue to monitor for recurrence.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

We are currently investigating elevated latency and error rates for Google Cloud Storage traffic transiting through Europe.

Google Cloud Platform
RESOLVED: Incident 17003 - GCP accounts with credits are being charged without credits being applied

We have mitigated the issue as of 2017-03-10 09:30 PST.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

Some Cloud Console users may notice that Dataflow job logs are not displayed correctly. This is a known issue with the user interface that affects up to 35% of jobs. Google engineers are preparing a fix. Pipeline executions are not impacted and Dataflow services are operating as normal.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are still investigating the issue with Dataflow Job Log in Cloud Console. Current data indicates that between 30% and 35% of jobs are affected by this issue. Pipeline execution is not impacted. The root cause of the issue is known and the Dataflow Team is preparing the fix for production.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

The root cause of the issue is known and the Dataflow Team is preparing the fix for production. We will provide an update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are experiencing a visibility issue with Dataflow Job Log in Cloud Console beginning at Thursday, 2017-03-09 11:34 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 13:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are investigating an issue with Dataflow Job Log visibility in Cloud Console. We will provide more information by 12:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c, us-east1-c, asia-northeast1-a, asia-east1-b, us-central1-f may be failing.

The issue should have been mitigated for all zones except us-central1-c. Creating new Cloud SQL 2nd generation instances in us-central1-c with SSD Persistent Disks still has an issue. The workaround is to create your instances in a different zone or to use Standard Persistent Disks in us-central1-c.
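
A minimal sketch of the stated workaround, driving gcloud from Python, is shown below; the instance name is a placeholder, and the --zone and --storage-type flags are assumptions about the installed gcloud version (older releases used different flag names).

    # Sketch of the workaround: create the instance in another zone, or keep
    # us-central1-c but use a standard (HDD) persistent disk.
    # Flag names are assumptions about the gcloud version in use.
    import subprocess

    subprocess.run(
        [
            "gcloud", "sql", "instances", "create", "example-instance",
            "--database-version=MYSQL_5_7",
            "--tier=db-n1-standard-1",
            "--zone=us-central1-f",       # any unaffected zone
            # "--storage-type=HDD",       # alternative: stay in us-central1-c on HDD
        ],
        check=True,
    )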

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c and asia-east1-c are failing.

Correction: Attempts to create new Cloud SQL instances in zones us-central1-c, us-east1-c, asia-northeast1-a, asia-east1-b, us-central1-f may be intermittently failing. New instances affected in these zones will show a status of "Failed to create". Users will not incur charges for instances that failed to create; these instances can be safely deleted.

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c and asia-east1-c are failing.

Attempts to create new Cloud SQL instances in zones us-central1-c and asia-east1-c are failing. New instances created in these zones will show a status of "Failed to create". Users will not incur charges for instances that failed to create; these instances can be safely deleted.

Google Cloud Platform
RESOLVED: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We confirm that the issue with GCE network connectivity to us-west1 should have been resolved for all affected endpoints as of 03:27 US/Pacific and the situation is stable. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

GCE network connectivity to us-west1 remains stable and we expect a final resolution in the near future. We will provide another status update by 05:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

Network connectivity to the Google Compute Engine us-west1 region has been restored but the issue remains under investigation. We will provide another status update by 05:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We are experiencing an issue with GCE network connectivity to the us-west1 region beginning at Tuesday, 2017-02-28 02:57 US/Pacific. We will provide a further update by 04:45.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We are investigating an issue with network connectivity to the us-west1 region. We will provide more information by 04:15 US/Pacific time.

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Datastore Internal errors in the European region

ISSUE SUMMARY On Tuesday 14 February 2017, some applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced 2%-4% error rates and elevated latency for three periods with an aggregate duration of three hours and 36 minutes. We apologize for the disruption this caused to your service. We have already taken several measures to prevent incidents of this type from recurring and to improve the reliability of these services. DETAILED DESCRIPTION OF IMPACT On Tuesday 14 February 2017 between 00:15 and 01:18 PST, 54% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates and latency. The average error rate for affected applications was 4%. Between 08:35 and 08:48 PST, 50% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates. The average error rate for affected applications was 4%. Between 12:20 and 14:40 PST, 32% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates and latency. The average error rate for affected applications was 2%. Errors received by affected applications during all three periods were either "internal error" or "timeout". ROOT CAUSE The incident was caused by a latent bug in a service used by both Cloud Datastore and the App Engine Search API that was triggered by high load on the service. Starting at 00:15 PST, several applications changed their usage patterns in one zone in Western Europe and began running more complex queries, which caused higher load on the service. REMEDIATION AND PREVENTION Google's monitoring systems paged our engineers at 00:35 PST to alert us to elevated errors in a single zone. Our engineers followed normal practice by redirecting traffic to other zones to reduce the impact on customers while debugging the underlying issue. At 01:15, we redirected traffic to other zones in Western Europe, which resolved the incident three minutes later. At 08:35 we redirected traffic back to the zone that previously had errors. We found that the error rate in that zone was still high and so redirected traffic back to other zones at 08:48. At 12:45, our monitoring systems detected elevated errors in other zones in Western Europe. At 14:06 Google engineers added capacity to the service with elevated errors in the affected zones. This removed the trigger for the incident. We have now identified and fixed the latent bug that caused errors when the system was at high load. We expect to roll out this fix over the next few days. Our capacity planning team has generated forecasts for the peak load generated by Cloud Datastore and the App Engine Search API and determined that we have sufficient capacity currently provisioned to handle those peak loads. We will be making several changes to our monitoring systems to improve our ability to quickly detect and diagnose errors of this type. Once again, we apologize for the impact of this incident on your application.
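
Since the errors surfaced to applications here were "internal error" or "timeout", a common client-side guard is a bounded retry with backoff. The sketch below, using the google-cloud-datastore Python library, is an illustrative assumption rather than guidance from this report; the entity kind, key id and retry limits are placeholders.

    # Sketch: bounded retry with exponential backoff around a Datastore read,
    # for the "internal error" / "timeout" failures described above.
    import time

    from google.api_core.exceptions import DeadlineExceeded, InternalServerError
    from google.cloud import datastore

    client = datastore.Client()
    key = client.key("ExampleKind", 1234)

    entity = None
    for attempt in range(4):
        try:
            entity = client.get(key)
            break
        except (InternalServerError, DeadlineExceeded):
            # Transient server-side failure: back off before the next attempt.
            time.sleep(2 ** attempt)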

Google Cloud Platform
UPDATE: Incident 17004 - Persistent Disk Does Not Produce Differential Snapshots In Some Cases

Since January 23rd, a small number of Persistent Disk snapshots were created as full snapshots rather than differential. While this results in overbilling, these snapshots still correctly back up your data and are usable for restores. We are working to resolve this issue and also to correct any overbilling that occurred. No further action is required from your side.
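
For reference, a snapshot created during this window can still be restored in the usual way; the sketch below creates a new disk from a snapshot via gcloud, with the disk name, snapshot name and zone as placeholders.

    # Sketch: restoring a Persistent Disk snapshot by creating a new disk from it.
    # Disk name, snapshot name and zone are placeholders.
    import subprocess

    subprocess.run(
        [
            "gcloud", "compute", "disks", "create", "restored-disk",
            "--source-snapshot=example-snapshot",
            "--zone=us-central1-a",
        ],
        check=True,
    )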

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Datastore Internal errors in the European region

The issue with Cloud Datastore serving elevated internal errors in the European region should have been resolved for all affected projects as of 14:34 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17002 - Cloud Datastore Internal errors in the European region

We are investigating an issue with Cloud Datastore in the European region. We will provide more information by 15:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17003 - New VMs are experiencing connectivity issues

ISSUE SUMMARY On Monday 30 January 2017, newly created Google Compute Engine instances, Cloud VPNs and network load balancers were unavailable for a duration of 2 hours 8 minutes. We understand how important the flexibility to launch new resources and scale up GCE is for our users and apologize for this incident. In particular, we apologize for the wide scope of this issue and are taking steps to address the scope and duration of this incident as well as the root cause itself. DETAILED DESCRIPTION OF IMPACT Any GCE instances, Cloud VPN tunnels or GCE network load balancers created or live migrated on Monday 30 January 2017 between 10:36 and 12:42 PST were unavailable via their public IP addresses until the end of that period. This also prevented outbound traffic from affected instances and load balancing health checks from succeeding. Previously created VPN tunnels, load balancers and instances that did not experience a live migration were unaffected. ROOT CAUSE All inbound networking for GCE instances, load balancers and VPN tunnels enters via shared layer 2 load balancers. These load balancers are configured with changes to IP addresses for these resources, then automatically tested in a canary deployment, before changes are globally propagated. The issue was triggered by a large set of updates which were applied to a rarely used load balancing configuration. The application of updates to this configuration exposed an inefficient code path which resulted in the canary timing out. From this point, all changes to public addressing were queued behind these changes, which could not proceed past the testing phase. REMEDIATION AND PREVENTION To resolve the issue, Google engineers restarted the jobs responsible for programming changes to the network load balancers. After restarting, the problematic changes were processed in a batch, which no longer reached the inefficient code path. From this point updates could be processed and normal traffic resumed. This fix was applied zone by zone between 11:36 and 12:42. To prevent this issue from recurring in the short term, Google engineers are increasing the canary timeout so that updates exercising the inefficient code path merely slow network changes rather than completely stop them. As a long term resolution, the inefficient code path is being improved, and new tests are being written to test behavior on a wider range of configurations. Google engineers had already begun work to replace global propagation of address configuration with decentralized routing. This work is being accelerated as it will prevent issues with this layer having global impact. Google engineers are also creating additional metrics and alerting that will allow the nature of this issue to be identified sooner, which will lead to faster resolution.
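
To make the failure mode concrete, here is a generic, illustrative sketch of a canary-gated rollout with a verification deadline; it is not Google's load-balancer pipeline, and the push/healthy callables, stage list and 300-second deadline are all hypothetical.

    # Illustrative sketch only: a canary-gated, staged configuration rollout
    # with a verification deadline. Not Google's actual pipeline.
    import time
    from typing import Callable, Dict, List

    def staged_rollout(
        config: Dict[str, str],
        stages: List[str],
        push: Callable[[str, Dict[str, str]], None],
        healthy: Callable[[str], bool],
        canary_deadline_s: float = 300.0,
    ) -> None:
        """Push config to the first (canary) stage, wait for it to verify
        healthy within a deadline, then propagate to the remaining stages."""
        canary, rest = stages[0], stages[1:]
        push(canary, config)
        deadline = time.monotonic() + canary_deadline_s
        while not healthy(canary):
            if time.monotonic() > deadline:
                # In the incident above this hard stop also queued every later
                # change; raising the deadline merely slows propagation instead.
                raise TimeoutError("canary did not verify in time; rollout halted")
            time.sleep(5)
        for stage in rest:
            push(stage, config)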

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors (502s) with Google App Engine

The issue with Google App Engine should have been resolved for all affected users as of 17:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors (502s) with Google App Engine

The issue with Google App Engine has been partially resolved. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors (502s) with Google App Engine

The issue with Google App Engine has been partially resolved. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors (502s) with Google App Engine

The issue with Google App Engine should have been partially resolved. We will provide another status update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors (502s) with Google App Engine

We are investigating reports of intermittent errors (502s) in Google App Engine. We will provide more information by 15:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Investigating possible Google Cloud Datastore Application Monitoring Metrics problem

The issue with Google Cloud Datastore Application Monitoring Metrics has been fully resolved for all affected Applications as of 1:30pm US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Investigating possible Google Cloud Datastore Application Monitoring Metrics

We are investigating an issue with Google Cloud Datastore related to Application Monitoring Metrics. We will provide more information by 1:30pm US/Pacific.

Google Cloud Platform
UPDATE: Incident 17013 - Issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific.

The issue with Cloud SQL 2nd Generation Instances should have been resolved for all affected instances as of 21:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17013 - Issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific.

We are currently experiencing an issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific. This may cause poor performance or query failures for large queries on impacted instances. Current data indicates that 3% of Cloud SQL 2nd Generation Instances were affected by this issue. As of 2017-01-31 20:30 PT, a fix has been applied to the majority of impacted instances, and we expect a full resolution in the near future. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 21:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We have fully mitigated the network connectivity issues for newly-created GCE instances as of 12:45 US/Pacific, with VPN connectivity issues being fully mitigated at 12:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

The issue with newly-created GCE instances experiencing network connectivity problems should have been mitigated for all GCE regions except europe-west1, which is currently clearing. Newly-created VPNs are affected as well; we are still working on a mitigation for this. We will provide another status update by 13:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

The issue with newly-created GCE instances experiencing network connectivity problems should have been mitigated for the majority of GCE regions and we expect a full resolution in the near future. We will provide another status update by 12:40 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We are experiencing a connectivity issue affecting newly-created VMs, as well as those undergoing live migrations beginning at Monday, 2017-01-30 10:54 US/Pacific. Mitigation work is currently underway. All zones should be coming back online in the next 15-20 minutes. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another update at 12:10 PST.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We are investigating an issue with newly-created VMs not having network connectivity. This also affects VMs undergoing live migrations. We will provide more information by 11:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

The issue with listing GCP projects and organizations should have been resolved for all affected users as of 15:21 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

The issue is still occurring for some projects for some users. Mitigation is still underway. We will provide the next update by 16:30 US/Pacific time.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

Listing of Google Cloud projects and organizations is still failing to show some entries. As this only affects listing, users can access their projects by navigating directly to appropriate URLs. Google engineers have a mitigation plan that is underway. We will provide another status update by 14:30 US/Pacific time with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

We are experiencing an intermittent issue with the Google Cloud Projects search index beginning at Monday, 2017-01-23 00:00 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 8:30 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18025 - BigQuery query job failures

The issue with BigQuery's Table Copy service and query jobs should have been resolved for all affected projects as of 07:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18025 - BigQuery query job failures

The issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 08:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - BigQuery query job failures

We believe the issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 06:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - BigQuery query job failures

We believe the issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 04:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - BigQuery query job failures

We are experiencing an issue with BigQuery query jobs beginning at Friday, 2017-01-21 18:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17001 - Cloud Console and Stackdriver may display the number of App Engine instances as zero for Java and Go

The issue with Cloud Console and Stackdriver showing the number of App Engine instances as zero should have been resolved for all affected projects as of 01:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Cloud Console and Stackdriver may display the number of App Engine instances as zero for Java and Go

We are experiencing an issue with Cloud Console and Stackdriver which may show the number of App Engine instances as zero beginning at Wednesday, 2017-01-18 18:45 US/Pacific. This issue should have been resolved for the majority of projects and we expect a full resolution by 2017-01-20 00:00 PST.

Google Cloud Platform
RESOLVED: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

The issue with BigQuery UI should have been resolved for all affected users as of 05:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

The issue with BigQuery Web UI should have been resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 06:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

We are investigating an issue with BigQuery Web UI. We will provide more information by 2017-01-10 06:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issues with Cloud Shell and Compute Engine serial output should have been resolved for all affected instances as of 22:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT in all zones. Instances created before 14:10 PT remain unaffected. Some instances created between 14:10 and 19:45 PT in us-east1-c and us-west1 may still be unable to view the serial output. We are currently in the process of applying the fix to zones in this region. Access to the serial console output should be restored for instances created between 14:10 and 19:45 PT in all other zones. We expect a full resolution in the near future. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT in all zones. Instances created before 14:10 PT remain unaffected. Access to the serial console output should be restored for all instances in the asia-east1 and asia-northeast1 regions, and the us-central1-a zone, created between 14:10 and 19:45 PT. Some instances created between 14:10 and 19:45 PT in other zones may still be unable to view the serial console output. We are currently in the process of applying the fix to the remaining zones. We expect a full resolution in the near future. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT, and all instances in us-central1-a, and asia-east1-b created between 14:10 and 19:45 PT. All other instances created before 14:10 PT remain unaffected. We expect a full resolution in the near future. We will provide another status update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

We are still investigating the issues with Compute Engine and Cloud Shell, and do not have any news at this time. We will provide another status update by 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

We are experiencing issues with Compute Engine serial port output and Cloud Shell beginning at Sunday, 2017-01-08 14:10 US/Pacific. Current data indicates that newly-created instances are unable to use the "get-serial-port-output" command in "gcloud", or use the Cloud Console to retrieve serial port output. Customers can still use the interactive serial console on these newly-created instances. Instances created before the impact time do not appear to be affected. Additionally, the Cloud Shell is intermittently available at this time. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:00 US/Pacific with current details.
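
For reference, the affected command can also be driven from a script; in the sketch below the instance name and zone are placeholders, and the invocation simply mirrors the get-serial-port-output command named above.

    # Sketch: fetching serial port output via the gcloud command named above.
    # Instance name and zone are placeholders.
    import subprocess

    result = subprocess.run(
        [
            "gcloud", "compute", "instances", "get-serial-port-output",
            "example-instance", "--zone=us-central1-a",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)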

Google Cloud Platform
RESOLVED: Incident 17001 - Cloud VPN issues in regions us-west1 and us-east1

The issue with Cloud VPN where some tunnels weren’t connecting in us-east1 and us-west1 should have been resolved for all affected tunnels as of 23:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Cloud VPN issues in regions us-west1 and us-east1

We are investigating reports of an issue with Cloud VPN in regions us-west1 and us-east1. We will provide more information by 23:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent result for object listing for multi-regional buckets in US

The issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in US should have been resolved for all affected projects as of 23:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent result for object listing for multi-regional buckets in US

We are still investigating the issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in the US. We will provide another status update by 2016-12-20 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent result for object listing for multi-regional buckets in US

We are experiencing an intermittent issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in the US beginning approximately at Friday, 2016-12-16 09:00 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2016-12-17 00:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16035 - Elevated Cloud Storage error rate and latency

The issue with Google Cloud Storage seeing increased errors and latency should have been resolved for all affected users as of 09:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16035 - Elevated Cloud Storage error rate and latency

We are continuing to experience an issue with Google Cloud Storage. Errors and latency have decreased, but are not yet at pre-incident levels. We are continuing to investigate mitigation strategies and to identify the root cause. Impact is limited to the US region. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16035 - Elevated Cloud Storage error rate and latency

We are investigating an issue with Google Cloud Storage serving increased errors and at a higher latency. We will provide more information by 10:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16011 - App Engine remote socket API errors in us-central region

The issue with App Engine applications having higher than expected socket API errors should have been resolved for all affected applications as of 22:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

The issue with App Engine remote socket API errors in us-central region should have been resolved for all affected projects as of 19:46 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 19:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18023 - Increased 500 error rate for BigQuery API calls

The issue with increased 500 errors from the BigQuery API has been resolved. We apologize for the impact that this incident had on your application.

Google Cloud Platform
RESOLVED: Incident 18022 - BigQuery Streaming API failing

Small correction to the incident report. The resolution time of the incident was 20:00 US/Pacific, not 20:11 US/Pacific. Similarly, total downtime was 4 hours.

Google Cloud Platform
RESOLVED: Incident 18022 - BigQuery Streaming API failing

SUMMARY: On Tuesday 8 November 2016, Google BigQuery’s streaming service, which includes streaming inserts and queries against recently committed streaming buffers, was largely unavailable for a period of 4 hours and 11 minutes. To our BigQuery customers whose business analytics were impacted during this outage, we sincerely apologize. We will be providing an SLA credit for the affected timeframe. We have conducted an internal investigation and are taking steps to improve our service. DETAILED DESCRIPTION OF IMPACT: On Tuesday 8 November 2016 from 16:00 to 20:11 US/Pacific, 73% of BigQuery streaming inserts failed with a 503 error code indicating an internal error had occurred during the insertion. At peak, 93% of BigQuery streaming inserts failed. During the incident, queries performed on tables with recently-streamed data returned a result code (400) indicating that the table was unavailable for querying. Queries against tables in which data were not streamed within the 24 hours preceding the incident were unaffected. There were no issues with non-streaming ingestion of data. ROOT CAUSE: The BigQuery streaming service requires authorization checks to verify that it is streaming data from an authorized entity to a table that entity has permissions to access. The authorization service relies on a caching layer in order to reduce the number of calls to the authorization backend. At 16:00 US/Pacific, a combination of reduced backend authorization capacity coupled with routine cache entry refreshes caused a surge in requests to the authorization backends, exceeding their current capacity. Because BigQuery does not cache failed authorization attempts, this overload meant that new streaming requests would require re-authorization, thereby further increasing load on the authorization backend. This continual increase of authorization requests on an already overloaded authorization backend resulted in continued and sustained authorization failures which propagated into streaming request and query failures. REMEDIATION AND PREVENTION: Google engineers were alerted to issues with the streaming service at 16:21 US/Pacific. Their initial hypothesis was that the caching layer for authorization requests was failing. The engineers started redirecting requests to bypass the caching layer at 16:51. After testing the system without the caching layer, the engineers determined that the caching layer was working as designed, and requests were directed to the caching layer again at 18:12. At 18:13, the engineering team was able to pinpoint the failures to a set of overloaded authorization backends and begin remediation. The issue with authorization capacity was ultimately resolved by incrementally reducing load on the authorization system internally and increasing the cache TTL, allowing streaming authorization requests to succeed and populate the cache so that internal services could be restarted. Recovery of streaming errors began at 19:34 US/Pacific and the streaming service was fully restored at 20:11. To prevent short-term recurrence of the issue, the engineering team has greatly increased the request capacity of the authorization backend. In the longer term, the BigQuery engineering team will work on several mitigation strategies to address the currently cascading effect of failed authorization requests. 
These strategies include caching intermediary responses to the authorization flow for the streaming service, caching failure states for authorization requests and adding rate limiting to the authorization service so that large increases in cache miss rate will not overwhelm the authorization backend. In addition, the BigQuery engineering team will be improving the monitoring of available capacity on the authorization backend and will add additional alerting so capacity issues can be mitigated before they become cascading failures. The BigQuery engineering team will also be investigating ways to reduce the spike in authorization traffic that occurs daily at 16:00 US/Pacific when the cache is rebuilt, to more evenly distribute requests to the authorization backend. Finally, we have received feedback that our communications during the outage left a lot to be desired. We agree with this feedback. While our engineering teams launched an all-hands-on-deck effort to resolve this issue within minutes of its detection, we did not adequately communicate both the level of effort and the steady progress of diagnosis, triage and restoration happening during the incident. We clearly erred in not communicating promptly, crisply and transparently to affected customers during this incident. We will be addressing our communications — for all Google Cloud systems, not just BigQuery — as part of a separate effort, which has already been launched. We recognize the extended duration of this outage, and we sincerely apologize to our BigQuery customers for the impact to your business analytics.
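
As a generic illustration of the "caching failure states" mitigation described above (not BigQuery's actual implementation), the sketch below shows a small TTL cache that also remembers failed lookups for a shorter period, so that a burst of retries does not all fall through to the backend; the TTL values and the backend callable are hypothetical.

    # Illustrative sketch of negative caching with TTLs, in the spirit of the
    # mitigation described above. Not BigQuery's implementation.
    import time
    from typing import Callable, Dict, Tuple

    class AuthCache:
        def __init__(self, backend: Callable[[str], bool],
                     ok_ttl_s: float = 300.0, fail_ttl_s: float = 30.0) -> None:
            self._backend = backend
            self._ok_ttl = ok_ttl_s
            self._fail_ttl = fail_ttl_s
            self._entries: Dict[str, Tuple[bool, float]] = {}  # key -> (allowed, expiry)

        def allowed(self, key: str) -> bool:
            now = time.monotonic()
            cached = self._entries.get(key)
            if cached and cached[1] > now:
                return cached[0]
            try:
                result = self._backend(key)
                ttl = self._ok_ttl
            except Exception:
                # Negative caching: remember the failure briefly so repeated
                # retries do not hammer an already-overloaded backend.
                result, ttl = False, self._fail_ttl
            self._entries[key] = (result, now + ttl)
            return result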

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

The issue with the BigQuery Streaming API should have been resolved for all affected tables as of 20:07 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We're continuing to work to restore the service to the BigQuery Streaming API. We will add an update at 20:30 US/Pacific with further information.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are continuing to investigate the issue with BigQuery Streaming API. We will add an update at 20:00 US/Pacific with further information.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We have taken steps to mitigate the issue, which has led to some improvements. The issue continues to impact the BigQuery Streaming API and tables with a streaming buffer. We will provide a further status update at 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are continuing to investigate the issue with BigQuery Streaming API. The issue may also impact tables with a streaming buffer, making them inaccessible. This will be clarified in the next update at 19:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are still investigating the issue with BigQuery Streaming API. There are no other details to share at this time but we are actively working to resolve this. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are still investigating the issue with BigQuery Streaming API. Current data indicates that all projects are affected by this issue. We will provide another status update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are investigating an issue with the BigQuery Streaming API. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18021 - BigQuery Streaming API failing

We are investigating an issue with the BigQuery Streaming API. We will provide more information by 5:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The issue with Cloud Platform Console not being able to perform delete operations should have been resolved for all affected users as of 12:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

SUMMARY: On Monday, 31 October 2016, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed for a duration of 124 minutes. Creation of new Cloud SQL Second Generation instances also failed during this incident. If your service or application was affected, we apologize. We have conducted a detailed review of the causes of this incident and are ensuring that we apply the appropriate fixes so that it will not recur. DETAILED DESCRIPTION OF IMPACT: On Monday, 31 October 2016 from 13:11 to 15:15 PDT, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed. 0.1% of pull requests experienced latencies of up to 4 minutes for end-to-end message delivery. Creation of all new Cloud SQL Second Generation instances also failed during this incident. Existing instances were not affected. ROOT CAUSE: At 13:08, a system in the Cloud Pub/Sub control plane experienced a connectivity issue to its persistent storage layer for a duration of 83 seconds. This caused a queue of storage requests to build up. When the storage layer re-connected, the queued requests were executed, which exceeded the available processing quota for the storage system. The system entered a feedback loop in which storage requests continued to queue up leading to further latency increases and more queued requests. The system was unable to exit from this state until additional capacity was added. Creation of a new Cloud SQL Second Generation instance requires a new Cloud Pub/Sub subscription. REMEDIATION AND PREVENTION: Our monitoring systems detected the outage and paged oncall engineers at 13:19. We determined root cause at 14:05 and acquired additional storage capacity for the Pub/Sub control plane at 14:42. The outage ended at 15:15 when this capacity became available. To prevent this issue from recurring, we have already increased the storage capacity for the Cloud Pub/Sub control plane. We will change the retry behavior of the control plane to prevent a feedback loop if storage quota is temporarily exceeded. We will also improve our monitoring to ensure we can determine root cause for this type of failure more quickly in future. We apologize for the inconvenience this issue caused our customers.
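
The retry-behavior change described above is, in general form, capped exponential backoff with jitter rather than immediate re-queueing. The sketch below is a generic illustration of that pattern, not the Pub/Sub control plane's code; the operation callable and the limits are hypothetical.

    # Generic sketch of capped exponential backoff with jitter, the usual way
    # to stop a retry storm from feeding back into an overloaded dependency.
    import random
    import time
    from typing import Callable, TypeVar

    T = TypeVar("T")

    def call_with_backoff(op: Callable[[], T], max_attempts: int = 6,
                          base_s: float = 0.5, cap_s: float = 30.0) -> T:
        for attempt in range(max_attempts):
            try:
                return op()
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                # Full jitter keeps queued retries from arriving in one burst.
                delay = random.uniform(0, min(cap_s, base_s * (2 ** attempt)))
                time.sleep(delay)
        raise RuntimeError("unreachable")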

Google Cloud Platform
RESOLVED: Incident 16003 - Issues with Cloud Pub/Sub

SUMMARY: On Monday, 31 October 2016, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed for a duration of 124 minutes. Creation of new Cloud SQL Second Generation instances also failed during this incident. If your service or application was affected, we apologize. We have conducted a detailed review of the causes of this incident and are ensuring that we apply the appropriate fixes so that it will not recur. DETAILED DESCRIPTION OF IMPACT: On Monday, 31 October 2016 from 13:11 to 15:15 PDT, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed. 0.1% of pull requests experienced latencies of up to 4 minutes for end-to-end message delivery. Creation of all new Cloud SQL Second Generation instances also failed during this incident. Existing instances were not affected. ROOT CAUSE: At 13:08, a system in the Cloud Pub/Sub control plane experienced a connectivity issue to its persistent storage layer for a duration of 83 seconds. This caused a queue of storage requests to build up. When the storage layer re-connected, the queued requests were executed, which exceeded the available processing quota for the storage system. The system entered a feedback loop in which storage requests continued to queue up leading to further latency increases and more queued requests. The system was unable to exit from this state until additional capacity was added. Creation of a new Cloud SQL Second Generation instance requires a new Cloud Pub/Sub subscription. REMEDIATION AND PREVENTION: Our monitoring systems detected the outage and paged oncall engineers at 13:19. We determined root cause at 14:05 and acquired additional storage capacity for the Pub/Sub control plane at 14:42. The outage ended at 15:15 when this capacity became available. To prevent this issue from recurring, we have already increased the storage capacity for the Cloud Pub/Sub control plane. We will change the retry behavior of the control plane to prevent a feedback loop if storage quota is temporarily exceeded. We will also improve our monitoring to ensure we can determine root cause for this type of failure more quickly in future. We apologize for the inconvenience this issue caused our customers.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The issue with the Cloud Platform Console not being able to perform delete operations should have been resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by Fri 2016-11-04 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The issue with the Cloud Platform Console not being able to perform delete operations has been identified with the root cause and we expect a full resolution in the near future. We will provide another status update by Fri 2016-11-04 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operation on Cloud Platform Console not being performed

We are experiencing an issue with some delete operations within the Cloud Platform Console, beginning at Tuesday, 2016-11-01 10:00 US/Pacific. Current data indicates that all users are affected by this issue. The gcloud command line tool may be used as a workaround for those who need to manage their resources immediately. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 22:30 US/Pacific.
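
As an example of the gcloud workaround mentioned above, the sketch below deletes a Compute Engine instance from a script; the resource type, instance name and zone are assumptions, since the entry does not say which resources were affected.

    # Sketch of the gcloud workaround for deletes while the Console is affected.
    # Resource type, name and zone are illustrative assumptions.
    import subprocess

    subprocess.run(
        [
            "gcloud", "compute", "instances", "delete", "example-instance",
            "--zone=us-central1-a",
            "--quiet",  # skip the interactive confirmation prompt
        ],
        check=True,
    )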

Google Cloud Platform
RESOLVED: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

The issue with second generation Cloud SQL instance creation should be resolved for all affected projects as of 15:15 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 16003 - Issues with Cloud Pub/Sub

The issue with Cloud Pub/Sub should be resolved for all affected projects as of 15:15 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16003 - Issues with Cloud Pub/Sub

We are continuing to investigate an issue with Cloud Pub/Sub. We will provide an update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

We are continuing to investigate an issue with second generation Cloud SQL instance creation. We will provide another update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 16003 - Issues with Cloud Pub/Sub

We are currently investigating an issue with Cloud Pub/Sub. We will provide an update at 15:00 PDT with more information.

Google Cloud Platform
UPDATE: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

We are currently investigating an issue with second generation Cloud SQL instance creation. We will provide an update with more information at 15:00 PDT.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server should have been resolved for all affected clusters as of 10:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server has now been resolved for some of the existing clusters, too. We are continuing to repair the rest of the clusters. We will provide next status update when this repair is complete.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server has been fully resolved for all new clusters. The work to repair the existing clusters is still ongoing and is expected to last for a few more hours. It may also result in the restart of the containers and/or virtual machines in the repaired clusters as per the previous update. If your cluster is affected by this issue and you wish to solve this problem more quickly, you could choose to delete and recreate your GKE cluster. We will provide next status update when the work to repair the existing clusters has completed.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We have now identified the cause of the issue and are in the process of rolling out the fix for it into production. This may result in the restart of the affected containers and/or virtual machines in the GKE cluster. We apologize for any inconvenience this might cause.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are continuing to work on resolving the issue with Google Container Engine nodes connecting to the metadata server. We will provide next status update as soon as the proposed fix for this issue is finalized and validated.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still working on resolving the issue with Google Container Engine nodes connecting to the metadata server. We are actively testing a fix for it, and once it is validated, we will push this fix into production. We will provide next status update by 2016-10-24 01:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still investigating the issue with Google Container Engine nodes connecting to the metadata server. Current data indicates that less than 10% of clusters are still affected by this issue. We are actively testing a fix. Once confirmed, we will push this fix into production. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still investigating the issue with Google Container Engine nodes connecting to the metadata server. Further investigation reveals that less than 10% of clusters are affected by this issue. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

Customers experiencing this error will see messages containing the following in their logs: "WARNING: exception thrown while executing request java.net.UnknownHostException: metadata" This is caused by a change that inadvertently prevents hosts from properly resolving the DNS address for the metadata server. We have identified the root cause and are preparing a fix. No action is required by customers at this time. The proposed fix should resolve the issue for all customers as soon as it is prepared, tested, and deployed. We will add another update at 21:00 US/Pacific with current details.
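
To see what the affected lookup is doing, the sketch below probes the metadata server from a node using the documented metadata.google.internal hostname and Metadata-Flavor header; it is a diagnostic illustration only, and the specific metadata path queried is an arbitrary choice.

    # Diagnostic sketch: probe the GCE/GKE metadata server from a node. The
    # hostname and header are the documented ones; the path chosen is arbitrary.
    import requests

    URL = "http://metadata.google.internal/computeMetadata/v1/instance/hostname"

    try:
        resp = requests.get(URL, headers={"Metadata-Flavor": "Google"}, timeout=5)
        resp.raise_for_status()
        print("metadata reachable:", resp.text)
    except requests.exceptions.ConnectionError as exc:
        # A DNS failure here corresponds to the UnknownHostException in the logs.
        print("cannot resolve/reach metadata server:", exc)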

Google Cloud Platform
RESOLVED: Incident 16020 - 502s from HTTP(S) Load Balancer

SUMMARY: On Thursday 13 October 2016, approximately one-third of requests sent to the Google Compute Engine HTTP(S) Load Balancers between 15:07 and 17:25 PDT received an HTTP 502 error rather than the expected response. If your service or application was affected, we apologize. We took immediate action to restore service once the problem was detected, and are taking steps to improve the Google Compute Engine HTTP(S) Load Balancer’s performance and availability. DETAILED DESCRIPTION OF IMPACT: Starting at 15:07 PDT on Thursday 13 October 2016, Google Compute Engine HTTP(S) Load Balancers started to return elevated rates of HTTP 502 (Bad Gateway) responses. The error rate rose progressively from 2% to a peak of 45% of all requests at 16:09 and remained there until 17:03. From 17:03 to 17:15, the error rate declined rapidly from 45% to 2%. By 17:25 requests were routing as expected and the incident was over. During the incident, the error rate seen by applications using GCLB varied depending on the network routing of their requests to Google. ROOT CAUSE: The Google Compute Engine HTTP(S) Load Balancer system is a global, geographically-distributed multi-tiered software stack which receives incoming HTTP(S) requests via many points in Google's global network, and dispatches them to appropriate Google Compute Engine instances. On 13 October 2016, a configuration change was rolled out to one of these layers with widespread distribution beginning at 15:07. This change triggered a software bug which decoupled second-tier load balancers from a number of first-tier load balancers. The affected first-tier load balancers therefore had no forwarding path for incoming requests and returned the HTTP 502 code to indicate this. Google’s networking systems have a number of safeguards to prevent them from propagating incorrect or invalid configurations, and to reduce the scope of the impact in the event that a problem is exposed in production. These safeguards were partially successful in this instance, limiting both the scope and the duration of the event, but not preventing it entirely. The first relevant safeguard is a canary deployment, where the configuration is deployed at a single site and that site is verified to be functioning within normal bounds. In this case, the canary step did generate a warning, but it was not sufficiently precise to cause the on-call engineer to immediately halt the rollout. The new configuration subsequently rolled out in stages, but was halted part way through as further alerts indicated that it was not functioning correctly. By design, this progressive rollout limited the error rate experienced by customers. REMEDIATION AND PREVENTION: Once the nature and scope of the issue became clear, Google engineers first quickly halted and reverted the rollout. This prevented a larger fraction of GCLB instances from being affected. Google engineers then set about restoring function to the GCLB instances which had been exposed to the configuration. They verified that restarting affected GCLB instances restored the pre-rollout configuration, and then rapidly restarted all affected GCLB instances, ending the event. Google understands that global load balancers are extremely useful, but also may be a single point of failure for your service. We are committed to applying the lessons from this outage in order to ensure that this type of incident does not recur. 
One of our guiding principles for avoiding large-scale incidents is to roll out global changes slowly and carefully monitor for errors. We typically have a period of soak time during a canary release before rolling out more widely. In this case, the change was pushed too quickly for accurate detection of the class of failure uncovered by the configuration being rolled out. We will change our processes to be more conservative when rolling out configuration changes to critical systems. As defense in depth, Google engineers are also changing the black box monitoring for GCLB so that it will test the first-tier load balancers impacted by this incident. We will also be improving the black box monitoring to ensure that our probers cover all use cases. In addition, we will add an alert for elevated error rates between first-tier and second-tier load balancers. We apologize again for the impact this issue caused our customers.
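
To illustrate the kind of black-box monitoring described above, a minimal external prober sketch that samples an endpoint behind an HTTP(S) load balancer and reports the fraction of 502 responses; the URL, sample count, and alert threshold are placeholders, not values from the incident:

    # Minimal external black-box probe: samples an endpoint behind an HTTP(S)
    # load balancer and reports the fraction of HTTP 502 responses.
    # The URL and threshold below are placeholders.
    import time
    import urllib.error
    import urllib.request

    PROBE_URL = "https://example.com/healthz"   # hypothetical endpoint behind the LB
    SAMPLES = 30
    ALERT_THRESHOLD = 0.02                      # alert if more than 2% of probes return 502

    def probe_once(url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            return err.code        # 4xx/5xx still tell us what the load balancer returned
        except urllib.error.URLError:
            return None            # connection-level failure

    def run_probes():
        statuses = [probe_once(PROBE_URL) for _ in range(SAMPLES)]
        rate_502 = statuses.count(502) / len(statuses)
        if rate_502 > ALERT_THRESHOLD:
            print("ALERT: %.0f%% of probes returned 502" % (rate_502 * 100))
        else:
            print("OK: 502 rate %.0f%%" % (rate_502 * 100))

    if __name__ == "__main__":
        run_probes()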

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with BigQuery query failures should have been resolved for all affected users as of 8:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with Google BigQuery API calls returning 500 Internal Errors should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with Google BigQuery API calls returning 500 Internal Errors should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 07:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

We are investigating an issue with Google BigQuery. We will provide more information by 6:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(s) Load Balancer

The issue with the Google Cloud Platform HTTP(S) Load Balancer returning 502 response codes should have been resolved for all affected customers as of 17:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(s) Load Balancer

We are still investigating the issue with Google Cloud Platform HTTP(S) Load Balancers returning 502 errors, and will provide an update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(s) Load Balancer

We are still investigating the issue with Google Cloud Platform HTTP(S) Load Balancers returning 502 errors, and will provide an update by 17:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(s) Load Balancer

We are experiencing an issue with the Google Cloud Platform HTTP(S) Load Balancer returning 502 response codes, starting at 2016-10-13 15:30 US/Pacific. We are investigating the issue, and will provide an update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16019 - Hurricane Matthew may impact GCP services in us-east1

The Google Cloud Platform team is keeping a close watch on the path of Hurricane Matthew. The National Hurricane Center 3-day forecast indicates that the storm is tracking within 200 miles of the datacenters housing the GCP region us-east1. We do not anticipate any specific service interruptions. Our datacenter is designed to withstand a direct hit from a more powerful hurricane than Matthew without disruption, and we maintain triple-redundant diverse-path backbone networking precisely to be resilient to extreme events. We have staff on site and plan to run services normally. Despite all of the above, it is statistically true that there is an increased risk of a region-level utility grid or other infrastructure disruption which may result in a GCP service interruption. If we anticipate a service interruption – for example, if the regional grid loses power and our datacenter is operating on generator – our protocol is to share specific updates with our customers with 12 hours' notice.

Google Cloud Platform
UPDATE: Incident 16034 - Elevated Cloud Storage error rate and latency

The issue with Cloud Storage in which some projects encountered elevated errors and latency should have been resolved for all affected projects as of 23:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16034 - Elevated Cloud Storage error rate and latency

We are investigating an issue with Cloud Storage. We will provide more information by 23:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We have restored most of the missing Google Cloud Pub/Sub subscriptions for affected projects. We expect to restore the remaining missing subscriptions within one hour. We have already identified and fixed the root cause of the issue. We will conduct an internal investigation and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are working on restoring the missing Cloud Pub/Sub subscriptions for affected customers, and will provide an ETA for complete restoration when available.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 10:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 08:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 06:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. In the meantime, affected users can re-create missing subscriptions manually in order to make them available. We will provide another status update by Wednesday, 2016-09-28 00:00 US/Pacific with current details.
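
For reference, a minimal sketch of re-creating a missing subscription, assuming a recent google-cloud-pubsub Python client; the project, topic, and subscription IDs are placeholders, and settings such as the ack deadline should match the original subscription:

    # Sketch only: re-create a missing Pub/Sub subscription with the
    # google-cloud-pubsub client library. IDs below are placeholders.
    from google.cloud import pubsub_v1

    project_id = "my-project"            # placeholder
    topic_id = "my-topic"                # placeholder
    subscription_id = "my-subscription"  # placeholder

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, topic_id)
    subscription_path = subscriber.subscription_path(project_id, subscription_id)

    subscription = subscriber.create_subscription(
        name=subscription_path,
        topic=topic_path,
        ack_deadline_seconds=10,   # adjust to match the original subscription
    )
    print("Re-created subscription:", subscription.name)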

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We experienced an issue with Cloud Pub/Sub in which some subscriptions were deleted unexpectedly between approximately 13:40 and 18:45 US/Pacific on Tuesday, 2016-09-27. We are going to recreate the deleted subscriptions. We will provide another status update by 22:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

The issue with slow Compute Engine operations in asia-east1-a has been resolved as of 13:37 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

We are still working on the issue with Compute Engine operations. After mitigation was applied, operations in asia-east1-a have continued to run normally. A final fix is still underway. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

We are still investigating the issue with Compute Engine operations. We have applied mitigation and currently operations in asia-east1-a are processing normally. We are applying some final fixes and monitoring the issue. We will provide another status update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

This incident only covers instances in the asia-east1-a zone.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

It is taking multiple minutes to create new VMs, restart existing VMs that abnormally terminate, or hot-attach disks.

Google Cloud Platform
RESOLVED: Incident 16033 - Google Cloud Storage serving high error rates.

We have completed our internal investigation and results suggest this incident impacted a very small number of projects. We have reached out to affected users directly and if you have not heard from us, your project(s) were not impacted.

Google Cloud Platform
RESOLVED: Incident 16033 - Google Cloud Storage serving high error rates.

The issue with Google Cloud Storage serving a high percentage of errors should have been resolved for all affected users as of 13:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 16017 - Networking issue with Google Compute Engine services

SUMMARY: On Monday 22 August 2016, the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone for a duration of 25 minutes. All other zones in the US-CENTRAL1 region were unaffected. All network traffic within the zone was also unaffected. We apologize to customers whose service or application was affected by this incident. We understand that a network disruption has a negative impact on your application - particularly if it is homed in a single zone - and we apologize for the inconvenience this caused. What follows is our detailed analysis of the root cause and actions we will take in order to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT: We have received feedback from customers asking us to specifically and separately enumerate the impact of incidents to any service that may have been touched. We agree that this will make it easier to reason about the impact of any particular event and we have done so in the following descriptions. On Monday 22 August 2016 from 07:05 to 07:30 PDT the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone. App Engine 6% of App Engine Standard Environment applications in the US-CENTRAL region served elevated error rates for up to 8 minutes, until the App Engine serving infrastructure automatically redirected traffic to a failover zone. The aggregate error rate across all impacted applications during the incident period was 3%. The traffic redirection caused a Memcache flush for affected applications, and also loading requests as new instances of the applications started up in the failover zones. All App Engine Flexible Environment applications deployed to the US-CENTRAL1-F zone were unavailable for the duration of the incident. Additionally, 4.5% of these applications experienced various levels of unavailability for up to an additional 5 hours while the system recovered. Deployments for US-CENTRAL Flexible applications were delayed during the incident. Our engineers disabled the US-CENTRAL1-F zone for new deployments during the incident, so that any customers who elected to redeploy, immediately recovered. Cloud Console The Cloud Console was available during the incident, though some App Engine administrative pages did not load for applications in US-CENTRAL and 50% of project creation requests failed to complete and needed to be retried by customers before succeeding. Cloud Dataflow Some Dataflow running jobs in the US-CENTRAL1 region experienced delays in processing. Although most of the affected jobs recovered gracefully after the incident ended, up to 2.5% of affected jobs in this zone became stuck and required manual termination by customers. New jobs created during the incident were not impacted. Cloud SQL Cloud SQL First Generation instances were not impacted by this incident. 30% of Cloud SQL Second Generation instances in US-CENTRAL1 were unavailable for up to 5 minutes, after which they became available again. An additional 15% of Second Generation instances were unavailable for 22 minutes. Compute Engine All instances in the US-CENTRAL1-F zone were inaccessible from outside the zone for the duration of the incident. 9% of them remained inaccessible from outside the zone for an additional hour. Container Engine Container Engine clusters running in US-CENTRAL1-F were inaccessible from outside of the zone during the incident although they continued to serve. 
In addition, calls to the Container Engine API experienced a 4% error rate and elevated latency during the incident, though this was substantially mitigated if the client retried the request. Stackdriver Logging 20% of log API requests sent to Stackdriver Logging in the US-CENTRAL1 region failed during the incident, though App Engine logging was not impacted. Clients retrying requests recovered gracefully. Stackdriver Monitoring Requests to the StackDriver web interface and the Google Monitoring API v2beta2 and v3 experienced elevated latency and an error rate of up to 3.5% during the incident. In addition, some alerts were delayed. Impact for API calls was substantially mitigated if the client retried the request. ROOT CAUSE: On 18 July, Google carried out a planned maintenance event to inspect and test the UPS on a power feed in one zone in the US-CENTRAL1 region. That maintenance disrupted one of the two power feeds to network devices that control routes into and out of the US-CENTRAL1-F zone. Although this did not cause any disruption in service, these devices unexpectedly and silently disabled the affected power supply modules - a previously unseen behavior. Because our monitoring systems did not notify our network engineers of this problem the power supply modules were not re-enabled after the maintenance event. The service disruption was triggered on Monday 22 August, when our engineers carried out another planned maintenance event that removed power to the second power feed of these devices, causing them to disable the other power supply module as well, and thus completely shut down. Following our standard procedure when carrying out maintenance events, we made a detailed line walk of all critical equipment prior to, and after, making any changes. However, in this case we did not detect the disabled power supply modules. Loss of these network devices meant that machines in US-CENTRAL1-F did not have routes into and out of the zone but could still communicate to other machines within the same zone. REMEDIATION AND PREVENTION: Our network engineers received an alert at 07:14, nine minutes after the incident started. We restored power to the devices at 07:30. The network returned to service without further intervention after power was restored. As immediate followup to this incident, we have already carried out an audit of all other network devices of this type in our fleet to verify that there are none with disabled power supply modules. We have also written up a detailed post mortem of this incident and will take the following actions to prevent future outages of this type: Our monitoring will be enhanced to detect cases in which power supply modules are disabled. This will ensure that conditions that are missed by the manual line walk prior to maintenance events are picked up by automated monitoring. We will change the configuration of these network devices so that power disruptions do not cause them to disable their power supply modules. The interaction between the network control plane and the data plane should be such that the data plane should "fail open" and continue to route packets in the event of control plane failures. We will add support for networking protocols that have the capability to continue to route traffic for a short period in the event of failures in control plane components. 
We will also be taking various actions to improve the resilience of the affected services to single-zone outages, including the following: App Engine Although App Engine Flexible Environment is currently in Beta, we expect production services to be more resilient to single zone disruptions. We will make this extra resilience an exit criteria before we allow the service to reach General Availability. Cloud Dataflow We will improve resilience of Dataflow to single-zone outages by implementing better strategies for migrating the job controller to a new zone in the event of an outage. Work on this remediation is already underway. Stackdriver Logging We will make improvements to the Stackdriver Logging service (currently in Beta) in the areas of automatic failover and capacity management before this service goes to General Availability. This will ensure that it is resilient to single-zone outages. Stackdriver Monitoring The Google Monitoring API (currently in beta) is already hosted in more than one zone, but we will further improve its resilience by adding additional capacity to ensure a single-zone outage does not cause overload in any other zones. We will do this before this service exits to General Availability. Finally, we know that you depend on Google Cloud Platform for your production workloads and we apologize for the inconvenience this event caused.

Google Cloud Platform
RESOLVED: Incident 16009 - Networking issue with Google App Engine services

SUMMARY: On Monday 22 August 2016, the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone for a duration of 25 minutes. All other zones in the US-CENTRAL1 region were unaffected. All network traffic within the zone was also unaffected. We apologize to customers whose service or application was affected by this incident. We understand that a network disruption has a negative impact on your application - particularly if it is homed in a single zone - and we apologize for the inconvenience this caused. What follows is our detailed analysis of the root cause and actions we will take in order to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT: We have received feedback from customers asking us to specifically and separately enumerate the impact of incidents to any service that may have been touched. We agree that this will make it easier to reason about the impact of any particular event and we have done so in the following descriptions. On Monday 22 August 2016 from 07:05 to 07:30 PDT the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone. App Engine 6% of App Engine Standard Environment applications in the US-CENTRAL region served elevated error rates for up to 8 minutes, until the App Engine serving infrastructure automatically redirected traffic to a failover zone. The aggregate error rate across all impacted applications during the incident period was 3%. The traffic redirection caused a Memcache flush for affected applications, and also loading requests as new instances of the applications started up in the failover zones. All App Engine Flexible Environment applications deployed to the US-CENTRAL1-F zone were unavailable for the duration of the incident. Additionally, 4.5% of these applications experienced various levels of unavailability for up to an additional 5 hours while the system recovered. Deployments for US-CENTRAL Flexible applications were delayed during the incident. Our engineers disabled the US-CENTRAL1-F zone for new deployments during the incident, so that any customers who elected to redeploy, immediately recovered. Cloud Console The Cloud Console was available during the incident, though some App Engine administrative pages did not load for applications in US-CENTRAL and 50% of project creation requests failed to complete and needed to be retried by customers before succeeding. Cloud Dataflow Some Dataflow running jobs in the US-CENTRAL1 region experienced delays in processing. Although most of the affected jobs recovered gracefully after the incident ended, up to 2.5% of affected jobs in this zone became stuck and required manual termination by customers. New jobs created during the incident were not impacted. Cloud SQL Cloud SQL First Generation instances were not impacted by this incident. 30% of Cloud SQL Second Generation instances in US-CENTRAL1 were unavailable for up to 5 minutes, after which they became available again. An additional 15% of Second Generation instances were unavailable for 22 minutes. Compute Engine All instances in the US-CENTRAL1-F zone were inaccessible from outside the zone for the duration of the incident. 9% of them remained inaccessible from outside the zone for an additional hour. Container Engine Container Engine clusters running in US-CENTRAL1-F were inaccessible from outside of the zone during the incident although they continued to serve. 
In addition, calls to the Container Engine API experienced a 4% error rate and elevated latency during the incident, though this was substantially mitigated if the client retried the request. Stackdriver Logging 20% of log API requests sent to Stackdriver Logging in the US-CENTRAL1 region failed during the incident, though App Engine logging was not impacted. Clients retrying requests recovered gracefully. Stackdriver Monitoring Requests to the StackDriver web interface and the Google Monitoring API v2beta2 and v3 experienced elevated latency and an error rate of up to 3.5% during the incident. In addition, some alerts were delayed. Impact for API calls was substantially mitigated if the client retried the request. ROOT CAUSE: On 18 July, Google carried out a planned maintenance event to inspect and test the UPS on a power feed in one zone in the US-CENTRAL1 region. That maintenance disrupted one of the two power feeds to network devices that control routes into and out of the US-CENTRAL1-F zone. Although this did not cause any disruption in service, these devices unexpectedly and silently disabled the affected power supply modules - a previously unseen behavior. Because our monitoring systems did not notify our network engineers of this problem the power supply modules were not re-enabled after the maintenance event. The service disruption was triggered on Monday 22 August, when our engineers carried out another planned maintenance event that removed power to the second power feed of these devices, causing them to disable the other power supply module as well, and thus completely shut down. Following our standard procedure when carrying out maintenance events, we made a detailed line walk of all critical equipment prior to, and after, making any changes. However, in this case we did not detect the disabled power supply modules. Loss of these network devices meant that machines in US-CENTRAL1-F did not have routes into and out of the zone but could still communicate to other machines within the same zone. REMEDIATION AND PREVENTION: Our network engineers received an alert at 07:14, nine minutes after the incident started. We restored power to the devices at 07:30. The network returned to service without further intervention after power was restored. As immediate followup to this incident, we have already carried out an audit of all other network devices of this type in our fleet to verify that there are none with disabled power supply modules. We have also written up a detailed post mortem of this incident and will take the following actions to prevent future outages of this type: Our monitoring will be enhanced to detect cases in which power supply modules are disabled. This will ensure that conditions that are missed by the manual line walk prior to maintenance events are picked up by automated monitoring. We will change the configuration of these network devices so that power disruptions do not cause them to disable their power supply modules. The interaction between the network control plane and the data plane should be such that the data plane should "fail open" and continue to route packets in the event of control plane failures. We will add support for networking protocols that have the capability to continue to route traffic for a short period in the event of failures in control plane components. 
We will also be taking various actions to improve the resilience of the affected services to single-zone outages, including the following: App Engine Although App Engine Flexible Environment is currently in Beta, we expect production services to be more resilient to single zone disruptions. We will make this extra resilience an exit criteria before we allow the service to reach General Availability. Cloud Dataflow We will improve resilience of Dataflow to single-zone outages by implementing better strategies for migrating the job controller to a new zone in the event of an outage. Work on this remediation is already underway. Stackdriver Logging We will make improvements to the Stackdriver Logging service (currently in Beta) in the areas of automatic failover and capacity management before this service goes to General Availability. This will ensure that it is resilient to single-zone outages. Stackdriver Monitoring The Google Monitoring API (currently in beta) is already hosted in more than one zone, but we will further improve its resilience by adding additional capacity to ensure a single-zone outage does not cause overload in any other zones. We will do this before this service exits to General Availability. Finally, we know that you depend on Google Cloud Platform for your production workloads and we apologize for the inconvenience this event caused.

Google Cloud Platform
RESOLVED: Incident 17011 - Cloud SQL 2nd generation failing to create new instances

The issue with creating instances on Cloud SQL Second Generation should have been resolved for all affected projects as of 17:38 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17011 - Cloud SQL 2nd generation failing to create new instances

We are investigating an issue with creating new instances on Cloud SQL Second Generation. We will provide more information by 18:50 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16008 - App Engine Outage

SUMMARY: On Thursday 11 August 2016, 21% of Google App Engine applications hosted in the US-CENTRAL region experienced error rates in excess of 10% and elevated latency between 13:13 and 15:00 PDT. An additional 16% of applications hosted on the same GAE instance observed lower rates of errors and latency during the same period. We apologize for this incident. We know that you choose to run your applications on Google App Engine to obtain flexible, reliable, high-performance service, and in this incident we have not delivered the level of reliability for which we strive. Our engineers have been working hard to analyze what went wrong and ensure incidents of this type will not recur. DETAILED DESCRIPTION OF IMPACT: On Thursday 11 August 2016 from 13:13 to 15:00 PDT, 18% of applications hosted in the US-CENTRAL region experienced error rates between 10% and 50%, and 3% of applications experienced error rates in excess of 50%. Additionally, 14% experienced error rates between 1% and 10%, and 2% experienced error rate below 1% but above baseline levels. In addition, the 37% of applications which experienced elevated error rates also observed a median latency increase of just under 0.8 seconds per request. The remaining 63% of applications hosted on the same GAE instance, and applications hosted on other GAE instances, did not observe elevated error rates or increased latency. Both App Engine Standard and Flexible Environment applications in US-CENTRAL were affected by this incident. In addition, some Flexible Environment applications were unable to deploy new versions during this incident. App Engine applications in US-EAST1 and EUROPE-WEST were not impacted by this incident. ROOT CAUSE: The incident was triggered by a periodic maintenance procedure in which Google engineers move App Engine applications between datacenters in US-CENTRAL in order to balance traffic more evenly. As part of this procedure, we first move a proportion of apps to a new datacenter in which capacity has already been provisioned. We then gracefully drain traffic from an equivalent proportion of servers in the downsized datacenter in order to reclaim resources. The applications running on the drained servers are automatically rescheduled onto different servers. During this procedure, a software update on the traffic routers was also in progress, and this update triggered a rolling restart of the traffic routers. This temporarily diminished the available router capacity. The server drain resulted in rescheduling of multiple instances of manually-scaled applications. App Engine creates new instances of manually-scaled applications by sending a startup request via the traffic routers to the server hosting the new instance. Some manually-scaled instances started up slowly, resulting in the App Engine system retrying the start requests multiple times which caused a spike in CPU load on the traffic routers. The overloaded traffic routers dropped some incoming requests. Although there was sufficient capacity in the system to handle the load, the traffic routers did not immediately recover due to retry behavior which amplified the volume of requests. REMEDIATION AND PREVENTION: Google engineers were monitoring the system during the datacenter changes and immediately noticed the problem. 
Although we rolled back the change that drained the servers within 11 minutes, this did not sufficiently mitigate the issue because retry requests had generated enough additional traffic to keep the system’s total load at a substantially higher-than-normal level. As designed, App Engine automatically redirected requests to other datacenters away from the overload - which reduced the error rate. Additionally, our engineers manually redirected all traffic at 13:56 to other datacenters, which further mitigated the issue. Finally, we identified a configuration error that caused an imbalance of traffic in the new datacenters. Fixing this at 15:00 fully resolved the incident. In order to prevent a recurrence of this type of incident, we have added more traffic routing capacity in order to create more capacity buffer when draining servers in this region. We will also change how applications are rescheduled so that the traffic routers are not called, and also modify the system's retry behavior so that it cannot trigger this type of failure. We know that you rely on our infrastructure to run your important workloads and that this incident does not meet our bar for reliability. For that we apologize. Your trust is important to us and we will continue to do all we can to earn and keep that trust.
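
Since the report notes that client retry behavior amplified the load, here is a generic sketch of bounded retries with exponential backoff and full jitter, which lets clients retry transient failures without contributing to a retry storm; the wrapped call and the limits are illustrative placeholders:

    # Generic sketch: bounded retries with exponential backoff and full jitter,
    # so client retries back off instead of amplifying an overload.
    # The call being retried and the limits are placeholders.
    import random
    import time

    def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except Exception:
                # In practice, catch only transient errors (e.g. HTTP 5xx), not all exceptions.
                if attempt == max_attempts:
                    raise                       # give up and surface the error
                delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
                time.sleep(random.uniform(0, delay))  # full jitter spreads retries out

    # Example usage with a placeholder request function:
    # result = call_with_backoff(lambda: make_request())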

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for nearly all instances. For the few remaining instances, we are working directly with the affected customers. No further updates will be posted, but we will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will also provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for affected instances in us-central1-a, -b, and -c as of 08:00 US/Pacific. Less than 4% of instances in us-central1-f are currently affected. We will provide another status update by 12:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for affected instances in us-central1-a, -b, and -c as of 08:00 US/Pacific. Less than 4% of instances in us-central1-f are currently affected and we expect a full resolution soon. We will provide another status update by 11:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues, located in us-central1-f, is still ongoing. We will provide another status update by 11:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues, located in us-central1-f, is still ongoing. We will provide another status update by 10:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues is still ongoing. Affected instances are located in us-central1-f. We will provide another status update by 10:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues is still ongoing. Affected instances are located in us-central1-f. We will provide another status update by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

We are still investigating network connectivity issues for a subset of instances that have not automatically recovered. We will provide another status update by 09:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with network connectivity to Google Compute Engine services should have been resolved for the majority of instances and we expect a full resolution in the near future. We will provide another status update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16009 - Networking issue with Google App Engine services

The issue with network connectivity to Google App Engine applications should have been resolved for all affected users as of 07:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Networking issue with Google App Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16016 - Networking issue with Google Compute Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

The issue with App Engine APIs being unavailable should have been resolved for nearly all affected projects as of 14:12 US/Pacific. We will follow up directly with the few remaining affected applications. We will also conduct a thorough internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. Finally, we will provide a more detailed analysis of this incident once we have completed this internal investigation.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are still investigating the issue with App Engine APIs being unavailable. Current data indicates that some projects are affected by this issue. We will provide another status update by 15:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

The issue with App Engine APIs being unavailable should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 15:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are experiencing an issue with App Engine APIs being unavailable beginning at Thursday, 2016-08-11 13:45 US/Pacific. Current data indicates that applications in us-central are affected by this issue.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are investigating reports of an issue with App Engine. We will provide more information by 02:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16015 - Networking issue with Google Compute Engine services

SUMMARY: On Friday 5 August 2016, some Google Cloud Platform customers experienced increased network latency and packet loss to Google Compute Engine (GCE), Cloud VPN, Cloud Router and Cloud SQL, for a duration of 99 minutes. If you were affected by this issue, we apologize. We intend to provide a higher level of reliability than this, and we are working to learn from this issue to make that a reality. DETAILED DESCRIPTION OF IMPACT: On Friday 5 August 2016 from 00:55 to 02:34 PDT, a number of services were disrupted: Some Google Compute Engine TCP and UDP traffic had elevated latency. Most ICMP, ESP, AH and SCTP traffic inbound from outside the Google network was silently dropped, resulting in existing connections being dropped and new connections timing out on connect. Most Google Cloud SQL first generation connections from sources external to Google failed with a connection timeout. Cloud SQL second generation connections may have seen higher latency but not failure. Google Cloud VPN tunnels remained connected; however, there was complete packet loss for data through the majority of tunnels. As Cloud Router BGP sessions traverse Cloud VPN, all sessions were dropped. All other traffic was unaffected, including internal connections between Google services and services provided via HTTP APIs. ROOT CAUSE: While removing a faulty router from service, a new procedure for diverting traffic from the router was used. This procedure applied a new configuration that resulted in announcing some Google Cloud Platform IP addresses from a single point of presence in the southwestern US. As these announcements were highly specific, they took precedence over the normal routes to Google's network and caused a substantial proportion of traffic for the affected network ranges to be directed to this one point of presence. This misrouting directly caused the additional latency some customers experienced. Additionally, this misconfiguration sent affected traffic to next-generation infrastructure that was undergoing testing. This new infrastructure was not yet configured to handle Cloud Platform traffic and applied an overly-restrictive packet filter. This blocked traffic on the affected IP addresses that was routed through the affected point of presence to Cloud VPN, Cloud Router, Cloud SQL first generation and GCE on protocols other than TCP and UDP. REMEDIATION AND PREVENTION: Mitigation began at 02:04 PDT when Google engineers reverted the network infrastructure change that caused this issue, and all traffic routing was back to normal by 02:34. The system involved was made safe against recurrences by fixing the erroneous configuration. This includes changes to BGP filtering to prevent this class of incorrect announcements. We are implementing additional integration tests for our routing policies to ensure configuration changes behave as expected before being deployed to production. Furthermore, we are improving our production telemetry external to the Google network to better detect peering issues that slip past our tests. We apologize again for the impact this issue has had on our customers.

Google Cloud Platform
RESOLVED: Incident 16015 - Networking issue with Google Compute Engine services

The issue with Google Cloud networking should have been resolved for all affected users as of 02:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16015 - Networking issue with Google Compute Engine services

We are still investigating the issue with Google Compute Engine networking. Current data also indicates impact on other GCP products including Cloud SQL. We will provide another status update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16015 - Networking issue with Google Compute Engine services

We are investigating a networking issue with Google Compute Engine. We will provide more information by 02:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18002 - Execution of BigQuery jobs is delayed, jobs are backing up in Pending state

We are experiencing an intermittent issue with BigQuery connections beginning at Thursday, 2016-Aug-04 13:49 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 4:00pm US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18019 - BigQuery connection issues

We are experiencing an intermittent issue with BigQuery connections beginning at Thursday, 2016-Aug-04 13:49 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 4:00pm US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

We are still investigating the issue with HTTP(S) Load Balancing returning 502 errors. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

The issue with HTTP(S) Load Balancing returning a small number of 502 errors should have been resolved for all affected instances as of 11:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

We are experiencing an issue with HTTP(S) Load Balancing returning a small number of 502 errors, beginning at Friday, 2016-07-29 around 08:45 US/Pacific. The maximum error rate for affected users was below 2%. Remediation has been applied that should stop these errors; we are monitoring the situation. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16013 - HTTP(S) Load Balancing 502 Errors

We are investigating an issue with 502 errors from HTTP(S) Load Balancing. We will provide more information by 11:05 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18018 - Streaming API issues with BigQuery

SUMMARY: On Monday 25 July 2016, the Google BigQuery Streaming API experienced elevated error rates for a duration of 71 minutes. We apologize if your service or application was affected by this and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Monday 25 July 2016 between 17:03 and 18:14 PDT, the BigQuery Streaming API returned HTTP 500 or 503 errors for 35% of streaming insert requests, with a peak error rate of 49% at 17:40. Customers who retried on error were able to mitigate the impact. Calls to the BigQuery jobs API showed an error rate of 3% during the incident but could generally be executed reliably with normal retry behaviour. Other BigQuery API calls were not affected. ROOT CAUSE: An internal Google service sent an unexpectedly high amount of traffic to the BigQuery Streaming API service. The internal service used a different entry point that was not subject to quota limits. Google's internal load balancers drop requests that exceed the capacity limits of a service. In this case, the capacity limit for the Streaming API service had been configured higher than its true capacity. As a result, the internal Google service was able to send too many requests to the Streaming API, causing it to fail for a percentage of responses. The Streaming API service sends requests to BigQuery's Metadata service in order to handle incoming Streaming requests. This elevated volume of requests exceeded the capacity of the Metadata service which resulted in errors for BigQuery jobs API calls. REMEDIATION AND PREVENTION: The incident started at 17:03. Our monitoring detected the issue at 17:20 as error rates started to increase. Our engineers blocked traffic from the internal Google client causing the overload shortly thereafter which immediately started to mitigate the impact of the incident. Error rates dropped to normal by 18:14. In order to prevent a recurrence of this type of incident we will enforce quotas for internal Google clients on requests to the Streaming service in order to prevent a single client sending too much traffic. We will also set the correct capacity limits for the Streaming API service based on improved load tests in order to ensure that internal clients cannot exceed the service's capacity. We apologize again to customers impacted by this incident.

Google Cloud Platform
RESOLVED: Incident 18018 - Streaming API issues with BigQuery

We experienced an issue with BigQuery streaming API returning 500/503 responses that has been resolved for all affected customers as of 18:11 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

The issue with Google App Engine URLFetch API service should have been resolved for all affected applications as of 02:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

We are still investigating an intermittent issue with Google App Engine URLFetch API calls to non-Google services failing with deadline exceeded errors. We will provide another status update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

We are currently investigating an intermittent issue with Google App Engine URLFetch API service. Fetch requests to non-Google related services are failing with deadline exceeded errors. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 02:30 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

SUMMARY: On Tuesday, 28 June 2016 Google Compute Engine SSD Persistent Disks experienced elevated write latency and errors in one zone for a duration of 211 minutes. We would like to apologize for the length and severity of this incident. We are taking immediate steps to prevent a recurrence and improve reliability in the future. DETAILED DESCRIPTION OF IMPACT: On Tuesday, 28 June 2016 from 18:15 to 21:46 PDT SSD Persistent Disks (PD) in zone us-central1-a experienced elevated latency and errors for most writes. Instances using SSD as their root partition were likely unresponsive. For instances using SSD as a secondary disk, IO latency and errors were visible to applications. Standard (i.e. non-SSD) PD in us-central1-a suffered slightly elevated latency and errors. Latency and errors also occurred when taking and restoring from snapshots of Persistent Disks. Disk creation operations also had elevated error rates, both for standard and SSD PD. Persistent Disks outside of us-central1-a were unaffected. ROOT CAUSE: Two concurrent routine maintenance events triggered a rebalancing of data by the distributed storage system underlying Persistent Disk. This rebalancing is designed to make maintenance events invisible to the user, by redistributing data evenly around unavailable storage devices and machines. A previously unseen software bug, triggered by the two concurrent maintenance events, meant that disk blocks which became unused as a result of the rebalance were not freed up for subsequent reuse, depleting the available SSD space in the zone until writes were rejected. REMEDIATION AND PREVENTION: The issue was resolved when Google engineers reverted one of the maintenance events that triggered the issue. A fix for the underlying issue is already being tested in non-production zones. To reduce downtime related to similar issues in future, Google engineers are refining automated monitoring such that, if this issue were to recur, engineers would be alerted before users saw impact. We are also improving our automation to better coordinate different maintenance operations on the same zone to reduce the time it takes to revert such operations if necessary.
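
As an illustration of how elevated persistent-disk write latency can be observed from inside an instance, here is a crude probe sketch that times small fsync'd writes to a file on the disk under test; the path, sample count, and warning threshold are placeholders:

    # Sketch of a crude write-latency probe: times small fsync'd writes to a file
    # on the disk under test. Path, sample count, and threshold are placeholders.
    import os
    import time

    TEST_FILE = "/mnt/ssd-pd/latency-probe.tmp"  # placeholder path on the disk under test
    SAMPLES = 20
    WARN_MS = 100.0                              # warn if a write+fsync exceeds 100 ms

    def measure_write_latencies():
        latencies = []
        fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
        try:
            for _ in range(SAMPLES):
                start = time.monotonic()
                os.write(fd, b"x" * 4096)   # one 4 KiB write
                os.fsync(fd)                # force it down to the device
                latencies.append((time.monotonic() - start) * 1000.0)
                time.sleep(0.5)
        finally:
            os.close(fd)
            os.unlink(TEST_FILE)
        return latencies

    if __name__ == "__main__":
        for ms in measure_write_latencies():
            print("%.1f ms %s" % (ms, "SLOW" if ms > WARN_MS else "ok"))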

Google Cloud Platform
RESOLVED: Incident 16005 - Issue with Developers Console

SUMMARY: On Thursday 9 June 2016, the Google Cloud Console was unavailable for a duration of 91 minutes, with significant performance degradation in the preceding half hour. Although this did not affect user resources running on the Google Cloud Platform, we appreciate that many of our customers rely on the Cloud Console to manage those resources, and we apologize to everyone who was affected by the incident. This report is to explain to our customers what went wrong, and what we are doing to make sure that it does not happen again. DETAILED DESCRIPTION OF IMPACT: On Thursday 9 June 2016 from 20:52 to 22:23 PDT, the Google Cloud Console was unavailable. Users who attempted to connect to the Cloud Console observed high latency and HTTP server errors. Many users also observed increasing latency and error rates during the half hour before the incident. Google Cloud Platform resources were unaffected by the incident and continued to run normally. All Cloud Platform resource management APIs remained available, allowing Cloud Platform resources to be managed via the Google Cloud SDK or other tools. ROOT CAUSE: The Google Cloud Console runs on Google App Engine, where it uses internal functionality that is not used by customer applications. Google App Engine version 1.9.39 introduced a bug in one internal function which affected Google Cloud Console instances, but not customer-owned applications, and thus escaped detection during testing and during initial rollout. Once enough instances of Google Cloud Console had been switched to 1.9.39, the console was unavailable and internal monitoring alerted the engineering team, who restored service by starting additional Google Cloud Console instances on 1.9.38. During the entire incident, customer-owned applications were not affected and continued to operate normally. To prevent a future recurrence, Google engineers are augmenting the testing and rollout monitoring to detect low error rates on internal functionality, complementing the existing monitoring for customer applications. REMEDIATION AND PREVENTION: When the issue was provisionally identified as a specific interaction between Google App Engine version 1.9.39 and the Cloud Console, App Engine engineers brought up capacity running the previous App Engine version and transferred the Cloud Console to it, restoring service at 22:23 PDT. The low-level bug that triggered the error has been identified and fixed. Google engineers are increasing the fidelity of the rollout monitoring framework to detect error signatures that suggest negative interactions of individual apps with a new App Engine release, even when the signatures are invisible in global App Engine performance statistics. We apologize again for the inconvenience this issue caused our customers.
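
As the report notes, the resource management APIs remained available, so resources could still be managed outside the Console. Here is a minimal sketch of listing Compute Engine instances through the API, assuming the google-api-python-client library and application default credentials; the project and zone values are placeholders:

    # Sketch: list Compute Engine instances through the API when the Console is
    # unavailable. Assumes google-api-python-client and application default
    # credentials; project and zone are placeholders.
    from googleapiclient import discovery

    project = "my-project"      # placeholder
    zone = "us-central1-f"      # placeholder

    compute = discovery.build("compute", "v1")
    result = compute.instances().list(project=project, zone=zone).execute()
    for instance in result.get("items", []):
        print(instance["name"], instance["status"])

The Cloud SDK equivalent is gcloud compute instances list.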

Google Cloud Platform
RESOLVED: Incident 16012 - Newly created instances may be experiencing packet loss.

SUMMARY: On Wednesday 29 June 2016, newly created Google Compute Engine instances and newly created network load balancers in all zones were partially unreachable for a duration of 106 minutes. We know that many customers depend on the ability to rapidly deploy and change configurations, and apologize for our failure to provide this to you during this time. DETAILED DESCRIPTION OF IMPACT: On Wednesday 29 June 2016, from 11:58 until 13:44 US/Pacific, new Google Compute Engine instances and new network load balancers were partially unreachable via the network. In addition, changes to existing network load balancers were only partially applied. The level of unreachability depended on traffic path rather than instance or load balancer location. Overall, the average impact on new instances was 50% of traffic in the US and around 90% in Asia and Europe. Existing and unchanged instances and load balancers were unaffected. ROOT CAUSE: At 11:58 US/Pacific, a scheduled upgrade to Google’s network control system started, introducing an additional access control check for network configuration changes. This inadvertently removed the access of GCE’s management system to network load balancers in this environment. Only a fraction of Google's network locations require this access as an older design has an intermediate component doing access updates. As a result, these locations did not receive updates for new and changed instances or load balancers. The change was only tested at network locations that did not require the direct access, which resulted in the issue not being detected during testing and canarying and being deployed globally. REMEDIATION AND PREVENTION: After identifying the root cause, the access control check was modified to allow access by GCE’s management system. The issue was resolved when this modification was fully deployed. To prevent future incidents, the network team is making several changes to their deployment processes. This will improve the level of testing and canarying to catch issues earlier, especially where an issue only affects a subset of the environments at Google. The deployment process will have the rollback procedure enhanced to allow the quickest possible resolution for future incidents. The access control system that was at the root of this issue will also be modified to improve operations that interact with it. For example, it will be integrated with a Google-wide change logging system to allow faster detection of issues caused by access control changes. It will also be outfitted with a dry run mode to allow consequences of changes to be tested during development time. Once again we would like to apologize for falling below the level of service you rely on.
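
Given that newly created instances and load balancers were partially unreachable, here is a hedged sketch of one way to verify that a freshly created instance accepts TCP connections before directing traffic to it; the address, port, and timeouts are placeholders:

    # Sketch: wait until a newly created instance accepts TCP connections before
    # sending it traffic. Host, port, and timeout values are placeholders.
    import socket
    import time

    def wait_for_tcp(host, port, timeout_s=300, interval_s=5):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=3):
                    return True     # instance is reachable on this path
            except OSError:
                time.sleep(interval_s)
        return False

    if __name__ == "__main__":
        if wait_for_tcp("10.240.0.42", 80):   # placeholder internal IP and port
            print("instance reachable; safe to add to serving")
        else:
            print("instance still unreachable; hold off on sending traffic")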

Google Cloud Platform
RESOLVED: Incident 16012 - Newly created instances may be experiencing packet loss.

The issue with new Google Compute Engine instances experiencing packet loss on startup should have been resolved for all affected instances as of 13:57 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

The issue with new Google Compute Engine instances experiencing packet loss on startup should have been resolved for some instances and we expect a full resolution in the near future. We will provide another status update by 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

We are experiencing an issue with new Google Compute Engine instances experiencing packet loss on startup, beginning at Wednesday, 2016-06-29 12:18 US/Pacific. Current data indicates that 100% of instances are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 1:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

We are investigating reports of an issue with Compute Engine. We will provide more information by 01:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

The issue with Compute Engine SSD persistent disk latency in zone US-Central1-a should have been resolved for all projects as of 21:57 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

The issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

We are still investigating the issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

We are investigating an issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a. We will provide more information by 21:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

The issue with Cloud Console not displaying lists of projects should have been resolved for all affected projects as of 17:39 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

The issue with Cloud Console not displaying lists of projects should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

We are experiencing an ongoing issue with the Cloud Console not displaying lists of projects beginning at Tuesday, 2016-06-28 14:29 US/Pacific. Current data indicates that approximately 10% of projects are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 17:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

The issue with Cloud Console serving errors should have been resolved for all affected users as of 11:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

The issue with Cloud Console serving errors should be resolved for many users and we expect a full resolution shortly. We will provide another status update by 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

We are experiencing an issue with Cloud Console serving errors beginning at Tuesday, 2016-06-14 08:49 US/Pacific. Current data indicates that errors are intermittent but may affect all users. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 10:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

We are investigating reports of an issue with Google Cloud Console. We will provide more information by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Pantheon failing

We are investigating reports of an issue with Pantheon. We will provide more information by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

The issue with Developers Console should have been resolved for all affected users as of 22:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

We are experiencing an issue with Developers Console beginning at Thursday, 2016-06-09 21:09 US/Pacific. Current data indicates that all users are affected by this issue. The gcloud command line tool can be used as a workaround for those who need to manage their resources immediately. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 22:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

We are investigating an issue with Developers Console. We will provide more information by 21:40 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16004 - We are investigating an issue with Developers Console

We are investigating an issue with Developers Console. We will provide more information by 21:40 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18016 - Intermittent connectivity issues with BigQuery

An incident report for this issue is available at https://status.cloud.google.com/incident/bigquery/18015, as these issues shared a common cause.

Google Cloud Platform
RESOLVED: Incident 18015 - Google BigQuery issues

SUMMARY: On Wednesday 18 May 2016 the BigQuery API was unavailable for two periods totaling 31 minutes. We understand how important access to your data stored in BigQuery is and we apologize for the impact this had on you. We have investigated the incident to determine how we can mitigate future issues and provide better service for you in the future. DETAILED DESCRIPTION OF IMPACT: On Wednesday 18 May 2016 from 11:50 until 12:15 PDT all non-streaming BigQuery API calls failed, and additionally from 14:41 until 14:47, 70% of calls failed. An error rate of 1% occurred from 11:28 until 15:34. API calls affected by this issue experienced elevated latency and eventually returned an HTTP 500 status with an error message of "Backend Error". The BigQuery web console was also unavailable during these periods. The streaming API and BigQuery export of logs and usage data were unaffected. ROOT CAUSE: In 2015 BigQuery introduced datasets located in Europe. This required infrastructure to allow BigQuery API calls to be routed to an appropriate zone. This infrastructure was deployed uneventfully and has been operating in production for some time. The errors on 18 May were caused when a new configuration was deployed to improve routing of APIs, and then subsequently rolled back. The engineering team has made changes to the routing configuration for BigQuery API calls to prevent this issue from recurring, and to more rapidly detect elevated error levels in BigQuery API calls in the future. Finally, we would like to apologize for this issue - particularly its scope and duration. We know that BigQuery is a critical component of many GCP deployments, and we are committed to continually improving its availability.
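
Transient HTTP 500 "Backend Error" responses like the ones described above are typically handled on the client side with retries and exponential backoff. The sketch below is a generic pattern, not part of any BigQuery client library; call_api and BackendError are placeholders for your own client call and its transient-error exception.

    # Minimal client-side retry sketch for transient HTTP 500 responses.
    # call_api and BackendError are placeholders, not real library symbols.
    import random
    import time

    class BackendError(Exception):
        """Placeholder for a transient HTTP 500 error from the API."""

    def call_with_backoff(call_api, max_attempts=5, base_delay=1.0):
        for attempt in range(max_attempts):
            try:
                return call_api()
            except BackendError:
                if attempt == max_attempts - 1:
                    raise  # give up after the final attempt
                # Exponential backoff with jitter to avoid synchronized retries.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))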

Google Cloud Platform
UPDATE: Incident 16011 - Snapshotting of some PDs in us-central1-a is failing

We are investigating reports that snapshot operations for some PDs in us-central1-a are failing. We wanted to let you know that we are aware of the issue; no action is required on your end.

Google Cloud Platform
UPDATE: Incident 16031 - Google Cloud Storage POST errors

From Wednesday, 2016-05-26 14:57 until 15:17 US/Pacific, the Google Cloud Storage XML and JSON APIs were unavailable for 72% of POST requests in the US. Requests originating outside of the US or using a method other than POST were unaffected. Affected queries returned error 500. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

The issue with networking in us-central1-c should have been resolved for all affected instances as of 16:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

The issue with networking in us-central1-c should have been resolved for the majority of affected instances and we expect a full resolution in the near future. Customers still experiencing networking issues in us-central1-c can perform a Stop/Start cycle on their instances to regain connectivity. We will provide another status update by 17:00 US/Pacific with current details.
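
A minimal sketch of that stop/start cycle via the Compute Engine API is shown below, assuming the google-api-python-client library and application default credentials; the project, zone and instance names are placeholders. The same cycle can also be performed with the gcloud compute instances stop and start commands.

    # Sketch only: stop/start cycle for a GCE instance using the Compute Engine
    # REST API. Assumes application default credentials are available.
    import time
    from googleapiclient.discovery import build

    def stop_start(project, zone, instance):
        compute = build("compute", "v1")
        compute.instances().stop(project=project, zone=zone, instance=instance).execute()
        # Wait until the instance reports TERMINATED before starting it again.
        while True:
            state = compute.instances().get(project=project, zone=zone, instance=instance).execute()
            if state["status"] == "TERMINATED":
                break
            time.sleep(5)
        compute.instances().start(project=project, zone=zone, instance=instance).execute()

    if __name__ == "__main__":
        stop_start("my-project", "us-central1-c", "my-instance")  # placeholder names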

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are still investigating the issue with networking in us-central1-c. Current data indicates that approximately 0.5% of instances in the zone are affected by this issue. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are experiencing an issue with networking in us-central1-c beginning at Friday, 2016-05-20 13:08. Some instances will be inaccessible by internal and external IP addresses. No other zones are affected by this incident. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are investigating an issue with networking in us-central1-c. We will provide more information by 14:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17010 - Issues with Cloud SQL First Generation instances

SUMMARY: On Tuesday 17 May 2016, connections to Cloud SQL instances in the Central United States region experienced an elevated error rate for 130 minutes. We apologize to customers who were affected by this incident. We know that reliability is critical for you and we are committed to learning from incidents in order to improve the future reliability of our service. DETAILED DESCRIPTION OF IMPACT: On Tuesday 17 May 2016 from 04:15 to 06:12 and from 08:24 to 08:37 PDT, connections to Cloud SQL instances in the us-central1 region experienced an elevated error rate. The average rate of connection errors to instances in this region was 10.5% during the first part of the incident and 1.9% during the second part of the incident. 51% of in-use Cloud SQL instances in the affected region were impacted during the first part of the incident; 4.2% of in-use instances were impacted during the second part. Cloud SQL Second Generation instances were not impacted. ROOT CAUSE: Clients connect to a Cloud SQL frontend service that forwards the connection to the correct MySQL database server. The frontend calls a separate service to start up a new Cloud SQL instance if a connection arrives for an instance that is not running. This incident was triggered by a Cloud SQL instance that could not successfully start. The incoming connection requests for this instance resulted in a large number of calls to the start-up service. This caused increased memory usage of the frontend service as start-up requests backed up. The frontend service eventually failed under load and dropped some connection requests due to this memory pressure. REMEDIATION AND PREVENTION: Google received its first customer report at 04:39 PDT and we tried to remediate the problem by redirecting new connections to different datacenters. This effort proved unsuccessful as the start-up capacity was used up there also. At 06:12 PDT, we fixed the issue by blocking all incoming connections to the misbehaving Cloud SQL instance. At 08:24 PDT, we moved this instance to a separate pool of servers and restarted it. However, the separate pool of servers did not provide sufficient isolation for the service that starts up instances, causing the incident to recur. We shut down the instance at 08:37 PDT which resolved the incident. To prevent incidents of this type in the future, we will ensure that a single Cloud SQL instance cannot use up all the capacity of the start-up service. In addition, we will improve our monitoring in order to detect this type of issue more quickly. We apologize for the inconvenience this issue caused our customers.
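
The prevention item above, ensuring that a single instance cannot consume all of the start-up service's capacity, can be illustrated with a small per-key limiter. This is a hypothetical sketch, not Cloud SQL's implementation; the class, limits and handler below are invented for illustration.

    # Illustrative per-key limiter: cap how many in-flight start-up requests any
    # single instance may hold, so one misbehaving instance cannot exhaust the
    # shared capacity. Names and limits are hypothetical.
    import threading
    from collections import defaultdict

    class PerKeyLimiter:
        def __init__(self, per_key_limit):
            self._limit = per_key_limit
            self._in_flight = defaultdict(int)
            self._lock = threading.Lock()

        def try_acquire(self, key):
            with self._lock:
                if self._in_flight[key] >= self._limit:
                    return False  # this instance already uses its share; reject
                self._in_flight[key] += 1
                return True

        def release(self, key):
            with self._lock:
                self._in_flight[key] -= 1

    limiter = PerKeyLimiter(per_key_limit=10)

    def handle_connection(instance_id, start_instance):
        if not limiter.try_acquire(instance_id):
            raise RuntimeError("too many pending start-ups for this instance")
        try:
            start_instance(instance_id)
        finally:
            limiter.release(instance_id)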

Google Cloud Platform
UPDATE: Incident 18016 - Intermittent connectivity issues with BigQuery

The intermittent connectivity issues with BigQuery should have been resolved. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18016 - Intermittent connectivity issues with BigQuery

We are currently investigating intermittent connectivity issues with BigQuery that are affecting some of our customers. We'll provide another update at 5:30 PM PDT.

Google Cloud Platform
UPDATE: Incident 18015 - Google BigQuery issues

The issue with BigQuery API should have been resolved for all affected projects as of 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18015 - Google BigQuery issues

We are currently investigating an issue with the BigQuery API. We'll provide an update at 12:30 PDT.

Google Cloud Platform
RESOLVED: Incident 17010 - Issues with Cloud SQL First Generation instances

The issue with Cloud SQL should have been resolved for all affected Cloud SQL instances as of 06:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17010 - Issues with Cloud SQL First Generation instances

The issue is confirmed to be confined to a subset of Cloud SQL First Generation instances. We have started to apply mitigation measures. We will provide the next update by 07:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16005 - Issues with App Engine applications connecting to Cloud SQL

The issue is confirmed to affect a subset of Cloud SQL First Generation instances. All further updates will be provided here: https://cloud-status.googleplex.com/incident/cloud-sql/17010

Google Cloud Platform
UPDATE: Incident 17010 - Issues with Cloud SQL First Generation instances

We are currently experiencing an issue with Cloud SQL that affects Cloud SQL First Generation instances, and applications depending on them.

Google Cloud Platform
UPDATE: Incident 16005 - Issues with App Engine applications connecting to Cloud SQL

We are currently investigating an issue with App Engine that affects applications using Cloud SQL. We will provide more information about the issue by 06:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

The issue with elevated latency on GCE management operations on us-central1-a has been resolved as of 10:24 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

The applied mitigation measures have been successful and, as a result, latency on GCE management operations in us-central1-a has returned to normal levels. The customer-visible impact should now be over. We are finalizing the investigation of the root cause of this issue and applying further measures to prevent this issue from happening again in the future.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

We are continuing to apply mitigation measures and are seeing further latency improvements on GCE management operations in us-central1-a. We will provide the next update by 04:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

We have started to apply mitigation measures and are seeing latency improvements on GCE management operations in us-central1-a. We are continuing to work on resolving this issue and will provide the next status update by 02:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

We are still investigating the elevated latency on GCE management operations on us-central1-a. We will provide another status update by 01:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Elevated latency for operations on us-central1-a

We are investigating elevated latency on GCE management operations on us-central1-a. Running instances and networking continue to operate normally. If you need to create new resources we recommend using other zones within this region for the time being. We will provide more information by 23:50 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16003 - Authentication issues with Google Cloud Platform APIs

SUMMARY: On Tuesday 19 April 2016, 1.1% of all requests to obtain new Google OAuth 2.0 tokens failed for a period of 70 minutes. Users of affected applications experienced authentication errors. This incident affected all Google services that use OAuth. We apologize to any customer whose application was impacted by this incident. We take outages very seriously and are strongly focused on learning from these incidents to improve the future reliability of our services. DETAILED DESCRIPTION OF IMPACT: On Tuesday 19 April 2016 from 06:12 to 07:22 PDT, the Google OAuth 2.0 service returned HTTP 500 errors for 1.1% of all requests. OAuth tokens are granted to applications on behalf of users. The application requesting the token is identified by its client ID. Google's OAuth service looks up the application associated with a client ID before granting the new token. If the mapping from client ID to application is not cached by Google's OAuth service, then it is fetched from a separate client ID lookup service. The client ID lookup service dropped some requests during the incident, which caused those token requests to fail. The token request failures predominantly affected applications which had not populated the client ID cache because they were less frequently used. Such infrequently-used applications may have experienced high error rates on token requests for their users, though the overall average error rate was 1.1% measured across all applications. Once access tokens were obtained, they could be used without problems. Tokens issued before the incident continued to function until they expired. Any requests for tokens that did not use a client ID were not affected by this incident. ROOT CAUSE: Google's OAuth system depends on an internal service to look up details of the client ID that is making the token request. During this incident, the client ID lookup service had insufficient capacity to respond to all requests to look up client ID details. Before the incident started, the client ID lookup service had been running close to its rated capacity. In an attempt to prevent a future problem, Google SREs triggered an update to add capacity to the service at 05:30. Normally adding capacity does not cause a restart of the service. However, the update process had a misconfiguration which caused a rolling restart. While servers were restarting, the capacity of the service was reduced further. In addition, the restart triggered a bug in a specific client's code that caused its cache to be invalidated, leading to a spike in requests from that client. Google's systems are designed to throttle clients in these situations. However, the throttling was insufficient to prevent overloading of the client ID lookup service. Google's software load balancer was configured to drop a fraction of incoming requests to the client ID lookup service during overload in order to prevent cascading failure. In this case, the load balancer was configured too conservatively and dropped more traffic than needed. REMEDIATION AND PREVENTION: Google's internal monitoring systems detected the incident at 06:28 and our engineers isolated the root cause as an overload in the client ID lookup service at 06:47. We added additional capacity to work around the issue at 07:07 and the error rate dropped to normal levels by 07:22. In order to prevent future incidents of this type from occurring, we are taking several actions. 1. We will improve our monitoring to detect immediately when usage of the client ID lookup service gets close to its capacity. 2. We will ensure that the client ID lookup service always has more than 10% spare capacity at peak. 3. We will change the load balancer configuration so that it will not uniformly drop traffic when overloaded. Instead, the load balancer will throttle the clients that are causing traffic spikes. 4. We will change the update process to minimize the capacity that is temporarily lost during an update. 5. We will fix the client bug that caused its client ID cache to be invalidated.
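
On the application side, the main defense against a brief elevation in token-endpoint errors is to cache access tokens until shortly before they expire and to retry failed token fetches with backoff. The sketch below shows that pattern only; fetch_token is a placeholder for your OAuth library's token request, and the margin and attempt counts are hypothetical.

    # Client-side sketch: cache OAuth access tokens until near expiry and retry
    # failed fetches with exponential backoff. fetch_token is a placeholder.
    import time

    class TokenCache:
        def __init__(self, fetch_token, refresh_margin=300):
            self._fetch = fetch_token          # returns (access_token, expires_in_seconds)
            self._margin = refresh_margin      # refresh this many seconds before expiry
            self._token = None
            self._expires_at = 0.0

        def get(self, max_attempts=4):
            if self._token and time.time() < self._expires_at - self._margin:
                return self._token             # reuse the cached, still-valid token
            for attempt in range(max_attempts):
                try:
                    token, expires_in = self._fetch()
                    self._token = token
                    self._expires_at = time.time() + expires_in
                    return token
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(2 ** attempt)   # simple exponential backoff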

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Zones us-central1-b, europe-west1-c and asia-east1-b

An issue is ongoing with a very small number of Google Compute Engine instances hanging during startup. The root cause has been established and mitigation is in progress. Affected instances can be recovered by manually stopping them and starting them again. We were able to identify affected projects and will notify appropriate project contacts within the next 60 minutes. Current data indicates that less than 0.001% of projects were affected by the issue. No further public communications will be made on this issue.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Zones us-central1-b, europe-west1-c and asia-east1-b

We are still investigating the issue with Google Compute Engine. We will provide another status update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Zones us-central1-b, europe-west1-c and asia-east1-b

We are still investigating the issue with Google Compute Engine. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Regions us-central1-b, europe-west1-c and asia-east1-a

We are still investigating the issue with Google Compute Engine. We will provide another status update by 21:00 US/Pacific with current details. Affected zones correction: Zone [asia-east1-b] is currently affected but zone [asia-east1-a] is not affected.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Regions us-central1-b, europe-west1-c and asia-east1-a

We are still investigating this problem and we'll provide a new update at 7:00 PM PT.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Regions us-central1-b, europe-west1-c and asia-east1-a

We are continuing to investigate this problem and we'll provide a new update at 6:00 PM PT.

Google Cloud Platform
UPDATE: Incident 16008 - We are investigating a problem with GCE Instances that were created before mid-February in Regions us-central1-b, europe-west1-c and asia-east1-a

We are currently investigating a problem with GCE instances created before mid-February in zones asia-east1-b and europe-west1-c where instance restarts will render them unavailable.

Google Cloud Platform
UPDATE: Incident 16003 - Authentication issues with Google Cloud Platform APIs

The issue with Authentication Services should have been resolved for all affected projects as of 07:24 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16003 - Authentication issues with Google Cloud Platform APIs

We are still investigating the issue with Authentication services for Google Cloud Platform APIs. We will provide another status update by 08:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16007 - Connectivity issues in all regions

SUMMARY: On Monday, 11 April, 2016, Google Compute Engine instances in all regions lost external connectivity for a total of 18 minutes, from 19:09 to 19:27 Pacific Time. We recognize the severity of this outage, and we apologize to all of our customers for allowing it to occur. As of this writing, the root cause of the outage is fully understood and GCE is not at risk of a recurrence. In this incident report, we are sharing the background, root cause and immediate steps we are taking to prevent a future occurrence. Additionally, our engineering teams will be working over the next several weeks on a broad array of prevention, detection and mitigation systems intended to add additional defense in depth to our existing production safeguards. Finally, to underscore how seriously we are taking this event, we are offering GCE and VPN service credits to all impacted GCP applications equal to (respectively) 10% and 25% of their monthly charges for GCE and VPN. These credits exceed what we promise in the Compute Engine Service Level Agreement (https://cloud.google.com/compute/sla) or Cloud VPN Service Level Agreement (https://cloud.google.com/vpn/sla), but are in keeping with the spirit of those SLAs and our ongoing intention to provide a highly-available Google Cloud product suite to all our customers. DETAILED DESCRIPTION OF IMPACT: On Monday, 11 April, 2016 from 19:09 to 19:27 Pacific Time, inbound internet traffic to Compute Engine instances was not routed correctly, resulting in dropped connections and an inability to reconnect. The loss of inbound traffic caused services depending on this network path to fail as well, including VPNs and L3 network load balancers. Additionally, the Cloud VPN offering in the asia-east1 region experienced the same traffic loss starting at an earlier time of 18:14 Pacific Time but the same end time of 19:27. This event did not affect Google App Engine, Google Cloud Storage, and other Google Cloud Platform products; it also did not affect internal connectivity between GCE services including VMs, HTTP and HTTPS (L7) load balancers, and outbound internet traffic. TIMELINE and ROOT CAUSE: Google uses contiguous groups of internet addresses -- known as IP blocks -- for Google Compute Engine VMs, network load balancers, Cloud VPNs, and other services which need to communicate with users and systems outside of Google. These IP blocks are announced to the rest of the internet via the industry-standard BGP protocol, and it is these announcements which allow systems outside of Google’s network to ‘find’ GCP services regardless of which network they are on. To maximize service performance, Google’s networking systems announce the same IP blocks from several different locations in our network, so that users can take the shortest available path through the internet to reach their Google service. This approach also enhances reliability; if a user is unable to reach one location announcing an IP block due to an internet failure between the user and Google, this approach will send the user to the next-closest point of announcement. This is part of the internet’s fabled ability to ‘route around’ problems, and it masks or avoids numerous localized outages every week as individual systems in the internet have temporary problems. At 14:50 Pacific Time on April 11th, our engineers removed an unused GCE IP block from our network configuration, and instructed Google’s automated systems to propagate the new configuration across our network. 
By itself, this sort of change was harmless and had been performed previously without incident. However, on this occasion our network configuration management software detected an inconsistency in the newly supplied configuration. The inconsistency was triggered by a timing quirk in the IP block removal - the IP block had been removed from one configuration file, but this change had not yet propagated to a second configuration file also used in network configuration management. In attempting to resolve this inconsistency, the network management software is designed to ‘fail safe’ and revert to its current configuration rather than proceeding with the new configuration. However, in this instance a previously-unseen software bug was triggered, and instead of retaining the previous known good configuration, the management software instead removed all GCE IP blocks from the new configuration and began to push this new, incomplete configuration to the network. One of our core principles at Google is ‘defense in depth’, and Google’s networking systems have a number of safeguards to prevent them from propagating incorrect or invalid configurations in the event of an upstream failure or bug. These safeguards include a canary step where the configuration is deployed at a single site and that site is verified to still be working correctly, and a progressive rollout which makes changes to only a fraction of sites at a time, so that a novel failure can be caught at an early stage before it becomes widespread. In this event, the canary step correctly identified that the new configuration was unsafe. Crucially, however, a second software bug in the management software did not propagate the canary step’s conclusion back to the push process, and thus the push system concluded that the new configuration was valid and began its progressive rollout. As the rollout progressed, those sites which had been announcing GCE IP blocks ceased to do so when they received the new configuration. The fault tolerance built into our network design worked correctly and sent GCE traffic to the remaining sites which were still announcing GCE IP blocks. As more and more sites stopped announcing GCE IP blocks, our internal monitoring picked up two anomalies: first, the Cloud VPN in asia-east1 stopped functioning at 18:14 because it was announced from fewer sites than GCE overall, and second, user latency to GCE was anomalously rising as more and more users were sent to sites which were not close to them. Google’s Site Reliability Engineers started investigating the problem when the first alerts were received, but were still trying to determine the root cause 53 minutes later when the last site announcing GCE IP blocks received the configuration at 19:07. With no sites left announcing GCE IP blocks, inbound traffic from the internet to GCE dropped quickly, reaching >95% loss by 19:09. Internal monitors generated dozens of alerts in the seconds after the traffic loss became visible at 19:08, and the Google engineers who had been investigating a localized failure of the asia-east1 VPN now knew that they had a widespread and serious problem. They did precisely what we train for, and decided to revert the most recent configuration changes made to the network even before knowing for sure what the problem was. This was the correct action, and the time from detection to decision to revert to the end of the outage was thus just 18 minutes.
With the immediate outage over, the team froze all configuration changes to the network, and worked in shifts overnight to ensure first that the systems were stable and that there was no remaining customer impact, and then to determine the root cause of the problem. By 07:00 on April 12 the team was confident that they had established the root cause as a software bug in the network configuration management software. DETECTION, REMEDIATION AND PREVENTION: With both the incident and the immediate risk now over, the engineering team’s focus is on prevention and mitigation. There are a number of lessons to be learned from this event -- for example, that the safeguard of a progressive rollout can be undone by a system designed to mask partial failures -- which yield similarly-clear actions which we will take, such as monitoring directly for a decrease in capacity or redundancy even when the system is still functioning properly. It is our intent to enumerate all the lessons we can learn from this event, and then to implement all of the changes which appear useful. As of the time of this writing in the evening of 12 April, there are already 14 distinct engineering changes planned spanning prevention, detection and mitigation, and that number will increase as our engineering teams review the incident with other senior engineers across Google in the coming week. Concretely, the immediate steps we are taking include: * Monitoring targeted GCE network paths to detect if they change or cease to function; * Comparing the IP block announcements before and after a network configuration change to ensure that they are identical in size and coverage; * Semantic checks for network configurations to ensure they contain specific Cloud IP blocks. A FINAL WORD: We take all outages seriously, but we are particularly concerned with outages which affect multiple zones simultaneously because it is difficult for our customers to mitigate the effect of such outages. This incident report is both longer and more detailed than usual precisely because we consider the April 11th event so important, and we want you to understand why it happened and what we are doing about it. It is our hope that, by being transparent and providing considerable detail, we both help you to build more reliable services, and we demonstrate our ongoing commitment to offering you a reliable Google Cloud platform. Sincerely, Benjamin Treynor Sloss | VP 24x7 | Google
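
One of the listed checks, comparing IP block announcements before and after a configuration change, can be sketched as a simple set comparison. This is an illustration only, not Google's rollout tooling; the prefixes below are example values.

    # Illustrative check: compare the set of IP blocks announced before and
    # after a configuration change and refuse the push if any block disappears.
    import ipaddress

    def lost_announcements(before, after):
        """Return prefixes announced before the change but missing afterwards."""
        before_nets = {ipaddress.ip_network(p) for p in before}
        after_nets = {ipaddress.ip_network(p) for p in after}
        return sorted(before_nets - after_nets, key=str)

    if __name__ == "__main__":
        before = ["104.154.0.0/15", "130.211.0.0/16", "146.148.0.0/17"]  # example prefixes
        after = ["104.154.0.0/15"]           # most blocks missing: block the rollout
        missing = lost_announcements(before, after)
        if missing:
            print("REFUSING PUSH, announcements lost:", ", ".join(map(str, missing)))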

Google Cloud Platform
RESOLVED: Incident 16007 - Connectivity issues in all regions

The issue with networking should have been resolved for all affected services as of 19:27 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident on the Cloud Status Dashboard once we have completed our internal investigation. For everyone who is affected, we apologize for any inconvenience you experienced.

Google Cloud Platform
UPDATE: Incident 16007 - Connectivity issues in all regions

The issue with networking should have been resolved for all affected services as of 19:27 US/Pacific. We're continuing to monitor the situation. We will provide another status update by 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Connectivity issues with Cloud VPN in asia-east1

Current data indicates that there are severe network connectivity issues in all regions. Google engineers are currently working to resolve this issue. We will post a further update by 20:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16007 - Connectivity issues with Cloud VPN in asia-east1

We are experiencing an issue with Cloud VPN in asia-east1 beginning at Monday, 2016-04-11 18:25 US/Pacific. Current data suggests that all Cloud VPN traffic in this region is affected. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Connectivity issues with Cloud VPN in asia-east1

We are investigating reports of an issue with Cloud VPN in asia-east1. We will provide more information by 19:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18014 - BigQuery streaming inserts delayed in EU

SUMMARY On Wednesday 5 April and Thursday 6 April 2016, some streaming inserts to BigQuery datasets in the EU were delayed by up to 16 hours and 46 minutes. We sincerely apologise for these delays and we are addressing the root causes of the issue as part of our commitment to BigQuery's availability and responsiveness. DETAILED DESCRIPTION OF IMPACT From 15:16 PDT to 23:40 on Wednesday 05 April 2016, some BigQuery streaming inserts to datasets in the EU did not immediately become available to subsequent queries. From 23:40, new streaming inserts worked normally, but some previously delayed inserts remained unavailable to BigQuery queries. Virtually all delayed inserts were committed and available by 07:52 on Thursday 06 April. The event was accompanied by slightly elevated error rates (< 0.7% failure rate) and latency (< 50% latency increase) of API calls for streaming inserts. ROOT CAUSE BigQuery streaming inserts are buffered in one of Google's large-scale storage systems before being committed to the main BigQuery repository. At 15:16 PDT on Wednesday 05 April, this storage system began to experience issues in one of the datacenters that host BigQuery datasets in the EU, blocking BigQuery's I/O operations for streaming inserts. The impact reached monitoring threshold levels after a few hours, and at 18:29 automated monitoring systems sent alerts to the Google engineering team, but the monitoring systems displayed the alerts in a way that disguised the scale of the issue and made it seem to be a low priority. This error was identified at 23:01, and Google engineers began routing all European streaming insert traffic to another EU datacenter, restoring normal insert behaviour by 23:40. The delayed inserts in the system were committed when the underlying storage system was restored to service. REMEDIATION AND PREVENTION Google engineers are addressing the technical root cause of the incident by increasing the fault-tolerance of I/O between BigQuery and the storage system that buffers streaming inserts. The principal remediation efforts for this event, however, are focused on the systems monitoring, alert escalation, and data visualisation issues which were involved. Google engineers are updating the BigQuery monitoring systems to more clearly represent the scale of system behaviour, and modifying internal procedures and documentation accordingly.
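
For client applications, the usual defensive pattern around streaming inserts is to retry transient failures and to attach a deterministic insert ID to each row so that retried inserts are de-duplicated on a best-effort basis. The sketch below illustrates only that pattern; insert_rows is a placeholder for whichever streaming-insert call your BigQuery client library provides.

    # Client-side sketch: retry streaming inserts with a deterministic per-row
    # id so that retries can be de-duplicated. insert_rows is a placeholder
    # expected to return a list of per-row errors (empty on success).
    import hashlib
    import json
    import time

    def row_id(row):
        """Deterministic id derived from the row contents."""
        return hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()

    def stream_with_retries(insert_rows, rows, max_attempts=5):
        ids = [row_id(r) for r in rows]
        errors = None
        for attempt in range(max_attempts):
            errors = insert_rows(rows, ids)
            if not errors:
                return
            time.sleep(2 ** attempt)          # back off before retrying
        raise RuntimeError(f"streaming insert still failing: {errors}")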

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery streaming inserts delayed in EU

The issue with BigQuery job execution has been fully resolved. Affected customers will be notified directly in order to assess any potential lingering impact. We will also provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery streaming inserts delayed in EU

Current data indicates that BigQuery streaming inserts are being applied normally. We are still working on restoring the visibility of some streaming inserts to EU datasets from 18:00 to 00:00 US/Pacific. We will provide another status update by 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery streaming inserts delayed in EU

Current data indicates that BigQuery streaming inserts are being applied normally. Some streaming inserts to EU datasets from 18:00 to 00:00 US/Pacific are not yet visible in BigQuery and we are working to propagate them. We will provide another status update by 10:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery streaming inserts delayed in EU

Current data indicates that BigQuery streaming inserts are being applied normally. Some streaming inserts to EU datasets from 18:00 to 00:00 US/Pacific are not yet visible in BigQuery and we are working to propagate them. We will provide another status update by 04:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery streaming inserts delayed in EU

We are still investigating the issue with BigQuery job execution. Current data indicates that the issue only affects projects which use streaming inserts to datasets located in the EU. We will provide another status update by 04:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery Job Execution Issue

We are still investigating the issue with BigQuery job execution. Current data indicates that the issue only affects projects which use streaming inserts to datasets located in the EU. We will provide another status update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery Job Execution Issue

We are still investigating the issue with BigQuery Job execution. We will provide another status update by 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery Job Execution Issue

We are still investigating the issue with BigQuery job execution. We will provide another status update by 01:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18014 - BigQuery Job Execution Issue

We are investigating an issue with BigQuery Job execution. We will provide more information by 2016-04-06 00:20 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16005 - Network Connectivity Issues in Europe-West1-C

SUMMARY: On Wednesday 24 February 2016, some Google Compute Engine instances in the europe-west1-c zone experienced network connectivity loss for a duration of 62 minutes. If your service or application was affected by these network issues, we sincerely apologize. We have taken immediate steps to remedy the issue and we are working through a detailed plan to prevent any recurrence. DETAILED DESCRIPTION OF IMPACT: On 24 February 2016 from 11:43 to 12:45 PST, up to 17% of Google Compute Engine instances in the europe-west1-c zone experienced a loss of network connectivity. Affected instances lost connectivity to both internal and external destinations. ROOT CAUSE: The root cause of this incident was complex, involving interactions between three components of the Google Compute Engine control plane: the main configuration repository, an integration layer for networking configuration, and the low-level network programming mechanism. Several hours before the incident on 24th February 2016, Google engineers modified the Google Compute Engine control plane in the europe-west1-c zone, migrating the management of network firewall rules from an older system to the modern integration layer. This was a well-understood change that had been carried out several times in other zones without incident. As on previous occasions, the migration was completed without issues. On this occasion, however, the migrated networking configuration included a small fraction (approximately 0.002%) of invalid rules. The GCP network programming layer is hardened against invalid or inconsistent configuration information, and continued to operate correctly in the presence of these invalid rules. Twenty minutes before the incident, however, a remastering event occurred in the network programming layer in the europe-west1-c zone. Events of this kind are routine but, in this case, the presence of the invalid rules in the configuration, coupled with a race condition in the way the new master loads its configuration, caused the new master to load its network configuration incorrectly. The consequence, at 11:43 PST, was a loss of network programming configuration for a subset of Compute Engine instances in the zone, effectively removing their network connectivity until the configuration could be re-propagated from the central repository. REMEDIATION AND PREVENTION: Google engineers restored service by forcing another remastering of the network programming layer, restoring a correct network configuration. To prevent recurrence, Google engineers are fixing both the race condition which led to an incorrect configuration during mastership change, and adding alerting for the presence of invalid rules in the network configuration so that they will be detected promptly upon introduction. The combination of these two changes provides defense in depth against future configuration inconsistency and, we believe, will preserve correct function of the network programming system in the face of invalid information.
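
The alerting on invalid rules described above amounts to validating a migrated configuration before it is relied upon. The sketch below is purely illustrative; the rule format and sanity checks are hypothetical, not the GCE control plane's.

    # Illustrative validation: flag invalid firewall rules in a migrated
    # configuration before it is committed. Rule format is hypothetical.
    import ipaddress

    def invalid_rules(rules):
        """Return the rules that fail basic sanity checks."""
        bad = []
        for rule in rules:
            try:
                ipaddress.ip_network(rule["source_range"])
                port = int(rule["port"])
                if not (0 < port <= 65535) or rule["action"] not in ("allow", "deny"):
                    raise ValueError
            except (KeyError, ValueError):
                bad.append(rule)
        return bad

    if __name__ == "__main__":
        migrated = [
            {"source_range": "10.0.0.0/8", "port": "443", "action": "allow"},
            {"source_range": "10.0.0.0/33", "port": "443", "action": "allow"},  # invalid mask
        ]
        bad = invalid_rules(migrated)
        if bad:
            print(f"ALERT: {len(bad)} invalid rule(s) in migrated configuration:", bad)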

Google Cloud Platform
UPDATE: Incident 16003 - Log viewer delays

The backlog in the log processing pipeline has now cleared and the issue was resolved as of Wednesday, 2016-03-02 17:50 US/Pacific. We do apologize for any inconvenience this may have caused.

Google Cloud Platform
UPDATE: Incident 16003 - Log viewer delays

We are experiencing an issue with Cloud Logging beginning at Wednesday, 2016-03-02 14:40 US/Pacific. The log processing pipeline is running behind demand, but no logs have been lost. New entries in the Cloud Logging viewer will be delayed. We apologize for any inconvenience you may be experiencing. We will provide an update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16003 - Log viewer delays

We are investigating reports of an issue with the Logs Viewer. We will provide more information by 17:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16004 - Network connectivity issue in us-central1-f

SUMMARY: On Tuesday 23 February 2016, Google Compute Engine instances in the us-central1-f zone experienced intermittent packet loss for 46 minutes. If your service or application was affected by these network issues, we sincerely apologize. A reliable network is one of our top priorities. We have taken immediate steps to remedy the issue and we are working through a detailed plan to prevent any recurrence. DETAILED DESCRIPTION OF IMPACT: On 23 February 2016 from 19:56 to 20:42 PST, Google Compute Engine instances in the us-central1-f zone experienced partial loss of network traffic. The disruption had a 25% chance of affecting any given network flow (e.g. a TCP connection or a UDP exchange) which entered or exited the us-central1-f zone. Affected flows were blocked completely. All other flows experienced no disruption. Systems that experienced a blocked TCP connection were often able to establish connectivity by retrying. Connections between endpoints within the us-central1-f zone were unaffected. ROOT CAUSE: Google follows a gradual rollout process for all new releases. As part of this process, Google network engineers modified a configuration setting on a group of network switches within the us-central1-f zone. The update was applied correctly to one group of switches, but, due to human error, it was also applied to some switches which were outside the target group and of a different type. The configuration was not correct for them and caused them to drop part of their traffic. REMEDIATION AND PREVENTION: The traffic loss was detected by automated monitoring, which stopped the misconfiguration from propagating further, and alerted Google network engineers. Conflicting signals from our monitoring infrastructure caused some initial delay in correctly diagnosing the affected switches. This caused the incident to last longer than it should have. The network engineers restored normal service by isolating the misconfigured switches. To prevent recurrence of this issue, Google network engineers are refining configuration management policies to enforce isolated changes which are specific to the various switch types in the network. We are also reviewing and adjusting our monitoring signals in order to lower our response times.
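
The policy of isolated, switch-type-specific changes can be illustrated with a small planning check that refuses to touch switches of a different type, even within the target group. The inventory format and type names below are hypothetical.

    # Illustrative planning check: only select switches matching both the
    # intended group and the intended type; anything else is excluded.

    def plan_change(inventory, target_group, target_type):
        """Return the switches a change may be applied to, plus excluded ones."""
        selected, skipped = [], []
        for switch in inventory:
            if switch["group"] == target_group and switch["type"] == target_type:
                selected.append(switch["name"])
            elif switch["group"] == target_group:
                skipped.append(switch["name"])   # same group but wrong type: never touch
        return selected, skipped

    if __name__ == "__main__":
        inventory = [
            {"name": "sw-1", "group": "us-central1-f", "type": "fabric"},
            {"name": "sw-2", "group": "us-central1-f", "type": "edge"},
        ]
        selected, skipped = plan_change(inventory, "us-central1-f", "fabric")
        print("apply to:", selected, "| excluded (wrong type):", skipped)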

Google Cloud Platform
UPDATE: Incident 16006 - Internal DNS resolution in us-central1

The issue with Google Compute Engine internal DNS resolution should have been resolved for all affected instances as of 20:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16006 - Internal DNS resolution in us-central1

We are investigating reports of an issue with internal DNS resolution in us-central1. We will provide more information by 21:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16003 - Quotas were reset to default values for some Customers

SUMMARY: On Tuesday 23 February 2016, for a duration of 10 hours and 6 minutes, 7.8% of Google Compute Engine projects had reduced quotas. We know that the ability to scale is vital to our customers, and apologize for preventing you from using the resources you need. DETAILED DESCRIPTION OF IMPACT: On Tuesday 23 February 2016 from 06:06 to 16:12 PST, 7.8% of Google Compute Engine projects had quotas reduced. This impacted all quotas, including number of cores, IP addresses and disk size. If a reduced quota was applied to your project and your usage reached this reduced quota, you would have been unable to create new resources during this incident. Any such attempt would have resulted in a QUOTA_EXCEEDED error code with message "Quota 'XX_XX' exceeded. Limit: N". Any resources that were already created were unaffected by this issue. ROOT CAUSE: In order to maximize ease of use for Google Compute Engine customers, in some cases we automatically raise resource quotas. We then provide exclusions to ensure that no quotas previously raised are reduced. We occasionally tune the algorithm that determines which quotas can be safely raised. This incident occurred when one such change was made, but a bug in the aforementioned exclusion process allowed some projects to have their quotas reduced. REMEDIATION AND PREVENTION: As soon as Google engineers identified the cause of the issue, the initiating change was rolled back and quota changes were reverted. To provide faster resolution of quota-related issues in the future, we are creating new automated alerting and operational documentation. To prevent a recurrence of this specific issue, we have fixed the bug in the exclusion process. To prevent similar future issues, we are also creating a dry-run testing phase to verify the impact that quota system changes will have.
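
A dry-run phase of the kind described can be sketched as a diff that flags any proposed quota reduction before changes are applied. The data shapes and function below are hypothetical, for illustration only.

    # Illustrative dry run: compare proposed quota values against current ones
    # and report every reduction before anything is applied. Data is hypothetical.

    def quota_reductions(current, proposed):
        """Yield (project, quota_name, old, new) for every proposed reduction."""
        for project, quotas in proposed.items():
            for name, new_value in quotas.items():
                old_value = current.get(project, {}).get(name)
                if old_value is not None and new_value < old_value:
                    yield project, name, old_value, new_value

    if __name__ == "__main__":
        current = {"project-a": {"CPUS": 64, "STATIC_ADDRESSES": 20}}
        proposed = {"project-a": {"CPUS": 24, "STATIC_ADDRESSES": 20}}  # reduction!
        for project, name, old, new in quota_reductions(current, proposed):
            print(f"DRY RUN: would reduce {name} for {project} from {old} to {new}")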

Google Cloud Platform
RESOLVED: Incident 16005 - Network Connectivity Issues in Europe-West1-C

The issue with network connectivity to VMs in europe-west1-c should have been resolved for all affected instances as of 12:57 PST. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16005 - Network Connectivity Issues in Europe-West1-C

We are currently investigating network connectivity issues affecting the europe-west1-c zone. We will provide another update with more information by 13:00 PST.

Google Cloud Platform
RESOLVED: Incident 16004 - Network connectivity issue in us-central1

The network connectivity issue in us-central1 should have been resolved for all affected projects as of 20:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

The issue with quotas being reset to default values should have been resolved for all affected customers as of 16:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

The issue with quotas being reset to default values should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We are continuing to investigate the issue with quotas being reset to default values for some customers. We'll provide a new update at 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We are continuing to investigate the issue with quotas being reset to default values for some customers. We'll provide a new update at 14:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We are continuing to investigate the issue with quotas being reset to default values for some customers. We'll provide a new update at 13:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We continue to investigate the problem with quotas being reset to default values for some customers. We'll provide a new update at 12:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We are still investigating the problem of some projects' quotas being reverted to default values. We'll provide a new update at 11:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16003 - Quotas were reset to default values for some Customers

We are investigating a problem with our Quota System where Quotas were reset to default values for some Customers.

Google Cloud Platform
UPDATE: Incident 16001 - Pub Sub Performance Degradation

The issue with Pub/Sub performance should have been resolved for all affected users as of 13:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16001 - Pub Sub Performance Degradation

The performance of Google Cloud Pub/Sub in us-central1 is recovering and we expect a full resolution in the near future. We will provide another status update by 13:50 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Pub Sub Performance Degradation

We are investigating reports of degraded performance with Pub/Sub in us-central1. We will provide more information by 13:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

The issue with BigQuery returning Internal Error should have been resolved for all affected projects as of 06:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

The issue with BigQuery returning Internal Error should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 07:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

The issue with BigQuery returning Internal Error should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

The issue with BigQuery returning Internal Error should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 04:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

We are still investigating the issue with BigQuery returning Internal Error. Current data indicates that around 2% of projects are affected by this issue. We will provide another status update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

We are still investigating the issue with BigQuery returning Internal Error. Current data indicates that around 2% of projects are affected by this issue. We will provide another status update by 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

We are still investigating the issue with BigQuery returning Internal Error. Current data indicates that around 2% of projects are affected by this issue. We will provide another status update by 01:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

We are experiencing an intermittent issue with BigQuery returning Internal Error beginning at Wednesday, 2016-02-10 13:20 US/Pacific. Current data indicates that around 2% of projects are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 00:20 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18013 - BigQuery returns Internal Error intermittently

We are investigating reports of an issue with BigQuery returning Internal Error intermittently. We will provide more information by 23:40 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16002 - Errors on the Developers Console and cloud.google.com sites

We experienced an issue with the Developers Console and cloud.google.com sites, which returned errors from Wednesday, 2016-02-10 17:05 until 17:20 US/Pacific. We have now fixed this and apologize for any inconvenience caused.

Google Cloud Platform
RESOLVED: Incident 16002 - Issues with App Engine Java and Go runtimes

SUMMARY: On Wednesday 3 February 2016, some App Engine applications running on Java7, Go and Python runtimes served errors with HTTP response 500 for a duration of 18 minutes. We sincerely apologize to customers who were affected. We have taken and are taking immediate steps to improve the platform's performance and availability. DETAILED DESCRIPTION OF IMPACT: On Wednesday 3 February 2016, from 18:37 PST to 18:55 PST, 1.1% of Java7, 3.1% of Go and 0.2% of all Python applications served errors with HTTP response code 500. The impact varied across applications, with less than 0.8% of all applications serving more than 100 errors during this time period. The distribution of errors was heavily tail-weighted, with a few applications receiving a large fraction of errors for their traffic during the event. ROOT CAUSE: An experiment meant to test a new feature on a small number of applications was inadvertently applied to Java7 and Go applications globally. Requests to these applications tripped over the incompatible experimental feature, causing the instances to shut down without serving any requests successfully, while the depletion of healthy instances caused these applications to serve HTTP requests with a 500 response. Additionally, the high rate of failure in Java and Go instances caused resource contention as the system tried to start new instances, which resulted in collateral damage to a small number of Python applications. REMEDIATION AND PREVENTION: At 18:35, a configuration change was erroneously enabled globally instead of to the intended subset of applications. Within a few minutes, Google Engineers noticed a drop in global traffic to GAE applications and determined that the configuration change was the root cause. At 18:53 the configuration change was rolled back and normal operations were restored by 18:55. To prevent a recurrence of this problem, Google Engineers are modifying the fractional push framework to inhibit changes which would simultaneously apply to the majority of applications, and creating telemetry to accurately predict the fraction of instances affected by a given change. Google Engineers are also enhancing the alerts on traffic drop and error spikes to quickly identify and mitigate similar incidents.

Google Cloud Platform
RESOLVED: Incident 16002 - Connectivity issue in asia-east1

SUMMARY: On Wednesday 3 February 2016, one third of network connections from external sources to Google Compute Engine instances and network load balancers in the asia-east1 region experienced high rates of network packet loss for 89 minutes. We sincerely apologize to customers who were affected. We have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Wednesday 3 February 2016, from 00:40 PST to 02:09 PST, one third of network connections from external sources to Google Compute Engine instances and network load balancers in the asia-east1 region experienced high rates of network packet loss. Traffic between instances within the region was not affected. ROOT CAUSE: Google Compute Engine maintains a pool of systems that encapsulate incoming packets and forward them to the appropriate instance. During a regular system update, a master failover triggered a latent configuration error in two internal packet processing servers. This configuration rendered the affected packet forwarders unable to properly encapsulate external packets destined for instances. REMEDIATION AND PREVENTION: Google's monitoring system detected the problem within two minutes of the configuration change. Additional alerts issued by the monitoring system for the asia-east1 region negatively affected the total time required to identify the root cause and resolve the issue. At 02:09 PST, Google engineers applied a temporary configuration change to divert incoming network traffic away from the affected packet encapsulation systems and fully restore network connectivity. In parallel, the incorrect configuration was rectified and pushed to the affected systems. To prevent this issue from recurring, we will change the way packet processor configurations are propagated and audited, to ensure that incorrect configurations are detected while their servers are still on standby. In addition, we will make improvements to our monitoring to make it easier for engineers to quickly diagnose and pinpoint the impact of such problems.

Google Cloud Platform
RESOLVED: Incident 16001 - Google App Engine admin permissions

Backlog processing has completed for all permission changes made during the affected time frame.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine admin permissions

Backlog processing is going slower than expected. At the current rate, it will take another 5 hours to reprocess all of the updates. We will provide another update on this by 2:00.

Google Cloud Platform
RESOLVED: Incident 16002 - Issues with App Engine Java and Go runtimes

The issue with App Engine Java and Go runtimes serving errors should have been resolved for all affected applications as of 18:57 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16002 - Issues with App Engine Java and Go runtimes

We are investigating reports of an issue with App Engine Java and Go applications. We will provide more information by 19:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine admin permissions

We are backfilling changes to project permissions made between 2016-02-02 11:30 US/Pacific and 2016-02-03 15:10. Our current estimate is to complete this at around 21:00. We are going to provide further updates at that time or in between if the estimate changes significantly.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine admin permissions

The retroactive changes are underway. This means the current impact of the issue is that some changes to project permissions made between 2016-02-02 11:30 US/Pacific and 2016-02-03 15:10 have not applied to App Engine. Some permissions changes during that time and all permissions changes before and after that window are fully applied. We will provide another status update by 18:00 with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine admin permissions

The issue with Google App Engine authorization has been resolved for all new permission changes. Our engineers are now retroactively applying changes made during the last day. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine admin permissions

We are still working to resolve the issue with Google App Engine authorization. We will provide another status update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine authentication and authorization

We are still investigating the issue with Google App Engine authorization. In addition to the rollback that is underway we are preparing to retroactively apply permission changes that did not fully take effect. We will provide another status update by 14:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine authentication and authorization

We are experiencing an issue with Google App Engine authorization beginning at Tuesday, 2016-02-02. Affected customers will see that changes to project permissions are not taking effect on App Engine. Our engineers are in the process of a rollback that should restore service. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 14:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine authentication and authorization

We are still investigating the issue with App Engine Authentication and Authorization. We will provide another status update by 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16001 - Google App Engine authentication and authorization

We are investigating reports of an issue with Google App Engine authentication and authorization. We will provide more information by 13:35 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16002 - Connectivity issue in asia-east1

We are still investigating the issue with Google Compute Engine instances experiencing packet loss in the asia-east1 region. Current data indicates that up to 33% of instances in the region are experiencing up to 10% packet loss when communicating with external resources. We will provide another status update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Connectivity issue in asia-east1

The issue with Google Compute Engine instances experiencing packet loss in the asia-east1 region should have been resolved for all affected instances as of 02:11 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16002 - Connectivity issue in asia-east1

We are experiencing an issue with Google Compute Engine seeing packet loss in asia-east1 beginning at Wednesday, 2016-02-03 01:40 US/Pacific. Instances of affected customers may experience packet loss. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 02:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Connectivity issue in asia-east1

We are investigating reports of an issue with Google Compute Engine. We will provide more information by 02:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16001 - Cannot open Cloud Shell in production Pantheon using @google.com account

OMG/1512 Internal only.

Google Cloud Platform
RESOLVED: Incident 16001 -

Between approximately 14:47 and 16:04 PDT, customers of Google Container Registry were unable to pull images that had been pushed using the V2 Docker protocol. The issue has now been resolved. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16001 - us-central1-c Persistent Disk latency

The issue with persistent disks latency should have been resolved as of 15:20 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16001 - us-central1-c Persistent Disk latency

We are seeing elevated latency for a small number of persistent disks in us-central1-c. We are investigating cause and possible mitigation now.

Google Cloud Platform
RESOLVED: Incident 15025 - Authentication issues with App Engine

SUMMARY: On Monday 7 December 2015, 1.29% of Google App Engine applications received errors when issuing authenticated calls to Google APIs over a period of 17 hours and 3 minutes. During a 45-minute period, authenticated calls to Google APIs from outside of App Engine also received errors, with the error rate peaking at 12%. We apologise for the impact of this issue on you and your service. We consider service degradation of this level and duration to be very serious and we are planning many changes to prevent this occurring again in the future. DETAILED DESCRIPTION OF IMPACT: Between Monday 7 December 2015 20:09 PST and Tuesday 8 December 2015 13:12, 1.29% of Google App Engine applications using service accounts received error 401 "Access Denied" for all requests to Google APIs requiring authentication. Unauthenticated API calls were not affected. Different applications experienced impact at different times, with few applications being affected for the full duration of the incident. In addition, between 23:05 and 23:50, an average of 7% of all requests to Google Cloud APIs failed or timed out, peaking briefly at 12%. Outside of this time only API calls from App Engine were affected. ROOT CAUSE: Google engineers have recently carried out a migration of the Google Accounts system to a new storage backend, which included copying API authentication service credentials data and redirecting API calls to the new backend. To complete this migration, credentials were scheduled to be deleted from the previous storage backend. This process started at 20:09 PST on Monday 7 December 2015. Due to a software bug, the API authentication service continued to look up some credentials, including those used by Google App Engine service accounts, in the old storage backend. As these credentials were progressively deleted, their corresponding service accounts could no longer be authenticated. The impact increased as more credentials were deleted and some Google App Engine applications started to issue a high volume of retry requests. At 23:05, the retry volume exceeded the regional capacity of the API authentication service, causing 1.3% of all authenticated API calls to fail or time out, including Google APIs called from outside Google App Engine. At 23:30 the API authentication service exceeded its global capacity, causing up to 12% of all authenticated API calls to fail until 23:50, when the overload issue was resolved. REMEDIATION AND PREVENTION: At 23:50 PST on Monday 7 December, Google engineers blocked certain authentication credentials that were known to be failing, preventing retries on these credentials from overloading the API authentication service. On Tuesday 8 December 08:52 PST, the deletion process was halted, having removed 2.3% of credentials, preventing further applications from being affected. At 10:08, Google engineers identified the root cause for the misdirected credentials lookup. After thorough testing, a fix was rolled out globally, resolving the issue for all affected Google App Engine applications by 13:12. Google has conducted a far-reaching review of the issue's root causes and contributory factors, leading to numerous prevention and mitigation actions in the following areas: — Google engineers have deployed monitoring for additional infrastructure signals to detect and analyse similar issues more quickly. — Google engineers have improved internal tools to extend auditing and logging and automatically advise relevant teams on potentially risky data operations.
— Additional rate limiting and caching features will be added to the API authentication service, increasing its resilience to load spikes. — Google’s development guidelines are being reviewed and updated to improve the handling of service or backend migrations, including a grace period of disabling access to old data locations before fully decommissioning them. Our customers rely on us to provide a superior service and we regret we did not live up to expectations in this case. We apologize again for the inconvenience this caused you and your users.
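
The retry surge described in the root cause above is a common failure amplifier. As a general illustration (not Google's remediation), the sketch below caps retries and applies exponential backoff with jitter to an authenticated call so that a persistent 401 does not become an unbounded retry storm; the call_api callable is a hypothetical placeholder.

import random
import time


class AuthError(Exception):
    """Raised by the caller's API wrapper when a request returns 401 Access Denied."""


def call_with_backoff(call_api, max_attempts=5, base_delay=1.0, max_delay=60.0):
    # Bounded retries: after max_attempts the error is surfaced instead of
    # adding more load to an already struggling authentication service.
    for attempt in range(1, max_attempts + 1):
        try:
            return call_api()
        except AuthError:
            if attempt == max_attempts:
                raise
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.5))  # jitter spreads retries out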

Google Cloud Platform
RESOLVED: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

SUMMARY: On Monday 7 December 2015, Google Container Engine customers could not create external load balancers for their services for a duration of 21 hours and 38 minutes. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: From Monday 7 December 2015 15:00 PST to Tuesday 8 December 2015 12:38 PST, Google Container Engine customers could not create external load balancers for their services. Affected customers saw HTTP 400 “invalid argument” errors when creating load balancers in their Container Engine clusters. 6.7% of clusters experienced API errors due to this issue. The issue also affected customers who deployed Kubernetes clusters in the Google Compute Engine environment. The issue was confined to Google Container Engine and Kubernetes, with no effect on users of any other resource based on Google Compute Engine. ROOT CAUSE: Google Container Engine uses the Google Compute Engine API to manage computational resources. At about 15:00 PST on Monday 7 December, a minor update to the Compute Engine API inadvertently changed the case-sensitivity of the “sessionAffinity” enum variable in the target pool definition, and this variation was not covered by testing. Google Container Engine was not aware of this change and sent requests with incompatible case, causing the Compute Engine API to return an error status. REMEDIATION AND PREVENTION: Google engineers re-enabled load balancer creation by rolling back the Google Compute Engine API to its previous version. This was complete by 8 December 2015 12:38 PST. At 8 December 2015 10:00 PST, Google engineers committed a fix to the Kubernetes public open source repository. Google engineers will increase the coverage of the Container Engine continuous integration system to detect compatibility issues of this kind. In addition, Google engineers will change the release process of the Compute Engine API to detect issues earlier to minimize potential negative impact.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

The issue with App Engine applications accessing Google APIs should have been resolved for all affected customers as of 13:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We believe the issue is resolved for most customers. A new update will be provided by 2015-12-08 13:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

The problem has been fully addressed as of 2015-12-08 12:22pm US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

We are currently testing a fix that we expect will address the underlying issue. In the meantime, please use the workaround provided previously of creating load balancers with client IP session affinity. We'll provide another status update by 2015-12-08 13:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We’re investigating elevated error rates for some Google Cloud Platform users. We believe these errors are affecting between 2 and 5 percent of Google App Engine (GAE) applications. We are working directly with the customers who are affected to restore full operation in their applications as quickly as possible, and apologize for any inconvenience. We will provide another status update by 2015-12-08 12:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

We've identified an issue in GCE/GKE where attempts to create external (L3) load balancers for Kubernetes clusters fail. A proper fix for this issue is being worked on. Meanwhile, a potential workaround is to create load balancers with client IP session affinity. See an example here: https://gist.github.com/cjcullen/2aad7d51b76b190e2193 . We will provide another status update by 2015-12-08 12:00 US/Pacific with current details.
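
For readers who prefer a programmatic version of that workaround, the rough sketch below uses the kubernetes Python client to create a LoadBalancer Service with client IP session affinity; the Service name, selector, ports, and namespace are placeholders, and the linked gist remains the reference example.

from kubernetes import client, config

config.load_kube_config()  # assumes kubectl is already configured for the affected cluster

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-service"),  # placeholder name
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        session_affinity="ClientIP",  # the workaround described above
        selector={"app": "my-app"},   # placeholder selector
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)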

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

We are still investigating reports of an issue when attempting to create an external (L3) load balancer for services on GCE/GKE. We will provide another status update by 2015-12-08 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15065 - 400 errors when trying to create an external (L3) Load Balancer for GCE/GKE services

We are investigating reports of an issue when attempting to create an external (L3) load balancer for services on GCE/GKE.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 10:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 09:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 07:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are still investigating the issue with App Engine applications accessing Google APIs. We will provide another status update by 2015-12-08 06:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

Despite actions taken to mitigate the problem, a significant number of App Engine applications have continued to experience errors while accessing Google APIs. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2015-12-08 05:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

The issue with App Engine applications accessing Google APIs should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 08:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are experiencing an issue with App Engine applications accessing Google APIs beginning at Monday, 2015-12-07 22:00 US/Pacific. Affected APIs may return a "401 Invalid Credentials" error message. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2015-12-08 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are experiencing an issue with App Engine applications accessing Google APIs beginning at Monday, 2015-12-07 22:00 US/Pacific. Affected APIs may return a "401 Invalid Credentials" error message. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2015-12-08 02:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are experiencing an issue with App Engine applications accessing Google APIs beginning at Monday, 2015-12-07 22:00 US/Pacific. Affected APIs may return a "401 Invalid Credentials" error message. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2015-12-08 01:20 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

"We are experiencing an issue with App Engine applications accessing Google APIs beginning at Monday, 2015-12-07 22:00 US/Pacific. Affected APIs may return a "401 Invalid Credentials" error message. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2015-12-08 00:20 US/Pacific with current details."

Google Cloud Platform
UPDATE: Incident 15025 - Authentication issues with App Engine

We are investigating reports of an issue with App Engine applications accessing Google APIs. Affected APIs may return a "401 Invalid Credentials" error message. We will provide more information by 23:50 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 15024 - App Engine Task Queue Slow Execution

The issue with App Engine task queue tasks should have been resolved for all affected projects as of 14:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

The issue with slow execution of Google App Engine task queue tasks is resolved for the majority of applications. Our engineering team is currently working on measures to ensure that the issue will not resurface. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

The issue with slow execution of Google App Engine task queue tasks should be resolved for the majority of applications and we expect a full resolution in the near future. Our engineering teams are continuing to perform system remediation and monitor system performance. We will provide another status update by tomorrow December 5, 2015 at 11:00 PST with more details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are still investigating the issue with slow execution of Google App Engine task queue tasks. We will provide another status update by 2015-12-04 10:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are still investigating the issue with slow execution of Google App Engine task queue tasks. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are still investigating the issue with slow execution of Google App Engine task queue tasks. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are still investigating the issue with slow execution of Google App Engine task queue tasks. We will provide another status update by Thursday, 2015-12-03 11:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are implementing multiple mitigation strategies to address the slow execution of task queue tasks. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are still investigating the issue with slow execution of Google App Engine task queue tasks. Current data indicates that between 1% and 10% of applications are affected. We will provide another status update by 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15024 - App Engine Task Queue Slow Execution

We are currently investigating an issue where task queue processing for Google App Engine applications is slower than expected. Current data indicates that between 1% and 10% of applications are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 16:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18012 - Errors Accessing the Big Query UI and API

SUMMARY: On Sunday 29th of November 2015, for an aggregate of 33 minutes occurring between 7:31am and 8:24am PST, 11% of all requests to the BigQuery API experienced errors. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Sunday 29th of November 2015, between 7:31am and 7:41am, 7% of BigQuery API requests were redirected (HTTP 302) to a CAPTCHA service. The issue reoccurred between 8:01am and 8:24am PST, affecting 22% of requests. As the CAPTCHA service is intended to verify that the requester is human, any automated requests that were redirected failed. ROOT CAUSE: The BigQuery API is designed to provide fair service to all users during intervals of unusually-high traffic. During this event, a surge in traffic to the API caused traffic verification and fairness systems to activate, causing a fraction of requests to be redirected to the CAPTCHA service. REMEDIATION AND PREVENTION: While investigating the source of the increased traffic, Google engineers assessed that BigQuery’s service capacity was sufficient to handle the additional queries without putting existing queries at risk. The engineers instructed BigQuery to allow the additional queries without verification, ending the incident. To prevent future recurrences of this problem, Google engineers will change BigQuery's traffic threshold policy to an adaptive mechanism appropriate for automated requests, which provides intelligent traffic control and isolation for individual users.
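
As a purely illustrative aside (not taken from the incident report), an automated client can make this failure mode visible by refusing to follow unexpected redirects on API calls; the sketch below assumes the requests library, a hypothetical access token, and the BigQuery v2 REST endpoint.

import requests

def list_datasets(project_id, access_token):
    # Hypothetical helper: a 302 from an API endpoint (for example to a CAPTCHA
    # page) is treated as an error rather than silently followed.
    url = "https://www.googleapis.com/bigquery/v2/projects/%s/datasets" % project_id
    resp = requests.get(
        url,
        headers={"Authorization": "Bearer " + access_token},
        allow_redirects=False,
        timeout=30,
    )
    if resp.is_redirect:
        raise RuntimeError("Unexpected redirect to %s; treating as a transient API error"
                           % resp.headers.get("Location"))
    resp.raise_for_status()
    return resp.json()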

Google Cloud Platform
RESOLVED: Incident 18012 - Errors Accessing the Big Query UI and API

We experienced an intermittent issue with Big Query for requests to the UI or API beginning at Sunday, 2015-11-29 07:30 US/Pacific. Current data indicates that approximately 25% of requests were affected by this issue. This issue should have been resolved for all affected users as of 08:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 15064 - Network Connectivity Issues in europe-west1

SUMMARY: On Monday 23 November 2015, for a duration of 70 minutes, a subset of Internet destinations was unreachable from the Google Compute Engine europe-west1 region. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Monday 23 November 2015 from 11:55 to 13:05 PST, a number of Internet regions (Autonomous Systems) became unreachable from Google Compute Engine's europe-west1 region. The region's traffic volume decreased by 13% during the incident. The majority of affected destination addresses were located in eastern Europe and the Middle East. Traffic to other external destinations was not affected. There was no impact on Google Compute Engine instances in any other region, nor on traffic to any destination within Google. ROOT CAUSE: At 11:51 on Monday 23 November, Google networking engineers activated an additional link in Europe to a network carrier with whom Google already shares many peering links globally. On this link, the peer's network signalled that it could route traffic to many more destinations than Google engineers had anticipated, and more than the link had capacity for. Google's network responded accordingly by routing a large volume of traffic to the link. At 11:55, the link saturated and began dropping the majority of its traffic. In normal operation, peering links are activated by automation whose safety checks would have detected and rectified this condition. In this case, the automation was not operational due to an unrelated failure, and the link was brought online manually, so the automation's safety checks did not occur. The automated checks were expected to protect the network for approximately one hour after link activation, and normal congestion monitoring began at the end of that period. Because the post-activation checks were missing, there was a 61-minute delay before the normal monitoring started, detected the congestion, and alerted Google network engineers. REMEDIATION AND PREVENTION: Automated alerts fired at 12:56. At 13:02, Google network engineers directed traffic away from the new link and traffic flows returned to normal by 13:05. To prevent recurrence of this issue, Google network engineers are changing procedure to disallow manual link activation. Links may only be brought up using automated mechanisms, including extensive safety checks both before and after link activation. Additionally, monitoring now begins immediately after link activation, providing redundant error detection.

Google Cloud Platform
UPDATE: Incident 15064 - Network Connectivity Issues in europe-west1

The issue with network connectivity in europe-west1 should have been resolved for all affected users as of 13:04 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 15064 - Network Connectivity Issues in europe-west1

We are investigating reports of an issue with network connectivity in europe-west1. We will provide more information by 14:22 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16030 -

The issue with Cloud Bigtable is resolved for all affected projects as of 14:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16030 -

We have identified a root cause and believe we have resolved the issue for all customers connecting from GCE. We currently estimate resolving the issue for all customers within a few hours. We will provide another update at 15:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16030 -

We are investigating reports of an issue with Cloud Bigtable (currently in Beta) that is affecting customers using Java 7 and Google's client libraries. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 15063 - Network Connectivity disruption between us-east1 and asia-east1

Between 10:04 and 10:27 am PST, instances in asia-east1 and us-east1 experienced connectivity issues. Instance creation in both regions was also impacted during the same time frame.

Google Cloud Platform
RESOLVED: Incident 15023 - Network Connectivity and Latency Issues in Europe

SUMMARY: On Tuesday, 10 November 2015, outbound traffic going through one of our European routers from both Google Compute Engine and Google App Engine experienced high latency for a duration of 6 hours and 43 minutes. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Tuesday, 10 November 2015 from 06:30 to 13:13 PST, a subset of outbound traffic from Google Compute Engine VMs and Google App Engine instances experienced high latency. The disruption to service was limited to outbound traffic through one of our European routers, and caused at peak 40% of all traffic being routed through this device to be dropped. This accounted for 1% of all Google Compute Engine traffic being routed from EMEA and <0.05% of all traffic for Google App Engine. ROOT CAUSE: A network component failure in one of our European routers temporarily reduced network capacity in the region, causing network congestion for traffic traversing this route. Although the issue was mitigated by changing the traffic priority, the problem was only fully resolved when the affected hardware was replaced. REMEDIATION AND PREVENTION: As soon as significant traffic congestion in the network path was detected, at 09:10 PST, Google Engineers diverted a subset of traffic away from the affected path. As this only slightly decreased the congestion, Google Engineers made a change in traffic priority which fully mitigated the problem by 13:13 PST. The replacement of the faulty hardware resolved the problem. To improve time to resolution, Google engineers have added appropriate alerts to the monitoring of this type of router, so that similar congestion events will be spotted significantly more quickly in future. Additionally, Google engineers will ensure that capacity plans properly account for all types of traffic in single device failures. Furthermore, Google engineers will audit and augment capacity in this region to ensure sufficient redundancy is available.

Google Cloud Platform
RESOLVED: Incident 15062 - Network Connectivity and Latency Issues in Europe

SUMMARY: On Tuesday, 10 November 2015, outbound traffic going through one of our European routers from both Google Compute Engine and Google App Engine experienced high latency for a duration of 6 hours and 43 minutes. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Tuesday, 10 November 2015 from 06:30 to 13:13 PST, a subset of outbound traffic from Google Compute Engine VMs and Google App Engine instances experienced high latency. The disruption to service was limited to outbound traffic through one of our European routers, and caused at peak 40% of all traffic being routed through this device to be dropped. This accounted for 1% of all Google Compute Engine traffic being routed from EMEA and <0.05% of all traffic for Google App Engine. ROOT CAUSE: A network component failure in one of our European routers temporarily reduced network capacity in the region, causing network congestion for traffic traversing this route. Although the issue was mitigated by changing the traffic priority, the problem was only fully resolved when the affected hardware was replaced. REMEDIATION AND PREVENTION: As soon as significant traffic congestion in the network path was detected, at 09:10 PST, Google Engineers diverted a subset of traffic away from the affected path. As this only slightly decreased the congestion, Google Engineers made a change in traffic priority which fully mitigated the problem by 13:13 PST. The replacement of the faulty hardware resolved the problem. To improve time to resolution, Google engineers have added appropriate alerts to the monitoring of this type of router, so that similar congestion events will be spotted significantly more quickly in future. Additionally, Google engineers will ensure that capacity plans properly account for all types of traffic in single device failures. Furthermore, Google engineers will audit and augment capacity in this region to ensure sufficient redundancy is available.

Google Cloud Platform
UPDATE: Incident 18011 - BigQuery API Returns "Billing has not been enabled for this project" Error When Billing Is Enabled

The issue with the BigQuery API returning "Billing has not been enabled for this project" errors for a small number of billing-enabled projects has been resolved as of 14:10 PST. We apologize for any disruption to your service or application - this is not the level of service we strive to provide and we are taking immediate steps to ensure this issue does not recur.

Google Cloud Platform
UPDATE: Incident 18011 - BigQuery API Returns "Billing has not been enabled for this project" Error When Billing Is Enabled

The BigQuery API is returning "Billing has not been enabled for this project" error for a small number of billing-enabled projects. Both newly created and existing projects could be affected. Our Engineering team is working on resolving this issue with high priority. We will provide you with an update as soon as more information becomes available.

Google Cloud Platform
UPDATE: Incident 15062 - Network Connectivity and Latency Issues in Europe

We have resolved the issue with high latency and network connectivity to/from services hosted in Europe. This issue started at approximately 08:00 PST and was resolved as of 12:35 PST. We will be conducting an internal investigation and will share the results of our investigation soon. If you continue to see issues with connectivity to/from services in Europe, please create a case and let us know.

Google Cloud Platform
UPDATE: Incident 15023 - Network Connectivity and Latency Issues in Europe

We have resolved the issue with high latency and network connectivity to/from services hosted in Europe. This issue started at approximately 08:00 PST and was resolved as of 13:15 PST. We will be conducting an internal investigation and will share the results of our investigation soon. If you continue to see issues with connectivity to/from services in Europe, please create a case and let us know.

Google Cloud Platform
UPDATE: Incident 15023 - Network Connectivity and Latency Issues in Europe

We are investigating reports of issues with network connectivity and latency for Google App Engine and Google Compute Engine in Europe. We will provide more information by 13:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15062 - Network Connectivity and Latency Issues in Europe

We are investigating reports of issues with network connectivity and latency for Google App Engine and Google Compute Engine in Europe. We will provide more information by 13:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 15060 - Intermittent Network Connectivity Issues in us-central1-b.

For those who cannot access the link in the previous message, please try https://status.cloud.google.com/incident/compute/15058

Google Cloud Platform
RESOLVED: Incident 15061 - Network connectivity issue in us-central1-b

We posted the incident report at https://status.cloud.google.com/incident/compute/15058, which covers this incident too.

Google Cloud Platform
RESOLVED: Incident 15060 - Intermittent Network Connectivity Issues in us-central1-b.

We posted the incident report at https://status.cloud.google.com/incident/compute/15058, which covers this incident too.

Google Cloud Platform
RESOLVED: Incident 15058 - Intermittent Connectivity Issues In us-central1b

SUMMARY: Between Saturday 31 October 2015 and Sunday 1 November 2015, Google Compute Engine networking in the us-central1-b zone was impaired on 3 occasions for an aggregate total of 4 hours 10 minutes. We apologize if your service was affected in one of these incidents, and we are working to improve the platform’s performance and availability to meet our customers’ expectations. DETAILED DESCRIPTION OF IMPACT (All times in Pacific/US): Outage timeframes for Saturday 31 October 2015: 05:52 to 07:05 (73 minutes). Outage timeframes for Sunday 1 November 2015: 14:10 to 15:30 (80 minutes) and 19:03 to 22:40 (97 minutes). During the affected timeframes, up to 14% of the VMs in us-central1-b experienced up to 100% packet loss communicating with other VMs in the same project. The issue impacted both intra-zone and inter-zone communications. ROOT CAUSE: Google network control fabrics are designed to permit simultaneous failure of one or more components. When such failures occur, redundant components on the network may assume new roles within the control fabric. A race condition in one of these role transitions resulted in the loss of flow information for a subset of the VMs controlled by the fabric. REMEDIATION AND PREVENTION: Google engineers began rolling out a change to eliminate this race condition at 18:03 PST on Monday 2 November 2015. The rollout completed at 11:13 PST on Wednesday 4 November 2015. Additionally, monitoring is being improved to reduce the time required to detect, identify and resolve problematic changes to the network control fabric.

Google Cloud Platform
RESOLVED: Incident 15059 - Google Compute Engine Instance operations failing

SUMMARY: On Saturday 31 October 2015, Google Compute Engine (GCE) management operations experienced high latency for a duration of 181 minutes. If your service or application was affected, we apologize — this is not the level of quality and reliability we strive to offer you, and we have taken and are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Saturday 31 October 2015 from 18:04 to 21:05 PDT, all Google Compute Engine management operations were slow or timed out in the Google Developers Console, the gcloud tool or the Google Compute Engine API. ROOT CAUSE: An issue in the handling of Google Compute Engine management operations caused requests to not complete in a timely manner, due to older operations retrying excessively and preventing newer operations from succeeding. Once discovered, remediation steps were taken by Google Engineers to reduce the number of retrying operations, enabling recovery from the operation backlog. The incident was resolved at 21:05 PDT when all backlogged operations were processed by the Google Compute Engine management backend and latency and error rates returned to typical values. REMEDIATION AND PREVENTION: To detect similar situations in the future, the GCE Engineering team is implementing additional automated monitoring to detect high numbers of queued management operations and limiting the number of operation retries. Google Engineers are also enabling more robust operation handling and load splitting to better isolate system disruptions.
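
As a client-side illustration only (the remediation above describes internal changes), the sketch below polls a Compute Engine zone operation with a hard deadline rather than retrying indefinitely; the googleapiclient usage, timeouts, and resource names are assumptions.

import time

from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # relies on Application Default Credentials

def wait_for_operation(project, zone, operation, deadline_seconds=300, poll_interval=5):
    # Poll until the operation reports DONE, or give up at the deadline instead
    # of piling further requests onto a backlogged management backend.
    deadline = time.time() + deadline_seconds
    while time.time() < deadline:
        result = compute.zoneOperations().get(
            project=project, zone=zone, operation=operation).execute()
        if result.get("status") == "DONE":
            if "error" in result:
                raise RuntimeError("Operation failed: %s" % result["error"])
            return result
        time.sleep(poll_interval)
    raise TimeoutError("Gave up waiting for operation %s after %d seconds"
                       % (operation, deadline_seconds))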

Google Cloud Platform
RESOLVED: Incident 15061 - Network connectivity issue in us-central1-b

The issue with Google Compute Engine network connectivity in us-central1-b should have been resolved for all affected projects as of Monday, 2015-11-02 00:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15061 - Network connectivity issue in us-central1-b

We are experiencing a network connectivity issue with Google Compute Engine instances in us-central1-b zone beginning at Sunday, 2015-11-01 21:05 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by Monday, 2015-11-02 00:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15061 - Network connectivity issue in us-central1-b

We are investigating reports of an issue with Google Compute Engine Network in us-central1-b zone. We will provide more information by 23:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15060 - Intermittent Network Connectivity Issues in us-central1-b.

The issue with network connectivity in us-central1-b should have been resolved for all affected instances as of 15:43 PST. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 15060 - Intermittent Network Connectivity Issues in us-central1-b.

We are continuing to investigate an issue with network connectivity in the us-central1-b zone. We will provide another update by 16:30 PST.

Google Cloud Platform
UPDATE: Incident 15060 - Intermittent Network Connectivity Issues in us-central1-b.

We are currently investigating a transient issue with sending internal traffic to and from us-central1b. We will have more information for you by 15:30 PST.

Google Cloud Platform
UPDATE: Incident 15059 - Google Compute Engine Instance operations failing

The issue with Google Compute Engine Instance operation high latency should have been resolved for all affected users as of 21:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 15059 - Google Compute Engine Instance operations failing

We are still investigating the issue with Google Compute Engine Instance operation high latency. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15059 - Google Compute Engine Instance operations failing

We are experiencing an issue with Google Compute Engine Instance operation high latency beginning at Saturday, 2015-10-31 18:04 US/Pacific. Current data indicates that only users who are attempting to run instance management operations are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15059 - Google Compute Engine Instance operations failing

We are investigating reports of an issue with Google Compute Engine Instance operations. We will provide more information by 2015-10-31 19:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 15058 - Intermittent Connectivity Issues In us-central1b

The issue with sending and receiving traffic between VMs in us-central1b should have been resolved for all affected instances as of 07:08 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We sincerely apologize for any effect this disruption had on your applications and/or services.

Google Cloud Platform
UPDATE: Incident 15058 - Intermittent Connectivity Issues In us-central1-b

The issue with sending and receiving internal traffic in us-central1-b should have been resolved for the majority of instances, and we expect a full resolution in the near future. We will provide an update with the affected timeframe after our investigation is complete.

Google Cloud Platform
UPDATE: Incident 15058 - Intermittent Connectivity Issues In us-central1-b

We are continuing to investigate an intermittent issue with sending and receiving internal traffic in us-central1-b and will provide another update by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15058 - Intermittent Connectivity Issues In us-central1-b

We are currently investigating a transient issue with sending internal traffic to and from us-central1-b.

Google Cloud Platform
RESOLVED: Incident 15057 - Google Compute Engine issue with newly created instances

SUMMARY: On Tuesday 27 October 2015, Google Compute Engine instances created within a 90 minute period in us-central1 and asia-east1 regions took longer than usual to obtain external network connectivity. Existing instances in the specified regions were not affected and continued to be available. We know how important it is to be able to create instances both for new deployments and scaling existing deployments, and we apologize for the impairment of these actions. DETAILED DESCRIPTION OF IMPACT: On Tuesday 27 October 2015 GCE instances created between 21:44 and 23:13 PDT in the us-central1 and asia-east1 regions took over 5 minutes before they started to receive traffic via their external IP address or network load balancer. Existing instances continued to operate without any issue, and there was no effect on internal networking for any instance. ROOT CAUSE: This issue was triggered by rapid changes in external traffic patterns. The networking infrastructure automatically reconfigured itself to adapt to the changes, but the reconfiguration involved processing a substantial queue of modifications. The network registration of new GCE instances was required to wait on events in this queue, leading to delays in registration. REMEDIATION AND PREVENTION: This issue was resolved as the backlog of network configuration changes was automatically processed. Google engineers will decouple the GCE networking operations and management systems that were involved in the issue such that a backlog in one system does not affect the other. Although the issue was detected promptly, Google engineers have identified opportunities to further improve internal monitoring and alerting for related issues.
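
For teams that want to detect this kind of delay on their own instances, here is a minimal sketch (not an official tool) that polls a newly created instance's external IP until a TCP port answers, so slow external network registration shows up as a measured delay rather than a silent failure. The IP address, port, and timing values are placeholders, not values taken from this incident.

    import socket
    import time

    EXTERNAL_IP = "203.0.113.10"   # placeholder: external IP of the newly created instance
    PORT = 22                      # placeholder: a port expected to answer (e.g. SSH)
    DEADLINE_S = 600               # stop probing after 10 minutes
    RETRY_S = 10                   # wait 10 seconds between probes

    start = time.time()
    while True:
        try:
            # A successful TCP connection means external traffic is being delivered.
            with socket.create_connection((EXTERNAL_IP, PORT), timeout=5):
                print("reachable after %.0f seconds" % (time.time() - start))
                break
        except OSError:
            if time.time() - start > DEADLINE_S:
                print("still unreachable after %d seconds" % DEADLINE_S)
                break
            time.sleep(RETRY_S)

Run from a machine outside the affected region immediately after instance creation, a probe like this gives a rough measure of how long external registration is taking.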

Google Cloud Platform
UPDATE: Incident 15057 - Google Compute Engine issue with newly created instances

The issue with Google Compute Engine for newly created instances should have been resolved for all affected regions (us-central1 and asia-east1) as of 23:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 15057 - Google Compute Engine issue with newly created instances

We are experiencing an issue with Google Compute Engine where newly created instances are delayed in becoming externally accessible, beginning at Tuesday, 2015-10-27 22:05 US/Pacific. Current data indicates that the us-central1 and asia-east1 regions are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 23:50 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15057 - Google Compute Engine issue with newly created instances

We are investigating reports of an issue with Google Compute Engine. We will provide more information by 23:20 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15022 - App Engine applications using custom domains unreachable from some parts of Central Europe

The issue with App Engine applications using custom domains being unreachable from some parts of Central Europe should have been resolved for all affected users as of 20:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15022 - App Engine applications using custom domains unreachable from some parts of Central Europe

We are investigating reports of an issue with App Engine applications using custom domains being unreachable from some parts of Central Europe. We will provide more information by 21:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We have identified a small number of additional Google Container Engine clusters that were not fixed by the previous round of repair. We have now applied the fix to these clusters, and so this issue should be resolved for all known affected clusters as of 21:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still working on resolving the issue with Google Container Engine nodes connecting to the metadata server. We are actively testing a fix for it, and once it is validated, we will push this fix into production. We will provide the next status update by 2015-10-24 01:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are experiencing an issue with Google Container Engine nodes connecting to the metadata server beginning at Friday, 2015-10-23 15:25 US/Pacific. Current data indicates that a majority of clusters may be affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:30 US/Pacific with current details.
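
While an incident like this is open, a quick way to confirm whether a given node can reach the metadata service is the small Python sketch below. It requests the instance hostname from the documented computeMetadata/v1 endpoint with the required "Metadata-Flavor: Google" header; the 5-second timeout is an arbitrary choice for this probe, not something specified in the incident.

    import urllib.request

    # Documented GCE metadata endpoint; the Metadata-Flavor header is mandatory.
    req = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/instance/hostname",
        headers={"Metadata-Flavor": "Google"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("metadata server reachable:", resp.read().decode())
    except OSError as err:
        print("metadata server unreachable:", err)

Run from an affected node, a failure here points at the node-to-metadata path described in this incident rather than at the workloads running on the node.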

Google Cloud Platform
RESOLVED: Incident 15021 - App Engine increased 500 errors

SUMMARY: On Thursday 17 September 2015, Google App Engine experienced increased latency and HTTP errors for 1 hour 28 minutes. We apologize to our customers who were affected by this issue. This is not the level of quality and reliability we s