CloudStatus

Google Cloud Status Alerts

Google Cloud Platform
RESOLVED: Incident 18007 - The issue with Google Cloud networking in europe-west1-b and europe-west4-b has been resolved.

The issue with VM public IP address traffic in europe-west1-b and europe-west4-b has been resolved for all affected projects as of Wednesday, 2018-07-04 08:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18007 - We've received a report of an issue with Google Cloud networking in europe-west1-b and europe-west4-b.

We are allocating additional capacity to handle this load. Most of the changes are complete and access controls have been rolled back. The situation is still being closely monitored and further adjustments are still possible. We will provide another status update by Wednesday, 2018-07-04 10:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18037 - We've received a report of an issue with Google BigQuery.

ISSUE SUMMARY
On Friday 22 June 2018, Google BigQuery experienced increased query failures for a duration of 1 hour 6 minutes. We apologize for the impact of this issue on our customers and are making changes to mitigate and prevent a recurrence.

DETAILED DESCRIPTION OF IMPACT
On Friday 22 June 2018 from 12:06 to 13:12 PDT, up to 50% of total requests to the BigQuery API failed with error code 503. Error rates varied during the incident, with some customers experiencing 100% failure rate for their BigQuery table jobs. bigquery.tabledata.insertAll jobs were unaffected.

ROOT CAUSE
A new release of the BigQuery API introduced a software defect that caused the API component to return larger-than-normal responses to the BigQuery router server. The router server is responsible for examining each request, routing it to a backend server, and returning the response to the client. To process these large responses, the router server allocated more memory, which led to an increase in garbage collection. This resulted in an increase in CPU utilization, which caused our automated load balancing system to shrink the server capacity as a safeguard against abuse. With the reduced capacity and the now comparatively large throughput of requests, the denial of service protection system used by BigQuery responded by rejecting user requests, causing a high rate of 503 errors.

REMEDIATION AND PREVENTION
Google engineers initially mitigated the issue by increasing the capacity of the BigQuery router server, which prevented overload and allowed API traffic to resume normally. The issue was fully resolved by identifying and reverting the change that caused large response sizes. To prevent future occurrences, BigQuery engineers will also be adjusting capacity alerts to improve monitoring of server overutilization. We apologize once again for the impact of this incident on your business.
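
During a window like this, transient 503s from the BigQuery API can often be ridden out with client-side retries and exponential backoff. The following is a minimal sketch, assuming the bq CLI is installed and authenticated; the query and retry limits are illustrative and not part of the incident report.

# Retry a query up to 5 times, backing off exponentially between attempts.
for attempt in 1 2 3 4 5; do
  if bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `bigquery-public-data.samples.shakespeare`'; then
    break
  fi
  sleep $((2 ** attempt))   # wait 2s, 4s, 8s, 16s, 32s
done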

Google Cloud Platform
UPDATE: Incident 18006 - We've received a report of an issue with Google Cloud Networking in us-east1.

The issue with Google Cloud Networking in us-east1 has been resolved for all affected projects as of Saturday, 2018-06-23 13:16 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18006 - We've received a report of an issue with Google Cloud Networking in us-east1.

We are experiencing an issue with Google Cloud Networking beginning on Saturday, 2018-06-23 12:02 US/Pacific. Current data indicates that approximately 33% of projects in us-east1 are affected by this issue. For everyone who is affected, we apologize for the disruption. We will provide an update by Saturday, 2018-06-23 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18006 - We've received a report of an issue with Google Cloud Networking in us-east1.

We've received a report of an issue with Google Cloud Networking in us-east1.

Google Cloud Platform
RESOLVED: Incident 18037 - We've received a report of an issue with Google BigQuery.

The issue with Google BigQuery has been resolved for all affected projects as of Friday, 2018-06-22 13:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18037 - We've received a report of an issue with Google BigQuery.

Mitigation work is currently underway by our Engineering Team. We will provide another status update by Friday, 2018-06-22 14:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18037 - We've received a report of an issue with Google BigQuery.

We are investigating an issue with Google BigQuery. Our Engineering Team is investigating possible causes. Affected customers may see their queries fail with 500 errors. We will provide another status update by Friday, 2018-06-22 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18037 - We've received a report of an issue with Google BigQuery.

We've received a report of an issue with Google BigQuery.

Google Cloud Platform
RESOLVED: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses; networking on stopped instances is not coming up when they are started.

The issue with Google Compute Engine VM instances being allocated duplicate internal IP addresses has been resolved for all affected projects as of Saturday, 2018-06-16 12:59 US/Pacific. Customers with VMs having duplicate internal IP addresses should follow the workaround described earlier: delete the affected VM instances (without deleting the boot disk) and recreate them. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses; networking on stopped instances is not coming up when they are started.

Mitigation work is currently underway by our Engineering Team. GCE VMs that have an internal IP that is not assigned to another VM within the same project, region, and network should no longer see this issue occurring; however, instances whose internal IP is in use by another VM may still fail to start with networking. We will provide another status update by Saturday, 2018-06-16 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses; networking on stopped instances is not coming up when they are started.

Detailed Description
We are investigating an issue with Google Compute Engine VM instances failing to start with networking after being stopped, or new instances being allocated the same IP address as a VM instance that was stopped within the past 24 hours. Our Engineering Team is evaluating a fix in a test environment. We will provide another status update by Saturday, 2018-06-16 03:30 US/Pacific with current details.

Diagnosis
Instances that were stopped at any time between 2018-06-14 08:42 and 2018-06-15 13:40 US/Pacific may fail to start with networking. A newly allocated VM instance may have the same IP address as a VM instance which was stopped within the mentioned time period.

Workaround
As an immediate mitigation for instances whose networking is not working, the instances should be recreated, that is, a delete (without deleting the boot disk) followed by a create, e.g.:
gcloud compute instances describe instance-1 --zone us-central1-a
gcloud compute instances delete instance-1 --zone us-central1-a --keep-disks=all
gcloud compute instances create instance-1 --zone us-central1-a --disk='boot=yes,name=instance-1'
To prevent new instances from coming up with duplicate IP addresses we suggest creating f1-micro instances until new IP addresses are allocated, and then stopping those instances to stop incurring charges. Alternatively, new instances can be brought up with a static internal IP address.
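
For projects trying to identify which VMs are affected, duplicate internal IPs can be listed directly, and a replacement instance can be pinned to a reserved static internal IP so it cannot collide with a recycled address. The following is a minimal sketch based on the workaround above, assuming the gcloud CLI; the address, region, subnet, and instance names are illustrative placeholders, not part of the original bulletin.

# Print any primary internal IP that is assigned to more than one instance in the project.
gcloud compute instances list --format="value(networkInterfaces[0].networkIP)" | sort | uniq -d

# Reserve a static internal IP and recreate the instance on it (all values are placeholders).
gcloud compute addresses create instance-1-ip --region us-central1 --subnet default --addresses 10.128.0.50
gcloud compute instances create instance-1 --zone us-central1-a --private-network-ip 10.128.0.50 --disk='boot=yes,name=instance-1'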

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses

Our Engineering Team continues to evaluate the fix in a test environment. We believe that customers can work around the issue by launching and then stopping f1-micro instances until no more duplicate IP addresses are obtained. We are awaiting confirmation that the provided workaround works for customers. We will provide another status update by Friday, 2018-06-15 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses

Our Engineering Team is evaluating a fix in a test environment. We will provide another status update by Friday, 2018-06-15 17:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses

Investigation continues by our Engineering Team. We are investigating workarounds as well as a method to resolve the issue for all affected projects. We will provide another status update by Friday, 2018-06-15 17:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18005 - Google Compute Engine VM instances allocated with duplicate internal IP addresses

Google Compute Engine VM instances allocated with duplicate internal IP addresses

Google Cloud Platform
RESOLVED: Incident 18011 - The issue with Google Networking in South America should be resolved.

The issue with Google Cloud Networking in South America has been resolved for all affected users as of Monday, 2018-06-04 22:22. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18011 - The rate of errors is decreasing for the Cloud Networking issue in South America.

The issue with Google Networking in South America should be resolved for some users and we expect a full resolution in the near future. We will provide another status update by Monday, 2018-06-04 23:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18011 - The rate of errors is decreasing for the Cloud Networking issue in South America.

The rate of errors is decreasing. We will provide another status update by Monday, 2018-06-04 23:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18011 - Mitigation work is currently underway by our Engineering Team to restore the loss of traffic with Google Networking in South America.

Mitigation work is currently underway by our Engineering Team for Google Cloud Networking issue affecting South America. We will provide more information by Monday, 2018-06-04 22:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18011 - We are investigating loss of traffic with Google Networking in South America.

We are investigating loss of traffic to and from South America. Our Engineering Team is investigating possible causes. We will provide another status update by Monday, 2018-06-04 22:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18011 - We are investigating loss of traffic with Google Networking in South America.

We are investigating loss of traffic with Google Networking in South America.

Google Cloud Platform
RESOLVED: Incident 18007 - The Stackdriver logging service is experiencing a 30-minute delay.

ISSUE SUMMARY
On Sunday, 20 May 2018, for 4 hours and 25 minutes, approximately 6% of Stackdriver Logging logs experienced a median ingest latency of 90 minutes. To our Stackdriver Logging customers whose operations monitoring was impacted during this outage, we apologize. We have conducted an internal investigation and are taking steps to ensure this doesn’t happen again.

DETAILED DESCRIPTION OF IMPACT
On Sunday, 20 May 2018 from 18:40 to 23:05 Pacific Time, 6% of logs ingested by Stackdriver Logging experienced log event ingest latency of up to 2 hours 30 minutes, with a median latency of 90 minutes. Customers requesting log events within the latency window would receive empty responses. Logging export sinks were not affected.

ROOT CAUSE
Stackdriver Logging uses a pool of workers to persist ingested log events. On Sunday, 20 May 2018 at 17:40, a load spike in the Stackdriver Logging storage subsystem caused 0.05% of persist calls made by the workers to time out. The workers would then retry persisting to the same address until reaching a retry timeout. While the workers were retrying, they were not persisting other log events. This resulted in multiple workers being removed from the pool of available workers. By 18:40, enough workers had been removed from the pool to reduce throughput below the level of incoming traffic, creating delays for 6% of logs.

REMEDIATION AND PREVENTION
After Google Engineering was paged, engineers isolated the issue to the timed-out workers. At 20:35, engineers configured the workers to return timed-out log events to the queue and move on to a different log event after a timeout. This allowed workers to catch up with the ingest rate. At 23:02, the last delayed message was delivered. We are taking the following steps to prevent the issue from happening again: we are modifying the workers to retry persists using alternate addresses to reduce the impact of persist timeouts; we are increasing the persist capacity of the storage subsystem to manage load spikes; and we are modifying Stackdriver Logging workers to reduce their unavailability when the storage subsystem experiences higher latency.
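
From the client side, one rough way to tell whether ingestion is delayed (rather than logs being lost) is to compare the newest entry visible to the Logging API against the current time. The following is a minimal sketch, assuming the gcloud CLI and a project with steady log traffic; it is illustrative and not part of the incident report.

# Timestamp of the most recent log entry currently visible to the Logging API.
latest=$(gcloud logging read --limit=1 --order=desc --format="value(timestamp)")
echo "newest visible entry: ${latest}"
echo "current time (UTC):   $(date -u +%Y-%m-%dT%H:%M:%SZ)"
# A gap of tens of minutes between the two suggests ingest delay of the kind described above.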

Google Cloud Platform
RESOLVED: Incident 18003 - Issue with Google Cloud project creation

The issue with project creation has been resolved for all affected projects as of Tuesday, 2018-05-22 13:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18009 - GCE Networking issue in us-east4

ISSUE SUMMARY
On Wednesday 16 May 2018, Google Cloud Networking experienced loss of connectivity to external IP addresses located in us-east4 for a duration of 58 minutes.

DETAILED DESCRIPTION OF IMPACT
On Wednesday 16 May 2018 from 18:43 to 19:41 PDT, Google Compute Engine, Google Cloud VPN, and Google Cloud Network Load Balancers hosted in the us-east4 region experienced 100% packet loss from the internet and other GCP regions. Google Dedicated Interconnect Attachments located in us-east4 also experienced loss of connectivity.

ROOT CAUSE
Every zone in Google Cloud Platform advertises several sets of IP addresses to the Internet via BGP. Some of these IP addresses are global and are advertised from every zone; others are regional and advertised only from zones in the region. The software that controls the advertisement of these IP addresses contained a race condition during application startup that would cause regional IP addresses to be filtered out and withdrawn from a zone. During a routine binary rollout of this software, the race condition was triggered in each of the three zones in the us-east4 region. Traffic continued to be routed until the last zone received the rollout and stopped advertising regional prefixes. Once the last zone stopped advertising the regional IP addresses, external regional traffic stopped entering us-east4.

REMEDIATION AND PREVENTION
Google engineers were alerted to the problem within one minute, and as soon as the investigation pointed to a problem with the BGP advertisements, a rollback of the binary in the us-east4 region was created to mitigate the issue. Once the rollback proved effective, the original rollout was paused globally to prevent any further issues. We are taking the following steps to prevent the issue from happening again. We are adding additional monitoring which will provide better context in future alerts to allow us to diagnose issues faster. We also plan on improving the debuggability of the software that controls the BGP advertisements. Additionally, we will be reviewing the rollout policy for these types of software changes so we can detect issues before they impact an entire region. We apologize for this incident and we recognize that regional outages like this should be rare and quickly rectified. We are taking immediate steps to prevent recurrence and improve reliability in the future.
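
For customers assessing exposure during an event like this, it can help to see which reserved external addresses sit in the affected region and probe them from outside GCP. This is a minimal sketch assuming the gcloud CLI; the region filter and the probed address are placeholders, not taken from the report.

# List reserved external addresses in the affected region and whether they are in use.
gcloud compute addresses list --filter="region:us-east4" --format="table(name,address,status)"
# From a host outside GCP, probe one of the listed addresses (placeholder IP).
ping -c 10 203.0.113.10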

Google Cloud Platform
UPDATE: Incident 18003 - Issue with Google Cloud project creation

We are experiencing an issue with creating new projects as well as activating some APIs for projects, beginning at Tuesday, 2018-05-22 07:50 US/Pacific. For everyone who is affected, we apologize for the disruption. We will provide an update by Tuesday, 2018-05-22 13:40 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18003 - Issue with Google Cloud project creation

The rate of errors is decreasing. We will provide another status update by Tuesday, 2018-05-22 14:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18003 - Issue with Google Cloud project creation

Issue with Google Cloud project creation

Google Cloud Platform
UPDATE: Incident 18002 - Issue with Google Cloud project creation and API activation

We are experiencing an issue with creating new projects as well as activating some APIs for projects, beginning at Tuesday, 2018-05-22 07:50 US/Pacific. For everyone who is affected, we apologize for the disruption. We will provide an update by Tuesday, 2018-05-22 13:40 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18007 - The Stackdriver logging service is experiencing a 30-minute delay.

The issue with StackDriver logging delay has been resolved for all affected projects as of Sunday, 2018-05-20 22:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18007 - The Stackdriver logging service is experiencing a 30-minute delay.

Mitigation work is currently underway by our Engineering Team. We will provide another status update by Sunday, 2018-05-20 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18007 - The Stackdriver logging service is experiencing a 30-minute delay.

The Stackdriver logging service is experiencing a 30-minute delay. We will provide another status update by Sunday, 2018-05-20 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18007 - We’ve received a report of an issue with StackDriver logging delay. We will provide more information by Sunday 20:45 US/Pacific.

We are investigating an issue with Google Stackdriver. We will provide more information by Sunday, 2018-05-20 20:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18010 - We are investigating an issue with Google Cloud Networking in us-east4.

The issue with external IP allocation in us-east4 has been resolved as of Saturday, 2018-05-19 11:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18010 - We are investigating an issue with Google Cloud Networking in us-east4.

Allocation of new external IP addresses for GCE instance creation continues to be unavailable in us-east4. For everyone who is affected, we apologize for the disruption. We will provide an update by Saturday, 2018-05-19 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18010 - We are investigating an issue with Google Cloud Networking in us-east4.

We are experiencing an issue with Google Cloud Networking and Google Compute Engine in us-east4 that prevents the creation of GCE instances that require allocation of new external IP addresses. For everyone who is affected, we apologize for the disruption. We will provide an update by Saturday, 2018-05-19 06:00 US/Pacific with current details.
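
A hedged workaround, not from the original bulletin: while new external IP allocation is unavailable, instances can still be created without an ephemeral external address, or with an address that was reserved in the region before the incident. The sketch assumes the gcloud CLI; instance and address names are placeholders.

# Create an instance with no external IP (reachable over internal networking only).
gcloud compute instances create worker-1 --zone us-east4-a --no-address
# Or attach a previously reserved static external address so no new allocation is needed.
gcloud compute instances create worker-2 --zone us-east4-a --address my-reserved-address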

Google Cloud Platform
UPDATE: Incident 18010 - We are investigating an issue with Google Cloud Networking in us-east4. We will provide more information by Friday, 2018-05-18 22:00 US/Pacific.

We are experiencing an issue with Google Cloud Networking and Google Compute Engine in us-east4 that prevents the creation of GCE instances with external IP addresses attached. Early investigation indicates that all instances in us-east4 are affected by this issue. For everyone who is affected, we apologize for the disruption. We will provide an update by Friday, 2018-05-18 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18010 - We are investigating an issue with Google Cloud Networking in us-east4. We will provide more information by Friday, 2018-05-18 22:00 US/Pacific.

We are investigating an issue with Google Cloud Networking in us-east4. We will provide more information by Friday, 2018-05-18 22:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18036 - Multiple failing BigQuery job types

ISSUE SUMMARY
On Wednesday 16 May 2018, Google BigQuery experienced failures of import, export and query jobs for a duration of 88 minutes over two time periods (55 minutes initially, and 33 minutes in the second period, which was isolated to the EU). We sincerely apologize to all of our affected customers; this is not the level of reliability we aim to provide in our products. We will be issuing SLA credits to customers who were affected by this incident and we are taking immediate steps to prevent a future recurrence of these failures.

DETAILED DESCRIPTION OF IMPACT
On Wednesday 16 May 2018 from 16:00 to 16:55 and from 17:45 to 18:18 PDT, Google BigQuery experienced a failure of some import, export and query jobs. During the first period of impact, there was a 15.26% job failure rate; during the second, which was isolated to the EU, there was a 2.23% error rate. Affected jobs would have failed with INTERNAL_ERROR as the reason.

ROOT CAUSE
Configuration changes being rolled out on the evening of the incident were not applied in the intended order. This resulted in an incomplete configuration change becoming live in some zones, subsequently triggering the failure of customer jobs. During the process of rolling back the configuration, another incorrect configuration change was inadvertently applied, causing the second batch of job failures.

REMEDIATION AND PREVENTION
Automated monitoring alerted engineering teams 15 minutes after the error threshold was met, and they were able to correlate the errors with the configuration change 3 minutes later. We feel that the configured alert delay is too long and have lowered it to 5 minutes in order to aid quicker detection. During the rollback attempt, another bad configuration change was enqueued for automatic rollout and, when unblocked, proceeded to roll out, triggering the second round of job failures. To prevent this from happening in the future, we are working to ensure that rollouts are automatically switched to manual mode when engineers are responding to production incidents. In addition, we're switching to a different configuration system which will ensure the consistency of configuration at all stages of the rollout.

Google Cloud Platform
RESOLVED: Incident 18001 - Issue affecting Google Cloud Function customers' ability to create and update functions.

The issue with Google Cloud Functions affecting the ability to create and update functions has been resolved for all affected users as of Thursday, 2018-05-17 13:01 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18001 - Issue affecting Google Cloud Function customers' ability to create and update functions.

The rate of errors is decreasing. We will provide another status update by Thursday, 2018-05-17 13:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18001 - Issue affecting the ability to update and create Cloud Functions.

Mitigation work is currently underway by our Engineering Team. We will provide another status update by Thursday, 2018-05-17 13:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18001 - Issue affecting the ability to update and create Cloud Functions.

Issue affecting the ability to update and create Cloud Functions.

Google Cloud Platform
RESOLVED: Incident 18009 - GCE Networking issue in us-east4

The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region has been resolved for all affected users as of Wednesday, 2018-05-16 19:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 18004 - The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region has been resolved for all affected users as of Wednesday, 2018-05-16 19:40 US/Pacific.

The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region has been resolved for all affected users as of Wednesday, 2018-05-16 19:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18009 - GCE Networking issue in us-east4

The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by Wednesday, 2018-05-16 20:20 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18004 - The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region should be resolved for the majority of users and we expect a full resolution in the near...

The issue with GCE Networking (affecting GCE, GKE, Cloud VPN and Cloud Private Interconnect) in us-east4 region should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by Wednesday, 2018-05-16 20:20 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18009 - GCE Networking issue in us-east4

Mitigation work is currently underway by our Engineering Team. We will provide another status update by Wednesday, 2018-05-16 20:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18004 - GCE Networking in us-east4 region affecting GCE VMs, Cloud VPN and Cloud Private Interconnect resulting in network packet loss. Mitigation is underway.

Mitigation work is currently underway by our Engineering Team. We will provide another status update by Wednesday, 2018-05-16 20:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18009 - GCE Networking issue in us-east4

We are investigating an issue with GCE Networking in us-east4 region affecting GCE VMs, GKE, Cloud VPN and Cloud Private Interconnect resulting in network packet loss. We will provide more information by Wednesday, 2018-05-16 19:43 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18004 - GCE Networking in us-east4 region affecting GCE VMs, Cloud VPN and Cloud Private Interconnect resulting in network packet loss.

We are investigating an issue with GCE Networking in us-east4 region affecting GCE VMs, Cloud VPN and Cloud Private Interconnect resulting in network packet loss. We will provide more information by Wednesday, 2018-05-16 19:43 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18004 - GCE Networking in us-east4 region affecting GCE VMs, Cloud VPN and Cloud Private Interconnect resulting in network packet loss.

GCE Networking in us-east4 region affecting GCE VMs, Cloud VPN and Cloud Private Interconnect resulting in network packet loss.

Google Cloud Platform
UPDATE: Incident 18036 - Multiple failing BigQuery job types

The issue with Google BigQuery has been resolved for all affected users as of 2018-05-16 17:06 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18036 - Multiple failing BigQuery job types

We are rolling back a configuration change to mitigate this issue. We will provide another status update by Wednesday 2018-05-16 17:21 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18007 - We are investigating an issue with increased packet loss in us-central1 with Google Cloud Networking.

ISSUE SUMMARY
On Wednesday 2 May 2018, Google Cloud Networking experienced increased packet loss to the internet as well as to other Google regions from the us-central1 region for a duration of 21 minutes. We understand that the network is a critical component that binds all services together. We have conducted an internal investigation and are taking steps to improve our service.

DETAILED DESCRIPTION OF IMPACT
On Wednesday 2 May 2018 from 13:47 to 14:08 PDT, traffic between all zones in the us-central1 region and all destinations experienced 12% packet loss. Traffic between us-central1 zones experienced 22% packet loss. Customers may have seen requests succeed to services hosted in us-central1 as loss was not evenly distributed; some connections did not experience any loss while others experienced 100% packet loss.

ROOT CAUSE
A control plane is used to manage configuration changes to the network fabric connecting zones in us-central1 to each other as well as to the Internet. On Wednesday 2 May 2018, Google Cloud Network engineering began deploying a configuration change using the control plane as part of planned maintenance work. During the deployment, a bad configuration was generated that blackholed a portion of the traffic flowing over the fabric. The control plane had a bug which caused it to produce an incorrect configuration. New configurations deployed to the network fabric are evaluated for correctness, and regenerated if an error is found. In this case, the configuration error appeared after the configuration was evaluated, which resulted in deploying the erroneous configuration to the network fabric.

REMEDIATION AND PREVENTION
Automated monitoring alerted engineering teams 2 minutes after the loss started. Google engineers correlated the alerts to the configuration push and routed traffic away from the affected part of the fabric. Mitigation completed 21 minutes after loss began, ending impact to customers. After isolating the root cause, engineers then audited all configuration changes that were generated by the control plane and replaced them with known-good configurations. To prevent this from recurring, we will correct the control plane defect that generated the incorrect configuration and are adding additional validation at the fabric layer in order to more robustly detect configuration errors. Additionally, we intend to add logic to the network control plane to be able to self-heal by automatically routing traffic away from the parts of the network fabric in an error state. Finally, we plan on evaluating further isolation of control plane configuration changes to reduce the size of the possible failure domain. Again, we would like to apologize for this issue. We are taking immediate steps to improve the platform’s performance and availability.
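
For reference, loss figures like the 12% and 22% above can be approximated from a VM with a plain ICMP probe against a peer in another zone or region. The following is a minimal sketch, assuming ping is available and that TARGET_IP (a placeholder) is a reachable peer VM; real measurement pipelines use more robust probing than ICMP.

# Send 200 probes at 5 per second and report the loss percentage from ping's summary line.
TARGET_IP=10.128.0.23   # placeholder: internal IP of a VM in another zone
ping -c 200 -i 0.2 "${TARGET_IP}" | grep -o '[0-9.]*% packet loss'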

Google Cloud Platform
RESOLVED: Incident 18008 - We've received a report of connectivity issues from GCE instances.

The network connectivity issues from GCE instances have been resolved for all affected users as of 2018-05-07 10:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18008 - We've received a report of connectivity issues from GCE instances. Our Engineering Team believes they have identified the root cause of the errors and is working to mitigate. We will provide another s...

We are investigating connectivity issues from GCE instances. We will provide more information by 2018-05-07 10:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18007 - We are investigating an issue with increased packet loss in us-central1 with Google Cloud Networking.

The issue with Google Cloud Networking having increased packet loss in us-central1 has been resolved for all affected users as of Wednesday, 2018-05-02 14:10 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18007 - We are investigating an issue with increased packet loss in us-central1 with Google Cloud Networking.

We are investigating an issue with Google Cloud Networking. We will provide more information by 14:45 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18002 - The Cloud Shell availability issue has been resolved for all affected users as of 2018-05-02 08:56 US/Pacific.

The Cloud Shell availability issue has been resolved for all affected users as of 2018-05-02 08:56 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Cloud Shell. We will provide more information by 09:15 US/Pacific.

Our Engineering Team believes they have identified the potential root cause of the issue. We will provide another status update by 09:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Cloud Shell. We will provide more information by 08:45 US/Pacific.

We are investigating an issue with Cloud Shell. We will provide more information by 08:45 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18006 - Users may be experiencing increased error rates when accessing the Stackdriver web UI

The issue with the Stackdriver web UI returning 500 and 502 error codes has been resolved for all affected users as of Monday, 2018-04-30 13:01 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18006 - Users may be experiencing increased error rates when accessing the Stackdriver web UI

Mitigations are proving to be effective; error rates of 500s and 502s are decreasing, though they are still elevated. We will provide more information by Monday, 2018-04-30 14:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18006 - Users may be experiencing increased error rates when accessing the Stackdriver web UI

Users may be experiencing increased error rates when accessing the Stackdriver web UI

Google Cloud Platform
RESOLVED: Incident 18002 - We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

The issue with Google Cloud Pub/Sub message delivery has been resolved for all affected users as of 09:04 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

The mitigation work currently underway by our Engineering Team appears to be working. Messages that were delayed during this incident are starting to be delivered. We expect a full resolution in the near future.
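
For those verifying recovery, a quick end-to-end delivery check with a throwaway topic and subscription is sketched below; this assumes the gcloud CLI and is illustrative only, with all names as placeholders rather than anything from the bulletin.

# Create a temporary topic and subscription, publish a test message, then pull it back.
gcloud pubsub topics create delivery-check
gcloud pubsub subscriptions create delivery-check-sub --topic delivery-check
gcloud pubsub topics publish delivery-check --message "ping $(date -u +%s)"
gcloud pubsub subscriptions pull delivery-check-sub --auto-ack --limit=1
# Clean up.
gcloud pubsub subscriptions delete delivery-check-sub
gcloud pubsub topics delete delivery-check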

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

Our Engineering Team believes they have identified the root cause of the Cloud Pub/Sub message delivery issues and is working to mitigate.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

We are still investigating message delivery issues with Google Cloud Pub/Sub. Our Engineering Team is investigating possible causes. We will provide another status update by 7:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

We are investigating an issue with Google Cloud Pub/Sub that is affecting message delivery in some regions.

Google Cloud Platform
RESOLVED: Incident 18001 - We are investigating an issue with Google Cloud Dataflow, Dataproc, GCE and GCR.

The issue with Dataflow, Dataproc, Compute Engine and GCR has been resolved for all affected users as of 19:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Dataflow, Dataproc, GCE and GCR. We will provide more information by 20:00 US/Pacific.

The issue with Dataflow, Dataproc, Compute Engine and GCR has been resolved for all affected users as of 19:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Dataflow, Dataproc, GCE and GCR. We will provide more information by 20:00 US/Pacific.

We are investigating an issue with Google Cloud Dataflow, Dataproc, GCE and GCR. We will provide more information by 20:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18005 - The issue with StackDriver Logging GCS Exports is resolved. We have finished processing the backlog of GCS Export jobs.

The issue with StackDriver Logging GCS Exports has been resolved for all affected users. We have completed processing the backlog of GCS export jobs. An internal investigation has been started to make the appropriate improvements to our systems and help prevent or minimize future recurrence.
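
As a local sanity check that exported entries are reaching their destination, the sink's GCS bucket can be inspected for fresh objects. This is a minimal sketch, assuming gcloud and gsutil and an existing logging sink to GCS; the sink and bucket names are placeholders.

# Look up the sink's destination bucket, then list the newest exported objects.
gcloud logging sinks describe my-gcs-sink --format="value(destination)"
gsutil ls -l gs://my-log-export-bucket/** | grep gs:// | sort -k2 | tail -n 5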

Google Cloud Platform
UPDATE: Incident 18005 - The issue with Google StackDriver Logging GCS Export is resolved. We are currently processing the backlog.

The issue with StackDriver Logging GCS Export is mitigated and backlog processing is ongoing. We will provide another update by 08:00 US/Pacific with current status of the backlog.

Google Cloud Platform
UPDATE: Incident 18005 - The issue with Google StackDriver Logging GCS Export is mitigated.

The issue with StackDriver Logging GCS Export is mitigated and backlog processing is ongoing. We will provide another update by 01:00 US/Pacific with current status of the backlog.

Google Cloud Platform
UPDATE: Incident 18005 - The issue with Google StackDriver Logging GCS Export is mitigated. We will provide an update by 22:00 US/Pacific.

Our Engineering Team believes they have identified and mitigated the root cause of the delay on StackDriver Logging GCS Export service. We are actively working to process the backlogs. We will provide an update by 22:00 US/Pacific with current progress.

Google Cloud Platform
UPDATE: Incident 18005 - We are investigating an issue with Google StackDriver Logging GCS Export. We will provide more information by 18:45 US/Pacific.

Mitigation work is still underway by our Engineering Team to address the delay issue with Google StackDriver Logging GCS Export. We will provide more information by 18:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18005 - We are investigating an issue with Google StackDriver Logging GCS Export. We will provide more information by 17:45 US/Pacific.

Mitigation work is currently underway by our Engineering Team to address the issue with Google StackDriver Logging GCS Export. We will provide more information by 17:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18005 - We are investigating an issue with Google StackDriver Logging GCS Export. We will provide more information by 17:15 US/Pacific.

We are investigating an issue with Google StackDriver Logging GCS Export. We will provide more information by 17:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18002 - We've received a report of an issue with Google App Engine as of 2018-03-29 04:52 US/Pacific. We will provide more information by 05:30 US/Pacific.

The issue with Cloud Datastore in europe-west2 has been resolved for all affected projects as of 05:03 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We've received a report of an issue with Google App Engine as of 2018-03-29 04:52 US/Pacific. We will provide more information by 05:30 US/Pacific.
Google Cloud Platform
UPDATE: Incident 18003 - We've received a report of an issue with Google Compute Engine as of 2018-03-16 11:32 US/Pacific. We will provide more information by 12:15 US/Pacific.

The issue with slow network programming should be resolved for all zones in us-east1 as of 12:44 US/Pacific. The root cause has been identified and we are working to prevent a recurrence. We will provide more information by 14:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18003 - We've received a report of an issue with Google Compute Engine as of 2018-03-16 11:32 US/Pacific. We will provide more information by 12:15 US/Pacific.
Google Cloud Platform
RESOLVED: Incident 18001 - We are investigating an issue with Google Cloud Shell as of 2018-03-13 17:44 US/Pacific.

The issue with Cloud Shell has been resolved for all affected users as of Tue 2018-03-13 19:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Shell as of 2018-03-13 17:44 US/Pacific. We will provide more information by 19:30 US/Pacific.

We are still investigating an issue with Google Cloud Shell starting 2018-03-13 17:44 US/Pacific. We will provide more information by 19:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Shell as of 2018-03-13 17:44 US/Pacific. We will provide more information by 18:30 US/Pacific.

We are investigating an issue with Google Cloud Shell as of 2018-03-13 17:44 US/Pacific. We will provide more information by 18:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - We've received a report of an issue with Google Compute Engine as of 2018-03-10 21:06 US/Pacific. Mitigation work is currently underway by our Engineering Team. We will provide more information by 22:...
Google Cloud Platform
RESOLVED: Incident 18005 - We are investigating an issue with Google Cloud Networking. We will provide more information by 22:30 US/Pacific.

The issue with Google Cloud Networking intermittent traffic disruption to and from us-central has been resolved for all affected users as of 2018-02-23 22:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18006 - The issue with Google Cloud Networking intermittent traffic disruption to and from us-central has been resolved for all affected users as of 2018-02-23 22:35 US/Pacific.

The issue with Google Cloud Networking intermittent traffic disruption to and from us-central has been resolved for all affected users as of 2018-02-23 22:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18006 - The issue with Google Cloud Networking intermittent traffic disruption to and from us-central should now show signs of recovery. We will provide another status update by Friday, 2018-02-23 22:40 US/Pa...

The issue with Google Cloud Networking intermittent traffic disruption to and from us-central should now show signs of recovery for the majority of users and we expect a full resolution in the near future. We will provide another status update by Friday, 2018-02-23 22:40 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18006 - We are investigating an issue with Google Cloud Networking. The issue started around Friday, 2018-02-23 21:40 US/Pacific. This affects traffic to and from us-central. We are rolling out a potential fi...

We are investigating an issue with Google Cloud Networking. The issue started around Friday, 2018-02-23 21:40 US/Pacific. This affects traffic to and from us-central. We are rolling out a potential fix to mitigate this issue. We will provide more information by Friday, 2018-02-23 22:40 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18006 - We are investigating an issue with Google Cloud Networking. The issue started around Friday, 2018-02-23 21:40 US/Pacific. This affects traffic to and from us-central. We are rolling out a potential fi...
Google Cloud Platform
UPDATE: Incident 18005 - We are investigating an issue with Google Cloud Networking. We will provide more information by 22:30 US/Pacific.

We are rolling out a potential fix to mitigate this issue. The affected region seems to be us-central.

Google Cloud Platform
UPDATE: Incident 18005 - We are investigating an issue with Google Cloud Networking. We will provide more information by 22:30 US/Pacific.

We are investigating an issue with Google Cloud Networking. We will provide more information by 22:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18001 - App Engine Admin-API experiencing high error rates

On Thursday 15 February 2018, specific Google Cloud Platform services experienced elevated errors and latency for a period of 62 minutes from 11:42 to 12:44 PST. The following services were impacted:
Cloud Datastore experienced a 4% error rate for get calls and an 88% error rate for put calls.
App Engine's serving infrastructure, which is responsible for routing requests to instances, experienced a 45% error rate, most of which were timeouts.
App Engine Task Queues would not accept new transactional tasks, and also would not accept new tasks in regions outside us-central1 and europe-west1. Tasks continued to be dispatched during the event but saw start delays of 0-30 minutes; additionally, a fraction of tasks executed with errors due to the aforementioned Cloud Datastore and App Engine performance issues.
App Engine Memcache calls experienced a 5% error rate.
App Engine Admin API write calls failed during the incident, causing unsuccessful application deployments. App Engine Admin API read calls experienced a 13% error rate.
App Engine Search API index writes failed during the incident, though search queries did not experience elevated errors.
Stackdriver Logging experienced delays exporting logs to systems including Cloud Console Logs Viewer, BigQuery and Cloud Pub/Sub. Stackdriver Logging retries on failure, so no logs were lost during the incident. Logs-based Metrics failed to post some points during the incident.

We apologize for the impact of this outage on your application or service. For Google Cloud Platform customers who rely on the products which were part of this event, the impact was substantial and we recognize that it caused significant disruption for those customers. We are conducting a detailed post-mortem to ensure that all the root and contributing causes of this event are understood and addressed promptly.

Google Cloud Platform
RESOLVED: Incident 18035 - BigQuery experiencing high latency rates

On Thursday 15 February 2018, specific Google Cloud Platform services experienced elevated errors and latency for a period of 62 minutes from 11:42 to 12:44 PST. The following services were impacted:
Cloud Datastore experienced a 4% error rate for get calls and an 88% error rate for put calls.
App Engine's serving infrastructure, which is responsible for routing requests to instances, experienced a 45% error rate, most of which were timeouts.
App Engine Task Queues would not accept new transactional tasks, and also would not accept new tasks in regions outside us-central1 and europe-west1. Tasks continued to be dispatched during the event but saw start delays of 0-30 minutes; additionally, a fraction of tasks executed with errors due to the aforementioned Cloud Datastore and App Engine performance issues.
App Engine Memcache calls experienced a 5% error rate.
App Engine Admin API write calls failed during the incident, causing unsuccessful application deployments. App Engine Admin API read calls experienced a 13% error rate.
App Engine Search API index writes failed during the incident, though search queries did not experience elevated errors.
Stackdriver Logging experienced delays exporting logs to systems including Cloud Console Logs Viewer, BigQuery and Cloud Pub/Sub. Stackdriver Logging retries on failure, so no logs were lost during the incident. Logs-based Metrics failed to post some points during the incident.

We apologize for the impact of this outage on your application or service. For Google Cloud Platform customers who rely on the products which were part of this event, the impact was substantial and we recognize that it caused significant disruption for those customers. We are conducting a detailed post-mortem to ensure that all the root and contributing causes of this event are understood and addressed promptly.

Google Cloud Platform
RESOLVED: Incident 18003 - App Engine seeing elevated error rates

On Thursday 15 February 2018, specific Google Cloud Platform services experienced elevated errors and latency for a period of 62 minutes from 11:42 to 12:44 PST. The following services were impacted:
Cloud Datastore experienced a 4% error rate for get calls and an 88% error rate for put calls.
App Engine's serving infrastructure, which is responsible for routing requests to instances, experienced a 45% error rate, most of which were timeouts.
App Engine Task Queues would not accept new transactional tasks, and also would not accept new tasks in regions outside us-central1 and europe-west1. Tasks continued to be dispatched during the event but saw start delays of 0-30 minutes; additionally, a fraction of tasks executed with errors due to the aforementioned Cloud Datastore and App Engine performance issues.
App Engine Memcache calls experienced a 5% error rate.
App Engine Admin API write calls failed during the incident, causing unsuccessful application deployments. App Engine Admin API read calls experienced a 13% error rate.
App Engine Search API index writes failed during the incident, though search queries did not experience elevated errors.
Stackdriver Logging experienced delays exporting logs to systems including Cloud Console Logs Viewer, BigQuery and Cloud Pub/Sub. Stackdriver Logging retries on failure, so no logs were lost during the incident. Logs-based Metrics failed to post some points during the incident.

We apologize for the impact of this outage on your application or service. For Google Cloud Platform customers who rely on the products which were part of this event, the impact was substantial and we recognize that it caused significant disruption for those customers. We are conducting a detailed post-mortem to ensure that all the root and contributing causes of this event are understood and addressed promptly.

Google Cloud Platform
RESOLVED: Incident 18001 - Cloud PubSub experiencing missing subscription metrics. Additionally, some Dataflow jobs with PubSub sources appear as if they are not consuming any messages.

The issue with Cloud PubSub causing watermark increase in Dataflow jobs has been resolved for all affected projects as of Tue, 2018-02-20 05:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18001 - Cloud PubSub experiencing missing subscription metrics. Additionally, some Dataflow jobs with PubSub sources appear as if they are not consuming any messages.

The watermarks of the affected Dataflow jobs using PubSub are now returning to normal.

Google Cloud Platform
UPDATE: Incident 18001 - Cloud PubSub experiencing missing subscription metrics. Additionally, some Dataflow jobs with PubSub sources appear as if they are not consuming any messages.

We are experiencing an issue with Cloud PubSub beginning approximately at 20:00 2018-02-19 US/Pacific. Early investigation indicates that approximately 10-15% of Dataflow jobs are affected by this issue. For everyone who is affected, we apologize for the disruption. We will provide an update by 05:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18001 - We are still investigating an issue with Google Cloud Pub/Sub. We will provide more information by 04:00 US/Pacific.

We are investigating an issue with Cloud PubSub. We will provide more information by 04:00 AM US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are still investigating an issue with Google Cloud Pub/Sub. We will provide more information by 04:00 US/Pacific.
Google Cloud Platform
RESOLVED: Incident 18003 - We are investigating an issue with Google Cloud Networking affecting connectivity in us-central1 and europe-west3. We will provide more information by 12:15pm US/Pacific.

ISSUE SUMMARY
On Sunday 18 January 2018, Google Compute Engine networking experienced a network programming failure. The two impacts of this incident were the autoscaler not scaling instance groups, and migrated and newly-created VMs not communicating with VMs in other zones, for a duration of up to 93 minutes. We apologize for the impact this event had on your applications and projects, and we will carefully investigate the causes and implement measures to prevent recurrences.

DETAILED DESCRIPTION OF IMPACT
On Sunday 18 January 2018, Google Compute Engine network provisioning updates failed in the following zones:
europe-west3-a for 34 minutes (09:52 AM to 10:21 AM PT)
us-central1-b for 79 minutes (09:57 AM to 11:16 AM PT)
asia-northeast1-a for 93 minutes (09:53 AM to 11:26 AM PT)
Propagation of Google Compute Engine networking configuration for newly created and migrated VMs is handled by two components. The first is responsible for providing a complete list of VMs, networks, firewall rules, and scaling decisions. The second component provides a stream of updates for the components in a specific zone. During the affected period, the first component failed to return data. VMs in the affected zones were unable to communicate with newly-created or migrated VMs in another zone in the same private GCE network. VMs in the same zone were unaffected because they are updated by the streaming component. The autoscaler service also relies upon data from the failed first component to scale instance groups; without updates from that component, it could not make scaling decisions for the affected zones.

ROOT CAUSE
A stuck process failed to provide updates to the Compute Engine control plane. Automatic failover was unable to force-stop the process, and manual failover was required to restore normal operation.

REMEDIATION AND PREVENTION
The engineering team was alerted when the propagation of network configuration information stalled. They manually failed over to the replacement task to restore normal operation of the data persistence layer. To prevent another occurrence of this incident, we are taking the following actions: we will stop VM migrations if the configuration data is stale, and we will modify the data persistence layer to re-resolve its peers during long-running processes, to allow failover to replacement tasks.

Google Cloud Platform
UPDATE: Incident 18001 - Datastore experiencing elevated error rates

We continue to see significant improvement across all Datastore services. We are continuing to monitor and will provide another update by 15:00 PST.

Google Cloud Platform
RESOLVED: Incident 18001 - App Engine Admin-API experiencing high error rates

The issue with App Engine Admin API has been resolved for all affected users as of Thursday, 2018-02-15 13:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18003 - Stackdriver Logging Service Degraded

The issue with Google Stackdriver has been resolved for all affected projects as of 13:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18004 - We are investigating an issue with Google Stackdriver. We will provide more information by 13:15 US/Pacific.

The issue with Google Stackdriver has been resolved for all affected projects as of 13:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18004 - We are investigating an issue with Google Stackdriver. We will provide more information by 13:15 US/Pacific.
Google Cloud Platform
UPDATE: Incident 18001 - Datastore experiencing elevated error rates

We are seeing a near return to baseline; however, we aren't seeing a consistent view of our quota and are investigating. We will provide another update by roughly 13:45 PST.

Google Cloud Platform
RESOLVED: Incident 18035 - Bigquery experiencing high latency rates

The issue with Bigquery has been resolved for all affected projects as of Thursday, 2018-02-15 13:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18003 - App Engine seeing elevated error rates

We're seeing widespread improvement in error rates in many / most regions since ~12:40 PST. We're continuing to investigate and will provide another update by 13:30 PST.

Google Cloud Platform
UPDATE: Incident 18003 - Stackdriver Logging Service Degraded

We are investigating an issue with Stackdriver Logging Service. We will provide more information by 13:25 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18003 - App Engine seeing elevated error rates

The issue with App Engine has been resolved for all affected projects as of 12:44 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18035 - Bigquery experiencing high latency rates

We are investigating an issue with Bigquery. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18003 - App Engine seeing elevated error rates

We are investigating an issue with App Engine. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - Datastore experiencing elevated error rates

We are investigating an issue with Datastore. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - The issue with reduced URLFetch availability has been resolved for all affected projects as of 2018-02-10 18:55 US/Pacific. We will provide a more detailed analysis of this incident once we have compl...

The issue with reduced URLFetch availability has been resolved for all affected projects as of 2018-02-10 18:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google App Engine reduced URLFetch availability starting at 17:15 US/Pacific. We are currently rolling out a configuration change to mitigate this issue. We will ...

We are investigating an issue with Google App Engine reduced URLFetch availability starting at 17:15 US/Pacific. We are currently rolling out a configuration change to mitigate this issue. We will provide another status update by 2018-02-10 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google App Engine reduced URLFetch availability starting at 17:15 PT. We will provide more information by 19:00 US/Pacific.

We are investigating an issue with Google App Engine reduced URLFetch availability starting at 17:15 PT. We will provide more information by 19:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18001 - The issue with Google Stackdriver Logging has been resolved for all affected projects as of 20:02 US/Pacific.

On Wednesday 31 January 2018, some Google Cloud services experienced elevated errors and latency on operations that required inter-data center network traffic for a duration of 72 minutes. The impact was visible during three windows: between 18:20 and 19:08 PST, between 19:10 and 19:29, and again between 19:45 and 19:50. Network traffic between the public internet and Google's data centers was not affected by this incident. The root cause of this incident was an error in a configuration update to the system that allocates network capacity for traffic between Google data centers. To prevent a recurrence, we will improve the automated checks that we run on configuration changes to detect problems before release. We will also be improving the monitoring of the canary to detect problems before global rollout of changes to the configuration.

Google Cloud Platform
RESOLVED: Incident 18002 - We are investigating an issue with Google Kubernetes Engine. We will provide more information by 20:15 US/Pacific.

On Wednesday 31 January 2018, some Google Cloud services experienced elevated errors and latency on operations that required inter-data center network traffic for a duration of 72 minutes. The impact was visible during three windows: between 18:20 and 19:08 PST, between 19:10 and 19:29, and again between 19:45 and 19:50. Network traffic between the public internet and Google's data centers was not affected by this incident. The root cause of this incident was an error in a configuration update to the system that allocates network capacity for traffic between Google data centers. To prevent a recurrence, we will improve the automated checks that we run on configuration changes to detect problems before release. We will also be improving the monitoring of the canary to detect problems before global rollout of changes to the configuration.

Google Cloud Platform
RESOLVED: Incident 18001 - The issue with Google App Engine services has been resolved for all affected projects as of 21:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements ...

On Wednesday 31 January 2018, some Google Cloud services experienced elevated errors and latency on operations that required inter-data center network traffic for a duration of 72 minutes. The impact was visible during three windows: between 18:20 and 19:08 PST, between 19:10 and 19:29, and again between 19:45 and 19:50. Network traffic between the public internet and Google's data centers was not affected by this incident. The root cause of this incident was an error in a configuration update to the system that allocates network capacity for traffic between Google data centers. To prevent a recurrence, we will improve the automated checks that we run on configuration changes to detect problems before release. We will also be improving the monitoring of the canary to detect problems before global rollout of changes to the configuration.

Google Cloud Platform
UPDATE: Incident 18003 - We are investigating an issue with Google Container Engine that is affecting cluster creation and upgrade. We will provide more information by 11:45 US/Pacific.

The issue with cluster creation and upgrade has been resolved for all affected projects as of 11:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18003 - We are investigating an issue with Google Container Engine that is affecting cluster creation and upgrade. We will provide more information by 11:45 US/Pacific.

We are investigating an issue with Google Container Engine that is affecting cluster creation and upgrade. We will provide more information by 11:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Stackdriver Trace API.

The issue with elevated failure rates in the Stackdriver Trace API has been resolved for all affected projects as of Friday, 2018-02-02 09:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Stackdriver Trace API.

Stackdriver Trace API continues to exhibit an elevated rate of request failures. Our engineering teams have put a mitigation in place, and will proceed to address the cause of the issue. We will provide another update by 10:45 AM US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Stackdriver Trace API.

StackDriver Trace API is experiencing an elevated rate of request failures. There is no workaround at this time. Our engineering teams have identified the cause of the issue and are working on a mitigation. We will provide another update by 9:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Stackdriver Trace API. We will provide more information by 09:00 US/Pacific.

We are investigating an issue with Google Stackdriver Trace API. We will provide more information by 09:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:00 US/Pacific.

On Thursday 25 January 2018, while expanding the Google network serving the us-central1 region, a configuration change unexpectedly triggered packet loss and reduced network bandwidth for inter-region data transfer and replication traffic to and from the region. The network impact was observable during two windows, between 11:03 and 12:40 PST, and again between 14:27 and 15:27 PST. The principal user-visible impact was a degradation in the performance of some Google Cloud services that require cross data center traffic. There was no impact to network traffic between the us-central1 region and the internet, or to traffic between Compute Engine VM instances. We sincerely apologize for the impact of this incident on your application or service. We have performed a detailed analysis of root cause and taken careful steps to ensure that this type of incident will not recur.

Google Cloud Platform
UPDATE: Incident 18001 - We are experiencing an issue with multiple Google Cloud Platform Services, beginning at approximately 18:30 US/Pacific. The situation for most products has been completely resolved, but some products ...

The issue with Google App Engine has been resolved for all affected projects as of 21:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We experienced an issue with Google Cloud Storage, beginning at approximately 18:30 US/Pacific. The situation was completely resolved by 20:08 PST.

We experienced an issue with Google Cloud Storage, beginning at approximately 18:30 US/Pacific. The situation was completely resolved by 20:08 PST.

Google Cloud Platform
UPDATE: Incident 18001 - The issue with Google Stackdriver Logging has been resolved for all affected projects as of 20:02 US/Pacific.

The issue with Google Stackdriver Logging has been resolved for all affected projects as of 20:02 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. For everyone who is affected, we apologize for the disruption.

Google Cloud Platform
UPDATE: Incident 18001 - We are experiencing an issue with multiple Google Cloud Platform Services, beginning at approximately 18:30 US/Pacific. The situation is improving for most products, but some products are...

Google App Engine has mostly recovered and our engineering team is working to completely mitigate the issue. We will provide an update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Kubernetes Engine. We will provide more information by 20:15 US/Pacific.

The issue with Google Kubernetes Engine has been resolved for all affected projects as of 20:33 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. For everyone who is affected, we apologize for the disruption.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Stackdriver Logging. We will provide more information by 20:00 US/Pacific.

The issue with Google Stackdriver Logging should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 20:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Container Engine. We will provide more information by 20:15 US/Pacific.

The issue with Google Kubernetes Engine should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 20:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18001 - We are experiencing an issue with multiple Google Cloud Platform Services, beginning at approximately 18:30 US/Pacific. The situation is improving for most products, but some products are...

Google App Engine is recovering and our engineering team is working to completely mitigate the issue. We will provide an update by 20:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Compute Engine. We will provide more information by 20:30 US/Pacific.

We are investigating an issue with Google Compute Engine. We will provide more information by 20:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Compute Engine. We will provide more information by 20:30 US/Pacific.
Google Cloud Platform
UPDATE: Incident 18001 - We are experiencing an issue with multiple Google Cloud Platform Services, beginning at approximately 18:30 US/Pacific. The situation is improving for most products, but some products are...

We are experiencing an issue with multiple Google Cloud Platform Services, beginning at approximately 18:30 US/Pacific. The situation is improving for most products, but some products are still reporting errors. For everyone who is affected, we apologize for the disruption. We will provide an update by 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Container Engine. We will provide more information by 20:15 US/Pacific.

We are investigating an issue with Google Container Engine. We will provide more information by 20:15 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Stackdriver Logging. We will provide more information by 20:00 US/Pacific.

We are investigating an issue with Google Stackdriver Logging. We will provide more information by 20:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18001 - We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 13:00 US/Pacific.

We have updated our estimated impact to an average of 2% and a peak of 3.6% GCS global error rate, based on a more thorough review of monitoring data and server logs. The initial estimate of impact was based on an internal assessment hosted in a single region; subsequent investigation revealed that Google's redundancy and rerouting infrastructure worked as intended and dramatically reduced the user-visible impact of the event to GCS's global user base. The 2% average error rate is measured over the duration of the event, from its beginning at 11:05 PST to its conclusion at 12:24 PST.

Google Cloud Platform
RESOLVED: Incident 18004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:00 US/Pacific.

The issue with Google Cloud Networking has been resolved for all affected users as of 2018-01-25 13:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:00 US/Pacific.

The issue with Google Cloud Networking should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 2018-01-25 14:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18001 - We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 13:00 US/Pacific.

The issue with Google Cloud Storage error rates has been resolved for all affected users as of 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18034 - We are investigating an issue with BigQuery in the US region. We will provide more information by 13:00 US/Pacific.

The issue with BigQuery error rates has been resolved for all affected users as of 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 13:00 US/Pacific.

The issue with Google Cloud Storage in the US regions should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 2018-01-25 13:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:00 US/Pacific.

We are investigating an issue with Google Cloud Networking. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 13:00 US/Pacific.

We are experiencing an issue with Google Cloud Storage beginning Thursday, 2018-01-25 11:23 US/Pacific. Current investigation indicates that approximately 100% of customers in the US region are affected and we expect that for affected users the service is mostly or entirely unavailable at this time. For everyone who is affected, we apologize for the disruption. We will provide an update by 2018-01-25 13:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18034 - We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 12:45 US/Pacific.

We are investigating an issue with Google Cloud Storage in the US region. We will provide more information by 12:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18003 - We are investigating an issue with Google Cloud Networking affecting connectivity in us-central1 and europe-west3. We will provide more information by 12:15pm US/Pacific.

The issue with Google Cloud Networking connectivity has been resolved for all affected zones in europe-west3, us-central1, and asia-northeast1 as of 11:26am US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18003 - We are investigating an issue with Google Cloud Networking affecting connectivity in us-central1 and europe-west3. We will provide more information by 12:15pm US/Pacific.

We are investigating an issue with Google Cloud Networking affecting connectivity in us-central1 and europe-west3. We will provide more information by 12:15pm US/Pacific.

Google Cloud Platform
UPDATE: Incident 18002 - We have resolved an issue with Google Cloud Networking.

The issue with packet loss from North and South America regions to Asia regions has been resolved for all affected users as of 1:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Networking. We will provide more information by 14:30 US/Pacific.

Our Engineering Team believes it has identified the root cause of the packet loss and is working on a mitigation.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:15 US/Pacific.

We are experiencing an issue with packet loss from Google North and South America regions to Asia regions beginning at 12:30 US/Pacific. Current data indicates that approximately 40% of packets are affected by this issue. For everyone who is affected, we apologize for the disruption. We will provide an update by 1:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18002 - We are investigating an issue with Google Cloud Networking. We will provide more information by 13:15 US/Pacific.

We are investigating an issue with Google Cloud Networking. We will provide more information by 13:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18001 - Some GKE 'create/delete' cluster operations failing

The issue with GKE 'create/delete' cluster operations failing has been resolved for all affected users as of 8:27 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18001 - Some GKE 'create/delete' cluster operations failing

The issue with GKE 'create/delete' cluster operations failing should be resolved for the majority of users. We will continue monitoring to confirm full resolution. We will provide more information by 11:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - Some GKE 'create/delete' cluster operations failing

The issue with GKE 'create/delete' cluster operations failing should be resolved for the majority of users. We will continue monitoring to confirm full resolution. We will provide more information by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - Some GKE 'create/delete' cluster operations failing

We are still investigating an issue with some GKE 'create/delete' cluster operations failing. We will provide more information by 08:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - GKE 'create cluster' operations failing

We are investigating an issue with GKE 'create cluster' operations failing. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - The issue with Google Cloud Load Balancing (GCLB) creation has been resolved for all affected projects as of 20:22 US/Pacific.

The issue with Google Cloud Load Balancing (GCLB) creation has been resolved for all affected projects as of 20:22 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 20:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 20:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 20:00 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 20:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 19:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 19:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 18:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 19:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 18:00 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 18:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 18:00 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 18:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 17:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 16:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 17:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 16:30 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 16:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Google Cloud Networking. We will provide more information by 16:00 US/Pacific.

We are investigating an issue with Google Cloud Networking that is preventing the creation of new GCLB load balancers and updating of existing ones. We will provide more information by 16:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18001 - Cloud Spanner issues from 12:45 to 13:26 Pacific time have been resolved.

Cloud Spanner issues from 12:45 to 13:26 Pacific time have been resolved.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Cloud Spanner. Customers should be back to normal service as of 13:26 Pacific. We will provide more information by 15:00 US/Pacific.

An incident with Cloud Spanner availability started at 12:45 Pacific time and has been addressed. The service is restored for all customers as of 13:26. Another update will be posted before 15:00 Pacific time to confirm the service health.

Google Cloud Platform
UPDATE: Incident 18001 - We are investigating an issue with Cloud Spanner. Customers should be back to normal service as of 13:26 Pacific. We will provide more information by 15:00 US/Pacific.
Google Cloud Platform
UPDATE: Incident 17002 - The issue with Cloud Machine Learning Engine's Create Version has been resolved for all affected users as of 2017-12-15 10:55 US/Pacific.

The issue with Cloud Machine Learning Engine's Create Version has been resolved for all affected users as of 2017-12-15 10:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17009 - The issue with the Google App Engine Admin API has been resolved for all affected users as of Thursday, 2017-12-14 12:15 US/Pacific.

The issue with the Google App Engine Admin API has been resolved for all affected users as of Thursday, 2017-12-14 12:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:00 US/Pacific.

The issue with Cloud Storage elevated error rate has been resolved for all affected projects as of Friday 2017-11-30 16:10 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 16:30 US/Pacific.

The Cloud Storage service is experiencing less than 10% error rate. We will provide another status update by 2017-11-30 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 15:00 US/Pacific.

The Cloud Storage service is experiencing less than 10% error rate. We will provide another status update by 2017-11-30 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - We are investigating an issue with Google Cloud Storage. We will provide more information by 15:00 US/Pacific.

The Cloud Storage service is experiencing less than 10% error rate. We will provide another status update by YYYY-mm-dd HH:MM US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - From 10:58 to 11:57 US/Pacific, GCE VM instances experienced packet loss from GCE instances to the Internet. The issue has been mitigated for all affected projects.

From 10:58 to 11:57 US/Pacific, GCE VM instances experienced packet loss from GCE instances to the Internet. The issue has been mitigated for all affected projects.

Google Cloud Platform
UPDATE: Incident 17004 - We are investigating an issue with Google Cloud Networking. We will provide more information by 07:00 US/Pacific.

The issue with Google Compute Engine VM instances losing connectivity has been resolved for all affected users as of Friday, 2017-11-17 7:17am US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17008 - App Engine increasingly showing 5xx

The issue with App Engine has been resolved for all affected projects as of 4:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17008 - App Engine increasingly showing 5xx

The issue also affected projects in other regions but should be resolved for the majority of projects. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17007 - The Memcache service has recovered from a disruption between 12:30 US/Pacific and 15:30 US/Pacific.

ISSUE SUMMARY On Monday 6 November 2017, the App Engine Memcache service experienced unavailability for applications in all regions for 1 hour and 50 minutes. We sincerely apologize for the impact of this incident on your application or service. We recognize the severity of this incident and will be undertaking a detailed review to fully understand the ways in which we must change our systems to prevent a recurrence. DETAILED DESCRIPTION OF IMPACT On Monday 6 November 2017 from 12:33 to 14:23 PST, the App Engine Memcache service experienced unavailability for applications in all regions. Some customers experienced elevated Datastore latency and errors while Memcache was unavailable. At this time, we believe that all the Datastore issues were caused by surges of Datastore activity due to Memcache being unavailable. When Memcache failed, if an application sent a surge of Datastore operations to specific entities or key ranges, then Datastore may have experienced contention or hotspotting, as described in https://cloud.google.com/datastore/docs/best-practices#designing_for_scale. When the outage ended, Datastore experienced elevated load on its servers due to a surge in traffic. Some applications in the US experienced elevated latency on gets between 14:23 and 14:31, and elevated latency on puts between 14:23 and 15:04. Customers running Managed VMs experienced failures of all HTTP requests and App Engine API calls during this incident. Customers using App Engine Flexible Environment, which is the successor to Managed VMs, were not impacted. ROOT CAUSE The App Engine Memcache service requires a globally consistent view of the current serving datacenter for each application in order to guarantee strong consistency when traffic fails over to alternate datacenters. The configuration which maps applications to datacenters is stored in a global database. The incident occurred when the specific database entity that holds the configuration became unavailable for both reads and writes following a configuration update. App Engine Memcache is designed in such a way that the configuration is considered invalid if it cannot be refreshed within 20 seconds. When the configuration could not be fetched by clients, Memcache became unavailable. REMEDIATION AND PREVENTION Google received an automated alert at 12:34. Following normal practices, our engineers immediately looked for recent changes that may have triggered the incident. At 12:59, we attempted to revert the latest change to the configuration file. This configuration rollback required an update to the configuration in the global database, which also failed. At 14:21, engineers were able to update the configuration by sending an update request with a sufficiently long deadline. This caused all replicas of the database to synchronize and allowed clients to read the mapping configuration. As a temporary mitigation, we have reduced the number of readers of the global configuration, which avoids the write contention that led to the unavailability during this incident. Engineering projects are already under way to regionalize this configuration and thereby limit the blast radius of similar failure patterns in the future.
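
The hotspotting pattern described above typically appears when an application treats Memcache as its only throttle in front of Datastore, so a cache outage turns directly into a synchronized surge of reads against the same entities or key ranges. The sketch below is a minimal, hypothetical read-through helper showing one way to degrade more gracefully; cache_get, cache_set, datastore_lookup, and DatastoreContention are placeholders standing in for whatever client libraries and error types an application actually uses, not App Engine APIs.

import random
import time

# Hypothetical placeholders -- substitute the real cache and Datastore clients.
def cache_get(key): ...
def cache_set(key, value, ttl_s): ...
def datastore_lookup(key): ...

class DatastoreContention(Exception):
    # Stand-in for the contention/hotspotting errors described above.
    pass

def read_through(key, ttl_s=60, max_attempts=4):
    # Read-through cache that stays polite to Datastore when the cache is down:
    # on a cache miss or a cache outage, the Datastore lookup is retried with
    # exponential backoff plus jitter so that many instances losing their cache
    # at the same moment do not hit the same entity or key range in lockstep.
    try:
        cached = cache_get(key)
        if cached is not None:
            return cached
    except Exception:
        pass  # cache layer unavailable: fall through to Datastore once, not in a loop

    delay = 0.05
    for attempt in range(max_attempts):
        try:
            value = datastore_lookup(key)
            try:
                cache_set(key, value, ttl_s)
            except Exception:
                pass  # best effort: a failed cache write should not fail the read
            return value
        except DatastoreContention:
            if attempt == max_attempts - 1:
                raise
            time.sleep(delay + random.uniform(0, delay))  # jittered backoff
            delay *= 2

The jittered backoff is the important piece: it spreads out retries from many instances that all lost their cache at the same moment, which is exactly the contention scenario described in the root cause above.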

Google Cloud Platform
RESOLVED: Incident 17007 - The Memcache service has recovered from a disruption between 12:30 US/Pacific and 15:30 US/Pacific.

The issue with Memcache availability has been resolved for all affected projects as of 15:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation. This is the final update for this incident.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is still recovering. We will provide more information by 16:00 US/Pacific.

The Memcache service is still recovering from the outage. The rate of errors continues to decrease and we expect a full resolution of this incident in the near future. We will provide an update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is recovering now. We will provide more information by 15:30 US/Pacific.

The issue with Memcache and MVM availability should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide an update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service experienced a disruption and is being normalized now. We will provide more information by 15:15 US/Pacific.

We are experiencing an issue with Memcache availability beginning at November 6, 2017 at 12:30 pm US/Pacific. At this time we are gradually ramping up traffic to Memcache and we see that the rate of errors is decreasing. Other services affected by the outage, such as MVM instances, should be normalizing in the near future. We will provide an update by 15:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service is currently experiencing a disruption. We will provide more information by 14:30 US/Pacific.

We are experiencing an issue with Memcache availability beginning at November 6, 2017 at 12:30 pm US/Pacific. Our Engineering Team believes they have identified the root cause of the errors and is working to mitigate. We will provide an update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - The Memcache service is currently experiencing a disruption. We will provide more information by 14:30 US/Pacific.

We are experiencing an issue with Memcache availability beginning at November 6, 2017 at 12:30 pm US/Pacific. Current data indicates that all projects using Memcache are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 14:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Investigating incident with AppEngine and Memcache.

We are experiencing an issue with Memcache availability beginning at November 6, 2017 at 12:30 pm US/Pacific. Current data indicates that all projects using Memcache are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Investigating incident with AppEngine and Memcache.

We are investigating an issue with Google App Engine and Memcache. We will provide more information by 13:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17018 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy, as well as the issue with Cloud SQL admin activities, have been resolved for all affected users as of 20:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17019 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy, as well as the issue with Cloud SQL admin activities, have been resolved for all affected users as of 2017-10-30 20:45 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17018 - We are investigating an issue with Google Cloud SQL. We see failures for Cloud SQL connections from App Engine and connections using the Cloud SQL Proxy. We are also observing elevated failure rates f...

We are continuing to experience an issue with Cloud SQL connectivity, affecting only connections from App Engine and connections using the Cloud SQL Proxy, beginning at 2017-10-30 17:00 US/Pacific. We are also observing elevated failure rates for Cloud SQL admin activities (using the Cloud SQL portion of the Cloud Console UI, using gcloud beta sql, directly using the Admin API, etc.). Our Engineering Team believes they have identified the root cause and mitigation effort is currently underway. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another update by 2017-10-30 21:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - We are investigating an issue with GKE. We will provide more information by 16:00 US/Pacific.

We are investigating an issue involving the inability of pods to be rescheduled on Google Container Engine (GKE) nodes after Docker reboots or crashes. This affects GKE versions 1.6.11, 1.7.7, 1.7.8 and 1.8.1. Our engineering team will roll out a fix next week; no further updates will be provided here. If experienced, the issue can be mitigated by manually restarting the affected nodes.
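
For reference, "manually restarting the affected nodes" amounts to resetting each node's underlying Compute Engine VM, for example with gcloud compute instances reset or via the Compute Engine API. The snippet below is a minimal, hypothetical sketch of the API variant; the project, zone, and instance names are placeholders.

# Requires the google-api-python-client package and Application Default Credentials.
import googleapiclient.discovery

# Placeholder identifiers -- substitute the affected node's project, zone, and name.
PROJECT = "your-project"
ZONE = "us-central1-b"
NODE = "gke-your-cluster-default-pool-node-1"

compute = googleapiclient.discovery.build("compute", "v1")

# Reset (hard restart) the underlying Compute Engine VM of the affected GKE node
# so that its pods can be scheduled onto a healthy node process again.
operation = compute.instances().reset(project=PROJECT, zone=ZONE, instance=NODE).execute()
print("reset requested:", operation.get("name"))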

Google Cloud Platform
RESOLVED: Incident 17005 - Elevated GCS Errors from Canada

ISSUE SUMMARY Starting Thursday 12 October 2017, Google Cloud Storage clients located in the Northeast of North America experienced up to a 10% error rate for a duration of 21 hours and 35 minutes when fetching objects stored in multi-regional buckets in the US. We apologize for the impact of this incident on your application or service. The reliability of our service is a top priority and we understand that we need to do better to ensure that incidents of this type do not recur. DETAILED DESCRIPTION OF IMPACT Between Thursday 12 October 2017 12:47 PDT and Friday 13 October 2017 10:12 PDT, Google Cloud Storage clients located in the Northeast of North America experienced up to a 10% rate of 503 errors and elevated latency. Some users experienced higher error rates for brief periods. This incident only impacted requests to fetch objects stored in multi-regional buckets in the US; clients were able to mitigate impact by retrying. The percentage of total global requests to Cloud Storage that experienced errors was 0.03%. ROOT CAUSE Google ensures balanced use of its internal networks by throttling outbound traffic at the source host in the event of congestion. This incident was caused by a bug in an earlier version of the job that reads Cloud Storage objects from disk and streams data to clients. Under high traffic conditions, the bug caused these jobs to incorrectly throttle outbound network traffic even though the network was not congested. Google had previously identified this bug and was in the process of rolling out a fix to all Google datacenters. At the time of the incident, Cloud Storage jobs in a datacenter in Northeast North America that serves requests to some Canadian and US clients had not yet received the fix. This datacenter is not a location for customer buckets (https://cloud.google.com/storage/docs/bucket-locations), but objects in multi-regional buckets can be served from instances running in this datacenter in order to optimize latency for clients. REMEDIATION AND PREVENTION The incident was first reported by a customer to Google on Thursday 12 October 14:59 PDT. Google engineers determined root cause on Friday 13 October 09:47 PDT. We redirected Cloud Storage traffic away from the impacted region at 10:08 and the incident was resolved at 10:12. We have now rolled out the bug fix to all regions. We will also add external monitoring probes for all regional points of presence so that we can more quickly detect issues of this type.
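
As the postmortem notes, the 503s were retryable, so clients that wrapped object fetches in a retry loop saw far less impact. Below is a minimal, hypothetical sketch of such a retry with truncated exponential backoff and jitter; the bucket and object names are placeholders, the example assumes a publicly readable object fetched over plain HTTPS, and the official Cloud Storage client libraries already implement comparable retry behavior.

import random
import time
import urllib.error
import urllib.request

# Hypothetical bucket and object names; assumes the object is publicly readable.
OBJECT_URL = "https://storage.googleapis.com/example-bucket/example-object"

def fetch_with_backoff(url, max_attempts=5, base_delay=0.5):
    # Fetch a Cloud Storage object, retrying 5xx responses (such as the 503s
    # described above) with truncated exponential backoff plus jitter.
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code < 500 or attempt == max_attempts - 1:
                raise  # client errors are not retried; the last attempt re-raises
        except urllib.error.URLError:
            if attempt == max_attempts - 1:
                raise
        time.sleep(min(base_delay * 2 ** attempt, 8) + random.uniform(0, 0.25))

# Usage (hypothetical): data = fetch_with_backoff(OBJECT_URL)

Capping the backoff (here at 8 seconds) keeps the worst-case added latency bounded while still shedding load from the affected serving jobs.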

Google Cloud Platform
UPDATE: Incident 17002 - Jobs not terminating

The issue with Cloud Dataflow in which batch jobs are stuck and cannot be terminated has been resolved for all affected projects as of Wednesday, 2017-10-18 02:58 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Jobs not terminating

A fix for the issue with Cloud Dataflow in which batch jobs are stuck and cannot be terminated is currently being rolled out. We expect a full resolution in the near future. We will provide another status update by Wednesday, 2017-10-18 03:45 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17007 - Stackdriver Uptime Check Alerts Not Firing

The issue with Stackdriver Uptime Check Alerts not firing has been resolved for all affected projects as of Monday, 2017-10-16 13:08 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17007 - Stackdriver Uptime Check Alerts Not Firing

We are investigating an issue with Stackdriver Uptime Check Alerts. We will provide more information by 13:15 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17005 - Elevated GCS Errors from Canada

The issue with Google Cloud Storage request failures for users in Canada and Northeast North America has been resolved for all affected users as of Friday, 2017-10-13 10:08 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - Elevated GCS Errors from Canada

We are investigating an issue with Google Cloud Storage users in Canada and Northeast North America experiencing HTTP 503 failures. We will provide more information by 10:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17004 - Elevated GCS errors in us-east1

The issue with the GCS service has been resolved for all affected users as of 14:31 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17004 - Elevated GCS errors in us-east1

The issue with GCS service should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17004 - Elevated GCS errors in us-east1

We are investigating an issue that occurred with GCS starting at 13:19 PDT. We will provide more information by 14:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

The issue with Project Creation failing with "Unknown error" has been resolved for all affected users as of Tuesday, 2017-10-03 22:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

The issue with Google Stackdriver has been resolved for all affected users as of 2017-10-03 16:28 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are continuing to investigate the Google Stackdriver issue. Graphs are fully restored, but alerting policies and uptime checks are still degraded. We will provide another update at 17:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are continuing to investigate the Google Stackdriver issue. In addition to graph and alerting policy unavailability, uptime checks are not completing successfully. We believe we have isolated the root cause and are working on a resolution, and we will provide another update at 16:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Stackdriver console unavailable

We are investigating an issue with Google Stackdriver that is causing charts and alerting policies to be unavailable. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

Project creation is experiencing a 100% error rate on requests. We will provide another status update by Tuesday, 2017-10-03 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17007 - Project creation failure

We are investigating an issue with Project creation. We will provide more information by 12:40PM US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - Errors creating new Stackdriver accounts and adding new projects to existing Stackdriver accounts.

The issue with Google Stackdriver has been resolved for all affected projects as of Friday, 2017-09-29 15:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - Activity Stream not showing new Activity Logs

We are currently investigating an issue with the Cloud Console's Activity Stream not showing new Activity Logs.

Google Cloud Platform
RESOLVED: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

The issue with Pub/Sub subscription creation has been resolved for all affected projects as of 08:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

We are experiencing an issue with Pub/Sub subscription creation beginning at 2017-09-13 06:30 US/Pacific. Current data indicates that approximately 12% of requests are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Google Cloud Pub/Sub partially unavailable.

We are investigating an issue with Google Pub/Sub. We will provide more information by 07:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

ISSUE SUMMARY On Tuesday 29 August and Wednesday 30 August 2017, Google Cloud Network Load Balancing and Internal Load Balancing could not forward packets to backend instances if they were live-migrated. This incident initially affected instances in all regions for 30 hours and 22 minutes. We apologize for the impact this had on your services. We are particularly cognizant of the failure of a system used to increase reliability and the long duration of the incident. We have completed an extensive postmortem to learn from the issue and improve Google Cloud Platform. DETAILED DESCRIPTION OF IMPACT From 13:56 PDT on Tuesday 29 August 2017 until 20:18 on Wednesday 30 August 2017, Cloud Network Load Balancer and Internal Load Balancer in all regions were unable to reach any instance that live-migrated during that period. Instances which did not experience live-migration during this period were not affected. Our internal investigation shows that approximately 2% of instances using Network Load Balancing or Internal Load Balancing were affected by the issue. ROOT CAUSE Live-migration transfers a running VM from one host machine to another host machine within the same zone. All VM properties and attributes remain unchanged, including internal and external IP addresses, instance metadata, block storage data and volumes, OS and application state, network settings, network connections, and so on. In this case, a change in the internal representation of networking information in VM instances caused an inconsistency between two values, both of which were supposed to hold the external and internal virtual IP addresses of load balancers. When an affected instance was live-migrated, the instance was deprogrammed from the load balancer because of the inconsistency. This made it impossible for load balancers that used the instance as a backend to look up the destination IP address of the instance following its migration, which in turn caused all packets destined to that instance to be dropped at the load balancer level. REMEDIATION AND PREVENTION The issue was first detected through reports of missing backend connectivity made to the GCP support team at 23:30 on Tuesday. At 00:28 on Wednesday, two Cloud Network engineering teams were paged to investigate the issue. Detailed investigation continued until 08:07, when the configuration change responsible for the issue was confirmed. The rollback of the new configuration was completed by 08:32, at which point no new live-migrations would trigger the issue. At 08:56, Google engineers started running a program to fix all mismatched network information, and all affected instances were restored to a healthy status by 20:18. To prevent a recurrence, Google engineers are working to enhance automated canary testing that simulates live-migration events, to improve detection of load balancer packet loss, and to enforce stricter controls on deploying configuration changes that alter internal representations.

Google Cloud Platform
RESOLVED: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

The issue with Network Load Balancers has been resolved for all affected projects as of 20:18 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

The issue with BigQuery queries failing has been resolved for all affected users as of 12:05pm US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

The BigQuery service is experiencing a 16% error rate on queries. We will provide another status update by 12:00pm US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18033 - We are investigating an issue with BigQuery queries failing starting at 10:15am PT

We are investigating an issue with BigQuery queries failing starting at 10:15am PT

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We wanted to send another update with better formatting. We will provide another update on resolving affected instances by 12:00 PDT. Affected customers can also mitigate their affected instances with the following procedure (which causes the Network Load Balancer to be reprogrammed), using the gcloud tool or the Compute Engine API. NB: No modification to the existing load balancer configuration is necessary, but a temporary TargetPool needs to be created.
1) Create a new TargetPool.
2) Add the affected VMs in a region to the new TargetPool.
3) Wait for the VMs to start working in their existing load balancer configuration.
4) Delete the new TargetPool.
DO NOT delete the existing load balancer config, including the old target pool. It is not necessary to create a new ForwardingRule. Example:
1) gcloud compute target-pools create dummy-pool --project=<your_project> --region=<region>
2) gcloud compute target-pools add-instances dummy-pool --instances=<instance1,instance2,...> --project=<your_project> --region=<region> --instances-zone=<zone>
3) (Wait)
4) gcloud compute target-pools delete dummy-pool --project=<your_project> --region=<region>

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

Our first mitigation has completed at this point and no new instances should be affected. We are slowly working through fixing affected customers. Affected customers can also mitigate their affected instances with the following procedure (which causes the Network Load Balancer to be reprogrammed), using the gcloud tool or the Compute Engine API. NB: No modification to the existing load balancer configuration is necessary, but a temporary TargetPool needs to be created.
1) Create a new TargetPool.
2) Add the affected VMs in a region to the new TargetPool.
3) Wait for the VMs to start working in their existing load balancer configuration.
4) Delete the new TargetPool.
DO NOT delete the existing load balancer config, including the old target pool. It is not necessary to create a new ForwardingRule. Example:
1) gcloud compute target-pools create dummy-pool --project=<your_project> --region=<region>
2) gcloud compute target-pools add-instances dummy-pool --instances=<instance1,instance2,...> --project=<your_project> --region=<region> --instances-zone=<zone>
3) (Wait)
4) gcloud compute target-pools delete dummy-pool --project=<your_project> --region=<region>

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers. The configuration change to mitigate this issue has been rolled out and we are working on further measures to completely resolve the issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 10:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. The configuration change to mitigate this issue has been rolled out and we are working on further measures to completely resolve the issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 09:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. We have identified the event that triggers this issue and are rolling back a configuration change to mitigate it. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with Cloud Network Load Balancers connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. Mitigation work is still in progress. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-central1, europe-west1, asia-northeast1 and asia-east1 not being able to connect to backends. Our previous actions did not resolve the issue. We are pursuing alternative solutions. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 07:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an issue with a subset of Network Load Balancers in regions us-east1, us-west1 and asia-east1 not being able to connect to backends. Our Engineering Team has reduced the scope of possible root causes and is still investigating. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 06:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. The investigation is still ongoing. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. We have ruled out several possible failure scenarios. The investigation is still ongoing. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 04:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 04:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are experiencing an intermittent issue with Network Load Balancer connectivity to backends. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 03:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network load balancer connectivity. We will provide more information by 02:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network connectivity. We will provide more information by 01:50 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Issue with network connectivity

We are investigating an issue with network connectivity. We will provide more information by 01:20 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

ISSUE SUMMARY On Tuesday 15 August 2017, Google Cloud SQL experienced issues in the europe-west1 zones for a duration of 3 hours and 35 minutes. During this time, new connections from Google App Engine (GAE) or Cloud SQL Proxy would time out and return an error. In addition, Cloud SQL connections with ephemeral certs that had been open for more than one hour timed out and returned an error. We apologize to our customers whose projects were affected – we are taking immediate action to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT On Tuesday 15 August 2017 from 17:20 to 20:55 PDT, 43.1% of Cloud SQL instances located in europe-west1 could not be managed through the Google Cloud SQL Admin API, for example to create instances or make changes. Customers who connected from GAE or used the Cloud SQL Proxy (which includes most connections from Google Container Engine) were denied new connections to their database. ROOT CAUSE The issue surfaced through a combination of a spike in error rates internal to the Cloud SQL service and a lack of available resources in the Cloud SQL control plane for europe-west1. By way of background, the Cloud SQL system uses a database to store metadata for customer instances. This metadata is used for validating new connections. Validation will fail if the load on the database is heavy. In this case, Cloud SQL’s automatic retry logic overloaded the control plane and consumed all the available Cloud SQL control plane processing in europe-west1. This in turn caused the Cloud SQL Proxy and the frontend client-server pairing to reject connections when ACLs and certificate information stored in the Cloud SQL control plane could not be accessed. REMEDIATION AND PREVENTION Google engineers were paged at 17:20 when automated monitoring detected an increase in control plane errors. Initial troubleshooting steps did not sufficiently isolate the issue and reduce the database load. Engineers then disabled non-critical control plane services for Cloud SQL to shed load and allow the service to catch up. They then began a rollback to the previous configuration to bring the system back to a healthy state. This incident has surfaced technical issues that hinder our intended level of service and reliability for the Cloud SQL service. We have begun a thorough investigation of similar potential failure patterns in order to avoid this type of service disruption in the future. We are adding additional monitoring to quickly detect the metadata database timeouts that caused the control plane outage. We are also working to make the Cloud SQL control plane services more resilient to metadata database latency so that the service does not call the database directly for connection validation. We realize this event may have impacted your organization and we apologize for this disruption. Thank you again for your business with Google Cloud SQL.
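Because connections with ephemeral certs that had been open for more than an hour also failed, application-side reconnect logic with backoff is what keeps workloads degraded rather than down during an event like this. The snippet below is a minimal sketch, assuming a MySQL instance reached through a locally running Cloud SQL Proxy and the PyMySQL driver; the credentials, database name, and retry budget are placeholders, not details from this incident.

    import time
    import pymysql

    def connect_with_retry(attempts=5, delay=2.0):
        """Open a connection through the local Cloud SQL Proxy, retrying with backoff."""
        last_error = None
        for attempt in range(attempts):
            try:
                return pymysql.connect(
                    host="127.0.0.1",        # Cloud SQL Proxy listening locally
                    port=3306,
                    user="app_user",         # placeholder credentials
                    password="app_password",
                    database="app_db",
                    connect_timeout=10,
                )
            except pymysql.err.OperationalError as exc:
                last_error = exc
                time.sleep(delay * (2 ** attempt))  # exponential backoff
        raise last_error

    conn = connect_with_retry()
    try:
        with conn.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchone())
    finally:
        conn.close()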

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

The issue with Cloud SQL connectivity affecting connections from App Engine and connections using the Cloud SQL Proxy in europe-west1 has been resolved for all affected projects as of 20:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are continuing to experience an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. Engineering is working on mitigating the situation. We will provide an update by 21:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are continuing to experience an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. Engineering is currently working on mitigating the situation. We will provide an update by 20:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17017 - Cloud SQL connectivity issue in Europe-West1

We are experiencing an issue with Cloud SQL connectivity beginning at Tuesday, 2017-08-15 17:20 US/Pacific. Current investigation indicates that instances running in Europe-West1 are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage object overwrites. For buckets with Google Cloud Functions or Object Change Notification enabled, notifications were not being triggered when a new object overwrote an existing object. Other object operations are not affected. Buckets with Google Cloud Pub/Sub configured are also not affected. The root cause has been found and confirmed by a partial rollback. A full rollback is expected to be completed within an hour. Between now and the full rollback, affected buckets are expected to begin triggering on updates; triggering may be intermittent initially and is expected to stabilize when the rollback is complete. We will provide another update by 14:00 with any new details. ETA for resolution: 14:00 US/Pacific time.
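Because only the notification path is affected, the overwrite itself still lands in the bucket; reading the object's generation metadata directly is a quick way to confirm an update went through without waiting for a trigger. A minimal sketch, assuming the google-cloud-storage Python client; the bucket and object names are placeholders.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-bucket")     # placeholder bucket name
    blob = bucket.get_blob("example-object")     # placeholder object name

    if blob is None:
        print("object not found")
    else:
        # Every overwrite produces a new generation, so a changed generation
        # confirms the update reached Cloud Storage even if no trigger fired.
        print("generation:", blob.generation)
        print("metageneration:", blob.metageneration)
        print("updated:", blob.updated)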

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage function triggering on object update. Apiary notifications on object updates were also not sent during this issue. Other object operations are not reporting problems. The root cause has been found and confirmed by a partial rollback. A full rollback is expected to be completed within an hour. Between now and the full rollback, affected functions are expected to begin triggering on updates; triggering may be intermittent initially and is expected to stabilize when the rollback is complete. We will provide another update by 14:00 with any new details. ETA for resolution: 13:30 US/Pacific time.

Google Cloud Platform
UPDATE: Incident 17003 - GCS triggers not fired when objects are updated

We are investigating an issue with Google Cloud Storage function triggering on object update. Other object operations are not reporting problems. We will provide more information by 12:00 US/Pacific

Google Cloud Platform
RESOLVED: Incident 18032 - BigQuery Disabled for Projects

ISSUE SUMMARY On 2017-07-26, BigQuery delivered error messages for 7% of queries and 15% of exports for a duration of two hours and one minute. It also experienced elevated failures for streaming inserts for one hour and 40 minutes. If your service or application was affected, we apologize – this is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve BigQuery’s performance and availability. DETAILED DESCRIPTION OF IMPACT On 2017-07-26 from 13:45 to 15:45 US/PDT, BigQuery jobs experienced elevated failures at a rate of 7% to 15%, depending on the operation attempted. Overall 7% of queries, 15% of exports, and 9% of streaming inserts failed during this event. These failures occurred in 12% of customer projects. The error rates for affected projects varied from 2% to 69% for exports, over 50% for queries, and up to 28.5% for streaming inserts. Affected customers saw an error message stating that their project had “not enabled BigQuery”. ROOT CAUSE Prior to executing a BigQuery job, Google’s Service Manager validates that the project requesting the job has BigQuery enabled. The Service Manager consists of several components, including a redundant data store for project configurations and a permissions module which inspects configurations. The project configuration data is being migrated to a new format and a new version of the data store, and as part of that migration, the permissions module is being updated to use the new format. In keeping with production best practices, this migration is being performed in stages separated by time. The root cause of this event was that, during one stage of the rollout, configuration data for two GCP datacenters was migrated before the corresponding permissions module for BigQuery was updated. As a result, the permissions module in those datacenters began erroneously reporting that projects running there no longer had BigQuery enabled. Thus, while both BigQuery and the underlying data stores were unchanged, requests to BigQuery from affected projects received an error message indicating that they had not enabled BigQuery. REMEDIATION AND PREVENTION Google’s BigQuery on-call engineering team was alerted by automated monitoring within 15 minutes of the beginning of the event, at 13:59. Subsequent investigation determined at 14:17 that multiple projects were experiencing BigQuery validation failures, and the cause of the errors was identified at 14:46 as the changed permissions. Once the root cause of the errors was understood, Google engineers focused on mitigating the user impact by configuring BigQuery in the affected locations to skip the erroneous permissions check. This change was first tested in a portion of the affected projects beginning at 15:04 and confirmed to be effective at 15:29. That mitigation was then rolled out to all affected projects and was complete by 15:44. Finally, with mitigations in place, the Google engineering team worked to safely roll back the data migration; this work completed at 23:33 and the permissions check mitigation was removed, closing the incident. Google engineering has created 26 high priority action items to prevent a recurrence of this condition and to better detect and more quickly mitigate similar classes of issues in the future.
These action items include increasing the auditing of BigQuery’s use of Google’s Service Manager, improving the detection and alerting of the conditions that caused this event, and improving the response of Google engineers to similar events. In addition, the core issue that affected the BigQuery backend has already been fixed. Google is committed to quickly and continually improving our technology and operations to prevent service disruptions. We appreciate your patience and apologize again for the impact to your organization.
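From the client side, this incident surfaced as a permission-style rejection rather than a transient server error, and the two deserve different handling: retrying a "project has not enabled BigQuery" response is unlikely to help, while 5xx responses usually warrant a backoff-and-retry. A minimal sketch of that distinction, assuming the google-cloud-bigquery Python client and the google-api-core exception types; the query and retry budget are placeholders.

    import time
    from google.cloud import bigquery
    from google.api_core import exceptions

    client = bigquery.Client()

    def run_query(sql, attempts=4, delay=1.0):
        """Run a query, retrying transient 5xx errors but not permission errors."""
        for attempt in range(attempts):
            try:
                return list(client.query(sql).result())
            except (exceptions.ServiceUnavailable, exceptions.InternalServerError):
                # Transient backend trouble: back off and try again.
                time.sleep(delay * (2 ** attempt))
            except exceptions.Forbidden as exc:
                # Errors like "project has not enabled BigQuery" are not transient;
                # surface them instead of retrying.
                raise RuntimeError(f"BigQuery rejected the request: {exc}") from exc
        raise RuntimeError("query did not succeed after retries")

    print(run_query("SELECT 1 AS x"))  # placeholder query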

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

The issue with BigQuery access errors has been resolved for all affected projects as of 16:15 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

The issue with BigQuery errors should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

The BigQuery engineers have identified a possible workaround to the issue affecting the platform and are deploying it now. Next update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

At this time BigQuery is experiencing a partial outage, reporting that the service is not available for the project. Engineers are currently investigating the issue.

Google Cloud Platform
UPDATE: Incident 18032 - BigQuery Disabled for Projects

We are investigating an issue with BigQuery. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud VPN in us-west1

The issue with connectivity to Cloud VPN and External IPs in us-west1 has been resolved for all affected projects as of 14:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

The issue with listing projects in the Google Cloud Console has been resolved as of 2017-07-21 07:11 PDT.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

The issue with Google Cloud Console errors should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

We are experiencing an issue with Google Cloud Console returning errors beginning at Fri, 2017-07-21 02:50 US/Pacific. Early investigation indicates that users may see errors when listing projects in Google Cloud Console and via the API. Some other pages in Google Cloud Console may also display an error; refreshing the pages may help. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Issues with Google Cloud Console

We are investigating an issue with Google Cloud Console. We will provide more information by 03:45 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18031 - BigQuery API server returning errors

The issue with BigQuery API returning errors has been resolved for all affected users as of 04:10 US/Pacific. We apologize for the impact that this incident had on your application.

Google Cloud Platform
UPDATE: Incident 18031 - BigQuery API server returning errors

The issue with Google BigQuery should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18031 - BigQuery API server returning errors

We are investigating an issue with Google BigQuery. We will provide more information by 04:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18030 - Streaming API errors

ISSUE SUMMARY On Wednesday 28 June 2017, streaming data into Google BigQuery experienced elevated error rates for a period of 57 minutes. We apologize to all users whose data ingestion pipelines were affected by this issue. We understand the importance of reliability for a process as crucial as data ingestion and are taking committed actions to prevent a similar recurrence in the future. DETAILED DESCRIPTION OF IMPACT On Wednesday 28 June 2017 from 18:00 to 18:20 and from 18:40 to 19:17 US/Pacific time, BigQuery's streaming insert service returned an increased error rate to clients for all projects. The proportion varied over time, with failures peaking at 43% of streaming requests returning HTTP response code 500 or 503. Data streamed from clients that experienced errors and had no retry logic was not saved into target tables during this period. ROOT CAUSE Streaming requests are routed to different datacenters for processing based on the table ID of the destination table. A sudden increase in traffic to the BigQuery streaming service, combined with diminished capacity in a datacenter, resulted in that datacenter returning a significant number of errors for tables whose IDs landed in that datacenter. Other datacenters processing streaming data into BigQuery were unaffected. REMEDIATION AND PREVENTION Google engineers were notified of the event at 18:20 and immediately started to investigate the issue. The first set of errors had subsided, but starting at 18:40 error rates increased again. At 19:17 Google engineers redirected traffic away from the affected datacenter. The table IDs in the affected datacenter were redistributed to the remaining, healthy streaming servers and error rates began to subside. To prevent the issue from recurring, Google engineers are improving the load balancing configuration so that spikes in streaming traffic can be more equitably distributed amongst the available streaming servers. Additionally, engineers are adding further monitoring, as well as tuning existing monitoring, to decrease the time it takes to alert engineers of issues with the streaming service. Finally, Google engineers are evaluating rate-limiting strategies for the backends to prevent them from becoming overloaded.
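As the impact description notes, only rows sent by clients without retry logic were lost, so a small amount of backoff-and-retry around streaming inserts is usually enough to ride out an error spike like this one. A minimal sketch, assuming the google-cloud-bigquery Python client; the table ID, example row, and retry budget are placeholders.

    import time
    from google.api_core import exceptions
    from google.cloud import bigquery

    client = bigquery.Client()
    TABLE_ID = "my_project.my_dataset.my_table"  # placeholder table

    def stream_rows(rows, attempts=5, delay=1.0):
        """Stream rows with exponential backoff on transient failures."""
        for attempt in range(attempts):
            try:
                errors = client.insert_rows_json(TABLE_ID, rows)
                if not errors:
                    return
                print("per-row insert errors, retrying:", errors)
            except exceptions.GoogleAPICallError as exc:  # e.g. 500 or 503 responses
                print("transient failure, retrying:", exc)
            time.sleep(delay * (2 ** attempt))
        raise RuntimeError("rows could not be streamed after retries")

    stream_rows([{"event": "signup", "ts": "2017-06-28T18:05:00Z"}])  # placeholder row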

Google Cloud Platform
UPDATE: Incident 17004 - Stackdriver Uptime Monitoring - alerting policies with uptime check health conditions will not fire or resolve

We are experiencing an issue with Stackdriver Uptime Monitoring beginning at approximately 17:00 US/Pacific on Thursday, 2017-07-06: alerting policies with uptime check health conditions will not fire or resolve, and latency charts on the uptime dashboard will be missing. Current data indicates that all projects are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

We are experiencing an intermittent issue with Google Cloud Storage - JSON API requests are failing with 5XX errors (XML API is unaffected) beginning at Thursday, 2017-07-06 16:50:40 US/Pacific. Current data indicates that approximately 70% of requests globally are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

We are investigating an issue with Google Cloud Storage. We will provide more information by 18:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17001 - Google Cloud Storage elevated error rates

The issue with degraded availability for some Google Cloud Storage objects has been resolved for all affected projects. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with Cloud SQL V2 incorrect reports of 'Unable to Failover' state has been resolved for all affected instances as of 12:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with Cloud SQL V2 incorrect reports of 'Unable to Failover' state should be resolved for some instances and we expect a full resolution in the near future. We will provide another status update by 12:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

Our Engineering Team believes they have identified the root cause of the incorrect reports of 'Unable to Failover' state and is working to mitigate. We will provide another status update by 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

The issue with some Cloud SQL V2 instances failing to create should be resolved for some projects and we expect a full resolution in the near future. At this time we do not have additional information about HA instances reporting an incorrect 'Unable to Failover' state. We will provide another status update by 11:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Pub/Sub admin operations failing

The issue with Cloud Pub/Sub admin operations failing has been resolved for all affected users as of 10:10 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17016 - Cloud SQL V2 instance failing to create

We are experiencing an issue with Cloud SQL V2 beginning at Thursday, 2017-06-29 08:45 US/Pacific: some instances may fail to create, and HA instances may report an incorrect 'Unable to Failover' state. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 11:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Cloud Pub/Sub admin operations failing

We are investigating an issue where Cloud Pub/Sub admin operations are failing. We will provide more information by 10:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Google Cloud Storage elevated error rates

Google engineers are continuing to restore the service. Error rates are continuing to decrease. We will provide another status update by 15:00 June 29 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing has been resolved for all affected projects on Tuesday, 2017-06-13 10:12 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 18030 - Streaming API errors

The issue with BigQuery Streaming insert has been resolved for all affected users as of 19:17 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18030 - Streaming API errors

Our Engineering Team believes they have identified the root cause of the errors and have mitigated the issue by 19:17 US/Pacific. We will provide another status update by 20:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18030 - Streaming API errors

We are investigating an issue with BigQuery Streaming insert. We will provide more information by 19:35 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Google Cloud Storage elevated error rates

Starting at 07:30 PST on Tuesday 27 June 2017, Google Cloud Storage experienced degraded availability for some objects based in us-central1 buckets (Regional, Nearline, Coldline, Durable Reduced Availability) and US multi-region buckets. Between 08:00 and 18:00 PST the error rate was ~3.5%; error rates have since decreased to 0.5%. Errors are expected to be consistent for objects. Customers do not need to make any changes at this time. Google engineers have identified the root cause and are working to restore the service. If your service or application is affected, we apologize. We will provide another status update by 05:00 June 29 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18029 - BigQuery Increased Error Rate

ISSUE SUMMARY For 10 minutes on Wednesday 14 June 2017, Google BigQuery experienced increased error rates for both streaming inserts and most API methods due to their dependency on metadata read operations. To our BigQuery customers whose businesses were impacted by this event, we sincerely apologize. We are taking immediate steps to improve BigQuery’s performance and availability. DETAILED DESCRIPTION OF IMPACT Starting at 10:43am US/Pacific, global error rates for BigQuery streaming inserts and API calls dependent upon metadata began to rapidly increase. The error rate for streaming inserts peaked at 100% by 10:49am. Within that same window, the error rate for metadata operations increased to a height of 80%. By 10:54am the error rates for both streaming inserts and metadata operations had returned to normal operating levels. During the incident, affected BigQuery customers would have experienced a noticeable elevation in latency on all operations, as well as increased “Service Unavailable” and “Timeout” API call failures. While BigQuery streaming inserts and metadata operations were the most severely impacted, other APIs also exhibited elevated latencies and error rates, though to a much lesser degree. For API calls returning status code 2xx, the operation completed with successful data ingestion and integrity. ROOT CAUSE On Wednesday 14 June 2017, BigQuery engineers completed the migration of BigQuery's metadata storage to an improved backend infrastructure. This effort was the culmination of work to incrementally migrate BigQuery read traffic over the course of two weeks. As the new backend infrastructure came online, there was one particular type of read traffic that had not yet migrated to the new metadata storage. This caused a sudden spike of that read traffic to the new backend. The spike meant that the new storage backend had to process a large volume of incoming requests as well as allocate resources to handle the increased load. Initially the backend was able to process requests with elevated latency, but all available resources were eventually exhausted, which led to API failures. Once the backend was able to complete the load redistribution, it began to free up resources to process existing requests and work through its backlog. BigQuery operations continued to experience elevated latency and errors for another five minutes as the large backlog of requests from the first five minutes of the incident was processed. REMEDIATION AND PREVENTION Our monitoring systems worked as expected and alerted us to the outage within 6 minutes of the error spike. By this time, the underlying root cause had already passed. Google engineers have created nine high priority action items and three lower priority action items as a result of this event to better prevent, detect, and mitigate a recurrence of a similar event. The most significant of these priorities is to modify the BigQuery service to successfully handle a similar root cause event. This will include adjusting capacity parameters to better handle backend failures and improving caching and retry logic. Each of the 12 action items created from this event has already been assigned to an engineer and is underway.

Google Cloud Platform
RESOLVED: Incident 17008 - Network issue in asia-northeast1

ISSUE SUMMARY On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, datacenters in the asia-northeast1 region experienced a loss of network connectivity for a total of 62 minutes. We apologize for the impact this issue had on our customers, and especially to those customers with deployments across multiple zones in the asia-northeast1 region. We recognize we failed to deliver the regional reliability that multiple zones are meant to achieve. We recognize the severity of this incident and have completed an extensive internal postmortem. We thoroughly understand the root causes and no datacenters are at risk of recurrence. We are working to add mechanisms to prevent and mitigate this class of problem in the future. We have prioritized this work, and in the coming weeks our engineering team will complete the action items we have generated from the postmortem. DETAILED DESCRIPTION OF IMPACT On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, network connectivity to and from Google Cloud services running in the asia-northeast1 region was unavailable for 62 minutes. This issue affected all Google Cloud Platform services in that region, including Compute Engine, App Engine, Cloud SQL, Cloud Datastore, and Cloud Storage. All external connectivity to the region was affected during this time frame, while internal connectivity within the region was not affected. In addition, inbound requests from external customers originating near Google’s Tokyo point of presence intended for Compute or Container Engine HTTP Load Balancing were lost for the initial 12 minutes of the outage. Separately, Internal Load Balancing within asia-northeast1 remained degraded until 10:23. ROOT CAUSE At the time of the incident, Google engineers were upgrading the network topology and capacity of the region; a configuration error caused the existing links to be decommissioned before the replacement links could provide connectivity, resulting in a loss of connectivity for the asia-northeast1 region. Although the replacement links were already commissioned and appeared to be ready to serve, a network-routing protocol misconfiguration meant that the routes through those links were not able to carry traffic. As Google's global network grows continuously, we make upgrades and updates reliably by using automation for each step and, where possible, applying changes to only one zone at any time. asia-northeast1 was the last region whose topology was unsupported by automation; manual work was required to align its topology with the rest of our regional deployments (which would, in turn, allow automation to function properly in the future). This manual change mistakenly did not follow the same per-zone restrictions required by standard policy or automation, which meant the entire region was affected simultaneously. In addition, some customers with deployments across multiple regions that included asia-northeast1 experienced problems with HTTP Load Balancing due to a failure to detect that the backends were unhealthy. When a network partition occurs, HTTP Load Balancing normally detects this automatically within a few seconds and routes traffic to backends in other regions. In this instance, due to a performance feature being tested in this region at the time, the mechanism that usually detects network partitions did not trigger, and HTTP Load Balancing continued to attempt to assign traffic until our on-call engineers responded.
Lastly, the Internal Load Balancing outage was exacerbated due to a software defined networking component which was stuck in a state where it was not able to provide network resolution for instances in the load balancing group. REMEDIATION AND PREVENTION Google engineers were paged by automated monitoring within one minute of the start of the outage, at 08:24 PDT. They began troubleshooting and declared an emergency incident 8 minutes later at 08:32. The issue was resolved when engineers reconnected the network path and reverted the configuration back to the last known working state at 09:22. Our monitoring systems worked as expected and alerted us to the outage promptly. The time-to-resolution for this incident was extended by the time taken to perform the rollback of the network change, as the rollback had to be performed manually. We are implementing a policy change that any manual work on live networks be constrained to a single zone. This policy will be enforced automatically by our change management software when changes are planned and scheduled. In addition, we are building automation to make these types of changes in future, and to ensure the system can be safely rolled back to a previous known-good configuration at any time during the procedure. The fix for the HTTP Load Balancing performance feature that caused it to incorrectly believe zones within asia-northeast1 were healthy will be rolled out shortly. SUPPORT COMMUNICATIONS During the incident, customers who had originally contacted Google Cloud Support in Japanese did not receive periodic updates from Google as the event unfolded. This was due to a software defect in the support tooling — unrelated to the incident described earlier. We have already fixed the software defect, so all customers who contact support will receive incident updates. We apologize for the communications gap to our Japanese-language customers. RELIABILITY SUMMARY One of our biggest pushes in GCP reliability at Google is a focus on careful isolation of zones from each other. As we encourage users to build reliable services using multiple zones, we also treat zones separately in our production practices, and we enforce this isolation with software and policy. Since we missed this mark—and affecting all zones in a region is an especially serious outage—we apologize. We intend for this incident report to accurately summarize the detailed internal post-mortem that includes final assessment of impact, root cause, and steps we are taking to prevent an outage of this form occurring again. We hope that this incident report demonstrates the work we do to learn from our mistakes to deliver on this commitment. We will do better. Sincerely, Benjamin Lutch | VP Site Reliability Engineering | Google

Google Cloud Platform
RESOLVED: Incident 17006 - Network issue in asia-northeast1

ISSUE SUMMARY On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, datacenters in the asia-northeast1 region experienced a loss of network connectivity for a total of 62 minutes. We apologize for the impact this issue had on our customers, and especially to those customers with deployments across multiple zones in the asia-northeast1 region. We recognize we failed to deliver the regional reliability that multiple zones are meant to achieve. We recognize the severity of this incident and have completed an extensive internal postmortem. We thoroughly understand the root causes and no datacenters are at risk of recurrence. We are working to add mechanisms to prevent and mitigate this class of problem in the future. We have prioritized this work, and in the coming weeks our engineering team will complete the action items we have generated from the postmortem. DETAILED DESCRIPTION OF IMPACT On Thursday 8 June 2017, from 08:24 to 09:26 US/Pacific Time, network connectivity to and from Google Cloud services running in the asia-northeast1 region was unavailable for 62 minutes. This issue affected all Google Cloud Platform services in that region, including Compute Engine, App Engine, Cloud SQL, Cloud Datastore, and Cloud Storage. All external connectivity to the region was affected during this time frame, while internal connectivity within the region was not affected. In addition, inbound requests from external customers originating near Google’s Tokyo point of presence intended for Compute or Container Engine HTTP Load Balancing were lost for the initial 12 minutes of the outage. Separately, Internal Load Balancing within asia-northeast1 remained degraded until 10:23. ROOT CAUSE At the time of the incident, Google engineers were upgrading the network topology and capacity of the region; a configuration error caused the existing links to be decommissioned before the replacement links could provide connectivity, resulting in a loss of connectivity for the asia-northeast1 region. Although the replacement links were already commissioned and appeared to be ready to serve, a network-routing protocol misconfiguration meant that the routes through those links were not able to carry traffic. As Google's global network grows continuously, we make upgrades and updates reliably by using automation for each step and, where possible, applying changes to only one zone at any time. asia-northeast1 was the last region whose topology was unsupported by automation; manual work was required to align its topology with the rest of our regional deployments (which would, in turn, allow automation to function properly in the future). This manual change mistakenly did not follow the same per-zone restrictions required by standard policy or automation, which meant the entire region was affected simultaneously. In addition, some customers with deployments across multiple regions that included asia-northeast1 experienced problems with HTTP Load Balancing due to a failure to detect that the backends were unhealthy. When a network partition occurs, HTTP Load Balancing normally detects this automatically within a few seconds and routes traffic to backends in other regions. In this instance, due to a performance feature being tested in this region at the time, the mechanism that usually detects network partitions did not trigger, and HTTP Load Balancing continued to attempt to assign traffic until our on-call engineers responded.
Lastly, the Internal Load Balancing outage was exacerbated due to a software defined networking component which was stuck in a state where it was not able to provide network resolution for instances in the load balancing group. REMEDIATION AND PREVENTION Google engineers were paged by automated monitoring within one minute of the start of the outage, at 08:24 PDT. They began troubleshooting and declared an emergency incident 8 minutes later at 08:32. The issue was resolved when engineers reconnected the network path and reverted the configuration back to the last known working state at 09:22. Our monitoring systems worked as expected and alerted us to the outage promptly. The time-to-resolution for this incident was extended by the time taken to perform the rollback of the network change, as the rollback had to be performed manually. We are implementing a policy change that any manual work on live networks be constrained to a single zone. This policy will be enforced automatically by our change management software when changes are planned and scheduled. In addition, we are building automation to make these types of changes in future, and to ensure the system can be safely rolled back to a previous known-good configuration at any time during the procedure. The fix for the HTTP Load Balancing performance feature that caused it to incorrectly believe zones within asia-northeast1 were healthy will be rolled out shortly. SUPPORT COMMUNICATIONS During the incident, customers who had originally contacted Google Cloud Support in Japanese did not receive periodic updates from Google as the event unfolded. This was due to a software defect in the support tooling — unrelated to the incident described earlier. We have already fixed the software defect, so all customers who contact support will receive incident updates. We apologize for the communications gap to our Japanese-language customers. RELIABILITY SUMMARY One of our biggest pushes in GCP reliability at Google is a focus on careful isolation of zones from each other. As we encourage users to build reliable services using multiple zones, we also treat zones separately in our production practices, and we enforce this isolation with software and policy. Since we missed this mark—and affecting all zones in a region is an especially serious outage—we apologize. We intend for this incident report to accurately summarize the detailed internal post-mortem that includes final assessment of impact, root cause, and steps we are taking to prevent an outage of this form occurring again. We hope that this incident report demonstrates the work we do to learn from our mistakes to deliver on this commitment. We will do better. Sincerely, Benjamin Lutch | VP Site Reliability Engineering | Google
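The reports above note that the automatic cross-region failover in HTTP Load Balancing did not engage during this event. Applications that call region-scoped endpoints can add a coarse client-side fallback of their own; the sketch below is a generic illustration rather than part of Google's remediation, and the endpoint URLs, path, and timeout are hypothetical placeholders.

    import requests

    # Hypothetical regional endpoints for the same service, in preference order.
    ENDPOINTS = [
        "https://asia-northeast1.example.com",  # primary region
        "https://us-central1.example.com",      # fallback region
    ]

    def fetch_with_regional_fallback(path="/api/data", timeout=3.0):
        """Try each regional endpoint in order and return the first success."""
        last_error = None
        for base in ENDPOINTS:
            try:
                response = requests.get(base + path, timeout=timeout)
                response.raise_for_status()
                return response
            except requests.RequestException as exc:
                last_error = exc  # region unreachable or erroring; try the next one
        raise last_error

    print(fetch_with_regional_fallback().status_code)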

Google Cloud Platform
UPDATE: Incident 18029 - BigQuery Increased Error Rate

The BigQuery service was experiencing a 78% error rate on streaming operations and up to 27% error rates on other operations from 10:43 to 11:03 US/Pacific time. This issue has been resolved for all affected projects as of 10:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

The Google Cloud Console issue that was preventing the users from changing their language preferences has been resolved as of 06:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

The Google Cloud Console issue that is preventing the users from changing their language preferences is ongoing. Our Engineering Team is working on it. We will provide another status update by 06:00 US/Pacific with current details. A known workaround is to change the browser language.

Google Cloud Platform
UPDATE: Incident 17004 - Cloud console: changing language preferences

We are investigating an issue with the cloud console. Users are unable to change their language preferences. We will provide more information by 04:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - High Latency in App Engine

ISSUE SUMMARY On Wednesday 7 June 2017, Google App Engine experienced highly elevated serving latency and timeouts for a duration of 138 minutes. If your service or application was affected by the increase in latency, we sincerely apologize – this is not the level of reliability and performance we expect of our platform, and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT On Wednesday 7 June 2017, from 13:34 PDT to 15:52 PDT, 7.7% of active applications on the Google App Engine service experienced severely elevated latency; requests that typically take under 500ms to serve were taking many minutes. This elevated latency would have resulted either in users seeing additional latency when waiting for responses from the affected applications, or in 500 errors if the application handlers timed out. The individual application logs would have shown this increased latency or increases in “Request was aborted after waiting too long to attempt to service your request” error messages. ROOT CAUSE The incident was triggered by an increase in memory usage across all App Engine appservers in a datacenter in us-central. An App Engine appserver is responsible for creating instances to service requests for App Engine applications. When its memory usage increases to unsustainable levels, it will stop some of its current instances so that they can be rescheduled on other appservers in order to balance out the memory requirements across the datacenter. This transfer of an App Engine instance between appservers consumes CPU resources, a signal used by the master scheduler of the datacenter to detect when it must further rebalance traffic across more appservers (such as when traffic to the datacenter increases and more App Engine instances are required). Normally, these memory management techniques are transparent to customers, but in isolated cases they can be exacerbated by large amounts of additional traffic being routed to the datacenter, which requires more instances to service user requests. The increased load and memory requirement from scheduling new instances, combined with rescheduling instances from appservers with high memory usage, resulted in most appservers being considered “busy” by the master scheduler. User requests needed to wait for an available instance to either be transferred or created before they could be serviced, which resulted in the increased latency seen at the application level. REMEDIATION AND PREVENTION Latencies began to increase at 13:34 PDT. Google engineers were alerted to the increase in latency at 13:45 PDT and identified a subset of traffic that was causing the increase in memory usage. At 14:08, they were able to limit this subset of traffic to an isolated partition of the datacenter to ease the memory pressure on the remaining appservers. Latency for new requests started to improve as soon as this traffic was isolated; however, tail latency was still elevated due to the large backlog of requests that had accumulated since the incident started. This backlog was eventually cleared by 15:52 PDT. To prevent further recurrence, traffic to the affected datacenter was rebalanced with another datacenter. To prevent future recurrence of this issue, Google engineers will be re-evaluating the resource distribution in the us-central datacenters where App Engine instances are hosted.
Additionally, engineers will be developing stronger alerting thresholds based on memory pressure signals so that traffic can be redirected before latency increases. And finally, engineers will be evaluating changes to the scheduling strategy used by the master scheduler responsible for scheduling appserver work to prevent this situation in the future.
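
In practical terms, affected applications surfaced this to their own callers as slow responses or 500 errors. A common client-side mitigation is to bound each request with a timeout and retry with exponential backoff. The sketch below is illustrative only and not part of the incident report; the endpoint URL, attempt count and delays are assumptions to adapt to your own service.

```python
# Minimal sketch (assumed endpoint and thresholds): bound request time and
# retry with exponential backoff instead of letting calls hang during an
# App Engine latency event.
import time
import urllib.error
import urllib.request

def fetch_with_backoff(url, attempts=4, timeout=10):
    delay = 1.0
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            # Retry only server-side failures such as the 500s described above.
            if err.code < 500 or attempt == attempts - 1:
                raise
        except urllib.error.URLError:
            # Covers timeouts and transient connection failures.
            if attempt == attempts - 1:
                raise
        time.sleep(delay)
        delay *= 2

# Hypothetical App Engine endpoint, shown only for illustration.
body = fetch_with_backoff("https://your-app-id.appspot.com/health")
```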

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for the majority of projects and we expect a full resolution in the next 12 hours. We will provide another status update by 14:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 2am PST with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

The issue with Cloud Logging exports to BigQuery failing should be resolved for some projects and we expect a full resolution in the near future. We will provide another status update by 11pm PST with current details.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are still investigating the issue with Cloud Logging exports to BigQuery failing. We will provide more information by 9pm PST. Currently, we are also working on restoring Cloud Logging exports to BigQuery.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are still investigating the issue with Cloud Logging exports to BigQuery failing. We will provide more information by 7pm PST. Currently, we are also working on restoring Cloud Logging exports to BigQuery.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are working on restoring Cloud Logging exports to BigQuery. We will provide further updates at 6pm PT.

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

Cloud Logging exports to BigQuery failed from 13:19 to approximately 14:30 with loss of logging data. We have stopped the exports while we work on fixing the issue, so BigQuery will not reflect the latest logs. This incident only affects robot accounts using HTTP requests. We are working hard to restore service, and we will provide another update in one hour (by 5pm PT).

Google Cloud Platform
RESOLVED: Incident 17003 - Cloud Logging export to BigQuery failing.

We are investigating an issue with Cloud Logging exports to BigQuery failing. We will provide more information by 5pm PT.

Google Cloud Platform
RESOLVED: Incident 17008 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 has been restored for all affected users as of 10:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 17006 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 has been restored for all affected users as of 10:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 should be restored for all affected users and we expect a full resolution in the near future. We will provide another status update by 09:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

Network connectivity in asia-northeast1 should be restored for all affected users and we expect a full resolution in the near future. We will provide another status update by 09:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

Google Cloud Platform services in region asia-northeast1 are experiencing connectivity issues. We will provide another status update by 9:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

Google Cloud Platform services in region asia-northeast1 are experiencing connectivity issues. We will provide another status update by 9:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17008 - Network issue in asia-northeast1

We are investigating an issue with network connectivity in asia-northeast1. We will provide more information by 09:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17006 - Network issue in asia-northeast1

We are investigating an issue with network connectivity in asia-northeast1. We will provide more information by 09:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17005 - High Latency in App Engine

The issue with Google App Engine displaying elevated error rate has been resolved for all affected projects as of 15:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - High Latency in App Engine

The issue with Google App Engine displaying elevated error rate should be resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - High Latency in App Engine

We have identified an issue with App Engine that is causing increased latency to a portion of applications in the US Central region. Mitigation is under way. We will provide more information about the issue by 15:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix has been fully deployed. We confirmed that the issue has been fixed.

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix is still being deployed. We will provide another status update by 2017-05-26 19:00 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The fix is currently being deployed. We will provide another status update by 2017-05-26 16:00 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

The root cause has been identified. The fix is currently being deployed. We will provide another status update by 2017-05-26 14:30 US/Pacific

Google Cloud Platform
UPDATE: Incident 5204 - Google App Engine Incident #5204

We are currently investigating a problem that is causing app engine apps to experience an infinite redirect loop when users log in. We will provide another status update by 2017-05-26 13:30 US/Pacific

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

The issue with Cloud SQL First Generation automated backups should be resolved for all affected instances as of 12:52 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

We are still actively working on this issue. We are aiming to make the final fix available in production by the end of the day today.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

The daily backups continue to be taken and we expect the final fix to be available tomorrow. We'll provide another update on Wednesday, May 24 at 10:00 US/Pacific as originally planned.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

We are still actively working on this issue. We are aiming to make the fix available in production by the end of the day today. We will provide another update by end of day if anything changes.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL First Generation automated backups experiencing errors

Daily backups are being taken for all Cloud SQL First Generation instances. For some instances, backups are being taken outside of defined backup windows. A fix is being tested and will be applied to First Generation instances pending positive test results. We will provide the next update on Wednesday, May 24 at 10:00 US/Pacific, or sooner if anything changes in between.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

The issue with automatic backups for Cloud SQL First Generation instances is mitigated by forcing the backups. We’ll continue with this mitigation until Monday US/Pacific, when a permanent fix will be rolled out. We will provide the next update on Monday US/Pacific, or sooner if anything changes in between.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

The Cloud SQL service is experiencing errors on automatic backups for 7% of Cloud SQL First Generation instances. We’re forcing backups for affected instances as a short-term mitigation. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17015 - Cloud SQL v1 automated backups experiencing errors

We are investigating an issue with Cloud SQL v1 automated backups. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Translate API elevated latency / errors

The issue with Translation Service and other API availability has been resolved for all affected users as of 19:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. This will be the final update on this incident.

Google Cloud Platform
UPDATE: Incident 17001 - Translate API elevated latency / errors

Engineering is continuing to investigate the API service issues impacting Translation Service API availability, looking into potential causes and mitigation strategies. Certain other APIs, such as Speech and Prediction, may also be affected. Next update by 20:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - GKE IP rotation procedure does not include FW rule change

Our Engineering Team has identified a fix for the issue with the GKE IP-rotation feature. We expect the rollout of the fix to begin next Tuesday, 2017-05-16 US/Pacific, completing on Friday, 2017-05-19.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

The issue with Google App Engine deployments and Memcache availability should have been resolved for all affected projects as of 18:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

The issue with Google App Engine deployments and Memcache availability is mitigated. We will provide an update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are continuing to investigate the issue with an underlying component that affects Google App Engine deployments and Memcache availability. The engineering team has tried several unsuccessful remediations and is continuing to investigate potential root causes and fixes. We will provide another update at 17:30 PDT.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are still investigating the issue with an underlying component that affects both Google App Engine deployments and Memcache availability. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - Deployment Failures and Memcache Unavailability Due to Underlying Component

We are currently investigating an issue with an underlying component that affects both Google App Engine deployments and Memcache availability. Deployments will fail intermittently and memcache is returning intermittent errors for a small number of applications. For affected deployments, please try re-deploying while we continue to investigate this issue. For affected memcache users, retries in your application code should allow you to access your memcache intermittently while the underlying issue is being addressed. Work is ongoing to address the issue by the underlying component's engineering team. We will provide our next update at 15:30 PDT.
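
As a concrete illustration of the retry suggestion above, the sketch below assumes the legacy Python App Engine runtime, where google.appengine.api.memcache is available; the fallback loader and retry thresholds are placeholders, not details from this notice.

```python
# Hedged sketch of application-level memcache retries on the legacy Python
# App Engine runtime; "fallback" is any loader for the authoritative data.
import time
from google.appengine.api import memcache

def get_with_retries(key, fallback, attempts=3, delay=0.1):
    """Retry memcache reads briefly, then fall back to the source of truth."""
    for _ in range(attempts):
        value = memcache.get(key)  # returns None on a miss or a transient error
        if value is not None:
            return value
        time.sleep(delay)
        delay *= 2
    return fallback(key)

# Example: fall back to a (hypothetical) datastore read while memcache is flaky.
# value = get_with_retries("greeting", load_greeting_from_datastore)
```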

Google Cloud Platform
UPDATE: Incident 17002 - GKE IP rotation procedure does not include FW rule change

We are experiencing an issue with the GKE IP-rotation feature. Kubernetes features that rely on the proxy (including the kubectl exec and logs commands, as well as exporting cluster metrics into Stackdriver) are broken by the GKE IP-rotation feature. This only affects users who have disabled default SSH access on their nodes. There is a manual fix described [here](https://cloud.google.com/container-engine/docs/ip-rotation#known_issues).

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The issue with BigQuery jobs remaining in a pending state for too long has been resolved for all affected projects as of 03:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The issue with BigQuery jobs remaining in a pending state for too long should be resolved for all new jobs. We will provide another status update by 05:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers are still working on a fix for jobs remaining in a pending state for too long. We will provide another update by 03:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers have identified the root cause of jobs remaining in a pending state for too long and are still working on a fix. We will provide another update by 02:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers have identified the root cause of jobs remaining in a pending state for too long and are applying a fix. We will provide another update by 01:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

BigQuery engineers continue to investigate jobs remaining in a pending state for too long. We will provide another update by 00:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

The BigQuery service is experiencing jobs staying in a pending state for longer than usual, and our engineering team is working on it. We will provide another status update by 23:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18028 - BigQuery import jobs pending

We are investigating an issue with BigQuery import jobs pending. We will provide more information by 23:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Node pool creation failing in multiple zones

We are investigating an issue with Google Container Engine (GKE) that is affecting node-pool creations in the following zones: us-east1-d, asia-northeast1-c, europe-west1-c, us-central1-b, us-west1-a, asia-east1-a, asia-northeast1-a, asia-southeast1-a, us-east4-b, us-central1-f, europe-west1-b, asia-east1-c, us-east1-c, us-west1-b, asia-northeast1-b, asia-southeast1-b, and us-east4-c. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17007 - 502 errors for HTTP(S) Load Balancers

ISSUE SUMMARY On Friday 5 April 2017, requests to the Google Cloud HTTP(S) Load Balancer experienced a 25% error rate for a duration of 22 minutes. We apologize for this incident. We understand that the Load Balancer needs to be very reliable for you to offer a high quality service to your customers. We have taken and will be taking various measures to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT On Friday 5 April 2017 from 01:13 to 01:35 PDT, requests to the Google Cloud HTTP(S) Load Balancer experienced a 25% error rate for a duration of 22 minutes. Clients received 502 errors for failed requests. Some HTTP(S) Load Balancers that were recently modified experienced error rates of 100%. Google paused all configuration changes to the HTTP(S) Load Balancer for three hours and 41 minutes after the incident, until our engineers had understood the root cause. This caused deployments of App Engine Flexible apps to fail during that period. ROOT CAUSE A bug in the HTTP(S) Load Balancer configuration update process caused it to revert to a configuration that was substantially out of date. The configuration update process is controlled by a master server. In this case, one of the replicas of the master servers lost access to Google's distributed file system and was unable to read recent configuration files. Mastership then passed to the server that could not access Google's distributed file system. When the mastership changes, it begins the next configuration push as normal by testing on a subset of HTTP(S) Load Balancers. If this test succeeds, the configuration is pushed globally to all HTTP(S) Load Balancers. If the test fails (as it did in this case), the new master will revert all HTTP(S) Load Balancers to the last "known good" configuration. The combination of a mastership change, lack of access to more recent updates, and the initial test failure for the latest config caused the HTTP(S) Load Balancers to revert to the latest configuration that the master could read, which was substantially out-of-date. In addition, the update with the out-of-date configuration triggered a garbage collection process on the Google Frontend servers to free up memory used by the deleted configurations. The high number of deleted configurations caused the Google Frontend servers to spend a large proportion of CPU cycles on garbage collection, which led to failed health checks and eventual restart of the affected Google Frontend server. Any client requests served by a restarting server received 502 errors. REMEDIATION AND PREVENTION Google engineers were paged at 01:22 PDT. They switched the configuration update process to use a different master server at 01:34 which mitigated the issue for most services within one minute. Our engineers then paused the configuration updates to the HTTP(S) Load Balancer until 05:16 while the root cause was confirmed. To prevent incidents of this type in the future, we are taking the following actions: * Master servers will be configured to never push HTTP(S) Load Balancer configurations that are more than a few hours old. * Google Frontend servers will reject loading a configuration file that is more than a few hours old. * Improve testing for new HTTP(S) Load Balancer configurations so that out-of-date configurations are flagged before being pushed to production. * Fix the issue that caused the master server to fail when reading files from Google's distributed file system.
* Fix the issue that caused health check failures on Google Frontends during heavy garbage collection. Once again, we apologize for the impact that this incident had on your service.
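
For callers of services behind the HTTP(S) Load Balancer, transient 502s like these can often be absorbed with an automatic retry policy for idempotent requests. The sketch below is a general-purpose example using the requests and urllib3 libraries, not something prescribed in this report; the URL, retry count and backoff factor are placeholders.

```python
# Hedged sketch: automatically retry idempotent requests on transient 5xx responses.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry_policy = Retry(
    total=4,                          # up to 4 retries per request
    backoff_factor=0.5,               # 0.5s, 1s, 2s, ... between attempts
    status_forcelist=[502, 503, 504], # retry only on these status codes
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry_policy))

# Placeholder URL for a service fronted by the load balancer.
response = session.get("https://www.example.com/", timeout=10)
print(response.status_code)
```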

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue has been resolved for all affected users as of 00:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in us-east1/asia-northeast1 regions has been resolved. The issue with deployments in us-east1 is mitigated. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will continue to monitor and will provide another status update by 2017-04-07 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in us-east1/asia-northeast1 regions has been partially resolved. The issue with deployments in us-east1 is partially mitigated. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 2017-04-07 00:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

The issue with Google App Engine Taskqueue in US-east1/Asia-northeast1 regions has been partially resolved. We are investigating related issues impacting deployments in US-east1. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

We are still investigating reports of an issue with App Engine Taskqueue in US-east1/Asia-northeast1 regions. We will provide another status update by 2017-04-06 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - App Engine taskqueue error rate increase in US-east1/Asia-northeast1 region

We are investigating an issue impacting Google App Engine Task Queue in US-east1/Asia-northeast1. We will provide more information by 9:00 PM US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18027 - BigQuery Streaming API returned a 500 response from 00:04 to 00:38 US/Pacific.

The issue with BigQuery Streaming API returning 500 response code has been resolved for all affected users as of 00:38 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17001 - Issues with Cloud Pub/Sub

ISSUE SUMMARY On Tuesday 21 March 2017, new connections to Cloud Pub/Sub experienced high latency leading to timeouts and elevated error rates for a duration of 95 minutes. Connections established before the start of this issue were not affected. If your service or application was affected, we apologize – this is not the level of quality and reliability we strive to offer you, and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT On Tuesday 21 March 2017 from 21:08 to 22:43 US/Pacific, Cloud Pub/Sub publish, pull and ack methods experienced elevated latency, leading to timeouts. The average error rate for the issue duration was 0.66%. The highest error rate occurred at 21:43, when the Pub/Sub publish error rate peaked at 4.1%, the ack error rate reached 5.7% and the pull error rate was 0.02%. ROOT CAUSE The issue was caused by the rollout of a storage system used by the Pub/Sub service. As part of this rollout, some servers were taken out of service, and as planned, their load was redirected to remaining servers. However, an unexpected imbalance in key distribution led some of the remaining servers to become overloaded. The Pub/Sub service was then unable to retrieve the status required to route new connections for the affected methods. Additionally, some Pub/Sub servers didn’t recover promptly after the storage system had been stabilized and required individual restarts to fully recover. REMEDIATION AND PREVENTION Google engineers were alerted by automated monitoring seven minutes after initial impact. At 21:24, they had correlated the issue with the storage system rollout and stopped it from proceeding further. At 21:41, engineers restarted some of the storage servers, which improved systemic availability. Observed latencies for Pub/Sub were still elevated, so at 21:54, engineers commenced restarting other Pub/Sub servers, restoring service to 90% of users. At 22:29 a final batch was restarted, restoring the Pub/Sub service to all. To prevent a recurrence of the issue, Google engineers are creating safeguards to limit the number of keys managed by each server. They are also improving the availability of Pub/Sub servers to respond to requests even when in an unhealthy state. Finally they are deploying enhancements to the Pub/Sub service to operate when the storage system is unavailable.
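
For publishers, the main symptom of an event like this is publish calls that hang or time out. The sketch below assumes the current google-cloud-pubsub client library (which postdates this incident); the project ID, topic name and timeout are placeholders to tune for your workload.

```python
# Hedged sketch: bound publish latency so slow publishes surface as errors the
# application can retry, rather than blocking indefinitely.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("your-project-id", "your-topic")  # placeholders

future = publisher.publish(topic_path, b"example payload")
message_id = future.result(timeout=30)  # raises if the publish does not complete in time
print("published", message_id)
```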

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud Pub/Sub

The issue with Pub/Sub high latency has been resolved for all affected projects as of 22:02 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Issues with Cloud Pub/Sub

We are investigating an issue with Pub/Sub. We will provide more information by 22:40 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18026 - BigQuery streaming inserts issue

ISSUE SUMMARY On Monday 13 March 2017, the BigQuery streaming API experienced a 91% error rate in the US and a 63% error rate in the EU for a duration of 30 minutes. We apologize for the impact of this issue on our customers, and the widespread nature of the issue in particular. We have completed a post mortem of the incident and are making changes to mitigate and prevent recurrences. DETAILED DESCRIPTION OF IMPACT On Monday 13 March 2017 from 10:22 to 10:52 PDT, 91% of streaming API requests to US BigQuery datasets and 63% of streaming API requests to EU BigQuery datasets failed with error code 503 and an HTML message indicating "We're sorry... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now." All non-streaming API requests, including DDL requests and query, load, extract, and copy jobs, were unaffected. ROOT CAUSE The trigger for this incident was a sudden increase in log entries being streamed from Stackdriver Logging to BigQuery by logs export. The denial of service (DoS) protection used by BigQuery responded to this by rejecting excess streaming API traffic. However, the configuration of the DoS protection did not adequately segregate traffic streams, resulting in normal sources of BigQuery streaming API requests being rejected. REMEDIATION AND PREVENTION Google engineers initially mitigated the issue by blocking the source of unexpected load. This prevented the overload and allowed all other traffic to resume normally. Engineers fully resolved the issue by identifying and reverting the change that triggered the increase in log entries and clearing the backlog of log entries that had grown. To prevent future occurrences, BigQuery engineers are updating configuration to increase isolation between different traffic sources. Tests are also being added to verify behavior under several new load scenarios.
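
For reference, streaming inserts are typically issued as shown below; during an event like this the call fails with 503 errors, which the caller should retry with backoff. The sketch assumes the current google-cloud-bigquery client library and a placeholder table ID; it is illustrative, not part of the postmortem.

```python
# Hedged sketch of a streaming insert with per-row error checking.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table"  # placeholder

rows = [{"name": "alice", "value": 1}, {"name": "bob", "value": 2}]
errors = client.insert_rows_json(table_id, rows)

if errors:
    # Per-row failures (e.g. schema mismatches) are returned here rather than
    # raised; service-level 503s surface as exceptions and warrant a retry.
    print("Streaming insert reported errors:", errors)
```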

Google Cloud Platform
UPDATE: Incident 17006 - GCE networking in us-central1 zones is experiencing disruption

GCP Services' internet connectivity has been restored as of 2:12 pm Pacific Time. We apologize for the impact that this issue had on your application. We are still investigating the root cause of the issue, and will take necessary actions to prevent a recurrence.

Google Cloud Platform
UPDATE: Incident 17006 - GCE networking in us-central1 zones is experiencing disruption

We are experiencing a networking issue with Google Compute Engine instances in us-central1 zones beginning at Wednesday, 2017-03-15 01:00 PM US/Pacific.

Google Cloud Platform
UPDATE: Incident 18026 - BigQuery streaming inserts issue

The issue with BigQuery streaming inserts has been resolved for all affected projects as of 11:06 AM US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18026 - BigQuery streaming inserts issue

We are investigating an issue with BigQuery streaming inserts. We will provide more information by 11:45 AM US/Pacific.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

During the period 12:05 - 13:57 PDT, GCS requests originating in Europe experienced a 17% error rate. GCS requests in other regions were unaffected.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

The issue with elevated latency and error rates for GCS in Europe should be resolved as of 13:56 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

We are continuing to investigate elevated errors and latency for GCS in Europe. The reliability team has performed several mitigation steps, and error rates and latency are returning to normal levels. We will continue to monitor for recurrence.

Google Cloud Platform
UPDATE: Incident 16037 - Elevated Latency and Error Rates For GCS in Europe

We are currently investigating elevated latency and error rates for Google Cloud Storage traffic transiting through Europe.

Google Cloud Platform
RESOLVED: Incident 17003 - GCP accounts with credits are being charged without credits being applied

We have mitigated the issue as of 2017-03-10 09:30 PST.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

Some Cloud Console users may notice that Dataflow job logs are not displayed correctly. This is a known issue with the user interface that affects up to 35% of jobs. Google engineers are preparing a fix. Pipeline executions are not impacted and Dataflow services are operating as normal.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are still investigating the issue with Dataflow Job Log in Cloud Console. Current data indicates that between 30% and 35% of jobs are affected by this issue. Pipeline execution is not impacted. The root cause of the issue is known and the Dataflow Team is preparing the fix for production.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

The root cause of the issue is known and the Dataflow Team is preparing the fix for production. We will provide an update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are experiencing a visibility issue with Dataflow Job Log in Cloud Console beginning at Thursday, 2017-03-09 11:34 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 13:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - Dataflow Job Log visibility issue in Cloud Console

We are investigating an issue with Dataflow Job Log visibility in Cloud Console. We will provide more information by 12:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c, us-east1-c, asia-northeast1-a, asia-east1-b, us-central1-f may be failing.

The issue should have been mitigated for all zones except us-central1-c. Creating new Cloud SQL 2nd Generation instances with SSD Persistent Disks in us-central1-c is still affected. The workaround is to create your instances in a different zone or to use Standard Persistent Disks in us-central1-c.

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c and asia-east1-c are failing.

Correction: Attempts to create new Cloud SQL instances in zones us-central1-c, us-east1-c, asia-northeast1-a, asia-east1-b, us-central1-f may be intermittently failing. New instances affected in these zones will show a status of "Failed to create". Users will not incur charges for instances that failed to create; these instances can be safely deleted.

Google Cloud Platform
UPDATE: Incident 17014 - Cloud SQL instance creation in zones us-central1-c and asia-east1-c are failing.

Attempts to create new Cloud SQL instances in zones us-central1-c and asia-east1-c are failing. New instances created in these zones will show a status of "Failed to create". Users will not incur charges for instances that failed to create; these instances can be safely deleted.

Google Cloud Platform
RESOLVED: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We confirm that the issue with GCE network connectivity to us-west1 should have been resolved for all affected endpoints as of 03:27 US/Pacific and the situation is stable. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

GCE network connectivity to us-west1 remains stable and we expect a final resolution in the near future. We will provide another status update by 05:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

Network connectivity to the Google Compute Engine us-west1 region has been restored but the issue remains under investigation. We will provide another status update by 05:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We are experiencing an issue with GCE network connectivity to the us-west1 region beginning at Tuesday, 2017-02-28 02:57 US/Pacific. We will provide a further update by 04:45.

Google Cloud Platform
UPDATE: Incident 17005 - Network packet loss to Compute Engine us-west1 region

We are investigating an issue with network connectivity to the us-west1 region. We will provide more information by 04:15 US/Pacific time.

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Datastore Internal errors in the European region

ISSUE SUMMARY On Tuesday 14 February 2017, some applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced 2%-4% error rates and elevated latency for three periods with an aggregate duration of three hours and 36 minutes. We apologize for the disruption this caused to your service. We have already taken several measures to prevent incidents of this type from recurring and to improve the reliability of these services. DETAILED DESCRIPTION OF IMPACT On Tuesday 14 February 2017 between 00:15 and 01:18 PST, 54% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates and latency. The average error rate for affected applications was 4%. Between 08:35 and 08:48 PST, 50% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates. The average error rate for affected applications was 4%. Between 12:20 and 14:40 PST, 32% of applications using Google Cloud Datastore in Western Europe or the App Engine Search API in Western Europe experienced elevated error rates and latency. The average error rate for affected applications was 2%. Errors received by affected applications for all three incidents were either "internal error" or "timeout". ROOT CAUSE The incident was caused by a latent bug in a service used by both Cloud Datastore and the App Engine Search API that was triggered by high load on the service. Starting at 00:15 PST, several applications changed their usage patterns in one zone in Western Europe and began running more complex queries, which caused higher load on the service. REMEDIATION AND PREVENTION Google's monitoring systems paged our engineers at 00:35 PST to alert us to elevated errors in a single zone. Our engineers followed normal practice, by redirecting traffic to other zones to reduce the impact on customers while debugging the underlying issue. At 01:15, we redirected traffic to other zones in Western Europe, which resolved the incident three minutes later. At 08:35 we redirected traffic back to the zone that previously had errors. We found that the error rate in that zone was still high and so redirected traffic back to other zones at 08:48. At 12:45, our monitoring systems detected elevated errors in other zones in Western Europe. At 14:06 Google engineers added capacity to the service with elevated errors in the affected zones. This removed the trigger for the incident. We have now identified and fixed the latent bug that caused errors when the system was at high load. We expect to roll out this fix over the next few days. Our capacity planning team have generated forecasts for peak load generated by the Cloud Datastore and App Engine Search API and determined that we now have sufficient capacity currently provisioned to handle peak loads. We will be making several changes to our monitoring systems to improve our ability to quickly detect and diagnose errors of this type. Once again, we apologize for the impact of this incident on your application.
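
Applications that saw the "internal error" and "timeout" responses described above can usually absorb short error bursts with an automatic retry policy. The sketch below assumes the current google-cloud-datastore and google-api-core libraries; the entity kind, key name and retry thresholds are placeholders, not details from this incident.

```python
# Hedged sketch: retry Datastore lookups on transient server-side errors.
from google.api_core import exceptions, retry
from google.cloud import datastore

client = datastore.Client()

transient = retry.if_exception_type(
    exceptions.InternalServerError,
    exceptions.ServiceUnavailable,
    exceptions.DeadlineExceeded,
)

@retry.Retry(predicate=transient, initial=0.5, maximum=8.0, deadline=60.0)
def get_entity(kind, name):
    return client.get(client.key(kind, name))

entity = get_entity("Task", "sample-task")  # placeholder kind and key name
```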

Google Cloud Platform
UPDATE: Incident 17004 - Persistent Disk Does Not Produce Differential Snapshots In Some Cases

Since January 23rd, a small number of Persistent Disk snapshots were created as full snapshots rather than differential. While this results in overbilling, these snapshots still correctly back up your data and are usable for restores. We are working to resolve this issue and also to correct any overbilling that occurred. No further action is required from your side.

Google Cloud Platform
RESOLVED: Incident 17002 - Cloud Datastore Internal errors in the European region

The issue with Cloud Datastore serving elevated internal errors in the European region should have been resolved for all affected projects as of 14:34 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17002 - Cloud Datastore Internal errors in the European region

We are investigating an issue with Cloud Datastore in the European region. We will provide more information by 15:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17003 - New VMs are experiencing connectivity issues

ISSUE SUMMARY On Monday 30 January 2017, newly created Google Compute Engine instances, Cloud VPNs and network load balancers were unavailable for a duration of 2 hours 8 minutes. We understand how important the flexibility to launch new resources and scale up GCE is for our users and apologize for this incident. In particular, we apologize for the wide scope of this issue and are taking steps to address the scope and duration of this incident as well as the root cause itself. DETAILED DESCRIPTION OF IMPACT Any GCE instances, Cloud VPN tunnels or GCE network load balancers created or live migrated on Monday 30 January 2017 between 10:36 and 12:42 PDT were unavailable via their public IP addresses until the end of that period. This also prevented outbound traffic from affected instances and load balancing health checks from succeeding. Previously created VPN tunnels, load balancers and instances that did not experience a live migration were unaffected. ROOT CAUSE All inbound networking for GCE instances, load balancers and VPN tunnels enters via shared layer 2 load balancers. These load balancers are configured with changes to IP addresses for these resources, then automatically tested in a canary deployment, before changes are globally propagated. The issue was triggered by a large set of updates which were applied to a rarely used load balancing configuration. The application of updates to this configuration exposed an inefficient code path which resulted in the canary timing out. From this point all changes of public addressing were queued behind these changes that could not proceed past the testing phase. REMEDIATION AND PREVENTION To resolve the issue, Google engineers restarted the jobs responsible for programming changes to the network load balancers. After restarting, the problematic changes were processed in a batch, which no longer reached the inefficient code path. From this point updates could be processed and normal traffic resumed. This fix was applied zone by zone between 11:36 and 12:42. To prevent this issue from recurring in the short term, Google engineers are increasing the canary timeout so that updates exercising the inefficient code path merely slow network changes rather than completely stop them. As a long term resolution, the inefficient code path is being improved, and new tests are being written to test behaviour on a wider range of configurations. Google engineers had already begun work to replace global propagation of address configuration with decentralized routing. This work is being accelerated as it will prevent issues with this layer having global impact. Google engineers are also creating additional metrics and alerting that will allow the nature of this issue to be identified sooner, which will lead to faster resolution.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors(502s) with Google App Engine

The issue with Google App Engine should have been resolved for all affected users as of 17:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors(502s) with Google App Engine

The issue with Google App Engine has been partially resolved. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors(502s) with Google App Engine

The issue with Google App Engine has been partially resolved. For everyone who is still affected, we apologize for any inconvenience you may be experiencing. We will provide another status update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors(502s) with Google App Engine

The issue with Google App Engine should have been partially resolved. We will provide another status update by 15:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17001 - We are currently investigating reports of Intermittent Errors(502s) with Google App Engine

We are investigating reports of intermittent errors (502s) in Google App Engine. We will provide more information by 15:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Investigating possible Google Cloud Datastore Application Monitoring Metrics problem

The issue with Google Cloud Datastore Application Monitoring Metrics has been fully resolved for all affected applications as of 1:30 PM US/Pacific.

Google Cloud Platform
UPDATE: Incident 17001 - Investigating possible Google Cloud Datastore Application Monitoring Metrics

We are investigating an issue with Google Cloud Datastore related to Application Monitoring Metrics. We will provide more information by 1:30 PM US/Pacific.

Google Cloud Platform
UPDATE: Incident 17013 - Issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific.

The issue with Cloud SQL 2nd Generation Instances should have been resolved for all affected instances as of 21:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17013 - Issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific.

We are currently experiencing an issue with Cloud SQL 2nd Generation instances beginning at Tuesday, 2017-01-26 01:00 US/Pacific. This may cause poor performance or query failures for large queries on impacted instances. Current data indicates that 3% of Cloud SQL 2nd Generation instances were affected by this issue. As of 2017-01-31 20:30 PT, a fix has been applied to the majority of impacted instances, and we expect a full resolution in the near future. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 21:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We have fully mitigated the network connectivity issues for newly-created GCE instances as of 12:45 US/Pacific, with VPNs connectivity issues being fully mitigated at 12:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

The issue with newly-created GCE instances experiencing network connectivity problems should have been mitigated for all GCE regions except europe-west1, which is currently clearing. Newly-created VPNs are affected as well; we are still working on a mitigation for this. We will provide another status update by 13:10 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

The issue with newly-created GCE instances experiencing network connectivity problems should have been mitigated for the majority of GCE regions and we expect a full resolution in the near future. We will provide another status update by 12:40 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We are experiencing a connectivity issue affecting newly-created VMs, as well as those undergoing live migrations beginning at Monday, 2017-01-30 10:54 US/Pacific. Mitigation work is currently underway. All zones should be coming back online in the next 15-20 minutes. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide another update at 12:10 PST.

Google Cloud Platform
UPDATE: Incident 17003 - New VMs are experiencing connectivity issues

We are investigating an issue with newly-created VMs not having network connectivity. This also affects VMs undergoing live migrations. We will provide more information by 11:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

The issue with listing GCP projects and organizations should have been resolved for all affected users as of 15:21 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

The issue is still occurring for some projects for some users. Mitigation is still underway. We will provide the next update by 16:30 US/Pacific time.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

Listing of Google Cloud projects and organizations is still failing to show some entries. As this only affects listing, users can access their projects by navigating directly to appropriate URLs. Google engineers have a mitigation plan that is underway. We will provide another status update by 14:30 US/Pacific time with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Incident in progress - Some projects not visible for customers

We are experiencing an intermittent issue with the Google Cloud projects search index beginning at Monday, 2017-01-23 00:00 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 08:30 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18025 - Bigquery query job failures

The issue with BigQuery's Table Copy service and query jobs should have been resolved for all affected projects as of 07:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18025 - Bigquery query job failures

The issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 08:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - Bigquery query job failures

We believe the issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 06:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - Bigquery query job failures

We believe the issue with BigQuery's Table Copy service and query jobs should be resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 04:00 US/Pacific, 2017-01-21.

Google Cloud Platform
UPDATE: Incident 18025 - Bigquery query job failures

We are experiencing an issue with BigQuery query jobs beginning at Friday, 2017-01-20 18:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17001 - Cloud Console and Stackdriver may display the number of App Engine instances as zero for Java and Go

The issue with Cloud Console and Stackdriver showing the number of App Engine instances as zero should have been resolved for all affected projects as of 01:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Cloud Console and Stackdriver may display the number of App Engine instances as zero for Java and Go

We are experiencing an issue with Cloud Console and Stackdriver which may show the number of App Engine instances as zero beginning at Wednesday, 2017-01-18 18:45 US/Pacific. This issue should have been resolved for the majority of projects and we expect a full resolution by 2017-01-20 00:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

The issue with the BigQuery Web UI should have been resolved for all affected users as of 05:35 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

The issue with BigQuery Web UI should have been resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by 06:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18024 - BigQuery Web UI currently unavailable for some Customers

We are investigating an issue with BigQuery Web UI. We will provide more information by 2017-01-10 06:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issues with Cloud Shell and Compute Engine serial output should have been resolved for all affected instances as of 22:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT in all zones. Instances created before 14:10 PT remain unaffected. Some instances created between 14:10 and 19:45 PT in us-east1-c and us-west1 may still be unable to view the serial output. We are currently in the process of applying the fix to the affected zones. Access to the serial console output should be restored for instances created between 14:10 and 19:45 PT in all other zones. We expect a full resolution in the near future. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT in all zones. Instances created before 14:10 PT remain unaffected. Access to the serial console output should be restored for all instances in the asia-east1 and asia-northeast1 regions, and the us-central1-a zone, created between 14:10 and 19:45 PT. Some instances created between 14:10 and 19:45 PT in other zones may still be unable to view the serial console output. We are currently in the process of applying the fix to the remaining zones. We expect a full resolution in the near future. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

The issue with Cloud Shell should be resolved for all customers at this time. The issue with Compute Engine serial port output should be resolved for all new instances created after 19:45 PT, and all instances in us-central1-a and asia-east1-b created between 14:10 and 19:45 PT. All other instances created before 14:10 PT remain unaffected. We expect a full resolution in the near future. We will provide another status update by 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

We are still investigating the issues with Compute Engine and Cloud Shell, and do not have any news at this time. We will provide another status update by 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 17002 - Issues with Compute Engine serial port output and Cloud Shell

We are experiencing issues with Compute Engine serial port output and Cloud Shell beginning at Sunday, 2017-01-08 14:10 US/Pacific. Current data indicates that newly-created instances are unable to use the "get-serial-port-output" command in "gcloud", or use the Cloud Console to retrieve serial port output. Customers can still use the interactive serial console on these newly-created instances. Instances created before the impact time do not appear to be affected. Additionally, the Cloud Shell is intermittently available at this time. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 19:00 US/Pacific with current details.
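
For context, the "get-serial-port-output" gcloud command is backed by the Compute Engine API's instances.getSerialPortOutput method, sketched below with the google-api-python-client; the project, zone and instance names are placeholders. During this incident, affected newly-created instances could not retrieve output through either path, while the interactive serial console remained available.

```python
# Hedged sketch of the API call behind "gcloud compute instances get-serial-port-output".
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

request = compute.instances().getSerialPortOutput(
    project="your-project",    # placeholder
    zone="us-central1-a",      # placeholder
    instance="your-instance",  # placeholder
)
output = request.execute()
print(output.get("contents", ""))
```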

Google Cloud Platform
RESOLVED: Incident 17001 - Cloud VPN issues in regions us-west1 and us-east1

The issue with Cloud VPN where some tunnels weren’t connecting in us-east1 and us-west1 should have been resolved for all affected tunnels as of 23:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17001 - Cloud VPN issues in regions us-west1 and us-east1

We are investigating reports of an issue with Cloud VPN in regions us-west1 and us-east1. We will provide more information by 23:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent results for object listing for multi-regional buckets in US

The issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in US should have been resolved for all affected projects as of 23:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent results for object listing for multi-regional buckets in US

We are still investigating the issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in US. We will provide another status update by 2016-12-20 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16036 - Cloud Storage is showing inconsistent results for object listing for multi-regional buckets in US

We are experiencing an intermittent issue with Cloud Storage showing inconsistent results for object listing for multi-regional buckets in US, beginning at approximately Monday, 2016-12-16 09:00 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 2016-12-17 00:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16035 - Elevated Cloud Storage error rate and latency

The issue with Google Cloud Storage seeing increased errors and latency should have been resolved for all affected users as of 09:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16035 - Elevated Cloud Storage error rate and latency

We are continuing to experience an issue with Google Cloud Storage. Errors and latency have decreased, but are not yet at pre-incident levels. We are continuing to investigate mitigation strategies and to identify the root cause. Impact is limited to the US region. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16035 - Elevated Cloud Storage error rate and latency

We are investigating an issue with Google Cloud Storage serving increased errors and higher latency. We will provide more information by 10:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16011 - App Engine remote socket API errors in us-central region

The issue with App Engine applications having higher than expected socket API errors should have been resolved for all affected applications as of 22:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 21:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

The issue with App Engine remote socket API errors in us-central region should have been resolved for all affected projects as of 19:46 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are still investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 20:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - App Engine remote socket API errors in us-central region

We are investigating reports of an issue with App Engine applications having higher than expected socket API errors if they are located in the us-central region. We will provide another status update by 2016-12-02 19:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 18023 - Increased 500 error rate for BigQuery API calls

The issue with increased 500 errors from the BigQuery API has been resolved. We apologize for the impact that this incident had on your application.

Google Cloud Platform
RESOLVED: Incident 18022 - BigQuery Streaming API failing

A small correction to the incident report: the resolution time of the incident was 20:00 US/Pacific, not 20:11 US/Pacific. Accordingly, the total downtime was 4 hours rather than 4 hours and 11 minutes.

Google Cloud Platform
RESOLVED: Incident 18022 - BigQuery Streaming API failing

SUMMARY: On Tuesday 8 November 2016, Google BigQuery’s streaming service, which includes streaming inserts and queries against recently committed streaming buffers, was largely unavailable for a period of 4 hours and 11 minutes. To our BigQuery customers whose business analytics were impacted during this outage, we sincerely apologize. We will be providing an SLA credit for the affected timeframe. We have conducted an internal investigation and are taking steps to improve our service. DETAILED DESCRIPTION OF IMPACT: On Tuesday 8 November 2016 from 16:00 to 20:11 US/Pacific, 73% of BigQuery streaming inserts failed with a 503 error code indicating an internal error had occurred during the insertion. At peak, 93% of BigQuery streaming inserts failed. During the incident, queries performed on tables with recently-streamed data returned a result code (400) indicating that the table was unavailable for querying. Queries against tables in which data were not streamed within the 24 hours preceding the incident were unaffected. There were no issues with non-streaming ingestion of data. ROOT CAUSE: The BigQuery streaming service requires authorization checks to verify that it is streaming data from an authorized entity to a table that entity has permissions to access. The authorization service relies on a caching layer in order to reduce the number of calls to the authorization backend. At 16:00 US/Pacific, a combination of reduced backend authorization capacity coupled with routine cache entry refreshes caused a surge in requests to the authorization backends, exceeding their current capacity. Because BigQuery does not cache failed authorization attempts, this overload meant that new streaming requests would require re-authorization, thereby further increasing load on the authorization backend. This continual increase of authorization requests on an already overloaded authorization backend resulted in continued and sustained authorization failures which propagated into streaming request and query failures. REMEDIATION AND PREVENTION: Google engineers were alerted to issues with the streaming service at 16:21 US/Pacific. Their initial hypothesis was that the caching layer for authorization requests was failing. The engineers started redirecting requests to bypass the caching layer at 16:51. After testing the system without the caching layer, the engineers determined that the caching layer was working as designed, and requests were directed to the caching layer again at 18:12. At 18:13, the engineering team was able to pinpoint the failures to a set of overloaded authorization backends and begin remediation. The issue with authorization capacity was ultimately resolved by incrementally reducing load on the authorization system internally and increasing the cache TTL, allowing streaming authorization requests to succeed and populate the cache so that internal services could be restarted. Recovery of streaming errors began at 19:34 US/Pacific and the streaming service was fully restored at 20:11. To prevent short-term recurrence of the issue, the engineering team has greatly increased the request capacity of the authorization backend. In the longer term, the BigQuery engineering team will work on several mitigation strategies to address the currently cascading effect of failed authorization requests. 
These strategies include caching intermediary responses to the authorization flow for the streaming service, caching failure states for authorization requests, and adding rate limiting to the authorization service so that large increases in cache miss rate will not overwhelm the authorization backend. In addition, the BigQuery engineering team will be improving the monitoring of available capacity on the authorization backend and will add additional alerting so capacity issues can be mitigated before they become cascading failures. The BigQuery engineering team will also be investigating ways to reduce the spike in authorization traffic that occurs daily at 16:00 US/Pacific when the cache is rebuilt, to more evenly distribute requests to the authorization backend. Finally, we have received feedback that our communications during the outage left a lot to be desired. We agree with this feedback. While our engineering teams launched an all-hands-on-deck effort to resolve this issue within minutes of its detection, we did not adequately communicate both the level of effort and the steady progress of diagnosis, triage and restoration happening during the incident. We clearly erred in not communicating promptly, crisply and transparently to affected customers during this incident. We will be addressing our communications (for all Google Cloud systems, not just BigQuery) as part of a separate effort, which has already been launched. We recognize the extended duration of this outage, and we sincerely apologize to our BigQuery customers for the impact to your business analytics.
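
As a practical note for affected readers, whether a table currently has an active streaming buffer can be checked with the bq command-line tool. This is a general diagnostic rather than anything specific to this incident, and the dataset and table names below are placeholders:

    # Show table metadata; a "streamingBuffer" section in the output indicates
    # recently streamed rows that have not yet been flushed to managed storage.
    bq show --format=prettyjson exampledataset.exampletable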

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

The issue with the BigQuery Streaming API should have been resolved for all affected tables as of 20:07 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We're continuing to work to restore the service to the BigQuery Streaming API. We will add an update at 20:30 US/Pacific with further information.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are continuing to investigate the issue with BigQuery Streaming API. We will add an update at 20:00 US/Pacific with further information.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We have taken steps to mitigate the issue, which has led to some improvements. The issue continues to impact the BigQuery Streaming API and tables with a streaming buffer. We will provide a further status update at 19:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are continuing to investigate the issue with BigQuery Streaming API. The issue may also impact tables with a streaming buffer, making them inaccessible. This will be clarified in the next update at 19:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are still investigating the issue with BigQuery Streaming API. There are no other details to share at this time but we are actively working to resolve this. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are still investigating the issue with BigQuery Streaming API. Current data indicates that all projects are affected by this issue. We will provide another status update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18022 - BigQuery Streaming API failing

We are investigating an issue with the BigQuery Streaming API. We will provide more information by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18021 - BigQuery Streaming API failing

We are investigating an issue with the BigQuery Streaming API. We will provide more information by 5:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The issue with Cloud Platform Console not being able to perform delete operations should have been resolved for all affected users as of 12:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

SUMMARY: On Monday, 31 October 2016, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed for a duration of 124 minutes. Creation of new Cloud SQL Second Generation instances also failed during this incident. If your service or application was affected, we apologize. We have conducted a detailed review of the causes of this incident and are ensuring that we apply the appropriate fixes so that it will not recur. DETAILED DESCRIPTION OF IMPACT: On Monday, 31 October 2016 from 13:11 to 15:15 PDT, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed. 0.1% of pull requests experienced latencies of up to 4 minutes for end-to-end message delivery. Creation of all new Cloud SQL Second Generation instances also failed during this incident. Existing instances were not affected. ROOT CAUSE: At 13:08, a system in the Cloud Pub/Sub control plane experienced a connectivity issue to its persistent storage layer for a duration of 83 seconds. This caused a queue of storage requests to build up. When the storage layer re-connected, the queued requests were executed, which exceeded the available processing quota for the storage system. The system entered a feedback loop in which storage requests continued to queue up leading to further latency increases and more queued requests. The system was unable to exit from this state until additional capacity was added. Creation of a new Cloud SQL Second Generation instance requires a new Cloud Pub/Sub subscription. REMEDIATION AND PREVENTION: Our monitoring systems detected the outage and paged oncall engineers at 13:19. We determined root cause at 14:05 and acquired additional storage capacity for the Pub/Sub control plane at 14:42. The outage ended at 15:15 when this capacity became available. To prevent this issue from recurring, we have already increased the storage capacity for the Cloud Pub/Sub control plane. We will change the retry behavior of the control plane to prevent a feedback loop if storage quota is temporarily exceeded. We will also improve our monitoring to ensure we can determine root cause for this type of failure more quickly in future. We apologize for the inconvenience this issue caused our customers.

Google Cloud Platform
RESOLVED: Incident 16003 - Issues with Cloud Pub/Sub

SUMMARY: On Monday, 31 October 2016, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed for a duration of 124 minutes. Creation of new Cloud SQL Second Generation instances also failed during this incident. If your service or application was affected, we apologize. We have conducted a detailed review of the causes of this incident and are ensuring that we apply the appropriate fixes so that it will not recur. DETAILED DESCRIPTION OF IMPACT: On Monday, 31 October 2016 from 13:11 to 15:15 PDT, 73% of requests to create new subscriptions for Google Cloud Pub/Sub failed. 0.1% of pull requests experienced latencies of up to 4 minutes for end-to-end message delivery. Creation of all new Cloud SQL Second Generation instances also failed during this incident. Existing instances were not affected. ROOT CAUSE: At 13:08, a system in the Cloud Pub/Sub control plane experienced a connectivity issue to its persistent storage layer for a duration of 83 seconds. This caused a queue of storage requests to build up. When the storage layer re-connected, the queued requests were executed, which exceeded the available processing quota for the storage system. The system entered a feedback loop in which storage requests continued to queue up leading to further latency increases and more queued requests. The system was unable to exit from this state until additional capacity was added. Creation of a new Cloud SQL Second Generation instance requires a new Cloud Pub/Sub subscription. REMEDIATION AND PREVENTION: Our monitoring systems detected the outage and paged oncall engineers at 13:19. We determined root cause at 14:05 and acquired additional storage capacity for the Pub/Sub control plane at 14:42. The outage ended at 15:15 when this capacity became available. To prevent this issue from recurring, we have already increased the storage capacity for the Cloud Pub/Sub control plane. We will change the retry behavior of the control plane to prevent a feedback loop if storage quota is temporarily exceeded. We will also improve our monitoring to ensure we can determine root cause for this type of failure more quickly in future. We apologize for the inconvenience this issue caused our customers.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The issue with the Cloud Platform Console not being able to perform delete operations should have been resolved for the majority of users and we expect a full resolution in the near future. We will provide another status update by Fri 2016-11-04 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

The root cause of the issue with the Cloud Platform Console not being able to perform delete operations has been identified, and we expect a full resolution in the near future. We will provide another status update by Fri 2016-11-04 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - Delete operations on Cloud Platform Console not being performed

We are experiencing an issue with some delete operations within the Cloud Platform Console, beginning at Tuesday, 2016-10-01 10:00 US/Pacific. Current data indicates that all users are affected by this issue. The gcloud command line tool may be used as a workaround for those who need to manage their resources immediately. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 22:30 US/Pacific.
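
As an illustration of the workaround mentioned above, resources can be deleted from the command line while the Console issue persists. The resource names and zone below are placeholders:

    # Delete a Compute Engine instance from the command line
    gcloud compute instances delete example-instance --zone us-central1-a

    # Remove a Cloud Storage bucket and its contents
    gsutil rm -r gs://example-bucket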

Google Cloud Platform
RESOLVED: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

The issue with second generation Cloud SQL instance creation should be resolved for all affected projects as of 15:15 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 16003 - Issues with Cloud Pub/Sub

The issue with Cloud Pub/Sub should be resolved for all affected projects as of 15:15 PDT. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16003 - Issues with Cloud Pub/Sub

We are continuing to investigate an issue with Cloud Pub/Sub. We will provide an update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

We are continuing to investigate an issue with second generation Cloud SQL instance creation. We will provide another update at 16:00 PDT.

Google Cloud Platform
UPDATE: Incident 16003 - Issues with Cloud Pub/Sub

We are currently investigating an issue with Cloud Pub/Sub. We will provide an update at 15:00 PDT with more information.

Google Cloud Platform
UPDATE: Incident 17012 - Issue With Second Generation Cloud SQL Instance Creation

We are currently investigating an issue with second generation Cloud SQL instance creation. We will provide an update with more information at 15:00 PDT.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server should have been resolved for all affected clusters as of 10:45 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server has now been resolved for some of the existing clusters as well. We are continuing to repair the rest of the clusters. We will provide the next status update when this repair is complete.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

The issue with Google Container Engine nodes connecting to the metadata server has been fully resolved for all new clusters. The work to repair the existing clusters is still ongoing and is expected to last for a few more hours. It may also result in the restart of the containers and/or virtual machines in the repaired clusters, as per the previous update. If your cluster is affected by this issue and you wish to resolve it more quickly, you can delete and recreate your GKE cluster. We will provide the next status update when the work to repair the existing clusters has completed.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We have now identified the cause of the issue and are in the process of rolling out the fix for it into production. This may result in the restart of the affected containers and/or virtual machines in the GKE cluster. We apologize for any inconvenience this might cause.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are continuing to work on resolving the issue with Google Container Engine nodes connecting to the metadata server. We will provide the next status update as soon as the proposed fix for this issue is finalized and validated.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still working on resolving the issue with Google Container Engine nodes connecting to the metadata server. We are actively testing a fix for it, and once it is validated, we will push this fix into production. We will provide the next status update by 2016-10-24 01:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still investigating the issue with Google Container Engine nodes connecting to the metadata server. Current data indicates that less than 10% of clusters are still affected by this issue. We are actively testing a fix; once it is confirmed, we will push it into production. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

We are still investigating the issue with Google Container Engine nodes connecting to the metadata server. Further investigation reveals that less than 10% of clusters are affected by this issue. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 15001 - Google Container Engine nodes experiencing trouble connecting to http://metadata

Customers experiencing this error will see messages containing the following in their logs: "WARNING: exception thrown while executing request java.net.UnknownHostException: metadata" This is caused by a change that inadvertently prevents hosts from properly resolving the DNS address for the metadata server. We have identified the root cause and are preparing a fix. No action is required by customers at this time. The proposed fix should resolve the issue for all customers as soon as it is prepared, tested, and deployed. We will add another update at 21:00 US/Pacific with current details.
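
For customers who want to confirm whether a node is affected, a minimal check of metadata-server DNS resolution and reachability is sketched below. It assumes shell access to the node; the hostname shown is the standard fully-qualified GCE metadata endpoint:

    # Does the short name "metadata" resolve on this node?
    getent hosts metadata

    # Is the metadata service reachable via its fully-qualified name?
    curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname"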

Google Cloud Platform
RESOLVED: Incident 16020 - 502s from HTTP(S) Load Balancer

SUMMARY: On Thursday 13 October 2016, approximately one-third of requests sent to the Google Compute Engine HTTP(S) Load Balancers between 15:07 and 17:25 PDT received an HTTP 502 error rather than the expected response. If your service or application was affected, we apologize. We took immediate action to restore service once the problem was detected, and are taking steps to improve the Google Compute Engine HTTP(S) Load Balancer’s performance and availability. DETAILED DESCRIPTION OF IMPACT: Starting at 15:07 PDT on Thursday 13 October 2016, Google Compute Engine HTTP(S) Load Balancers started to return elevated rates of HTTP 502 (Bad Gateway) responses. The error rate rose progressively from 2% to a peak of 45% of all requests at 16:09 and remained there until 17:03. From 17:03 to 17:15, the error rate declined rapidly from 45% to 2%. By 17:25 requests were routing as expected and the incident was over. During the incident, the error rate seen by applications using GCLB varied depending on the network routing of their requests to Google. ROOT CAUSE: The Google Compute Engine HTTP(S) Load Balancer system is a global, geographically-distributed multi-tiered software stack which receives incoming HTTP(S) requests via many points in Google's global network, and dispatches them to appropriate Google Compute Engine instances. On 13 October 2016, a configuration change was rolled out to one of these layers with widespread distribution beginning at 15:07. This change triggered a software bug which decoupled second-tier load balancers from a number of first-tier load balancers. The affected first-tier load balancers therefore had no forwarding path for incoming requests and returned the HTTP 502 code to indicate this. Google’s networking systems have a number of safeguards to prevent them from propagating incorrect or invalid configurations, and to reduce the scope of the impact in the event that a problem is exposed in production. These safeguards were partially successful in this instance, limiting both the scope and the duration of the event, but not preventing it entirely. The first relevant safeguard is a canary deployment, where the configuration is deployed at a single site and that site is verified to be functioning within normal bounds. In this case, the canary step did generate a warning, but it was not sufficiently precise to cause the on-call engineer to immediately halt the rollout. The new configuration subsequently rolled out in stages, but was halted part way through as further alerts indicated that it was not functioning correctly. By design, this progressive rollout limited the error rate experienced by customers. REMEDIATION AND PREVENTION: Once the nature and scope of the issue became clear, Google engineers first quickly halted and reverted the rollout. This prevented a larger fraction of GCLB instances from being affected. Google engineers then set about restoring function to the GCLB instances which had been exposed to the configuration. They verified that restarting affected GCLB instances restored the pre-rollout configuration, and then rapidly restarted all affected GCLB instances, ending the event. Google understands that global load balancers are extremely useful, but also may be a single point of failure for your service. We are committed to applying the lessons from this outage in order to ensure that this type of incident does not recur. 
One of our guiding principles for avoiding large-scale incidents is to roll out global changes slowly and carefully monitor for errors. We typically have a period of soak time during a canary release before rolling out more widely. In this case, the change was pushed too quickly for accurate detection of the class of failure uncovered by the configuration being rolled out. We will change our processes to be more conservative when rolling out configuration changes to critical systems. As defense in depth, Google engineers are also changing the black box monitoring for GCLB so that it will test the first-tier load balancers impacted by this incident. We will also be improving the black box monitoring to ensure that our probers cover all use cases. In addition, we will add an alert for elevated error rates between first-tier and second-tier load balancers. We apologize again for the impact this issue caused our customers.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with BigQuery queries failing should have been resolved for all affected users as of 08:53 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with Google BigQuery API calls returning 500 Internal Errors should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 09:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

The issue with Google BigQuery API calls returning 500 Internal Errors should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 07:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18020 - BigQuery query failures

We are investigating an issue with Google BigQuery. We will provide more information by 6:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(S) Load Balancer

The issue with the Google Cloud Platform HTTP(S) Load Balancer returning 502 response codes should have been resolved for all affected customers as of 17:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(S) Load Balancer

We are still investigating the issue with Google Cloud Platform HTTP(S) Load Balancers returning 502 errors, and will provide an update by 18:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(S) Load Balancer

We are still investigating the issue with Google Cloud Platform HTTP(S) Load Balancers returning 502 errors, and will provide an update by 17:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16020 - 502s from HTTP(S) Load Balancer

We are experiencing an issue with the Google Cloud Platform HTTP(S) Load Balancer returning 502 response codes, starting at 2016-10-13 15:30 US/Pacific. We are investigating the issue, and will provide an update by 16:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16019 - Hurricane Matthew may impact GCP services in us-east1

The Google Cloud Platform team is keeping a close watch on the path of Hurricane Matthew. The National Hurricane Center 3-day forecast indicates that the storm is tracking within 200 miles of the datacenters housing the GCP region us-east1. We do not anticipate any specific service interruptions. Our datacenter is designed to withstand a direct hit from a more powerful hurricane than Matthew without disruption, and we maintain triple-redundant diverse-path backbone networking precisely to be resilient to extreme events. We have staff on site and plan to run services normally. Despite all of the above, it is statistically true that there is an increased risk of a region-level utility grid or other infrastructure disruption, which may result in a GCP service interruption. If we anticipate a service interruption (for example, if the regional grid loses power and our datacenter is running on generator power), our protocol is to share specific updates with our customers with 12 hours' notice.

Google Cloud Platform
UPDATE: Incident 16034 - Elevated Cloud Storage error rate and latency

The issue with Cloud Storage in which some projects encountered elevated errors and latency should have been resolved for all affected projects as of 23:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16034 - Elevated Cloud Storage error rate and latency

We are investigating an issue with Cloud Storage. We will provide more information by 23:45 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We have restored most of the missing Google Cloud Pub/Sub subscriptions for affected projects. We expect to restore the remaining missing subscriptions within one hour. We have already identified and fixed the root cause of the issue. We will conduct an internal investigation and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are working on restoring the missing Cloud Pub/Sub subscriptions for affected customers, and will provide an ETA for complete restoration when one is available.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 12:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 10:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 08:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 06:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. We will provide another status update by Wednesday, 2016-09-28 02:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We are still investigating the issue with Cloud Pub/Sub subscriptions. In the meantime, affected users can re-create missing subscriptions manually in order to make them available. We will provide another status update by Wednesday, 2016-09-28 00:00 US/Pacific with current details.
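
A sketch of the manual re-creation workaround using gcloud follows. The subscription and topic names are placeholders, older Cloud SDK versions may require the beta component for Pub/Sub commands, and messages published while a subscription did not exist are not delivered retroactively:

    # Re-create a missing subscription on its original topic
    gcloud pubsub subscriptions create example-subscription --topic example-topic --ack-deadline 10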

Google Cloud Platform
UPDATE: Incident 16002 - Cloud Pub/Sub subscriptions deleted unexpectedly

We experienced an issue with Cloud Pub/Sub in which some subscriptions were deleted unexpectedly between approximately 13:40 and 18:45 US/Pacific on Tuesday, 2016-09-27. We are going to recreate the deleted subscriptions. We will provide another status update by 22:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

The issue with slow Compute Engine operations in asia-east1-a has been resolved as of 13:37 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

We are still working on the issue with Compute Engine operations. After mitigation was applied, operations in asia-east1-a have continued to run normally. A final fix is still underway. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

We are still investigating the issue with Compute Engine operations. We have applied mitigation and currently operations in asia-east1-a are processing normally. We are applying some final fixes and monitoring the issue. We will provide another status update by 15:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

This incident only covers instances in the asia-east1-a zone.

Google Cloud Platform
UPDATE: Incident 16018 - Slow instance start times in asia-east1-a

It is taking multiple minutes to create new VMs, restart existing VMs that abnormally terminate, or hot-attach disks.
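
For clarity, the examples below illustrate the kinds of operations affected by the slowness; they use placeholder resource names and simply show the slow operations rather than a workaround:

    # Creating a new VM in the affected zone
    gcloud compute instances create example-instance --zone asia-east1-a

    # Hot-attaching an existing disk to a running instance in the affected zone
    gcloud compute instances attach-disk example-instance --disk example-data-disk --zone asia-east1-a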

Google Cloud Platform
RESOLVED: Incident 16033 - Google Cloud Storage serving high error rates.

We have completed our internal investigation and results suggest this incident impacted a very small number of projects. We have reached out to affected users directly and if you have not heard from us, your project(s) were not impacted.

Google Cloud Platform
RESOLVED: Incident 16033 - Google Cloud Storage serving high error rates.

The issue with Google Cloud Storage serving a high percentage of errors should have been resolved for all affected users as of 13:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
RESOLVED: Incident 16017 - Networking issue with Google Compute Engine services

SUMMARY: On Monday 22 August 2016, the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone for a duration of 25 minutes. All other zones in the US-CENTRAL1 region were unaffected. All network traffic within the zone was also unaffected. We apologize to customers whose service or application was affected by this incident. We understand that a network disruption has a negative impact on your application - particularly if it is homed in a single zone - and we apologize for the inconvenience this caused. What follows is our detailed analysis of the root cause and actions we will take in order to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT: We have received feedback from customers asking us to specifically and separately enumerate the impact of incidents to any service that may have been touched. We agree that this will make it easier to reason about the impact of any particular event and we have done so in the following descriptions. On Monday 22 August 2016 from 07:05 to 07:30 PDT the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone. App Engine 6% of App Engine Standard Environment applications in the US-CENTRAL region served elevated error rates for up to 8 minutes, until the App Engine serving infrastructure automatically redirected traffic to a failover zone. The aggregate error rate across all impacted applications during the incident period was 3%. The traffic redirection caused a Memcache flush for affected applications, and also loading requests as new instances of the applications started up in the failover zones. All App Engine Flexible Environment applications deployed to the US-CENTRAL1-F zone were unavailable for the duration of the incident. Additionally, 4.5% of these applications experienced various levels of unavailability for up to an additional 5 hours while the system recovered. Deployments for US-CENTRAL Flexible applications were delayed during the incident. Our engineers disabled the US-CENTRAL1-F zone for new deployments during the incident, so that any customers who elected to redeploy, immediately recovered. Cloud Console The Cloud Console was available during the incident, though some App Engine administrative pages did not load for applications in US-CENTRAL and 50% of project creation requests failed to complete and needed to be retried by customers before succeeding. Cloud Dataflow Some Dataflow running jobs in the US-CENTRAL1 region experienced delays in processing. Although most of the affected jobs recovered gracefully after the incident ended, up to 2.5% of affected jobs in this zone became stuck and required manual termination by customers. New jobs created during the incident were not impacted. Cloud SQL Cloud SQL First Generation instances were not impacted by this incident. 30% of Cloud SQL Second Generation instances in US-CENTRAL1 were unavailable for up to 5 minutes, after which they became available again. An additional 15% of Second Generation instances were unavailable for 22 minutes. Compute Engine All instances in the US-CENTRAL1-F zone were inaccessible from outside the zone for the duration of the incident. 9% of them remained inaccessible from outside the zone for an additional hour. Container Engine Container Engine clusters running in US-CENTRAL1-F were inaccessible from outside of the zone during the incident although they continued to serve. 
In addition, calls to the Container Engine API experienced a 4% error rate and elevated latency during the incident, though this was substantially mitigated if the client retried the request. Stackdriver Logging 20% of log API requests sent to Stackdriver Logging in the US-CENTRAL1 region failed during the incident, though App Engine logging was not impacted. Clients retrying requests recovered gracefully. Stackdriver Monitoring Requests to the StackDriver web interface and the Google Monitoring API v2beta2 and v3 experienced elevated latency and an error rate of up to 3.5% during the incident. In addition, some alerts were delayed. Impact for API calls was substantially mitigated if the client retried the request. ROOT CAUSE: On 18 July, Google carried out a planned maintenance event to inspect and test the UPS on a power feed in one zone in the US-CENTRAL1 region. That maintenance disrupted one of the two power feeds to network devices that control routes into and out of the US-CENTRAL1-F zone. Although this did not cause any disruption in service, these devices unexpectedly and silently disabled the affected power supply modules - a previously unseen behavior. Because our monitoring systems did not notify our network engineers of this problem the power supply modules were not re-enabled after the maintenance event. The service disruption was triggered on Monday 22 August, when our engineers carried out another planned maintenance event that removed power to the second power feed of these devices, causing them to disable the other power supply module as well, and thus completely shut down. Following our standard procedure when carrying out maintenance events, we made a detailed line walk of all critical equipment prior to, and after, making any changes. However, in this case we did not detect the disabled power supply modules. Loss of these network devices meant that machines in US-CENTRAL1-F did not have routes into and out of the zone but could still communicate to other machines within the same zone. REMEDIATION AND PREVENTION: Our network engineers received an alert at 07:14, nine minutes after the incident started. We restored power to the devices at 07:30. The network returned to service without further intervention after power was restored. As immediate followup to this incident, we have already carried out an audit of all other network devices of this type in our fleet to verify that there are none with disabled power supply modules. We have also written up a detailed post mortem of this incident and will take the following actions to prevent future outages of this type: Our monitoring will be enhanced to detect cases in which power supply modules are disabled. This will ensure that conditions that are missed by the manual line walk prior to maintenance events are picked up by automated monitoring. We will change the configuration of these network devices so that power disruptions do not cause them to disable their power supply modules. The interaction between the network control plane and the data plane should be such that the data plane should "fail open" and continue to route packets in the event of control plane failures. We will add support for networking protocols that have the capability to continue to route traffic for a short period in the event of failures in control plane components. 
We will also be taking various actions to improve the resilience of the affected services to single-zone outages, including the following: App Engine Although App Engine Flexible Environment is currently in Beta, we expect production services to be more resilient to single zone disruptions. We will make this extra resilience an exit criteria before we allow the service to reach General Availability. Cloud Dataflow We will improve resilience of Dataflow to single-zone outages by implementing better strategies for migrating the job controller to a new zone in the event of an outage. Work on this remediation is already underway. Stackdriver Logging We will make improvements to the Stackdriver Logging service (currently in Beta) in the areas of automatic failover and capacity management before this service goes to General Availability. This will ensure that it is resilient to single-zone outages. Stackdriver Monitoring The Google Monitoring API (currently in beta) is already hosted in more than one zone, but we will further improve its resilience by adding additional capacity to ensure a single-zone outage does not cause overload in any other zones. We will do this before this service exits to General Availability. Finally, we know that you depend on Google Cloud Platform for your production workloads and we apologize for the inconvenience this event caused.
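
As noted above, retrying failed API calls substantially mitigated the impact for the Container Engine and Stackdriver APIs. A minimal client-side retry sketch with exponential backoff is shown below; the project ID and zone are placeholders, and the endpoint shown is the zone-based v1 clusters list used by Container Engine at the time:

    # Retry an idempotent read up to 5 times with exponential backoff
    for attempt in 1 2 3 4 5; do
      curl -sf -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://container.googleapis.com/v1/projects/example-project/zones/us-central1-f/clusters" && break
      sleep $((2 ** attempt))
    done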

Google Cloud Platform
RESOLVED: Incident 16009 - Networking issue with Google App Engine services

SUMMARY: On Monday 22 August 2016, the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone for a duration of 25 minutes. All other zones in the US-CENTRAL1 region were unaffected. All network traffic within the zone was also unaffected. We apologize to customers whose service or application was affected by this incident. We understand that a network disruption has a negative impact on your application - particularly if it is homed in a single zone - and we apologize for the inconvenience this caused. What follows is our detailed analysis of the root cause and actions we will take in order to prevent this type of incident from recurring. DETAILED DESCRIPTION OF IMPACT: We have received feedback from customers asking us to specifically and separately enumerate the impact of incidents to any service that may have been touched. We agree that this will make it easier to reason about the impact of any particular event and we have done so in the following descriptions. On Monday 22 August 2016 from 07:05 to 07:30 PDT the Google Cloud US-CENTRAL1-F zone lost network connectivity to services outside that zone. App Engine 6% of App Engine Standard Environment applications in the US-CENTRAL region served elevated error rates for up to 8 minutes, until the App Engine serving infrastructure automatically redirected traffic to a failover zone. The aggregate error rate across all impacted applications during the incident period was 3%. The traffic redirection caused a Memcache flush for affected applications, and also loading requests as new instances of the applications started up in the failover zones. All App Engine Flexible Environment applications deployed to the US-CENTRAL1-F zone were unavailable for the duration of the incident. Additionally, 4.5% of these applications experienced various levels of unavailability for up to an additional 5 hours while the system recovered. Deployments for US-CENTRAL Flexible applications were delayed during the incident. Our engineers disabled the US-CENTRAL1-F zone for new deployments during the incident, so that any customers who elected to redeploy, immediately recovered. Cloud Console The Cloud Console was available during the incident, though some App Engine administrative pages did not load for applications in US-CENTRAL and 50% of project creation requests failed to complete and needed to be retried by customers before succeeding. Cloud Dataflow Some Dataflow running jobs in the US-CENTRAL1 region experienced delays in processing. Although most of the affected jobs recovered gracefully after the incident ended, up to 2.5% of affected jobs in this zone became stuck and required manual termination by customers. New jobs created during the incident were not impacted. Cloud SQL Cloud SQL First Generation instances were not impacted by this incident. 30% of Cloud SQL Second Generation instances in US-CENTRAL1 were unavailable for up to 5 minutes, after which they became available again. An additional 15% of Second Generation instances were unavailable for 22 minutes. Compute Engine All instances in the US-CENTRAL1-F zone were inaccessible from outside the zone for the duration of the incident. 9% of them remained inaccessible from outside the zone for an additional hour. Container Engine Container Engine clusters running in US-CENTRAL1-F were inaccessible from outside of the zone during the incident although they continued to serve. 
In addition, calls to the Container Engine API experienced a 4% error rate and elevated latency during the incident, though this was substantially mitigated if the client retried the request. Stackdriver Logging 20% of log API requests sent to Stackdriver Logging in the US-CENTRAL1 region failed during the incident, though App Engine logging was not impacted. Clients retrying requests recovered gracefully. Stackdriver Monitoring Requests to the StackDriver web interface and the Google Monitoring API v2beta2 and v3 experienced elevated latency and an error rate of up to 3.5% during the incident. In addition, some alerts were delayed. Impact for API calls was substantially mitigated if the client retried the request. ROOT CAUSE: On 18 July, Google carried out a planned maintenance event to inspect and test the UPS on a power feed in one zone in the US-CENTRAL1 region. That maintenance disrupted one of the two power feeds to network devices that control routes into and out of the US-CENTRAL1-F zone. Although this did not cause any disruption in service, these devices unexpectedly and silently disabled the affected power supply modules - a previously unseen behavior. Because our monitoring systems did not notify our network engineers of this problem the power supply modules were not re-enabled after the maintenance event. The service disruption was triggered on Monday 22 August, when our engineers carried out another planned maintenance event that removed power to the second power feed of these devices, causing them to disable the other power supply module as well, and thus completely shut down. Following our standard procedure when carrying out maintenance events, we made a detailed line walk of all critical equipment prior to, and after, making any changes. However, in this case we did not detect the disabled power supply modules. Loss of these network devices meant that machines in US-CENTRAL1-F did not have routes into and out of the zone but could still communicate to other machines within the same zone. REMEDIATION AND PREVENTION: Our network engineers received an alert at 07:14, nine minutes after the incident started. We restored power to the devices at 07:30. The network returned to service without further intervention after power was restored. As immediate followup to this incident, we have already carried out an audit of all other network devices of this type in our fleet to verify that there are none with disabled power supply modules. We have also written up a detailed post mortem of this incident and will take the following actions to prevent future outages of this type: Our monitoring will be enhanced to detect cases in which power supply modules are disabled. This will ensure that conditions that are missed by the manual line walk prior to maintenance events are picked up by automated monitoring. We will change the configuration of these network devices so that power disruptions do not cause them to disable their power supply modules. The interaction between the network control plane and the data plane should be such that the data plane should "fail open" and continue to route packets in the event of control plane failures. We will add support for networking protocols that have the capability to continue to route traffic for a short period in the event of failures in control plane components. 
We will also be taking various actions to improve the resilience of the affected services to single-zone outages, including the following: App Engine Although App Engine Flexible Environment is currently in Beta, we expect production services to be more resilient to single zone disruptions. We will make this extra resilience an exit criteria before we allow the service to reach General Availability. Cloud Dataflow We will improve resilience of Dataflow to single-zone outages by implementing better strategies for migrating the job controller to a new zone in the event of an outage. Work on this remediation is already underway. Stackdriver Logging We will make improvements to the Stackdriver Logging service (currently in Beta) in the areas of automatic failover and capacity management before this service goes to General Availability. This will ensure that it is resilient to single-zone outages. Stackdriver Monitoring The Google Monitoring API (currently in beta) is already hosted in more than one zone, but we will further improve its resilience by adding additional capacity to ensure a single-zone outage does not cause overload in any other zones. We will do this before this service exits to General Availability. Finally, we know that you depend on Google Cloud Platform for your production workloads and we apologize for the inconvenience this event caused.

Google Cloud Platform
RESOLVED: Incident 17011 - Cloud SQL 2nd generation failing to create new instances

The issue with creating Cloud SQL Second Generation instances should have been resolved for all affected projects as of 17:38 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 17011 - Cloud SQL 2nd generation failing to create new instances

We are investigating an issue with creating new Cloud SQL Second Generation instances. We will provide more information by 18:50 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16008 - App Engine Outage

SUMMARY: On Thursday 11 August 2016, 21% of Google App Engine applications hosted in the US-CENTRAL region experienced error rates in excess of 10% and elevated latency between 13:13 and 15:00 PDT. An additional 16% of applications hosted on the same GAE instance observed lower rates of errors and latency during the same period. We apologize for this incident. We know that you choose to run your applications on Google App Engine to obtain flexible, reliable, high-performance service, and in this incident we have not delivered the level of reliability for which we strive. Our engineers have been working hard to analyze what went wrong and ensure incidents of this type will not recur. DETAILED DESCRIPTION OF IMPACT: On Thursday 11 August 2016 from 13:13 to 15:00 PDT, 18% of applications hosted in the US-CENTRAL region experienced error rates between 10% and 50%, and 3% of applications experienced error rates in excess of 50%. Additionally, 14% experienced error rates between 1% and 10%, and 2% experienced error rate below 1% but above baseline levels. In addition, the 37% of applications which experienced elevated error rates also observed a median latency increase of just under 0.8 seconds per request. The remaining 63% of applications hosted on the same GAE instance, and applications hosted on other GAE instances, did not observe elevated error rates or increased latency. Both App Engine Standard and Flexible Environment applications in US-CENTRAL were affected by this incident. In addition, some Flexible Environment applications were unable to deploy new versions during this incident. App Engine applications in US-EAST1 and EUROPE-WEST were not impacted by this incident. ROOT CAUSE: The incident was triggered by a periodic maintenance procedure in which Google engineers move App Engine applications between datacenters in US-CENTRAL in order to balance traffic more evenly. As part of this procedure, we first move a proportion of apps to a new datacenter in which capacity has already been provisioned. We then gracefully drain traffic from an equivalent proportion of servers in the downsized datacenter in order to reclaim resources. The applications running on the drained servers are automatically rescheduled onto different servers. During this procedure, a software update on the traffic routers was also in progress, and this update triggered a rolling restart of the traffic routers. This temporarily diminished the available router capacity. The server drain resulted in rescheduling of multiple instances of manually-scaled applications. App Engine creates new instances of manually-scaled applications by sending a startup request via the traffic routers to the server hosting the new instance. Some manually-scaled instances started up slowly, resulting in the App Engine system retrying the start requests multiple times which caused a spike in CPU load on the traffic routers. The overloaded traffic routers dropped some incoming requests. Although there was sufficient capacity in the system to handle the load, the traffic routers did not immediately recover due to retry behavior which amplified the volume of requests. REMEDIATION AND PREVENTION: Google engineers were monitoring the system during the datacenter changes and immediately noticed the problem. 
Although we rolled back the change that drained the servers within 11 minutes, this did not sufficiently mitigate the issue because retry requests had generated enough additional traffic to keep the system's total load at a substantially higher-than-normal level. As designed, App Engine automatically redirected requests away from the overloaded datacenter to other datacenters, which reduced the error rate. Additionally, our engineers manually redirected all traffic at 13:56 to other datacenters, which further mitigated the issue. Finally, we identified a configuration error that caused an imbalance of traffic in the new datacenters. Fixing this at 15:00 fully resolved the incident. In order to prevent a recurrence of this type of incident, we have added more traffic routing capacity to create a larger capacity buffer when draining servers in this region. We will also change how applications are rescheduled so that the traffic routers are not called, and modify the system's retry behavior so that it cannot trigger this type of failure. We know that you rely on our infrastructure to run your important workloads and that this incident does not meet our bar for reliability. For that we apologize. Your trust is important to us and we will continue to do all we can to earn and keep that trust.
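
The retry amplification described above is a common failure mode: each failed request is re-sent quickly, multiplying load on an already overloaded router. A standard client-side mitigation is capped exponential backoff with jitter. The sketch below is illustrative only; the function and exception names are ours, not App Engine internals.

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a retryable failure (for example, a 5xx response)."""


def call_with_backoff(request_fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry request_fn with capped exponential backoff and full jitter.

    Capping the attempt count and randomizing the delay keeps a large fleet
    of clients from re-sending requests in synchronized waves, which is the
    amplification pattern described in the report above.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```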

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for nearly all instances. For the few remaining instances, we are working directly with the affected customers. No further updates will be posted, but we will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will also provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for affected instances in us-central1-a, -b, and -c as of 08:00 US/Pacific. Less than 4% of instances in us-central1-f are currently affected. We will provide another status update by 12:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with Compute Engine network connectivity should have been resolved for affected instances in us-central1-a, -b, and -c as of 08:00 US/Pacific. Less than 4% of instances in us-central1-f are currently affected and we expect a full resolution soon. We will provide another status update by 11:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues, located in us-central1-f, is still ongoing. We will provide another status update by 11:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues, located in us-central1-f, is still ongoing. We will provide another status update by 10:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues is still ongoing. Affected instances are located in us-central1-f. We will provide another status update by 10:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The work on the remaining instances with network connectivity issues is still ongoing. Affected instances are located in us-central1-f. We will provide another status update by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

We are still investigating network connectivity issues for a subset of instances that have not automatically recovered. We will provide another status update by 09:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

The issue with network connectivity to Google Compute Engine services should have been resolved for the majority of instances and we expect a full resolution in the near future. We will provide another status update by 08:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16009 - Networking issue with Google App Engine services

The issue with network connectivity to Google App Engine applications should have been resolved for all affected users as of 07:20 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16017 - Networking issue with Google Compute Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16009 - Networking issue with Google App Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16016 - Networking issue with Google Compute Engine services

We are investigating an issue with network connectivity. We will provide more information by 08:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

The issue with App Engine APIs being unavailable should have been resolved for nearly all affected projects as of 14:12 US/Pacific. We will follow up directly with the few remaining affected applications. We will also conduct a thorough internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. Finally, we will provide a more detailed analysis of this incident once we have completed this internal investigation.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are still investigating the issue with App Engine APIs being unavailable. Current data indicates that some projects are affected by this issue. We will provide another status update by 15:45 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

The issue with App Engine APIs being unavailable should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 15:15 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are experiencing an issue with App Engine APIs being unavailable beginning at Thursday, 2016-08-11 13:45 US/Pacific. Current data indicates that applications in us-central are affected by this issue.

Google Cloud Platform
UPDATE: Incident 16008 - App Engine Outage

We are investigating reports of an issue with App Engine. We will provide more information by 14:15 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16015 - Networking issue with Google Compute Engine services

SUMMARY: On Friday 5 August 2016, some Google Cloud Platform customers experienced increased network latency and packet loss to Google Compute Engine (GCE), Cloud VPN, Cloud Router and Cloud SQL, for a duration of 99 minutes. If you were affected by this issue, we apologize. We intend to provide a higher level of reliability than this, and we are working to learn from this issue to make that a reality. DETAILED DESCRIPTION OF IMPACT: On Friday 5 August 2016 from 00:55 to 02:34 PDT, a number of services were disrupted: Some Google Compute Engine TCP and UDP traffic had elevated latency. Most ICMP, ESP, AH and SCTP traffic inbound from outside the Google network was silently dropped, resulting in existing connections being dropped and new connections timing out on connect. Most Google Cloud SQL first generation connections from sources external to Google failed with a connection timeout. Cloud SQL second generation connections may have seen higher latency but not failure. Google Cloud VPN tunnels remained connected; however, there was complete packet loss for data through the majority of tunnels. As Cloud Router BGP sessions traverse Cloud VPN, all sessions were dropped. All other traffic was unaffected, including internal connections between Google services and services provided via HTTP APIs. ROOT CAUSE: While removing a faulty router from service, a new procedure for diverting traffic from the router was used. This procedure applied a new configuration that resulted in announcing some Google Cloud Platform IP addresses from a single point of presence in the southwestern US. As these announcements were highly specific, they took precedence over the normal routes to Google's network and caused a substantial proportion of traffic for the affected network ranges to be directed to this one point of presence. This misrouting directly caused the additional latency some customers experienced. Additionally, this misconfiguration sent affected traffic to next-generation infrastructure that was undergoing testing. This new infrastructure was not yet configured to handle Cloud Platform traffic and applied an overly-restrictive packet filter. This blocked traffic on the affected IP addresses that was routed through the affected point of presence to Cloud VPN, Cloud Router, Cloud SQL first generation and GCE on protocols other than TCP and UDP. REMEDIATION AND PREVENTION: Mitigation began at 02:04 PDT when Google engineers reverted the network infrastructure change that caused this issue, and all traffic routing was back to normal by 02:34. The system involved was made safe against recurrences by fixing the erroneous configuration. This includes changes to BGP filtering to prevent this class of incorrect announcements. We are implementing additional integration tests for our routing policies to ensure configuration changes behave as expected before being deployed to production. Furthermore, we are improving our production telemetry external to the Google network to better detect peering issues that slip past our tests. We apologize again for the impact this issue has had on our customers.
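
The behaviour at the root of this incident follows from how routing selects among overlapping prefixes: a more specific announcement always wins over a broader covering route, so even a small accidental announcement can attract all traffic for the addresses it covers. A minimal, self-contained illustration of longest-prefix matching; the prefixes and labels below are invented for the example.

```python
import ipaddress

# Hypothetical routing table: prefix -> where traffic for it is sent.
# The prefixes are documentation ranges chosen purely for illustration.
ROUTES = {
    ipaddress.ip_network("203.0.112.0/20"): "normal anycast path",
    ipaddress.ip_network("203.0.113.0/24"): "single southwestern-US PoP",
}


def pick_route(destination):
    """Longest-prefix match: of all prefixes covering the address, the most specific wins."""
    addr = ipaddress.ip_address(destination)
    matching = [net for net in ROUTES if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return ROUTES[best]


# An address covered by both prefixes follows the /24, so the accidental,
# more specific announcement pulls its traffic to one location.
print(pick_route("203.0.113.25"))  # -> "single southwestern-US PoP"
```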

Google Cloud Platform
RESOLVED: Incident 16015 - Networking issue with Google Compute Engine services

The issue with Google Cloud networking should have been resolved for all affected users as of 02:40 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16015 - Networking issue with Google Compute Engine services

We are still investigating the issue with Google Compute Engine networking. Current data also indicates impact on other GCP products including Cloud SQL. We will provide another status update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16015 - Networking issue with Google Compute Engine services

We are investigating a networking issue with Google Compute Engine. We will provide more information by 02:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18002 - Execution of BigQuery jobs is delayed, jobs are backing up in Pending state

We are experiencing an intermittent issue with BigQuery connections beginning at Thursday, 2016-08-04 13:49 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 18019 - BigQuery connection issues

We are experiencing an intermittent issue with BigQuery connections beginning at Thursday, 2016-08-04 13:49 US/Pacific. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 16:00 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

We are still investigating the issue with HTTP(S) Load Balancing returning 502 errors. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

The issue with HTTP(S) Load Balancing returning a small number of 502 errors should have been resolved for all affected instances as of 11:05 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16014 - HTTP(S) Load Balancing returning some 502 errors

We are experiencing an issue with HTTP(S) Load Balancing returning a small number of 502 errors, beginning at Friday, 2016-07-29 around 08:45 US/Pacific. The maximum error rate for affected users was below 2%. Remediation has been applied that should stop these errors; we are monitoring the situation. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16013 - HTTP(S) Load Balancing 502 Errors

We are investigating an issue with 502 errors from HTTP(S) Load Balancing. We will provide more information by 11:05 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18018 - Streaming API issues with BigQuery

SUMMARY: On Monday 25 July 2016, the Google BigQuery Streaming API experienced elevated error rates for a duration of 71 minutes. We apologize if your service or application was affected by this and we are taking immediate steps to improve the platform’s performance and availability. DETAILED DESCRIPTION OF IMPACT: On Monday 25 July 2016 between 17:03 and 18:14 PDT, the BigQuery Streaming API returned HTTP 500 or 503 errors for 35% of streaming insert requests, with a peak error rate of 49% at 17:40. Customers who retried on error were able to mitigate the impact. Calls to the BigQuery jobs API showed an error rate of 3% during the incident but could generally be executed reliably with normal retry behaviour. Other BigQuery API calls were not affected. ROOT CAUSE: An internal Google service sent an unexpectedly high amount of traffic to the BigQuery Streaming API service. The internal service used a different entry point that was not subject to quota limits. Google's internal load balancers drop requests that exceed the capacity limits of a service. In this case, the capacity limit for the Streaming API service had been configured higher than its true capacity. As a result, the internal Google service was able to send too many requests to the Streaming API, causing it to fail for a percentage of responses. The Streaming API service sends requests to BigQuery's Metadata service in order to handle incoming Streaming requests. This elevated volume of requests exceeded the capacity of the Metadata service which resulted in errors for BigQuery jobs API calls. REMEDIATION AND PREVENTION: The incident started at 17:03. Our monitoring detected the issue at 17:20 as error rates started to increase. Our engineers blocked traffic from the internal Google client causing the overload shortly thereafter which immediately started to mitigate the impact of the incident. Error rates dropped to normal by 18:14. In order to prevent a recurrence of this type of incident we will enforce quotas for internal Google clients on requests to the Streaming service in order to prevent a single client sending too much traffic. We will also set the correct capacity limits for the Streaming API service based on improved load tests in order to ensure that internal clients cannot exceed the service's capacity. We apologize again to customers impacted by this incident.
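
As noted above, clients that retried failed streaming inserts were able to ride out the error spike. Below is a rough sketch of that pattern against the tabledata.insertAll method using the google-api-python-client library; the project, dataset, table and row contents are placeholders, credential setup is omitted, and a production client would also add backoff between attempts as in the earlier sketch.

```python
from googleapiclient import discovery
from googleapiclient.errors import HttpError

# Assumes application default credentials are available in the environment.
bigquery = discovery.build("bigquery", "v2")


def insert_rows_with_retry(project_id, dataset_id, table_id, rows, max_attempts=5):
    """Stream rows via tabledata.insertAll, retrying whole-request 5xx failures.

    Supplying an insertId per row makes retries reasonably safe: BigQuery
    de-duplicates rows re-sent with the same id within its dedup window.
    """
    body = {"rows": [{"insertId": row["id"], "json": row["data"]} for row in rows]}
    for attempt in range(max_attempts):
        try:
            response = bigquery.tabledata().insertAll(
                projectId=project_id, datasetId=dataset_id,
                tableId=table_id, body=body).execute()
            # Per-row problems are reported in insertErrors even on HTTP 200.
            return response.get("insertErrors", [])
        except HttpError as err:
            if err.resp.status in (500, 503) and attempt < max_attempts - 1:
                continue  # a real client would sleep with backoff and jitter here
            raise
```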

Google Cloud Platform
RESOLVED: Incident 18018 - Streaming API issues with BigQuery

The issue with the BigQuery streaming API returning 500/503 responses has been resolved for all affected customers as of 18:11 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
RESOLVED: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

The issue with Google App Engine URLFetch API service should have been resolved for all affected applications as of 02:50 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

We are still investigating an intermittent issue with Google App Engine URLFetch API calls to non-Google services failing with deadline exceeded errors. We will provide another status update by 03:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Intermittent Google App Engine URLFetch API deadline exceeded errors.

We are currently investigating an intermittent issue with Google App Engine URLFetch API service. Fetch requests to non-Google related services are failing with deadline exceeded errors. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 02:30 US/Pacific with current details.

Google Cloud Platform
RESOLVED: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

SUMMARY: On Tuesday, 28 June 2016 Google Compute Engine SSD Persistent Disks experienced elevated write latency and errors in one zone for a duration of 211 minutes. We would like to apologize for the length and severity of this incident. We are taking immediate steps to prevent a recurrence and improve reliability in the future. DETAILED DESCRIPTION OF IMPACT: On Tuesday, 28 June 2016 from 18:15 to 21:46 PDT SSD Persistent Disks (PD) in zone us-central1-a experienced elevated latency and errors for most writes. Instances using SSD as their root partition were likely unresponsive. For instances using SSD as a secondary disk, IO latency and errors were visible to applications. Standard (i.e. non-SSD) PD in us-central1-a suffered slightly elevated latency and errors. Latency and errors also occurred when taking and restoring from snapshots of Persistent Disks. Disk creation operations also had elevated error rates, both for standard and SSD PD. Persistent Disks outside of us-central1-a were unaffected. ROOT CAUSE: Two concurrent routine maintenance events triggered a rebalancing of data by the distributed storage system underlying Persistent Disk. This rebalancing is designed to make maintenance events invisible to the user, by redistributing data evenly around unavailable storage devices and machines. A previously unseen software bug, triggered by the two concurrent maintenance events, meant that disk blocks which became unused as a result of the rebalance were not freed up for subsequent reuse, depleting the available SSD space in the zone until writes were rejected. REMEDIATION AND PREVENTION: The issue was resolved when Google engineers reverted one of the maintenance events that triggered the issue. A fix for the underlying issue is already being tested in non-production zones. To reduce downtime related to similar issues in future, Google engineers are refining automated monitoring such that, if this issue were to recur, engineers would be alerted before users saw impact. We are also improving our automation to better coordinate different maintenance operations on the same zone to reduce the time it takes to revert such operations if necessary.

Google Cloud Platform
RESOLVED: Incident 16005 - Issue with Developers Console

SUMMARY: On Thursday 9 June 2016, the Google Cloud Console was unavailable for a duration of 91 minutes, with significant performance degradation in the preceding half hour. Although this did not affect user resources running on the Google Cloud Platform, we appreciate that many of our customers rely on the Cloud Console to manage those resources, and we apologize to everyone who was affected by the incident. This report is to explain to our customers what went wrong, and what we are doing to make sure that it does not happen again. DETAILED DESCRIPTION OF IMPACT: On Thursday 9 June 2016 from 20:52 to 22:23 PDT, the Google Cloud Console was unavailable. Users who attempted to connect to the Cloud Console observed high latency and HTTP server errors. Many users also observed increasing latency and error rates during the half hour before the incident. Google Cloud Platform resources were unaffected by the incident and continued to run normally. All Cloud Platform resource management APIs remained available, allowing Cloud Platform resources to be managed via the Google Cloud SDK or other tools. ROOT CAUSE: The Google Cloud Console runs on Google App Engine, where it uses internal functionality that is not used by customer applications. Google App Engine version 1.9.39 introduced a bug in one internal function which affected Google Cloud Console instances, but not customer-owned applications, and thus escaped detection during testing and during initial rollout. Once enough instances of Google Cloud Console had been switched to 1.9.39, the console was unavailable and internal monitoring alerted the engineering team, who restored service by starting additional Google Cloud Console instances on 1.9.38. During the entire incident, customer-owned applications were not affected and continued to operate normally. To prevent a future recurrence, Google engineers are augmenting the testing and rollout monitoring to detect low error rates on internal functionality, complementing the existing monitoring for customer applications. REMEDIATION AND PREVENTION: When the issue was provisionally identified as a specific interaction between Google App Engine version 1.9.39 and the Cloud Console, App Engine engineers brought up capacity running the previous App Engine version and transferred the Cloud Console to it, restoring service at 22:23 PDT. The low-level bug that triggered the error has been identified and fixed. Google engineers are increasing the fidelity of the rollout monitoring framework to detect error signatures that suggest negative interactions of individual apps with a new App Engine release, even if the signatures are invisible in global App Engine performance statistics. We apologize again for the inconvenience this issue caused our customers.

Google Cloud Platform
RESOLVED: Incident 16012 - Newly created instances may be experiencing packet loss.

SUMMARY: On Wednesday 29 June 2016, newly created Google Compute Engine instances and newly created network load balancers in all zones were partially unreachable for a duration of 106 minutes. We know that many customers depend on the ability to rapidly deploy and change configurations, and apologise for our failure to provide this to you during this time. DETAILED DESCRIPTION OF IMPACT: On Wednesday 29 June 2016, from 11:58 until 13:44 US/Pacific, new Google Compute Engine instances and new network load balancers were partially unreachable via the network. In addition, changes to existing network load balancers were only partially applied. The level of unreachability depended on traffic path rather than instance or load balancer location. Overall, the average impact on new instances was 50% of traffic in the US and around 90% in Asia and Europe. Existing and unchanged instances and load balancers were unaffected. ROOT CAUSE: At 11:58 US/Pacific, a scheduled upgrade to Google's network control system started, introducing an additional access control check for network configuration changes. This inadvertently removed the access of GCE's management system to network load balancers in this environment. Only a fraction of Google's network locations require this access, as an older design has an intermediate component doing access updates. As a result, these locations did not receive updates for new and changed instances or load balancers. The change was only tested at network locations that did not require the direct access, which resulted in the issue not being detected during testing and canarying and being deployed globally. REMEDIATION AND PREVENTION: After identifying the root cause, the access control check was modified to allow access by GCE's management system. The issue was resolved when this modification was fully deployed. To prevent future incidents, the network team is making several changes to their deployment processes. This will improve the level of testing and canarying to catch issues earlier, especially where an issue only affects a subset of the environments at Google. The rollback procedure in the deployment process will also be enhanced to allow the quickest possible resolution of future incidents. The access control system that was at the root of this issue will be modified to improve the operations that interact with it. For example, it will be integrated with a Google-wide change logging system to allow faster detection of issues caused by access control changes. It will also be outfitted with a dry run mode to allow the consequences of changes to be tested during development. Once again, we would like to apologise for falling below the level of service you rely on.

Google Cloud Platform
RESOLVED: Incident 16012 - Newly created instances may be experiencing packet loss.

The issue with new Google Compute Engine instances experiencing packet loss on startup should have been resolved for all affected instances as of 13:57 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

The issue with new Google Compute Engine instances experiencing packet loss on startup should have been resolved for some instances and we expect a full resolution in the near future. We will provide another status update by 14:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

We are experiencing an issue with new Google Compute Engine instances experiencing packet loss on startup beginning at Wednesday, 2016-06-29 12:18 US/Pacific. Current data indicates that 100% of newly created instances are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 13:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16012 - Newly created instances may be experiencing packet loss.

We are investigating reports of an issue with Compute Engine. We will provide more information by 13:00 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

The issue with Compute Engine SSD persistent disk latency in zone US-Central1-a should have been resolved for all projects as of 21:57 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

The issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 23:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

We are still investigating the issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a. We will provide another status update by 22:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16011 - Compute Engine SSD Persistent disk latency in zone US-Central1-a

We are investigating an issue with Compute Engine SSD Persistent disk latency in zone US-Central1-a. We will provide more information by 21:00 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

The issue with Cloud Console not displaying lists of projects should have been resolved for all affected projects as of 17:39 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

The issue with Cloud Console not displaying lists of projects should have been resolved for the majority of projects and we expect a full resolution in the near future. We will provide another status update by 18:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16007 - Cloud Console is not displaying project lists

We are experiencing an ongoing issue with the Cloud Console not displaying lists of projects beginning at Tuesday, 2016-06-28 14:29 US/Pacific. Current data indicates that approximately 10% of projects are affected by this issue. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 17:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

The issue with Cloud Console serving errors should have been resolved for all affected users as of 11:00 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

The issue with Cloud Console serving errors should be resolved for many users and we expect a full resolution shortly. We will provide another status update by 11:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

We are experiencing an issue with Cloud Console serving errors beginning at Tuesday, 2016-06-14 08:49 US/Pacific. Current data indicates that errors are intermittent but may affect all users. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 10:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Cloud Console failing

We are investigating reports of an issue with Google Cloud Console. We will provide more information by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16006 - Requests to Pantheon failing

We are investigating reports of an issue with Pantheon. We will provide more information by 09:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

The issue with Developers Console should have been resolved for all affected users as of 22:25 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence. We will provide a more detailed analysis of this incident once we have completed our internal investigation.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

We are experiencing an issue with Developers Console beginning at Thursday, 2016-06-09 21:09 US/Pacific. Current data indicates that all users are affected by this issue. The gcloud command line tool can be used as a workaround for those who need to manage their resources immediately. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by 22:30 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16005 - Issue with Developers Console

We are investigating an issue with Developers Console. We will provide more information by 21:40 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16004 - We are investigating an issue with Developers Console

We are investigating an issue with Developers Console. We will provide more information by 21:40 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 18016 - Intermittent connectivity issues with BigQuery

An incident report for this issue is available at https://status.cloud.google.com/incident/bigquery/18015, as these issues shared a common cause.

Google Cloud Platform
RESOLVED: Incident 18015 - Google BigQuery issues

SUMMARY: On Wednesday 18 May 2016, the BigQuery API was unavailable for two periods totaling 31 minutes. We understand how important access to your data stored in BigQuery is and we apologize for the impact this had on you. We have investigated the incident to determine how we can mitigate future issues and provide better service for you in the future. DETAILED DESCRIPTION OF IMPACT: On Wednesday 18 May 2016 from 11:50 until 12:15 PDT, all non-streaming BigQuery API calls failed, and additionally from 14:41 until 14:47, 70% of calls failed. An error rate of 1% occurred from 11:28 until 15:34. API calls affected by this issue experienced elevated latency and eventually returned an HTTP 500 status with an error message of "Backend Error". The BigQuery web console was also unavailable during these periods. The streaming API and BigQuery export of logs and usage data were unaffected. ROOT CAUSE: In 2015 BigQuery introduced datasets located in Europe. This required infrastructure to allow BigQuery API calls to be routed to an appropriate zone. This infrastructure was deployed uneventfully and has been operating in production for some time. The errors on 18 May were caused when a new configuration was deployed to improve routing of APIs, and then subsequently rolled back. The engineering team has made changes to the routing configuration for BigQuery API calls to prevent this issue from recurring, and to more rapidly detect elevated error levels in BigQuery API calls in the future. Finally, we would like to apologize for this issue - particularly its scope and duration. We know that BigQuery is a critical component of many GCP deployments, and we are committed to continually improving its availability.

Google Cloud Platform
UPDATE: Incident 16011 - Snapshotting of some PDs in US-CENTRAL-1A are failing

We are investigating reports that snapshots of some PDs in US-CENTRAL-1A are failing. We just wanted to let you know that we're aware of it. No action is required on your end.

Google Cloud Platform
UPDATE: Incident 16031 - Google Cloud Storage POST errors

From Wednesday, 2016-05-26 14:57 until 15:17 US/Pacific, the Google Cloud Storage XML and JSON APIs were unavailable for 72% of POST requests in the US. Requests originating outside of the US or using a method other than POST were unaffected. Affected queries returned error 500. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

The issue with networking in us-central1-c should have been resolved for all affected instances as of 16:55 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

The issue with networking in us-central1-c should have been resolved for the majority of affected instances and we expect a full resolution in the near future. Customers still experiencing networking issues in us-central1-c can perform a Stop/Start cycle on their instances to regain connectivity. We will provide another status update by 17:00 US/Pacific with current details.
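
For customers scripting the workaround, the same Stop/Start cycle can also be driven through the Compute Engine API. The sketch below uses the google-api-python-client Compute v1 surface; the project, zone and instance names are placeholders, and credentials are assumed to come from application default credentials.

```python
import time

from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # assumes application default credentials


def wait_for_zone_operation(project, zone, operation_name):
    """Poll a zonal operation until the API reports it as DONE."""
    while True:
        op = compute.zoneOperations().get(
            project=project, zone=zone, operation=operation_name).execute()
        if op["status"] == "DONE":
            return op
        time.sleep(2)


def stop_start_cycle(project, zone, instance):
    """Stop and then restart an instance so it picks up fresh network programming."""
    stop_op = compute.instances().stop(
        project=project, zone=zone, instance=instance).execute()
    wait_for_zone_operation(project, zone, stop_op["name"])
    start_op = compute.instances().start(
        project=project, zone=zone, instance=instance).execute()
    wait_for_zone_operation(project, zone, start_op["name"])


# Example with placeholder names:
# stop_start_cycle("my-project", "us-central1-c", "my-instance")
```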

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are still investigating the issue with networking in us-central1-c. Current data indicates that approximately 0.5% of instances in the zone are affected by this issue. We will provide another status update by 16:00 US/Pacific with current details.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are experiencing an issue with networking in us-central1-c beginning at Friday, 2016-05-20 13:08. Some instances will be inaccessible by internal and external IP addresses. No other zones are affected by this incident. We will provide more information by 15:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 16010 - Network disruption in us-central1-c

We are investigating an issue with networking in us-central1-c. We will provide more information by 14:30 US/Pacific.

Google Cloud Platform
RESOLVED: Incident 17010 - Issues with Cloud SQL First Generation instances

SUMMARY: On Tuesday 17 May 2016, connections to Cloud SQL instances in the Central United States region experienced an elevated error rate for 130 minutes. We apologize to customers who were affected by this incident. We know that reliability is critical for you and we are committed to learning from incidents in order to improve the future reliability of our service. DETAILED DESCRIPTION OF IMPACT: On Tuesday 17 May 2016 from 04:15 to 06:12 and from 08:24 to 08:37 PDT, connections to Cloud SQL instances in the us-central1 region experienced an elevated error rate. The average rate of connection errors to instances in this region was 10.5% during the first part of the incident and 1.9% during the second part of the incident. 51% of in-use Cloud SQL instances in the affected region were impacted during the first part of the incident; 4.2% of in-use instances were impacted during the second part. Cloud SQL Second Generation instances were not impacted. ROOT CAUSE: Clients connect to a Cloud SQL frontend service that forwards the connection to the correct MySQL database server. The frontend calls a separate service to start up a new Cloud SQL instance if a connection arrives for an instance that is not running. This incident was triggered by a Cloud SQL instance that could not successfully start. The incoming connection requests for this instance resulted in a large number of calls to the start-up service. This caused increased memory usage of the frontend service as start-up requests backed up. The frontend service eventually failed under load and dropped some connection requests due to this memory pressure. REMEDIATION AND PREVENTION: Google received its first customer report at 04:39 PDT, and we tried to remediate the problem by redirecting new connections to different datacenters. This effort proved unsuccessful as the start-up capacity was used up there also. At 06:12 PDT, we fixed the issue by blocking all incoming connections to the misbehaving Cloud SQL instance. At 08:24 PDT, we moved this instance to a separate pool of servers and restarted it. However, the separate pool of servers did not provide sufficient isolation for the service that starts up instances, causing the incident to recur. We shut down the instance at 08:37 PDT, which resolved the incident. To prevent incidents of this type in the future, we will ensure that a single Cloud SQL instance cannot use up all the capacity of the start-up service. In addition, we will improve our monitoring in order to detect this type of issue more quickly. We apologize for the inconvenience this issue caused our customers.
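
The stated prevention, ensuring one instance cannot consume all of the start-up service's capacity, is essentially a per-key admission limit or circuit breaker in front of a shared service. Below is a generic sketch of the idea; this is not Cloud SQL's actual implementation, and the class, thresholds and helper names are invented for illustration.

```python
import time
from collections import defaultdict


class PerKeyCircuitBreaker:
    """Shed start-up requests for a key that keeps failing.

    After `threshold` consecutive failures for the same key (here, a single
    database instance), further attempts are rejected until `cooldown`
    seconds have passed, so one broken instance cannot monopolize a shared
    start-up service.
    """

    def __init__(self, threshold=5, cooldown=60.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self._failures = defaultdict(int)
        self._opened_at = {}

    def allow(self, key):
        opened = self._opened_at.get(key)
        if opened is not None and time.time() - opened < self.cooldown:
            return False  # breaker open: reject without touching the backend
        return True

    def record_success(self, key):
        self._failures[key] = 0
        self._opened_at.pop(key, None)

    def record_failure(self, key):
        self._failures[key] += 1
        if self._failures[key] >= self.threshold:
            self._opened_at[key] = time.time()


# Usage sketch: only call the shared start-up service when the breaker allows it.
# breaker = PerKeyCircuitBreaker()
# if breaker.allow(instance_id):
#     if try_start_instance(instance_id):   # hypothetical helper
#         breaker.record_success(instance_id)
#     else:
#         breaker.record_failure(instance_id)
```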

Google Cloud Platform
UPDATE: Incident 18016 - Intermittent connectivity issues with BigQuery

The intermittent connectivity issues with BigQuery should have been resolved. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.

Google Cloud Platform
UPDATE: Incident 18016 - Intermittent connectivity issues with BigQuery

We are currently investigating intermittent connectivity issues with BigQuery that are affecting some of our customers. We will provide another update by 17:30 US/Pacific.

Google Cloud Platform
UPDATE: Incident 18015 - Google BigQuery issues

The issue with the BigQuery API should have been resolved for all affected projects as of 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to prevent or minimize future recurrence.