Windows Azure Storage Status

Storage - UK South - Investigating

Starting at 13:19 UTC on 10 Jan 2019, a subset of customers leveraging Storage in UK South may experience service availability issues. In addition, resources with dependencies on Storage may also experience downstream impact in the form of availability issues. Engineers have been engaged and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 5 days ago

Storage - UK South - Investigating

Engineers are investigating a potential service alert for Storage in the UK South region with potential downstream impact to dependent services in this region. More information will be provided as it becomes available.

Last Update: About 5 days ago

Network - South Central US

Starting at approximately 03:30 UTC on 04 Dec 2018, customers in South Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: About 1 month ago

Storage - West US 2 - Investigating

Starting at 04:20 UTC on 28 Nov 2018 a subset of customers in West US 2 may experience issues connecting to Storage resources hosted in this region. Customers using resources dependent on Storage may also see impact. Engineers are aware of this issue and are actively investigating. The next update will be provided within 60 minutes, or as events warrant.

Last Update: About 1 month ago

Storage - West US 2 - Investigating

An issue with Storage in West US 2 is currently being investigated. More information will be provided as it becomes available.

Last Update: A few months ago

Azure Service Availability - France Central

Starting at 13:57 UTC on 16 Oct 2018, customers using a subset of resources in France Central may experience difficulties connecting to these resources. Engineers have identified that a localized infrastructure event caused a number of storage and virtual machine resources to experience drops in availability. Service teams have begun restoring impacted storage and virtual machine resources to mitigate. The next update will be provided in 30 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - France Central

Engineers are investigating alerts for Storage and Virtual Machines in France Central. More updates will be provided shortly.

Last Update: A few months ago

Azure Service Availability

An investigation for Azure services in France Central is underway. More information will be provided as it becomes available.

Last Update: A few months ago

Storage - East US - Advisory

Engineers are investigating alerts for Storage in East US. Additional information will be provided shortly.

Last Update: A few months ago

Multiple Services - Korea South

Starting at 13:50 UTC on 30 Sep 2018, a subset of Storage customers in Korea South may experience difficulties connecting to resources hosted in this region. A number of services with dependencies on Storage in the region are also experiencing impact, and these are listed below. Engineers have identified the underlying cause, and are currently exploring mitigation options. Some customers may already be seeing signs of recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Korea South

Starting at 13:50 UTC on 30 Sep 2018, a subset of customers in Korea South may experience difficulties connecting to resources hosted in this region. Impacted services are listed below. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - South Central US - Investigating

Starting at 09:29 UTC on 04 Sep 2018, a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. At this time, engineers are investigating an issue with cooling in one part of the data center which caused a localized spike in temperature. Automated data center procedures to ensure data and hardware integrity went into effect when temperatures hit a specified threshold, and critical hardware entered a structured power-down process. The impact to the cooling system has been isolated and engineers are actively working to restore services. The next update will be provided as events warrant.

Last Update: A few months ago
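The automated protection described above amounts to a threshold check that triggers a structured power-down. A minimal illustrative sketch, assuming hypothetical `read_temp_c` and `power_down` hooks and an assumed threshold (none of these are Azure internals):

```python
# Illustrative sketch of a threshold-triggered protective power-down, in the
# spirit of the automated data center procedure described above. The threshold
# value and the read_temp_c/power_down hooks are hypothetical placeholders.
POWER_DOWN_THRESHOLD_C = 45.0  # assumed threshold, not Azure's actual value

def check_and_protect(read_temp_c, power_down, threshold_c=POWER_DOWN_THRESHOLD_C):
    """Trigger a structured power-down when the measured temperature reaches
    the threshold. Returns True if the power-down was triggered."""
    if read_temp_c() >= threshold_c:
        power_down()
        return True
    return False
```

A real system would debounce readings and aggregate multiple sensors before acting; this only shows the basic trip logic.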

Storage - West Europe

Starting at 08:30 UTC on 20 Aug 2018 you have been identified as a customer using Storage in West Europe who may receive failure notifications when performing create operations for resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Connectivity Issues

Starting at 13:18 UTC on 01 Aug 2018, a subset of customers using Azure Resources in East US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage availability issue which is currently under investigation. Some impacted customers may encounter difficulties when attempting to RDP to Virtual Machines hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this event. Engineers have identified a potential root cause and are exploring mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Central US

Engineers are investigating a potential outage in North Central US. More information will be provided shortly.

Last Update: A few months ago

Storage - Accessing Resources

Engineers are currently investigating an outage impacting Storage dependent resources in West US, West Central US and Central India. More information will be provided as events warrant.

Last Update: A few months ago

Storage - Multiple Regions

Engineers are currently investigating an outage impacting Storage dependent resources in West US, West Central US and Central India. More information will be provided as events warrant.

Last Update: A few months ago

Storage/Virtual Machines - South Central US

Engineers are currently investigating alerts in South Central US for Storage, Virtual Machines and App Services. More information will be provided as soon as it is available.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:44 UTC on 19 Jun 2018 a subset of customers using Virtual Machines, Storage, Key Vault, App Service, or Site Recovery in North Europe may experience connection failures when trying to access resources hosted in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:45 UTC on 19 Jun 2018 a subset of customers using Virtual Machines or Storage in North Europe may experience connection failures when trying to access resources hosted in the region. Some Virtual Machines may have also restarted unexpectedly. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - South Central US

Starting at 15:57 UTC on 13 Jun 2018 a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage availability issue which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this. These services may include: Virtual Machines, App Service, Visual Studio Team Services, Logic Apps, Azure Backup, Application Insights, Service Bus, Event Hub, and Site Recovery. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services impacted in West Europe

Engineers are investigating an emerging issue involving Storage and Virtual Machines in the West Europe region. In addition, Azure services that have a dependency on Storage may experience issues connecting to their resources. Confirmed impacted services are: Storage, Virtual Machines, SQL Databases, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, Logic Apps, and Automation.

Last Update: A few months ago

Emerging issue in West Europe

Engineers are investigating an emerging issue involving Storage and Virtual Machines in the West Europe region. More information will be provided as it becomes available.

Last Update: A few months ago

Warning Investigating Alerts - Storage - West Central US

Starting at 19:47 UTC on 03 May 2018 a subset of customers using Storage in West Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Issues Performing Service Management Operations - Australia East/Southeast

Starting at approximately 21:00 UTC on 15 Apr 2018, you have been identified as a customer in Australia East and Australia Southeast who may be unable to view resources via the Azure portal or programmatically and may be unable to perform service management operations due to this. Service availability for those resources is not affected by this issue. Engineers have identified a back-end storage component as a possible underlying cause, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines, Azure Search and Backup. Customers may begin seeing signs of mitigation. Engineers are investigating a potential power event in the region impacting a single storage scale unit and are actively working on mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines and Azure Search. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018, a limited subset of customers dependent on a storage service in West US may experience latency or failures connecting to certain resources. In addition to the storage service, impacted services which leverage it include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are actively investigating the impacted storage service and developing mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Multiple Regions

An alert for Storage in Multiple Regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - Multiple Regions

Starting at 18:28 UTC on 21 Nov 2017 a subset of customers may receive failure notifications when performing service management operations - such as create, update, delete - when attempting to manage their Storage Accounts. Retries of these operations may succeed. Azure Monitor customers may also see impact to API calls to turn on diagnostic settings. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago
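The advisory above notes that retries of the failed management operations may succeed. A minimal client-side sketch of retry with exponential backoff and jitter (plain Python; `TransientError` and `op` are illustrative placeholders, not Azure SDK types):

```python
import random
import time

class TransientError(Exception):
    """Placeholder for a retryable failure, e.g. an HTTP 500/503 from a
    management API during an incident like those above."""

def call_with_retries(op, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call op(), retrying with exponential backoff plus full jitter on
    transient errors; re-raise once all attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts:
                raise  # no attempts left
            # Exponential backoff (1s, 2s, 4s, ...) capped at max_delay,
            # with full jitter to avoid synchronized retry storms.
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(random.uniform(0, delay))
```

In practice the set of retryable errors and the delay parameters would come from the SDK or service guidance; this only illustrates the backoff shape.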

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are also aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying network infrastructure event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers continue investigating possible underlying causes, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US and South Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are currently investigating previous updates and deployments to the region along with other possible network level issues, and are taking additional steps to mitigate impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying network infrastructure event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying network infrastructure event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on the 2nd November 2017, a subset of customers may have experienced issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Engineers have applied mitigation and are in the final stages of validating that there is no further customer impact.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on the 2nd November 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Engineers are observing signs of recovery and the next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on the 2nd November 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact such as Virtual Machines, Cloud Services, Backup, Azure Site Recovery, VSTS Load Testing and Azure Search. Retries may be successful. Impact for this issue is limited to Service Management functions and Service Availability for existing resources should not be affected. Engineers are implementing steps to mitigate the incident. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on 2nd November 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact. Virtual Machines and Cloud Services customers may experience intermittent failures when attempting to provision resources. Azure Backup and Azure Site Recovery may also experience failures. VSTS Load Testing customers may see load test failures. Impact for this issue is limited to Service Management functions, and existing resources should not be affected. Engineers have determined the underlying issue and are exploring mitigation options. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on the 2nd November 2017, a subset of customers may experience issues with Service Management functions (Create, Rename, Delete, etc.) for their Azure Storage resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Other services with dependencies on Storage may also experience impact. Virtual Machines and Cloud Services customers may experience intermittent failures when attempting to provision resources. Azure Backup and Azure Site Recovery may also experience failures. VSTS Load Testing customers may see load test failures. Impact for this issue is limited to Service Management functions, and existing resources should not be affected. Engineers have determined the underlying issue and are exploring mitigation options. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Service Management Operations

Starting at 11:40 UTC on the 2nd November 2017, due to an underlying Storage incident, Azure services that leverage Storage may experience impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Storage customers may be unable to provision new Storage resources or perform service management operations on existing resources. Customers leveraging Azure Backup may experience replication failures. Engineers are currently investigating to determine the underlying root cause. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Multiple Regions

Starting at 11:40 UTC on 02 Nov 2017, customers may experience errors when attempting to carry out management operations - such as create, update, delete - on their Storage resources. Engineers are investigating and the next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is partially mitigated. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have enacted mitigation for the primary root cause and are currently assessing and mitigating residual impact. Customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and customers should start seeing signs of mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 16:45 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 10:37 UTC on 26 Oct 2017 a subset of customers using Storage in West US may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 10:37 UTC on 26 Oct 2017 you have been identified as a customer using Storage in West US who may experience difficulties connecting to resources hosted in this region. Virtual Machines hosted in this region may also experience unexpected reboots. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

SUMMARY OF IMPACT: Between 13:27 and 20:15 UTC on 29 Sep 2017, a subset of customers in North Europe may have experienced difficulties connecting to resources hosted in this region due to availability loss of a Storage scale unit. Services that depend on the impacted Storage resources in this region that may have seen impact are Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Time Series Insights, Stream Analytics, HDInsight, Data Factory and Azure Scheduler.

PRELIMINARY ROOT CAUSE: Engineers have determined that this was the result of a facility issue that resulted in physical node reboots as a precautionary measure. The nodes impacted were primarily from a single storage stamp. Recovery took longer than expected, and the full public RCA will include details on why these nodes did not recover more quickly.

MITIGATION: Engineers manually checked the resources in the data center and initiated a restart of the Storage nodes that were impacted.

NEXT STEPS: Engineers are still assessing any residual customer impact as well as understanding the cause for the initial event. Any residual impacted customers will be contacted via their management portal (https://portal.azure.com).

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a limited subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which remains under investigation. Virtual Machines and Storage remain impacted, with a very limited number of customers experiencing issues. Application Insights, Azure Search, Azure Monitor, Redis Cache, Azure Site Recovery, Data Factory, Azure Scheduler, HDInsight, Azure Backup, App Services\Web Apps, Stream Analytics, Cloud Services and Azure Functions are reporting recovery. Engineers are continuing to work on recovering the remaining two services. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, and Azure Functions. Application Insights, Azure Search, Azure Monitor, Redis Cache, Azure Site Recovery, Data Factory, Azure Scheduler, HDInsight and Stream Analytics are reporting recovery. Engineers are attempting alternative mitigation steps in an attempt to recover the remaining unhealthy storage machines and services. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Stream Analytics, HDInsight, Data Factory and Azure Scheduler. Media Services, Application Insights, Azure Search and Azure Site Recovery are reporting recovery. Engineers are seeing signs of recovery and are continuing to recover the remaining unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are seeing signs of recovery and are continuing to recover unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are continuing to recover unhealthy storage machines in order to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Microsoft Intune, Application Insights, Azure Functions, Stream Analytics and Media Services. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 18:50 UTC on 17 Aug 2017 a subset of customers using Storage in West US may experience intermittent difficulties connecting to Storage resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this issue. Engineers have deployed a fix and are validating recovery. Customers should begin to experience improvements. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

Starting at 18:50 UTC on 17 Aug 2017 a subset of customers using Storage in West US may experience intermittent difficulties connecting to Storage resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this issue. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West US

An alert for Storage in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - South Central US

Starting at 16:49 UTC on 28 Jul 2017 a subset of customers using Storage in South Central US may experience difficulties connecting to resources hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - South Central US

An alert for Storage in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have determined that an underlying network issue is impacting communication with a subset of storage scale units. They are continuing to implement a mitigation plan and monitor service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago
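Several of the updates above describe transient symptoms (timeouts and HTTP 50x errors) for which retries may succeed. As a rough client-side illustration of that guidance, a caller can treat timeouts and 50x responses as retryable and back off exponentially between attempts. This is a minimal sketch, not an Azure SDK API; `call_with_retries` and the status-code handling are hypothetical names for illustration only.

```python
import time

# Status codes treated as retryable, per the symptoms described in the
# incident updates (HTTP 50x errors).
RETRYABLE = {500, 502, 503, 504}

def call_with_retries(operation, max_attempts=4, base_delay=1.0):
    """Invoke `operation` (a callable returning an HTTP status code),
    retrying on timeouts and retryable status codes with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            status = operation()
        except TimeoutError:
            status = None  # treat a timeout as retryable
        if status is not None and status not in RETRYABLE:
            return status  # success, or a non-retryable error
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("operation still failing after retries")
```

A real client would also honor any `Retry-After` header and cap the total retry budget so requests fail fast once an incident is confirmed.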

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have determined that an underlying network issue is impacting communication with a subset of storage scale units and are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Latency - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are implementing a mitigation plan and monitoring service health. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 07:00 UTC on 18 May 2017, a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers have identified a possible underlying cause and are evaluating mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

Starting at 05:00 UTC on 18 May 2017 a subset of customers using Storage in West Europe may experience higher than expected latency, timeouts or HTTP 50x errors when accessing data stored in this region. Other services that leverage Storage in this region may also experience impact related to this latency. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

An alert for Storage in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

SUMMARY OF IMPACT: Between 13:50 and 21:00 UTC on 31 Mar 2017, a subset of customers in Japan East may have experienced difficulties connecting to their resources hosted in this region. Customers using the following services may have experienced impact: Storage, Virtual Machines, API Management, Web Apps, Automation, Backup, Cloud Services, Azure Container Service, Data Movement, DocumentDB, Event Hubs, HDInsight, IoT Hub, Key Vault, Logic Apps, Media Services, Azure Monitor, Redis Cache, Service Bus, Site Recovery, StorSimple, Stream Analytics, Azure Machine Learning, Azure Notification Hub and Access Control Service. PRELIMINARY ROOT CAUSE: Engineers have identified the underlying root cause as loss of cooling, which caused certain Storage and Compute scale units to perform an automated shutdown to preserve data integrity & resilience. This affected a number of services with dependencies on these scale units. MITIGATION: Engineers restored cooling, restarted the affected scale units, verified hardware recovery, and verified recovery for data plane and control plane operations for all the affected services. NEXT STEPS: Customers still experiencing impact will receive communications in their management portals. An internal investigation will be conducted and this post will be updated in approximately 48 - 72 hours.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and the affected resources, and are in the final stages of verifying recovery. Customers who are still experiencing impact will be communicated to separately via their management portals. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Cooling Event | Japan East | Validating Mitigation

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and most of the affected resources, and are continuing to recover the rest of the affected resources. Some services are performing final checks before declaring mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Cooling Event | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as loss of cooling which caused some resources to undergo an automated shutdown to avoid overheating and ensure data integrity & resilience. Engineers have recovered the cooling units and are working on recovering the affected resources. Engineers will then validate control plane and data plane availability for all affected services. Some customers may see signs of recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Infrastructure event | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers in Japan East may experience difficulties connecting to their resources hosted in this region. Engineers have identified the underlying cause as an infrastructure alert which caused some resources to undergo an automated shutdown to ensure data integrity & resilience. Engineers have mitigated the infrastructure alert, and are currently undertaking the structured restart sequence for any impacted resources. Some of the previously impacted resources are now reporting as healthy and impacted customers may see early signs of recovery. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 13:50 UTC on 31 Mar 2017 a subset of customers using storage services in Japan East may experience difficulties connecting to their resources hosted in this region. Other Azure services that leverage storage in this region may also be experiencing impact, and these are detailed in the post below. Engineers have identified the underlying cause as an infrastructure alert which caused some storage resources to undergo an automated shutdown to ensure data integrity & resilience. Engineers have mitigated the infrastructure alert, and are currently undertaking the structured restart sequence for any impacted resources. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 13:50 UTC on 31 Mar 2017 a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have identified a potential root cause and this is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines - Japan East

Starting at 06:50 UTC on 31 Mar 2017 a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have identified a potential root cause and this is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Alert - Storage and Virtual Machines- Japan East

Engineers are investigating alerts in Japan East for Storage and Virtual machines. Additional information will be provided shortly.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, due to a networking infrastructure issue, the following services are impacted: App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts. Engineers are continuing to investigate the underlying cause and are applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. Engineers are continuing to investigate the underlying cause and have begun applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017 a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience high latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Engineers are investigating the underlying cause and working on mitigation paths. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017 customers in Japan West may experience difficulties connecting to resources hosted in this region. Engineers confirmed impact to Redis Cache, Azure Search, Azure Monitor, and App Service / Web Apps. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan West

An alert for Storage in Japan West is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - West Europe

Starting at 13:45 UTC on 21 Mar 2017 a subset of customers using Storage in West Europe may experience latency when accessing their Storage resources in this region. Virtual Machine customers may also be experiencing latency as a result of this issue. Retries may be successful for some customers. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe

An alert for Storage in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage Availability in East US

SUMMARY OF IMPACT: Between 21:50 UTC on 15 Mar 2017 and 06:00 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers and services dependent on Storage may have experienced difficulties provisioning new resources or accessing their existing resources in the region. Engineering confirmed that Azure services that experienced downstream impact included Virtual Machines, Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database, API Management and Azure Stream Analytics. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full, detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers are now recovering Azure services and customers should begin observing improvements in accessing resources. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Engineers are working on a phased recovery per our power-event recovery plan. We are anticipating an extended recovery time for this incident. The next update will be provided in 2 hours or as any new information is made available.

Last Update: A few months ago

Storage Availability in East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US.  Customers in this region may also experience failures when trying to access a subset of their Virtual Machines.  Engineers have identified the root cause as a power event affecting a single scale unit. Data center technicians are on site working to restore power to the scale unit. We are anticipating an extended recovery time for this incident. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Availability in East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Customers in this region may also experience failures when trying to access a subset of their Virtual Machines. Engineers have identified a potential root cause and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage provisioning impacting multiple services

SUMMARY OF IMPACT: Between 22:42 UTC on 15 Mar and 00:00 UTC on 16 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may have experienced impact. Virtual Machines or Cloud Services customers may have experienced failures when attempting to provision resources. Storage customers would have been unable to provision new Storage resources or perform service management operations on existing resources. Azure Search customers may have been unable to create, scale, or delete services. Azure Monitor customers may have been unable to turn on diagnostic settings for resources. Azure Site Recovery customers may have experienced replication failures. API Management 'service activation' in South India may have experienced a failure. Azure Batch customers would have been unable to provision new resources. During this time all existing Azure Batch pools would have scheduled tasks as normal. EventHub customers using a service called 'Archive' may have experienced failures. Customers using Visual Studio Team Services Build would have experienced failures. Azure Portal may have been unable to access storage account management operations and would have been unable to deploy new accounts. PRELIMINARY ROOT CAUSE: Engineers have identified a software error as the potential root cause. MITIGATION: Engineers have applied a patch and mitigated the issue. NEXT STEPS: A full, detailed Root Cause Analysis will be published in approximately 72 hours.

Last Update: A few months ago

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact. Retries may be successful. In addition, a subset of customers in East US may be unable to access their Storage accounts. Engineers have identified a possible fix for the underlying cause, and are applying mitigations. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago
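The update above notes that retries of failed service management operations (create, update, delete) may succeed. When many clients retry a recovering service at once, spacing retries out matters; a common approach is exponential backoff with "full jitter" so clients don't retry in lockstep. The sketch below is illustrative only and not part of any Azure SDK; `backoff_schedule` is a hypothetical helper.

```python
import random

def backoff_schedule(attempts, base=2.0, cap=60.0, rng=random.random):
    """Return a list of retry delays (seconds): exponentially growing
    ceilings (base, 2*base, 4*base, ...) capped at `cap`, with each
    actual delay drawn uniformly from [0, ceiling) ("full jitter")."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng() * ceiling)  # uniform in [0, ceiling)
    return delays
```

Passing a fixed `rng` (as in tests) makes the schedule deterministic; in production the default `random.random` spreads retry load across clients.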

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact. Retries may be successful. In addition, a subset of customers in East US may be unable to access their Storage accounts. Engineers have identified a possible fix for the underlying cause, and are applying mitigations. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Azure Search customers may be unable to create, scale, or delete services. Azure Monitor customers may be unable to turn on diagnostic settings for resources. Azure Site Recovery customers may experience replication failures. API Management 'service activation' in South India will experience a failure. Azure Batch customers will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. EventHub customers using a service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. Customers using API Management service activation in South India will experience a failure. Customers using Azure Batch will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. Customers using the feature of the EventHub service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. Customers using API Management service activation in South India will experience a failure. Customers using Azure Batch will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. Customers using the feature of the EventHub service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage provisioning impacting multiple services

Starting at 22:42 UTC on 15 Mar 2017, customers using Storage may receive failure notifications when performing service management operations - such as create, update, delete - for resources hosted in this region. Other services that leverage Storage may also be experiencing impact related to this, and impacted services will be listed on the Azure Status Health Dashboard. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage

An alert for Storage is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage and SQL Database in this region may also be experiencing impact related to this, and these are detailed below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HD Insight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this, and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HD Insight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this, and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and App Service \ Web Apps - Japan East

An alert for Storage and App Service \ Web Apps in Japan East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Storage and IoT Hub may experience timeouts or errors when accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to mitigate the final impact to storage. A majority of customers will see recovery. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Cloud Services, Storage, Azure Monitor, Activity Logs, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: Virtual Machines, SQL Database, Backup, Site Recovery, Redis Cache, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, Cloud Services, Storage, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated; engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: SQL Database, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, DocumentDB, Azure DevTest Labs, Service Bus, and Event Hub, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services; the following services have confirmed mitigation: Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been investigated, and engineers are currently undertaking a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Visual Studio Online, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are continuing to investigate, and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Visual Studio Online, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services and Storage, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage, SQL Database, and associated services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers using Virtual Machines, SQL Database, Cloud Services and Storage in West US 2 may experience issues accessing their services in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Storage and SQL Database - West US 2

An alert for Virtual Machines, Cloud Services, Storage and SQL Database in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines and Cloud Services - West US 2

An alert for Virtual Machines and Cloud Services in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - Service Management Operation Failures | Recovered

Engineers are aware of a recent issue for Storage in multiple regions, which has now been mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US 2

SUMMARY OF IMPACT: Between 22:12 and 22:57 UTC on 10 Jan 2017, a subset of customers using Virtual Machines in West US 2 may have experienced restarts and connection failures when trying to access Virtual Machines hosted in this region. Concurrently, a subset of customers using Storage in West US 2 may have experienced higher than expected latency, timeouts or HTTP 500 errors when accessing data stored on Storage accounts hosted in this region. PRELIMINARY ROOT CAUSE: Engineers have identified a hardware issue as the preliminary root cause. MITIGATION: The issue was self-healed by the Azure platform. NEXT STEPS: An exhaustive root cause analysis will be conducted and a report provided once completed.
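
Client-side handling of the symptoms described in this summary (latency, timeouts, HTTP 500 errors) typically starts by separating retryable responses from genuine request errors. A minimal, hypothetical classification sketch — the status set below is an assumption for illustration, not Azure's documented retry policy:

```python
# Statuses worth retrying: throttling and server-side errors of the kind
# seen in this incident. Plain 4xx client errors indicate a problem with
# the request itself and are not worth retrying.
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def should_retry(status_code, timed_out=False):
    """Return True if a storage request is worth retrying.

    status_code is None when no response arrived at all (e.g. a dropped
    connection); timeouts and missing responses are treated as retryable.
    """
    if timed_out or status_code is None:
        return True
    return status_code in RETRYABLE_STATUSES
```

Since this incident self-healed, a client applying this rule with bounded retries would likely have ridden out the 45-minute window without surfacing errors for most requests.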

Last Update: A few months ago

Storage and Virtual Machines - West US 2

Starting at 22:12 UTC on 10 Jan 2017, a subset of customers using Virtual Machines in West US 2 may experience restarts and connection failures when trying to access Virtual Machines hosted in this region. Concurrently, a subset of customers using Storage in West US 2 may experience higher than expected latency, timeouts or HTTP 500 errors when accessing data stored on Storage accounts hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US 2

Engineers are aware of a recent issue for Storage and Virtual Machines in West US 2 which our telemetry indicates is mitigated. More information will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US 2

An alert for Storage and Virtual Machines in West US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Retries may succeed for some customers. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed below. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Retries may succeed for some customers. Other services that leverage Storage in this region may also be experiencing impact related to this, and these are detailed in the post below. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Storage in North Europe may experience difficulties connecting to resources hosted in this region. Other services that leverage Storage in this region, such as Virtual Machines and WebApp may also be experiencing impact related to this. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - North Europe

An alert for Storage in North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East US

Starting at 03:20 UTC on 16 Nov 2016, a subset of customers using Virtual Machines in East US may experience connection failures when trying to access Virtual Machines hosted in this region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - East Asia

Starting at 14:10 UTC on 26 Oct 2016, a subset of customers with services hosted in East Asia may experience degraded performance, latency, or time-outs when accessing their resources located in this region. Impacted services include, but are not limited to, Virtual Machines, App Service \ Web Apps, Storage, Azure Search, and Service Bus. New service creation may also fail for customers. Some Virtual Machine customers will have experienced a reboot of their VMs in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - East Asia

An alert for Virtual Machines, Network Infrastructure, App Service \ Web Apps, Storage, Service Bus and Azure Search in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East Asia

An alert for Virtual Machines in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation. Customers with dependencies on the affected Storage scale unit that are still impacted will receive direct communication via their management portal (https://portal.azure.com). More nodes have been recovered and engineers are working on recovering the few nodes that are still affected. The next update will be provided as more information is made available.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation. Customers with dependencies on the affected Storage scale unit that are still impacted will receive direct communication via their management portal (https://portal.azure.com). Engineers have applied mitigation steps and have observed that a majority of the scale unit has recovered. The next update will be provided as more information is made available.

Last Update: A few months ago

[Extended Recovery] Storage – East Asia

Engineers continue to implement mitigation steps, and more customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup, IoT Suite, Managed Cache, Redis Cache, Stream Analytics, HDInsight, Event Hub, and API Management have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that have a dependency on the affected Storage scale unit may also experience service degradation; please refer to another message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers continue to implement mitigation steps, and more customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup, IoT Suite, Managed Cache, Redis Cache, and Stream Analytics have reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services that have a dependency on the affected Storage scale unit may also experience service degradation; please refer to another message for details on secondary impacted services.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and continue to see services improve, and some customers have reported their services being restored. Service Bus, Site Recovery, Azure Backup and IoT Suite reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation; please refer to the separate message for details on secondarily impacted services.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and are seeing services improve, and some customers have reported their services being restored. Service Bus and Site Recovery reported their services are fully mitigated. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation; please refer to the separate message for details on secondarily impacted services.

Last Update: A few months ago

[Recovery] Storage – East Asia

Engineers have begun mitigation and are seeing services improve, and some customers have reported their services being restored. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation; please refer to the separate message for details on secondarily impacted services.

Last Update: A few months ago

[Mitigation in progress] Storage – East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Some Azure services with dependencies on the affected Storage scale unit may also experience service degradation; please refer to the separate message for details on secondarily impacted services.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Some services are seeing improvement and engineers are continuing to work towards full mitigation for the region. The next update will be provided as soon as further information is available.
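The geo-replication guidance above means an application can also be written to fall back to a geo-secondary endpoint when the primary region is unavailable. The following is a minimal, hypothetical sketch of that client-side pattern; the endpoint names and the connect() stand-in are illustrative, and the actual geo-secondary failover is performed on the service side (see https://aka.ms/sql-business-continuity).

```python
# Hypothetical sketch: try the primary endpoint first, then fall back to the
# geo-secondary. Endpoint names and fake_connect are illustrative only.

def connect_with_failover(endpoints, connect):
    """Try each endpoint in order; return the first successful connection."""
    last_error = None
    for endpoint in endpoints:
        try:
            return connect(endpoint)
        except ConnectionError as exc:
            last_error = exc  # endpoint unavailable; try the next one
    raise last_error

# Simulated regional outage: the primary endpoint refuses connections.
def fake_connect(endpoint):
    if endpoint == "primary.database.windows.net":
        raise ConnectionError("primary region unavailable")
    return f"connected:{endpoint}"

result = connect_with_failover(
    ["primary.database.windows.net", "secondary.database.windows.net"],
    fake_connect,
)
print(result)  # connected:secondary.database.windows.net
```

In a real deployment the fallback list would be driven by the application's configured geo-replication topology rather than hard-coded names.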

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including Site Recovery, API Management, Stream Analytics, SQL DB, Virtual Machines, Virtual Network, and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services have been fully recovered. Engineers have identified a potential root cause and are continuing to work through mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers have identified a potential root cause and are continuing to work towards mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - East Asia - Advisory

Starting at 09:16 UTC on 02 Oct 2016, you have been identified as a customer who may be impacted by an issue with a storage scale unit in East Asia. Customers leveraging this scale unit may experience difficulties connecting to their storage resources hosted in this region. Customers may also experience issues accessing services leveraging this scale unit, including SQL DB, Virtual Machines, Virtual Network, Media Services, Key Vault and App Service \ Web Apps. SQL customers may also experience login failure, or issues with Service Management (Create, Rename, Delete, etc.). Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Key Vault and Media Services are seeing signs of recovery. Engineers are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services Impacted by Storage Incident

Starting at 17:19 UTC on 22 August, 2016, a subset of customers may experience 500 errors when attempting to access their Storage resources hosted in East US. Document DB customers in multiple regions may be unable to execute management operations from the portal, PowerShell, or programmatically. Key Vault customers in multiple regions may also be impacted. Engineers are working on mitigation and we are seeing improvements. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East US

Starting at 17:19 UTC on 22 August, 2016, a subset of customers may experience 500 errors when attempting to access their Storage resources hosted in East US. Engineers are investigating. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - West Europe - Advisory

SUMMARY OF IMPACT: Between 17:20 and 17:46 UTC on 21 Jul 2016, a subset of customers using Storage in West Europe may have experienced timeouts while accessing their storage resources. Additionally, some customers may have experienced failures or unexpected reboots of their Virtual Machines. The majority of customers' VMs should now be recovered; we have also identified a very limited subset of customers who may be experiencing residual impact. These customers will receive further communications through their Management Portal (https://portal.azure.com). PRELIMINARY ROOT CAUSE: At this stage we do not have a definitive root cause. MITIGATION: This issue was self-healed by the Azure platform. NEXT ACTION: Investigate the underlying root cause of this issue and develop a solution to prevent recurrences.

Last Update: A few months ago

Storage, Virtual Machines, and Cloud Services - Japan East - Advisory

Starting at 20:40 UTC on 24 May, 2016, a subset of customers using Storage in Japan East may experience intermittent latency or failures while accessing their storage resources. Additionally, some customers may experience failures or unexpected reboots of their Virtual Machines or Cloud Services. Engineers have identified a potential root cause and are working towards mitigation. The next update will be provided in 2 hours, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - Japan East - Advisory

An alert for Virtual Machines and Storage in Japan East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Root Cause: Underlying Storage Issue In West Europe Impacting Virtual Machines and Web Apps in the Region

Starting at 19:00 UTC on April 28, 2016, engineers identified a Storage issue in West Europe that has also impacted a number of dependent services. Web Apps and Virtual Machines are showing signs of recovery, and RemoteApp is fully recovered at this time. Engineers are continuing to investigate the issue and restore full system health across all services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

Root Cause: Underlying Storage Issue In West Europe Impacting Virtual Machines and Web Apps in the Region

Engineers are investigating an underlying Storage issue that has impacted both Web App and Virtual Machines customers in West Europe. Impact has recovered and engineering is continuing its investigation of the underlying issues. An update will be provided with additional details in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines and Storage - East US - Advisory

SUMMARY OF IMPACT: Between 12:50 and 17:15 UTC on 21 Apr 2016, a subset of customers using Virtual Machines in East US may have experienced reboots to their Virtual Machines. In addition, some customers using Storage in this region may have experienced timeouts while accessing their storage resources. PRELIMINARY ROOT CAUSE: Engineers had deployed an update that caused a software error. MITIGATION: Engineers have developed a hotfix, deployed it to the impacted scale unit, and confirmed the incident is mitigated. NEXT STEPS: Review procedures for validating service updates, as the software error was not identified during the testing phase of the development process. Engineers also identified that a very limited subset of customers may be experiencing residual impact. These customers will receive further communications through their management portal.

Last Update: A few months ago

Virtual Machines and Storage - East US - Advisory

Starting at 12:15 UTC on 21 Apr 2016, a subset of customers using Virtual Machines in East US may have experienced a reboot of their Virtual Machines. Some customers may also experience timeouts whilst accessing their storage resources in the East US region (retries should succeed). Engineers have identified a software error as the preliminary root cause and are currently validating a hotfix to mitigate. Some customers should see improvements. The next update will be in 2 hours or as events warrant.
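The "retries should succeed" guidance above is the standard transient-fault pattern: retry the storage call with exponential backoff rather than failing on the first timeout. A minimal sketch, with a simulated flaky storage call standing in for a real SDK request (the function names are illustrative, not an Azure API):

```python
import time

def retry(op, attempts=4, base_delay=0.01):
    """Retry a transient-failure-prone operation with exponential backoff."""
    for i in range(attempts):
        try:
            return op()
        except TimeoutError:
            if i == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** i))  # back off: 0.01s, 0.02s, 0.04s

# Simulate a storage read that times out twice, then succeeds.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("storage timeout")
    return b"blob-bytes"

print(retry(flaky_read))  # b'blob-bytes'
```

Production Azure SDK clients ship their own configurable retry policies, so in practice this logic is usually enabled through the client library rather than hand-rolled.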

Last Update: A few months ago

Virtual Machines and Storage - East US - Advisory

Starting at 12:15 UTC on 21 Apr 2016, a subset of customers using Virtual Machines in East US may have experienced a reboot of their Virtual Machines. Some customers may also experience timeouts whilst accessing their storage resources in the East US region (retries should succeed). Engineers have identified a potential root cause and started the recovery process to mitigate. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines / Storage : Advisory

Starting at 12:15 UTC on 21 Apr 2016, a subset of customers using Virtual Machines in East US may have experienced a reboot of their Virtual Machines hosted in this region. Some customers may also experience timeouts whilst accessing their storage resources in the East US region (retries should succeed). Engineers are currently investigating a potential underlying storage issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines : Advisory

Starting at 12:15 UTC on 21 Apr 2016, a subset of customers using Virtual Machines in East US may have experienced a reboot of their Virtual Machines hosted in this region. Some customers may also experience timeouts whilst accessing their storage resources in the East US region (retries should succeed). Engineers are currently investigating a potential underlying storage issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the storage team has recovered, and all services except a limited subset of Web Apps have reported recovery as well. The next update will be provided in 2 hours or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery, and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApp, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are actively mitigating the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers are showing signs of recovery and some customers may continue to experience errors attempting to connect to resources. Web Apps customers are showing signs of recovery and a subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to see recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Managed Cache, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to see recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Redis Cache, Service Bus, SQL Databases, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery, and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics, SQL Databases, Storage, HDInsight.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery, and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases customers may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the storage team is seeing recovery; once Storage is fully healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team is actively mitigating the issue; once Storage is healthy, the impacted services below will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

Last Update: A few months ago
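For client applications, transient Storage connection failures and 503s like those described in the entries above are often best handled with retries and exponential backoff rather than failing immediately. The Azure Storage SDKs ship their own retry policies; the sketch below is a minimal, SDK-agnostic illustration (the `op` callable and delay values are assumptions, not part of any Azure API):

```python
import time


def retry_with_backoff(op, retries=4, base_delay=0.5, sleep=time.sleep):
    """Call op(); on failure, wait base_delay * 2**attempt and retry.

    Returns op()'s result, or re-raises the last error once all
    retries are exhausted. `sleep` is injectable for testing.
    """
    for attempt in range(retries):
        try:
            return op()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            sleep(base_delay * (2 ** attempt))
```

In production code you would catch only the transient error types your client raises (timeouts, 5xx responses) and add jitter, but the shape of the loop is the same.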

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC (updated) on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC (updated) on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers on the Basic or Standard tiers may experience failures when attempting to connect or log in to their databases; Premium tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures, or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC (updated) on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers on the Basic or Standard tiers may experience failures when attempting to connect or log in to their databases; Premium tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps customers may experience failures connecting to or using their services.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, Redis Cache, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for each of the impacted services. Cloud Services and Virtual Machines customers may experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures, or long latencies when accessing websites. HDInsight customers may experience failures creating clusters, as well as impact to existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Database customers may experience failures when attempting to connect or log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp customers will experience failures trying to connect to RemoteApp Collections.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, and RemoteApp in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

Starting at 16:45 UTC on Apr 9th, 2016, a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, and Azure Search in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this is due to a fiber cut on a third-party network provider's infrastructure. Engineers from the network provider have repaired the fiber, and Azure services are starting to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this is due to a fiber cut on a third-party network provider's infrastructure, which has been repaired. Azure services are beginning to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago
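During network-level events like the fiber cut above, a quick TCP reachability probe from the client side can help distinguish a regional network problem from an application fault before digging into logs. A minimal sketch using only the Python standard library (host and port below are placeholders; substitute your service's actual endpoint, e.g. a storage account's blob endpoint on port 443):

```python
import socket


def probe(host, port, timeout=2.0):
    """Best-effort TCP reachability check.

    Returns True if a TCP connection to (host, port) opens within
    `timeout` seconds, False otherwise. Reachability at the TCP layer
    does not guarantee the service is healthy, only that the network
    path and listener are up.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0
```

Running this from inside and outside the affected region gives a rough picture of whether the failure is network-wide or localized.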

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this is due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this is due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016, customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this is due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage and Virtual Machines - South Central US - Advisory

Starting at 17:00 UTC on 28 Mar, 2016, a subset of customers using Virtual Machines and Storage in South Central US may experience higher than normal latency or errors when attempting to connect to, or use, impacted VHDs or Storage services. Engineers are investigating an underlying storage issue as the preliminary root cause. The next update will be provided in 60 minutes.

Last Update: A few months ago

Virtual Machines - South Central US - Advisory

An alert for Virtual Machines in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using services including Resource Manager, Automation, Alerts, Key Vault, HDInsight, Application Insights, Data Lake, Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. We are starting to see improvements in service availability, and customers should begin to see recovery. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using services including Automation, Alerts, Key Vault, HDInsight, Application Insights, Data Lake, Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Automation, Alerts, Key Vault, HDInsight, Application Insights, Data Lake, Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Automation, Alerts, Key Vault, HDInsight, Application Insights, Data Lake, Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using HDInsight, Application Insights, Data Lake, Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Web App and Redis Cache - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Storage, Virtual Machines, Web App, and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - South Central US - Partial Performance Degradation

Starting at 16:00 UTC on 24 Mar 2016, a subset of customers using Virtual Machines and Storage in South Central US may experience higher than normal latency or errors when attempting to connect to, or use, impacted VHDs or Storage services. Engineers have identified a single Storage scale unit in the region as the preliminary root cause of the issue, and are currently working on mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Storage in East US 2 may experience errors or timeouts when attempting to access resources. Engineers are currently investigating. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Advisory (Limited Impact)

An alert for Storage in East US 2 is being investigated. A subset of customers may be impacted. More information will be provided as it is known.

Last Update: A few months ago

Storage and Virtual Machines - South Central US - Partial Performance Degradation

Starting at 16:00 UTC on 24 Mar 2016, a subset of customers using Virtual Machines and Storage in South Central US may experience higher than normal latency or errors when attempting to connect to, or use, impacted VHDs or Storage services. Engineers have identified a single Storage scale unit in the region as the preliminary root cause of the issue, and are examining options to mitigate the impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - South Central US - Advisory

Starting at 16:00 UTC on 24 Mar 2016, a subset of customers using Virtual Machines in South Central US may experience higher than normal latency or errors when attempting to connect to, or use, impacted machines. Engineers have identified a Storage account hosting customer VHDs as the preliminary root cause of the issue, and are examining options to mitigate the impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying Networking issue, and the majority of affected customers should observe recovery now. Please refer to the History page for the preliminary report on the Networking incident. All impacted Azure services have also reported as restored, except App Services and HDInsight. Engineers continue to recover the residual affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying Networking issue, and the majority of affected customers should observe recovery now. Engineers continue to recover the residual affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, and Azure Scheduler.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, and Visual Studio Team Services.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, and HDInsight.

Last Update: A few months ago

Multiple Azure services experienced brief unavailability - South Central US - Advisory

Starting at approximately 04:57 UTC on 11 Mar 2016, engineers detected alerts in the South Central US region where a subset of Azure customers may experience brief reboots of their VMs hosted in this region. Potentially impacted Azure services include: Azure Resource Manager, Web App services, Visual Studio Team Services, and Azure Search. Engineers are now seeing recovery in our monitoring, and customers should not experience impact. Engineers continue to assess customer impact and investigate the root cause of this incident, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure services experienced brief unavailability - South Central US - Advisory

Engineers have detected Storage-related alerts in the South Central US region where a subset of Azure customers may experience brief reboots of their VMs hosted in this region. Potentially impacted Azure services include: Azure Resource Manager, Web App services, and Visual Studio Team Services. Engineers are now seeing recovery in our monitoring, and customers should not experience impact. Engineers continue to assess customer impact and investigate the root cause of this incident, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

Engineers have detected Storage-related alerts in the South Central US region where a subset of Azure customers may experience brief reboots of their VMs hosted in this region. Potentially impacted Azure services include: Azure Resource Manager, Web App services, and Visual Studio Team Services. Engineers are seeing recovery in our monitoring now, and customers should not experience impact. Engineers continue to investigate the root cause of this incident, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

An alert for Storage in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage, Web App, Virtual Machines, Azure Backup, Azure Search, Key Vault, RemoteApp, Stream Analytics, Visual Studio Team Services - Australia East - Partial Service Interruption

Engineers are mitigating an ongoing issue where a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 01:00 UTC on 27 Feb, engineers observed a number of Virtual Machines, Web Apps, and other Azure resources becoming inaccessible. Azure services that reported impact were: Azure Backup, Azure Search, Key Vault, RemoteApp, Stream Analytics, and Visual Studio Team Services. As of 03:10 UTC on 27 Feb, most Azure services have reported healthy, except RemoteApp. Preliminary investigation indicates that one of our mitigation processes may have unexpectedly caused a spike in latency, resulting in Azure services and resources becoming inaccessible. The process is complete, and the majority of customers should see improvement now. Engineers are continuing to monitor the system, and the next update will be provided in 60 minutes.

Last Update: A few months ago

Storage, Web App, Virtual Machines, Azure Backup, Azure Search, Key Vault, RemoteApp, Stream Analytics, Visual Studio Team Services - Australia East - Partial Service Interruption

Engineers are aware of an ongoing issue where a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 01:00 UTC on 27 Feb, engineers observed a number of Virtual Machines and Web App resources becoming inaccessible. Other Azure services that reported impact were: Azure Backup, Azure Search, Key Vault, RemoteApp, Stream Analytics, and Visual Studio Team Services. As of 03:10 UTC on 27 Feb, most Azure services have reported healthy, except RemoteApp and Azure Search. Preliminary investigation indicates that one of our mitigation processes may have unexpectedly caused a spike in latency, resulting in Azure services and resources becoming inaccessible. The process is complete, and the majority of customers should see improvement now. Engineers are continuing to monitor the system, and the next update will be provided in 60 minutes.

Last Update: A few months ago

Storage Latency, Web App and Virtual Machines - Australia East

Engineers are aware of an ongoing issue where a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 01:00 UTC on 27 Feb, engineers observed a number of Virtual Machines and Web App resources becoming inaccessible. Other Azure services that reported impact were: Azure Backup, Azure Search, Key Vault, RemoteApp, and Stream Analytics. As of 03:10 UTC on 27 Feb, Azure services have reported healthy. Preliminary investigation indicates that one of our mitigation processes may have unexpectedly caused a spike in latency, resulting in Virtual Machines becoming inaccessible. The process is complete, and customers should see improvement now. Engineers are continuing to monitor the system, and the next update will be provided in 60 minutes.

Last Update: A few months ago

Storage Latency, Web App and Virtual Machines - Australia East

Engineers are aware of an ongoing issue where a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 01:00 UTC on 27 Feb, engineers observed a number of Virtual Machines and Web App resources becoming inaccessible. Preliminary investigation indicates that one of our mitigation processes may have unexpectedly caused a spike in latency, resulting in Virtual Machines becoming inaccessible. The process is complete, and customers should see improvement now. Engineers are continuing to monitor the system, and the next update will be provided in 60 minutes.

Last Update: A few months ago

Storage Latency, Web App and Virtual Machines - Australia East

Engineers are aware that a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 00:30 UTC on 27 Feb, engineers observed a number of Virtual Machines becoming inaccessible. Preliminary investigation indicates that one of our mitigation processes may have unexpectedly caused a spike in latency, resulting in Virtual Machines becoming inaccessible. The process is now complete, and engineers are continuing to monitor the system. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage Latency and Virtual Machines - Australia East

Engineers are aware that a subset of customers using services with a dependency on Storage in Australia East may experience higher than expected latency. Starting at approximately 00:30 UTC on 27 Feb, engineers observed a number of Virtual Machines becoming inaccessible. Preliminary investigation indicates that our patching may have unexpectedly caused a spike in latency, resulting in Virtual Machines becoming inaccessible. The patching is now complete, and engineers are continuing to monitor the system. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage Latency and Virtual Machines - Australia East

Engineers are aware that a subset of customers using services with a dependency on Storage, such as Virtual Machines, in Australia East may experience higher than expected latency. If you are impacted by this issue, we ask you to log in to your Management Portal (https://manage.windowsazure.com), where we will be providing regular updates and self-mitigation options.

Last Update: A few months ago

Storage Latency and Virtual Machines - Australia East

Engineers are aware that a subset of customers using services with a dependency on Storage, such as Virtual Machines, in Australia East may experience higher than expected latency. If you believe you are impacted, we ask you to log in to your Management Portal (https://manage.windowsazure.com), where we will be providing regular updates and self-mitigation options.

Last Update: A few months ago

Storage Latency and Virtual Machines - Australia East

Engineers are aware of a subset of customers using services with a dependency on Storage, such as Virtual Machines, in Australia East who may experience higher than expected latency. If you believe you are impacted, we ask you to log in to your Management Portal (https://manage.windowsazure.com), where we will be providing regular updates and self-mitigation options.

Last Update: A few months ago

Storage - East US - Advisory

Starting at 19:33 UTC on 19 Feb 2016, customers using Stream Analytics in East US may experience latency when attempting to start a job. In addition, customers may encounter a message stating that their job is in a degraded state. Engineers are investigating the root cause. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US - Advisory

An alert for Storage in East US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage, Virtual Machines - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage and Virtual Machines in Australia East may experience intermittent timeouts or latency when trying to access their resources in this region. Impact to the Remote App service in Australia East has been fully mitigated as of 16:24 UTC on 19 Feb 2016. Engineers continue to deploy mitigation steps to a single Storage scale unit in Australia East. As the deployment continues, customers will observe improved service stability; however, they may encounter intermittent downstream impact to their Storage and Virtual Machines services in Australia East. The next update will be provided in 4 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or latency when trying to access their resources in this region. Engineers have identified a single Storage scale unit that may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Engineers are still in the process of deploying a full mitigation; however, most customers in the region will notice improvements in connectivity to their resources. The next update will be provided in 4 hours.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified a single Storage scale unit that may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Many customers will already have noticed improvements in connectivity to their resources. The next update will be provided in 4 hours.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified a single Storage scale unit that may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Many customers will have noticed improvements in connectivity to their resources. The next update will be provided in 4 hours.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified a single Storage scale unit that may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Many customers will have noticed improvements in connectivity to their resources. The next update will be provided in 4 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified a single Storage scale unit that may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Many customers will have noticed improvements in connectivity to their resources. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified that a single Storage scale unit may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Many customers will have noticed improvements in connectivity to their resources. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified that a single Storage scale unit may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified that a single Storage scale unit may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. Engineers are in the process of applying mitigation steps, and most customers should notice improvements in service availability. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage, Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, a subset of customers using Storage, Virtual Machines & Remote App in Australia East may experience timeouts or failures when trying to access their resources in this region. Engineers have identified that a single Storage scale unit may be experiencing issues, which is causing downstream impact for some Storage, Virtual Machines, and Remote App customers. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Virtual Machines and Remote App - Australia East - Advisory

Starting at 03:45 UTC on 18 Feb 2016, customers using Virtual Machines and Remote Apps in Australia East may experience timeouts or failures when trying to access their Virtual Machines or Remote Apps. Engineers have identified that a single Storage scale unit in the region may be experiencing issues. Consequently, some Azure services experiencing latency may be doing so due to this potential Storage issue. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - Central US - Partial Service Interruption

Starting at 04:00 UTC on 30 Jan 2016, a subset of customers may be experiencing issues with Storage and Virtual Machines resources hosted in Central US. Engineers have identified that a single Storage scale unit in the region is currently experiencing issues, and any Virtual Machines with a dependency on that Storage resource may also be encountering issues. Customers will likely experience errors, timeouts, or general service availability issues. Engineers have deployed a mitigation and are beginning to see improvements in availability. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Storage and Virtual Machines resources hosted in Central US. Engineers have identified that a single Storage scale unit in the region is currently experiencing issues, and any Virtual Machines with a dependency on that Storage resource may also be encountering issues. Customers will likely experience errors, timeouts, or general service availability issues. Our Engineering team has identified a number of unhealthy nodes within the scale unit and is working to restore availability. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Storage and Virtual Machines resources hosted in Central US. Engineers have identified that a single Storage scale unit in the region is currently experiencing issues, and any Virtual Machines with a dependency on that Storage resource may also be encountering issues. Customers will likely experience errors, timeouts, or general service availability issues. Our Engineering team has identified the resource that is the root cause of the issue and is currently examining options for mitigation. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Storage and Virtual Machines resources hosted in Central US. Engineers have identified that a single Storage scale unit in the region is currently experiencing issues, and any Virtual Machines with a dependency on that Storage resource may also be encountering issues. Customers will see errors, timeouts, or general service unavailability. Our Engineering team has identified the resource that is the root cause of the issue and is currently examining options for mitigation. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Advisory

Starting at 17:50 UTC on 23 Dec 2015, customers using Storage in West US may see an error when attempting to use Service Management Operations for storage accounts. Customers should now be able to create a new storage account, but may still see an error with other Service Management Operations. This issue is not impacting existing storage accounts. We are currently evaluating options to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - West US - Advisory

Starting at 17:50 UTC on 23 Dec 2015, customers using Storage in West US may see an error when attempting to create new storage accounts. We are currently evaluating options to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - Brazil South - Advisory

Starting at 00:00 UTC on 15 Dec 2015, customers using Storage in Brazil South may be experiencing intermittent latency when attempting to reach their Azure storage resources. Blob Storage in particular may be affected. Engineers are continuing to investigate and are actively working towards mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - Brazil South - Advisory

Starting at 00:00 UTC on 15 Dec 2015, customers using Storage in Brazil South may be experiencing intermittent latency when attempting to reach their Azure storage resources. Blob Storage in particular may be affected. Engineers are engaged and are actively working towards mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Extended recovery is underway for this incident. Starting on 09 Dec 2015 at approximately 00:00 UTC, a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. Some customers who appeared to be seeing improvement may see latency recur. During this time, customers may find connections to be latent, but they should succeed. The next update will be provided in 4 hours or as events warrant.
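
During advisories like this, where connections are latent but generally succeed, it helps to separate slow-but-successful requests from outright failures in client-side telemetry. A minimal, purely illustrative sketch (the helper name and SLO threshold below are assumptions, not an Azure API):

```python
def classify_latencies(latencies_ms, slo_ms=1000):
    """Split observed request latencies (milliseconds) into three buckets:
    fast (within the SLO), degraded (latent but successful), and failed
    (None means the request never completed). Returns the three counts."""
    fast = [l for l in latencies_ms if l is not None and l <= slo_ms]
    slow = [l for l in latencies_ms if l is not None and l > slo_ms]
    failed = [l for l in latencies_ms if l is None]
    return len(fast), len(slow), len(failed)

# Example: two fast requests, two latent-but-successful ones, one failure.
counts = classify_latencies([120, 2500, None, 900, 4100], slo_ms=1000)
```

A rising "degraded" count with a flat "failed" count matches the behavior described in this advisory.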

Last Update: A few months ago

Storage - West Europe - Advisory

Starting on 09 Dec 2015 at approximately 00:00 UTC a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. Many of the previously affected customers will have noticed improvements. During this time, customers may find connections to be latent, but should succeed. We have taken action to resolve this incident, and are now confirming that the incident is mitigated. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting on 09 Dec 2015 at approximately 00:00 UTC a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. Many of the previously affected customers will have noticed improvements. During this time, customers may find connections to be latent, but should succeed. We are currently evaluating options to restore full service. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting on 15 Dec 2015 at approximately 10:30 UTC a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. Many of the previously affected customers will have noticed improvements. During this time, customers may find connections to be latent, but should succeed. The next update will be provided in 2 hours or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting on 15 Dec 2015 at approximately 10:30 UTC a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. During this time, connections may be latent but should succeed. Engineers are still working to identify potential causes and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting on 15 Dec 2015 at approximately 10:30 UTC, a subset of customers using Storage in West Europe may be experiencing intermittent latency when attempting to reach their Azure storage resources. During this time, connections may be latent but should succeed. Engineers are still investigating potential causes and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting at 15 Dec 2015, 10:30 UTC a subset of customers using Storage in West Europe may experience intermittent latency when attempting to reach their Azure storage resources. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Engineers have identified a configuration issue and deployed a mitigation. Customers may now be able to access Storage services, and we are continuing to validate service health. The next update will be provided within 60 minutes or as events warrant.
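
Validating service health after a mitigation like this amounts to repeated probing and watching the success ratio recover. A hypothetical client-side version of the same idea, with a stand-in probe function rather than a real Storage call:

```python
def availability(probe, samples=20):
    """Run `probe` (a zero-argument callable returning True on success)
    `samples` times and return the observed success ratio."""
    ok = sum(1 for _ in range(samples) if probe())
    return ok / samples

# Simulated probe for illustration: every 4th request times out.
state = {"n": 0}
def probe():
    state["n"] += 1
    return state["n"] % 4 != 0

rate = availability(probe, samples=20)
```

In practice customers would substitute a real request against their own storage endpoint and track the ratio over successive windows.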

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Engineers have identified a configuration issue and deployed a mitigation. Customers may now be able to access Storage services, and we are continuing to validate service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Engineers have identified a configuration issue and deployed a mitigation. We are currently validating service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Engineers have identified a potential root cause and are working to restore service as quickly as possible. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Engineers have identified a potential root cause and are currently working through a number of steps to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Our Engineers are currently investigating the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Storage in West US may encounter errors or timeouts when attempting to access resources. Our Engineers are currently investigating the issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - West US - Advisory

An alert for Storage in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

We have validated that the Network Infrastructure issues experienced by customers in East US 2 are mitigated. Our Engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 30 minutes.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

Starting at approximately 16:40 UTC on 11 Nov 2015, engineers are investigating an issue with Network Infrastructure in East US 2. Engineering has identified 3 core routers that experienced a failure and is working on a mitigation strategy with the device vendor. At this time, customers will experience intermittent inability to access their services on roughly 5 to 8 minute cycles as a result of the impacted core routers. Services reporting impact as a result of dependencies on core Network Infrastructure and Storage are being updated in a post entitled “Services Experiencing Residual Impact”. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

Starting at approximately 16:40 UTC on 11 Nov 2015, engineers are investigating an issue with Network Infrastructure impacting Storage in East US 2. Customers may be experiencing latency, packet loss, and intermittent availability issues with their hosted services. Engineering has identified 3 core routers that experienced a failure and is working on a mitigation strategy. Services reporting impact as a result of dependencies on these core services are being updated in a post entitled “Services Experiencing Residual Impact”. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

Starting at approximately 16:40 UTC on 11 Nov 2015, engineers are investigating an issue with Network Infrastructure impacting Storage in East US 2. Services in the region that have Network and Storage dependencies may also be impacted; engineering is assessing impact to other services in tandem with their investigation of Network and Storage alerts. A list of services reporting impact as a result of observed latency will be posted shortly. Engineering has identified 3 core routers that experienced a failure. At present, latency levels are improving and customers should see improvement to service availability. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

Engineers are engaging on an emerging issue for Network Infrastructure impacting Storage in East US 2. Services in the region that have Network and Storage dependencies may also be impacted; engineering is assessing impact to other services in tandem with their investigation of Network and Storage alerts. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure and Storage - East US 2 - Partial Service Interruption

Engineers are investigating alerts for Network Infrastructure in East US 2. Additional information will be provided shortly.

Last Update: A few months ago

Storage - East US 2 - Advisory

Starting at 23:57 UTC on 04 Nov 2015, a very limited subset of customers using Storage in East US 2 may experience intermittent latency or timeout errors when attempting to reach storage resources/accounts. Retries should mostly succeed, although this may not be the case in every instance. Engineers have identified a potential mitigation and are currently developing a plan for deployment. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Advisory

Starting at 23:57 UTC on 04 Nov 2015, a very limited subset of customers using Storage in East US 2 may experience intermittent latency or timeout errors when attempting to reach storage resources/accounts. Retries should mostly succeed, although this may not be the case in every instance. Engineers have identified a potential root cause and are working towards a mitigation. The next update will be provided in 60 minutes.
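
Since this advisory notes that retries should mostly succeed, client code hitting intermittent latency of this kind typically wraps storage calls in retries with exponential backoff and jitter. A generic sketch of that pattern (not the Azure SDK's own built-in retry policy; the helper below is illustrative):

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=8.0,
                      sleep=time.sleep):
    """Retry a transient-failure-prone call with exponential backoff and jitter.

    `operation` is any zero-argument callable (e.g. a blob read wrapped in a
    lambda). Re-raises the last exception if every attempt fails.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff capped at max_delay, with jitter so many
            # clients do not retry in lockstep against a recovering service.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            sleep(delay * random.uniform(0.5, 1.0))

# Simulated transient failure: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient storage timeout")
    return "ok"

result = call_with_retries(flaky, sleep=lambda _: None)
```

The `sleep` parameter is injected so the backoff can be disabled in tests; production code would leave the default.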

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, Virtual Machines, Stream Analytics and ExpressRoute - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia may be experiencing impact to their Azure Services as a result of an ongoing Networking issue: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Engineers have also determined an impact to a subset of customers attempting to connect to their SQL Databases. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs and may be unable to create or start new jobs. Retries may succeed, but this will not always be the case. EXPRESSROUTE customers may find connections to some VNets are lost. REMOTEAPP customers will be unable to create new collections and may be unable to connect to existing collections. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. Engineers are engaged and assessing impact to other Azure Services as well as mitigation options. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, Virtual Machines, Stream Analytics and ExpressRoute - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia may be experiencing impact to their Azure Services as a result of an ongoing Networking issue: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Engineers have also determined an impact to a subset of customers attempting to connect to their SQL Databases. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs and may be unable to create or start new jobs. Retries may succeed, but this will not always be the case. EXPRESSROUTE customers may find connections to some VNets are lost. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. Engineers are engaged and assessing impact to other Azure Services as well as mitigation options. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, Virtual Machines and ExpressRoute - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia may be experiencing impact to their Azure Services as a result of an ongoing Networking issue: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Engineers have also determined an impact to a subset of customers attempting to connect to their SQL Databases. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs and may be unable to create or start new jobs. Retries may succeed, but this will not always be the case. EXPRESSROUTE customers may find connections to some VNets are lost. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. Engineers are engaged and assessing impact to other Azure Services as well as mitigation options. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, Virtual Machines and ExpressRoute - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Engineers have also determined an impact to a subset of customers attempting to connect to their SQL Databases. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs. Retries may succeed, but this will not always be the case. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. EXPRESSROUTE customers may find connections to some VNets are lost. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, Virtual Machines and ExpressRoute - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs. Retries may succeed, but this will not always be the case. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. EXPRESSROUTE customers may find connections to some VNets are lost. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, and Virtual Machines - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs. Retries may succeed, but this will not always be the case. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. EXPRESSROUTE customers may find connections to some VNets are lost. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, and Virtual Machines - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. STREAM ANALYTICS customers will experience failing jobs. Retries may succeed, but this will not always be the case. APP INSIGHTS customers may have experienced an interruption to their service. Engineers have reported that App Insights customers should now be mitigated. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, HDInsight, Media Services, Azure Automation, and Virtual Machines - Southeast Asia - Partial Service Interruption

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage and Virtual Machines - Southeast Asia and Multi-Region - Advisory

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks. Engineers are engaged and assessing impact to other Azure Services. The next update will be in 60 minutes or as events warrant.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage and Virtual Machines - Southeast Asia and Multi-Region - Advisory

Starting on 27 Oct 2015 at approximately 10:51 UTC, a subset of customers in Southeast Asia using various Azure Services may be impacted: SQL customers may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. HDINSIGHT customers may experience failures when attempting to create new HDInsight clusters. Existing HDInsight clusters are not currently impacted. MEDIA SERVICES customers may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services. WEBAPP customers may be experiencing 503 or timeout errors. VIRTUAL MACHINES customers will be unable to connect to their resources. SERVICE BUS customers will be unable to access their resources. AUTOMATION customers will be unable to submit automation tasks.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage and Virtual Machines - Southeast Asia and Multi-Region - Advisory

Starting at approximately 10:51 UTC on 27 Oct 2015, a subset of customers using SQL Database in Southeast Asia may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. A subset of customers using Web App in Southeast Asia may be experiencing 503 or timeout errors. A subset of customers in Southeast Asia using Media Services may experience stalling on their media encoding jobs. A subset of customers using HDInsight in Southeast Asia may experience failures when attempting to create new HDInsight clusters; existing HDInsight clusters are not currently impacted. Engineers are continuing to investigate the impact to the remaining services.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage and Virtual Machines - Southeast Asia and Multi-Region - Advisory

Starting at approximately 10:51 UTC on 27 Oct 2015, a subset of customers using SQL Database in Southeast Asia may experience issues performing service management operations. Server and database create, drop, rename, and change edition or performance tier operations may result in an error or timeout. Availability (connecting to and using existing databases) is not impacted. A subset of customers using Web App in Southeast Asia may be experiencing 503 or timeout errors. A subset of customers in Southeast Asia using Media Services may experience stalling on their media encoding jobs. Engineers are continuing to investigate the impact to the remaining services.

Last Update: A few months ago

App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage and Virtual Machines - Southeast Asia and Multi-Region - Advisory

Alerts for App Service \ Web App, Service Bus, SQL Database, Azure Resource Manager, Storage, and Virtual Machines in Southeast Asia are being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - East Asia - Partial Performance Degradation

Between approximately 30 Sep, 2015 20:19 UTC and 1 Oct, 2015 00:23 UTC, a subset of customers with Storage deployments in East Asia may have experienced intermittent latency or connection failures when attempting to reach Storage resources. Engineers will work on the residual impact to downstream services and perform full root cause analysis once the services have recovered. This incident is now closed.

Last Update: A few months ago

Storage - East Asia - Partial Performance Degradation

Between 30 Sep, 2015 20:19 UTC and 1 Oct, 2015 00:23 UTC a subset of customers with Storage deployments in East Asia may experience intermittent latency or connection failures when attempting to reach Storage resources. Engineers are currently validating recovery for this issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East Asia - Partial Performance Degradation

Starting at approximately 30 Sep, 2015 20:05 UTC a subset of customers with Storage deployments in East Asia may experience intermittent latency or connection failures when attempting to reach storage resources. Engineers are beginning to see services restored. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East Asia - Partial Performance Degradation

Starting at approximately 30 Sep, 2015 20:19 UTC a subset of customers with Storage deployments in East Asia may experience intermittent latency or connection failures when attempting to reach storage resources. Engineers are seeing services being restored. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East Asia - Advisory

Starting at approximately 30 Sep, 2015 20:19 UTC a subset of customers with Storage deployments in East Asia may experience intermittent latency or connection failures when attempting to reach storage resources. Engineers are seeing services being restored. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East Asia - Advisory

Starting at approximately 30 Sep, 2015 20:19 UTC a subset of customers using Storage in East Asia may experience intermittent latency or connection errors when attempting to reach storage resources. Engineers are actively engaged and working towards mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East Asia - Advisory

An alert for Storage in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - West Europe - Partial Performance Degradation

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Storage deployments or dependencies in West Europe may encounter connection failures to their resources. Engineers observed spikes in ingress and egress data, and believe there may be an underlying load-balancing issue that is causing downstream impact to other Azure services. Impact to Storage currently appears to be recovering. An update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Partial Performance Degradation

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Storage deployments or dependencies in West Europe may encounter connection failures to their resources. Engineers are seeing spikes in ingress and egress data, and believe there may be an underlying load-balancing issue that is causing downstream impact to other Azure services. An update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Storage deployments or dependencies in West Europe may encounter connection failures to their resources. Engineers are seeing spikes in ingress and egress data, and believe there may be an underlying load-balancing issue that is causing downstream impact to other Azure services. An update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting from 29 Sep, 2015 at approx. 13:02 UTC, a subset of Storage customers in West Europe may encounter latency or connection failures to their resources. Engineers are currently engaged and investigating the issue. An update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - East US 2 - Higher than normal latency due to Network Infrastructure interruption

Due to an on-going Network Infrastructure issue a subset of customers using Virtual Machines, Storage and Cloud Services in East US 2 may experience higher than normal latency. Please refer to the Network Infrastructure communication for updates on this issue.

Last Update: A few months ago

Storage - South Central US - Advisory

Starting at approximately 9 Sept, 2015 12:25 UTC a subset of customers using Storage in South Central US may experience disk latency or transactional latency when connecting to backend Azure resources. We have taken action to resolve this incident, and are now confirming that the incident is mitigated.

Last Update: A few months ago

Storage - South Central US - Advisory

Starting at approximately 9 Sept, 2015 12:25 UTC a subset of customers using Storage in South Central US may experience disk latency or transactional latency when connecting to backend Azure resources. Engineers are engaged and are currently investigating. An update will be provided in 60 minutes.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting 9th Sep, 2015 at 7:49 UTC, some customers leveraging Storage resources in West Europe may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers are currently deploying a mitigation for this issue. As the deployments occur, some improvement in service should be noted. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting 9th Sep, 2015 at 7:49 UTC, some customers leveraging Storage resources in West Europe may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers have identified a plan for mitigation which is currently being deployed. As the deployments occur, some improvement in service may be noted. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting 9th Sep, 2015 at 7:49 UTC, some customers leveraging Storage resources in West Europe may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers are engaged and are continuing to investigate potential mitigations. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting 9th Sep, 2015 at 7:49 UTC, some customers leveraging Storage resources in West Europe may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers are engaged and are currently investigating. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

An alert for Storage in West Europe is being investigated. More information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

Starting 8th Sep, 2015 at 17:53 UTC, some customers leveraging Storage resources in South Central US may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers are currently validating the results of their mitigation and impacted customers should begin to see improved disk latency levels. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

Starting 8th Sep, 2015 at 17:53 UTC, some customers leveraging Storage resources in South Central US may be experiencing disk latency or transactional latency when connecting to backend Storage. Engineers have identified and are implementing their mitigation strategy. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

Engineers are engaging on an emerging issue where some customers in South Central US may be experiencing disk latency or transactional latency to Storage backend resources. An update will be provided in 30 minutes with additional details.

Last Update: A few months ago

Storage - South Central US - Advisory

An alert for Storage in South Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines and Storage - - Advisory

Users attempting service management functions across multiple services may encounter errors. Engineers are currently engaged and investigating. Existing services are not currently impacted.

Last Update: A few months ago

Virtual Machines and Storage - - Advisory

Alerts are currently being investigated for Virtual Machines and Storage across multiple regions. An update will be provided within 60 minutes.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:30 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access services, or issues when attempting to perform service management (move/add/change) actions. Engineers have deployed a mitigation to impacted Storage clusters and customers may begin to see their services recover. We are continuing to work to validate service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:35 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access services, or issues when attempting to perform service management (move/add/change) actions. Engineers have deployed a mitigation to impacted Storage clusters and customers may begin to see their services recover. We are continuing to work to validate service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:35 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access services, or issues when attempting to perform service management (move/add/change) actions. Our engineering teams have identified a potential root cause and are currently deploying a mitigation and validating service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:35 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access services. Our engineering teams have identified a potential root cause and are currently deploying a mitigation and validating service health. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:35 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access services. Engineering teams are systematically investigating all potential root causes. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Partial Service Interruption

Starting at approximately 18:35 UTC on 24 Jul 2015, a subset of customers using Virtual Machines and Storage in East US 2 may encounter errors or timeouts when attempting to access resources. Engineering teams are currently investigating a number of alerts to identify the root cause. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage and Virtual Machines - East US 2 - Advisory (Limited Impact)

An alert for Storage and Virtual Machines in East US 2 is being investigated. A subset of customers may be impacted. More information will be provided as it is known.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting at 20 Jun, 2015 05:02 UTC, a subset of customers using Storage in West Europe may be experiencing issues when accessing their data. Engineers remain engaged and are working towards mitigation. Many customers should already have seen access to their data restored. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

Starting at 20 Jun, 2015 05:02 UTC, a subset of customers using Storage in West Europe may experience issues when accessing their data. Engineers are engaged, and the next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Storage - West Europe - Advisory

An alert for Storage in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - East US 2 - Partial Performance Degradation

Starting at approximately 28 May, 2015 21:55 UTC a subset of customers using Storage in East US 2 may experience latency when trying to access storage resources. Engineers are actively investigating and evaluating options to restore service. The next update will be provided in 2 hours.

Last Update: A few months ago

Storage - East US 2 - Partial Performance Degradation

Starting at approximately 28 May, 2015 21:55 UTC a subset of customers using Storage in East US 2 may experience latency when trying to access storage resources. Engineers are actively evaluating options to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Partial Performance Degradation

Starting at approximately 28 May, 2015 21:55 UTC a subset of customers using Storage in East US 2 may experience latency when trying to access storage resources. Engineers are actively engaged, and investigating root cause and options to mitigate. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Partial Performance Degradation

Starting at approximately 28 May, 2015 21:55 UTC a subset of customers using Storage in East US 2 may experience latency when trying to access storage resources. Engineers are actively engaged and investigating root cause. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Advisory

Starting at approximately 28 May, 2015 21:55 UTC a subset of customers using Storage in East US 2 may experience latency when trying to access storage resources. Engineers are actively engaged and investigating root cause. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Advisory

An alert for Storage in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - East US 2 - Advisory

An alert for Storage in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage, Virtual Machines and SQL Database - West Europe and North Europe - Advisory

An alert for Virtual Machines, Storage and SQL Database in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage, Virtual Machines and SQL Database - West Europe and North Europe - Advisory

An alert for SQL Database, Storage and Virtual Machines in North Europe and West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage, Virtual Machines and SQL Database - West Europe and North Europe - Advisory

An alert for Storage, Virtual Machines and SQL Database in West Europe and North Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - East US 2 - Advisory

An alert for Storage in East US 2 is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - East US 2 - Partial Service Interruption

Starting at 19:52 UTC on 28 Apr 2015, a subset of customers using Storage in East US 2 may encounter errors when attempting to access services due to an interruption to our network infrastructure. Our monitoring is now showing improvements to availability, and our engineering teams are working to validate the current service health status. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Partial Service Interruption

Starting at 19:52 UTC on 28 Apr 2015, a subset of customers using Storage in East US 2 may encounter errors when attempting to access services due to an interruption to our network infrastructure. We are currently investigating a number of leads to identify the root cause and restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Partial Service Interruption

Starting at 19:52 UTC on 28 Apr 2015, a subset of customers using Storage in East US 2 may encounter errors when attempting to access services due to an interruption to our network infrastructure. We are currently investigating a number of leads to identify the root cause and restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - East US 2 - Advisory (Limited Impact)

An alert for Storage in East US 2 is being investigated. A subset of customers may be impacted. More information will be provided as it is known.

Last Update: A few months ago

Storage - South Central US - Advisory

Starting at 15 Apr, 2015 18:45 UTC, engineers are investigating an issue in which a small subset of Storage customers in South Central US may experience intermittent connectivity issues to their Storage accounts. Retries will mostly succeed. Engineers are investigating a potential network root cause. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - South Central US - Advisory

An alert for Storage in South Central US is being investigated. Engineers are investigating alerts in which a very small subset of Storage customers in South Central US may experience intermittent connectivity issues to their Storage accounts. Retries will mostly succeed. More information will be provided as it is known.

Last Update: A few months ago

Storage and Virtual Machines - West US - Partial Service Interruption

Starting 15th Apr, 2015 at 05:01 UTC, a limited subset of customers with Storage and Virtual Machines deployments in West US may experience intermittent timeouts and connectivity errors when attempting to access their Services. Engineers are currently deploying a fix to the impacted Storage cluster. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US - Partial Service Interruption

Starting 15th Apr, 2015 at 05:01 UTC, a limited subset of customers with Storage and Virtual Machines deployments in West US may experience intermittent timeouts and connectivity errors when attempting to access their Services. Engineers are in the process of deploying a fix to the impacted Storage cluster. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US - Partial Service Interruption

Starting 15th Apr, 2015 at 05:01 UTC, a limited subset of customers with Storage and Virtual Machines deployments in West US may experience intermittent timeouts and connectivity errors when attempting to access their Services. Engineers are continuing their investigation of the issue and are forming a mitigation strategy. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US - Partial Service Interruption

Starting 15th Apr, 2015 at 05:01 UTC, a limited subset of customers with Storage and Virtual Machines deployments in West US may experience intermittent timeouts and connectivity errors when attempting to access their Services. Engineers are engaged and are investigating the issue. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage and Virtual Machines - West US - Advisory

An alert for Storage and Virtual Machines in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Azure Services - North Central US - Partial Service Interruption

Starting at 11:14, a subset of customers using Network Infrastructure in North Central US may experience intermittent inability to connect to their service resources. Impacted services include: Storage, Compute, Web App, Visual Studio Online, SQL, Service Bus, and Azure Search. Our engineers are engaged and investigating root cause and mitigation options.

Last Update: A few months ago

Storage - North Central US - Partial Service Interruption

Starting at 18:20 UTC on 8 April, 2015 a subset of customers using Storage in North Central US may be unable to connect to their Storage resources. Our engineers are engaged and investigating root cause and mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Storage - North Central US - Advisory

An alert for Storage in North Central US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Storage - West US - Advisory

An alert for Storage in West US is being investigated. More information will be provided as it is known.

Last Update: A few months ago

© 2019 - Cloudstatus built by jameskenny