Windows Azure Redis Cache Status

Network - South Central US - Applying Mitigation and Validating Recovery

Starting at approximately 03:30 UTC on 04 Dec 2018, customers in South Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network event in this region. Engineers are applying mitigation and validating recovery. Some customers may be seeing signs of improvement at this time. Impacted services will be listed on the Azure Status Health Dashboard. The next update will be provided in 60 minutes, or as events warrant.
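
Because each update directs customers to the Azure Status Health Dashboard, a small watcher script can help spot new entries for a region of interest. This is a hypothetical sketch only: the feed URL below is an assumption (verify the actual address of the status feed you rely on), and it uses the third-party feedparser package.

```python
import time

import feedparser  # third-party: pip install feedparser

STATUS_FEED = "https://azure.microsoft.com/en-us/status/feed/"  # assumed URL; confirm before relying on it
REGION = "South Central US"

def region_entries():
    """Return feed entries whose title or summary mentions the region of interest."""
    feed = feedparser.parse(STATUS_FEED)
    return [
        entry for entry in feed.entries
        if REGION.lower() in (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    ]

if __name__ == "__main__":
    while True:
        for entry in region_entries():
            print(entry.get("published", "unknown time"), "-", entry.get("title", ""))
        time.sleep(300)  # re-check every five minutes; adjust to taste
```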

Last Update: A few months ago

Azure Service Availability - France Central

Starting at 13:57 UTC on 16 Oct 2018, customers using a subset of resources in France Central may experience difficulties connecting to these resources. Engineers have identified a localized infrastructure event which caused a number of storage and virtual machine resources to experience drops in availability. Service teams have recovered the majority of impacted compute and storage resources. Customers may begin to see resources return to a healthy state. The next update will be provided in 30 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - South Central US - Investigating

Starting at 09:29 UTC on 04 Sep 2018, a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. At this time, engineers are investigating an issue with cooling in one part of the data center which caused a localized spike in temperature. Automated data center procedures to ensure data and hardware integrity went into effect when temperatures hit a specified threshold, and critical hardware entered a structured power down process. The impact to the cooling system has been isolated and engineers are actively working to restore services. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple Services - South Central US

Starting at 19:40 UTC on 25 Jun 2018, customers in South Central US may experience trouble connecting to resources hosted in this region. Customers may also have experienced Virtual Machines rebooting unexpectedly. Engineers are currently investigating. Impacted services include: Storage, Virtual Machines, Key Vault, Site Recovery, Machine Learning, Cloud Shell, Logic Apps, Redis Cache, App Service (Linux) and App Service. More information will be provided as soon as it is available. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:44 UTC on 19 Jun 2018, a subset of customers using Virtual Machines, Storage, Key Vault, App Service, Site Recovery, Automation, Service Bus, Event Hubs, Data Factory, Backup, Log Analytics, Application Insights, Azure Batch, Azure Search, Redis Cache or Logic Apps in North Europe may experience connection failures when trying to access resources hosted in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - Service Management Issues - USGov Virginia - Applying Mitigation

Starting at 03:00 EST on 30 May 2018, a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineers have identified the preliminary root cause and are applying mitigation. It is recommended not to perform any service management requests, as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - Service Management Failures - USGov Virginia

Starting at 03:00 EST on 30 May 2018, a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause and recommend not performing any service management requests, as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - USGov Virginia

Starting at 03:00 EST on 30 May 2018, a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause and recommend not performing any service management requests, as Virtual Machines will continue to run if no changes are made. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - USGov Virginia

Starting at 03:00 EST on 30 May 2018, a subset of customers using Virtual Machines in USGov Virginia may be unable to manage some Virtual Machines hosted in the region. Restart attempts may fail or machines may appear to be stuck in a starting state. Other dependent Azure services may experience downstream impact. These services include Media Services, Redis Cache, Log Analytics, and Virtual Networks. Engineering teams are engaged to investigate the underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018, a limited subset of customers dependent on a storage service in West US may experience latency or failures connecting to certain resources. In addition to the storage service, impacted services which leverage it include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are actively investigating the impacted storage service and developing mitigation options. The next update will be provided in 60 minutes, or as events warrant.
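
For client applications hit by intermittent storage latency or connection failures like those described above, a generic retry with exponential backoff is a common interim mitigation while an incident is in progress. This is a minimal, Azure-agnostic sketch; the wrapped operation is a placeholder for any call into an affected service.

```python
import random
import time

def with_backoff(operation, attempts=5, base_delay=1.0, max_delay=30.0):
    """Run `operation`, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception as exc:  # in real code, narrow this to the SDK's transient error types
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            delay = min(max_delay, base_delay * (2 ** attempt)) + random.uniform(0, 0.5)
            print(f"transient failure ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Hypothetical usage: wrap any call into an impacted service, for example
# result = with_backoff(lambda: blob_client.download_blob().readall())
```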

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018, a subset of customers in West US may experience latency or difficulties connecting to certain resources. Impacted services include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are aware of this issue and are actively investigating a potential underlying Storage issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines - North Europe and West Europe

Starting at 09:21 UTC on 10 Nov 2017 a subset of customers using Virtual Machines and Redis Cache in North Europe and West Europe may experience difficulties connecting to resources hosted in these regions. Engineers have identified a possible fix, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are also aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers continue investigating possible underlying causes, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US and South Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are currently investigating previous updates and deployments to the region along with other possible network level issues, and are taking additional steps to mitigate impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Stream Analytics, HDInsight, Data Factory and Azure Scheduler. Media Services, Application Insights, Azure Search and Azure Site Recovery are reporting recovery. Engineers are seeing signs of recovery and are continuing to recover the remaining unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are seeing signs of recovery and are continuing to recover unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are continuing to recover unhealthy storage machines in order to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Microsoft Intune, Application Insights, Azure Functions, Stream Analytics and Media Services. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, Redis Cache, Logic Apps, Azure Analysis Services, and Azure Resource Manager. Engineers have verified that a majority of impacted services are mitigated and are conducting final steps of validation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, and Redis Cache. Mitigation has been applied and our monitoring system has started showing recovery. Engineers continue to validate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps; Application Insights; Azure Search (seeing signs of recovery); Backup; Event Hubs (seeing signs of recovery); Log Analytics; Redis Cache (seeing signs of recovery); Stream Analytics; SQL Database; and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps; Application Insights; Azure Search (seeing signs of recovery); Backup; Event Hubs; Redis Cache (seeing signs of recovery); Stream Analytics; SQL Database; and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps; Application Insights; Backup; Event Hubs; Redis Cache; Stream Analytics; SQL Database; and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact. Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.
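
For the intermittent Redis Cache timeouts described above, explicit client-side timeouts and retry-on-timeout keep callers from hanging while the region is degraded. A sketch follows, assuming the redis-py client; the host name and access key are placeholders, not values from this incident.

```python
import redis  # third-party: pip install redis

cache = redis.Redis(
    host="example.redis.cache.windows.net",  # placeholder cache host name
    port=6380,
    ssl=True,
    password="<access-key>",        # placeholder credential
    socket_connect_timeout=5,       # fail fast if the endpoint is unreachable
    socket_timeout=5,               # bound each operation instead of hanging
    retry_on_timeout=True,          # retry a command that times out once
)

try:
    cache.set("healthcheck", "ok", ex=60)
    print(cache.get("healthcheck"))
except redis.exceptions.ConnectionError as exc:
    # Fall back to the origin data store while the cache endpoint is degraded.
    print(f"Redis unavailable, using fallback path: {exc}")
```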

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact. Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Additional Impacted Services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact. Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing Redis Cache resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

SUMMARY OF IMPACT: Between 01:50 and 02:38 UTC on 11 Jun 2017, a Network Infrastructure issue occurred in Australia East. Customers may have experienced degraded performance, network drops, or timeouts when accessing their Azure resources hosted in this region. Engineers have confirmed that customers using Virtual Machines, App Services, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East were impacted. A subset of services may have encountered a delayed mitigation; all services are confirmed to be mitigated at this point. PRELIMINARY ROOT CAUSE: Engineers determined that a deployment resulted in Virtual IP ranges being incorrectly advertised. MITIGATION: Engineers disabled route advertisements on the newly deployed instances that were incorrectly programmed. NEXT STEPS: A comprehensive root cause analysis report will be published in approximately 72 hours.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

Starting at 01:39 UTC on 11 Jun 2017 monitoring alerts were triggered for Network Infrastructure in Australia East. Customers may experience degraded performance, network drops, or timeouts when accessing their Azure resources hosted in this region; however, engineers are beginning to see signs of mitigation. Engineers have determined that this is caused by an underlying Network Infrastructure event in this region which is currently under investigation. Engineers have confirmed that customers using Virtual Machines, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East may be experiencing impact. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

SUMMARY OF IMPACT: Between 13:50 and 19:50 UTC on 31 Mar 2017, a subset of customers in Japan East experienced Virtual Machine reboots, degraded performance, or connection failures when accessing their Azure resources hosted in the Japan East region. Engineers have confirmed the following services are healthy: Redis Cache, Service Bus, Azure SQL Database, Event Hubs, Automation, Stream Analytics, Document DB, Data Factory / Data Movement, Azure Monitor, Media Services, Logic Apps, Azure IoT Hub, API Management, Azure Resource Manager, Azure Machine Learning. NEXT STEPS: A detailed root cause analysis report will be provided in approximately 72 hours.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\Web Apps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, Azure Monitor, and Azure Automation. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\Web Apps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, Azure IoT Hub, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services | Japan East

Starting at 13:50 UTC on 31 Mar 2017, a subset of customers with resources which leverage Storage in Japan East may experience latency or connection issues. Impacted services include App Service\Web Apps, Virtual Machines, Azure SQL DB, Azure Cache, Service Bus, Cloud Services, Stream Analytics, Event Hubs, Backup, DocumentDB, StorSimple, Site Recovery, Key Vault, Data Factory, Azure Container Service, HDInsight, Media Services, API Management, Logic Apps, Redis Cache, and Azure Monitor. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

SUMMARY OF IMPACT: Between 18:04 and 21:16 UTC on 27 Mar 2017, a subset of customers in Japan West may have experienced degraded performance, network drops or timeouts when accessing their Azure resources hosted in this region. PRELIMINARY ROOT CAUSE: A storage scale unit being added to the Japan West region announced routes that blocked some network connectivity between two datacenters in the region. VMs and services dependent on that connectivity would have experienced restarts or failed connections. Unfortunately, automated recovery did not mitigate the issue. The manual health checks that are conducted around all new cluster additions were performed, but did not detect a problem. This led to a delay in correct root cause analysis and mitigation. MITIGATION: Engineers isolated the newly deployed scale unit, which mitigated the issue. NEXT STEPS: Investigations are currently in progress to determine exactly how incorrect routing information was configured into the storage scale unit being added and how that incorrect information escaped the many layers of validations designed to prevent such issues. A full detailed Root Cause Analysis will be published in approximately 72 hours.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, due to the networking infrastructure issue the following services are impacted: App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts.
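
For the HTTP 503 responses and latency described above for App Service \ Web Apps, a client can retry idempotent requests with backoff rather than failing on the first error. This is a hedged sketch using the requests library with urllib3's Retry; the endpoint URL is a placeholder, and the parameter names follow recent urllib3 releases (older versions use method_whitelist instead of allowed_methods).

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=5,
    backoff_factor=1.0,                      # roughly 1s, 2s, 4s, ... between attempts
    status_forcelist=[500, 502, 503, 504],   # retry the transient server errors seen in this incident
    allowed_methods=["GET", "HEAD"],         # only retry idempotent requests
)
session.mount("https://", HTTPAdapter(max_retries=retries))

# Placeholder endpoint standing in for an affected App Service deployment.
response = session.get("https://example-app.azurewebsites.net/api/health", timeout=10)
print(response.status_code)
```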

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts. Engineers are continuing to investigate the underlying cause and are applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. Engineers are continuing to investigate the underlying cause and have begun applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience high latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Engineers are investigating the underlying cause of the issue and working on mitigation paths. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Between 21:50 UTC on 15 Mar 2017 and 04:54 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers using Storage and services depending on Storage may have experienced difficulties accessing their resources in the region. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Between 21:50 UTC on 15 Mar 2017 and 04:54 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers using Storage and services depending on Storage may have experienced difficulties accessing their resources in the region. Engineering has confirmed that Azure Logic Apps and Azure SQL Database have now recovered. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to the following services: Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to the following services: Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Summary of impact: Between 12:42 and 14:38 UTC on 08 Mar 2017, a subset of customers using App Service \ Web Apps, Redis Cache, StorSimple, Logic Apps, Stream Analytics and IoT Hub in Japan East may have experienced difficulties connecting to resources hosted in this region related to a Storage incident in this region. Full resolution will be provided once the Storage issue is fully mitigated.

Last Update: A few months ago

Multiple Services - Japan East

Between 12:42 and 14:38 UTC on 08 Mar 2017, customers leveraging Storage-dependent services in Japan East may have experienced issues accessing some of their services in this region. The Storage incident is mitigated, but some of the services are still in a recovery phase. These services include: App Service \ Web Apps, Redis Cache, StorSimple, Logic Apps, SQL Database, Stream Analytics and IoT Hub. Engineers are continuing to monitor the recovery progress and the next update will be provided within 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, as a result of a Storage incident in Japan East, a subset of customers using App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics, Backup, IoT Hub and Cloud Services in Japan East may experience issues accessing their services in this region. Engineers have applied a mitigation and some services should be seeing improvements in availability. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, as a result of a Storage incident in Japan East, a subset of customers using App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics and Cloud Services in Japan East may experience issues accessing their services in this region. Engineers are actively investigating this issue, and the next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - Japan East

Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HDInsight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, Cloud Services, Storage, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and Azure DevTest Labs, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, engineers have finished a structured restart of the scale units involved and are working to restore the remaining impacted services. A majority of customers will see signs of recovery for their services; the following services have confirmed mitigation: SQL Database, App Service \ Web Apps, Azure IoT Hub, Service Bus, Event Hub, DocumentDB, Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as soon as new information is available.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, DocumentDB, Azure DevTest Labs, Service Bus, and Event Hub, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services; the following services have confirmed mitigation: Azure Scheduler and Logic Apps. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to an infrastructure monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The monitoring alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. Some impacted customers may already be seeing recovery for their services. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been mitigated, and engineers are continuing to undertake a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified that an automated shutdown of a small number of Compute and Storage scale units occurred in response to a monitoring alert. The alert has been investigated, and engineers are currently undertaking a structured restart of the scale units involved. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, Azure Monitor, Activity Logs, Redis Cache, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - West US 2

Starting at 09:50 UTC on 19 Feb 2017, a subset of customers in West US 2 using Virtual Machines, SQL Database, Cloud Services, Storage, App Service \ Web Apps, Azure IoT Hub, Backup, Site Recovery, and DocumentDB, as well as services with dependencies on these, may experience issues accessing their services in this region. Engineers have identified the underlying cause, and are actively working to mitigate the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully recovered.
Media Services: Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully recovered.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Fully recovered.
Managed Cache and Redis Cache: Fully recovered.
Azure Backup: Fully recovered.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Suite (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Fully recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:
IoT Hub (only in East Asia): Fully mitigated. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDI clusters. Existing services are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or Portal.
SQL Database: Customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: Fully mitigated. A subset of Azure Backup users with vaults in East Asia may encounter operation failures.
The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state, including Key Vault, Service Bus, Site Recovery, Azure Backup, and IoT Suite (only in East Asia). Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit:

IoT Hub (only in East Asia): Validating mitigation. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations.
Media Services: Validating recovery. Customers may experience timeouts.
HDInsight: Customers may experience errors when creating new HDInsight clusters. Existing clusters are not impacted.
Site Recovery: Recovered at 20:00 UTC. Customers may have experienced operation failures.
API Management: Customers may experience service management operation errors via API calls or the Portal.
SQL Database: Customers may experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities.
Virtual Machines: Customers may experience issues accessing their VMs, or may experience a reboot of an affected VM.
App Service \ Web Apps: Customers may experience timeouts or 500 errors.
Service Bus: Fully mitigated. Customers may have experienced issues accessing their resources in the region.
Event Hub: Customers may experience issues when accessing their resources.
Stream Analytics: Customers may experience timeouts when accessing their resources.
Managed Cache and Redis Cache: Customers may be unable to access their services.
Azure Backup: A subset of Azure Backup users with vaults in East Asia may encounter operation failures.

The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Suite (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Suite customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing the following services leveraging this scale unit: IoT Hub (only in East Asia), Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and Key Vault. SQL customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Site Recovery, Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Recovery] - Multiple services impacted by underlying Storage incident - East Asia

Engineers have begun mitigation and are seeing some services return to a healthy state. Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, Key Vault, and IoT Hub (only in the East Asia region). SQL customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Service Bus and Key Vault have been recovered. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, Azure Backup, and IoT Hub. SQL customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

Last Update: A few months ago

[Mitigation in progress] - Multiple services impacted by underlying Storage incident - East Asia

Starting at 09:16 UTC on 02 Oct 2016, a subset of customers may be impacted by an issue with a storage scale unit in East Asia. Customers may also experience issues accessing services leveraging this scale unit, including Media Services, HDInsight, Site Recovery, API Management, SQL DB, Virtual Machines, Virtual Network, App Service \ Web Apps, Service Bus, Event Hub, Stream Analytics, Managed Cache, Redis Cache, and IoT Hub. SQL customers may also experience login failures, or issues with Service Management (Create, Rename, Delete, etc.). Azure SQL Database customers who have configured active geo-replication can minimize downtime by performing a failover to a geo-secondary. Please visit https://aka.ms/sql-business-continuity for more information on these capabilities. Virtual Machine customers may experience issues accessing their VMs, or may experience a reboot of an affected VM. Site Recovery customers may see degradation with replication performance and timeouts. HDInsight customers may experience the inability to create new clusters, but existing clusters are not impacted. Web Apps customers may experience 503 errors when trying to access their apps. IoT Hub customers may experience failures with CRUD (Create, Read, Update, Delete) operations. Key Vault and Media Services have been fully recovered. Engineers have begun mitigation and are seeing some services return to a healthy state. The next update will be provided as soon as more information is available.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

We have validated that the issues experienced by customers using App Service \ Web Apps, Visual Studio Team Services, Virtual Machines, SQL Database, Service Bus, Redis Cache, Media Services, HDInsight, DocumentDB, Data Catalog, Cloud Services, Azure Search and Automation in North Europe and West Europe are mitigated. Our Engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 30 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in these regions are impacted. These services are showing as recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, and Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HD Insight, Redis Cache, Azure Search, Log Analytics, and Document DB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in these regions are impacted. These services are showing as recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, and Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HD Insight, Redis Cache, Azure Search, Log Analytics, and Document DB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include: Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HD Insight, Redis Cache, Service Bus, Azure Search, and Document DB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include: Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HD Insight, Redis Cache, Service Bus, Azure Search, and Document DB. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps and SQL Database in West Europe, and Virtual Machines, Cloud Services, HD Insight, Redis Cache, Service Bus, Azure Search, Document DB and Data Factory, may experience degraded availability when accessing their resources. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps and SQL Database in West Europe, and Virtual Machines, Cloud Services, HD Insight, Redis Cache, Service Bus, Azure Search and Document DB in North Europe and West Europe, will experience degraded availability when accessing their resources. Engineers are currently investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience intermittent inability to connect or access their service resources including: App Services / Web App, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. The underlying network infrastructure event has been mitigated as of 01:24 UTC on 9th June, 2016. Azure Machine Learning is fully mitigated as of 01:50 UTC on 9th June, 2016. Other impacted services continue to recover and customers will continue to see improved service availability as engineers continue to mitigate residual impacts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience intermittent inability to connect or access their service resources including: App Services / Web App, Machine Learning, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. The underlying network infrastructure event has been mitigated as of 01:24 UTC on 9th June, 2016. Impacted services are recovering and customers will see improved service availability as engineers continue to mitigate residual impacts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience an inability to connect or access their service resources including: App Services / Web App, Machine Learning, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. Engineers are investigating a Network Infrastructure event in West Europe. The next update will be provided in 60 minutes or as new information is available.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, has recovered, and all services, except a limited subset of Web Apps, have reported recovery as well. The next update will be provided in 2 hours or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are actively mitigating the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers are showing signs of recovery and some customers may continue to experience errors attempting to connect to resources. Web Apps customers are showing signs of recovery and a subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Managed Cache, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Redis Cache, Service Bus, SQL Databases, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics, SQL Databases, Storage, HDInsight.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team is actively mitigating the issue and, once healthy, the impacted services (below) will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases on only the Basic or Standard Tiers may experience failures when attempting to connect to databases and when attempting to login to their databases; Premium Tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases on only the Basic or Standard Tiers may experience failures when attempting to connect to databases and when attempting to login to their databases; Premium Tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps customers may experience failures connecting to or using their services.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, Redis Cache and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to login to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this to be due to a fiber cut on a 3rd party network provider's infrastructure. Engineers from the 3rd party network provider have repaired the issue. Azure Services are starting to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this to be due to a fiber cut on a 3rd party network provider's infrastructure, which has been repaired. Azure Services are beginning to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 on 24 Mar 2016 UTC a subset of customers using services including Resource Manager, Automation, Alerts, Key Vault, HD Insight, Application Insights, Data Lake, Storage, Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. We are starting to see improvements in service availability, and customers should begin to see recovery. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using services including Automation, Alerts, Key Vault, HD Insight, Application Insights, Data Lake, Storage, Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Automation, Alerts, Key Vault, HD Insight, Application Insights, Data Lake, Storage, Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Network Infrastructure service interruption. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using HD Insight, Application Insights, Data Lake, Storage, Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Web App and Redis Cache - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Storage, Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Web App and Redis Cache - East US 2 - Partial Service Interruption

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Virtual Machines, Web App and Redis Cache in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - East US 2 - Advisory

Starting at 20:40 UTC on 24 Mar 2016, a subset of customers using Virtual Machines in East US 2 may be encountering issues due to an ongoing Storage outage. Further information will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying Networking issue, and the majority of affected customers should observe recovery now. Please refer to the History page for the preliminary report on the Networking incident. All impacted Azure Services have also reported as restored, except App Services and HDInsight. Engineers continue to recover the remaining affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our engineering teams have mitigated the underlying Networking issue, and the majority of affected customers should observe recovery now. Engineers continue to recover the remaining affected VMs. Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog. Our engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, and Azure Scheduler.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 20:28 UTC on 16 Mar 2016, our engineers identified a network-related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insight, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, and Visual Studio Team Services.

Last Update: A few months ago

HD Insight, Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with HD Insight, Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an ongoing Storage issue. Engineers have deployed a mitigation for the Storage issue, and customers may begin to see improvements. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an ongoing Storage issue. Engineers have deployed a mitigation for the Storage issue, and customers may begin to see improvements. The next update will be provided in 60 minutes.

Last Update: A few months ago

Event Hubs, App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Event Hubs, Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an ongoing Storage issue. Engineers are currently working to mitigate the Storage issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

App Service \ Web App, Logic App, Media Services, Key Vault and Redis Cache - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with Media Services, App Service \ Logic App, App Service \ Web App, Key Vault and Redis Cache in Central US due to an ongoing Storage issue. Engineers are currently working to mitigate the Storage issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

App Service \ Web App, Logic App and Media Services - Central US - Partial Service Interruption

Starting on 30 Jan 2016 at 04:00 UTC, a subset of customers may be experiencing issues with App Service \ Web App, App Service \ Logic App and Media Services in Central US due to an ongoing Storage issue. Engineers are currently working to mitigate the Storage issue. The next update will be provided in 60 minutes.

Last Update: A few months ago

Stream Analytics, Redis Cache, and Data Factory : Advisory : Multiple Regions

Starting as early as 00:00 UTC on 7 December 2015 for Azure Stream Analytics, and 03:36 UTC on 7 December 2015 for Data Factory and Redis Cache, customers began experiencing issues viewing metrics in the Azure Management Portal or "Classic" Azure Management Portal. Additionally, customers may not have received alerts on any existing jobs. A bug present in a back-end data service, which each impacted service leverages, was identified as the root cause of this incident. Engineers have implemented a hotfix for the issue and are currently validating recovery for all services involved. Customers should see significant improvement as a result of the hotfix. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Stream Analytics, Redis Cache, and Data Factory : Advisory : Multiple Regions

Starting as early as 00:00 UTC on 7 December 2015 for Azure Stream Analytics, and 03:36 UTC on 7 December 2015 for Data Factory and Redis Cache, customers will be unable to view metrics in the Azure Management Portal or "Classic" Azure Management Portal. Customers may also be unable to receive alerts on any existing jobs. Engineers have identified a potential root cause: data being called from an internal service, leveraged by Stream Analytics, Data Factory, and Redis Cache, is not updating properly. The team is in the process of developing a hotfix for the issue, and an update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Stream Analytics, Redis Cache, and Data Factory : Advisory : Multiple Regions

Starting as early as 00:00 UTC on 7 December 2015 for customers using Azure Stream Analytics, and 03:36 UTC on 7 December 2015 for customers using Data Factory and Redis Cache in multiple regions, customers may be unable to view metrics in the Azure Management Portal or "Classic" Azure Management Portal. Customers may also be unable to receive alerts on any existing jobs. Engineers are continuing to investigate the root cause and mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Stream Analytics, Redis Cache, and Data Factory : Advisory : Multiple Regions

Starting as early as 00:00 UTC on 7 December 2015 for customers using Azure Stream Analytics, and 03:36 UTC on 7 December 2015 for customers using Data Factory and Redis Cache in multiple regions, customers may be unable to view metrics in the Azure Management Portal or "Classic" Azure Management Portal. Customers may also be unable to receive alerts on any existing jobs. Engineers are continuing to investigate the root cause and mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Stream Analytics : Advisory : Multiple Regions

Starting as early as 00:00 UTC on 7 December 2015 for customers using Azure Stream Analytics, and 03:36 UTC on 7 December 2015 for customers using Data Factory and Redis Cache in multiple regions, customers may be unable to view metrics in the Azure Management Portal or "Classic" Azure Management Portal. Customers may also be unable to receive alerts on any existing jobs. Engineers are continuing to investigate the root cause and mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Data Factory and Redis Cache : Advisory : Multiple Regions

An alert for Data Factory and Redis Cache in multiple regions is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Key Vault, Logic App, Stream Analytics, Web App, Data Factory, Application Insights, API Management, HD Insight, Managed Cache, Mobile Services, Virtual Machines, and RemoteApp may encounter errors or timeouts when attempting to access services. In addition, customers may encounter errors when attempting to create or view support tickets through http://portal.azure.com. This is due to an ongoing Storage issue in the West US region. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Services - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Key Vault, Logic App, Stream Analytics, Web App, Data Factory, Application Insights, API Management, HD Insight, Managed Cache, Mobile Services, and Virtual Machines may encounter errors or timeouts when attempting to access services. In addition, customers may encounter errors when attempting to create or view support tickets through http://portal.azure.com. This is due to an ongoing Storage issue in the West US region. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Services - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Key Vault, Logic App, Stream Analytics, Web App, Data Factory, Application Insights, API Management, HD Insight, Managed Cache, Mobile Services, and Virtual Machines may encounter errors or timeouts when attempting to access services. In addition, customers may encounter errors when attempting to create or view support tickets through http://portal.azure.com. This is due to an ongoing Storage issue in the West US region. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Services - West US - Partial Service Interruption

Starting at approximately 00:45 UTC on 26 Nov 2015, some customers using Key Vault, Logic App, Stream Analytics, Web App, Data Factory, Application Insights, and API Management may encounter errors or timeouts when attempting to access services. In addition, customers may encounter errors when attempting to create or view support tickets through http://portal.azure.com. This is due to an ongoing Storage issue in the West US region. The next update will be provided in 60 minutes.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: App Service \ Web App, API Management, Stream Analytics, Azure Search, Event Hubs, Service Bus, SQL Database, Operational Insights, Azure Active Directory B2C, Key Vault, Media Services, Data Catalog, Virtual Machines, Automation, Visual Studio Online, Managed Cache, Redis Cache, DocumentDB and RemoteApp. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago
