Windows Azure - Azure Search Status

Azure Services - South Central US

Starting at 14:16 UTC on 16 Aug 2019, customers using resources in South Central US may experience difficulties connecting to resources hosted in this region. Update posted at 17:36 UTC: Engineers have identified a cabling issue in the region which is impacting dependent services and are working through extended recovery workstreams. The next update will be provided around 18:30 UTC.

Last Update: A few months ago

Service Management Failures - West Europe - Applying Mitigation

Starting at approximately 15:20 UTC on 27 Mar 2019, a subset of customers may receive failure notifications when performing service management operations such as create, update, deploy, scale, and delete for resources hosted in the West Europe region. Engineers have taken initial steps to mitigate and are pursuing additional mitigation workstreams. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South - Investigating

Starting at 13:19 UTC on 10 Jan 2019, a subset of customers leveraging Storage in UK South may experience service availability issues. Engineers have identified that a single storage scale unit experienced availability issues for a subset of storage nodes. Resources with dependencies on this scale unit may also experience downstream impact. Current Update: Engineers have identified a potential root cause and are actively working with other teams on a mitigation path for this issue. The next update will be provided in 60 minutes, or as events warrant. Last updated at 21:00 UTC.

Last Update: A few months ago

Storage - West US 2 - Investigating

Starting at 04:20 UTC on 28 Nov 2018 a subset of customers in West US 2 may experience issues connecting to Storage resources hosted in this region. Customers using resources dependent on Storage may also see impact. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided within 60 minutes, or as events warrant.
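
For client applications hit by transient storage connectivity failures like this, the usual mitigation is to retry with exponential backoff and jitter rather than fail immediately. A minimal sketch in Python (standard library only; the fetch_blob call in the usage comment is a hypothetical placeholder for whatever storage operation is failing):

```python
import random
import time


def with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a callable on transient errors with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff capped at max_delay, plus random jitter.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay / 2))


# Hypothetical usage: wrap any storage call that may fail transiently.
# result = with_backoff(lambda: fetch_blob("container", "blob-name"))
```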

Last Update: A few months ago

Multiple Services - South Central US - Investigating

Starting at 09:29 UTC on 04 Sep 2018 a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. At this time Engineers are investigating an issue with cooling in one part of the data center which caused a localized spike in temperature. Automated data center procedures to ensure data and hardware integrity went into effect when temperatures hit a specified threshold and critical hardware entered a structured power down process. The impact to the cooling system has been isolated and engineers are actively working to restore services. The next update will be provided as events warrant.

Last Update: A few months ago

Azure Service Management Failures - Resolved

SUMMARY OF IMPACT: Between 22:15 UTC on 26 Jun 2018 and 06:20 UTC on 27 Jun 2018, a subset of customers may have experienced timeouts or failures when attempting to perform service management operations on their API Management, App Service, Microsoft Stream, Media Services, Azure SQL Database, Azure Search, Azure Active Directory B2C, Azure IoT Hub, Azure Batch, Event Hubs and Service Bus services in Azure. In addition, some customers may have experienced connection failures to the Azure Portal. Some services with a reliance on triggers from service management calls may have seen failures for running instances. PRELIMINARY ROOT CAUSE: Engineers identified a service management API code configuration that impacted background services. This was causing service management requests to fail for a subset of customers. MITIGATION: Engineers performed a rollback of the recent deployment task to mitigate the issue. NEXT STEPS: Engineers will continue to investigate to establish the full root cause and prevent future occurrences.

Last Update: A few months ago

Service availability issue in North Europe

Starting at approximately 17:44 UTC on 19 Jun 2018 a subset of customers using Virtual Machines, Storage, Key Vault, App Service, Site Recovery, Automation, Service Bus, Event Hubs, Data Factory, Backup, Log Analytics, Azure Search, or Logic Apps in North Europe may experience connection failures when trying to access resources hosted in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Azure Services availability issue and Service Management issues for a subset of Classic Azure resources - South Central US - Applying Mitigation

Starting at 15:57 UTC on 13 Jun 2018, a subset of customers in South Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage availability issue which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this. These services may include: Virtual Machines, App Service, Visual Studio Team Services, Logic Apps, Azure Backup, Application Insights, Service Bus, Event Hub, Site Recovery, Azure Search, and Media Services. In addition, services that have dependencies on Classic resources may experience failures when performing Service Management operations on their Classic resources. Communications for the Service Management operations issue are published to the Azure Portal. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services impacted in West Europe

Starting at approximately 02:24 UTC on 10 Jun 2018, a subset of customers in the West Europe region may experience difficulties connecting to their resources. Azure services that have a dependency on Storage and Virtual Machines may also experience secondary impact to their resources. Confirmed impacted services are: Storage, Virtual Machines, SQL Databases, Backup, Azure Site Recovery, Service Bus, Event Hub, App Service, Logic Apps, Automation, Data Factory, Log Analytics, Azure Maps, Azure Search, and Media Services.

Last Update: A few months ago

Storage - West Central US - Applying Fix

Starting at 19:47 UTC on 03 May 2018 a subset of customers using Storage in West Central US may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this and impacted services will be listed on the Azure Status Health Dashboard. Engineers have identified a possible fix for the underlying cause, and are exploring implementation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines, Azure Search and Backup. Customers may begin seeing signs of mitigation. Engineers are investigating a potential power event in the region impacting a single storage scale unit and are actively working on mitigation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage - UK South

Starting at 20:12 UTC on 20 Feb 2018 a subset of customers in UK South may experience difficulties connecting to resources hosted in this region. Impacted services include Storage, Virtual Machines and Azure Search. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018 a limited subset of customers dependent on a storage service in West US may experience latency or failures connecting to certain resources. In addition to the storage service, impacted services which leverage it include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are actively investigating the impacted storage service and developing mitigation options. The next update will be provided in 60 minutes, or as events warrant.
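
Where the affected storage account is configured for read-access geo-redundant storage (RA-GRS), reads can fall back to the secondary endpoint, which conventionally uses the "<account>-secondary" host name. A rough sketch assuming the requests library; the account name and SAS token are hypothetical placeholders:

```python
import requests

# Hypothetical account and SAS token; RA-GRS accounts expose a read-only
# secondary endpoint on the "<account>-secondary" host.
PRIMARY = "https://myaccount.blob.core.windows.net"
SECONDARY = "https://myaccount-secondary.blob.core.windows.net"
SAS = "?sv=<placeholder>&sig=<placeholder>"  # placeholder shared access signature


def read_blob(path, timeout=5):
    """Try the primary region first; fall back to the secondary read endpoint."""
    for base in (PRIMARY, SECONDARY):
        try:
            resp = requests.get(f"{base}/{path}{SAS}", timeout=timeout)
            if resp.status_code == 200:
                return resp.content
        except requests.RequestException:
            continue  # primary unreachable or timed out; try the secondary
    raise RuntimeError(f"Blob {path} unavailable from both endpoints")
```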

Last Update: A few months ago

Multiple Azure Services - West US

Starting at approximately 22:58 UTC on 14 Jan 2018 a subset of customers in West US may experience latency or difficulties connecting to certain resources. Impacted services include: App Services (Web, Mobile and API Apps), Site Recovery, Azure Search, and Redis Cache. Engineers are aware of this issue and are actively investigating a potential underlying Storage issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are also aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers continue investigating possible underlying causes, and are working to determine mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US and South Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops, or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are currently investigating previous updates and deployments to the region along with other possible network level issues, and are taking additional steps to mitigate impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops or timeouts when accessing Azure resources hosted in this region. Engineers are aware of additional alerts for App Service in South Central US. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - North Central US

Starting at approximately 19:34 UTC on 06 Nov 2017 a subset of customers in North Central US may experience degraded performance, network drops or timeouts when accessing Azure resources hosted in this region. Engineers have determined that this is caused by an underlying Network Infrastructure Event in this region which is currently under investigation. Impacted services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Azure Functions, Stream Analytics, HDInsight, Data Factory and Azure Scheduler. Media Services, Application Insights, Azure Search and Azure Site Recovery are reporting recovery. Engineers are seeing signs of recovery and are continuing to recover the remaining unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are seeing signs of recovery and are continuing to recover unhealthy storage machines and validate the fix. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight, Data Factory and Azure Scheduler. Engineers are continuing to recover unhealthy storage machines in order to mitigate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Application Insights, Azure Functions, Stream Analytics, Media Services, HDInsight and Data Factory. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache, Azure Monitor, Microsoft Intune, Application Insights, Azure Functions, Stream Analytics and Media Services. Engineers are seeing signs of recovery and are continuing to implement multiple mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services\Web Apps, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Storage Related Incident - North Europe

Starting at 13:27 UTC on 29 Sep 2017 a subset of customers may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may also be experiencing impact related to this, including Virtual Machines, Cloud Services, Azure Search, Azure Site Recovery, Azure Backup, App Services, Azure Cache and Azure Monitor. Engineers are seeing signs of recovery and have identified a potential underlying root cause. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, Redis Cache, Logic Apps, Azure Analysis Services, and Azure Resource Manager. Engineers have verified that a majority of impacted services are mitigated and are conducting final steps of validation. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, Azure Search, Backup, Azure Scheduler, HDInsight, Virtual Machines, and Redis Cache. Mitigation has been applied and our monitoring system has started showing recovery. Engineers continue to validate. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - UK South

Starting at approximately 21:41 UTC on 20 Jul 2017, a subset of customers in UK South may experience degraded performance, connectivity drops or timeouts when accessing their Azure resources hosted in this region. Engineers are investigating underlying Network Infrastructure issues in this region. Impacted services may include, but are not limited to App Services, Automation, Service Bus, Log Analytics, Key Vault, SQL Database, Service Fabric, Event Hubs, Stream Analytics, Azure Data Movement, API Management, and Azure Search. Multiple engineering teams are engaged in multiple workflows to mitigate the impact. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Azure Search (seeing signs of recovery), Backup, Event Hubs (seeing signs of recovery), Log Analytics, Redis Cache (seeing signs of recovery), Stream Analytics, SQL Database, and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Azure Search (seeing signs of recovery), Backup, Event Hubs, Redis Cache (seeing signs of recovery), Stream Analytics, SQL Database, and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience connectivity failures to their resources: App Service \ Web Apps, Application Insights, Backup, Event Hubs, Redis Cache, Stream Analytics, SQL Database, and Virtual Machines.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.
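
For the SQL Database symptom described above (new connections erroring or timing out, existing connections terminated), client code typically reconnects with a retry loop. A hedged sketch assuming the pyodbc driver; the server, database, and credentials in the connection string are hypothetical placeholders:

```python
import time

import pyodbc  # assumes the pyodbc package and an ODBC driver are installed

# Placeholder connection string; server, database, and credentials are hypothetical.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword;"
)


def connect_with_retry(attempts=5, delay=5):
    """Reconnect after transient failures such as dropped or refused connections."""
    for attempt in range(1, attempts + 1):
        try:
            return pyodbc.connect(CONN_STR, timeout=10)  # 10 s login timeout
        except pyodbc.Error:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff between attempts
```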

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Downstream impacted services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing their resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated. Backup customers may experience difficulties connecting to resources. Event Hubs customers may experience difficulties connecting to resources hosted in this region.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Additional Impacted Services

Starting at 16:21 UTC on 07 Jul 2017, due to an issue relating to Network Infrastructure in Southeast Asia, the following services may experience downstream impact: Customers using App Service \ Web Apps in the region may receive HTTP 500-level response codes, experience timeouts or high latency when accessing Web Apps deployments. Virtual Machines customers may experience connection failures when trying to access some Virtual Machines. Redis Cache customers may receive intermittent timeout notifications when accessing Redis Cache resources. SQL Database customers may experience issues accessing services. New connections to existing databases in this region may result in an error or timeout, and existing connections may have been terminated.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

SUMMARY OF IMPACT: Between 01:50 and 02:38 UTC on 11 Jun 2017, a Network Infrastructure issue occurred in Australia East. Customers may have experienced degraded performance, network drops, or timeouts when accessing their Azure resources hosted in this region. Engineers have confirmed that customers using Virtual Machines, App Services, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East were impacted. A subset of services may have encountered a delayed mitigation; all services are confirmed to be mitigated at this point. PRELIMINARY ROOT CAUSE: Engineers determined that a deployment resulted in Virtual IP ranges being incorrectly advertised. MITIGATION: Engineers disabled route advertisements on the newly deployed instances that were incorrectly programmed. NEXT STEPS: A comprehensive root cause analysis report will be published in approximately 72 hours.

Last Update: A few months ago

Network Infrastructure Impacting Multiple Services - Australia East

Starting at 01:39 UTC on 11 Jun 2017 monitoring alerts were triggered for Network Infrastructure in Australia East. Customers may experience degraded performance, network drops or timeouts when accessing their Azure resources hosted in this region; however, engineers are beginning to see signs of mitigation. Engineers have determined that this is caused by an underlying Network Infrastructure event in this region which is currently under investigation. Engineers have confirmed that customers using Virtual Machines, Web Apps, Mobile Apps, API Apps, Backup, Site Recovery, Azure Search, Redis Cache, Stream Analytics, and Media Services in Australia East may be experiencing impact. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Azure Services - Australia East

An alert for Virtual Machines, App Services, Backup, Site Recovery, Azure Search, and Media Services in Australia East is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search, Virtual Machines or HDInsight in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. Customers using HDInsight may receive deployment failure notifications when creating new HDInsight clusters in this region. This is related to an ongoing Storage issue in the region. Engineers have identified a possible underlying cause and are working to implement mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search, Virtual Machines or HDInsight in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. Customers using HDInsight may receive deployment failure notifications when creating new HDInsight clusters in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search or Virtual Machines in West Europe may experience high latency or degraded performance when accessing the Azure Search services or Virtual Machines hosted in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search - West Europe

Starting at 07:00 UTC on 18 May 2017 a subset of customers using Azure Search in West Europe may experience high latency when accessing the Azure Search service hosted in this region. This is related to an ongoing Storage issue in the region. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

SUMMARY OF IMPACT: Between 18:04 and 21:16 UTC on 27 Mar 2017, a subset of customers in Japan West may have experienced degraded performance, network drops or timeouts when accessing their Azure resources hosted in this region. PRELIMINARY ROOT CAUSE: A storage scale unit being added to the Japan West region announced routes that blocked some network connectivity between two datacenters in the region. VMs and services dependent on that connectivity would have experienced restarts or failed connections. Unfortunately, automated recovery did not mitigate the issue. The manual health checks that are conducted around all new cluster additions were performed, but did not detect a problem. This led to a delay in correct root cause analysis and mitigation. MITIGATION: Engineers isolated the newly deployed scale unit, which mitigated the issue. NEXT STEPS: Investigations are currently in progress to determine exactly how incorrect routing information was configured into the storage scale unit being added and how that incorrect information escaped the many layers of validations designed to prevent such issues. A full detailed Root Cause Analysis will be published in approximately 72 hours.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, due to the networking infrastructure issue, the following services are impacted: App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. HDInsight customers are unable to perform service management operations or provision Linux VMs. Azure Virtual Machine customers may experience VM restarts. Engineers are continuing their investigation into an underlying cause and applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West - Applying Mitigation

Starting at 18:04 UTC on 27 Mar 2017, a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Storage customers may receive failure notifications or experience latency when connecting to storage accounts. Azure Backup customers may experience backup failures. Azure Media Services customers may experience video encoding failures. Engineers are continuing their investigation into an underlying cause and have begun applying mitigation steps. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Azure Services - Japan West

Starting at 18:04 UTC on 27 Mar 2017 a subset of customers in Japan West may experience difficulties connecting to resources hosted in this region. App Service \ Web Apps may receive HTTP 503 errors or experience high latency when accessing App Service deployments hosted in this region. Azure Search resources may be unavailable. Attempts to provision new Search services in the region may fail. Redis Cache customers may be unable to connect to their resources. Azure Monitor may be unable to autoscale and alerting functionality may fail. Azure Stream Analytics jobs may fail when attempting to start. All existing Stream Analytics jobs that are in a running state will be unaffected. Engineers are investigating the underlying cause and working on mitigation paths. The next update will be provided in 60 minutes, or as events warrant.
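
Clients calling an App Service deployment that intermittently returns HTTP 503 would typically retry idempotent requests with backoff. A small sketch using requests with urllib3's Retry helper (parameter names as in recent urllib3 releases; the azurewebsites.net URL in the usage comment is a hypothetical placeholder):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient 5xx responses (including 503) with exponential backoff;
# urllib3 also honors Retry-After headers for these status codes by default.
retry = Retry(
    total=5,
    backoff_factor=1.0,
    status_forcelist=[502, 503, 504],
    allowed_methods=["GET", "HEAD"],
)

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

# Hypothetical App Service URL; replace with the affected deployment.
# response = session.get("https://myapp.azurewebsites.net/health", timeout=10)
```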

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Between 21:50 UTC on 15 Mar 2017 and 04:54 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers using Storage and services depending on Storage may have experienced difficulties accessing their resources in the region. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Between 21:50 UTC on 15 Mar 2017 and 04:54 UTC on 16 Mar 2017, due to an incident in East US affecting Storage, customers using Storage and services depending on Storage may have experienced difficulties accessing their resources in the region. Engineering has confirmed that Azure Logic Apps and Azure SQL Database have now recovered. PRELIMINARY ROOT CAUSE: Engineering identified one Storage cluster that lost power and became unavailable. NEXT STEPS: A full detailed Root Cause Analysis is currently being conducted and will be published in approximately 72 hours.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 60 minutes or as any new information is made available.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. Engineers have now begun the recovery of Azure services and customers should start seeing recovery. The next update will be provided in 2 hours or as any new information is made available

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Media Services, Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Application Insights, Azure Logic Apps, Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Data Factory, Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure Event Hubs, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 21:50 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub, Azure SQL Database and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Site Recovery, Azure Cache, Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Multiple services impacted due to Storage - East US

Starting at 15:19 UTC on 15 Mar 2017, a subset of customers using Storage in East US may experience difficulties accessing their Storage accounts in East US. Due to a dependency on Storage, customers may experience failures provisioning or connecting to Azure Search, Azure Service Bus, Azure EventHub and Azure Stream Analytics in East US. The next update will be provided as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Virtual Machines or Cloud Services customers may experience failures when attempting to provision resources. Azure Search customers may be unable to create, scale, or delete services. Azure Monitor customers may be unable to turn on diagnostic settings for resources. Azure Site Recovery customers may experience replication failures. API Management 'service activation' in South India will fail. Azure Batch customers will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. Event Hubs customers using the 'Archive' feature may experience failures. Customers using Visual Studio Team Services Build will experience failures. Azure Portal users may be unable to access storage account management operations and may be unable to deploy new accounts. The next update will be provided in 60 minutes or as events warrant.
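
To confirm whether a specific resource is affected during an incident like this, its availability can be checked programmatically through the Azure Resource Health provider. A hedged sketch using plain REST calls; the resource ID, bearer token, and api-version value are placeholders that should be verified against current Resource Health documentation:

```python
import requests

# Placeholders: supply a real resource ID, a valid AAD bearer token, and the
# api-version documented for Microsoft.ResourceHealth at the time of use.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)
TOKEN = "<bearer-token>"
API_VERSION = "2020-05-01"  # assumed value; check the Resource Health docs

url = (
    f"https://management.azure.com{RESOURCE_ID}"
    f"/providers/Microsoft.ResourceHealth/availabilityStatuses/current"
    f"?api-version={API_VERSION}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
resp.raise_for_status()
# availabilityState is typically Available, Unavailable, Degraded, or Unknown.
print(resp.json()["properties"]["availabilityState"])
```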

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. Customers using API Management service activation in South India will experience a failure. Customers using Azure Batch will be unable to provision new resources. All existing Azure Batch pools will schedule tasks as normal. Customers using the 'Archive' feature of the Event Hubs service may experience failures. Customers using Visual Studio Team Services Build will experience failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. Customers using API Management service activation in South India will experience a failure. Customers using Azure Batch will be unable to provision new resources. All Existing Azure Batch pools will schedule tasks as normal. Customers using the feature of the EventHub service called 'Archive' may experience failures. Customers using Visual Studio Team Services Build will experience failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Other services impacted due to Storage

Starting at 22:42 UTC on 15 Mar 2017, due to an underlying storage incident, other Azure services that leverage Storage may also be experiencing impact. Customers attempting to create new Virtual Machines or Cloud Services may experience failures. Customers using Azure Search may be unable to create, scale, or delete services. Customers using Azure Monitor may be unable to turn on diagnostic settings for resources. Customers using the Azure Portal may be unable to access storage account management operations and may be unable to deploy new accounts. Customers using Azure Site Recovery may experience replication failures. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Multiple Services - North Europe - Service degradation.

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Virtual Machines, Azure Search and Web Apps in North Europe may experience intermittent connection failures when trying to access resources hosted in this region. Engineers have applied a mitigation, and most services should be seeing a return to healthy state at this time. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Multiple Services - North Europe - Service degradation.

Starting at 13:22 UTC on 07 Dec 2016, a subset of customers using Virtual Machines, Azure Search and Web Apps in North Europe may experience intermittent connection failures when trying to access resources hosted in this region. Engineers are aware of this issue which is caused by an underlying storage issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Network Infrastructure - East Asia

Starting at 14:10 UTC on 26 Oct 2016, a subset of customers with services hosted in East Asia may experience degraded performance, latency, or time-outs when accessing their resources located in this region. Impacted services include, but are not limited to, Virtual Machines, App Service \ Web Apps, Storage, Azure Search, and Service Bus. New service creation may also fail for customers. Some Virtual Machine customers will have experienced a reboot of their VMs in this region. Engineers are aware of this issue, and are actively investigating. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - East Asia

An alert for Virtual Machines, Network Infrastructure, App Service \ Web Apps, Storage, Service Bus and Azure Search in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Virtual Machines - East Asia

An alert for Virtual Machines in East Asia is being investigated. More information will be provided as it is known.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

We have validated that the issues experienced by customers using App Service \ Web Apps, Visual Studio Team Services, Virtual Machines, SQL Database, Service Bus, Redis Cache, Media Services, HDInsight, DocumentDB, Data Catalog, Cloud Services, Azure Search and Automation in North Europe and West Europe are mitigated. Our Engineering teams are working to gather additional details on the preliminary root cause before this incident is resolved. An update will be provided within 30 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in the region are impacted. These services are showing recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Azure Search, Log Analytics and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes or as events warrant.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe and North Europe may experience connectivity issues when attempting to connect to their resources. Multiple services in the region are impacted. These services are showing recovered: SQL Database, Azure Automation, Azure Data Factory, Service Bus, Event Hubs. The following services are still impacted at this time: Visual Studio Team Services, App Service \ Web Apps, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Azure Search, Log Analytics and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, and DocumentDB. Engineers have identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers with resources deployed in West Europe may experience connectivity issues when attempting to connect to their resources. Impacted services include Visual Studio Team Services, App Service \ Web Apps, SQL Database, Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, and DocumentDB. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps and SQL Database in West Europe, and Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search, DocumentDB and Data Factory, may experience degraded availability when accessing their resources. Engineers identified an underlying network issue and made a configuration change to improve network health. Service availability is starting to improve. The next update will be provided within 60 minutes.

Last Update: A few months ago

Degradation in Multiple Services - Multiple Regions

Starting at 15:42 UTC on 09 Sep 2016, customers using Visual Studio Team Services, App Service \ Web Apps and SQL Database in West Europe, and Virtual Machines, Cloud Services, HDInsight, Redis Cache, Service Bus, Azure Search and DocumentDB in North Europe and West Europe, will experience degraded availability when accessing their resources. Engineers are currently investigating and the next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience intermittent inability to connect or access their service resources including: App Services / Web App, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. The underlying network infrastructure event has been mitigated as of 01:24 UTC on 9th June, 2016. Azure Machine Learning is fully mitigated as of 01:50 UTC on 9th June, 2016. Other impacted services continue to recover and customers will continue to see improved service availability as engineers continue to mitigate residual impacts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience intermittent inability to connect or access their service resources including: App Services / Web App, Machine Learning, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. The underlying network infrastructure event has been mitigated as of 01:24 UTC on 9th June, 2016. Impacted services are recovering and customers will see improved service availability as engineers continue to mitigate residual impacts. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - West Europe Partial Service Interruption

Starting at 23:12 UTC on 8 June, 2016 a subset of customers using Azure services in West Europe may experience inability to connect or access their service resources including: App Services / Web App, Machine Learning, SQL, Azure Active Directory proxy, Virtual Machines, Cloud Services, Azure Search, Redis Cache, and HDInsight. Engineers are investigating a Network Infrastructure event in West Europe. The next update will be provided in 60 minutes or as new information is available.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. As of 23:45 UTC, the storage team has recovered, and all services except a limited subset of Web Apps have reported recovery as well. The next update will be provided in 2 hours or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.
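For customers who would rather script the VM recovery step than work through the linked article manually, a rough sketch follows. It is illustrative only: it assumes the azure-identity and azure-mgmt-compute Python packages, the subscription, resource group and VM names are placeholders, and the exact method names can differ between SDK versions.

    # Hypothetical sketch: redeploy a VM that automated mitigation did not recover.
    # Assumes the azure-identity and azure-mgmt-compute packages; all names below
    # are placeholders, and method names may differ between SDK versions.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
    RESOURCE_GROUP = "<resource-group>"     # placeholder
    VM_NAME = "<vm-name>"                   # placeholder

    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

    # Redeploying moves the VM to a different host, a common self-service step
    # for a VM that is stuck after a platform incident.
    poller = compute.virtual_machines.begin_redeploy(RESOURCE_GROUP, VM_NAME)
    poller.result()  # block until the operation completes
    print("Redeploy of", VM_NAME, "finished")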

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are continuing to mitigate the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Web Apps customers are showing signs of recovery and a small subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites. Engineers are actively mitigating the remaining Web Apps impact.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

A small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers are showing signs of recovery and some customers may continue to experience errors attempting to connect to resources. Web Apps customers are showing signs of recovery and a subset of customers may still be experiencing 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Log Analytics/Operational Insights, Managed Cache, Mobile Apps, Mobile Services, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics, Visual Studio Application Insights.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and other services are starting to report recovery as well. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Event Hubs, HDInsight, Managed Cache, Redis Cache, RemoteApps, Service Bus, SQL Databases, Storage, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, Service Bus, Event Hubs, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics, SQL Databases, Storage, HDInsight.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs, Stream Analytics.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search, Service Bus, Event Hubs.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

RECOVERED SERVICES: Azure Search.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team, as of 23:45 UTC, is seeing recovery and once fully healthy the below impacted services will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.
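Transient failures like the search-query errors described above are typically handled client-side with bounded retries and exponential backoff. The sketch below is a generic illustration, not guidance taken from this notice: the service URL, index name, API key, and query parameters are all placeholders, and the parameter names are assumptions about the Azure Search REST query API.

    # Hedged sketch: retry transient Azure Search query failures with backoff.
    # SEARCH_URL, API_KEY and the query parameters are placeholders/assumptions.
    import time
    import requests

    SEARCH_URL = "https://<service>.search.windows.net/indexes/<index>/docs"
    API_KEY = "<query-key>"
    PARAMS = {"api-version": "2016-09-01", "search": "*"}  # assumed parameter names

    def query_with_retries(max_attempts=5, base_delay=1.0):
        """Retry timeouts and 5xx responses with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                resp = requests.get(SEARCH_URL, params=PARAMS,
                                    headers={"api-key": API_KEY}, timeout=10)
                if resp.status_code < 500:
                    resp.raise_for_status()   # surface non-retryable 4xx errors
                    return resp.json()
            except requests.HTTPError:
                raise                          # client error: do not retry
            except requests.RequestException:
                pass                           # connection failure: treat as transient
            time.sleep(base_delay * (2 ** (attempt - 1)))  # 1s, 2s, 4s, ...
        raise RuntimeError("search query still failing after retries")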

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team is actively mitigating the issue and, once healthy, the impacted services (below) will start recovering. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables; Premium Storage is not impacted. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases on only the Basic or Standard Tiers may experience failures when attempting to connect to databases and when attempting to log in to their databases; Premium Tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps and Mobile Services customers may experience 503 errors, connection failures or long latencies when accessing resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:05 (updated) UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Log Analytics, Stream Analytics, Visual Studio Application Insights, Redis Cache, Managed Cache Service, and Mobile Apps in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases on only the Basic or Standard Tiers may experience failures when attempting to connect to databases and when attempting to log in to their databases; Premium Tier customers are not impacted. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Log Analytics customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources. Mobile Apps customers may experience failures connecting to or using their services.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, Redis Cache and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Redis Cache customers may experience failures logging to Storage accounts. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Multiple Azure Services - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in to their databases. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Engineers have confirmed that the incident is due to an underlying Storage memory consumption issue. The storage team has identified a mitigation path and is actively executing it. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Visual Studio Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections. Operational Insights customers may be unable to search and ingest data. Stream Analytics customers will be unable to start jobs in East US. Application Insights customers may experience data latency. Managed Cache customers may be unable to access service resources.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

SUMMARY OF IMPACT: Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, RemoteApp, Operational Insights, Stream Analytics, Visual Studio Application Insights, and Managed Cache Service in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

IMPACTED SERVICES: Following is the impact to customers for the impacted services. Cloud Services and Virtual Machines customers would experience errors attempting to connect to resources. Web Apps customers may experience 503 errors, connection failures or long latencies when accessing websites. HDInsight customers may experience failures in creating clusters as well as impact on existing HDInsight clusters. Storage customers may experience failures when connecting to Blobs and Tables. SQL Databases may experience failures when attempting to connect to databases and when attempting to log in. Azure Search customers may experience failures when issuing search queries. RemoteApp will experience failures trying to connect to RemoteApp Collections.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, Azure Search, and RemoteApp in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Storage - East US - Partial Service Interruption

Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, Storage, SQL Database, and Azure Search in East US may experience connection failures or long latencies when attempting to access their resources. Initial investigation indicates that the issue is due to an underlying Storage dependency. Engineers are actively investigating the path to mitigation. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines, Cloud Services, Web Apps, HDInsight, and Azure Search - East US - Partial Service Interruption

Starting at 16:45 UTC Apr 9th, 2016 a subset of customers using Virtual Machines, Cloud Services, Web Apps, HDInsight, and Azure Search in East US may experience connection failures or long latencies when attempting to access their resources. Engineers are currently investigating the underlying root cause. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Network Infrastructure - Southeast Asia: Service Restoration

Starting at approximately 01:40 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Investigations indicate this to be due to a fiber cut on a 3rd party network provider's infrastructure. Engineers from the 3rd party network provider have repaired the issue. Azure Services are starting to show restoration. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, SQL Databases, HDInsight, Batch, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, Storage, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Network Infrastructure - Southeast Asia - Partial Service Interruption

Starting at approximately 02:00 UTC on 02 Apr, 2016 customers using Web Apps, Cloud Services, Virtual Machines, Azure IoT Hub, Azure Search, Redis Cache, and Event Hubs in Southeast Asia may experience failures when trying to access their resources. Initial investigations indicate this to be due to a network infrastructure issue. Engineers are investigating mitigation options. The next update will be provided in 60 minutes.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our Engineering teams have mitigated the underlying Networking issue and the majority of affected customers should now observe recovery. Please refer to the History page for the preliminary report on the Networking incident. All of the impacted Azure services have also reported restoration, except App Services and HDInsight. Engineers continue to recover the residual affected VMs. Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Our Engineering teams have mitigated the underlying Networking issue and the majority of affected customers should now observe recovery. Engineers continue to recover the residual affected VMs. Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, Mobile App and Data Catalog. Our Engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, Event Hub, and Data Catalog. Our Engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp, and Data Catalog. Our Engineering teams have mitigated the underlying Networking issue and are seeing improvements. Engineers are continuing to gather additional details on the preliminary root cause before this incident is resolved.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, RemoteApp and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, Azure Scheduler, and Data Catalog.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, Visual Studio Team Services, Stream Analytics, Redis Cache, and Azure Scheduler.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, HDInsight, IoT Hub, and Visual Studio Team Services.

Last Update: A few months ago

Multiple Azure Services impacted by an underlying Network Infrastructure incident in East US

Starting at 16 MAR, 2016 20:28 UTC, our engineers have identified a network related issue affecting connectivity to some Azure Services in the East US region. The confirmed impacted services include: Service Bus, SQL, Storage, Azure Operational Insights, Azure Search, Web App, Application Insights, Media Services, API Management, and HDInsight.

Last Update: A few months ago

Virtual Machines, Service Bus, Azure Search, and Visual Studio Team Services - South Central US - Partial Service Interruption

Starting at 12 Mar, 2016 07:14 UTC a subset of customers using Virtual Machines, Service Bus, Azure Search, Visual Studio Team Services and Visual Studio Application Insights in South Central US may experience latency or inability to access their services located in this region. Engineers have identified a potential underlying cause, and are continuing to investigate mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines, Service Bus, Azure Search, and Visual Studio Team Services - South Central US - Partial Service Interruption

Starting at 12 Mar, 2016 07:14 UTC, a subset of customers using Virtual Machines, Service Bus, Azure Search, Visual Studio Team Services, and Visual Studio Application Insights in South Central US may experience latency or be unable to access their services located in this region. Engineers have identified a potential underlying cause and are investigating mitigation options for this issue. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines, Service Bus, Azure Search, and Visual Studio Team Services - South Central US - Partial Service Interruption

Starting at 12 Mar, 2016 07:14 UTC, a subset of customers using Virtual Machines, Service Bus, Azure Search, Visual Studio Team Services, and Visual Studio Application Insights in South Central US may experience latency or be unable to access their services located in this region. Engineers have identified a potential underlying cause and are investigating mitigation options. The next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines, Service Bus, Azure Search, and Visual Studio Team Services - South Central US - Partial Service Interruption

Starting at 12 Mar, 2016 07:14 UTC, a subset of customers using Virtual Machines, Service Bus, Azure Search, Visual Studio Team Services, and Visual Studio Application Insights in South Central US may experience latency or be unable to access their services located in this region. Engineers are continuing to investigate, and the next update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Virtual Machines, HDInsights, Azure Search - South Central US - Advisory

Starting at 20 Feb, 2016 07:10 UTC, a subset of customers using Virtual Machines, HDInsights, and Azure Search in South Central US may experience issues with their services. Virtual Machine customers may see their VMs rebooting. HDInsight customers might not be able to submit their jobs, and Azure Search customers might not be able to connect to their services. Engineers are actively investigating the issue. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Virtual Machines - South Central US - Advisory

Starting at <DD MMM, YYYY HH:HH> UTC customers/a subset of customers using Virtual Machines in South Central US will/may experience <Customer Impact or Experience>. <Available workaround or Call to Action>. <Current Status>. The next update will be provided in 60 minutes.

Last Update: A few months ago

Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) - West Europe - Advisory

Starting at 19:50 UTC on 07 FEB 2016, a subset of customers using a number of services, including Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) in West Europe, may experience errors when attempting to perform service management operations on their Virtual Machines or Cloud Services due to the underlying Network Infrastructure incident noted above. Existing instances of Virtual Machines and Cloud Services will remain unaffected. We have deployed a mitigation and customers should start to see recovery. The next update will be provided in 60 minutes.

Last Update: A few months ago

Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) - West Europe - Advisory

Starting at 19:50 UTC on 07 FEB 2016, a subset of customers using a number of services, including Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) in West Europe, may experience errors when attempting to perform service management operations on their Virtual Machines or Cloud Services due to the underlying Network Infrastructure incident noted above. Existing instances of Virtual Machines and Cloud Services will remain unaffected. We have identified a potential root cause and are working to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) - West Europe - Advisory

Starting at 07 FEB 2016 19:50 UTC, a subset of customers using Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) in West Europe may experience errors when attempting to perform service management operations on their Virtual Machines or Cloud Services due to the underlying Network Infrastructure incident noted above. Existing instances of Virtual Machines and Cloud Services will remain unaffected. Engineers are currently evaluating options to restore service. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) - West Europe - Advisory

Starting at 07 FEB 2016 19:50 UTC, customers using Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) in West Europe may experience errors when attempting to perform service management operations on their Virtual Machines or Cloud Services due to the underlying Network Infrastructure incident noted above. Existing instances of Virtual Machines and Cloud Services will remain unaffected. We are currently evaluating options to restore service. The next update will be provided in 60 minutes.

Last Update: A few months ago

HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) - West Europe - Advisory

Starting at 07 FEB 2016 19:50 UTC, customers using Visual Studio Team Services, Azure Search, Azure Batch, HDInsight, Cloud Services, Virtual Machines and Virtual Machines \ Virtual Machines (v2) in West Europe may experience errors when attempting to perform service management operations on their Virtual Machines or Cloud Services due to the underlying Network Infrastructure incident noted above. Existing instances of Virtual Machines and Cloud Services will remain unaffected. We are currently evaluating options to restore service. The next update will be provided in 60 minutes or as events warrant.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: App Service \ Web App, API Management, Stream Analytics, Azure Search, Event Hubs, Service Bus, SQL Database, Operational Insights, Azure Active Directory B2C, Key Vault, Media Services, Data Catalog, Virtual Machines, Automation, Visual Studio Online, Managed Cache, Redis Cache, DocumentDB and RemoteApp. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West and North Europe

Multiple services in West and North Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Additionally impacted services - West Europe

Multiple services in West Europe are impacted by an ongoing Networking issue. Impacted services include: SQL Database, API Management, Media Services, Azure Search, App Service \ Web App, Service Bus, Event Hubs, Azure Active Directory B2C, Operational Insights, Key Vault, Virtual Machines, Data Catalog and Stream Analytics. More information will be provided as it is known.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Starting on 29 Sep, 2015 at approx. 13:02 UTC, customers with Web App, Virtual Machines, and Azure Search Services deployed in West Europe experienced impact as the result of dependencies related to a Storage incident which resolved at 16:44 UTC in the West Europe region. Impact to Web App has recovered as of 17:50 UTC. Impact to Azure Search has recovered as of 19:50 UTC. Virtual Machines impact has recovered for a large number of our customers, and engineering is focused on mitigating impact for the remaining impacted deployments. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Web App, Virtual Machines, and Azure Search Services deployed in West Europe experienced impact as the result of dependencies related to a Storage incident which resolved at 16:44 UTC in the West Europe region. Impact to Web App has recovered as of 17:50 UTC. Less than 1% of Azure Search customers in the region are still experiencing issues with their deployments. Engineers are continuing to work on residual impact to Virtual Machines at this time. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Web App, Virtual Machines, and Azure Search Services deployed in West Europe experienced impact as the result of dependencies related to a Storage incident which resolved at 16:44 UTC in the West Europe region. Impact to Web App has recovered as of 17:50 UTC. Roughly 1% of Azure Search customers in the region are still experiencing issues with their deployments. Engineers are continuing to work on residual impact to Virtual Machines at this time. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Starting from 29 Sep, 2015 at approx. 13:02 UTC, customers with Web App, Virtual Machines, and Azure Search Services deployed in West Europe experienced impact as the result of dependencies related to underlying Storage issues in the West Europe region. Impact to Web App has recovered as of 17:50 UTC. Roughly 5% of Azure Search customers in the region are still experiencing issues with their deployments. Engineers are currently working on residual impact to Virtual Machines. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Customers with Web App, Virtual Machines, and Azure Search Services deployed in West Europe experienced impact as the result of dependencies related to underlying Storage issues in the West Europe region. Impact to Azure Search and Web App has recovered. Engineers are currently working on residual impact to Virtual Machines. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search, Virtual Machines, and Web App - West Europe - Advisory

Engineers are investigating alerts indicating impact for Web App, Virtual Machines, and Azure Search customers deployed in West Europe, which preliminary investigations indicate may be related to underlying Storage issues in the West Europe region. Azure Search impact is stabilizing; however, recovery has not yet been fully realized. Web App customers may experience Error 500 or may be unable to access their Web App services deployed in West Europe. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search and Web App - West Europe - Advisory

Engineers are investigating alerts indicating impact for Web App and Azure Search customers deployed in West Europe, which preliminary investigations indicate may be related to underlying Storage issues in the West Europe region. Azure Search impact is stabilizing; however, recovery has not yet been fully realized. Web App customers may experience Error 500 or may be unable to access their Web App services deployed in West Europe. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search - West Europe - Advisory

Starting on 29 Sep, 2015 at 14:45 UTC customers using Azure Search in West Europe may be unable to access their Search deployments and may be unable to perform service management operations (such as create, rename, change, delete) in the region. Engineers are seeing significant recovery and customers should begin to see improvement in service availability. An update will be provided in 60 minutes, or as events warrant.
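
The following is an illustrative sketch, not part of the original advisory, of how an affected customer might check whether a Search deployment is reachable again during an incident like this one. The service name, API key, API version, and retry settings are hypothetical placeholders; the probe simply calls the data-plane List Indexes endpoint and retries with exponential backoff.

# Minimal availability probe for an Azure Search deployment (illustrative sketch).
# All values below are hypothetical placeholders, not from the advisory above.
import time
import requests

SERVICE_NAME = "my-search-service"   # hypothetical service name
API_KEY = "<query-or-admin-key>"     # hypothetical key; never hard-code real keys
API_VERSION = "2015-02-28"           # data-plane API version of that era

def search_service_reachable(retries: int = 5, backoff_seconds: float = 2.0) -> bool:
    """Return True once the service answers; retry with exponential backoff otherwise."""
    url = f"https://{SERVICE_NAME}.search.windows.net/indexes?api-version={API_VERSION}"
    headers = {"api-key": API_KEY}
    for attempt in range(retries):
        try:
            response = requests.get(url, headers=headers, timeout=10)
            if response.status_code == 200:
                return True
            print(f"Attempt {attempt + 1}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            print(f"Attempt {attempt + 1}: {exc}")
        time.sleep(backoff_seconds * (2 ** attempt))  # wait longer between each probe
    return False

if __name__ == "__main__":
    print("reachable" if search_service_reachable() else "still unreachable")

A 200 response indicates the data plane is answering again; repeated timeouts or 5xx responses suggest the deployment is still affected by the incident.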

Last Update: A few months ago

Azure Search - West Europe - Advisory

Engineers are engaged on an emerging issue for Azure Search in West Europe where a subset of customers may be unable to access their Search deployments. Services are beginning to self-heal and engineers are implementing their mitigation strategy. An update will be provided in 60 minutes, or as events warrant.

Last Update: A few months ago

Azure Search - West Europe - Advisory

An alert for Azure Search in West Europe is being investigated. More information will be provided as it is known.

Last Update: A few months ago